Generating Characters for Tabletop Games

A guide by Euchale

Step 1: What do I even want to generate in the first place?

For me this means I read through the entire chapter of an adventure and write down all notable characters, enemies and objects/locations. If you are homebrewing, I would recommend using whatever sources you can. For example, I have used ChatGPT as an advisor on cultures I am not well versed in (e.g. "Suggest some water-based mythological creatures from South America").
Generate that list and make sure to include some descriptions. So let's do this for an adventure from the Savage Worlds setting ETU: Degrees of Horror.
As you can see I have roughly sorted the list already into people and objects. Each person has a name and a short description.
The reason why we gather everything first and then start generating is so we have a unified prompt for all of them and thus more coherence between prompts!
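If you like to keep notes digitally, the gathered list can live in a machine-readable form so the later steps can reuse it. A minimal sketch (the entries are the examples used later in this guide; the field names are just illustrative):

```python
# The character list gathered up front, so every entry can later
# share one unified prompt template.
characters = [
    {"name": "Jackson Green",
     "description": "a grizzled black teenager, chin curtain",
     "kind": "person"},
    {"name": "Helen",
     "description": "a wizard, wearing a balenciaga dress",
     "kind": "person"},
    {"name": "Chupacabra",
     "description": "a werewolf, quills instead of fur, porcupine",
     "kind": "creature"},
]
```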

Step 2: Finding the right Model

Next I decide which model I want to use. Since ETU is a realistic setting, I like to go for more realistic models; artsier models, however, perform better for mystical creatures. For a list of recommended models, see: https://artroomai.gitbook.io/tutorials/resources/models
I always try to let the model do the "heavy lifting", so pick a model that gives you an art style close to what you are looking for without excessive prompting. Don't be fooled by the pretty pictures on civitai. Click on the little i in the circle to see the prompt; if the prompt is more than 10 terms, then you will be the one doing the heavy lifting.
For the purposes of this tutorial I will use the Zovya A to Z model. It's very versatile and easy to prompt for.
Next up, we need to test whether our model even knows what we are trying to generate. That is usually not a problem with humans, but it can be if you are generating lizard people or monsters. So type in a very simple prompt and see what you get. The goal at this stage is not a good result, but to find out which terms the model understands and which it does not.
If we go back to our list above, the one term that stands out is the chupacabra: a mystical creature combining features of both humans and wolves, looking very similar to a werewolf, but with quills in addition to fur.
Not perfect, but I can work with it.
The prompt for the above was: "a werewolf, quills instead of fur, porcupine". Yes, sometimes you have to be creative to get good results.
Once we are reasonably sure our model understands all terms we can move on to the next step:

Step 3: Finding our Prompt

I pick one of the entries from my list and tweak my prompt until I get a good result for it; gathering plenty of different terms from https://artroomai.gitbook.io/tutorials/resources/prompting-tutorials can help. I would particularly like to highlight SFX/Attributes, but do not underestimate how much adding an artist can help.
This is what I found, and now we come to crafting a prompt. Since most of the things I want to generate are portraits, I will start with: "A portrait of a <description of object>"
So let's do this for Jackson Green. He is a student who had to go into hiding and is now battle-hardened and unshaven.
"A portrait of a grizzled black teenager, chin curtain"
OK, this roughly looks like what I am going for, but I haven't even used any of the cool terms I picked earlier, so let's do that now and see what we get.
"A portrait of a grizzled black teenager, chin curtain, wearing camouflage, scars, onyx, neo-expressionism, long exposure, in the style of Bruce Timm and Cleon Peterson and Conrad Roset" I've also added "bear" to the negative prompt; otherwise I would occasionally get grizzly-bear ears on my gens.
Now this is much better!
Once I am satisfied with the prompt, I will continue to use that same prompt for all of my other gens with as little change as possible. Staying consistent is key!
Another handy trick is to give all "good guys" one art style and all "bad guys" another. That way you train your players to instinctively tell good and bad apart. You can of course exploit this to have a character pretend to be good!
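The two ideas above — one unchanging base prompt, plus one shared style per faction — can be sketched as a tiny template helper. The style term lists are the ones used in this guide; the function and dictionary names are just illustrative:

```python
# Shared style suffixes: one for "good guys", one for "bad guys",
# taken from the example prompts in this guide.
STYLES = {
    "good": "onyx, neo-expressionism, long exposure, "
            "in the style of Bruce Timm and Cleon Peterson and Conrad Roset",
    "evil": "onyx, Suprematism, Silver, Glowing, dark, moody",
}

def portrait_prompt(description: str, alignment: str = "good") -> str:
    """Build 'A portrait of <description>, <shared style terms>'."""
    return f"A portrait of {description}, {STYLES[alignment]}"
```

Every character of the same faction then gets the exact same suffix, which is what keeps the set of portraits coherent.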
Now, I have tried to make the good chupacabra; however, when using the same prompt style as for Jackson ("A portrait of a werewolf, quills instead of fur, porcupine, onyx, neo-expressionism, long exposure, in the style of Bruce Timm and Cleon Peterson and Conrad Roset") I get people.
Distinctly not like the creature above
This can often happen when the other terms in your prompt drive the model towards humans, so reduce your prompt until you find what makes it generate a human. I'll start with the artists first.
And suddenly we have something that looks like the creature above again. So it was the artists. I could refine further by changing the artist names, or by removing only some of them, but for my purposes that image is already good enough.
Now we need to generate the evil variants of the characters. Let's start with this one, as I already have the prompt down.
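If you like scripting, this reduction step can be semi-automated: produce one prompt variant per removed term, generate each variant, and see which removal brings the creature back. A hypothetical helper, assuming terms are comma-separated (note that a multi-artist phrase like "in the style of X and Y" contains no commas, so it is dropped as one unit):

```python
def reduction_variants(prompt: str) -> list[str]:
    """Return copies of a comma-separated prompt with one term
    removed from each, to test which term steers the image wrong."""
    terms = [t.strip() for t in prompt.split(",")]
    return [", ".join(terms[:i] + terms[i + 1:]) for i in range(len(terms))]
```

Generating one image per variant (same seed) makes it easy to spot the culprit at a glance.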
A portrait of a humanoid wolf, quills instead of fur, porcupine, onyx, Suprematism, Silver, Glowing, dark, moody,
Let's do one more humanoid, just so you see more of the process. I will do Helen the wizard. I have decided not to make her old, as the whole story is about her being immortal.
A portrait of a wizard, wearing a balenciaga dress, onyx, Suprematism, evil laugh, Silver, bloodstone, Glowing, dark, moody, in the style of Bruce Timm and Cleon Peterson and Conrad Roset
Perfectly normal wizard
Yes, this is a nearly perfect image. Her expression and her eyes are slightly off in just the right way, and the sharp geometric forms in the background emit a certain level of evil. Note that you can prompt for expressions; it works reasonably well.
I hope you all found this tutorial useful!

Bonus Step 1: I can't get my character in the right Pose

Sometimes it's hard to get your result into the pose you want. ControlNet can help there: https://artroomai.gitbook.io/tutorials/resources/extra-features-tutorials/controlnet
In short, you use a reference image and generate your result from it. So if you have an image with the right pose, you can use the Pose ControlNet to get a great result. If your reference image has armor/weapons, or you are making monsters, I would recommend using HED/SoftEdge instead.

Bonus Step 2: I have 10 images that are really close, but none is quite 100% what I want

You have two options:
Option 1: Use In- and Outpainting. This is a complicated process, but Sticks broke it down and wrote a great tutorial on it: https://artroomai.gitbook.io/tutorials/resources/extra-features-tutorials/the-paint-tab
Option 2: Use GIMP to assemble the parts that you like, save that image, and then use it as a starting image with low variation. This will fix the seams and give you a more coherent image.

Bonus Material: Loras

You can use Loras to teach the AI a concept it doesn't know, or to ensure that the AI generates a specific thing. Loras can be found on Civitai by selecting Lora at the top. For more info on how to use them, check out the Lora entry. https://artroomai.gitbook.io/tutorials/resources/extra-features-tutorials/loras