It’s locally run Stable Diffusion 1.5 with a few models and LoRAs, nothing too special. Running on an old 1080 Ti, it’s more than enough.
Most of the work here is being done by the model.
Extras -
Okay, I went with Egyptian mummies (sort of).
Theme suggestion: magic, clerics, sorceresses, powerful magical women, etc.
Entry:
Prompt:
(cgi), 1woman, (ancient Egyptian mummy), (mummy wrapped in bandages), (dark tanned skin), (bathing by a river), (bandaged arms, bandaged leg, bandaged torso, bandaged stomach), (natural breasts, sexy pose, relaxed), (sunset), (alluring, smile, sexy, slutty, inviting), [ultra quality, highly detailed, realistic, high quality, 4k, 8k, meticulous design], [perfect face, perfect hands, perfect hand, detailed hand]

Negative prompt: EasyNegative, painting, plastic, airbrushed, shine, shiny, photoshop, doll, bad anatomy, bad hands, extra fingers, deformed hands, watermark, text, ugly, child, immature, cartoon

Settings: Steps: 40, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 3122720251, Face restoration: CodeFormer, Size: 512x512, Model hash: d05db48865, Model: Cust_Rev122_SFEB11_v01, Lora hashes: "mummy_costume_v0.2: 1fe09ffc6744", Version: 1.6.0

Postprocessing: upscale by 4 with R-ESRGAN 4x+
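If you'd rather script this outside the webui, here's a minimal sketch of roughly the same generation using the diffusers library. The base checkpoint is a stand-in for my custom Cust_Rev122_SFEB11_v01 model (which you won't find on the Hub), the prompt strings are truncated, and the rest maps onto the settings above:

```python
import torch
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler

# Stand-in SD 1.5 checkpoint; point this at your local custom model instead
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# "DPM++ 2M Karras" in the webui = multistep DPM-Solver with Karras sigmas
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

image = pipe(
    prompt="(cgi), 1woman, (ancient Egyptian mummy), ...",   # full prompt above
    negative_prompt="EasyNegative, painting, plastic, ...",  # full negative above
    num_inference_steps=40,
    guidance_scale=7.0,                                      # CFG scale
    width=512,
    height=512,
    generator=torch.Generator("cuda").manual_seed(3122720251),
).images[0]
image.save("mummy.png")
```

Caveat: "EasyNegative" only does anything if the embedding is actually loaded (sketch further down), and face restoration plus the R-ESRGAN upscale are webui postprocessing steps that aren't reproduced here.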
I find the best chance of getting a recognisable face is to have the subject facing the viewer; with the subject looking away or at a weird angle, the model definitely struggles. Having part of your prompt reference the eyes or face ("gold eyes", "portrait") may help.
Turning on face restore helps.
Embeddings and LoRAs also help to some extent (see the sketch after this list).
Embeddings: ng_deepnegative_v1_75t / EasyNegative / badhandv4
LoRA: epiCRealismHelper
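Continuing the diffusers sketch above, embeddings and LoRAs would load like this; the file paths are assumptions on my part (in the webui you'd just drop the files into embeddings/ and models/Lora/ instead):

```python
# Textual inversion embeddings: the token is what you type in the prompt
pipe.load_textual_inversion("embeddings/ng_deepnegative_v1_75t.pt",
                            token="ng_deepnegative_v1_75t")
pipe.load_textual_inversion("embeddings/EasyNegative.safetensors",
                            token="EasyNegative")
pipe.load_textual_inversion("embeddings/badhandv4.pt", token="badhandv4")

# LoRA: load, then bake in at a chosen strength (like <lora:name:0.8> in A1111)
pipe.load_lora_weights("loras/epiCRealismHelper.safetensors")
pipe.fuse_lora(lora_scale=0.8)
```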
Models: I am finding the “realcartoon” set to be pretty reliable.
You can also try ADetailer, which, after the base image is generated, re-generates the face or body (your choice) in more detail. Note: I'm not using this at the moment as it slows generation down considerably.
What model(s) are you using? Does the same thing happen if you copy this prompt, for example? Happy to try one of your prompts to see what the result is.
Edit: final thought: some models struggle with resolutions they weren't trained on, so stick to 512x512 unless you're sure, then upscale with your upscaler of choice.
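In code terms the same generate-small-then-upscale workflow looks like this; PIL's Lanczos resize is just a crude stand-in here for the R-ESRGAN 4x+ pass the webui runs in postprocessing:

```python
from PIL import Image

# Generate at the native SD 1.5 resolution...
base = pipe(prompt="...", width=512, height=512).images[0]

# ...then upscale 4x afterwards (the webui would use R-ESRGAN 4x+ here)
upscaled = base.resize((2048, 2048), resample=Image.LANCZOS)
upscaled.save("mummy_4x.png")
```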