You generated 50 images with --cref and careful prompting. Maybe 10 are usable. The face in image 3 is perfect, but the outfit in image 7 is better, and the accessories in image 9 are what you need. Your perfect character is a combination of parts from different images.
Everyone tells you to re-prompt, adjust --cw, try different seeds, use Vary Region. All of that costs more credits, takes more time, and introduces more randomness. Compix does the opposite: take the 10 images you already have, draw shapes on the regions you want to mix, and get 620+ unique character variations without generating a single new image.
--cref guides Midjourney to generate NEW images matching a reference character. Better than nothing, but it still costs credits, still introduces variation, and is still unpredictable. You get images that are SIMILAR to your reference, not EXACT combinations of your best parts.
Vary Region regenerates ONE region of ONE image using AI. It costs credits, and the result is random: you can't specify "use the face from image 3." You can only tell Midjourney "regenerate this area" and hope it gives you something good.
Seeds amount to trying to reproduce a specific result by controlling randomness. It works sometimes and fails often. You're fighting the model's stochastic nature instead of working with the outputs you already have.
Compix takes your EXISTING Midjourney outputs. It extracts exact pixel regions using freeform shapes and combines them in every mathematical combination. The face from image 3 is the ACTUAL face from image 3, not an AI approximation. 9 images × 3 shapes = 620+ variations. Zero credits. Zero generation.
Pick the 5-15 images from your Midjourney generations that have the best individual parts — best faces, best outfits, best poses, best accessories. They don't need to be perfect. They just need to have at least one great region each.
Upload all images at once. One becomes your anchor — the base character. The others become your variation sources. They pixel-lock automatically to the same coordinate space.
Draw a freeform shape around the head. Another around the outfit. Another around the accessories. Each shape defines one axis of variation. Inside each shape, you instantly see that exact region from every other image.
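The "exact region" transfer above is ordinary boolean-mask compositing. Here's a minimal sketch, assuming the uploaded images are already pixel-aligned NumPy arrays and each drawn shape has been rasterized to a boolean mask; the `composite` function name is illustrative, not Compix's actual API:

```python
import numpy as np

def composite(anchor, source, mask):
    """Copy the masked region of `source` onto a copy of `anchor`.

    Both images must share the same dimensions (the coordinate
    space the upload step locks them to); `mask` is a boolean
    array marking the pixels inside one drawn shape.
    """
    out = anchor.copy()
    out[mask] = source[mask]  # pixel-exact transfer, no AI involved
    return out

# Tiny 2x2 RGB example: take the top row from `source`.
anchor = np.zeros((2, 2, 3), dtype=np.uint8)         # all black
source = np.full((2, 2, 3), 255, dtype=np.uint8)     # all white
mask = np.array([[True, True], [False, False]])
mixed = composite(anchor, source, mask)
# Top row comes from source, bottom row stays from anchor.
```

Because the pixels are copied rather than regenerated, the result is deterministic: running it twice gives the identical image.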
Hit Generate. The combinatorial engine produces every possible assignment — face from image 1 + outfit from image 4 + accessories from image 9, then face from image 2 + outfit from image 4 + accessories from image 7, and so on. 9 images × 3 shapes = 620+ unique results. Each one is pixel-perfect.
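The enumeration step is a Cartesian product over shapes. A sketch of the idea, with illustrative names: this counts raw assignments (9 sources for each of 3 shapes gives 9³ = 729), before whatever deduplication or filtering produces the 620+ figure, which the linked math guide covers:

```python
from itertools import product

def enumerate_assignments(n_images, shapes):
    """Every way to assign one source image to each drawn shape.

    Raw count only: n_images ** len(shapes) tuples, each tuple
    naming the source image index for each shape in order.
    """
    return list(product(range(n_images), repeat=len(shapes)))

combos = enumerate_assignments(9, ["face", "outfit", "accessories"])
print(len(combos))  # 9 ** 3 = 729 raw assignments
print(combos[0])    # (0, 0, 0): all three regions from image 0
```

Each tuple like `(0, 3, 8)` reads as "face from image 1, outfit from image 4, accessories from image 9" in the article's one-indexed phrasing.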
How 9 images become 72 combinations with 1 shape, and 620+ with 3 shapes. The detailed guide. See the math →
The practical guide to taking the face from one image, the outfit from another, and getting every combination. How to combine →
Blink comparison catches drift that --cref and seeds miss. Spot exactly where your character changed between generations. Detect drift →