You ran a batch of 50-100 images through your SD/ComfyUI workflow. Cherry-picked the best 10-15. Some nailed the face but the outfit is wrong. Others have the perfect outfit but the expression drifted. Others got the accessories right but the pose is off.
The standard fix: re-run the pipeline with regional prompting, inpainting, or ControlNet adjustments. Each fix is another generation pass — more GPU time, more electricity, more waiting. Compix skips all of that. Upload your 10 best outputs, draw freeform shapes on the swappable regions, and get 620+ unique combinations without touching your pipeline again.
Regional prompting: apply different prompts to different regions of a NEW image. Complex to set up, unreliable at region boundaries, and it still generates new pixels. Each variation is a new generation pass consuming GPU time.
Inpainting: fix one region of one image by regenerating it. The result is unpredictable; the new region may not match the original style. Each fix is a separate GPU pass. And you can't specify "use the exact face from batch image 3."
Compix: takes your EXISTING batch outputs. Extracts exact pixel regions. Recombines them in every possible combination. The face from batch image 3 is the ACTUAL pixels from batch image 3. No generation. No GPU. 9 images × 3 shapes = 620+ variations in seconds.
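As a back-of-the-envelope check on those counts, the combinatorics can be sketched in TypeScript. This assumes one upload acts as the anchor and the remaining uploads are variant sources, and that every non-empty subset of shapes can be swapped independently; `countCombinations` and `binomial` are hypothetical names for illustration, not Compix's API, and the app's exact counting rules may differ.

```typescript
// Binomial coefficient C(n, k): number of ways to pick k shapes out of n.
function binomial(n: number, k: number): number {
  let result = 1;
  for (let i = 0; i < k; i++) result = (result * (n - i)) / (i + 1);
  return result;
}

// For each non-empty subset with k active shapes, each active shape can
// take its pixels from any variant source: C(shapes, k) * variants^k.
// Summing over k gives the total combination count. Hypothetical sketch.
function countCombinations(variantSources: number, shapes: number): number {
  let total = 0;
  for (let k = 1; k <= shapes; k++) {
    total += binomial(shapes, k) * Math.pow(variantSources, k);
  }
  return total;
}

// 9 uploads = 1 anchor + 8 variant sources, 3 shapes:
console.log(countCombinations(8, 3)); // 728
```

With 9 uploads and 3 shapes this yields 728 combinations under these assumptions, consistent with the "620+" figure above.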
From your ComfyUI workflow, A1111 batch, or Forge output folder, pick the 5-15 images with the best individual regions. They don't all need to be good; each just needs at least one great part: a face, an outfit, a pose, an accessory.
Upload all the images. One becomes your anchor; the rest become variation sources. They should be the same resolution and roughly the same framing, which they will be if they were generated with the same batch settings.
Freeform shape around the head. Rectangle around the torso/outfit. Circle around an accessory. Each shape is an axis of variation. Inside each shape, the exact pixels from every source image appear as clickable options.
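Each drawn shape ultimately acts as a per-pixel mask over the image. Here is a minimal sketch of how a circle could be rasterized into such a mask; `circleMask` is a hypothetical helper assuming a row-major boolean array, and the real app would more likely use Canvas path hit-testing. Rectangles and freeform polygons rasterize the same way: one inside/outside test per pixel.

```typescript
type Mask = boolean[]; // one entry per pixel, row-major

// Rasterize a circle of radius r centered at (cx, cy) into a boolean
// mask over a width x height pixel grid. Hypothetical helper, not
// Compix's actual API.
function circleMask(
  width: number, height: number,
  cx: number, cy: number, r: number,
): Mask {
  const mask: Mask = new Array(width * height);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const dx = x - cx, dy = y - cy;
      mask[y * width + x] = dx * dx + dy * dy <= r * r;
    }
  }
  return mask;
}

const mask = circleMask(100, 100, 50, 50, 20);
console.log(mask[50 * 100 + 50]); // true: center is inside the circle
console.log(mask[0]);             // false: top-left corner is outside
```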
Hit Generate. Every permutation of every region across every source image is computed. 9 images × 3 shapes = 620+. Each result is full resolution, pixel-exact. No ControlNet, no LoRA, no sampler settings. Pure combinatorial math in the browser.
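The permutation step above can be sketched as plain enumeration: every non-empty subset of shapes, crossed with every assignment of a variant source to each active shape. All names here are illustrative assumptions, not Compix's actual internals.

```typescript
// Each combination is a map from active shape index to the variant
// source index supplying that region's pixels. Illustrative sketch.
type Assignment = Map<number, number>; // shapeIndex -> sourceIndex

function enumerateCombinations(
  shapeCount: number,
  sourceCount: number,
): Assignment[] {
  const results: Assignment[] = [];
  // Each bitmask over shapes selects one non-empty subset of active shapes.
  for (let subset = 1; subset < 1 << shapeCount; subset++) {
    const active: number[] = [];
    for (let s = 0; s < shapeCount; s++) {
      if (subset & (1 << s)) active.push(s);
    }
    // Cross product: each active shape independently picks a source.
    let assignments: Assignment[] = [new Map()];
    for (const shape of active) {
      const next: Assignment[] = [];
      for (const partial of assignments) {
        for (let src = 0; src < sourceCount; src++) {
          next.push(new Map(partial).set(shape, src));
        }
      }
      assignments = next;
    }
    results.push(...assignments);
  }
  return results;
}

// 3 shapes, 8 variant sources: 3*8 + 3*64 + 512 = 728 combinations.
console.log(enumerateCombinations(3, 8).length); // 728
```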
Compix uses the browser Canvas API for all pixel operations. No WASM, no WebGL, no server. The combinatorial engine computes all non-empty subsets of shapes, then for each subset computes all variant assignments across source images. Each combination gets a two-pass offscreen render at full resolution: base layer from the anchor, then each active shape region composited from the assigned source. The formula: sources × (2^shapes − 1) × variants^active_shapes. Pure math. Deterministic. Runs in milliseconds per combination.
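The two-pass render described above can be sketched with raw RGBA buffers; in the browser these would come from `getImageData`/`putImageData` on an offscreen canvas. This is a hypothetical illustration assuming same-size images and one boolean mask per shape, not Compix's actual code.

```typescript
// Two-pass composite. Pass 1 copies the anchor as the base layer;
// pass 2 overwrites each shape's masked pixels with the pixels from
// the source image assigned to that shape. Buffers are RGBA, so each
// pixel occupies 4 bytes. Hypothetical sketch.
function composite(
  anchor: Uint8ClampedArray,
  regions: { mask: boolean[]; source: Uint8ClampedArray }[],
): Uint8ClampedArray {
  const out = new Uint8ClampedArray(anchor); // pass 1: base layer
  for (const { mask, source } of regions) {  // pass 2: masked regions
    for (let p = 0; p < mask.length; p++) {
      if (mask[p]) {
        for (let c = 0; c < 4; c++) out[p * 4 + c] = source[p * 4 + c];
      }
    }
  }
  return out;
}

// 2x1 image: the mask covers only pixel 0, so pixel 0 comes from the
// assigned source and pixel 1 keeps the anchor's exact pixels.
const anchor = new Uint8ClampedArray([10, 10, 10, 255, 20, 20, 20, 255]);
const source = new Uint8ClampedArray([99, 99, 99, 255, 88, 88, 88, 255]);
const result = composite(anchor, [{ mask: [true, false], source }]);
console.log(result[0], result[4]); // 99 20
```

Because the output is copied bytes rather than sampled or blended pixels, the result is deterministic and exact, which is the property the paragraph above relies on.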
How 9 images become 72 combinations with 1 shape, and 620+ with 3 shapes. The complete guide. See the math →
The technical deep dive: combinator vs blender, the formula, and why generation platforms will never build this. Read more →
Before combining, blink-compare your batch to quickly identify which images have the best regions. Compare →