For AI Artists • Stable Diffusion • Midjourney • Flux • ComfyUI

You can't see character drift
by comparing images side by side.

Nobody told you this. But it's why you keep picking the wrong generation.

When you put two AI images side by side, your brain compares a perception to a memory — not two perceptions simultaneously. A 4-pixel eye shift is invisible. A jaw that widened by 3% is invisible. Hair that moved slightly is invisible. You're not comparing images. You're comparing what you see now to what you remember from half a second ago. That's a memory test. Not a consistency check.

Drift visible in <2 minutes • Up to 50 generations at once • Images never leave your device

Every tutorial teaches you how to prevent drift.
Nobody teaches you how to detect it after generation.

LoRA, IP-Adapter, InstantID, --cref, seed locking, reference weights. The entire AI art world is fighting drift at the generation level. Trying to stop it before it happens. But drift still happens. And when it does — you're staring at 50 images trying to figure out which ones held consistency. With no tool. With only your eyes. Side by side.

What happens when you compare side by side

Your eye moves between two images. Your brain registers image A. Your eye moves to image B. Your brain tries to compare what it sees to what it remembers. But visual memory degrades in under a second. A 4-pixel eye drift, a jaw that subtly widened, hair that moved 8 pixels to the left — completely invisible. You decide they look the same. You pick one. You continue. Three scenes later you realize you've been working with an inconsistent character for an hour.

What happens with blink comparison

Both images alternate at the same pixel coordinates, every 300 milliseconds if you want. Your eyes don't move. Your visual system — which evolved to detect motion — does all the work. A 4-pixel eye drift screams at you as motion. A jaw that widened appears to pulse. Hair that moved flickers. Consistent elements are completely invisible — no motion means no drift. You know in 3 seconds. Not 3 minutes of squinting.
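The blink loop itself is simple enough to sketch. Here's a minimal, hypothetical version in JavaScript (none of these names are Compix's actual API): an index cycles on a timer while the draw position never changes, so the only thing your eye can register is change between frames.

```javascript
// Minimal blink-loop sketch. Two frames alternate at a fixed interval;
// the viewport and draw coordinates never move. Illustrative only.
function makeBlinker(frames, intervalMs) {
  let index = 0;
  return {
    intervalMs,
    current() {
      return frames[index];
    },
    next() {
      // Called by a timer; advances to the other frame.
      index = (index + 1) % frames.length;
      return frames[index];
    },
  };
}

// In a browser you would drive this with a timer and one canvas:
// setInterval(() => ctx.drawImage(blinker.next(), 0, 0), blinker.intervalMs);
```

The key design point is that both frames render at identical coordinates, so motion detection, not memory, does the comparison.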

Same two generations. Side by side vs blink.

The face drifted 6 pixels. Invisible side by side. Impossible to miss in blink.

Split comparison — inspect at any point
Split wipe comparison of two AI character generations — drag the divider to inspect drift at pixel level
Blink comparison — drift is impossible to miss
Same two AI character generations compared with blink comparison — character drift visible as motion

Both comparisons show the exact same two images. One hides drift. One makes it unmissable.

Pixel lock. Why most comparison tools give you false results.

Every AI generator produces images where the character sits at slightly different positions on the canvas — even across consistent generations. When you compare without pixel lock, blink comparison shows positional drift even when the character itself is perfectly consistent. You think you have drift. You don't. Your comparison is lying to you.

Without pixel lock

Image A has the character at canvas position 240, 180. Image B has the exact same character at position 243, 182 — 3 pixels right, 2 pixels down. Blink comparison without pixel lock shows the entire character as "drifted" even though the face, pose, and style are identical. You're seeing canvas position noise, not actual character drift. You throw away a perfectly good generation.

With pixel lock — how Compix works

Compix scans the content bounds of every image in your batch and locks them all to the same coordinate space. Canvas position noise is eliminated. What you see in blink is only actual character change — face movement, pose shift, style inconsistency. Not canvas positioning artifacts. Your comparison is accurate. Your decisions are based on real data.
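Compix's content scan isn't public, but the general mechanics of pixel locking can be sketched. In this illustrative JavaScript version (the function names and the brightness-threshold heuristic are assumptions, not Compix's real code), each image's content bounding box is found, and the second image is assigned the offset that moves its content onto the anchor's:

```javascript
// Find the bounding box of "content" pixels in a grayscale 2D array.
// Here, content = any value above a threshold; a real tool would use
// a smarter heuristic.
function contentBounds(pixels, threshold = 0) {
  let minX = Infinity, minY = Infinity, maxX = -1, maxY = -1;
  pixels.forEach((row, y) => row.forEach((v, x) => {
    if (v > threshold) {
      if (x < minX) minX = x;
      if (y < minY) minY = y;
      if (x > maxX) maxX = x;
      if (y > maxY) maxY = y;
    }
  }));
  return { minX, minY, maxX, maxY };
}

// Offset that shifts "other" so its content's top-left corner lands
// on the anchor's. Applying this before blinking removes canvas
// position noise, leaving only real character change.
function lockOffset(anchorPixels, otherPixels) {
  const a = contentBounds(anchorPixels);
  const b = contentBounds(otherPixels);
  return { dx: a.minX - b.minX, dy: a.minY - b.minY };
}
```

With the offset applied at draw time, a character sitting at 240, 180 in one image and 243, 182 in another blinks as perfectly still unless something actually changed.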

From 50 generations to confirmed consistency in 2 minutes.

This is the workflow that doesn't exist inside any generator. It happens after.

1

Set your anchor

Drop your reference — the generation you consider your strongest character representation. This is what everything else gets compared against. It doesn't have to be perfect. It just needs to be your baseline.

2

Load your entire batch

Drop all your generations at once. Up to 50 images. They pixel-lock automatically — content-scanned and aligned to the same coordinate space. No manual alignment. No Canva. No Photoshop layers. Drop and done.

3

Blink through everything

Hit play at 300ms. Watch. Drifted generations scream at you as motion — face jumps, pose shifts, style flickers. Consistent generations are silent — no motion, no drift. You've found your consistent ones without looking at a single image individually.

4

Confirm with diff and split

Switch to Diff on anything that looked borderline in blink. The heatmap shows exactly which pixels changed. Drag the split wipe across the face to confirm. You're not guessing anymore. You know.

Drifted images are not failures.
They're variation material.

The AI art world treats drifted generations as wasted compute. Generate again. Try another seed. Add more reference weight. But the drifted image you just threw away might have had a perfect expression. A perfect hand gesture. A lighting moment you couldn't prompt for. The drift was in the wrong place — not everywhere.

The diff heatmap shows you exactly what drifted

Load a drifted generation. Switch to Diff. The heatmap glows where change happened. If the face drifted but the lighting and background are perfect — you know. The diff tells you which regions are salvageable and which aren't. You're not throwing away the whole image. You're reading it.
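Under the hood, a diff heatmap is per-pixel arithmetic. A toy grayscale version (Compix's exact metric and color ramp are not public; this only shows the idea) maps consistent pixels to zero and changed pixels to the size of the change:

```javascript
// Per-pixel absolute difference over two grayscale 2D arrays.
// A heatmap is just this array rendered through a color ramp:
// 0 stays dark, larger differences glow.
function pixelDiff(a, b) {
  return a.map((row, y) => row.map((v, x) => Math.abs(v - b[y][x])));
}

// Crude drift score: fraction of pixels that changed at all.
function driftScore(diff) {
  const flat = diff.flat();
  return flat.filter(v => v > 0).length / flat.length;
}
```

Reading the diff regionally is what makes a drifted image salvageable: if the glow is confined to the face while the background region stays dark, the background is usable.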

See also: How to get more variations from 5 AI images than your generator gave you from 50 →

Freeform shape extraction — isolate what's usable

The diff showed you the face drifted but the pose is perfect. Use Compix's freeform shape tool to isolate exactly that region — draw the shape around the pose, extract it as an invariant state. Now you have a variation your generator never actually gave you. Pixel-locked. Ready to composite. The "failed" generation became usable material.
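The core of any freeform extraction is a point-in-region test applied per pixel. A sketch of the idea (Compix's shape tool and export format are assumed, not documented; this uses a standard ray-casting point-in-polygon test over a grayscale array):

```javascript
// Standard ray-casting test: is point (x, y) inside the polygon?
function insidePolygon(x, y, poly) {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const [xi, yi] = poly[i], [xj, yj] = poly[j];
    if ((yi > y) !== (yj > y) &&
        x < ((xj - xi) * (y - yi)) / (yj - yi) + xi) {
      inside = !inside;
    }
  }
  return inside;
}

// Keep only pixels whose centers fall inside the drawn shape;
// blank everything else. The kept region is the reusable material.
function extractRegion(pixels, poly) {
  return pixels.map((row, y) =>
    row.map((v, x) => (insidePolygon(x + 0.5, y + 0.5, poly) ? v : 0)));
}
```

Because the batch is already pixel-locked, a region extracted from one generation drops onto another at the same coordinates with no manual alignment.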

See also: Creating variations from drifted AI generations →

If you generate batches and pick the best one — this is for you.

Stable Diffusion & Forge users

You generate 20–50 images per session with different seeds, CFG scales, or sampler settings. You alt-tab through the output folder trying to pick the most consistent character. You know drift is happening. You can't quantify it. Blink comparison and pixel diff give you the comparison tools your output folder never had.

ComfyUI users

The Image Comparer node compares two images. That's the ceiling. When you have a batch of 30 outputs and need to find which seeds held character consistency — you're back to manual inspection. Compix handles 50 at once, all pixel-locked, blink and diff across all of them.

Midjourney & Flux artists

You're scrolling Discord to compare older variations to new ones. You're screenshotting grids and stacking them in Canva. You're comparing a memory to a perception and calling it QA. Drop your exported variations into Compix — blink comparison tells you in seconds which ones actually held consistency across your --cref or --sref workflow.

AI comic & storyboard artists

You need the same character to appear consistently across 20, 50, 100 panels. Every panel is generated. Every generation has potential drift. One panel with an eye that shifted 5 pixels breaks immersion for readers. Blink comparison catches that panel before it becomes a problem. Diff heatmap tells you exactly how much it drifted.

Questions about drift detection

Why can't I see drift when comparing side by side?

When you compare images side by side, your brain compares a perception to a memory — not two simultaneous perceptions. A 4-pixel eye shift is completely invisible because your visual memory isn't precise enough to detect sub-10-pixel changes across a saccade. Blink comparison alternates both images at the same pixel coordinates while your eyes stay still. Your visual system — which evolved specifically to detect motion — treats the drift as motion. A 4-pixel shift appears as a visible jump. Side by side, it was invisible.

What is pixel lock and why does it matter?

Pixel lock means all comparison images are aligned to exactly the same coordinate space before comparison begins. AI generators place characters at slightly different canvas positions even across "consistent" generations — usually within 2–5 pixels. Without pixel lock, blink comparison shows positional noise as drift even when the character itself is perfectly consistent. Compix scans content bounds across your entire batch and eliminates this noise automatically. What you see in blink is only actual character change.

Can I compare an entire output folder at once?

Yes. Drag your output folder into Compix — every PNG or JPG becomes a comparison state. All of them pixel-lock automatically. Set one as your anchor (your reference generation) and blink through all the others. No setup, no workflow nodes, no plugins. Your images never leave your device — all processing runs locally in the browser.

How is this different from ComfyUI's Image Comparer node?

The ComfyUI Image Comparer node compares two images side by side inside the workflow. It has no blink comparison, no pixel diff heatmap, no split wipe, and no pixel lock. It also compares two images — not 50. Compix sits outside your generation workflow and handles your entire output batch: one anchor, up to 50 comparisons, all three comparison modes, pixel-locked and ready in seconds.

Do my images get uploaded anywhere?

Your images never leave your device. All comparison, pixel diff computation, and export runs entirely in your browser using the Canvas API and Web Workers. There is no server, no upload, no data collection. You can verify this yourself: disconnect your internet connection after loading the page and the tool works identically. Compix is fully installable as a PWA for offline use.
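The local-processing claim rests on splitting per-pixel work across Web Workers instead of sending it to a server. The worker wiring itself is browser-only, but the row-chunking step such a design relies on can be sketched (names are illustrative, not Compix's code):

```javascript
// Split an image's rows into near-equal chunks, one per worker.
// Each chunk would be posted to a Web Worker via postMessage, diffed
// locally, and the results merged — no bytes ever leave the machine.
function chunkRows(height, workerCount) {
  const size = Math.ceil(height / workerCount);
  const chunks = [];
  for (let start = 0; start < height; start += size) {
    chunks.push({ start, end: Math.min(start + size, height) });
  }
  return chunks;
}
```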

Detection is step one. Combining the best parts is step two.

Combine the Best Parts From Each Image

The face from image 3, the outfit from image 7, the accessories from image 9. Draw freeform shapes on each region and get every possible combination — 620+ from 9 images. Combine now →

Create 620+ Variations Without Regenerating

Your drifted generations aren't failures — they're variation material. Freeform shape extraction turns any region of any image into a usable variant. See the math →

Pixel Diff Heatmap

The mathematical layer under the blink. Every changed pixel glows. Every consistent pixel stays dark. See exactly what drifted and how much. Open diff tool →

Stop guessing which generation held consistency.

Drop your batch. Set your anchor. Blink at 300ms. You'll know in seconds.

Check Your Generations Now →