Which upscaler actually produced better detail?
You ran the same image through Topaz Gigapixel and Real-ESRGAN. You opened both in Finder. You zoomed in on the hair, then the texture, then the background. They look similar — or do they? Drop both into Compix and let pixel diff show you exactly where they diverge, down to every individual pixel. No squinting required.
Every upscaling algorithm makes different tradeoffs: sharpness versus smoothness, detail preservation versus hallucination, edge ringing versus soft blur. These differences are real — but they're often subtle enough that standard viewing methods don't reliably reveal them.
Open both upscaled files in a viewer. Zoom to 100%. Look at the hair. Switch files. Look at the hair again. Try to remember what it looked like in the other one. Repeat for skin texture. Repeat for the background. Give up and pick the one that "feels" crisper.
Load both into one tool. Blink at 400ms — algorithmic style differences pop immediately as the image visibly shifts between two "feels." Then switch to pixel diff: a heatmap shows every divergence point across the entire image simultaneously. No memory required. No switching between applications.
Different comparison questions need different approaches. Here's when to use each.
The before/after check. Anchor your original. Add the upscaled output as a state. Blink between them at 400ms to verify the upscale actually improved detail rather than introducing artifacts. A good upscale should add information — you should see more detail, not just a larger version of the same image with smoothing applied.
Best mode: Blink + Split wipe for regional inspection
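Compix handles blinking in-app, but the same 400ms alternation can be approximated outside the tool as a looping two-frame GIF. A minimal Pillow sketch — the function name and the synthetic stand-in frames are illustrative, not part of any tool's API:

```python
from PIL import Image

def make_blink_gif(frames, out_path, interval_ms=400):
    """Alternate between comparison states at a fixed interval.

    `frames` is a list of PIL images (original first, upscale second);
    the GIF loops forever, flipping every `interval_ms` milliseconds.
    """
    first, *rest = frames
    first.save(
        out_path,
        save_all=True,          # write a multi-frame file
        append_images=rest,     # remaining states become extra frames
        duration=interval_ms,   # per-frame display time in ms
        loop=0,                 # 0 = loop forever
    )

# Two synthetic frames standing in for original vs. upscaled output.
before = Image.new("RGB", (64, 64), (120, 120, 120))
after = Image.new("RGB", (64, 64), (140, 120, 110))
make_blink_gif([before, after], "blink.gif")
```

At 400ms per frame the eye treats the alternation as one image changing, which is exactly why artifacts read as flicker rather than as a static difference.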
The head-to-head algorithm comparison. Upscale the same image in both tools at the same output resolution. Drop both in, anchor one, add the other as a state. The pixel diff heatmap shows every divergence point simultaneously — you see the entire image's differences at once, not just the region you happen to zoom into.
Best mode: Pixel diff heatmap
The settings sweep. Topaz Gigapixel Suppress Noise at 30 vs. 60. Real-ESRGAN with different model variants. Lightroom Enhance at different detail levels. Load all variants as states against a single anchor and blink through them sequentially. The strongest setting becomes obvious in under 60 seconds.
Best mode: Blink comparison (multi-state)
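Blinking tells you which setting looks best; if you also want a number for how far each variant drifts from the anchor, a mean-absolute-difference score per state is a simple proxy. This is purely illustrative — not a metric Compix exposes — and it measures amount of change, not quality of change:

```python
import numpy as np

def divergence_score(anchor: np.ndarray, variant: np.ndarray) -> float:
    """Mean absolute per-pixel difference from the anchor (0 = identical)."""
    return float(np.mean(np.abs(
        anchor.astype(np.float32) - variant.astype(np.float32))))

# A synthetic anchor plus two hypothetical "settings" that drift
# progressively further from it (stand-ins for real tool output).
rng = np.random.default_rng(0)
anchor = rng.integers(0, 256, (32, 32), dtype=np.uint8)
variants = {
    "setting-30": np.clip(anchor.astype(np.int16) + 2, 0, 255).astype(np.uint8),
    "setting-60": np.clip(anchor.astype(np.int16) + 10, 0, 255).astype(np.uint8),
}
scores = {name: divergence_score(anchor, v) for name, v in variants.items()}
# Higher score = the setting changed more of the image.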
Topaz is known for strong detail recovery on faces and textures. When diffing against your original, look for hallucinated fine structure that the original doesn't contain — pores, hair strands, fabric weave. The diff heatmap will show you whether Gigapixel's additions are confined to high-frequency areas or spread into smooth mid-tone regions where they shouldn't be.
Real-ESRGAN tends to produce sharper edges with occasional ringing artifacts, particularly around high-contrast edges. Blink comparison against Topaz output often reveals the stylistic difference immediately — Real-ESRGAN reads as "sharper" while Topaz reads as "smoother." The pixel diff heatmap shows the divergence is concentrated at edge boundaries.
SwinIR-based models often produce a different texture character than GAN-based models. The blink test at 300ms makes the stylistic difference between a SwinIR upscale and a Real-ESRGAN upscale immediately apparent — the image's entire feel shifts between the two styles. The diff heatmap shows the changes are distributed globally rather than confined to edges.
Lightroom's AI Enhance (formerly Enhance Details) takes a different approach — it's optimized for RAW detail recovery rather than general upscaling. When comparing against a standard bicubic export from the same RAW, the diff heatmap typically shows changes concentrated in fine detail areas: eyelashes, feathers, fabric threads — exactly what the algorithm claims to improve.
Waifu2x is trained primarily on anime-style images and applies strong denoise processing. When comparing anime or illustration upscales against other algorithms, the blink test makes the smoothing character immediately obvious — Waifu2x output has a characteristic "painted" quality that differs from photo-realism-trained models.
SD-based upscalers (img2img at high denoise, Ultimate SD Upscale, ControlNet tile) can produce dramatically different results from the same input depending on the denoise strength. Load multiple denoise levels as states and blink through them — the pixel diff heatmap shows exactly how much the image is changing versus being enhanced at each level.
Diff heatmap between two upscaled variants — bright regions show where algorithms diverge.
The goal of an AI upscaling algorithm is to increase resolution while preserving or recovering detail that the original image contains, without introducing detail that wasn't there. Evaluating whether any given upscale achieved this requires examining multiple types of content in the image — because different algorithms make different tradeoffs in different content areas.
High-frequency detail areas (hair, fur, feathers, fabric, grass) are where algorithms diverge most dramatically. This is where one algorithm might produce a convincing reconstruction while another introduces smearing or artificial sharpening halos. Use split wipe to park the divider across a hair region and drag slowly — you'll see exactly how each algorithm handles the transition from coarse to fine structure.
Smooth mid-tone areas (skin, sky, painted walls) should look identical or nearly identical between a good upscale and the original. If the diff heatmap shows significant change in a smooth area, it usually indicates the algorithm introduced noise, grain, or texture where none existed in the original. This is not improvement — it's hallucination.
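The "change in a smooth area means hallucination" heuristic can be made concrete: if a region has low variance in the original (smooth) but the diff there is non-trivial, the algorithm invented texture. A rough sketch — both thresholds are arbitrary starting points, not calibrated values:

```python
import numpy as np

def flags_hallucination(original_region, diff_region,
                        smooth_var=25.0, change_thresh=4.0):
    """Flag a region that was smooth in the original (low variance)
    but shows meaningful per-pixel change in the diff.
    Thresholds are arbitrary defaults; tune them per image."""
    is_smooth = float(np.var(original_region)) < smooth_var
    changed = float(np.mean(diff_region)) > change_thresh
    return is_smooth and changed

# A flat gray patch (think: clear sky) where an upscaler added grain.
sky = np.full((16, 16), 128, dtype=np.uint8)
rng = np.random.default_rng(1)
grainy = np.clip(sky + rng.integers(-12, 13, sky.shape), 0, 255).astype(np.uint8)
diff = np.abs(sky.astype(np.int16) - grainy.astype(np.int16))
# flags_hallucination(sky, diff) -> True: texture appeared where none existed
```

The same check run over a grid of tiles approximates what the heatmap shows you visually: smooth tiles should stay dark.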
Edge boundaries are where ringing artifacts and fringing typically appear. A high-contrast edge — a window frame against sky, text on a background, a sharp architectural line — will often reveal whether an algorithm produces ringing (light/dark halos along edges) or correctly preserves the edge without enhancement artifacts. Blink comparison makes ringing artifacts immediately obvious because they appear as an additional flickering element at the edge boundary.
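Ringing has a precise signature: pixel values near an edge that overshoot the range of values present in the original. A one-dimensional sketch of the idea, using made-up sample values:

```python
import numpy as np

def has_overshoot(original: np.ndarray, processed: np.ndarray,
                  tol: float = 1.0) -> bool:
    """True if the processed signal swings outside the original's
    value range -- the classic signature of edge ringing (halos)."""
    lo, hi = float(original.min()), float(original.max())
    return bool((processed < lo - tol).any() or (processed > hi + tol).any())

# A hard step edge (dark wall against bright sky), then a sharpened
# version with the light/dark halos that over-eager algorithms produce.
edge = np.array([50, 50, 50, 50, 200, 200, 200, 200], dtype=np.float32)
ringing = np.array([50, 50, 45, 38, 215, 208, 200, 200], dtype=np.float32)
# has_overshoot(edge, ringing) -> True; has_overshoot(edge, edge) -> False
```

In two dimensions those overshoots are exactly the light/dark halos you see flickering at edge boundaries during a blink comparison.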
AI upscaling comparison has a genuine subjectivity component: "better" depends on the intended use. For large-format print, you may want the algorithm that produces the most perceptual sharpness even if it introduces some hallucinated detail. For scientific or forensic use, you want the algorithm that changes the image least while scaling it. For anime upscaling, Waifu2x's strong denoise may be exactly right while Real-ESRGAN's grain preservation may be wrong.
The blink test and pixel diff don't tell you which upscale is "better" — they show you exactly how they differ. The judgment of which difference matters for your use case is still yours to make. But that judgment is much more reliable when you're looking at a complete, objective picture of the differences rather than trying to reconstruct them from memory while squinting at two open windows. For AI image generation comparison more broadly, see the dedicated AI art comparison page.
The standalone pixel diff heatmap page — full documentation on what each type of difference means and when to use each comparison mode. Open →
Compare Stable Diffusion outputs, Midjourney variations, Flux generations. For SD-based upscaling workflows especially. Open →
The complete comparison toolkit — blink, split wipe, pixel diff, multi-state, GIF export. Open →
Layer your best upscale results into compositions, comparison layouts, or animated sequences. Open →