End-to-End Camera System MTF Sandbox

Introduction

This interactive sandbox explores how optical design, sensor physics, and image processing interact as a single system. In the left-hand user control panel, pick one of the System Presets to see default system performance and warnings.

For a challenge, click Break System, then adjust the Optics, Sensor, and ISP sliders to recover from the degraded System Performance.


Sandbox Modeling Assumptions
  • Monochromatic illumination (no chromatic aberration)
  • Circular aperture, uniform transmission
  • Zernike polynomials: RMS wavefront error in waves (λ)
  • Incoherent imaging: PSF = |FFT(pupil)|²
  • Pixel MTF: rectangular aperture (sinc)
  • X-axis normalized to sensor Nyquist; diffraction cutoff shown separately
Wavefront Phase (Optics). RMS: 0.00 λ
Analysis Note

RMS >0.07λ (Maréchal criterion) = significant degradation. P-V error is ~4× larger and misleading.

Point Spread Function (Optics)
Analysis Note

Log scale reveals diffraction rings. Energy in tails degrades contrast even when core looks sharp.

Holistic System Performance: MTF Cascade (System)
Curves: Diffraction Limit, Optics MTF, Sensor MTF, ISP MTF, System MTF
Analysis Note

MTF at Nyquist should be <10% to avoid aliasing. ISP MTF >100% = sharpening overshoot → halos. Optics MTF is always ≤ diffraction limit.

Edge/Line Spread (System)
ISP MTF Decomposition (ISP)
Curves: Combined ISP, Noise Reduction, Sharpening, Unity (100%)
Analysis Note

NR rolls off high frequencies (blur). Sharpening boosts mid-frequencies (can exceed 100%). Their product reveals why "fix it in post" has fundamental limits.

SNR vs Frequency (Sensor)
Analysis Note

SNR <20 dB at fine detail = noise dominates. Megapixel marketing becomes fiction here.

Electron Well (Sensor). DR: -- stops
Analysis Note

Shows signal level in the pixel "bucket". Headroom to FWC = highlight protection. Margin above read noise = shadow detail.

Noise Decomposition (Sensor)
Analysis Note

Shot-noise limited = good (noise scales with √signal). Read-noise limited = low light (fixed noise floor dominates).

Quality Loss Waterfall (System)
Analysis Note

Shows where contrast budget goes. System losing 80% to optics can't be saved by better ISP.

Mathematical Framework: From Wavefront to Image Quality

This section documents the complete chain of equations used to compute each visualization, from the wavefront aberration at the exit pupil through to the final system MTF that determines image quality. All variables are defined where first used.

1. Wavefront Phase Map

The wavefront error $W(\rho,\theta)$ describes the optical path difference (OPD) across the exit pupil, expressed in waves ($\lambda$). We model it using Zernike polynomials in polar coordinates:

$$W(\rho, \theta) = \sum_n a_n Z_n(\rho, \theta)$$

where:

  • $\rho$ — normalized radial coordinate (0 at center, 1 at pupil edge)
  • $\theta$ — azimuthal angle in the pupil plane
  • $a_n$ — Zernike coefficient for mode $n$ (in waves $\lambda$)
  • $Z_n$ — Zernike polynomial (Noll indexing)

The specific Zernike polynomials implemented are:

$Z_4(\rho) = \sqrt{3}(2\rho^2 - 1)$
Defocus
$Z_5(\rho,\theta) = \sqrt{6}\,\rho^2 \sin(2\theta)$
Astigmatism (45°)
$Z_6(\rho,\theta) = \sqrt{6}\,\rho^2 \cos(2\theta)$
Astigmatism (0°)
$Z_7(\rho,\theta) = \sqrt{8}(3\rho^3 - 2\rho) \sin(\theta)$
Coma (vertical)
$Z_8(\rho,\theta) = \sqrt{8}(3\rho^3 - 2\rho) \cos(\theta)$
Coma (horizontal)
$Z_{11}(\rho) = \sqrt{5}(6\rho^4 - 6\rho^2 + 1)$
Primary Spherical

The RMS wavefront error is computed as:

$$W_{\text{RMS}} = \sqrt{\frac{1}{A} \iint_{\text{pupil}} W(\rho,\theta)^2 \,\rho\, d\rho\, d\theta}$$

where $A = \pi$ is the area of the unit pupil.

Assumption: Circular, unobstructed pupil. Wavefront is zero outside the unit circle ($\rho > 1$).

Quality criterion: Maréchal criterion states that $W_{\text{RMS}} < 0.07\lambda$ yields "diffraction-limited" performance (Strehl ratio $> 0.8$).
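The Zernike sum and the RMS integral above can be cross-checked numerically on a pupil grid. This is a minimal sketch; the grid size and the dictionary-of-modes interface are illustrative, not the sandbox's actual code:

```python
import numpy as np

def wavefront_rms(coeffs, n=256):
    """RMS wavefront error (waves) for Noll-indexed Zernike coefficients,
    e.g. {4: defocus, 6: astig 0 deg, 11: spherical}."""
    y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    pupil = rho <= 1.0
    Z = {
        4:  np.sqrt(3) * (2*rho**2 - 1),                      # defocus
        5:  np.sqrt(6) * rho**2 * np.sin(2*theta),            # astigmatism 45 deg
        6:  np.sqrt(6) * rho**2 * np.cos(2*theta),            # astigmatism 0 deg
        7:  np.sqrt(8) * (3*rho**3 - 2*rho) * np.sin(theta),  # coma (vertical)
        8:  np.sqrt(8) * (3*rho**3 - 2*rho) * np.cos(theta),  # coma (horizontal)
        11: np.sqrt(5) * (6*rho**4 - 6*rho**2 + 1),           # primary spherical
    }
    W = sum(a * Z[k] for k, a in coeffs.items())
    # Discrete mean over pupil pixels approximates the area integral
    return np.sqrt(np.mean(np.asarray(W)[pupil] ** 2))

# Noll Zernikes are orthonormal over the unit pupil, so the RMS should be
# close to the root-sum-square of the coefficients:
rms = wavefront_rms({4: 0.05, 11: 0.03})
```

Because of the orthonormality, adding 0.05λ of defocus and 0.03λ of spherical should give an RMS near √(0.05² + 0.03²) ≈ 0.058λ, safely inside the Maréchal criterion.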

2. Point Spread Function (PSF)

The PSF is the squared modulus of the Fourier transform of the generalized pupil function $P(\rho,\theta)$:

$$P(\rho, \theta) = A(\rho, \theta) \cdot \exp\bigl[i\,2\pi\, W(\rho, \theta)\bigr]$$
$$\text{PSF}(x, y) = \bigl|\mathcal{F}\{P(\rho, \theta)\}\bigr|^2$$

where:

  • $A(\rho,\theta)$ — pupil amplitude (1 inside pupil, 0 outside)
  • $W(\rho,\theta)$ — wavefront error in waves
  • $\mathcal{F}\{\cdot\}$ — 2D Fourier transform
  • $i$ — imaginary unit ($\sqrt{-1}$)

The PSF is normalized so that its peak value equals 1. Physical coordinates in the image plane scale with:

$$\Delta x = \lambda \cdot (f/\#)$$

where:

  • $\lambda$ — wavelength (μm)
  • $f/\#$ — f-number (focal length / aperture diameter)

The Airy disk diameter (first zero of the diffraction pattern) is:

$$d_{\text{Airy}} = 2.44 \cdot \lambda \cdot (f/\#)$$

Assumption: Monochromatic light at the specified wavelength. Incoherent imaging (intensities add, not amplitudes).
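The pupil-to-PSF step is a direct transcription of the two equations above. A minimal sketch (zero-padding factor and grid size are illustrative choices for PSF sampling):

```python
import numpy as np

def psf_from_pupil(W, pupil, pad=4):
    """PSF = |FFT(pupil * exp(i 2 pi W))|^2, peak-normalized.
    W: wavefront error in waves; pupil: boolean aperture mask."""
    P = np.where(pupil, np.exp(1j * 2 * np.pi * W), 0)
    n = P.shape[0] * pad                    # zero-pad for finer PSF sampling
    psf = np.abs(np.fft.fftshift(np.fft.fft2(P, s=(n, n)))) ** 2
    return psf / psf.max()

# Unaberrated circular pupil -> Airy pattern with its peak at the center
n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
pupil = np.hypot(x, y) <= 1.0
psf = psf_from_pupil(np.zeros_like(x), pupil)
```

Viewing `np.log10(psf + 1e-8)` rather than `psf` reproduces the log-scale display used in the PSF panel, which is what makes the diffraction rings visible.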

3. Encircled Energy (EE95)

The encircled energy $\text{EE}(r)$ is the fraction of total PSF energy within a circle of radius $r$:

$$\text{EE}(r) = \frac{\displaystyle\iint_{x^2+y^2 \le r^2} \text{PSF}(x,y)\, dx\, dy}{\displaystyle\iint \text{PSF}(x,y)\, dx\, dy}$$

The EE95 diameter is twice the radius at which $\text{EE}(r) = 0.95$. This metric captures the effective spot size including aberration tails, which the Airy diameter alone misses.

4. Modulation Transfer Function (MTF)

MTF quantifies contrast preservation as a function of spatial frequency. The system MTF is the product of individual component MTFs:

$$\text{MTF}_{\text{system}}(f) = \text{MTF}_{\text{optics}}(f) \times \text{MTF}_{\text{sensor}}(f) \times \text{MTF}_{\text{ISP}}(f)$$

4a. Diffraction-Limited MTF

For an ideal circular aperture with no aberrations:

$$\text{MTF}_{\text{diff}}(f) = \frac{2}{\pi} \left[\arccos\!\left(\frac{f}{f_c}\right) - \frac{f}{f_c}\sqrt{1 - \left(\frac{f}{f_c}\right)^{\!2}}\,\right] \quad \text{for } f < f_c$$
$$\text{MTF}_{\text{diff}}(f) = 0 \quad \text{for } f \ge f_c$$

where the optical cutoff frequency is:

$$f_c = \frac{1}{\lambda \cdot (f/\#)}$$
  • $f$ — spatial frequency (lp/mm or cycles/mm)
  • $f_c$ — cutoff frequency where MTF reaches zero

4b. Optics MTF (with aberrations)

The optics MTF is the normalized autocorrelation of the generalized pupil function:

$$\text{MTF}_{\text{optics}}(f) = \frac{\left|\displaystyle\iint P(x,y) \cdot P^*(x-\Delta, y)\, dx\, dy\right|}{\displaystyle\iint |P(x,y)|^2\, dx\, dy}$$

where the lateral shift $\Delta$ corresponds to spatial frequency $f$. This is computed via numerical integration with bilinear interpolation for sub-pixel shifts.

Constraint: $\text{MTF}_{\text{optics}} \le \text{MTF}_{\text{diff}}$ always (aberrations can only reduce MTF).

4c. Sensor MTF (Pixel Aperture)

A rectangular pixel acts as a spatial averaging filter. Its MTF is the sinc function:

$$\text{MTF}_{\text{pixel}}(f) = \left|\operatorname{sinc}(f p)\right| = \left|\frac{\sin(\pi f p)}{\pi f p}\right|$$

using the normalized sinc convention, $\operatorname{sinc}(x) = \sin(\pi x)/(\pi x)$.

where:

  • $p$ — pixel pitch (mm)
  • $f$ — spatial frequency (lp/mm)

The Nyquist frequency (maximum unaliased frequency) is:

$$f_{\text{Nyquist}} = \frac{1}{2p} = \frac{500}{p_{\mu m}} \;\text{[lp/mm]}$$

Assumption: 100% fill factor for pixel MTF. Actual fill factor affects sensitivity but not the aperture MTF.

4d. ISP MTF (Image Signal Processing)

The ISP applies sharpening (unsharp mask) and noise reduction (blur). The combined effect on MTF:

$$\text{MTF}_{\text{ISP}}(f) = \text{MTF}_{\text{NR}}(f) \times \text{MTF}_{\text{sharp}}(f)$$

Noise reduction is modeled as a Gaussian low-pass filter:

$$\text{MTF}_{\text{NR}}(f) = \exp\!\left[-\left(\frac{f}{f_{\text{NR}}}\right)^{\!2}\right]$$

where $f_{\text{NR}} = (1 - \text{NR}_{\text{strength}}) \times f_{\text{Nyquist}}$ sets the cutoff.

Sharpening boosts mid-to-high frequencies via unsharp masking:

$$\text{MTF}_{\text{sharp}}(f) = 1 + \text{gain} \times \bigl(1 - \text{MTF}_{\text{blur}}(f)\bigr)$$

where $\text{MTF}_{\text{blur}}$ is a Gaussian with $\sigma$ proportional to sharpening strength.

Note: $\text{MTF}_{\text{ISP}} > 1$ indicates artificial contrast enhancement (oversharpening), which creates edge halos.

5. Line Spread Function (LSF) and Edge Spread Function (ESF)

The LSF is the integral of the PSF along one axis (response to a line source):

$$\text{LSF}(x) = \int_{-\infty}^{\infty} \text{PSF}(x, y)\, dy$$

The ESF is the cumulative integral of the LSF (response to a step edge):

$$\text{ESF}(x) = \int_{-\infty}^{x} \text{LSF}(x')\, dx'$$

The displayed spread functions include the effects of:

  1. Optical PSF (diffraction + aberrations)
  2. Pixel aperture convolution (rectangular kernel of width = pixel pitch)
  3. ISP processing (noise reduction blur + sharpening)

Normalization: ESF is normalized to range $[0, 1]$. LSF is normalized so peak $= 1$.

6. Signal-to-Noise Ratio (SNR)

SNR determines the noise floor that limits useful resolution. The photometric chain converts scene illuminance to detected electrons:

6a. Scene Illuminance to Image-Plane Illuminance

$$L = \frac{E_{\text{scene}} \cdot \rho}{\pi}$$
$$E_{\text{image}} = \frac{\pi \cdot L \cdot T}{4 \cdot (f/\#)^2}$$

where:

  • $E_{\text{scene}}$ — scene illuminance (lux)
  • $\rho$ — surface reflectance (0.18 for 18% gray card)
  • $L$ — scene luminance (cd/m²)
  • $T$ — lens transmission (0.9 assumed)
  • $E_{\text{image}}$ — image plane illuminance (lux)

6b. Photon Flux and Electron Collection

$$\Phi = \frac{E_{\text{image}}}{K_m \cdot E_{\text{photon}}}$$
$$E_{\text{photon}} = \frac{h \cdot c}{\lambda}$$
$$N_e = \Phi \cdot A_{\text{pixel}} \cdot t_{\text{exp}} \cdot \text{FF} \cdot \text{QE}$$

where:

  • $\Phi$ — photon flux (photons/m²/s)
  • $K_m$ — luminous efficacy (180 lm/W for daylight)
  • $E_{\text{photon}}$ — energy per photon (J)
  • $h$ — Planck constant ($6.626 \times 10^{-34}$ J·s)
  • $c$ — speed of light ($3 \times 10^{8}$ m/s)
  • $A_{\text{pixel}}$ — pixel area (m²) $= p^2$
  • $t_{\text{exp}}$ — exposure time (s)
  • $\text{FF}$ — fill factor (fraction of pixel that is light-sensitive)
  • $\text{QE}$ — quantum efficiency (electrons per incident photon)
  • $N_e$ — collected electrons

6c. Noise Model and SNR

$$N_{e,\text{clamped}} = \min(N_e,\, \text{FWC})$$
$$\sigma_{\text{shot}} = \sqrt{N_{e,\text{clamped}}}$$
$$\sigma_{\text{total}} = \sqrt{N_{e,\text{clamped}} + \sigma_{\text{read}}^2}$$
$$\text{SNR} = \frac{N_{e,\text{clamped}}}{\sigma_{\text{total}}}$$
$$\text{SNR}_{\text{dB}} = 20 \cdot \log_{10}(\text{SNR})$$

where:

  • $\text{FWC}$ — full well capacity (maximum electrons before saturation)
  • $\sigma_{\text{shot}}$ — shot noise (Poisson statistics)
  • $\sigma_{\text{read}}$ — read noise (electrons RMS)
  • $\sigma_{\text{total}}$ — total noise (shot + read in quadrature)

Assumption: Shot noise dominates at high signal (photon-limited). Read noise dominates at low signal (read-limited).
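The whole 6a-6c chain fits in one short function. This is a sketch: $K_m = 180$ lm/W, $T = 0.9$, and $\rho = 0.18$ follow the assumptions above, while the remaining defaults (QE, fill factor, read noise, FWC) are illustrative values:

```python
import math

H, C = 6.626e-34, 3e8                 # Planck constant (J*s), speed of light (m/s)

def snr_db(lux, f_number, pitch_um, t_exp, qe=0.7, ff=0.9,
           reflectance=0.18, T=0.9, km=180.0, wavelength_um=0.55,
           read_noise=2.0, fwc=25000):
    """Scene illuminance -> collected electrons -> SNR in dB."""
    L = lux * reflectance / math.pi                       # luminance (cd/m^2)
    e_img = math.pi * L * T / (4 * f_number**2)           # image-plane lux
    e_photon = H * C / (wavelength_um * 1e-6)             # J per photon
    flux = e_img / (km * e_photon)                        # photons/m^2/s
    n_e = flux * (pitch_um * 1e-6)**2 * t_exp * ff * qe   # electrons
    n_e = min(n_e, fwc)                                   # clamp at full well
    snr = n_e / math.sqrt(n_e + read_noise**2)            # shot + read in quadrature
    return n_e, 20 * math.log10(snr)

# 1000 lux scene, f/4, 3 um pixels, 1/100 s: a comfortably photon-limited case
n_e, snr = snr_db(lux=1000, f_number=4.0, pitch_um=3.0, t_exp=1/100)
```

Halving the illuminance repeatedly and watching the dB value walk toward the 20 dB floor is a quick way to feel where "read-noise limited" begins for a given pixel.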

6d. SNR vs Spatial Frequency

As spatial frequency increases, signal contrast decreases with the MTF while noise remains constant:

$$\text{SNR}(f) = \text{SNR}_{\text{DC}} \times \text{MTF}_{\text{system}}(f)$$
$$\text{SNR}_{\text{dB}}(f) = \text{SNR}_{\text{DC,dB}} + 20 \cdot \log_{10}\bigl(\text{MTF}_{\text{system}}(f)\bigr)$$

Threshold: $\text{SNR} < 20\,\text{dB}$ (~10:1) is generally considered unusable for resolving detail.

7. Quality Loss Waterfall

The waterfall chart decomposes the system MTF at a reference frequency (typically Nyquist/2) into multiplicative losses:

$$\text{MTF}_{\text{system}} = \text{MTF}_{\text{optics}} \times \text{MTF}_{\text{sensor}} \times \text{MTF}_{\text{ISP}}$$

Expressed as percentage contrast remaining:

$$\text{Starting contrast} = 100\%$$
$$\text{After optics} = 100\% \times \text{MTF}_{\text{optics}}$$
$$\text{After sensor} = 100\% \times \text{MTF}_{\text{optics}} \times \text{MTF}_{\text{sensor}}$$
$$\text{Final (after ISP)} = 100\% \times \text{MTF}_{\text{system}}$$

Each bar shows the contrast loss at that stage. This visualization reveals the limiting factor: if optics loses 80% of contrast, no amount of ISP sharpening can recover true detail—it can only amplify noise and create artifacts.

8. Key System Parameters

| Parameter | Symbol | Typical Range | Impact |
|---|---|---|---|
| F-number | $f/\#$ | 1.4 – 16 | ↑ increases Airy disk, ↓ diffraction MTF |
| Wavelength | $\lambda$ | 400 – 700 nm | ↑ increases Airy disk, ↓ diffraction MTF |
| Pixel pitch | $p$ | 0.8 – 10 μm | ↓ increases Nyquist, ↓ electrons/pixel |
| Fill factor | $\text{FF}$ | 50 – 100% | ↑ improves sensitivity |
| Quantum efficiency | $\text{QE}$ | 40 – 80% | ↑ improves sensitivity |
| Read noise | $\sigma_{\text{read}}$ | 1 – 10 e⁻ | ↓ improves low-light SNR |
| Full well capacity | $\text{FWC}$ | 3k – 100k e⁻ | ↑ improves dynamic range |
| Exposure time | $t_{\text{exp}}$ | 1/10000 – 1 s | ↑ more electrons, but motion blur risk |
| ISO gain | $G$ | 100 – 25600 | ↑ amplifies signal AND noise |

9. Design Trade-off: Cutoff Ratio

A key figure of merit is the ratio of optical cutoff to sensor Nyquist:

$$\text{Cutoff Ratio} = \frac{f_c}{f_{\text{Nyquist}}} = \frac{2p}{\lambda \cdot (f/\#)}$$
  • Ratio $< 1$: Diffraction-limited. Optics blur the image before the sensor can resolve it. Pixels are "too small" for the aperture.
  • Ratio $\approx 2$–$3$: Well-matched system. Optics and sensor contribute roughly equally to MTF loss.
  • Ratio $> 4$: Oversampled. Pixels are larger than needed; resolution is wasted but SNR is excellent.

Design goal: Match $f_c \approx 2 \times f_{\text{Nyquist}}$ for optimal resolution/noise trade-off.

Principal Engineer Commentary: System Architecture & Design Review

This section provides the kind of cross-functional insight that emerges from years of debugging imaging pipelines, sitting in design reviews, and explaining to product managers why their "simple request" requires a complete system redesign. These are the patterns and failure modes that don't appear in textbooks.

🔴 Failure Diagnosis: "Why Do My Images Look Bad?"

When image quality complaints arrive, the first task is attribution—which subsystem is the limiting factor? Use this diagnostic flowchart:

Symptom → Root Cause Mapping

| Symptom | Primary Suspect | Verification | Common Misdiagnosis |
|---|---|---|---|
| Soft images, good lighting | Optics (aberrations or diffraction) | Check MTF_optics vs diffraction limit gap | "Sensor needs more pixels" |
| Soft images, low light only | ISP noise reduction | Disable NR, check if sharpness returns | "Lens is soft wide open" |
| Grainy fine detail | SNR floor at high frequencies | Check SNR vs frequency plot: below 20 dB? | "ISP sharpening too aggressive" |
| Edge halos / ringing | ISP oversharpening | MTF_ISP > 100% at mid frequencies | "Chromatic aberration" |
| Moiré patterns | Aliasing (MTF too high at Nyquist) | Check MTF > 10% at Nyquist | "Sensor defect" |
| Color fringing on edges | Lateral chromatic aberration | Compare MTF at 450 nm vs 650 nm | "Purple fringing is normal" |
| Corner softness | Field-dependent aberrations | Slide field height to 1.0, check RMS | "Vignetting" |
| Inconsistent sharpness frame-to-frame | Focus hunting / thermal drift | Not modeled here (mechanical issue) | "Sensor noise" |

Key Insight: The waterfall chart is your first stop. If optics loses 70% of contrast before reaching the sensor, the ISP team cannot fix it. This prevents months of wasted ISP tuning on a fundamentally optics-limited system.

🏗️ System Architecture Decisions

Every camera system embodies a set of trade-offs frozen at design time. Understanding these helps you navigate what's negotiable vs. what's physics.

The Fundamental Triangle: Resolution × Sensitivity × Cost

You get to pick two. Here's how the presets in this tool map to that trade-off:

  • Smartphone (1μm pixels): Chose resolution + cost. Sensitivity suffers—tiny pixels collect few photons. Compensates with computational photography (multi-frame stacking, AI denoising).
  • Mirrorless (3.9μm pixels): Chose resolution + sensitivity. Cost suffers—larger sensor die, more expensive optics to cover the image circle.
  • Machine Vision (5.9μm pixels): Chose sensitivity + cost. Resolution suffers, but in industrial inspection, you control lighting and distance, so this is often acceptable.

Pixel Size: The Decision That Echoes Forever

Once a sensor is fabbed, pixel pitch is locked. Everything else flows from it:

| Pixel Pitch | Nyquist | Typical FWC | Dynamic Range | Diffraction Limit (f/#) |
|---|---|---|---|---|
| 0.8 μm | 625 lp/mm | ~3,000 e⁻ | ~60 dB | f/1.4 already limited |
| 1.4 μm | 357 lp/mm | ~8,000 e⁻ | ~68 dB | f/2.0 |
| 3.0 μm | 167 lp/mm | ~25,000 e⁻ | ~76 dB | f/4.0 |
| 6.0 μm | 83 lp/mm | ~50,000 e⁻ | ~82 dB | f/8.0 |

Architecture Rule: If you're designing a system where f/2.8 is the fastest aperture, pixels smaller than ~2μm are wasting silicon on resolution the optics can't deliver. This is why "megapixel count" is often marketing fiction.

The ISP Paradox: Sharpening Creates Problems It Claims to Solve

Aggressive sharpening (MTF > 100%) produces visually "sharper" images by boosting edge contrast. But it:

  • Amplifies noise at high frequencies—the SNR floor gets worse
  • Creates halos around edges—the ringing visible in LSF overshoot
  • Defeats MTF measurement—production QA sees "good MTF" that's actually ISP artifact
  • Interacts badly with compression—JPEG artifacts concentrate at the artificial edges

The "Marketing" ISP tuning preset demonstrates this failure mode. It looks impressive in A/B demos but falls apart under scrutiny.

🗣️ Cross-Functional Communication Guide

Different stakeholders speak different languages. Here's a translation guide for design reviews:

Talking to Product Management

| They Say | They Mean | You Should Explain |
|---|---|---|
| "We need more megapixels" | Competitors have bigger numbers | Show SNR vs frequency: more pixels with the same sensor size = worse low-light |
| "Make it sharper" | Images look soft compared to iPhone | iPhone uses computational multi-frame capture; single-frame has physics limits |
| "Can't we just fix it in software?" | Hardware is expensive to change | Show the waterfall: if optics loses it, software can only fake it (and badly) |
| "The sensor spec says 12-bit" | Marketing said it's better | 12-bit ADC ≠ 12 stops DR; show the actual FWC and read-noise calculation |

Talking to Optical Engineers

| They Say | They Mean | You Should Ask |
|---|---|---|
| "Diffraction limited on-axis" | Center is fine, corners may not be | "What's the RMS at 0.7 field height?" |
| "Meets MTF spec" | At one frequency, one field point, one temperature | "Full field MTF at Nyquist/2, show me the variation" |
| "P-V is 0.5λ" | One bad ray, rest might be fine | "What's the RMS? P-V is misleading for image quality" |
| "Nominal design meets spec" | Before tolerancing and manufacturing | "What's the Monte Carlo yield at this spec?" |

Talking to ISP/Algorithm Engineers

| They Say | They Mean | You Should Clarify |
|---|---|---|
| "Our denoiser is state-of-the-art" | Good on benchmarks, maybe not your data | "At what SNR does it start hallucinating texture?" |
| "We can sharpen that back" | Boost high frequencies in processing | "That amplifies noise. Show me SNR after sharpening." |
| "The neural network handles it" | It's a black box that seems to work | "What's the failure mode? When does it break?" |
| "Just needs tuning" | 6 months of parameter tweaking ahead | "Can we fix this in optics/sensor instead?" |

📋 Design Review Checklist

Before signing off on a camera system design, verify these items. Each links to something you can check in this tool:

Optics Review

  • ☐ Cutoff ratio > 1.5: If diffraction cutoff is below Nyquist, you're wasting sensor resolution. Check status bar.
  • ☐ RMS < 0.1λ across field: Slide field height to 1.0. If RMS exceeds 0.15λ, corners will be noticeably soft.
  • ☐ Airy disk < 2× pixel pitch: If not, system is diffraction-limited regardless of aberrations. May be acceptable (it's physics, not design flaw).
  • ☐ EE95 < 3× pixel pitch: Energy should concentrate within a few pixels. Large EE95 = wasted light in adjacent pixels.

Sensor Review

  • ☐ Full well > 10,000 e⁻: Below this, dynamic range suffers. Check if use case requires HDR.
  • ☐ Read noise < 3 e⁻: For low-light applications. Scientific cameras can hit < 1 e⁻.
  • ☐ QE > 50%: Modern BSI sensors achieve 70-80%. Front-illuminated tops out ~50%.
  • ☐ SNR > 30 dB at target illuminance: Check SNR plot. Below 30 dB, noise is visible. Below 20 dB, detail is gone.

System Integration Review

  • ☐ MTF at Nyquist < 20%: Higher risks aliasing. Check for moiré in test charts.
  • ☐ No ISP MTF > 120%: Oversharpening creates halos. Marketing loves it; customers hate it in real photos.
  • ☐ Waterfall shows balanced loss: If one stage dominates (>60% loss), that's your optimization target.
  • ☐ Exposure headroom exists: At target scene illuminance, check exposure percentage. Should have room to stop down or speed up.

🎯 Optimization Strategies by Use Case

Different applications have different priorities. Here's where to spend your budget:

Smartphone (Constrained Volume, Variable Lighting)

  • Priority: Low-light SNR, computational photography compatibility
  • Accept: Diffraction-limited at f/1.8, smaller pixels
  • Key metric: SNR at 10 lux. If < 20 dB, night mode will struggle.
  • Watch out for: Tiny pixels + aggressive sharpening = "crunchy" texture in shadows

Machine Vision (Controlled Environment)

  • Priority: Consistent MTF, high frame rate, global shutter
  • Accept: Lower resolution, monochrome, fixed focus
  • Key metric: MTF at the spatial frequency of your smallest defect. Must exceed 50% for reliable detection.
  • Watch out for: Thermal drift in optics. Industrial environments get hot; MTF changes with temperature.

Security/Surveillance (24/7 Operation)

  • Priority: Low-light performance, wide dynamic range (day/night)
  • Accept: Lower resolution, rolling shutter artifacts
  • Key metric: SNR at 0.1 lux (moonlight). Must identify faces, not just detect motion.
  • Watch out for: IR cut filter switching creates focus shift. Day/night focus consistency is hard.

Medical/Scientific (Accuracy Critical)

  • Priority: Linear response, no ISP manipulation, calibrated MTF
  • Accept: Higher cost, specialized software
  • Key metric: Absolute MTF at specific frequencies (often specified by regulatory standards)
  • Watch out for: Any non-linear processing (gamma, tone mapping) corrupts quantitative data.

⚠️ Common Mistakes & Misconceptions

Mistake: "More Megapixels = Better Images"

Reality: Resolution is limited by the weakest link in the chain. A 100MP sensor behind a kit lens at f/8 resolves no more real detail than 24MP—the extra pixels just sample the same blur more finely. Use the waterfall to see where resolution is actually lost.

Mistake: "This Lens is Sharper Based on MTF Charts"

Reality: Published MTF charts often show only on-axis performance, at optimal aperture, with no sensor. Real system MTF includes the pixel aperture, which adds a sinc rolloff. A lens with 80% MTF at 50 lp/mm might yield only 40% system MTF after the sensor. Always evaluate system MTF.

Mistake: "f/1.4 is Always Better Than f/2.8"

Reality: Fast apertures collect more light (good for SNR) but have larger aberrations and shallower depth of field. At f/1.4, many lenses have RMS > 0.15λ, yielding worse MTF than the same lens at f/2.8 despite the brighter image. The "sweet spot" is typically 1-2 stops down from maximum aperture.

Mistake: "The ISP Can Fix Optical Problems"

Reality: The ISP can boost contrast at certain frequencies (sharpening), but it cannot recover information lost to blur or noise. Sharpening a blurry image creates fake edges. Denoising a noisy image removes real texture. The waterfall shows what's truly lost vs. what's recoverable.

Mistake: "Full Well Capacity = Dynamic Range"

Reality: Dynamic range is FWC / read noise, expressed in dB or stops. A sensor with 50,000 e⁻ FWC but 10 e⁻ read noise has 5000:1 DR (~74 dB, ~12 stops). A sensor with 10,000 e⁻ FWC but 1.5 e⁻ read noise has 6666:1 DR (~76 dB, ~13 stops). Lower FWC can yield higher DR with sufficiently low read noise.

Mistake: "Aliasing is Always Bad"

Reality: Aliasing (MTF > 0 at Nyquist) causes moiré on periodic patterns like fabric or fences. But an anti-aliasing filter that eliminates aliasing also reduces sharpness on all images. Many modern cameras omit the AA filter, accepting occasional moiré for generally sharper results. Know your use case: product photography of textiles needs AA; portrait photography doesn't.

📈 Reading the Plots: What Good vs. Bad Looks Like

MTF Plot Health Indicators

| Pattern | Interpretation | Action |
|---|---|---|
| Optics MTF hugs diffraction limit | Well-corrected optics | None needed; this is ideal |
| Large gap between optics and diffraction | Significant aberrations | Check Zernike coefficients; consider stopping down |
| Sensor MTF drops rapidly | Large pixels relative to optical resolution | May be intentional (SNR priority) or wasteful |
| ISP MTF > 100% | Sharpening boost active | Check for halos in LSF; consider reducing |
| System MTF > 10% at Nyquist | Aliasing risk | Test with fine patterns; consider AA filter |
| All curves converge below 50 lp/mm | System is not resolving fine detail | Evaluate whether resolution meets requirements |

SNR Plot Health Indicators

| Pattern | Interpretation | Action |
|---|---|---|
| Flat high SNR across frequencies | Photon-rich, well-exposed | Ideal for high-quality capture |
| SNR drops below 20 dB before Nyquist/2 | Noise dominates at mid frequencies | Need more light, larger pixels, or longer exposure |
| Very high SNR that drops suddenly | MTF cliff from sensor or optics | Resolution limited by blur, not noise |
| Low SNR even at DC | Severely underexposed | Check exposure; sensor may be read-noise limited |

Waterfall Health Indicators

| Pattern | Interpretation | Action |
|---|---|---|
| ~Equal drops at each stage | Well-balanced system | No single bottleneck; this is good |
| Optics dominates (>50% loss) | Optics-limited system | Improve lens or stop down; ISP can't help |
| Sensor dominates (>50% loss) | Oversampled; pixels too big | May be intentional for SNR; check if resolution matters |
| ISP shows gain (negative loss) | Sharpening compensating for blur | Artificial boost; check for artifacts |
| Final MTF < 20% | System is not resolving target frequency | Fundamental redesign needed for this spec |

🔧 Using This Tool in Practice

Scenario 1: Evaluating a New Sensor

You're considering a new sensor with smaller pixels. How do you decide?

  1. Enter the pixel pitch, FWC, read noise from the datasheet
  2. Set your lens's typical f-number
  3. Set your typical scene illuminance
  4. Check the cutoff ratio—is it > 1.5? If not, you're wasting pixels.
  5. Check SNR at Nyquist/2—is it > 25 dB? If not, noise will mask the extra resolution.
  6. Compare waterfall to current system—where does the new sensor help/hurt?

Scenario 2: Debugging Field Complaints

Customers report "soft images." How do you diagnose?

  1. Load approximate system parameters into the tool
  2. Check waterfall—which stage loses the most contrast?
  3. If optics: check aberration mode. Add typical Zernike values. Is RMS excessive?
  4. If sensor: check SNR plot. Are images underexposed in the field?
  5. If ISP: check for excessive NR. Toggle noise reduction to see impact.
  6. If MTF looks fine but images are soft: problem may be focus, motion blur, or compression (not modeled here)

Scenario 3: Setting ISP Parameters

How much sharpening is appropriate?

  1. Start with sharpening = 0, NR = 0 to see raw optical+sensor performance
  2. Check waterfall—how much contrast is lost before ISP?
  3. Add sharpening gradually. Watch MTF_ISP—stay below 130% to avoid obvious halos.
  4. Check LSF for overshoot. If peak exceeds baseline, halos will be visible.
  5. Add noise reduction if SNR < 30 dB. Watch how it trades sharpness for smoothness.
  6. Final check: does ISP "recover" more than was lost? If so, you're creating artifacts, not restoring detail.

Scenario 4: Spec'ing a New System

Product wants "4K resolution, good low-light." How do you turn that into specs?

  1. 4K = 3840 pixels horizontal, i.e. 1920 line pairs across the field of view. For a full-width FOV, calculate the required Nyquist.
  2. Example: 10 mm sensor width → pixel pitch = 10 mm / 3840 ≈ 2.6 μm → Nyquist = 1/(2p) ≈ 192 lp/mm
  3. Enter this pixel pitch. Check cutoff ratio—what f-number matches?
  4. "Good low-light" = SNR > 25 dB at 10 lux. Check SNR plot.
  5. If SNR insufficient: need larger pixels (sacrificing resolution) or faster lens (sacrificing DOF/cost)
  6. Iterate until waterfall shows balanced losses and SNR meets target. This is your spec.
System Performance status bar (example readout): Nyquist 167 lp/mm; Airy 3.8 μm; Cutoff Ratio 2.4; Electrons 1000 e⁻; SNR 42 dB; Exposure 50%