The parabola is not the optimal mirror shape for maximizing mirascope viewing angle.

Principle 1. Light travels in straight lines.

Principle 2. Angle in equals angle out.

Thesis

In 1969, a custodial worker at UC Santa Barbara was cleaning a storage closet filled with old military searchlight mirrors when he saw something strange: dust that seemed to float in the air and could not be wiped away. A physicist identified it as a real image, an optical effect in which light converges so precisely that an object appears to be present even when it is not. That accidental discovery led to the creation of the mirascope, and the parabolic shape of those surplus mirrors has been the basis of every version made since.

Over the following decades, researchers established that parabolas have a special mathematical property: any light ray leaving the focal point reflects perfectly parallel to the axis, and vice versa. The property is exact, and it makes the mirascope's image very sharp. But a subtle assumption went unexamined: because the parabola produces a perfect focus, it was taken to be the best mirror shape for the device. Nobody distinguished between making the focus as precise as possible and making the experience better for the viewer, and these are two different problems. Every parabolic mirascope ever built has a viewing angle of about 55 degrees from horizontal. That angle is a consequence of the parabolic shape, not a fundamental law of optics.

This project starts from a basic idea: the only physical rules the mirascope must obey are that light travels in straight lines and reflects at equal angles. The parabolic shape, the parallel ray paths between the mirrors, and the 55-degree viewing limit are all consequences of one particular geometric solution, not limits set by physics. A different mirror shape could send light along entirely different paths and still bring the rays together at the image point, even if no human designer would think to draw those paths. Such a shape might trade some of the parabola's perfect focus for a wider viewing zone, especially when the object is small and self-luminous and a slightly blurry image is an acceptable price for seeing it from more angles.

The tools needed to search for such a shape, evolutionary algorithms that can explore millions of freeform surface geometries, exact ray-tracing engines that can evaluate each one, and multi-objective scoring that balances image quality against viewing angle and brightness, did not exist when the mirascope was discovered, and they were not practical when it was last studied in depth in 2007. They exist now. This project applies them, starting from the two basic laws of reflection and assuming nothing about the mirror shape, to a question that appears never to have been asked: is the parabola really the best shape for a mirascope, or was it simply the shape that happened to be available?

Notation

Symbols used in the math and code.

Geometry

f
focal length of a parabolic mirror — the parabola is y = x² / (4f)
f1, f2
focal lengths of the bottom (M1) and top (M2) mirrors
dsep
vertex-to-vertex distance between the two mirrors
D
mirror diameter (full width across the rim)
a
aperture diameter — the hole in the top mirror that light exits through
x, y
coordinates on the mirror surface

Vectors

ñ
surface normal direction (unnormalized)
n
unit surface normal — used in the reflection formula
d
incoming ray direction (unit vector)
r
reflected ray direction

Ray parameterization

ox, oy
ray origin coordinates
dx, dy
x and y components of the ray direction d
t
parameter along the ray — distance from origin when d is unit length

Code-only symbols

s
mirror sign — +1 for the bottom mirror (M1, opens upward), −1 for the top mirror (M2, opens downward). Implements the y-flip noted in formula ①.
vy
y-coordinate of a mirror's vertex (its lowest or highest point on the optical axis)

Geometric Model

Three functions. All geometry derives from these.

Surface Normal of a Parabola

  • Mirror surface: y = x² / (4f)
  • Slope at x: dy/dx = x / (2f)
  • ñ = ( −x/2f , 1 )
  • n = ñ / |ñ| = ( −x/2f , 1 ) / √((x/2f)² + 1)

Top mirror (M2) flips the y-component sign to −1.
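A minimal sketch of this calculation, folding in the sign convention s from the notation section (the function name is illustrative, not the simulator's actual API):

```javascript
// Unit surface normal of the parabola y = x^2 / (4f).
// Unnormalized normal: (-x/(2f), 1); s = +1 for the bottom mirror M1,
// s = -1 for the top mirror M2 (the y-component flip noted above).
function surfaceNormal(x, f, s) {
  const nx = -x / (2 * f);
  const ny = s;
  const len = Math.hypot(nx, ny);
  return { x: nx / len, y: ny / len };
}
```

At the vertex the normal is purely vertical; at x = 2f the surface slope is 1, so the normal makes 45 degrees with the axis.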

Reflection Formula

  • r = d − 2(d·n)n

d = incoming ray direction
n = surface normal
r = reflected ray direction

Standard specular reflection. Confirmed: Wikipedia / optics literature.
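A sketch of the formula in code (hypothetical helper; assumes d and n are already unit vectors):

```javascript
// Specular reflection r = d - 2 (d·n) n.
// Note the formula is invariant under n -> -n, so either normal
// orientation gives the same reflected ray.
function reflect(d, n) {
  const dot = d.x * n.x + d.y * n.y;
  return { x: d.x - 2 * dot * n.x, y: d.y - 2 * dot * n.y };
}
```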

Ray–Parabola Intersection

  • x(t) = ox + t·dx
  • y(t) = oy + t·dy
  • Substitute into y = x² / (4f) → quadratic in t: dx²·t² + (2·ox·dx − 4f·dy)·t + (ox² − 4f·oy) = 0
  • Selection: smallest positive t

Smallest positive t is correct for a concave mirror traced from inside the bowl. Requires an epsilon guard (t > ε) on bounce rays to skip self-intersection at the surface just left.
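A sketch of the intersection routine under those rules, for the canonical bowl y = x²/(4f) at the origin (the real mirrors add a vertex offset and the s-flip; names are illustrative):

```javascript
// Smallest t > EPS where the ray (ox + t·dx, oy + t·dy) meets y = x^2/(4f).
// Substitution gives: dx^2·t^2 + (2·ox·dx - 4f·dy)·t + (ox^2 - 4f·oy) = 0.
// EPS skips self-intersection at the surface a bounce just left.
const EPS = 1e-9;
function intersectParabola(ox, oy, dx, dy, f) {
  const A = dx * dx;
  const B = 2 * ox * dx - 4 * f * dy;
  const C = ox * ox - 4 * f * oy;
  let roots;
  if (Math.abs(A) < 1e-15) {            // vertical ray: equation is linear
    if (Math.abs(B) < 1e-15) return null;
    roots = [-C / B];
  } else {
    const disc = B * B - 4 * A * C;
    if (disc < 0) return null;          // ray misses the parabola
    const sq = Math.sqrt(disc);
    roots = [(-B - sq) / (2 * A), (-B + sq) / (2 * A)];
  }
  const hits = roots.filter(t => t > EPS).sort((a, b) => a - b);
  return hits.length ? hits[0] : null;
}
```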

Simulator

Trace the rays. Watch the image form.

Live stat readout:

  • D (mirror diameter), mm
  • mirror depth, mm
  • real image height, mm
  • real image Ø, mm
  • magnification
  • viewing angle, degrees
  • spot RMS (red), mm
  • spot RMS (green), mm
  • spot RMS (blue), mm
  • rays traced
  • image-forming rays
  • total bounces

Changelog

Research log of changes and findings.

rewrite Spot cap derived from viewing distance, not "5% of object."

Replaced the arbitrary spot_cap = max(1 mm, 0.05 × Ø_obj) with a physics-grounded criterion: spot_cap = viewing_distance × eye_resolution, where eye angular resolution is 1 arcminute (≈ 0.000291 rad). Added a "viewing distance" input (default 500 mm; useful range 100–3000 mm). For typical desk viewing (50 cm): cap ≈ 145 µm. For museum/display (3 m): cap ≈ 870 µm. The optimizer now produces ONE device per (h_obj, Ø_obj, viewing_distance) — no more arbitrary tuning knob.

The 5%-of-object cap had no physical anchor; it was a number I chose because it produced reasonable device sizes. Different choices gave different "optima," so the optimizer's answer was conditional on a fudge factor I'd hardcoded. Eye-resolution at viewing distance is the actual physical criterion for "image looks sharp." Devices designed for closer viewing need lower spot RMS; devices for farther viewing tolerate more. The user specifies the application context (viewing distance), and the optimizer derives the criterion. Convergence to one answer per use case, with no tunable heuristics in the pipeline.
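The criterion is small enough to state in full. A sketch reproducing the numbers quoted above (helper name is hypothetical):

```javascript
// Spot-RMS cap from viewing distance and eye angular resolution.
// 1 arcminute = pi / (180 * 60) rad, about 2.909e-4 rad.
const EYE_RESOLUTION_RAD = Math.PI / (180 * 60);
function spotCap(viewingDistanceMm) {
  return viewingDistanceMm * EYE_RESOLUTION_RAD; // cap in mm
}
```

At 500 mm this gives roughly 0.145 mm (145 µm); at 3 m, roughly 0.87 mm, matching the desk and museum cases above.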

feat Cross-referenced phydemo's mirascope simulation.

Reviewed the open-source ray-optics simulator at phydemo.app/ray-optics/gallery/the-mirascope. Their JSON shows: (a) asymmetric mirascope geometry — different focal coefficients for M1 (5.5) and M2 (6.55), confirming asymmetric configurations are accepted in published educational software, not just our novel direction; (b) aperture ratio r_a ≈ 0.16, distinct from the Adhya-Noé toy's 0.45 and our optimizer's earlier 0.35 — three different valid operating points for three different objectives; (c) their cleaner-looking visualization uses a small object + narrow ray fan + fewer rays, which masks aberration that our wider-fan visualization correctly surfaces.

The asymmetric direction we've been exploring is endorsed by independent published software, removing one open question. The disagreement on aperture ratio is real but not a contradiction — it's evidence that "the right mirascope" depends entirely on what you're optimizing for, which is exactly the case for grounding the spot cap in physics (the entry above). Our visualization "knots" are honest geometric aberration, not noise — confirmed by side-by-side comparison.

fix Paraxial KAT now skips when grazing angle is outside paraxial domain.

At the user's r_a=0.35 / a=298 mm geometry, the smallest-bouncing ray angle (grazing) is ~29°. The Paraxial KAT compared this clearly-non-paraxial trace to a paraxial matrix prediction and reported FAIL with a message blaming the trace — but Cross-Check KAT showed bit-perfect agreement at 8.99e-14 mm, so the trace is fine. The "FAIL" was the test inappropriately measuring real geometric aberration as if it were a bug. Fixed by adding an a-priori domain check: if grazing angle > 25°, the test SKIPs with a clear explanation. The 25° threshold is principled (the relative error of sin θ ≈ θ grows quadratically, reaching roughly 2% by 20° and 3% by 25°); it's not tuned to make any specific case pass.

A test reporting "FAIL — real trace bug" against a trace that's bit-perfect under independent verification is a false-positive failure mode worse than no test at all. The Paraxial KAT has a real domain of validity that I'd been ignoring; making it explicit prevents misleading the user. Trace correctness in the high-grazing regime is still covered by Cross-Check KAT (which compares two unrelated implementations of the same physics) — the Paraxial KAT was always redundant for that purpose.

feat Self-expanding optimizer bounds + Paraxial KAT robustness.

Two improvements addressing problems exposed by the prior r_a truncation: (1) Optimizer now self-detects when the winner sits at any boundary (r_a == max(RA_VALUES), r1/r2 within ε of R_LO/R_HI, or d_sep at the ceiling) and re-runs with expanded bounds, up to 3 expansions. The readout reports "Bounds expanded N×" with what changed. (2) Paraxial KAT now retries with progressively wider grazing fans (1° → ~9° spread) before declaring N/A — fixes the silent N/A at high r_a configurations.

A bounded search that doesn't say it was bounded is dishonest about its conclusions. Self-expansion converts silent truncation into either a true optimum or a visible expansion log. The Paraxial KAT change closes a coverage gap: at r_a ≈ 0.35 the narrow grazing fan didn't produce stable convergence, and the test silently bailed; the validator should never silently produce N/A in the regime where the optimizer's actual answer lives.

fix Widened RA_VALUES — search box was truncating against published mirascope.

Optimizer's RA_VALUES previously capped at 0.22. Adhya & Noé (SPIE 2007, "A Complete Ray-trace Analysis of the Mirage Toy") report the actual published mirascope toy uses an aperture ratio of ~0.45 — more than 2× outside our former search ceiling. Extended set to [0.08, 0.10, 0.12, 0.15, 0.18, 0.22, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50].

The earlier 0.18 optimum was the best within a search that didn't reach where the textbook toy actually operates. Either 0.18 survives the wider search (real optimum, just different from the toy's because the objective is different) or it was a truncated answer (real optimum sits higher). Either outcome is informative; the previous one was uninformative because the search was bounded shorter than reality.

feat Viewing Angle KAT — checks the metric math, not the trace.

Adds a fifth validator targeted at the viewing-angle math (above the trace, below the optimizer): three internal-consistency checks. (1) For every escaped ray, atan2(vx, vy) must agree with asin(vx) to 1e-9° — catches non-unit exit directions or angle-convention bugs. (2) For the on-axis green LED, |min_angle + max_angle| must be under 1° — by symmetry of the geometry. (3) The optimizer's intersect_width / 2 must not exceed the simulator's max_half_angle stat — the window can't be wider than what any single ray actually reaches.

The simulator displays one viewing-angle number ("65°", max half-angle of any ray) and the optimizer displays a different one ("120° intersect viewing angle", angular cone all 3 LEDs share). Both are correct in their own way, but if the math layered on top of the trace had a bug, the two could silently drift apart and we'd never know which to trust. This validator catches that case and explicitly reconciles the two numbers — so when the page says 65° and 120°, they're verified internally consistent.
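Check (1) can be sketched as follows (illustrative names, not the validator's actual code):

```javascript
// For a unit exit direction (vx, vy) with vy > 0, atan2(vx, vy) and
// asin(vx) both give the angle from the optical axis. Disagreement flags
// a non-unit exit vector or an angle-convention bug in the metric layer.
const DEG = 180 / Math.PI;
function exitAngleConsistent(vx, vy, tolDeg = 1e-9) {
  const fromAtan2 = Math.atan2(vx, vy) * DEG;
  const fromAsin = Math.asin(vx) * DEG;
  return Math.abs(fromAtan2 - fromAsin) <= tolDeg;
}
```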

fix Cross-Check KAT was silently no-op — fixed.

First Cross-Check KAT release silently validated nothing. The main trace stores bounce points as [x, y] arrays; the independent trace stored them as {x, y} objects. The comparison code accessed .x on both, producing undefined − undefined = NaN at every comparison. NaN never satisfies d > max, so max stayed at its initial 0 and the test reported "PASS, max Δ = 0.00e+0 mm" — entirely vacuously. Fixed by reading both array and object shapes explicitly, and adding a hard guard: any non-finite diff forces FAIL.

This is exactly the confirmation-bias failure mode the user warned about — a test that appears green while doing zero work. The structural lesson: any comparison test must include at least one case it's known to fail on to prove it can fail. Adding that meta-test to the validator is the next step.
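A sketch of the fixed comparison, with the shape-tolerant reader and the hard non-finite guard (names are illustrative):

```javascript
// Reads bounce points in either shape: [x, y] array or {x, y} object.
function getXY(p) {
  return Array.isArray(p) ? { x: p[0], y: p[1] } : { x: p.x, y: p.y };
}
// Max per-bounce distance between two traces. Any non-finite diff
// (NaN from a shape mismatch, say) returns Infinity and forces FAIL,
// instead of silently leaving max at 0.
function maxBounceDelta(traceA, traceB) {
  let max = 0;
  for (let i = 0; i < traceA.length; i++) {
    const a = getXY(traceA[i]), b = getXY(traceB[i]);
    const d = Math.hypot(a.x - b.x, a.y - b.y);
    if (!Number.isFinite(d)) return Infinity;
    if (d > max) max = d;
  }
  return max;
}
```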

feat Cross-Check KAT — second independent trace implementation.

Adds a fourth validator: a complete second ray tracer that uses fundamentally different algorithms — numerical marching + bisection for ray-parabola intersection (vs the main trace's analytical quadratic root-finding), and rotation-matrix reflection derived from normal angle (vs the main trace's vector formula r = d − 2(d·n)n). Same physics, completely different code paths. Test runs 4 representative rays through both implementations and compares every bounce point; passes if max Δ < 1 µm.

Addresses the confirmation-bias question directly. Reversibility proves internal consistency; the two analytical KATs prove agreement with closed-form formulas — but all three could in principle pass for a trace that's wrong in a self-consistent way. Independent implementation cross-check is the standard scientific-computing technique for ruling that out: two unrelated code paths almost never share the same bug. Agreement at micron precision is the strongest software-only evidence the trace is correct.
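A sketch of the marching-plus-bisection intersection for the canonical bowl y = x²/(4f) (step size, iteration count, and names are illustrative, not the validator's actual values):

```javascript
// Independent intersection method: march along the ray until the sign of
// g(t) = y(t) - x(t)^2/(4f) flips, then bisect the bracket. Deliberately
// unlike the main trace's analytical quadratic root-finding.
function intersectByBisection(ox, oy, dx, dy, f, tMax = 1e4) {
  const g = t => (oy + t * dy) - Math.pow(ox + t * dx, 2) / (4 * f);
  const STEP = 0.5;
  let lo = 1e-9;                        // epsilon guard against self-hit
  for (let hi = lo + STEP; hi <= tMax; lo = hi, hi += STEP) {
    if (g(lo) * g(hi) > 0) continue;    // no sign change in [lo, hi]
    for (let i = 0; i < 60; i++) {      // bisect the bracket down
      const mid = (lo + hi) / 2;
      if (g(lo) * g(mid) <= 0) hi = mid; else lo = mid;
    }
    return (lo + hi) / 2;
  }
  return null;                          // no intersection within tMax
}
```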

fix Paraxial KAT N/A'd because mirascope geometry rejects paraxial rays.

Original Paraxial KAT used a ±2° fan from the LED. In mirascope geometry those rays travel up the optical axis and pass straight through the aperture without ever bouncing off M2 — the aperture is intrinsically larger than what a paraxial fan covers. Result: KAT always returned N/A. Replaced with a narrow 1° fan anchored just outside the aperture-grazing angle (the smallest angle from the LED that actually hits M2's mirror surface). Tolerance widened to 15% relative + 2 mm / 0.05× absolute floor to absorb the systematic bias from the non-zero central angle.

A KAT that never runs validates nothing. The grazing-angle fan is the correct paraxial-equivalent for mirascope geometry: it has small angular spread (so within-bundle aberration is small), it always bounces (so an image forms), and its centroid is close to but not identical to the pure paraxial prediction. The looser tolerance reflects this physical reality — bias is bounded but non-zero.

feat Paraxial KAT — closes the asymmetric-bug gap.

Adds a third validator: matrix-formalism paraxial prediction (image height + magnification for any f₁, f₂, d_sep, h_obj) compared against a paraxial-only trace (±2° fan). Runs always, including asymmetric configurations.

Reversibility proves the trace is internally consistent; the existing Formula KAT proves it agrees with textbook formula for the canonical case. That left a theoretical gap: a bug consistent under time-reversal that only manifests when f₁ ≠ f₂ would pass both existing tests. The Paraxial KAT closes that gap by exercising the trace under arbitrary asymmetry against an independent analytical oracle (2×2 ray transfer matrices). Three independent tests now bracket the trace from different physics directions.
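A sketch of the matrix oracle's core, unfolding the two mirror bounces into thin lenses (assumed conventions: state vector (y, theta), object at M1's vertex, aperture plane at M2's vertex; names are illustrative):

```javascript
// 2x2 ray-transfer matrices: rightmost factor acts first on the ray.
const mul = (A, B) => [
  [A[0][0] * B[0][0] + A[0][1] * B[1][0], A[0][0] * B[0][1] + A[0][1] * B[1][1]],
  [A[1][0] * B[0][0] + A[1][1] * B[1][0], A[1][0] * B[0][1] + A[1][1] * B[1][1]],
];
const T = d => [[1, d], [0, 1]];        // free propagation over distance d
const L = f => [[1, 0], [-1 / f, 1]];   // mirror bounce unfolded as a thin lens
// Object -> up to M2 -> bounce (f2) -> down to M1 -> bounce (f1) -> up to aperture.
function mirascopeSystem(f1, f2, dSep) {
  return [T(dSep), L(f1), T(dSep), L(f2), T(dSep)].reduce(mul);
}
```

In the classic confocal geometry, where each mirror's focus sits at the other's vertex (f1 = f2 = d_sep), the composed matrix has B = 0 and A = −1: the image sits exactly at the aperture plane with transverse magnification −1 in the meridional plane.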

fix Optimizer over-sized devices by ~8×.

Two changes. Picker tolerance was an absolute 1° from the max viewing angle; replaced with ≥ 90% of max. Spot-RMS cap was a flat 1 mm; replaced with max(1 mm, Ø_obj × 5%).

A 50 × 50 mm object had been forcing d_sep = 720 mm, D = 2 m. Both bounds were research-grade absolutes that ignored object scale. The new picker reads the natural knee of the angle-vs-scale curve; the new spot cap matches the optical-engineering rule of thumb that "sharp" is RMS relative to image size. Same object now optimizes at d_sep ≈ 314 mm, D ≈ 850 mm with a wider intersect viewing angle.

feat d_sep is now a search variable.

Two-level search: outer loop scans 7 d_sep values geometrically spaced from a physical floor up to ~12× that floor; inner loop runs the existing 15 × 15 × 6 ratio grid + Nelder-Mead at each scale. Output reports a scale curve d_sep → angle showing how the trade saturates with device size.

User specifies only the object (h_obj, Ø_obj); the optimizer should find the entire device, including its size. Holding d_sep fixed at the user's slider value broke that — the optimizer would refuse to find a working device for objects too large for the chosen scale, even though a larger device would have worked.
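The outer-loop spacing can be sketched as follows (the floor value and helper name are illustrative):

```javascript
// n geometrically spaced d_sep values from a physical floor to ~12x the floor,
// i.e. a constant ratio between consecutive scan points.
function dSepScan(floor, top = 12 * floor, n = 7) {
  const ratio = Math.pow(top / floor, 1 / (n - 1));
  return Array.from({ length: n }, (_, i) => floor * Math.pow(ratio, i));
}
```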

rewrite Dimensionless ratios + intersect viewing angle.

Search variables changed from absolute mm (f₁, f₂, d_sep, a) to dimensionless ratios (r₁, r₂, r_a) = (f₁/d_sep, f₂/d_sep, a/D). Objective changed from "max half-angle of any single ray" to the angular cone in which all three LED images are simultaneously visible (intersection of per-LED escape windows). Hard constraints — image at aperture plane, spot RMS < 1 mm, ≥ 50% rays escape — replaced soft penalties.

Single-ray max angle had discretization noise at the same scale as the optimization gain we were searching for, and absolute parameters made the answer scale-dependent. Ratios make the answer scale-invariant; intersect-width is a proper viewing-cone metric used in optical engineering; hard constraints stop the optimizer from "cheating" by widening the aperture to escape more rays.

feat Optimize Shape button — first version.

Adds a Nelder-Mead simplex over (f₁, f₂, d_sep, a) that holds the user's object fixed and searches for the geometry that maximizes viewing angle. Reuses traceRay / findConvergence in a DOM-free evaluator so the optimizer and the displayed simulator share physics.

Establishes the inverse-design pattern: user defines what they want to image, sim works backward to the device. First implementation had the wrong objective and a too-narrow search space — superseded by the rewrite below — but proved the architecture: the trace engine and the optimizer can share one source of truth.