
  • Closure Physics

    Internal narratability as a constraint on physical law


    Abstract

    Why do the laws of physics look simultaneously rigid and contingent? This paper proposes that much of physical law is neither arbitrary nor metaphysically necessary, but conditionally forced by the requirement that a universe be knowable from within. We introduce a hierarchy of closure constraints—requirements for internal narratability by localized agents with records—and argue that imposing these constraints collapses the space of admissible physical theories. Many principles often treated as contingent (quantum structure, no-cloning, exclusion, finite signal speed) emerge as closure conditions rather than mechanisms. A formal research program is outlined, centered on proving that record objectivity plus no-signalling collapses generalized probabilistic theories to quantum mechanics.


    1. The Central Claim

    Physics does not discover arbitrary laws, nor does it uncover metaphysical necessities.
    It discovers closure conditions: constraints that must hold if a universe is to support internal observers capable of forming records, comparing observations, and building shared models of their world.

    This yields conditional necessity:

    • If only mathematical consistency is required, almost any structure is allowed.
    • If internal narratability is required, the space of viable theories collapses sharply.
    • From the internal perspective, surviving laws appear rigid and unavoidable.

    This is not teleology and not strong anthropics. The claim is epistemic:

    Only universes whose grammar supports internal modeling can be described by physics conducted from within.

    The framework does not assert that only inhabitable universes exist. It asserts that only inhabitable grammars can ever be known from the inside.


    2. Grammatical Stratification: The Closure Stack

    We define a hierarchy of closure constraints. Each layer eliminates large classes of otherwise consistent theories.


    Layer 0 — External Narratability

    L0: Consistency and Persistence

    There exist well-defined states and state transitions with nontrivial invariant structure over time.

    This includes block universes, deterministic automata, and globally constrained but epistemically sterile systems. Most consistent mathematics lives here.


    Layer 1 — Internal Narratability

    L1: Subsystem Factorization

    There exist subsystems whose effective state spaces approximately factor and remain autonomous over timescales long compared to their internal dynamics.

    This introduces effective locality, objects, and separable agents. Purely global-constraint worlds fail here.


    Layer 2 — Shared Records

    L2: Record Objectivity (Intervention-Stable, No-Conspiracy)

    Definition (Record)

    A record is a classical variable R encoded in a localized subsystem such that:

    1. Local generation
      R is produced by a localized interaction between an apparatus A and a target system S.
    2. Repeatable accessibility
      Multiple agents can later read R (directly or via independent environmental fragments) without disturbing its value, up to arbitrarily small error.
    3. Intervention stability
      For spacelike-separated regions, the marginal statistics of R are invariant under changes of measurement settings chosen in those regions, except via allowed causal influence.
    4. Robustness (no fine-tuning / no conspiracy)
      Conditions (2)–(3) hold on an open set of microscopic states and parameters; they do not require measure-zero coordination of hidden variables or global pre-arrangement tied to agent choices.

    Layer 2 encodes the minimal requirement for public facts. It is not a statement about equilibrium correlations or thermodynamic typicality, but about measurement-generated classical data in multi-agent settings.


    Layer 3 — No-Signalling Locality

    L3: Bounded Influence

    Operational signalling between subsystems is bounded by a finite propagation constraint. Correlations may exist, but cannot be used for controllable superluminal communication.

    This makes the existence of a speed limit grammatical (its numerical value is contingent) and rules out cloning-like operations when combined with L2.


    Layer 4 — Stable Complexity

    L4: Reusable Structure and Dissipation

    There exist bound states and error-correcting architectures supporting:

    • long-lived information storage,
    • reusable components,
    • scalable computation,
    • robustness under generic perturbations with finite resources.

    Records are necessarily low-entropy structures; thus L4 implies dissipation and a thermodynamic arrow of time. Exclusion-like rigidity and a stable vacuum are forced at this layer.


    3. The Bootstrap Clarified

    Universes do not transition from “no observer grammar” to “observer grammar.”

    Rather:

    • A grammar either supports internal narratability in principle or it does not.
    • Early epochs may instantiate no observers, but the closure constraints are already present.
    • Observers do not create laws; they make pre-existing closure constraints operationally visible.

    This is logical filtration over possible grammars, not temporal selection.


    4. The Central Tension: Layer 2 vs Classical Local Theories

    Classical stochastic theories can satisfy L1 and can produce stable macroscopic records in equilibrium regimes. Difficulties arise when one simultaneously demands:

    • Bell-violating correlations,
    • freely choosable local measurement settings,
    • no superluminal signalling (L3),
    • and robust record objectivity (L2).

    In classical local hidden-variable models, Bell-violating correlations require either:

    • explicit nonlocal influence (violating L3), or
    • superdeterministic/global coordination of hidden variables with future measurement settings, or
    • contextual dependence of records on remote interventions.

    The latter two violate the robustness and intervention-stability clauses of L2. This motivates the conjecture that classical theories cannot simultaneously satisfy L2 and L3 in Bell-violating regimes without fine-tuning.

    Quantum theory appears to occupy the unique middle ground: non-classical correlations, no signalling, and stable decohered records.


    5. Project 2: Collapse of GPT Space Under Record Objectivity

    Framework

    We work within generalized probabilistic theories (GPTs) admitting convex state spaces, local measurements, multipartite composition, and operational no-signalling.

    Axioms

    Assume a GPT satisfies:

    • (L1) Subsystem factorization
    • (L2) Record objectivity
    • (L3) No-signalling locality
    • (C1) Compositional sufficiency (enough reversible transformations to represent local interventions)
    • (C2) Informational closure (e.g. local tomography or a close analogue)

    Conjecture A — Quantum Minimality (Formal)

    Any GPT satisfying (L1–L3, C1–C2) is either:

    1. classical (noncontextual), or
    2. operationally equivalent to finite-dimensional quantum theory (or a strict subtheory).

    Classical theories fail (L2) when required to reproduce Bell-violating correlations under free local interventions without fine-tuning. Super-quantum (PR-box-type) theories fail robustness, compositional stability, or record objectivity.

    If proven, this result would show that Hilbert-space quantum mechanics is forced by internal narratability, not selected by aesthetic or metaphysical preference.
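    The gap Conjecture A targets can be made concrete with the CHSH correlator. The following sketch is illustrative, not part of the paper's formalism: it uses the textbook singlet correlation E(a,b) = -cos(a-b) and the standard PR-box correlations to display the three regimes the axioms must separate — classical (bound 2), quantum (Tsirelson bound 2√2), and super-quantum (algebraic bound 4, still no-signalling).

```python
import itertools
import math

def chsh(E):
    """CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')."""
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

# Classical local-deterministic strategies: enumerate all 16 outcome assignments.
classical = max(abs(chsh(lambda x, y, A=A, B=B: A[x] * B[y]))
                for A in itertools.product([-1, 1], repeat=2)
                for B in itertools.product([-1, 1], repeat=2))

# Quantum singlet, E(a,b) = -cos(a-b), at Tsirelson-optimal measurement angles.
a, b = [0.0, math.pi / 2], [math.pi / 4, -math.pi / 4]
quantum = abs(chsh(lambda x, y: -math.cos(a[x] - b[y])))

# PR box: perfectly correlated except on the (a',b') pair; still no-signalling.
pr = abs(chsh(lambda x, y: -1.0 if (x, y) == (1, 1) else 1.0))

print(classical, quantum, pr)  # 2, ~2.828 (= 2*sqrt(2)), 4
```

    Conjecture A amounts to the claim that the closure axioms pin the admissible bound at 2√2: anything below reduces to the classical case, anything above fails record objectivity or compositional stability.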


    6. Dimensionality and Topological Persistence

    Dimensionality is neither pure grammar nor free parameter. L4 suggests an additional requirement:

    Topological persistence: the theory must admit stable, localized, topologically nontrivial structures usable for scalable encoding.

    Only three spatial dimensions robustly support knots, links, long-lived bound states, and dissipation simultaneously. Higher-dimensional theories may exist fundamentally but must contain an effective 3+1-dimensional sector to satisfy L1–L4.

    This is a robustness claim, not a uniqueness theorem.


    7. Spin–Statistics and Rigidity

    Spin–statistics, usually derived within relativistic QFT, may be reframed as a closure condition:

    If identical excitations could aggregate without exclusion or coherence rules, records would delocalize or collapse under composition.

    Conjecture D: relativistic causality plus durable, intervention-stable records forces a spin–statistics–type connection. Violations destabilize locality or complexity.


    8. Residual Freedom

    Closure constraints strongly fix structure but leave parameters contingent:

    • gauge groups,
    • coupling constants,
    • masses,
    • symmetry breaking patterns,
    • initial conditions.

    The framework aims to explain why there are constraints, not to predict numerology.


    9. What This Framework Does—and Does Not—Claim

    It does not claim:

    • that only inhabitable universes exist,
    • that laws are metaphysically necessary,
    • that the Standard Model is uniquely determined.

    It does claim:

    • that most consistent grammars are epistemically sterile,
    • that internal narratability imposes severe, non-anthropic constraints,
    • that many “fundamental principles” are closure conditions rather than mechanisms.

    This is stronger than weak anthropics, weaker than metaphysical necessity.


    10. Conclusion

    From the outside, laws look contingent. From the inside, they look unavoidable.
    Closure physics explains why both impressions are correct.

    If Conjecture A is proven, quantum structure ceases to be mysterious: it becomes the minimal grammar under which a universe can contain agents who know they exist.

    The only question left untouched is the genuinely metaphysical one:

    Why does anything exist at all, rather than nothing?

    That may lie beyond physics. But once existence is granted, the demand that reality be self-consistently knowable from within appears to fix far more of physics than is usually acknowledged.

  • From Vanes to Tables

    How Decoherence Makes Quantum Geometry Unreadable — and Why Tennis Balls Work

    We are told—correctly—that the most fundamental description of nature is not solid objects but quantum fields, phases, and symmetries. Particles are excitations, not things. Potentials matter more than forces. Paths interfere. Orientation can be redundant.

    And yet:
    there is a table in front of me.

    The table is rigid. Localized. Persistent. It does not flicker between alternatives. It does not feel like geometry or interference. It feels classical.

    So the productive question is not “where did the geometry go?”
    It is:

    When, and why, did the geometry stop being readable?

    This article is about that transition—not as a philosophical mystery, but as a sequence of physical steps. The central concept is decoherence, used carefully: as a mechanism with a defined scope, not as a magic word that explains everything.


    Mario world (defined once, properly)

    Gauge structure is geometric rather than mechanical, so to keep that geometry visible I’ll use a deliberately spatial metaphor. Every element maps directly to standard physics.

    Mario = a quantum system we track (an electron, an atom, a molecule, or a collective degree of freedom such as the center of mass of a solid).

    Belt buckle angle = the internal quantum phase of the state (e.g. the U(1) phase of a charged particle).

    Weather vanes = a gauge connection (the vector potential A_\mu), which tells us how phases compare at neighboring points.

    Loops of vanes = holonomy / Wilson loops: the net phase accumulated around a closed path.

    Flags = background fields that lock a direction in internal space (Higgs-type symmetry breaking). These are not required for electromagnetism.

    With just vanes and buckles, Mario world already reproduces electromagnetism. With more vanes—and sometimes flags—it reproduces the Standard Model.
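    The loops-of-vanes picture can be checked numerically. A minimal toy sketch (a single square plaquette with U(1) link angles; the setup and names are illustrative assumptions, not from the article): gauge freedom lets you re-zero every site's buckle independently, yet the net rotation around a closed loop of vanes is unchanged — only loops carry invariant information.

```python
import cmath
import random

random.seed(0)

# Vane angles on the directed links of a square plaquette: 0 -> 1 -> 2 -> 3 -> 0.
theta = [random.uniform(0, 2 * cmath.pi) for _ in range(4)]

def wilson_loop(angles):
    """Net buckle rotation around the closed loop: product of link phases."""
    w = 1 + 0j
    for t in angles:
        w *= cmath.exp(1j * t)
    return w

# A gauge transformation re-zeroes each site's buckle by alpha[site];
# link i (from site i to site i+1) shifts by alpha[end] - alpha[start].
alpha = [random.uniform(0, 2 * cmath.pi) for _ in range(4)]
theta_gauged = [theta[i] + alpha[(i + 1) % 4] - alpha[i] for i in range(4)]

w0 = wilson_loop(theta)
w1 = wilson_loop(theta_gauged)
print(abs(w0 - w1))  # ~0: the loop phase is gauge invariant
```

    The shifts telescope around the closed path, which is exactly why holonomies, not individual vanes, are the physical content of a gauge field.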


    A concrete example: an electron and a magnetic field

    Consider an electron moving through a region threaded by magnetic flux, with the magnetic field itself vanishing everywhere along the electron's available paths.

    Mario is the electron.
    The belt buckle angle is the phase of the wavefunction:

    \psi \rightarrow e^{i\theta}\psi

    The weather vanes encode the electromagnetic vector potential A_\mu.

    As Mario moves, his buckle rotates according to the line integral of A_\mu along his path. If he takes two different paths that form a loop, the relative phase is:

    \Delta\theta = \frac{q}{\hbar}\oint A_\mu\,dx^\mu

    This phase difference is observable, even though no local force acts along the paths. This is the Aharonov–Bohm effect, and it is precisely why gauge potentials are physically real rather than mere mathematical conveniences.

    Global phase does not matter—but relative phase between alternatives does. Mario world keeps that distinction visible.

    This is electromagnetism in its pure gauge-theoretic form: vanes and buckles, no flags.
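    A quick numerical sanity check on the loop formula (CODATA constant values; the flux choice is illustrative): by Stokes' theorem the line integral equals the enclosed flux Φ, so the relative phase is Δθ = qΦ/ħ, and a flux of h/e through the loop shifts the electron's phase by exactly 2π, returning the interference pattern to itself.

```python
import math

e    = 1.602176634e-19    # electron charge (C)
hbar = 1.054571817e-34    # reduced Planck constant (J s)
h    = 2 * math.pi * hbar

def ab_phase(q, flux):
    """Aharonov-Bohm relative phase: Delta theta = (q / hbar) * enclosed flux."""
    return q * flux / hbar

# A flux of h/e through the loop gives a full 2*pi phase shift.
dtheta = ab_phase(e, h / e)
print(dtheta / (2 * math.pi))  # ~1.0
```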


    Two transitions that are often confused

    To understand how Mario becomes a “tennis ball of numbers,” we must separate two ideas that are frequently blurred.

    1. Classical limit (stationary phase)

    When the action S is large compared to \hbar, the path integral is dominated by stationary-phase paths. Expectation values follow classical equations of motion.

    This explains why heavy objects move classically on average.

    It does not explain why macroscopic superpositions are unobservable.
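    The size of that ratio is easy to estimate. A rough order-of-magnitude sketch (the 58 g, 20 m/s, 1 s figures are illustrative assumptions, not data from this article):

```python
# Kinetic action of a tennis-ball-scale object versus hbar.
hbar = 1.054571817e-34   # reduced Planck constant (J s)

m, v, t = 0.058, 20.0, 1.0        # mass (kg), speed (m/s), duration (s)
action = 0.5 * m * v**2 * t       # action along the classical path, ~12 J s
print(action / hbar)              # ~1e35: phases of nearby paths wildly cancel
```

    With S/ħ of order 10^35, any path that deviates appreciably from the classical one contributes a phase that oscillates astronomically fast, and the stationary-phase path dominates utterly.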

    2. Decoherence (entanglement + tracing out)

    Decoherence occurs when Mario becomes entangled with degrees of freedom we do not track—photons, phonons, air molecules, internal vibrations.

    The combined state takes the form:

    |\Psi\rangle = \sum_i c_i\,|{\rm Mario}_i\rangle \otimes |{\rm Environment}_i\rangle

    If we describe only Mario and trace out the environment, the reduced density matrix rapidly loses its off-diagonal (interference) terms.

    Decoherence explains:

    • why certain alternatives stop interfering
    • why a preferred classical basis (usually position-like) becomes stable

    These two transitions often occur together in large systems—but they are not the same thing.
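    The tracing-out step can be made explicit in the smallest possible case: one system qubit entangled with one environment qubit, where the overlap of the environment's "which-path" records controls how much coherence survives. A minimal sketch (the two-qubit model and the α parameter are illustrative assumptions):

```python
import numpy as np

def reduced_system(alpha):
    """Entangle a qubit with an environment qubit whose record states overlap
    by cos(alpha), then trace out the environment and return the system's
    reduced density matrix."""
    e0 = np.array([1.0, 0.0])
    e1 = np.array([np.cos(alpha), np.sin(alpha)])   # <e0|e1> = cos(alpha)
    psi = (np.kron([1.0, 0.0], e0) + np.kron([0.0, 1.0], e1)) / np.sqrt(2)
    rho = np.outer(psi, psi.conj())
    # Partial trace over the environment (second factor of the tensor product).
    return rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)

coh_none = abs(reduced_system(0.0)[0, 1])        # identical env states: 0.5
coh_full = abs(reduced_system(np.pi / 2)[0, 1])  # orthogonal env states: 0.0
print(coh_none, coh_full)
```

    The off-diagonal element of the reduced state is cos(α)/2: as the environment's records become orthogonal (perfectly distinguishing the alternatives), the system's interference terms vanish, exactly the suppression described above.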


    Decoherence: one mechanism, many accelerants

    It is clearest to say this directly:

    Environmental decoherence is the mechanism.
    Size, mass, and many-body complexity are reasons it happens extremely fast.

    Large action, many constituents, and environmental coupling are not independent “routes” to classicality. They are overlapping physical contexts in which decoherence is effectively instantaneous and, for all practical purposes, irreversible—even though recoherence is not forbidden in principle.

    The common endpoint is:

    Relative phase between macroscopically distinct alternatives becomes unrecoverable for any realistic measurement.

    At that point, phase-sensitive descriptions stop distinguishing observable outcomes.
    The geometry has not vanished—it has become unreadable.


    How “definite” is “definite”?

    Decoherence does not produce perfectly sharp classical states. It produces pointer states: states that are stable under continual environmental monitoring.

    For macroscopic objects, these are narrow wavepackets in position and orientation space. Their widths are set by thermal motion, scattering rates, and mass.

    For a table, those widths are fantastically small—many orders of magnitude below anything we can probe.

    So “definite” here means:

    stable under decoherence, not mathematically exact.
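    How small is "fantastically small"? One crude proxy for the quantum scale of a pointer state is the thermal de Broglie wavelength (the actual widths depend on scattering rates and monitoring details; the 58 g, 300 K figures are illustrative assumptions):

```python
import math

# Thermal de Broglie wavelength of a tennis-ball-mass object at room temperature.
h   = 6.62607015e-34    # Planck constant (J s)
k_B = 1.380649e-23      # Boltzmann constant (J/K)

m, T = 0.058, 300.0     # mass (kg), temperature (K)
lam = h / math.sqrt(2 * math.pi * m * k_B * T)
print(lam)  # ~1.7e-23 m: far below any conceivable position measurement
```

    Twenty-odd orders of magnitude below the size of an atom: "definite" at this scale is indistinguishable from exact for every realistic probe.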


    The table (without cheating)

    A table does not come from decoherence alone.

    Two distinct pieces of physics are involved.

    1. Why matter forms a rigid table at all

    This is condensed-matter physics:

    • electromagnetic bonding
    • Pauli exclusion (providing enormous resistance to compression)
    • lattice formation
    • collective modes (phonons)
    • elastic response

    This explains rigidity, solidity, and structural stability.

    2. Why the table looks classical

    This is decoherence:

    • superpositions of “table here” and “table there” decohere almost instantly
    • phase information disperses into internal and environmental degrees of freedom
    • only coarse, robust variables survive

    So the correct statement is:

    A table is a stable quantum phase of matter whose phase geometry still exists but has become dynamically unreadable at macroscopic scales.

    Condensed matter gives you the table.
    Decoherence gives you the table as a classical object.


    The return of the tennis ball

    At this point it may feel as if the story has drifted away from the physicist’s most familiar move: “just treat it as a particle with numbers.”

    It hasn’t.
    This is exactly where that move becomes legitimate.

    Once decoherence has rendered phase geometry unreadable for the observables we can actually measure, the full quantum description carries no additional accessible predictive power. At that point, the system’s state can be replaced—without loss—by a small bundle of classical variables:

    (x(t),\,p(t),\,m,\,q,\,\sigma_x,\,\sigma_p)

    The widths \sigma_x and \sigma_p are finite but stable, set by environmental monitoring and internal dynamics, and utterly negligible at human scales.

    This replacement is the tennis ball of numbers.

    It is not a claim that the underlying quantum geometry has disappeared, nor that the quantum description is false. It is a claim about epistemic compression: when phase-sensitive distinctions no longer affect observable outcomes, the optimal description collapses to conserved quantities, trajectories, and probabilities.

    The tennis ball is what a quantum system looks like once geometry stops buying you predictive power.


    What decoherence explains—and what it doesn’t

    Decoherence:

    • explains suppression of macroscopic interference
    • explains why classical variables are stable
    • explains basis selection

    It does not, by itself, explain why one specific outcome occurs rather than another.

    Different interpretations respond differently:

    • Everettian views say all outcomes occur in decohered branches.
    • Collapse-based views say decoherence prepares the stage, but collapse is additional.

    This article does not choose between them. It doesn’t need to.


    Conclusion

    Classical objects are stable quantum systems whose phase geometry still exists but has become dynamically unreadable, leaving only a small set of robust variables worth tracking.

    Mario world doesn’t replace the mathematics.
    It reveals when geometry matters—and when it stops earning its keep.

    The existence of tables is contingent on the physics of matter: bonding, exclusion, and the conditions under which they operate. But given a stable macroscopic structure, decoherence makes its classical appearance overwhelmingly robust under ordinary conditions.

    Once those conditions are met, there is no further mystery about why the table looks classical.

    That part is not philosophical.
    It is dynamical.