Internal narratability as a constraint on physical law
Abstract
Why do the laws of physics look simultaneously rigid and contingent? This paper proposes that much of physical law is neither arbitrary nor metaphysically necessary, but conditionally forced by the requirement that a universe be knowable from within. We introduce a hierarchy of closure constraints—requirements for internal narratability by localized agents with records—and argue that imposing these constraints collapses the space of admissible physical theories. Many principles often treated as contingent (quantum structure, no-cloning, exclusion, finite signal speed) emerge as closure conditions rather than mechanisms. A formal research program is outlined, centered on proving that record objectivity plus no-signalling collapses generalized probabilistic theories to quantum mechanics.
1. The Central Claim
Physics does not discover arbitrary laws, nor does it uncover metaphysical necessities.
It discovers closure conditions: constraints that must hold if a universe is to support internal observers capable of forming records, comparing observations, and building shared models of their world.
This yields conditional necessity:
- If only mathematical consistency is required, almost any structure is allowed.
- If internal narratability is required, the space of viable theories collapses sharply.
- From the internal perspective, surviving laws appear rigid and unavoidable.
This is not teleology and not strong anthropics. The claim is epistemic:
Only universes whose grammar supports internal modeling can be described by physics conducted from within.
The framework does not assert that only inhabitable universes exist. It asserts that only inhabitable grammars can ever be known from the inside.
2. Grammatical Stratification: The Closure Stack
We define a hierarchy of closure constraints. Each layer eliminates large classes of otherwise consistent theories.
Layer 0 — External Narratability
L0: Consistency and Persistence
There exist well-defined states and state transitions with nontrivial invariant structure over time.
This includes block universes, deterministic automata, and globally constrained but epistemically sterile systems. Most consistent mathematics lives here.
Layer 1 — Internal Narratability
L1: Subsystem Factorization
There exist subsystems whose effective state spaces approximately factor and remain autonomous over timescales long compared to their internal dynamics.
This introduces effective locality, objects, and separable agents. Purely global-constraint worlds fail here.
Layer 2 — Shared Records
L2: Record Objectivity (Intervention-Stable, No-Conspiracy)
Definition (Record)
A record is a classical variable R encoded in a localized subsystem such that:
1. Local generation: R is produced by a localized interaction between an apparatus A and a target system S.
2. Repeatable accessibility: Multiple agents can later read R (directly or via independent environmental fragments) without disturbing its value, up to arbitrarily small error.
3. Intervention stability: For spacelike-separated regions, the marginal statistics of R are invariant under changes of measurement settings chosen in those regions, except via allowed causal influence.
4. Robustness (no fine-tuning / no conspiracy): Conditions (2)–(3) hold on an open set of microscopic states and parameters; they do not require measure-zero coordination of hidden variables or global pre-arrangement tied to agent choices.
Layer 2 encodes the minimal requirement for public facts. It is not a statement about equilibrium correlations or thermodynamic typicality, but about measurement-generated classical data in multi-agent settings.
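The repeatable-accessibility clause can be made concrete with a toy classical model (an illustrative sketch only, not part of the formal framework): a record bit is redundantly copied into independent environment fragments, and multiple agents reading disjoint fragments recover the same value without disturbing it.

```python
import random

def generate_record(bit, n_fragments):
    """Redundantly encode a record bit into n independent environment fragments."""
    return [bit] * n_fragments

def read_fragment(env, index):
    """An agent reads one fragment; a classical read does not disturb it."""
    return env[index]

random.seed(0)
record_bit = random.randint(0, 1)            # outcome of a localized A-S interaction
environment = generate_record(record_bit, 10)

# Independent agents read disjoint fragments and compare notes.
readings = [read_fragment(environment, i) for i in range(10)]
all_agree = len(set(readings)) == 1
print(all_agree, readings[0] == record_bit)
```

In the quantum setting the analogous structure is redundant environmental encoding (decoherence); the toy model only shows the classical target behaviour that clause (2) demands.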
Layer 3 — No-Signalling Locality
L3: Bounded Influence
Operational signalling between subsystems is bounded by a finite propagation constraint. Correlations may exist, but cannot be used for controllable superluminal communication.
This makes the existence of a speed limit grammatical (its numerical value is contingent) and rules out cloning-like operations when combined with L2.
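The L3 distinction between correlation and signalling can be checked directly. The sketch below uses the standard PR-box distribution (illustrative code): its joint correlations are super-quantum, yet one party's marginal statistics are independent of the other party's setting, so it is operationally no-signalling.

```python
from itertools import product

def pr_box(a, b, x, y):
    """PR-box joint distribution: probability 1/2 whenever a XOR b = x AND y."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def marginal_alice(a, x, y):
    """Alice's marginal outcome probability given both parties' settings."""
    return sum(pr_box(a, b, x, y) for b in (0, 1))

# No-signalling: Alice's marginal must not depend on Bob's setting y.
no_signalling = all(
    marginal_alice(a, x, 0) == marginal_alice(a, x, 1)
    for a, x in product((0, 1), repeat=2)
)
print(no_signalling)  # True: maximal correlations, yet no usable channel
```

That the PR box passes this check while (on the present conjecture) failing the robustness clauses of L2 is exactly why L3 alone does not single out quantum theory.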
Layer 4 — Stable Complexity
L4: Reusable Structure and Dissipation
There exist bound states and error-correcting architectures supporting:
- long-lived information storage,
- reusable components,
- scalable computation,
- robustness under generic perturbations with finite resources.
Records are necessarily low-entropy structures; thus L4 implies dissipation and a thermodynamic arrow of time. Exclusion-like rigidity and a stable vacuum are forced at this layer.
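The claim that record-keeping implies dissipation can be quantified by the Landauer bound: erasing one bit of recorded information dissipates at least kT ln 2 of heat. A quick numeric sketch (room temperature is an illustrative choice):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K (exact in SI since 2019)
T = 300.0           # temperature in kelvin (illustrative room-temperature value)

# Minimum heat dissipated per erased bit: k_B * T * ln(2).
landauer_limit = k_B * T * math.log(2)
print(f"{landauer_limit:.3e} J per bit")  # ~2.87e-21 J
```

However small per bit, the bound is strictly positive, which is all the argument needs: maintaining and resetting records cannot be thermodynamically free.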
3. The Bootstrap Clarified
Universes do not transition from “no observer grammar” to “observer grammar.”
Rather:
- A grammar either supports internal narratability in principle or it does not.
- Early epochs may instantiate no observers, but the closure constraints are already present.
- Observers do not create laws; they make pre-existing closure constraints operationally visible.
This is logical filtration over possible grammars, not temporal selection.
4. The Central Tension: Layer 2 vs Classical Local Theories
Classical stochastic theories can satisfy L1 and can produce stable macroscopic records in equilibrium regimes. Difficulties arise when one simultaneously demands:
- Bell-violating correlations,
- freely choosable local measurement settings,
- no superluminal signalling (L3),
- and robust record objectivity (L2).
In classical local hidden-variable models, Bell-violating correlations require either:
- explicit nonlocal influence (violating L3), or
- superdeterministic/global coordination of hidden variables with future measurement settings, or
- contextual dependence of records on remote interventions.
The latter two violate the robustness and intervention-stability clauses of L2. This motivates the conjecture that classical theories cannot simultaneously satisfy L2 and L3 in Bell-violating regimes without fine-tuning.
Quantum theory appears to occupy the unique middle ground: non-classical correlations, no signalling, and stable decohered records.
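The middle ground can be quantified with the CHSH value S: local classical models satisfy S ≤ 2, quantum theory reaches the Tsirelson bound 2√2, and the PR box attains the algebraic maximum 4. A short check using standard correlators (illustrative code; the optimal measurement angles for the singlet are assumed):

```python
import math
from itertools import product

def chsh(E):
    """CHSH combination S = E(0,0) + E(0,1) + E(1,0) - E(1,1)."""
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

def best_classical():
    """Maximize S over deterministic local strategies (fixed +/-1 outputs per setting)."""
    best = 0.0
    for a0, a1, b0, b1 in product((-1, 1), repeat=4):
        E = lambda x, y: (a0, a1)[x] * (b0, b1)[y]
        best = max(best, chsh(E))
    return best

# Quantum singlet correlators E(x,y) = -cos(theta_x - phi_y) at optimal angles.
angles_a = (0.0, math.pi / 2)
angles_b = (math.pi / 4, -math.pi / 4)
E_quantum = lambda x, y: -math.cos(angles_a[x] - angles_b[y])

# PR box: E = +1 when x*y = 0, -1 when x*y = 1.
E_pr = lambda x, y: 1 if x * y == 0 else -1

print(best_classical())      # 2 (local bound)
print(abs(chsh(E_quantum)))  # 2*sqrt(2), about 2.828 (Tsirelson bound)
print(chsh(E_pr))            # 4 (algebraic maximum)
```

The gap between 2√2 and 4 is the space the conjectured L2 robustness clauses are meant to close.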
5. Project 2: Collapse of GPT Space Under Record Objectivity
Framework
We work within generalized probabilistic theories (GPTs) admitting convex state spaces, local measurements, multipartite composition, and operational no-signalling.
Axioms
Assume a GPT satisfies:
- (L1) Subsystem factorization
- (L2) Record objectivity
- (L3) No-signalling locality
- (C1) Compositional sufficiency (enough reversible transformations to represent local interventions)
- (C2) Informational closure (e.g. local tomography or a close analogue)
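Axiom (C2) can be illustrated by a standard dimension count (coded here as a sketch): complex quantum theory satisfies local tomography because the real-parameter count of a composite state factorizes into the product of the local counts, whereas real-Hilbert-space quantum theory fails this.

```python
def params_complex(d):
    """Real parameters of a d x d Hermitian matrix (complex QM state space): d^2."""
    return d * d

def params_real(d):
    """Real parameters of a d x d real symmetric matrix (real-QM analogue): d(d+1)/2."""
    return d * (d + 1) // 2

dA, dB = 2, 2  # two qubits vs. two "rebits"

# Local tomography: composite parameter count equals the product of local counts.
complex_ok = params_complex(dA * dB) == params_complex(dA) * params_complex(dB)
real_ok = params_real(dA * dB) == params_real(dA) * params_real(dB)
print(complex_ok, real_ok)  # True False: 16 = 4*4, but 10 != 3*3
```

Real quantum mechanics thus fails (C2) despite being otherwise well-behaved, which is why an informational-closure axiom does genuine work in Conjecture A.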
Conjecture A — Quantum Minimality (Formal)
Any GPT satisfying (L1–L3, C1–C2) is either:
- classical (noncontextual), or
- operationally equivalent to finite-dimensional quantum theory (or a strict subtheory).
Classical theories fail (L2) when required to reproduce Bell-violating correlations under free local interventions without fine-tuning. Super-quantum (PR-box-type) theories fail robustness, compositional stability, or record objectivity.
If proven, this result would show that Hilbert-space quantum mechanics is forced by internal narratability, not selected by aesthetic or metaphysical preference.
6. Dimensionality and Topological Persistence
Dimensionality is neither pure grammar nor a free parameter. L4 suggests an additional requirement:
Topological persistence: the theory must admit stable, localized, topologically nontrivial structures usable for scalable encoding.
Only three spatial dimensions robustly support knots, links, long-lived bound states, and dissipation simultaneously. Higher-dimensional theories may exist fundamentally but must contain an effective 3+1-dimensional sector to satisfy L1–L4.
This is a robustness claim, not a uniqueness theorem.
7. Spin–Statistics and Rigidity
Spin–statistics, usually derived within relativistic QFT, may be reframed as a closure condition:
If identical excitations could aggregate without exclusion or coherence rules, records would delocalize or collapse under composition.
Conjecture D: relativistic causality plus durable, intervention-stable records forces a spin–statistics–type connection. Violations destabilize locality or complexity.
8. Residual Freedom
Closure constraints strongly fix structure but leave parameters contingent:
- gauge groups,
- coupling constants,
- masses,
- symmetry breaking patterns,
- initial conditions.
The framework aims to explain why there are constraints, not to predict numerology.
9. What This Framework Does—and Does Not—Claim
It does not claim:
- that only inhabitable universes exist,
- that laws are metaphysically necessary,
- that the Standard Model is uniquely determined.
It does claim:
- that most consistent grammars are epistemically sterile,
- that internal narratability imposes severe, non-anthropic constraints,
- that many “fundamental principles” are closure conditions rather than mechanisms.
This is stronger than weak anthropics, weaker than metaphysical necessity.
10. Conclusion
From the outside, laws look contingent. From the inside, they look unavoidable.
Closure physics explains why both impressions are correct.
If Conjecture A is proven, quantum structure ceases to be mysterious: it becomes the minimal grammar under which a universe can contain agents who know they exist.
The only question left untouched is the genuinely metaphysical one:
Why does anything exist at all, rather than nothing?
That may lie beyond physics. But once existence is granted, the demand that reality be self-consistently knowable from within appears to fix far more of physics than is usually acknowledged.