GL Theory: Greg's Computational Guide
to the Universe

The biggest mysteries in physics, explained through the lens of the machines we build every day
VERSION 19.0 · MARCH 2026 · PLAIN ENGLISH EDITION
A Note from the Author

It is a curious thing to have a passion for a subject you can't quite grasp. I would imagine this is all that is really needed for the seeds of AI psychosis to take hold.

What follows is something of a "hobby" of mine, one that might seem odd on the surface: to "ponder the world" as a hobby. But I'd argue it's actually a well-known fact that the best way to spend ANY amount of time on ANY amount of drugs is to sit back and go "YO what IF LIKE…"

Most of what follows is clearly written by AI, but it is more or less the result of my "pondering"/ranting about a subject I've lacked the skills to fully grasp for the last 8 or 9 years. So while the words are AI-assisted, I assure you the delusions are my own.

I have not figured out the exact number yet, but I imagine the odds of someone being committed rise significantly the moment they start uttering the words "I have a theory of the universe".

So, to try and get ahead of whoever it was that plotted to get Kanye, I'd like to spell out what this actually is.

This is my best guess at how it could all work. Do I think it's right? I'd say the odds are as close to zero as something can be. However, if this somehow has any iota of a concept that inspires a thought in someone more capable, even if born out of pure opposition, then I'd consider that a win.

Most of what I describe below are things about our known universe that absolutely fascinate me (read: scare the living fuck out of me). And I figured having it on the World Wide Web makes it easier to send as an advance briefing for any family function or dinner party I'm expected to attend.

In a world where AI can do almost everything, I wanted one section that was just me.

Cheers,
A Very Serious Person

This document is an attempt to explain the observations of physics through the lens of a computational framework — five ideas borrowed from the machines we build every day.

It will walk you through some of the biggest mysteries and most beautiful experiments in our understanding of the universe: the kind of things that have puzzled the greatest minds for a century. And it will try to explain each one using ideas we can see in computers, networks, and graphics cards.

At the very least, you will come away with an introduction to some of the most extraordinary things humans have discovered about how reality works. At most, you will see a pattern that connects them all.

The Framework — Five Principles

1 Two Types of Structure. Reality is one engine containing two types of graph structure. Content graphs represent particles, fields, and interactions — they have topology (connections and dependencies) but no spatial location. The spatial graph is space itself — not a grid, but a graph: a network of regions connected to neighboring regions, where "neighboring" and "distance" are properties of the graph's edge weights. Space doesn't contain the graph. The graph IS space. The relationship between the two layers is many-to-many: each spatial region can reference many content nodes (dense regions), and each content node can be referenced by many spatial regions (entanglement). The spatial graph is the render layer — but it is inside the engine, not separate from it.
2 One Rule. Each step, the engine takes everything that currently exists and produces what exists next. One input, one output, always. No randomness, no branching, no alternatives. Time has three levels in this framework: Engine time is the global tick counter — it never dilates, every region participates every tick. Experienced time is the number of state transitions actually computed on your content graph per tick — this is what clocks measure, this is what dilates, and the dilation is actual, not perceived: fewer state transitions genuinely happened. Rendered time is the committed frame — a static snapshot with no inherent direction, which is why physics equations are time-symmetric. The arrow of time comes from the computation direction: you can't un-run a function.
3 Everything is a Graph. At the engine level, everything is represented as a graph — a network of connected things, where each piece may depend on others. You can picture it as dots connected by lines. Graphs live in the engine, not on the screen. The screen references graphs — it projects them, like pixels on a monitor displaying an image stored in memory.
4 There is a Budget. Each step, the engine can only do a fixed amount of work. This limit never changes. If a graph can't be fully resolved in one step, the leftover carries over to the next step — a stack of unfinished work. That stack is the beginning of mass. The return value of completed work is energy. That return value becomes input to the next cycle. This is conservation of energy: output from one tick becomes input to the next. Nothing is created or destroyed — return values are passed forward.
5 Lazy Evaluation and Double-Buffered Rendering. Each tick, the engine commits a frame. You always see frame N while the engine computes frame N+1. You are always off by one. The engine computes ALL graphs every tick — nothing is skipped. The question is not whether a graph gets computed. It is: does anything demand a definite answer from it? This is lazy evaluation — values aren't collapsed to final answers until a strict consumer demands one. A basketball is its own strict consumer: every atom demands definite values from its neighbors just to compute the next tick — it resolves itself. A lone photon has no strict consumer — nothing needs its definite position — so the engine carries the multi-path walk forward as the graph's evolving state at the data layer, not projected to a definite spatial location. That walk IS the wave function. Measurement is a cache bust: a strict consumer — a committed structure that needs a definite value — demands resolution. The engine produces one answer. The walk is replaced by a result. That is "collapse."
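For readers who think in code, principle 4 can be sketched as a toy loop. Everything here (the BUDGET constant, the tick function, the demand numbers) is invented purely to illustrate the carry-over idea; none of it comes from real physics:

```python
# Toy model of Principle 4: a fixed per-tick work budget with carry-over.
BUDGET = 10  # the fixed amount of work the engine can do per tick

def tick(backlog, new_work):
    """Spend up to BUDGET units; whatever is left carries over as 'stack'."""
    total = backlog + new_work
    done = min(total, BUDGET)
    carried = total - done      # the unfinished remainder: proto-"mass"
    energy = done               # the return value of completed work
    return carried, energy

backlog = 0
for demand in [4, 12, 3]:       # work demanded on three successive ticks
    backlog, energy = tick(backlog, demand)

# Tick 1: 4 <= 10, everything clears. Tick 2: 12 > 10, so 2 units
# carry over. Tick 3: 2 + 3 = 5 <= 10, the stack clears again.
```

The point of the sketch is only the shape of the bookkeeping: the budget never changes, and anything unfinished is input to the next tick.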

What This Means for Physics — The Claims

The universe is not random. What looks random from inside the screen is a computation whose full input you can't see. Every outcome was always determined. You are missing variables — not because they're hidden inside particles, but because they live in the engine, which has no spatial location at all.
Quantum mechanics and general relativity don't contradict. They describe different layers. QM is the physics of the engine. GR is the physics of the screen. They were never meant to be reconciled — they were never describing the same thing.
Graphs live in the content layer. The spatial graph references them — like pixels referencing objects in memory. Multiple spatial regions can reference the same content graph. When that content graph resolves, every spatial region referencing it updates simultaneously. No signal between regions needed. This is what entanglement is.
Mass is computational overflow that perpetuates itself. A directed acyclic graph (no cycles) resolves cleanly — no overflow, no mass. A cyclic graph overflows — the self-reference creates permanent stack. If it regenerates every step, you get stable matter. The mass/massless distinction is the distinction between cyclic and acyclic graph topologies. Matter-antimatter pairs are inverse graphs — when they meet, they cancel perfectly and all stored work returns as energy.
Gravity is pressure from that overflow — and it cannot be blocked. Mass's ongoing work drains budget from everything around it. That drain is gravity. You can't shield it because it operates in the engine. Your shield exists only on the screen. Wrong layer entirely.
Spacetime is the routing table. Flat space means uniform routing costs. Curved space means the routing costs have been skewed by mass's drain. Einstein measured the routing table with extraordinary precision. He just didn't know that's what it was.
Gravity and time dilation are one mechanism, not two. Mass's budget consumption simultaneously distorts the spatial graph's edge weights (curvature) and starves neighboring content graphs of compute cycles (dilation). One drain, two observable effects. This is why the Schwarzschild metric covers both with one equation.
Time dilation is actual, not perceived. Near mass, fewer state transitions genuinely occurred on your content graph. Not fewer measured. Not fewer perceived. Fewer actual. GPS satellites correct for about 38 microseconds per day (a gravitational gain of roughly 45 microseconds, minus about 7 lost to orbital speed) because fewer state transitions were computed on the ground clock than the satellite clock, across the same number of engine ticks.
Time is a counter, not a river. How many state changes happened to you — that's your time. Near mass, fewer happen. Moving fast, fewer happen. At light speed, none happen. The arrow of time exists because the Rule runs forward — you can't un-run a computation.
Energy is the return value of completed work. When the engine finishes resolving a graph, the output is energy. That return value becomes input to the next tick. Conservation of energy is not an imposed law — it is the fact that function outputs become function inputs, total bounded by C.
The wave function is NOT the particle — it is the content graph's evolving state at the data layer. When no strict consumer demands a definite value, the engine computes the graph's influence through every path each tick without resolving to a single answer. That multi-path walk IS the wave function. It lives at the data layer — computed, evolving, but not projected to a definite spatial-graph position. You can't observe it directly because it hasn't been written as a definite value. "Collapse" is a cache bust: when a strict consumer — a committed structure that needs a definite value — demands resolution, the engine produces one answer.
Entanglement requires no communication whatsoever. Two entangled particles are one content graph referenced by two spatial regions. When the content graph resolves, both regions update — not because a signal traveled between them, but because they were always referencing the same data. Like two users editing the same shared document.
E = mc² — the budget appears squared because it plays two roles. It determines how much work each overflow frame stores AND how fast the release propagates. Same quantity, two jobs, multiplied together. This "dual-role pattern" also explains why quantum probability is the square of the amplitude.
The speed of light is not a law imposed on nature. It is the maximum amount of work the engine can do per step. You can't exceed it because you can't spend more budget than exists.
The universe has pixels. The Planck length (~10⁻³⁵ meters) is the smallest meaningful unit of distance. The Planck time (~10⁻⁴⁴ seconds) is the smallest meaningful time interval. These are the screen's resolution and the engine's tick rate. Below these scales, our equations stop making sense — because you've hit the pixel boundary.
The quantum eraser does not require the future to affect the past. The interference was always in the data. The detector sorted results into groups that washed it out. Erasing the sorting recombines the groups and the pattern reappears. Like removing a filter from a spreadsheet — the data never changed.
Things stay quantum when nothing demands a definite answer. Things become classical when they contain or connect to strict consumers. An isolated particle has no strict consumer — nothing needs its definite position — so the engine carries its multi-path walk forward tick after tick. A basketball has 10²⁶ atoms, each demanding definite values from its neighbors to compute the next tick. It is its own strict consumer — it resolves itself. Decoherence is not caused by "observation." It is caused by strict consumers demanding definite values.

The rest of this document takes the biggest puzzles in physics, one at a time, and shows what happens when you look at them through these five principles. Each section explains the physics, looks inside the machine, and then connects the two.

01

Why Do Physics' Two Best Theories Contradict Each Other?

Physics has two theories that each work perfectly — and completely disagree with each other. Quantum mechanics governs the very small: particles, atoms, subatomic forces. It has been tested to fourteen decimal places of accuracy. General relativity governs the very large: gravity, spacetime, black holes. It has predicted gravitational waves, GPS corrections, and the bending of starlight. Both are extraordinarily precise. Both have passed every test. And for a hundred years, every attempt to combine them into one unified theory has failed.

Think about a video game. There are two things happening at the same time — inside the same hardware.

First, there are the game objects — the actual data. A character's health, position, inventory. A weapon's damage value. The connections between objects: this character owns that weapon, this door connects to that room. All of this is pure data: numbers, lists, connections. None of it has a physical location inside the computer. It's just graph structure — dots connected by lines.

Second, there is the framebuffer — a grid of pixels that makes the data visible. Each pixel references the game objects and displays them spatially. The 3D world you see on screen is built from the framebuffer. It has distances, positions, depth. But the framebuffer is not separate from the hardware — it is inside the graphics card, part of the same system as the game objects. It's a different type of data structure serving a different purpose.

The framebuffer has rules of its own. Light behaves realistically. Shadows fall correctly. Physics feels right. You could spend years studying the screen and develop an accurate theory of the game's world. But that theory would never explain the game objects — because the framebuffer doesn't contain its own explanation. To understand why the screen looks the way it does, you have to look at the data structures underneath.

This framework proposes that reality has the same structure — inside one engine:

Content graphs are the data. Particles, fields, interactions — represented as networks of connected things with dependencies. They have topology but no spatial location. A particle's graph doesn't know where it "is." It's just a pattern of dependencies. This is where quantum mechanics lives.

The spatial graph is the framebuffer. A network of regions, each connected to neighboring regions, each referencing content graphs. This network IS space. Distance, proximity, routing costs — all are properties of this graph. This is where general relativity lives.

Both live inside one engine. Both are evaluated by one rule each tick. They are not separate layers — they are two types of structure within a single system.

Figure 1 — The Architecture of Reality: Two Types of Structure in One Engine
[Diagram: ONE ENGINE — TWO TYPES OF STRUCTURE. Left panel, CONTENT GRAPHS — NO SPATIAL LOCATION: particles, fields, interactions; topology only; quantum mechanics describes these. Right panel, SPATIAL GRAPH — THIS IS SPACE: regions connected to neighboring regions; distance and routing costs live here; general relativity describes this. Arrows show spatial regions referencing content graphs; both live inside one engine, evaluated by one rule each tick.]
QM and GR are not incompatible. They describe different types of structure within the same engine. QM describes the behavior of content graphs — the engine's dependency walks of uncommitted graphs, cache-bust collapse. GR describes the behavior of the spatial graph — routing costs, edge weights, how overflow stacks warp the spatial structure. They were never describing the same thing.

Where Things Actually Live

Here is a distinction that changes everything: content graphs have no spatial location. The spatial graph references them. A particle doesn't "live" at a point in space. A particle is a content graph — a pattern of dependencies — and some region in the spatial graph holds a reference to it. The reference gives the particle a location. The particle's graph doesn't know or care where that location is.

Think about your computer's memory and monitor. Each pixel on screen doesn't contain an image. Each pixel references a value in video memory — a number that determines its color. The actual data lives in the graphics card's memory. The screen just displays it. If you change a value in memory, the pixel changes. The pixel doesn't "know" anything. It's reading from the source. And crucially — the framebuffer and the game data are both inside the same graphics card. They're not separate systems. They're different types of data structure inside one chip.

Now here is the part that matters for physics: multiple pixels can reference the same object in memory. A single 3D model in a game might be visible from two cameras at once. Both views display the same object. If the object changes in memory, both views update — not because a signal went from one camera to the other, but because both cameras were looking at the same underlying data.

This is exactly how entanglement works. Two particles that have interacted are one content graph being referenced by two regions in the spatial graph. The distance between those regions is real in the spatial graph. It is meaningless in the content graph — content graphs have no spatial properties. When the content graph resolves, every spatial region referencing it updates. No signal between regions needed. They were always reading from the same source.
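The shared-reference idea is easy to sketch in code. The class names (ContentGraph, Region) and the "spin up" outcome are illustrative inventions, not anything from actual quantum mechanics:

```python
# Two "spatial regions" holding references to ONE content object.
# When the shared object resolves, both views update, with no signal
# passing between them.

class ContentGraph:
    def __init__(self):
        self.value = None        # unresolved until a strict consumer demands it

    def resolve(self, outcome):
        self.value = outcome     # one write to the shared data

class Region:
    def __init__(self, content):
        self.content = content   # a reference, not a copy

    def read(self):
        return self.content.value

shared = ContentGraph()
region_a = Region(shared)        # "left side of the galaxy"
region_b = Region(shared)        # "right side of the galaxy"

shared.resolve("spin up")        # the content graph resolves once
# Both regions now read the same value, instantly, no message passed.
```

The "spooky" correlation here is just aliasing: two names for one piece of data.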

Figure 1b — Graphs Live in the Engine. The Screen References Them.
ENGINE — DATA LAYER No space. No distance. Just graphs. A B ONE GRAPH references references SCREEN — RENDER LAYER LOCATION A — left side of galaxy ↑ particle A displayed here this pixel references the graph above ← billions of light years on screen → distance exists HERE on the screen distance does NOT exist in the engine above LOCATION B — right side of galaxy ↑ particle B displayed here this pixel ALSO references the graph above
The engine contains one content graph (A–B linked). The spatial graph references it from two regions, billions of light years apart. The distance is real in the spatial graph — but the content graph has no distance. When the content graph resolves, both spatial references update. No signal travels. Both were always reading from the same data. This is entanglement.

The Universe Has Pixels

This might sound like a metaphor — "pixels" on a screen — but the universe actually has a minimum resolution. Physicists identified these scales around the year 1900:

The Planck length — about 1.6 × 10⁻³⁵ meters — is the smallest meaningful unit of distance. Below this scale, our equations of physics stop producing sensible answers. The Planck time — about 5.4 × 10⁻⁴⁴ seconds — is the smallest meaningful unit of duration.

These are not just measurement limits. They are structural. Below the Planck scale, the concepts of "distance" and "duration" lose their meaning — exactly as you would expect if they are properties of a screen with a finite resolution, not properties of some continuous underlying fabric.

In this framework: the Planck length is the pixel size. The Planck time is the tick duration. The universe literally has a resolution and a frame rate.

You Always See Committed Frames

Each tick, the engine evaluates the full graph — content and spatial — and commits the next frame. But it works like a modern GPU: double-buffered. You see the last committed frame (frame N) while the engine is computing the next one (frame N+1). You never see work in progress. You only see finished frames.

Every graph in the engine falls on one side of a dividing line:

Strict consumer present — resolve: a committed structure's next state requires a definite value from this graph. The engine produces one answer and writes it to the committed frame. This is classical reality. Your coffee cup, a clock tick, a measured particle. Mass — a self-referencing overflow stack that regenerates every tick — is its own strict consumer and resolves every tick by definition.

No strict consumer — carry the walk: a lone particle in flight, an unobserved photon. Nothing demands a definite answer. The engine computes the graph's influence through every path but carries the walk forward as the graph's evolving state at the data layer. That walk is the wave function. It is computation in progress — lazy evaluation in action: the engine doesn't produce answers nobody asked for.

You — as a committed structure embedded in the spatial graph — can only see committed frames. You cannot see the engine's dependency walk of uncommitted graphs (wave functions) or the internal process of a self-referencing overflow (mass's cycling). You see resolved outputs. From inside the spatial graph, the engine's multi-path walk looks like "the particle was in two places at once." From outside, the engine was evaluating an uncommitted graph's influence through every available path.

Figure 1c — Double-Buffered Rendering: What You See vs. What the Engine Computes
[Diagram: left panel, FRAME N — WHAT YOU SEE: the last committed frame; every value definite; mass (a self-referencing overflow) always resolved. Right panel, FRAME N+1 — ENGINE COMPUTING: full dependency walk in progress; with no strict consumer, all paths are walked; a cache bust forces a resolve. You never see this panel, only the committed result.]
Left: Frame N — the last committed frame. Every value definite. Mass is always resolved (self-referencing overflow, its own strict consumer). This is classical reality. Right: Frame N+1 being computed. The engine walks all graphs — committed ones get resolved, uncommitted ones get their full dependency tree walked through every path (the wave function). A cache bust (gold) forces resolution of an uncommitted graph. You only ever see committed frames.
Why This Matters for Everything That Follows

Lazy evaluation is the key to understanding quantum mechanics. When no strict consumer demands a definite answer, the engine walks a graph's full dependency tree each tick without resolving, computing every possible influence path. That walk IS the wave function — the content graph's evolving state at the data layer. When a strict consumer appears — a detector at a slit, an atom absorbing a photon — it demands a definite value. Cache bust. The engine resolves the graph, producing one answer. Every dependent graph recomputes. That is "collapse." You never see the walk directly. You only see committed frames. From inside the spatial graph, it looks like the particle "was in two places at once." From outside, the engine was computing an unresolved graph through every path, and a strict consumer forced it to pick one.
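Double buffering itself is a real, everyday graphics technique, and the commit loop is tiny. The sketch below is illustrative only; compute_next is a stand-in for "the Rule," not any actual physics:

```python
# Minimal double-buffer loop: observers only ever see the last committed
# frame (N) while frame N+1 is being computed.

def compute_next(frame):
    """Stand-in for 'the Rule': one input state -> one output state."""
    return frame + 1

committed = 0                    # frame N: the only thing visible
for _ in range(3):
    in_progress = compute_next(committed)   # frame N+1, never visible
    # an observer reading now still sees `committed`, not `in_progress`
    committed = in_progress      # the commit: frame N+1 becomes frame N

# After three ticks the visible frame is 3. No observer ever saw a
# half-computed state, only finished frames.
```

This is the same reason a game never shows you a half-drawn screen: the buffer being computed and the buffer being displayed are never the same buffer.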

You Are Mario

One more piece of evidence for this two-layer picture. Physics has known for thirty years that the maximum amount of information that can fit inside any region of 3D space is determined not by the volume of that region, but by the surface area of its boundary.

Think about a room full of books. Double the room's size and you'd expect to fit twice as many books. But nature says: the maximum number of books depends on the wall space, not the floor space. The 3D interior is somehow encoded on its 2D boundary. Physicists call this the holographic principle, and it is deeply strange — unless the interior isn't independently stored but is being computed from the boundary.

This is exactly what you would expect from a two-layer system. Mario experiences distance, movement, depth. His world feels real. But at the data layer — in memory — he is a flat data structure. His hat has no interior. What the engine stores is the boundary description. The interior is computed when needed. You are Mario. The holographic principle is not a coincidence. It is exactly what two-layer architecture looks like from the inside.

In a Computer (CS)
A game engine runs inside a graphics card. Game objects (data) and the framebuffer (display grid) are both in the same hardware. The framebuffer references game objects. Pixels reference data in memory. Multiple pixels can reference the same object. The screen has a resolution (pixels per inch).

In This Framework
Content graphs (particles, fields — no space) and the spatial graph (regions, distances — IS space) are both inside one engine. Spatial regions reference content graphs. Multiple regions can reference one content graph. Planck length is the spatial graph's resolution. Planck time is the tick rate.

What We Observe (Physics)
QM works at small scales. GR at large scales. They contradict. Holographic principle: information on surfaces, not volumes. Planck length/time: minimum meaningful units. All consistent with a two-layer system with finite resolution.
✦ What This Means

You have never seen a content graph directly. Everything you have ever observed — every experiment, every measurement — is a resolved value read from the spatial graph. Content graphs have no space, no distance, no time. They have topology. The spatial graph references them, giving them the appearance of location. When physicists say QM and GR contradict, they are describing two types of structure inside one system and wondering why the descriptions don't match. They don't match because content graphs and the spatial graph have different properties — and that's exactly what you'd expect from two types of structure serving different functions inside one engine.

02

How the Engine Works — Graphs, Stacks, Budgets, and Frames

Before we can explain what mass is, or why gravity can't be shielded, or what the double-slit experiment is really showing us, we need to understand how the engine works. This section introduces four ideas from computing that you'll recognize in every section that follows. None of them are complicated — but all of them are essential.

Three Levels of Time

This is the most important concept in the framework. Get it right, and everything that follows is intuitive.

Time Operates at Three Distinct Levels

Engine time (the global tick) — the step counter: Ω, Ω+1, Ω+2. This never dilates. Every region participates in every tick. The engine computes the full next state, always.
Experienced time (state transitions on your content graph) — the number of internal state transitions actually computed on YOUR subgraph during a given tick. This is what a clock measures. This is what dilates. Near mass, less budget is available (mass's stack consumed it). Moving fast, budget goes to spatial-graph updates instead. In both cases: fewer state transitions genuinely happened. Not fewer perceived — fewer actual.
Rendered time (the committed frame) — a static snapshot with no inherent time direction. Physics equations are time-symmetric because they describe committed frames. The arrow of time comes from the computation direction: you can't un-run a function.

These three levels dissolve the apparent paradox: "How can time dilate if the engine renders everything globally?" The engine ticks at the same rate everywhere. But it doesn't do the same amount of work on every content graph each tick.
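As a toy model, you can treat a clock as nothing but a transition counter. The budget-share numbers below (0.9 near mass, 1.0 in orbit) are made up purely for illustration; no real GPS arithmetic is being done here:

```python
# Toy version of 'experienced time as a transition counter': every clock
# participates in every engine tick, but the clock near mass gets fewer
# state transitions computed per tick.

ENGINE_TICKS = 1000

GROUND_SHARE = 0.9   # fraction of local budget left after mass's drain (invented)
ORBIT_SHARE = 1.0    # far from mass: full budget (invented)

# One term per engine tick: both clocks tick the same number of times,
# but accumulate different amounts of "experienced time."
ground_clock = sum(GROUND_SHARE for _ in range(ENGINE_TICKS))
orbit_clock = sum(ORBIT_SHARE for _ in range(ENGINE_TICKS))

# Same number of engine ticks for both, but the ground clock accumulated
# fewer state transitions: in this picture it genuinely aged less.
```

The sketch separates the two quantities the framework insists on keeping distinct: the global tick count (identical for both sums) and the per-graph transition count (different).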

Spacetime Is the Routing Table

What Einstein called spacetime — the mathematical structure described by the metric tensor — is the spatial graph's edge-cost matrix. Flat spacetime means all edge weights are uniform. Curved spacetime means the edge weights have been distorted by mass's budget consumption.

This matters because gravity and time dilation are not two separate phenomena that happen to share equations. They are one mechanism producing two observable effects. Mass's permanent stack consumes budget, which simultaneously distorts the spatial graph's edge weights (curvature) and starves neighboring content graphs of compute cycles (dilation). One drain, two consequences.

Why Some Things Resolve and Others Don't — Lazy Evaluation

If the engine computes all graphs every tick, why don't all graphs produce definite answers? Our own computers resolve small functions instantly. Nobody says "this function is too small — carry it as a probability distribution." So why would the engine?

The answer: it doesn't skip anything. It computes everything. But there is a difference between computing a graph's influence and producing a definite answer.

Programmers call this lazy evaluation. In functional programming, a value isn't collapsed to a final answer until something downstream actually needs that answer to proceed. Not because it's too small. Not because it's too complex. Because no consumer has demanded the result yet.

The Engine's Decision — One Question

Strict consumer present → Resolve. Produce one definite value. Write it to the spatial graph. This is classical reality. A basketball is its own strict consumer — every atom demands definite values from its neighbors just to compute the next tick. It resolves itself because its internal dependencies require it.
No strict consumer → Carry the walk. Nothing demands a definite answer. The multi-path computation IS the content graph's evolving state. It lives at the data layer — computed, updated each tick, but not projected to a definite spatial-graph position. This is quantum behavior. You can't observe it directly because you're a committed structure and it hasn't been written as a definite value.

The wave function lives entirely at the content-graph layer. The spatial graph holds a reference to the unresolved content graph — it knows it's there — but the reference points to an evolving multi-path computation, not a definite value. Measurement is a cache bust — a strict consumer whose next state requires a definite value from an unresolved graph. The engine produces one answer. The walk is replaced by a result. That is "collapse."
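Lazy evaluation is a genuine functional-programming idiom, and the "thunk" is its standard building block. Here is a minimal version; the photon example and its outcome string are invented for illustration:

```python
# A thunk is not collapsed to a value until a strict consumer demands one.

class Thunk:
    def __init__(self, compute):
        self.compute = compute   # the deferred computation
        self.result = None
        self.resolved = False

    def force(self):
        """The 'cache bust': a strict consumer demands a definite value."""
        if not self.resolved:
            self.result = self.compute()
            self.resolved = True
        return self.result

photon = Thunk(lambda: "detected at slit B")

# The engine can carry the thunk forward tick after tick...
assert photon.resolved is False      # still a 'wave function'

# ...until something downstream needs a definite answer:
outcome = photon.force()             # measurement: the walk becomes a result
assert photon.resolved is True
```

Note the asymmetry the framework leans on: forcing is cheap and final, while the unforced thunk can be passed around indefinitely without ever producing an answer nobody asked for.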

The Graph

Imagine a family tree. You are connected to your parents. Your parents are connected to their parents. Your cousins connect to the same grandparents through different paths. The whole family is linked together in a network of relationships.

Now think about a recipe. You can't frost a cake until you bake it. You can't bake it until you mix the batter. You can't mix the batter until you measure the ingredients. Each step depends on the one before it — there is an order you have to follow.

Computer scientists call these networks of connected things a graph. A graph is just dots connected by lines. Each dot (called a node) is a thing. Each line (called an edge) is a relationship: "this depends on that" or "this connects to that."

Figure 2a — What a Graph Looks Like
[Diagram: three small graphs. Left, A SIMPLE GRAPH: A depends on B and C; to finish A, first finish B and C. Center, ADDING A CONNECTION: a new edge makes B depend on C; the graph's shape changed, which is how a detector affects a particle. Right, A CYCLE — POINTS BACK TO ITSELF: M depends on itself; to finish M, you need M first; the loop never ends, and this is mass.]
A graph is dots connected by lines. Left: A depends on B and C — finish B and C before A. Center: adding a new edge changes the graph's shape — this is what a detector does. Right: a dot that connects back to itself creates a cycle that never resolves — this is mass.

One rule governs graphs: you cannot finish processing a node until you have finished everything it connects to. If node A connects to B and C, you finish B and C first. If B connects to two more nodes, you finish those first. Resolving a graph is a walk — you follow the edges, resolve what you find, and work your way back.
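That walk is exactly a depth-first traversal, and a cycle is exactly what such a walk can never finish. A minimal sketch, with invented node names:

```python
# A depth-first 'walk' that resolves a dependency graph: a node is finished
# only after everything it points to is finished. A cycle can never finish,
# which is this framework's picture of mass.

def resolve(node, graph, done=None, visiting=None):
    done = set() if done is None else done
    visiting = set() if visiting is None else visiting
    if node in done:
        return True
    if node in visiting:
        return False              # cycle: this node depends on itself
    visiting.add(node)
    ok = all(resolve(dep, graph, done, visiting)
             for dep in graph.get(node, []))
    visiting.discard(node)
    if ok:
        done.add(node)
    return ok

acyclic = {"A": ["B", "C"], "B": [], "C": []}
cyclic = {"M": ["M"]}             # M depends on itself

resolve("A", acyclic)   # True:  clears cleanly ("energy")
resolve("M", cyclic)    # False: never clears ("mass")
```

The `visiting` set is the stack of unfinished work from principle 4; a cycle is precisely a node that reappears in its own stack.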

When Graphs Combine

When two things interact, their graphs merge. Imagine two small companies, each with their own organizational chart. If the companies merge, the org charts combine into one larger chart. The merged graph is bigger and more complex than either was alone.

Each graph can be described as a table of numbers — rows and columns showing which nodes connect to which and how strongly. Computer scientists call this a matrix. When two graphs interact, their matrices combine — roughly, they multiply together. The result describes the merged graph.

Here is where it gets interesting: some graphs are the exact mirror of another. Their matrix is the mathematical inverse — like a negative of a photograph. When a graph meets its inverse, the matrices multiply out to the identity: the trivial, structureless result. Nothing left to resolve. Zero overflow. All stored work returns as energy.

This is what happens when matter meets antimatter. A particle's graph and its antiparticle's graph are inverses of each other. When they meet, the matrices cancel perfectly. Total annihilation. Every bit of stored computation returns as energy. E = mc². The framework doesn't need a special rule for annihilation — it falls directly out of how graph combination works.
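In linear algebra the concrete picture behind "cancellation" is a matrix multiplied by its inverse: the product is the identity, the structureless result. A toy sketch with a hand-picked integer matrix (the matrices are illustrative, not any real particle's graph):

```python
# "A graph meeting its inverse" as matrix multiplication.
# M is a toy adjacency-style matrix; Minv is its exact inverse.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

M    = [[2, 1], [1, 1]]
Minv = [[1, -1], [-1, 2]]   # det(M) = 1, so the inverse is integer-valued

print(matmul(M, Minv))  # → [[1, 0], [0, 1]]  (identity: no structure left)
```

Every entry of M is undone, term by term, by the corresponding structure in Minv. That is the annihilation picture in miniature.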

The Stack

Imagine you are filling out a government form. Question 7 says: "See Form B." You set the first form aside — it's not done yet — and open Form B. Form B's question 3 says: "Attach Schedule C." You set Form B aside and open Schedule C. Schedule C says: "First complete Worksheet D." You now have a pile of partially-finished paperwork:

Figure 2b — The Paperwork Stack
Left: a stack that clears — work completes and propagates. This is energy. Right: a stack that never clears — each cycle regenerates the first task. This is mass. It runs every tick, consuming shared resources from everything nearby.

This is how every computer works. Each task that needs a sub-task done first puts itself on pause (on the stack). When sub-tasks finish, they come off the top, and paused tasks resume. Most stacks clear. But if a task creates a sub-task that recreates the original — a loop — the stack never clears. This is called a stack overflow.

The critical connection: on a shared system, a permanent stack consumes resources from everything around it. The processor, memory, and cache are shared. A stack that runs every cycle, consuming budget every cycle, leaves less for everything else. Other processes slow down. Not because anyone told them to — because the shared resources they need are being consumed by the permanent stack.
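The two kinds of stack can be sketched in a few lines. This is a toy model, not real scheduler code: `next_tasks` says what each task spawns, and a task that respawns its ancestor makes the pile permanent.

```python
# Pop one task per tick; push whatever it spawns. Return the final stack.

def run(stack, next_tasks, ticks):
    for _ in range(ticks):
        if not stack:
            break
        task = stack.pop()
        stack.extend(next_tasks.get(task, []))
    return stack

# A stack that clears: D, then C, then B, then the Form. Nothing respawns.
print(run(["Form", "B", "C", "D"], {}, 10))          # → []

# A stack that never clears: A spawns sub, sub respawns A.
print(run(["A"], {"A": ["sub"], "sub": ["A"]}, 10))  # → ['A']
```

The first stack drains and frees the processor. The second still holds a frame after ten ticks, and after ten thousand. It consumes one pop-and-push of budget every single tick, forever.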

The Budget

Imagine you can do eight hours of productive work per day. If your to-do list takes six hours — great, done with time to spare. If it takes twelve, you only get through eight, and the remaining four carry over to tomorrow. Your daily capacity is your budget. Each day is a tick.

The universe's engine has a fixed budget per tick. This single number determines almost everything:

The speed of light — a graph operation that uses the entire budget on propagation. One hop per tick, at full budget. Can't go faster: no more budget to spend.

Mass — work that exceeds the budget. The leftover carries over. If it loops, the stack is permanent.

Time dilation — your share of the budget is reduced. Near mass, the permanent stack's work drains your budget. You get less done per tick. Your counter falls behind. That's "slower time."
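The carry-over rule can be sketched as a toy tick function (the value of `BUDGET` is arbitrary, chosen to echo the eight-hour analogy):

```python
# Fixed work per tick; anything over BUDGET carries into the next tick.

BUDGET = 8  # illustrative per-tick capacity ("eight hours a day")

def tick(carryover, new_work):
    total = carryover + new_work
    done = min(total, BUDGET)
    return total - done  # leftover deferred to the next tick

print(tick(0, 6))   # → 0   fits in the budget, nothing carries over
print(tick(0, 12))  # → 4   overflow: four units deferred
print(tick(4, 12))  # → 8   persistent load: the pile grows
```

A workload that keeps arriving faster than the budget never drains its carryover. That persistent carryover is the stack from the previous section.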

The Committed Frame

Your computer already does this. Right now.

A modern GPU uses double buffering. While your monitor displays the current frame (frame N), the graphics card is already computing the next one (frame N+1). You never see a half-finished frame. You only see completed results. The card flips between two buffers — one being displayed, one being drawn — so the transition is seamless.

But building each frame is itself a massive computation. A GPU doing ray tracing walks every light ray through the entire 3D scene — rays bouncing off walls you can't see, rays from off-screen lights, rays through glass. It walks all of them because any might contribute to the final color of any pixel. The GPU doesn't decide in advance which rays matter. It walks every path, computes every contribution, lets those contributions add together (or cancel out) at each pixel, and only then commits — one definite color per pixel, written to the buffer.

The universe's engine does the same thing. Each tick, the engine computes frame N+1 while the universe — everything you observe — displays frame N. The engine walks the entire dependency graph: committed nodes get resolved, uncommitted nodes get their full dependency tree walked through every possible path. Only when the full computation is done does the engine commit. Frame N+1 becomes the new display. The engine starts computing frame N+2.
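A stripped-down sketch of the flip: the "screen" only ever receives fully computed frames. `compute_frame` is a stand-in for the whole ray walk.

```python
# Double buffering in miniature: compute off-screen, flip when complete.

def compute_frame(n):
    """Stand-in for a full frame computation (every ray walked, etc.)."""
    return f"frame {n} (complete)"

def render(ticks):
    """Return every frame the 'screen' shows: only committed ones."""
    shown = []
    front = compute_frame(0)          # currently displayed buffer
    for n in range(1, ticks + 1):
        back = compute_frame(n)       # next frame, computed off-screen
        front, back = back, front     # flip only once it is complete
        shown.append(front)
    return shown

print(render(3))
# → ['frame 1 (complete)', 'frame 2 (complete)', 'frame 3 (complete)']
```

At no point does a half-finished frame reach `shown`. That is the whole trick: the observer and the computation never touch the same buffer.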

What C Is — The Clock Speed

The budget — C — is the total amount of work the engine can do per tick. It is the clock speed of reality. Just as a CPU's clock speed determines how many operations it completes per cycle, C determines how much graph resolution happens per tick.

This single number sets almost everything. The speed of light is what propagation looks like at full budget — one hop per tick, all C spent on movement. Mass is what happens when a graph's work exceeds C — the overflow carries over. Time dilation is what happens when your share of C shrinks — less work done per tick, fewer state changes, slower clock. C also determines how much total computation the engine can perform — and thus how much budget is available for each content graph's state transitions per tick.

Your laptop's CPU has a clock speed — maybe 3 GHz, three billion cycles per second. The universe's C is measured in Planck times — about 10⁴⁴ ticks per second. The principle is the same. The number is just incomprehensibly larger.

Under the Hood — formal notation
Computer scientists call a system that produces exactly one next state from the current state a Moore machine.

S(Ω+1) = F(S(Ω)), where S = state, Ω = tick, F = the Rule.
C = budget (per-tick work limit = clock speed of reality = speed of light from the render layer).
B = remaining budget after mass has consumed some.
θ(G) = resolution trigger — whether graph G has strict consumers (committed structures whose next state depends on G's definite values).
R(Ω) = P(S(Ω)), where P = projection (committed frame) and R = what we observe.
Display is double-buffered: you see R(Ω) while the engine computes S(Ω+1).
Cache bust: committed node depends on uncommitted graph → engine resolves subtree → cascades.
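The notation can be made runnable. A toy Moore machine, with a counter standing in for the full state and trivial stand-ins for F and P (both are illustrative placeholders, not the actual Rule):

```python
# S(Ω+1) = F(S(Ω));  R(Ω) = P(S(Ω)).  One definite next state per state.

def F(state):          # the Rule: deterministic transition
    return state + 1   # toy state: a counter

def P(state):          # projection: what the render layer shows
    return f"R = {state}"

state = 0              # S(0)
for omega in range(3):
    print(P(state))    # you observe R(Ω)...
    state = F(state)   # ...while the engine computes S(Ω+1)
# prints "R = 0", "R = 1", "R = 2" over three ticks
```

Nothing about the observed sequence R is random: it is fully determined by S(0) and F, which is the point of the formalism.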
In a Computer (CS): Graph: dots + lines — resolve dependencies first. Combining graphs: matrices multiply. Inverse = cancels. Stack: pile of unfinished work — permanent if it loops. Budget: fixed work per tick = clock speed. Double buffer: GPU displays frame N while computing frame N+1. All paths walked before committing.

In This Framework: State is a graph. Graphs live in the engine. The screen references committed frames. C = budget per tick = clock speed of reality. Overflow that loops = mass. Budget drain = gravity. Engine walks all paths of uncommitted graphs = wave function. Cache bust = collapse. Inverse graphs cancel = annihilation.

What We Observe (Physics): Particles interact. Mass exists. Gravity is universal. Matter + antimatter = total annihilation. Clocks run differently. Particles seem "in two places" until measured. Quantum mechanics looks random but might be determined underneath.
✦ What This Means

Five ideas from computing — graphs, stacks, budgets, double-buffered frames, and lazy evaluation — plus the three-level time model are all you need. If you understand how a pile of paperwork can grow and never clear, how a heavy app slows down your whole laptop, how two mirror-image structures cancel each other out, how a GPU displays one frame while computing the next, and why a smart computer doesn't compute answers nobody asked for — you already have the intuition for mass, gravity, annihilation, quantum mechanics, and why things stay quantum until something demands a definite answer. You just didn't know that's what you were looking at.

03

Why Does Anything Have Mass? And Why Does E = mc²?

Why is there something rather than nothing? Why does matter exist? Einstein showed in 1905 that mass and energy are the same thing in different forms — connected by his famous equation E = mc². A paperclip contains enough energy to power a city. But physics has never explained why certain configurations of energy become stable particles, why those particles have the masses they do, or why the speed of light appears squared in the equation. The Standard Model describes which particles exist with extraordinary precision. It takes their existence as given.

Most of the work the engine does each tick finishes within the budget. Two nodes in the graph interact, the combined work resolves in one tick, the result propagates. Clean. No leftovers. The return value of that completed work is energy — the output of resolved computation. This is what light is. This is what most interactions produce.

But sometimes two nodes interact and the combined work exceeds the budget. There is more to do than one tick allows. The unresolved portion carries over — a stack of unfinished tasks that persists into the next tick.

Most of these overflow stacks unwind in a few ticks. The tasks resolve, release their stored work as energy, and the stack clears. These are unstable particles — they exist briefly and then decay.

But certain overflow stacks form a loop: the unfinished task at the bottom references nodes that eventually reference it again. Each tick, the engine processes this stack, and the processing regenerates the very structure that requires processing next tick. The stack is permanent. It runs every tick. It consumes budget every tick. And everything nearby pays the cost.

That permanent, self-regenerating stack is mass.

Mass is not a property assigned to particles. It is what happens when work overflows the budget and the overflow perpetuates itself. The mass/massless distinction is the distinction between cyclic and acyclic graph topologies. The Big Bang was a state of maximum activity — nearly every interaction overflowed. The first fractions of a second were a massive sorting process: unstable overflows decayed, releasing energy. Stable overflow loops persisted. The particles that survived — protons, electrons, neutrinos — are the overflow topologies that happen to be self-sustaining. Matter was not installed as a feature of the universe. It emerged because the budget is finite and some overflows are stable.

Why some particles are stable and others aren't
A proton's overflow topology regenerates identically every tick — it is a stable loop. A proton has never been observed to decay. A free neutron, by contrast, has an overflow topology that slowly unwinds — it decays in about ten minutes, releasing energy as it does. The stability of a particle is determined by whether its overflow loop is a perfect cycle or a gradually unwinding spiral. Physics calls this "particle stability." This framework calls it: whether the stack regenerates cleanly or leaks.

What a Photon Is

A photon is a graph operation that resolves completely within one tick. No overflow. No stack. No mass. It uses its entire budget on propagation — moving through the graph. That's why it travels at the speed of light (one hop per tick at full budget) and why it experiences no time (all budget spent on motion, zero left for internal state changes). A photon is the output of a completed computation, in transit. It is a return value being delivered.

Why some particles have mass and others don't — graph topology
In graph-theoretic terms, the distinction is clean. A directed acyclic graph (DAG — no cycles) can be resolved by topological sort. If the work fits within C, it resolves in one tick. No overflow. No mass. This is a photon. A directed graph with cycles cannot be resolved by topological sort. The cycle creates recursion within the tick. The engine truncates after exhausting budget C, carrying forward unresolved nodes. If those nodes regenerate the cycle next tick, the stack is permanent. This is mass. Different graph topologies have different spectral properties — the eigenvalues of their adjacency matrices determine how much overflow they generate, producing different masses. The specific particle masses we observe correspond to the spectral radii of the stable cyclic topologies.
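The DAG/cycle distinction is directly checkable with Python's standard-library topological sorter: an acyclic graph linearizes, while a cyclic one raises an error, because no finite resolution order exists.

```python
# Acyclic graphs linearize; cyclic graphs have no resolution order.
from graphlib import TopologicalSorter, CycleError

dag = {"A": {"B", "C"}, "B": {"D"}}   # node → its dependencies
print(list(TopologicalSorter(dag).static_order()))
# e.g. ['C', 'D', 'B', 'A'] (dependencies always come first)

loop = {"M": {"N"}, "N": {"M"}}       # M needs N, N needs M
try:
    list(TopologicalSorter(loop).static_order())
except CycleError:
    print("cycle detected: no finite resolution order")
```

In the framework's reading, the first graph is the photon case (resolves cleanly) and the second is the mass case (the sort can never complete).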

Why E = mc² — And Why c Is Squared

Now we can understand Einstein's most famous equation. Mass is a stack of overflow frames. Each frame holds one tick's worth of unfinished work — exactly one budget's worth of deferred computation. The stack depth is m: how many frames persist.

When mass converts to energy — when the stack unwinds, as in nuclear fission or matter-antimatter annihilation — two things happen at once:

First: the stored work is released. Each frame held one budget's worth of deferred computation. Total stored work = m frames × one budget each = m × budget.

Second: the released work propagates. The return values from the unwinding stack don't just appear — they propagate through the graph at the maximum rate, which is also one budget per tick (the speed of light).

The total impact on the system — the energy released — is the stored work times the propagation rate: m × budget × budget = m × budget².

The budget appears squared because it plays two structural roles in the same operation. It determines how much work each frame stores (first role), and it determines how fast the release propagates (second role). Both roles arise from the budget being the universal per-tick work limit. The same quantity, serving two functions, multiplied together. This is not a coincidence — it is what happens whenever a single quantity does double duty in a computation. We call this the dual-role pattern.
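The dual-role arithmetic, spelled out as a toy calculation (the small numbers are arbitrary; the last line plugs in the physical value of c):

```python
# "budget" plays two roles: how much each frame stores, and how fast
# the release propagates. Multiplying the roles gives the square.

def energy(frames, budget):
    stored = frames * budget   # role 1: work held in the stack
    return stored * budget     # role 2: propagation rate → E = m·budget²

print(energy(2, 3))            # → 18  (= 2 × 3²)

c = 299_792_458                # m/s; with m in kg this is E = mc²
print(f"{energy(1, c):.1e} J per kilogram")  # ≈ 9.0e16 J
```

One kilogram of permanent stack, fully unwound, yields about 9 × 10¹⁶ joules — the paperclip-powers-a-city figure from the start of the section.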

Figure 3 — Why the Budget Is Squared: The Geometry of Energy Release
Mass is a stack of overflow frames. Each frame holds one budget's worth of deferred work. One frame's total impact when unwound = budget stored × budget propagation rate = budget². The squaring is an area: what was stored times how fast it spreads. Total energy for m frames = m × budget². That's E = mc².
In a Computer (CS): A recursive function that regenerates itself creates a permanent stack. It consumes one time-slice per cycle. Killing the stack releases all stored work. Total output = depth × work per frame × system throughput. If throughput equals work-per-frame (same budget), the budget appears squared.

In This Framework: Mass = self-perpetuating overflow. Stack depth = mass. Each frame = one budget of deferred work. Energy = return value of completed work. E = m × budget². The budget appears squared because it governs both storage and propagation. This dual-role pattern also appears in quantum probability (α²).

What We Observe (Physics): Matter exists as stable particles. Unstable particles decay. E = mc² converts mass to energy. A kilogram contains 9 × 10¹⁶ joules. Nuclear fission converts a fraction of mass. Annihilation converts 100%. The c² factor is why mass contains so much energy — c is enormous, and squared is astronomical.
✦ What This Means

Matter is not a given. It is the inevitable result of a finite budget and a busy web. When interactions overflow and the overflow sustains itself, you get mass. When that mass converts back to energy, the budget appears squared because it plays two roles — and that's why a tiny amount of mass contains an enormous amount of energy. E = mc² is not mysterious. It is the structure of computation.

04

Why Can't You Block Gravity? Why Does It Bend Light and Slow Time?

Gravity is the most familiar force — everything falls. But it is also the strangest. You can block every other force: a Faraday cage stops electromagnetic fields, lead absorbs radiation. But nothing blocks gravity. It affects everything, everywhere, always. Einstein showed that mass bends spacetime and that this bending IS gravity. Clocks near mass run measurably slower — GPS satellites must be corrected for this every day. Light curves around massive objects. Near a black hole, time stops entirely. The measurements are perfect. But Einstein never explained why mass bends spacetime. He described it. He didn't explain it.

Open the activity monitor on your computer right now. On a Mac, it's called Activity Monitor. On Windows, Task Manager. What you see is a list of every program running, and next to each one, a number: how much of the processor's capacity that program is using.

Find the heaviest one — the program using the most CPU. It might be your web browser with twenty tabs open, or a video editing app, or a game. Now watch what happens to everything else.

Your other apps slow down. Not because the heavy app sent them a "go slower" message. They slow down because they share the same hardware. The processor has a fixed amount of work it can do each cycle — a budget. The heavy app is consuming a large share of that budget. What's left over is split among everything else.

The closer a process is to the heavy one in terms of shared resources — the more cache memory they share, the more they compete for the same memory bus — the worse the impact. A process sharing the same CPU core with the heavy app gets hit hardest. A process on a different core but sharing the same memory bus gets hit less. A process on a completely separate machine doesn't feel it at all.

Now here is the part that matters: you cannot shield against this. There is no setting in your operating system that says "protect this app from a neighbor's resource drain." The competition happens at the hardware level — the silicon, the circuits, the physical wires. Software can't block hardware-level contention. It would be like trying to stop your neighbor's noise by putting up a poster inside your apartment. The noise travels through the building's structure, not through your living room. Wrong layer.

This is gravity.

One Mechanism, Two Effects

Mass is a permanent stack that runs every tick. The engine must process it. That processing consumes budget. Budget consumption in a region does two things simultaneously:

It distorts the spatial graph's edge weights. Neighboring regions' routing costs increase. The cheapest path curves toward mass. Objects follow that path. From the render layer, this looks like gravitational attraction. This is curvature.

It starves neighboring content graphs of compute cycles. A clock near mass has fewer state transitions computed per tick. The clock genuinely advances less. Not "appears to" — genuinely. This is time dilation.

In standard physics, these are described by the same equation (the Schwarzschild metric), and physicists note they are "aspects of the same phenomenon." But it's never been clear why they're the same. In this framework, it's obvious: they're both consequences of budget consumption. One drain, two consequences. The math is the same because the mechanism is the same.

The Budget Drain

Mass is a permanent stack that runs every tick. The engine must resolve it — walk the self-referencing loop, follow every dependency — and that walk extends into neighboring nodes in the graph. Those neighbors get swept into the resolution. Their budget is partially consumed by mass's walk.

Nodes close to mass are heavily affected — most of their budget is consumed. Nodes far away are barely touched. This creates a gradient: high budget drain near mass, tapering off with distance. That gradient is the gravitational field.

Everything in the graph routes along the cheapest available path — the path that costs the least remaining budget to traverse. Near mass, the routing costs are skewed by the drain. The cheapest path curves toward mass. Objects follow that path, and from the render layer, we observe it as gravitational attraction. Physics calls this path a geodesic. The framework calls it: the scheduler routing along minimum cost.

Spacetime Is the Routing Table

In a computer network, a routing table is a map that tells data how to get from one place to another. It says: "To go from A to B, the cheapest route is through C, costing 3 units." When one route gets congested — when a heavy process monopolizes a link — the routing table updates. Traffic flows through a different, cheaper path. The routing table is not a physical road. It is a set of costs and connections that determines how everything navigates.

Spacetime, in this framework, is the routing table of the graph. When no mass is nearby, all routes cost roughly the same — flat spacetime, uniform routing costs. When mass is present, its budget drain makes nearby routes expensive. The routing table updates. Everything routes through cheaper paths, which curve away from the drain zone. From the render layer, this looks like curved spacetime.

What Einstein called the metric tensor — the mathematical object describing distances and geometry at every point in space — is the edge-cost matrix of the graph's routing table. Einstein measured the routing table with extraordinary precision. He predicted gravitational waves (vibrations in the routing table), light bending around stars (light following cheapest paths through a skewed table), and GPS clock corrections (budget deficit near Earth's mass). Every prediction has been confirmed. GR didn't get it wrong. It measured the routing table exactly right. It just didn't know that's what it was measuring.
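The routing-table picture can be sketched with ordinary Dijkstra shortest-path search on a tiny grid. Making the center node expensive to enter (the "congested" or "massive" node) is enough to make the cheapest path curve around it. All names here are illustrative.

```python
# Cheapest-path routing on a grid: cost[node] = price of entering it.
import heapq

def cheapest_path(cost, start, goal):
    """4-neighbour grid Dijkstra; returns the cheapest path as a list."""
    n = len(cost)
    dist, prev = {start: 0}, {}
    pq = [(0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < n and 0 <= nc < n:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

flat   = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
warped = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]   # heavy "mass" at the center

print(cheapest_path(flat,   (0, 0), (2, 2)))  # happy to cut through
print(cheapest_path(warped, (0, 0), (2, 2)))  # curves around (1, 1)
```

The algorithm never "feels" the mass; it just reads the updated cost table. That is the sense in which a geodesic is the scheduler routing along minimum cost.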

Figure 4 — From Computer Routing to Gravity: How a Stack Warps the Topology
Top left: a uniform network — all routing costs equal, cheapest path is straight. Top right: a heavy process saturates the center node — nearby edge costs skyrocket, cheapest path curves around it. Bottom: the same thing in the universe. Far from mass: uniform routing costs = flat spacetime. Near mass: the permanent stack drains budget, warping routing costs. The grid curves. Light follows the cheapest path, which bends around mass. Einstein called this curved spacetime. It is a routing table warped by a congested node.

Why Gravity Cannot Be Shielded

You can block every other force because every other force propagates through content graphs — through dependency chains between content structures. An electromagnetic wave is a content-graph phenomenon: one content structure's state changes propagating to another through dependency edges. You put a Faraday cage — another content structure — between them. The cage breaks the dependency chain. The wave can't reach the interior. Done.

Gravity doesn't propagate through content graphs. It is budget consumption cascading through the spatial graph itself — the very fabric of space. Mass's overflow stack runs every tick, consuming budget. That budget consumption affects every spatial region near it, because they share the same spatial graph structure. Your shield is a content structure. It sits inside the spatial graph. It can no more intercept budget consumption cascading through the spatial graph than a fish can block the current it's swimming in. Same system, different type of structure.

Gravitational Time Dilation

This is already implicit in everything above, but it is worth making explicit because it is one of the best-tested results in physics.

A clock is a device that counts state changes. Each tick of the clock is one state change. Near mass, the remaining budget is lower — mass's walk consumed part of it. With less budget, fewer state changes complete per tick. The clock ticks fewer times. Not because "time slowed down" in some metaphysical sense — because fewer things happened. The budget was consumed by mass. Less budget means less work done. Less work done means fewer ticks on the clock.

GPS satellites orbit about 20,200 km above Earth — far enough from Earth's mass that their budget drain is lower than at the surface. The gravitational effect alone makes their clocks tick faster than ground clocks by about 45 microseconds per day (their orbital speed claws back about 7, for a net gain of roughly 38). If engineers didn't correct for this, your GPS position would drift by about 10 km per day. Every time your phone shows your location within a few meters, it is accounting for the fact that time passes at different rates at different distances from mass.

Black Holes

A black hole is what happens when mass is concentrated enough that its budget drain reaches 100% at some distance from the center.

The event horizon is the surface where the drain consumes all of the local budget. Remaining budget = zero. Zero state changes per tick. Nothing can happen. Time stops. Nothing can propagate outward because outward propagation would require budget that doesn't exist. This is why nothing escapes a black hole — not because of a force holding things in, but because there is no budget left to process "leaving."

The singularity — the point at the center — is where the overflow stack exceeds any finite budget. In computing terms, this is a stack overflow exception: the pile of unfinished work grew beyond all bounds. Physics' equations produce infinities at the singularity. This framework says: of course they do. It's a crash. The engine can't process an infinite stack.

Hawking radiation: at the very edge of the event horizon, the budget is almost but not quite zero. In this narrow zone, the engine can slowly complete some cleanup operations — resolving the outermost edge of the overflow. Those completed operations produce return values (energy) that propagate outward. From outside, the black hole appears to slowly radiate energy. Over immense timescales, this drain causes the black hole to shrink and eventually evaporate. The stack slowly unwinds from the edge inward.

Gravitational Lensing — Light Follows the Cheapest Path

In 1919, Arthur Eddington photographed stars during a solar eclipse and found that stars near the edge of the sun appeared slightly shifted from their known positions. The sun's mass was bending their light. Einstein had predicted exactly this.

In this framework: light follows the cheapest path through the graph. Near the sun, the routing costs are skewed by the sun's budget drain. The cheapest path curves slightly toward it. Light doesn't "feel" gravity. It follows the routing table, and the routing table is warped.

Gravitational Waves

When mass changes — a binary star system spiraling inward, two black holes merging — the stack's computation pattern changes. Different amounts of work overflow each tick. The return values change. Those changing return values propagate through the spatial graph at speed C. The spatial graph's edge weights oscillate as the return values pass through. That oscillation IS a gravitational wave. LIGO detected exactly this in 2015.

Frame Dragging

If a mass is rotating — its cyclic graph topology has a directional bias in the cycle traversal — then the return values propagating outward carry that directional bias. Neighboring regions' edge weights become asymmetric: cheaper to traverse in the rotation direction than against it. From the render layer, this looks like space itself being "dragged" by the rotating mass. Frame dragging is confirmed experimentally (Gravity Probe B, 2011).

In a Computer (CS): A heavy process consumes shared resources. Neighbors slow down. No API blocks it — hardware contention. Routing tables update when one path gets congested. At 100% CPU saturation, the process has consumed everything — nothing else runs.

In This Framework: Mass's resolution walk drains neighbor budget. The gradient is gravity. Spacetime is the routing table. Geodesics = cheapest paths. Unshieldable — engine layer vs screen layer. Event horizon: 100% drain. Singularity: stack overflow. Hawking radiation: edge cleanup. Lensing: light follows cheapest route.

What We Observe (Physics): Gravity proportional to mass. Falls as 1/r². Cannot be blocked. GPS corrected 45μs/day. Light bends around stars (Eddington, 1919). Black hole event horizon: time stops. Singularity: equations give infinity. Hawking radiation: black holes slowly evaporate. All explained by one mechanism: budget drain.
✦ What This Means

Gravity is not a force. It is resource contention. A permanent overflow stack consumes budget from its neighbors in the spatial graph, creating a gradient. Everything routes along the cheapest path through that gradient — and from inside the spatial graph, that looks like attraction. You cannot shield it because it propagates through the spatial graph, and your shield is a content structure embedded within it. Einstein's spacetime curvature is the spatial graph's edge-cost matrix, warped by a congested node. Black holes are what happen when the congestion reaches 100%. Every prediction of general relativity — time dilation, light bending, event horizons — falls out of one mechanism: a permanent stack consuming shared resources.

05

Why Do Moving Clocks Run Slow? And Why Does Time Only Flow Forward?

Cosmic rays hit the upper atmosphere and create particles called muons. Muons are unstable — they decay in about 2.2 microseconds. At near-light speed, they should travel about 660 meters before decaying. But they are created at roughly 15 km altitude — and they reach the ground. Their internal clocks are running slower because of their speed. Meanwhile, if you could film a billiard ball collision and play it backward, the physics would still work perfectly. All the fundamental equations of physics work in both time directions. So why do eggs scramble but never unscramble? Why does time have an arrow?

In the previous section, we saw that gravity slows time by draining the local budget from the outside — mass's walk consumes your resources. But there is a second way to lose budget: spending it on motion.

Remember the entity model from Section 01: content graphs have no spatial location. The spatial graph references them. A content graph doesn't have a position — position is a spatial-graph concept. Your "location" is determined by which spatial region currently references your content graph.

So what does moving actually mean? Your content graph doesn't move — it has no position to move from. What changes is which spatial regions reference it. The old region drops its reference. The new region picks it up. The engine must re-evaluate your content graph in its new spatial context — new neighbors, new incoming budget effects, new demand environment.

That re-evaluation is work. It consumes budget. Every tick you're in motion, spatial regions are being remapped — old references dropped, new ones acquired, the content graph re-contextualized. The faster you move, the more regions are remapped per tick, the more budget goes to re-contextualization, the less remains for your content graph's internal state to change.

At the speed of light, the maximum number of screen regions are being remapped per tick. All budget goes to re-rendering. Zero budget remains for internal state changes. No time experienced.
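If you want to see the budget split as actual code, here is a toy version. Fair warning: the square-root form below is borrowed straight from special relativity so the numbers match observation; nothing in this framework derives it, and the function name is mine.

```python
import math

# Toy budget split (illustrative only): each tick has a fixed budget of 1.0.
# Motion consumes some of it; whatever is left drives internal state change.
# The sqrt form is the Lorentz factor from special relativity, borrowed here
# so the toy matches what clocks actually do -- the framework does not derive it.
def internal_budget(v):
    """Fraction of the tick budget left for internal change; v is speed as a fraction of c."""
    return math.sqrt(1 - v * v)

print(internal_budget(0.0))              # 1.0 -- at rest, all budget is internal
print(round(internal_budget(0.995), 3))  # 0.1 -- the muon case: clock ~10x slower
print(internal_budget(1.0))              # 0.0 -- at c, nothing left: no time
```

The three printed cases are the three stories this section tells: a stationary clock, a fast muon, and a photon.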

The Equivalence Principle — Derived, Not Assumed

Einstein's equivalence principle states that acceleration is locally indistinguishable from gravity. In this framework, this is derived:

Acceleration means the rate of spatial-graph edge updates is changing. More edge updates per tick = more budget consumed. Gravity means mass's stack is consuming budget in your region. Both are budget drain on your content graph, from different sources. There is genuinely no difference at the budget level between "mass consumed your budget" and "your own acceleration consumed your budget." Same drain. Same effect. The equivalence principle falls out.

The Muon Paradox

A muon created in the upper atmosphere by cosmic rays moves at about 99.5% of the speed of light. At that speed, nearly all of its budget is consumed by the projection remapping — its graph is being handed from screen region to screen region at near-maximum rate. Almost nothing remains for internal processes, including the decay process that would normally destroy it in 2.2 microseconds.

From our perspective on the ground, the muon's internal clock is running roughly 10 times slower than ours. Its 2.2-microsecond lifetime stretches to about 22 microseconds in our frame, long enough that a substantial fraction of muons survive the roughly 50-microsecond trip from 15 km up. Without that stretching, essentially none would reach the ground.

The measurement is exact. The mechanism, in this framework, is budget: the muon is spending so much on motion that its internal state barely changes.
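The arithmetic behind the muon story is worth seeing once. The constants below are standard textbook values (muon lifetime, speed of light), not anything special to this framework:

```python
import math

V = 0.995              # muon speed, as a fraction of the speed of light
C = 299_792_458        # speed of light in m/s
TAU = 2.2e-6           # muon lifetime in its own frame, seconds
ALTITUDE = 15_000      # creation altitude, meters

gamma = 1 / math.sqrt(1 - V**2)     # time-dilation factor, about 10
dilated = gamma * TAU               # lifetime in our frame, about 22 microseconds
trip = ALTITUDE / (V * C)           # travel time to the ground, about 50 microseconds

# Decay is exponential, so the surviving fraction is exp(-trip/lifetime):
with_dilation = math.exp(-trip / dilated)   # roughly 10% reach the ground
without = math.exp(-trip / TAU)             # effectively zero without dilation

print(round(gamma, 2), round(with_dilation, 2), without)
```

Run it and the dilated case leaves a healthy fraction of muons alive at ground level, while the undilated case leaves essentially none. That gap is the measurement.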

The Twin Paradox

Twin A stays on Earth. Twin B climbs into a spaceship, travels to a distant star at near-light speed, and returns. When they reunite, Twin B is younger.

This is real. It is confirmed by experiments with atomic clocks on aircraft and satellites. It is not a thought experiment.

In this framework: Twin B's graph was remapped across vastly more screen regions than Twin A's. Every remapping consumed budget. Over the entire journey, Twin B's graph had less budget available for internal state changes. Fewer internal changes accumulated. Fewer revisions means less experienced time means younger.

Twin A stayed put — minimal position rewrites, full budget for internal state. More revisions accumulated. More experienced time. Older.

Why Nothing Can Go Faster Than Light

At the speed of light, all of the budget is consumed by propagation. Zero budget remains for internal state changes. A photon uses its entire budget moving. It experiences no time — not because "time stops for light" in some mystical sense, but because there is literally no budget left for anything to happen inside it.

Going faster than light would require spending more budget per tick than exists. This is like trying to write a cheque for more than your bank balance. There is no "overdraft" on the universe's budget. The speed of light is not a rule imposed by nature. It is the structural maximum: you cannot spend more than the budget allows.

The Arrow of Time

This is one of the oldest puzzles in physics, and this framework dissolves it completely.

The equations of classical mechanics and quantum mechanics are time-symmetric — they work equally well run forward or backward. If you film a billiard ball collision and play it in reverse, the reversed film shows valid physics. So where does the one-way direction of time come from? Why can you scramble an egg but never unscramble it?

The standard physics answer involves entropy, thermodynamics, and special initial conditions at the Big Bang. It has never fully satisfied anyone because it pushes the question back without answering it: why were initial conditions low-entropy?

This framework's answer is simpler. The Rule takes an input and produces an output. The output depends on the input. The input does not depend on the output. You cannot un-run a function that has already returned. You cannot un-commit a state that has already been written.

The equations of physics are symmetric because they describe the render layer — and the render layer's geometry can be symmetric. But the engine's Rule runs in one direction by construction. That is the arrow. Not entropy. Not initial conditions. The direction of time is the evaluation direction of the Rule. Functions run forward.

In a Computer (CS)
Data transfer costs cycles. Faster transfer = more cycles consumed = fewer available for other work. At max transfer rate, all cycles go to movement. A function takes input, produces output — you can't reverse it. The call stack grows forward only.

In This Framework
Motion = graph's projection remapping across screen regions. Faster = more regions re-rendered per tick = more budget consumed = less internal state change = slower time. At c: all budget on remapping, zero internal. Arrow: the Rule runs one direction. Not reversible.

What We Observe (Physics)
Muons survive to reach ground. Twin paradox confirmed by atomic clocks. Photons experience no time. Nothing exceeds c. Eggs scramble but don't unscramble. Entropy increases. All explained by budget consumption and the directionality of the Rule.
✦ What This Means

Time is not a river that flows or a dimension that stretches. It is a counter: how many state changes happened to you. Near mass, fewer happen — that's gravitational dilation. Moving fast, fewer happen — that's velocity dilation. At light speed, none happen — that's why photons don't age. The speed limit is not a law — it's a budget cap. And the arrow of time is not a mystery for thermodynamics to solve. It is the fact that computation runs forward. One input, one output, no reversal.

06

Why Can't Anything Go Faster Than Light? And Is "Now" the Same Everywhere?

Einstein showed that two observers moving relative to each other can genuinely disagree about whether two events happened at the same time — and neither is wrong. There is no universal "now." This is one of the most counterintuitive results in physics, yet it has been confirmed by every GPS satellite, every particle accelerator, every test anyone has ever run. How can two people disagree about what's happening "right now" and both be correct?

Here is something that happens in every system where information takes time to travel — not just computers, not just databases, but everywhere.

Imagine a teacher in a large lecture hall. She writes a number on the whiteboard. Students in the front row see it almost instantly — light travels fast, and they're close. Students in the back row see it a fraction of a second later. A student watching via a video feed from another building sees it two seconds later.

At any given instant, different students have different versions of "what's on the board." The front-row student knows the latest number. The video-feed student is seeing a number from two seconds ago. They are all correct — they're just looking at different moments in the same sequence. Their "now" is different because information took different amounts of time to reach them.

Now imagine the teacher writes a new number every second. The front-row student is a few hundredths of a second behind. The back-row student lags a fraction of a second more. The remote student is two seconds behind. None of them are wrong. They just have different lag to the source.

Computer scientists call this eventual consistency: in any system where one source writes data and multiple observers read it, the observers will eventually all agree — but at any given moment, they hold different versions. This is not a bug. It is a fundamental property of any system where information travels at a finite speed.
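The lecture hall is a distributed system, and the sketch below makes the mapping literal. The class names (`Source`, `Observer`) are invented for illustration:

```python
# One writer, many readers, each with a different lag to the source.
class Source:
    def __init__(self):
        self.log = []                    # (tick_written, value)

    def write(self, tick, value):
        self.log.append((tick, value))

class Observer:
    def __init__(self, source, lag):
        self.source, self.lag = source, lag

    def now(self, tick):
        # The latest write that has had time to reach this observer.
        seen = [v for t, v in self.source.log if t + self.lag <= tick]
        return seen[-1] if seen else None

board = Source()
for t in range(5):
    board.write(t, f"number-{t}")

front = Observer(board, lag=0)    # front row: effectively no lag
remote = Observer(board, lag=2)   # video feed: two ticks behind

print(front.now(4))                      # number-4
print(remote.now(4))                     # number-2 -- a different "now", equally correct
print(front.now(10) == remote.now(10))   # True -- eventual consistency
```

Nobody in this sketch is wrong about the board. They are reading the same log through different lags, and given enough ticks they all converge on the same value.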

This is not just a database thing. It is how all distributed systems work. Your phone gets a push notification later than your laptop because it has more lag to the server. Two weather stations report different temperatures because they are measuring at different times relative to the moving weather front. Every system with a source and observers at different distances has this property.

The universe is this system. The engine writes the state once per tick. Every location in the universe is an observer with a different amount of lag to every event. Your "now" is the latest version of reality that has reached your location. Someone on the other side of the galaxy has a completely different "now" — not because time is strange, but because they have a different amount of lag to the same events.

Two people looking at the same supernova and disagreeing about when it happened are not confused. They are both right. They have different replication lag to the event. They will eventually agree — once the light from the supernova has had time to reach both of them. Until then, their "nows" are different.

The preferred ordering — exists but invisible

The engine advances by a global tick. Step 1, step 2, step 3. This implies a preferred causal ordering — an absolute sequence underneath everything. But from inside the render layer, you can't detect it. The projection function produces geometry that looks the same to every observer within it. Physics calls this Lorentz symmetry, and it has been tested to extraordinary precision. The framework's prediction: at extreme (Planck-scale) energies, tiny deviations from this symmetry should be detectable, because the discrete ticking structure of the engine would begin to show through.
In a Computer (CS)
Eventual consistency: all replicas eventually agree, but at any moment they hold different versions. Lag = delay between a write and its arrival at each observer. Maximum propagation = network throughput. Fundamental to all distributed systems, not just databases.

In This Framework
The engine is the source. Each location is an observer with lag = light-travel time to the event. "Now" = your local version number. Speed of light = maximum propagation (one hop per tick at full budget). No universal now. All observers eventually agree. Preferred ordering exists at engine level but is invisible from the screen.

What We Observe (Physics)
Relativity of simultaneity: two observers disagree about event ordering. No faster-than-light communication. "Now" is local, not universal. Speed of light is the universal maximum. GPS accounts for this. Confirmed by every experiment in special relativity.
✦ What This Means

There is no universal "now." Not because time is strange or reality is subjective — but because you are an observer in a distributed system, and your version of events depends on how far each state change has traveled to reach you. Einstein said the same thing with Lorentz transformations. This framework says the same thing with replication lag. The speed of light is not a mysterious cosmic rule. It is the maximum throughput of the engine. You can't go faster because there is no more budget to spend.

07

What IS the Double-Slit Experiment Actually Showing Us?

This is the experiment that broke physics. Fire a single particle at a wall with two slits. Common sense says it goes through one slit or the other. But something impossible happens: the particle produces a pattern on the far wall that is only possible if it went through both slits simultaneously. The pattern — alternating bright and dark bands — is the signature of two waves interfering with each other.

Now put a detector at one slit to watch which one the particle goes through. The pattern vanishes. The particle behaves as if it went through just one slit. Turn off the detector — the pattern comes back. It gets stranger: the quantum eraser experiment adds a detector, then erases the detector's record after the particle has already hit the wall. The interference pattern reappears — as if the particle retroactively changed its mind. For a hundred years, no one has explained what is physically happening.

Remember the GPU from Section 02.

Your graphics card displays the current frame while computing the next one. To compute that next frame, it walks every light ray through the entire 3D scene — rays bouncing off objects behind walls, rays from off-screen lights, rays through transparent surfaces. It walks all of them because any of them might contribute to the final color of any pixel on screen.

The GPU doesn't decide in advance which rays matter. It walks every path, computes every contribution, lets those contributions add together (or cancel out) at each pixel, and only then commits — one definite color per pixel, written to the back buffer. The front buffer (what you see) doesn't change until the computation is done.

You, playing the game, only see committed frames. You never see the computation in progress. From inside the game world, light just "goes" to the right places. From outside, you can see the engine was walking every possible path before committing anything.
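That walk-everything-then-commit pattern is double buffering, and it fits in a few lines. This is generic renderer logic, not any real graphics API:

```python
# Double buffering in miniature: every path's contribution is accumulated in a
# back buffer, and the frame becomes visible only after it is fully computed.
def compute_frame(paths_per_pixel):
    back = []
    for paths in paths_per_pixel:
        back.append(sum(paths))    # all paths contribute; some may cancel
    return back                    # the "swap": back buffer becomes the frame

# Two pixels: one where paths reinforce, one where they cancel exactly.
front = compute_frame([[0.5, 0.5], [0.5, -0.5]])
print(front)   # [1.0, 0.0] -- one definite value per pixel, never a half-computed frame
```

The player only ever sees `front`. The cancellation at the second pixel happened entirely inside the computation.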

The universe's engine does exactly the same thing.

Why Some Things Resolve and Others Don't

Each tick, the engine processes every node in the graph. But not every node gets treated the same way:

What the engine does with each node every tick

Strict consumer present — resolve

The graph is complex enough — interconnected, self-referencing, or joined to many committed neighbors — that the engine resolves it to a definite value and writes it to the next frame. Done. One answer. This is "classical" behavior: your coffee cup, a baseball, anything large and interconnected.

No strict consumer — carry the walk

The graph is simple and isolated. The engine doesn't commit it — but it still walks the full dependency tree through every connected path, because this graph might influence committed nodes downstream. This dependency walk is the wave function. It is the engine doing what engines do: evaluating all dependencies before committing a frame.

This is lazy evaluation — the engine doesn't produce answers nobody asked for. Small, isolated particles have no strict consumers. Everyday objects with 10²⁶ interconnected atoms are dense webs of strict consumers — they resolve themselves every tick.
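"Lazy evaluation" and "strict consumer" are real programming terms, and the core behavior is easy to show. The names in this sketch are mine:

```python
# Lazy evaluation: describing a computation does not run it.
evaluations = []

def resolve_position():
    evaluations.append("ran")
    return "definite position"

thunk = resolve_position        # a reference to the work, not the work itself
assert evaluations == []        # no strict consumer yet, so nothing has run

answer = thunk()                # a strict consumer demands a definite value
assert evaluations == ["ran"]   # only now does the computation actually happen
print(answer)
```

The thunk can sit there forever, fully described but never computed, until something downstream actually needs its value. That is the claim this section makes about isolated particles.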

The Double-Slit Experiment — Step by Step

A particle is fired at a barrier with two slits. Here is what happens in the engine:

Step 1 — Nothing demands a definite answer from the particle. Once fired, the particle is an isolated content graph. No committed structure's next state depends on it having a definite position. The engine computes its dependency tree every tick — walks every path — but produces no single answer because nothing is asking for one.

Step 2 — The engine walks every available path. The particle's graph connects through two paths — one through each slit. The engine walks both, computing potential influence on everything downstream. This is not the particle "going through both slits." It is the engine computing an unresolved graph's full dependency tree — exactly what lazy evaluation does with a value nobody has demanded yet.

Step 3 — The walked paths converge. On the far wall, the two walked paths arrive at the same detection nodes. Where they arrive in sync — peaks aligned — their influences add up. Bright band. Where they arrive out of sync — peak meets trough — they cancel. Dark band. This convergence pattern is part of the engine's computation for the next frame.

Step 4 — The screen is a strict consumer. The detection screen is a macroscopic committed structure — dense internal dependencies, self-resolving every tick. When the particle reaches it, the screen's next state depends on the particle's position — the screen NEEDS a definite answer. Cache bust. The engine resolves the particle's graph. One definite spot. But the probability of where it lands was shaped by the two-path walk. Over thousands of particles, the accumulated pattern is the interference pattern: alternating bright and dark bands.

The wave function is the content graph's evolving state at the data layer — not projected to the spatial graph. The particle's content graph connects through path A and path B. No strict consumer demands a definite answer — so the engine walks both paths, carrying the walk as the graph's state at the data layer. The multi-path walk IS the wave function. The particle is not "in two places at once." The engine is computing an unresolved graph's influence through every available path — lazy evaluation. The interference pattern is the statistical fingerprint of many cache busts, each shaped by a data-layer computation you never directly observed.
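Steps 1 through 4 are, in the math, the standard two-amplitude rule: when no consumer demands a path, amplitudes add before squaring; when a detector forces one path, probabilities add after squaring. A sketch of just that rule:

```python
import cmath, math

def intensity(delta, resolved):
    """Detection intensity for two paths separated by phase delta (toy model)."""
    a = 1 + 0j                   # amplitude via slit A
    b = cmath.exp(1j * delta)    # amplitude via slit B, phase-shifted by the path difference
    if resolved:
        # A strict consumer forced one definite path: probabilities add. Flat.
        return abs(a) ** 2 + abs(b) ** 2
    # Unresolved: amplitudes add first, producing the 2*cos(delta) interference term.
    return abs(a + b) ** 2

print(intensity(0.0, resolved=False))      # 4.0 -- bright band
print(intensity(math.pi, resolved=False))  # ~0  -- dark band
print(intensity(0.0, resolved=True))       # 2.0 -- detector on: no bands at all
```

The only difference between the fringed and flat cases is where the squaring happens, which is exactly the difference between "walk both paths" and "resolve at the slit."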

Why a Detector Destroys Interference

This has puzzled physicists for a century. Place a detector at one slit, and the interference pattern vanishes. The particle goes through just one slit. Why would "looking" change what happens?

"Looking" is the wrong word. Here is what actually happens:

A detector is a physical device — a committed structure whose nodes are resolved every tick. When you place a detector at one slit, you create a dependency: the detector's next state depends on whether a particle came through that slit. That dependency is a cache bust. The engine must resolve the particle's which-path value at that slit to compute the detector's next frame.

Once the which-path graph is resolved, there is only one path contributing to the far wall. One path means no convergence of two paths. No convergence means no interference. The pattern vanishes.

The detector didn't "observe" in any mystical sense. It is a strict consumer that demanded a definite value from an unresolved graph — and that demand forced resolution. Change the consumers, change what gets resolved. That's all that happened.

The Quantum Eraser — The Real Test

This is where it gets extraordinary. The quantum eraser experiment, first performed in the 1990s, is designed to push the double-slit result to its logical extreme.

Setup: you run the double-slit experiment with a detector at each slit — so the which-path information is recorded, and interference is destroyed. But then, after the particle has already hit the far screen, you erase the detector's which-path record. Impossibly, when you look at the subset of results correlated with the erased detectors, the interference pattern reappears.

It appears as though the particle retroactively "changed its mind" about going through one slit or both. This seems to require the future affecting the past.

In this framework, no retrocausality is needed. Here is what actually happens:

The entangled signal-idler pair is one graph. When the signal photon approaches the slits, the engine walks the full graph — including the idler's structure — through both paths. But because the signal and idler are entangled, the two paths through the slits carry different idler-state baggage. Path A correlates with idler state A. Path B correlates with idler state B. These paths are distinguishable within the full dependency tree — not because anyone measured them, but because they connect to different downstream nodes in the idler's subgraph.

Distinguishable paths cannot interfere cleanly. When the signal photon hits the screen (cache bust on position), the two paths land at the same screen positions but they're tagged with different idler correlations. They average out to a blob. No interference visible — but both subsets are in the data.

Now comes the key move. When you "erase" — you measure the idler in a basis that does not distinguish the paths. The measurement resolves the idler, but it resolves it in a way that merges the path-distinguishing states before commitment. In graph terms, this is a null operation on the which-path edge — like adding +1 and then −1. The net effect on which-path distinguishability is zero. No cache bust ever occurred on the which-path edge itself. The multi-path structure was never resolved into "slit A" or "slit B."

When you sort signal hits by their erased-idler partners, you're looking at a subset where the which-path information was never individually resolved. The two-path walk was never interrupted. Interference is right there in that subset.

When you don't erase — you measure the idler in a way that does distinguish paths — that IS a cache bust on the which-path edge. The subtree recomputes with a definite path value. That subset shows no interference.

Think of it like a spreadsheet. You have rows tagged "Category A" and rows tagged "Category B", and viewed together they average into a shapeless blob. Filter by category and a clear pattern appears in each view, because you have isolated the data points that create it. Delete the filter and the blob returns. The underlying data never changed. Only the view did.

That is the quantum eraser. The interference was always in the data. The entanglement created a subgraph join that made the paths distinguishable, washing out the pattern in the aggregate. Erasing removes the distinguishability — a null operation on the sorting edge. Sort the hits by the erased-idler outcome and the fringes emerge in each subset. And crucially: frame N is already committed. The signal photons already landed. You cannot go back. You can only choose how to sort the already-committed data. The n-1 principle holds: the engine computed frame N+1 from frame N. You are now in a later frame, choosing a filter.
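The sort-key picture can be faked with made-up numbers: give every hit an idler tag, let the two tagged subsets carry opposite fringes, and watch the aggregate go flat. All data here is invented for illustration:

```python
import math

# Invented hit data: fringes for idler tag "A", anti-fringes for tag "B".
positions = [i * 0.1 for i in range(100)]
hits = [(x, "A", 1 + math.cos(x)) for x in positions] \
     + [(x, "B", 1 - math.cos(x)) for x in positions]

# Aggregate over both tags: the fringes cancel into a featureless blob.
total = {x: 0.0 for x in positions}
for x, tag, w in hits:
    total[x] += w
flatness = max(total.values()) - min(total.values())        # essentially zero

# Filter by the idler tag: the pattern was in the subset all along.
subset_a = {x: w for x, tag, w in hits if tag == "A"}
contrast = max(subset_a.values()) - min(subset_a.values())  # near the full range of 2

print(round(flatness, 6), round(contrast, 2))
```

Nothing about the hits changed between the two readouts. One view sums over the tag; the other sorts by it.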

Why Basketballs Don't Show Quantum Behavior

If everything in the graph gets processed every tick, why doesn't a basketball produce an interference pattern when thrown through two doors?

Because a basketball has 10²⁶ atoms. Each atom's next state depends on its neighbors' definite positions and forces. Atom 47 can't compute its next tick without a definite answer from atoms 46 and 48. Every atom is a strict consumer of its neighbors. The basketball's internal dependency structure demands definite values at every node, every tick. It resolves itself — not because it's "large" in some abstract sense, but because its own internal dependencies require it. No multi-path walks survive. Classical behavior, every tick.

The cause is graph complexity — the sheer number of subgraph joins between the basketball's atoms and their environment. An isolated basketball with zero connections to other graphs — impossible in practice, valid in principle — would theoretically show quantum behavior. Not because it got simpler internally, but because without subgraph joins to committed neighbors, its graph would have no strict consumers. What prevents quantum behavior in everyday objects is the overwhelming web of strict-consumer dependencies — each atom demanding answers from its neighbors, forcing resolution everywhere. Physicists call this decoherence. In this framework, decoherence is: strict consumers demanding definite values.

What Happens When You Send an Entangled Particle Through the Slits?

Here is a prediction that falls directly out of the framework — and matches real experiments.

If you fire a single, unentangled particle at the double slit, the engine walks a simple graph through two paths. Clean two-path interference pattern.

But what if the particle is entangled with a partner somewhere else? Remember: entangled particles are one graph. The engine doesn't walk just the particle — it walks the full graph, including the partner. The walk through the two slits now carries the partner's structure along with it. The combined graph is more complex than a lone particle's graph.

The result: the single-particle pattern shows no clean interference on its own. It looks random. But when you correlate the results with measurements on the partner particle, you can extract subsets that DO show interference. The partner's measurement acts as a sort key — exactly like the quantum eraser. The full pattern contains all the information. Looking at one particle alone averages over the partner's contributions, washing out the interference. Filter by the partner's value, and clean interference subsets emerge.

Physicists have confirmed this experimentally with entangled photons since the 1990s. In this framework, the result is inevitable: the engine walks the full graph, and the full graph includes both particles. No additional mechanism needed — it falls directly out of "the engine walks the graph."

When Graphs Share a Screen Region

One more consequence of the entity model — and it leads to a genuine prediction.

Multiple graphs can be referenced by the same screen region. When any one of them resolves, the region re-renders. That re-render creates new dependency edges to every graph referenced by that region — even graphs that are completely unrelated to the one that just resolved.

Think of it like an open-plan office. When one person's phone rings loudly (their "graph resolves"), everyone nearby is momentarily disrupted — even people working on completely unrelated projects. The disruption isn't because they're collaborating. It's because they share the same physical space.

This is the framework's mechanism for environmental decoherence: more resolved graphs in the same screen region means more subgraph joins for every uncommitted graph nearby, which makes their combined graphs more likely to acquire strict consumers, which means it's harder for any graph to stay unresolved. Dense environments (lots of committed stuff nearby) force quantum systems into classical behavior faster — not because of direct interaction, but because proximity creates indirect strict consumers through shared spatial-graph dependencies.

A candidate prediction

Standard decoherence theory says a quantum system loses coherence based on its interaction strength with the environment. This framework says it's based on whether strict consumers exist — committed structures whose next state depends on a definite value from the quantum system, regardless of direct interaction strength. Usually these predict the same thing (nearby committed things create subgraph joins through dependency edges). But for two quantum systems that overlap spatially without interacting (different force carriers, no shared quantum numbers), our framework predicts slightly faster decoherence than standard QM. The trigger is proximity creating indirect strict consumers — when one system resolves, the region re-renders, creating new dependency edges that make nearby systems' values needed. If testable, this would distinguish this framework from standard quantum mechanics.
In a Computer (CS)
GPU double buffering: display frame N while computing frame N+1. Walk every light path, then commit one color per pixel. Cache invalidation: when a dependency changes, recompute the subtree. A filter on a dataset hides patterns. Remove filter, pattern returns. Data never changed — the view did.

In This Framework
Wave function = engine's dependency walk of uncommitted graphs. Interference = walked paths converging. Detector = cache bust forcing subtree resolution. Eraser = null operation on the which-path edge — no cache bust occurred. Decoherence = strict consumers demanding definite values. The particle never moved through both slits. The engine walked both paths.

What We Observe (Physics)
Double-slit interference from single particles. Detector kills it. Quantum eraser "restores" it. Large objects don't show quantum behavior. All confirmed to extraordinary precision. All explained by one mechanism: the engine walks before it commits.
✦ What This Means

The double-slit experiment is not strange. It is lazy evaluation. The engine computes the particle's influence through every path but doesn't produce a definite answer because nothing is asking for one. The wave function is the content graph's evolving state at the data layer. "Collapse" is a strict consumer demanding a value. The quantum eraser is a null operation on the which-path edge — the data never changed, you changed the filter. And decoherence — the reason your coffee cup doesn't show quantum behavior — is that everyday objects are dense webs of strict consumers, every atom demanding answers from its neighbors. Quantum mechanics is not a mystery. It is lazy evaluation.

08

How Can Two Particles Be Connected Across the Universe?

Prepare two particles together, then separate them to opposite ends of the galaxy. Measure one — and the other's value is instantly determined, no matter how far apart they are. No signal could possibly travel between them in time. Einstein called this "spooky action at a distance" and spent years trying to prove it was an illusion. In 1964, physicist John Bell proved it wasn't: the correlations are real, and they cannot be explained by any hidden information carried separately inside each particle. The connection is genuine. No one has explained what it actually is.

This is the section where the framework's two-layer structure pays off most directly. The explanation is not complicated — but it requires taking the two-layer idea seriously.

What Happens When Two Particles Interact

Two particles meet and interact. In graph terms: their separate graphs merge into one graph. They are no longer two independent things in the engine — they are a single connected structure.

This merged graph has no strict consumer demanding its resolution. Its values do not exist — not "unknown to us" but genuinely not yet computed, because nothing has asked for a definite answer.

Now the particles physically separate in the spatial graph. One goes left, one goes right. In the spatial graph, they look like two distant objects — potentially on opposite sides of the galaxy.

But they are not two objects in the engine. They are one content graph, referenced by two spatial regions.

Remember Section 01: content graphs have no spatial location. The spatial graph references them. When the particles "separate," the content graph doesn't split. The content graph stays as one structure with no spatial properties. What moved were the spatial references. Two different spatial regions now reference the same content graph. The distance between those regions is real in the spatial graph. It is meaningless to the content graph — content graphs have no concept of distance.

What Happens When You Measure One

A scientist measures particle A. The measurement device is a committed structure — its next state depends on the particle's value. That dependency is a cache bust. The engine must resolve the graph.

The engine resolves the whole graph — because it is one graph. It doesn't resolve "particle A" and then notify "particle B." There is no separate A and B in the engine. There is one graph. The engine computes definite values for all nodes in the graph, in one tick.

Every spatial region referencing this content graph updates with the committed values. The region showing "particle A" and the region showing "particle B" both update — because they were both referencing the same content graph all along.

Why There Is No Communication

There was never anything to communicate. Communication requires two separate things sending messages. But there aren't two separate things. There is one content graph, referenced by two spatial regions. When the engine resolves the content graph, both references update. No signal is sent from A to B because A and B are not separate objects — they are the same content graph, referenced from different regions in the spatial graph.

This is exactly like a shared Google Doc. Someone in New York edits the document. A reader in Tokyo sees the change. Not because a signal went from New York to Tokyo — but because both users are looking at the same document. The document doesn't exist in New York or Tokyo. It exists in the cloud. The "distance" between users is irrelevant because the thing they're both reading has no location.
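The Google Doc analogy is just how references work in everyday code: two names, one object. The dictionary and names below are invented for illustration:

```python
# One content graph, referenced from two "spatial regions."
content_graph = {"resolved": False, "value": None}

region_here = content_graph   # a reference, not a copy
region_far = content_graph    # the same object, "referenced" from elsewhere

def measure(ref):
    # Cache bust: a strict consumer demands a definite value from the graph.
    ref["resolved"] = True
    ref["value"] = "spin-up"

measure(region_here)
print(region_far["value"])        # spin-up -- no message was sent anywhere
print(region_here is region_far)  # True -- there was only ever one graph
```

Asking how the update "traveled" from `region_here` to `region_far` is a malformed question in this sketch, and that is precisely the section's claim about entangled particles.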

Why Bell's Theorem Doesn't Apply

In 1935, Einstein, Podolsky, and Rosen argued that quantum mechanics must be incomplete. Their reasoning: if measuring one particle instantly determines the other's value across any distance, either the values were predetermined (hidden variables) or something travels faster than light. Since faster-than-light travel violates relativity, Einstein concluded hidden variables must exist.

In 1964, John Bell proved mathematically that no local hidden variables — information stored separately inside each particle — can reproduce quantum predictions. The correlations are too strong. Whatever connects the particles, it isn't a secret note carried by each one independently.

This framework resolves the paradox. The hidden state is not local — not stored inside either particle. It is the graph itself — a structure in the engine layer, which has no spatial location. Bell rules out local hidden variables. This framework's variables are non-local by construction. The graph doesn't exist "inside" particle A or "inside" particle B. It exists in the engine, which has no space.

Entanglement requires no communication whatsoever. Two entangled particles are one content graph referenced by two regions in the spatial graph. When a cache bust forces the content graph to resolve, every spatial reference updates — not because a signal traveled between regions, but because they were always referencing the same data. The distance between them exists only in the spatial graph. In the content graph, there was never a distance. There were never two things. There was one graph, uncommitted, with the engine walking its full dependency tree each tick. When a cache bust came, both references updated. That is all that happened.

In a Computer: A database writes two values in one transaction. They are consistent everywhere instantly — not because a signal traveled, but because they were written together. Two servers reading the values see consistent results from opposite sides of the planet. Structural consistency, not communication.

In This Framework: Entangled particles share an edge in the engine layer (no spatial distance). When a cache bust forces resolution, the engine resolves the full graph in one step — one computation, two outputs. No signal. No communication. The correlation was structural from the moment of interaction. Bell's theorem: satisfied by construction — these are non-local, not local.

What We Observe: Entangled particles show instant correlations regardless of distance. Bell's theorem rules out local explanations (1964). Experiments confirm non-local correlations to extraordinary precision. No information is transmitted faster than light. Einstein's "spooky action" is real but not action — it is structural.
✦ What This Means

There is nothing spooky about entanglement. Two particles interact, forming one content graph. Content graphs have no concept of distance — "near" and "far" are spatial-graph properties. When a cache bust forces one particle's resolution, the entire content graph resolves as one computation. Both spatial references update. The correlation isn't transmitted across space. It was written into the content graph when the particles first interacted. It was always there. You just couldn't see it until a cache bust forced the resolution.

09

What Else Falls Out — And What We Don't Know Yet

The Principle of Least Action

For three hundred years, physicists have observed that every physical process follows the minimum-cost path. A ball thrown in the air follows a parabola. Light passing through glass bends at exactly the angle that minimizes travel time. Planets orbit in ellipses that minimize a quantity called "action." This principle — the principle of least action — underlies all of classical mechanics, all of quantum mechanics, all of general relativity. It has never been violated. And it has never been explained.

In this framework, the explanation is one sentence: that is what schedulers do. Every routing algorithm, every compiler optimizer, every pathfinding system finds the cheapest path through a graph. The engine's scheduler routes computation along the minimum-cost path through the graph's dependencies because that is the natural behavior of any system resolving a graph under budget constraints. The universe doesn't "know" the optimal path. The engine just routes the way all schedulers route.
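The scheduler claim can be made concrete with the textbook cheapest-path algorithm. Below is Dijkstra's algorithm in plain Python (the graph and its edge costs are made up for illustration): the algorithm never "knows" the optimal route in advance, yet always returns the minimum-cost path, which is the behavior the paragraph attributes to the engine.

```python
import heapq

def cheapest_path(graph, start, goal):
    """Dijkstra's algorithm: route along the minimum-cost path through a
    weighted graph, the generic behavior of any cost-aware scheduler."""
    queue = [(0, start, [start])]  # (cost so far, node, path taken)
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, {}).items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

# Edge costs stand in for the "action" of each step. The direct hop is
# expensive; the two-hop route is found without any global foresight.
graph = {"A": {"B": 10, "C": 2}, "C": {"B": 3}, "B": {}}
cost, path = cheapest_path(graph, "A", "B")
# cost == 5, path == ["A", "C", "B"] (via C, not the direct 10-cost hop)
```

Nothing in the loop optimizes "globally"; the minimum-cost path falls out of greedily expanding the cheapest frontier, which is the sense in which least action is what schedulers do.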

Inertia — Why Heavy Things Are Hard to Move

Mass is a stack of overflow frames. The deeper the stack, the more work is required to change its state — because you have to overwrite frames that are simultaneously rebuilding themselves every tick. A proton's stack is shallow and easy to redirect. A planet's aggregate stack is incomprehensibly deep and resists change proportionally. Inertia is not a separate law. It is the cost of rewriting a deep, self-rebuilding stack.

Entropy — Why You Can't Unscramble Eggs

Each tick, the Rule produces a new state from the current one. That state is written. It cannot be unwritten. The history of states grows monotonically — it is an append-only log. You can add entries but never remove them. This is why entropy always increases: the universe accumulates irreversible history because the Rule runs in one direction. Scrambling an egg adds to the log. Unscrambling would require deleting entries — which the Rule does not permit.
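The append-only claim maps directly onto a data structure. Here is a minimal Python sketch (the class is invented for illustration): a log that exposes `append` and deliberately nothing that removes entries.

```python
class AppendOnlyLog:
    """A log that only grows -- a toy model of irreversible history."""
    def __init__(self):
        self._entries = []

    def append(self, state):
        self._entries.append(state)

    def __len__(self):
        return len(self._entries)

    # Deliberately no pop/remove/clear: the Rule runs in one direction.

log = AppendOnlyLog()
for state in ["egg", "cracked", "scrambled"]:
    log.append(state)

# History is monotone: every tick can only increase the entry count.
assert len(log) == 3
```

"Unscrambling" would require an operation the interface simply does not have.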

The Dual-Role Pattern

Throughout this document, a recurring structural pattern has appeared: whenever a single quantity serves two roles in the same operation, it appears squared. The budget appears squared in E = mc² because it governs both storage and propagation. The amplitude α appears squared in quantum probability because it determines both the walk contribution and the resolution weight. This is not two separate coincidences. It is one architectural pattern manifesting in two domains. We call this the dual-role pattern — and its recurrence suggests something deep about the structure of computation itself.
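The dual-role pattern is, at bottom, an arithmetic observation: when one quantity enters a product in two independent roles, it shows up squared. A toy Python sketch (the function and numbers are illustrative only, not a derivation):

```python
def energy_released(frames, budget):
    """One quantity, two multiplicative roles -> it appears squared."""
    stored_per_frame = budget   # role 1: work stored in each frame
    propagation_rate = budget   # role 2: rate at which release propagates
    return frames * stored_per_frame * propagation_rate  # frames * budget**2

# Mirrors the shape of E = m * c**2: the budget enters the product twice.
assert energy_released(frames=2, budget=3) == 2 * 3**2
```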


Conservation of Energy

Energy is the return value of completed work. Return values from tick N become inputs to tick N+1. The total is bounded by C. Nothing is created or destroyed — return values are passed forward. Conservation of energy is not an imposed law. It is the fact that function outputs become function inputs, and the total throughput is fixed.
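The conservation claim is the statement that a tick is a pure transfer: outputs of step N become the inputs of step N+1, so the total never changes. A toy Python sketch (the transfer rule is arbitrary, chosen only to move work between nodes):

```python
def tick(values):
    """One tick: move some work from node 0 to node 1. Return values are
    passed forward; the total is untouched -- nothing created or destroyed."""
    moved = values[0] // 2
    return [values[0] - moved, values[1] + moved]

state = [8, 4]
total = sum(state)              # the fixed throughput, bounded by C
for _ in range(5):
    state = tick(state)         # outputs of tick N are inputs of tick N+1
    assert sum(state) == total  # conserved at every tick
```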

Dark Matter — Orphaned Overflow Stacks

Approximately 27% of the universe's energy budget is dark matter — mass that gravitates but doesn't interact electromagnetically or through the strong nuclear force. It is invisible except through gravity.

In this framework, dark matter consists of content graphs whose topology creates overflow (permanent self-perpetuating stacks — therefore mass, therefore budget drain, therefore gravitational effects) but whose graph structure has no dependency edges to the content graphs that make up normal matter's electromagnetic and strong-force interactions. Dark matter's stacks are self-demanding — the cyclic topology means each tick's computation needs its own previous state, creating an internal strict consumer that forces resolution every tick. But they have no coupling to visible matter's content graphs. You can't see it, scatter light off it, or detect it with any force except gravity — because gravity operates through the spatial graph (which dark matter DOES affect via budget drain), while every other force operates through content-graph dependency edges (which dark matter DOESN'T have with normal matter).
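The dark-matter claim reduces to a statement about which function reads which part of the structure. A toy Python sketch (all names and numbers invented for illustration): gravity sums every stack's budget drain in a region, while an "electromagnetic" query only sees stacks it shares a coupling edge with.

```python
# Two kinds of overflow stacks occupying the same spatial region.
stacks = [
    {"name": "proton",    "depth": 5, "edges": {"em", "strong"}},
    {"name": "dark blob", "depth": 9, "edges": set()},  # no coupling edges
]

def gravitational_drain(region):
    """Gravity is spatial-graph budget drain: it sums ALL stack depths,
    coupled to visible matter or not."""
    return sum(stack["depth"] for stack in region)

def em_visible(region):
    """Every other force needs a shared content-graph edge to interact."""
    return [stack["name"] for stack in region if "em" in stack["edges"]]

assert gravitational_drain(stacks) == 14   # the dark blob gravitates...
assert em_visible(stacks) == ["proton"]    # ...but is invisible to light
```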

Dark Energy — The Cost of the Spatial Graph

Approximately 68% of the universe's energy budget is dark energy — a mysterious force driving the accelerating expansion of the universe.

In this framework, the spatial graph itself has a baseline computational cost — each region requires some minimum work per tick to maintain its structure and edges to neighboring regions. Additionally, the engine walks below-threshold content graphs in the vacuum each tick, and their multi-path dependency walks create a subtle outward computational pressure. As the spatial graph expands (adds regions), more baseline computation is required, which enables further expansion — a self-reinforcing loop. This is speculative within the framework, but consistent with the axioms and produces the right qualitative behavior: a baseline energy density driving accelerating expansion.

What This Framework Has Not Yet Solved

Any framework that claims to explain the universe should be upfront about what it can't do. Here is what this one hasn't solved — and why those gaps matter.

Open Gap 1 — Full General Relativity The framework qualitatively produces gravitational attraction, time dilation, black holes, geodesics, gravitational lensing, gravitational waves, frame dragging, and unshieldability. But Einstein's complete field equations, which fix the precise magnitudes (light-bending angles, Mercury's orbital precession, gravitational waveforms), have not been formally derived from graph topology. The framework is consistent with GR but has not reproduced it from first principles.
Open Gap 2 — The Preferred Ordering The global tick implies a preferred causal sequence. Lorentz symmetry is claimed to be an emergent property of the projection, not the engine. Whether the projection produces exact Lorentz symmetry from a discrete ordering is not proven. Testable prediction: at Planck-scale energies, tiny deviations from Lorentz symmetry should be detectable.
Narrowing Gap 3 — E = mc² Formal Derivation The dual-role argument for why the budget appears squared is structurally grounded. A formal proof that total energy release is exactly m × budget² (and not some other function) requires deriving the render layer's geometry from graph topology. Direction strong. Formal derivation pending.
Narrowing Gap 4 — Why These Specific Particles The overflow mechanism explains why mass exists — but not why the stable overflow topologies correspond to the exact particles we observe (protons, electrons, neutrinos) with their exact masses. The framework explains that stable matter exists. It does not yet predict which configurations arise.
Open Gap 5 — The Engine Is Inaccessible Like all interpretations of quantum mechanics, this framework proposes a structure that cannot be directly measured — content graphs have no spatial location and can't be observed from inside the spatial graph. Testable predictions must come from spatial-graph observations that differ from standard QM+GR. Candidates: (1) Planck-scale Lorentz deviation, (2) decoherence timescales matching the graph-complexity model, (3) a minimum mass unit at the Planck scale, (4) proximity decoherence — two non-interacting quantum systems sharing the same spatial region should decohere slightly faster than standard QM predicts based on interaction strength alone, because proximity to committed structures in the spatial graph increases combined graph complexity regardless of direct content-graph interaction.
Open Gap 5b — Formalizing Strict Consumer Dynamics The crossover point at which the engine must commit a graph rather than continue walking it uncommitted has not been made quantitative. Formalizing that crossover should produce decoherence rates that match observed values, providing a quantitative test.
Narrowing Gap 6 — The Born Rule The dual-role pattern provides a structural explanation for why quantum probability is |α|². Formal proof that the dependency-walk-and-resolve mechanism produces exactly |α|² across all quantum systems is pending. The structural parallel with E = mc² is strong evidence of a unifying principle.
Open Gap 7 — Spatial Graph Expansion How does the spatial graph grow? The universe is expanding. New regions must be created. Does the engine allocate new regions? Do existing dynamics create them? The dark energy sketch above is a qualitative proposal, not a derivation.
Open Gap 8 — Quantum Field Theory The framework describes particles as content graphs but has not addressed virtual particles, renormalization, gauge symmetries, or the path integral formalism. A complete framework must map QFT's field-theoretic structure to graph operations.
Open Gap 9 — Spin, Charge, and Quantum Numbers The framework does not explain how graph topology produces the specific quantum numbers (spin-1/2, charge, color charge) that define particles. These determine all interactions and should emerge from graph structure.
Why Honest Gaps Matter A framework that claims to explain everything with no open problems is not a framework — it is a story. The gaps above are real, they are hard, and they are the precise places where this work needs to go deeper to become a testable scientific theory rather than a compelling interpretive lens. Both things have value. This document is both.
REF

Complete Reference — Everything Mapped

Time
In This Framework: A counter — how much work the engine completed on your subgraph this tick. Not a dimension. Not a river. Duration is constructed from accumulated state changes.
Computer Analogy: A loop counter. How many iterations a process has completed.

Arrow of time
In This Framework: The evaluation direction of the Rule. Next state depends on current; current does not depend on next.
Computer Analogy: You can't un-run a function. Functions run forward. You can't un-return a value.

The tick
In This Framework: The causal step counter. Which step are we on? Not a clock — a sequence index.
Computer Analogy: A global loop counter. Step N+1 follows step N by definition.

The budget (C)
In This Framework: The per-tick work limit. Fixed. Universal. The clock speed of reality. Determines the speed of light, whether overflow creates mass, and whether strict consumers demand resolution (quantum vs. classical).
Computer Analogy: The CPU's clock speed — how many operations per cycle. The scheduler's time-slice. System throughput limit.

Mass
In This Framework: Overflow that self-perpetuates. Stack depth = mass. Each frame holds one budget of deferred work. Arises from overflow, not by design.
Computer Analogy: A recursive process that regenerates every cycle. Permanent stack.

Energy
In This Framework: The return value of resolved computation. What comes back when work completes.
Computer Analogy: Output of a completed function call.

Inertia
In This Framework: Cost of rewriting a deep overflow stack. Deeper stack = harder to change.
Computer Analogy: Cost of modifying a nested call stack that's rebuilding itself.

Gravity
In This Framework: Budget consumption from mass's self-perpetuating overflow cascading through the spatial graph. Emergent. Unshieldable — spatial-graph phenomenon vs content-graph shield.
Computer Analogy: Shared resource contention. Heavy process draining CPU from neighbors.

Gravitational time dilation
In This Framework: Budget drained by mass → fewer state changes per tick → slower clock.
Computer Analogy: Process starved by a CPU hog. Fewer iterations per system tick.

Velocity time dilation
In This Framework: Motion = content graph's reference shifting across spatial regions. More shifts per tick = more budget consumed by re-contextualization = fewer internal state changes = slower clock.
Computer Analogy: Moving data between servers requires re-evaluation at both ends. Faster transfer = more re-evaluations = fewer cycles for other work.

Speed of light
In This Framework: Maximum propagation = full budget per tick. Can't exceed — no more budget.
Computer Analogy: Maximum network throughput. Can't send faster than the bus allows.

Spacetime
In This Framework: The edge-cost matrix of the spatial graph. Flat = uniform costs. Curved = cost gradient from mass's budget consumption. Metric tensor = spatial graph's edge-cost matrix.
Computer Analogy: Weighted routing table that updates when one path gets congested.

Geodesics
In This Framework: Cheapest paths through the routing table. Objects follow them because the engine routes along minimum cost.
Computer Analogy: Dijkstra's shortest path. Default scheduler routing.

Gravitational lensing
In This Framework: Light follows cheapest path. Near mass, costs are skewed. Light curves.
Computer Analogy: Network traffic re-routing around a congested node.

Event horizon
In This Framework: Where mass's drain consumes 100% of local budget. Zero state changes. Time stops.
Computer Analogy: Complete CPU starvation. Zero iterations indefinitely.

Black hole singularity
In This Framework: Overflow stack exceeds any finite budget. Computational crash.
Computer Analogy: Stack overflow exception.

Hawking radiation
In This Framework: Edge cleanup at the overflow boundary where budget is almost but not quite zero.
Computer Analogy: Near-crashed process slowly completing cleanup at the edge.

E = mc²
In This Framework: m frames × budget stored × budget propagation rate = m × budget². Dual-role: budget governs both storage and release.
Computer Analogy: Stack unwind: depth × work/frame × throughput. Budget appears twice → squared.

Photon
In This Framework: Web operation that resolves within one tick. No overflow, no mass. All budget on propagation → c. No internal state → no time.
Computer Analogy: Return value in transit. Completes in one scheduler slice.

Superposition
In This Framework: Content graph with no strict consumer demanding resolution. The engine walks its full dependency tree through every path, carrying the walk as the graph's evolving state at the data layer. Not "unknown" — genuinely unresolved because nothing asked for an answer. This is lazy evaluation.
Computer Analogy: The GPU walking every light ray through a scene before committing a pixel. The rays exist as computation in progress. No pixel has been written yet.

Wave function
In This Framework: The content graph's evolving state at the data layer as the engine computes its influence tick by tick without any strict consumer demanding resolution. Lives at the data layer only — not projected to the spatial graph. Computation in progress, not stored data.
Computer Analogy: A GPU's ray-tracing computation for a pixel that hasn't been committed yet. All paths being walked. Not a stored result — active computation.

Wave function "collapse"
In This Framework: A cache bust. A strict consumer — a committed structure whose next state requires a definite value — demands resolution. The engine produces one answer. Multi-path walk replaced by a single result. Resolution cascades through all dependent graphs.
Computer Analogy: Cache invalidation — a downstream consumer needs a value, forcing the cache to recompute. The resolved value propagates to every dependent system.

Quantum eraser
In This Framework: Entanglement made paths distinguishable in the full graph, washing out interference in aggregate. Erasing = measuring the idler in a basis that doesn't distinguish paths = null operation on the which-path edge. No cache bust on that edge occurred. Multi-path walk survives in the sorted subset. Data never changed — the sorting key did.
Computer Analogy: Filtering a spreadsheet hides cross-category patterns. Remove filter, patterns return. Or: adding +1 and −1 to a cache key is a null operation — the cache was never invalidated.

Decoherence
In This Framework: Large objects are dense webs of strict consumers — 10²⁶ atoms each demanding definite values from neighbors to compute the next tick. They resolve themselves because their internal dependencies require it. The cause is strict consumption, not observation.
Computer Analogy: A heavily-referenced cache entry that's constantly invalidated by dependent processes — it's always being recomputed because too many things depend on it.

Entanglement
In This Framework: Two entangled particles are one content graph referenced by two spatial regions. When a cache bust forces the content graph to resolve, both references update. No signal — they were always referencing the same data. Content graphs have no spatial distance.
Computer Analogy: A shared Google Doc. Edit in New York, change appears in Tokyo. Not a signal — both reading the same document. The document has no location.

Bell's theorem
In This Framework: Rules out local hidden variables. This framework's variables are non-local — the content graph has no spatial location. Not inside particle A or B — it IS particles A and B.
Computer Analogy: The consistency isn't stored in either server — it's in the shared document that both are reading from.

Principle of least action
In This Framework: Engine routes along cheapest paths. Not a law — a property of schedulers.
Computer Analogy: Dijkstra's algorithm. Compiler optimization. Default scheduling behavior.

Entropy increase
In This Framework: The Rule runs one direction. Each tick adds irreversibly to history. Append-only.
Computer Analogy: Write-only log. You can add entries but never remove them.

Holographic principle
In This Framework: Content graphs are flat data structures with no inherent dimensionality. The spatial graph gives them the appearance of existing in 3D. Information scales with surface area (content structure) not volume (spatial rendering).
Computer Analogy: A character's data in memory is a flat structure. The 3D appearance is rendered from it.

Planck length / Planck time
In This Framework: The spatial graph's minimum resolution (~10⁻³⁵ m) and the engine's tick duration (~10⁻⁴⁴ s). Below these scales, "distance" and "duration" lose meaning — because you've hit the resolution boundary of the spatial graph.
Computer Analogy: Screen resolution and frame rate. Below one pixel, there is no finer detail. Below one frame, there is no finer timing.

Double buffering
In This Framework: You see frame N (committed) while the engine computes frame N+1. The engine walks all graphs — committed ones get resolved, uncommitted ones get their full dependency tree walked. You never see work in progress. You only see committed frames.
Computer Analogy: GPU double buffering: front buffer displayed while back buffer is being drawn. VSync flip. You never see a half-rendered frame.

Cache bust (measurement)
In This Framework: When a committed structure's next state depends on an uncommitted graph, the engine is forced to resolve that graph's subtree. The resolved value cascades through all dependent graphs. This is "collapse" — not observation, but a dependency forcing resolution.
Computer Analogy: Cache invalidation: a downstream consumer needs fresh data, forcing recomputation of the cache entry and everything that depends on it.

Lazy evaluation
In This Framework: The dividing line between quantum and classical. Below: graph is simple and isolated, engine walks all paths without committing. Above: graph complexity (own structure + subgraph joins with committed neighbors) exceeds what the engine can carry uncommitted within C. Engine commits it.
Computer Analogy: The point where maintaining a cache becomes more expensive than just computing the value. Depends on how many things depend on it and how complex the computation is.

Entity model
In This Framework: Content graphs have no spatial location. The spatial graph references them — like pixels referencing memory. Multiple spatial regions can reference the same content graph. When the content graph resolves, all referencing regions update simultaneously.
Computer Analogy: Multiple pixels can display the same object in video memory. Change the object, all pixels update. No signal between pixels — they read from the same source.