Beyond Sensing and Transmission: Rethinking Optical Fiber as Storage

For decades, optical fiber has been defined by two core roles: transmission and, more recently, sensing. We use fiber to move data faster and farther, and in emerging “fiber sensing” applications, to observe the physical world itself.

But a recent idea has pushed the conversation in a very different direction.

John Carmack—legendary programmer and long-time explorer of physical limits in computing—has proposed a concept that challenges how we think about memory in AI systems: using long-distance single-mode fiber loops as a form of temporary storage by turning propagation delay into usable capacity.

At first glance, it sounds unconventional. On closer inspection, it feels surprisingly grounded.


When Delay Becomes Capacity

Light travels through optical fiber at roughly two-thirds the speed of light in vacuum. Over 200 kilometers of single-mode fiber, one-way latency is close to 1 millisecond.
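
That figure follows directly from the fiber's refractive index. A quick sanity check in Python, using an assumed group index of about 1.47 for standard single-mode fiber (an illustrative value, not a measurement):

    # Propagation delay over 200 km of single-mode fiber.
    C = 299_792_458                  # speed of light in vacuum, m/s
    GROUP_INDEX = 1.47               # assumed typical value for standard SMF near 1550 nm

    length_m = 200 * 1_000
    delay_s = length_m * GROUP_INDEX / C
    print(f"{delay_s * 1e3:.2f} ms one way")   # ~0.98 ms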

Now imagine forming that fiber into a closed loop.

Data is injected as optical pulses and continuously circulates. At any given moment, a large volume of information is not “stored” in memory cells, but physically in flight.

With state-of-the-art dense WDM transmission, a 200-kilometer single-mode fiber link can carry on the order of 256 Tb/s of aggregate bandwidth. That translates to roughly 32 GB of data simultaneously present inside the loop at any moment.
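
The 32 GB figure is simply the loop's delay-bandwidth product: the data "in flight" equals line rate times propagation delay. A minimal sketch using the numbers above:

    # In-flight capacity = aggregate line rate x one-lap propagation delay.
    line_rate_bps = 256e12           # 256 Tb/s aggregate, figure from the text
    lap_delay_s = 0.98e-3            # ~1 ms for a 200 km lap, from above

    bytes_in_flight = line_rate_bps * lap_delay_s / 8
    # ~31 GB with the 0.98 ms lap; a round 1 ms gives the 32 GB quoted above.
    print(f"{bytes_in_flight / 1e9:.0f} GB in flight")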

If timing can be controlled precisely—when data enters, when it is read, when it is refreshed—this moving data becomes addressable. Transmission delay effectively turns into a temporary storage layer, offering an equivalent bandwidth measured in tens of terabytes per second.
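
One way to make "addressable" concrete is to treat the loop as a recirculating delay-line memory, the same idea behind the mercury delay lines of early computers: a word's address is its time offset within one lap, and a read waits until that slot passes the read tap. The class below is a purely illustrative toy model; the slot layout and method names are assumptions, not part of the proposal.

    import math

    class FiberLoopCache:
        """Toy model of a recirculating delay-line store (illustrative only)."""

        def __init__(self, lap_delay_s: float, slot_time_s: float):
            self.lap_delay = lap_delay_s                   # time for one full lap
            self.n_slots = int(lap_delay_s / slot_time_s)  # addressable time slots
            self.slots = [None] * self.n_slots             # stands in for light in flight

        def write(self, slot: int, word) -> None:
            # In hardware: modulate the word onto the loop at the slot's instant.
            self.slots[slot % self.n_slots] = word

        def read(self, slot: int):
            # Data is only available when its slot passes the tap, once per lap.
            return self.slots[slot % self.n_slots]

        def next_read_time(self, slot: int, now_s: float) -> float:
            # Earliest time at which this slot comes around to the tap again.
            offset = (slot % self.n_slots) * self.lap_delay / self.n_slots
            if now_s <= offset:
                return offset
            laps = math.ceil((now_s - offset) / self.lap_delay)
            return offset + laps * self.lap_delay

The cost model is visible immediately: worst-case access latency is one full lap, about a millisecond here, which is why this fits streaming, sequential workloads far better than random access.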

In other words: transmission becomes storage.


Why This Idea Matters Now

Ten years ago, this would have been little more than an interesting physics thought experiment. Today, the context has changed.

AI workloads are pushing memory systems into uncomfortable territory. Large-scale model training and inference demand extreme bandwidth, predictable latency, and massive data movement. Traditional DRAM is constrained by cost, power consumption, and physical density. Scaling it further is increasingly difficult.

At the same time, not all data access patterns require nanosecond-level random access. Many AI workloads involve structured, deterministic data flows—weights, activations, and feature streams that move in known sequences.

This is where a fiber-loop cache becomes compelling. It favors throughput over random access, determinism over flexibility, and physical simplicity over dense electronics. As a secondary cache layer, it could offload pressure from DRAM without competing with it directly.


Built on Existing Optical Technology

What makes the concept especially interesting is that it does not depend on speculative components.

The building blocks already exist:

  • Single-mode optical fiber

  • Wavelength-division multiplexing (WDM) systems

  • High-speed optical transceivers

  • Mature solutions for attenuation control, dispersion compensation, and synchronization

These are all products of the current optical communications ecosystem. Capacity can be scaled by adding parallel loops, and per-loop bandwidth grows roughly linearly with the number of wavelength channels and the rate per channel.
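
Because capacity is a simple product of per-channel rate, channel count, lap delay, and number of loops, scaling estimates are straightforward. The channel counts and rates below are illustrative assumptions chosen to reproduce the 256 Tb/s figure, not vendor specifications:

    # Rough scaling model: capacity grows with WDM channels, per-channel rate,
    # loop length, and the number of parallel loops. All inputs are assumptions.
    C = 299_792_458
    GROUP_INDEX = 1.47

    def loop_capacity_gb(loop_km=200, channels=320, gbps_per_channel=800, n_loops=1):
        lap_delay_s = loop_km * 1_000 * GROUP_INDEX / C
        bits_in_flight = channels * gbps_per_channel * 1e9 * lap_delay_s * n_loops
        return bits_in_flight / 8 / 1e9

    print(f"{loop_capacity_gb():.0f} GB per loop")              # ~31 GB
    print(f"{loop_capacity_gb(n_loops=8):.0f} GB with 8 loops") # ~251 GB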

The hardest problems are not optical—they are architectural. Precise timing control, efficient read/write scheduling, fault handling, and long-term signal stability all require careful system design. But none of these challenges violate known physical limits.
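
The "long-term signal stability" point deserves a number: standard single-mode fiber loses roughly 0.2 dB per kilometer near 1550 nm, so every lap of a 200 km loop costs on the order of 40 dB, and circulating data passes through amplifiers continuously. A rough, assumption-laden estimate of how often:

    # How often a circulating bit is amplified: attenuation and amplifier
    # spacing are assumed typical values, not a worked design.
    loop_km = 200
    lap_delay_s = 0.98e-3            # one lap, from the latency estimate above
    atten_db_per_km = 0.2            # typical for standard SMF near 1550 nm
    amp_spacing_km = 80              # assumed inline amplifier spacing

    loss_per_lap_db = loop_km * atten_db_per_km             # ~40 dB per lap
    amp_passes_per_s = (loop_km / amp_spacing_km) / lap_delay_s
    print(f"{loss_per_lap_db:.0f} dB loss per lap, "
          f"~{amp_passes_per_s:.0f} amplifier passes per second of residence")

Amplifier noise accumulates with every pass, so data cannot circulate indefinitely on amplification alone; sooner or later it has to be read out and rewritten, which is exactly the kind of scheduling problem the system architecture has to own.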


Redrawing the Boundary Between Compute, Storage, and Transport

If such an approach proves viable at scale, it would blur a boundary that data center architectures have treated as fixed.

Fiber would no longer be just the medium connecting compute and memory. It would become part of the memory hierarchy itself.

In that world, optical infrastructure is not only about distance or loss, but about how physical properties of light are used directly by system architecture. Storage, transport, and computation would be designed together, rather than layered separately.

For the optical fiber industry, this opens an entirely new dimension of demand. Fiber would not only be deployed to connect systems, but to enable new internal functions within them.


The idea is still at a conceptual stage, and significant engineering work remains. Precise timing, efficient access models, and operational resilience are not trivial problems.

But in an era defined by the memory wall and the limits of scaling electronics, ideas that convert physics into system capability deserve serious attention.

One day, the longest fiber loop inside a data center may not exist to connect two points at all—
but simply to give data somewhere to travel, while it waits to be used.
