Search Results

OFC 2025 Insights – 2: Are We Ready for Hollow Core Fiber Networks?

Rodrigo Amezcua Correa, Relativity Networks, USA
Paolo Dainese, Corning, USA
Russell Ellis, Microsoft, United Kingdom
Kerrianne Harrington, University of Bath, United Kingdom
Matěj Komanec, CTU, Prague, Czech Republic
Andrew Lord, BT, United Kingdom
Kazunori Mukasa, Furukawa, Japan
Mohammad Pasandi, Ciena, Canada
Pierluigi Poggiolini, Politecnico di Torino, Italy
Yingying Wang, Linfiber and Jinan University, China
YOFC
China Telecom
Sumitomo (Sato)

Read More »

Is Your RAN Cable Infrastructure Ready to Unlock the Monetization Potential of AI?

The other day I watched Light Reading's webcast on The Role of AI in Platforms for Future RAN Systems [1], with NVIDIA, Fujitsu and Supermicro sharing their thoughts and market vision. Below you will find my interpretation of this webcast. The telecommunications industry is rapidly evolving, and the integration of Artificial Intelligence (AI) in Radio Access Networks (RAN) is at the forefront of this transformation.

Read More »

Maximizing Space: How Reduced Cladding and Coating Fibers Improve Data Center Efficiency

Just the other day, I was reminiscing about some exploits from the past. One that came to mind was when I had to put together a mock-up to analyze how much Cat6a we could responsibly manage in an 800mm wide 42RU rack. At that time, we established that anything beyond 240 patches would significantly increase the operational risk during moves, adds, and changes. It was also around that

Read More »

To structure or not to structure IT Cabling for AI Clusters – Part 4

In the first three parts of this blog series, we explored various critical aspects of IT cabling for AI clusters, emphasizing the impact of high bandwidth and latency requirements on connectivity solutions, the practical considerations of deployment and installation, and the sustainability implications of different cabling choices.

Read More »

To structure or not to structure IT Cabling for AI Clusters – Part 2

In Part 2 of the blog series on IT cabling for AI clusters, the focus shifts to deployment and installation factors that influence cabling choices. Pre-mounting and pre-cabling racks offsite for faster deployment is common, but several critical external connections—such as ICI switches, FrontEnd, BackEnd, and management connections—require additional consideration. Running cables to a top-of-rack panel offers benefits such as better cable management, reduced risk of damaging transceivers, and the ability to pre-test systems before deployment. Three options for preparing the computer room for AI clusters are discussed: point-to-point cabling, structured cabling with patch panels in the rack, or structured cabling using an overhead enclosure (OHE). The choice between these methods is driven by factors like cost, operational risk, and future reusability. The next blog will dive deeper into structured cabling and future bandwidth considerations.

Read More »

To structure or not to structure IT Cabling for AI Clusters – Part 1

This article explores the pros and cons of point-to-point versus structured cabling in AI clusters, particularly in light of increasing bandwidth demands. It distinguishes between training and inference GenAI clusters, emphasizing the high computing and storage needs of training clusters and the importance of rapid deployment and operational efficiency. The article highlights the impact of Return Loss (RL) on network signal quality in fiber connections, especially in high-bandwidth environments. Additionally, it compares copper and fiber in terms of latency, noting that while copper has lower latency, it is limited in distance. The next blog will cover deployment, installation, power, cooling, and sustainability aspects.
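The Return Loss point above can be made concrete with a short calculation. The sketch below uses the standard definition of return loss in decibels; the power values are hypothetical and only illustrate why a high-RL connector (e.g. APC-style) reflects far less power back into the link than a lower-grade mated pair:

```python
import math

def return_loss_db(incident_mw: float, reflected_mw: float) -> float:
    """Return loss in dB: RL = 10 * log10(P_incident / P_reflected).
    A higher RL means less power reflected back toward the source,
    hence a cleaner signal in high-bandwidth links."""
    return 10 * math.log10(incident_mw / reflected_mw)

# Hypothetical values: 1 mW launched into a connector pair.
print(return_loss_db(1.0, 0.00001))  # 50 dB: very little reflection
print(return_loss_db(1.0, 0.001))    # 30 dB: noticeably more reflection
```

The exact thresholds that matter depend on the modulation format and link budget, but the orders-of-magnitude difference between the two cases is why RL grows in importance as bandwidth rises.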

Read More »

The 800Gb and beyond connectivity conundrum

Over the last couple of months there has been a lot of noise about the expected boom of 400Gb, 800Gb and 1.6Tb in the next two to three years. Yet it seems only yesterday we made the jump from 40Gb to 100Gb. It is similar with latency, where requirements tightened from milliseconds (ms) to microseconds (µs), yet the latency in some of my latest projects, related to the gateways to the cloud, was in ms. And I thought we were quite advanced in these things; was I so wrong or behind in my assumptions?

Figure 1: 2021 LightCounting study on transceiver speed market growth

I think there are a few aspects currently making noise that need to be put in the right perspective. Indeed, there are advanced requirements around AI, with increased bandwidth demand and low-latency expectations. But is this going to impact every aspect of the data center?

First of all, AI clusters will become the brain of IT, and you will still need the customer-facing applications that run in your DC or cloud environment. Secondly, not all applications need AI; the application responsible for paying your wages once a month, for example, does not necessarily have to be AI-driven. Third, there is the difference between training and inference, where the number of AI clusters needed to train models is a multiple of the hardware needed to apply the inference. All of this will impact the amount of AI hardware needed, so where do the sudden expectations of massive 800G-and-beyond transceiver sales over the next couple of years come from?
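The µs-versus-ms contrast in the excerpt is a gap of three orders of magnitude, which a trivial conversion makes concrete. The round-trip numbers below are hypothetical, chosen only to illustrate the scale difference between an intra-cluster AI fabric budget and a cloud-gateway path:

```python
# Hypothetical latency budgets (round trip):
cloud_gateway_ms = 5.0   # gateway-to-cloud path, in milliseconds
ai_fabric_us = 10.0      # AI cluster east-west target, in microseconds

# Express both in microseconds to compare like with like.
ratio = (cloud_gateway_ms * 1000) / ai_fabric_us
print(f"The cloud path budget is {ratio:.0f}x the AI fabric budget")
```

With these (made-up) figures the two requirements differ by a factor of 500, which is why AI-cluster cabling decisions do not automatically translate to the rest of the data center.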

Read More »

Explore our data center solutions and learn more about related products

ADTEK is dedicated to providing high-quality fiber optic connectors and integrated modules for data centers worldwide. We offer everything from mass production to customized solutions, ensuring top-notch products and services for data center builders and operators.

More latest news

Contact Us

If you would like to know more about us, fill out the form to contact us and we will answer your questions as soon as possible.