
To structure or not to structure IT Cabling for AI Clusters – Part 4

In the first three parts of this blog series, we explored various critical aspects of IT cabling for AI clusters, emphasizing the impact of high bandwidth and latency requirements on connectivity solutions, the practical considerations of deployment and installation, and the sustainability implications of different cabling choices.

To structure or not to structure IT Cabling for AI Clusters – Part 2

In Part 2 of the blog series on IT cabling for AI clusters, the focus shifts to deployment and installation factors that influence cabling choices. Pre-mounting and pre-cabling racks offsite for faster deployment is common, but several critical external connections—such as ICI switches, FrontEnd, BackEnd, and management connections—require additional consideration. Running cables to a top-of-rack panel offers benefits such as better cable management, reduced risk of damaging transceivers, and the ability to pre-test systems before deployment. Three options for preparing the computer room for AI clusters are discussed: point-to-point cabling, structured cabling with patch panels in the rack, or structured cabling using an Over Head Enclosure (OHE). The choice between these methods is driven by factors like cost, operational risk, and future reusability. The next blog will dive deeper into structured cabling and future bandwidth considerations.

To structure or not to structure IT Cabling for AI Clusters – Part 1

This article explores the pros and cons of point-to-point versus structured cabling in AI clusters, particularly in light of increasing bandwidth demands. It distinguishes between training and inference GenAI clusters, emphasizing the high computing and storage needs of training clusters and the importance of rapid deployment and operational efficiency. The article highlights the impact of Return Loss (RL) on network signal quality in fiber connections, especially in high-bandwidth environments. Additionally, it compares copper and fiber in terms of latency, noting that while copper has lower latency, it is limited in distance. The next blog will cover deployment, installation, power, cooling, and sustainability aspects.

ADTEK Expertise: Termination Certification for USCONEC MMC Connector

ADTEK has passed USCONEC’s certification for terminating MMC connectors, highlighting their commitment to quality and innovation in connectivity solutions.

Our production teams produced cable assemblies that consistently delivered very high performance under random mating across all 16 fibers of the MMC MT-16 connector: Insertion Loss (IL) ≤ 0.35 dB, Return Loss (RL) ≤ -60 dB, and end-face geometry compliant with the IEC 61300-3-35 Ed. 3 standard.
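The pass criteria above can be expressed as a simple per-fiber check. The sketch below is purely illustrative (it is not ADTEK's or US Conec's actual certification procedure, and the function and threshold names are hypothetical): it tests whether all 16 fibers of an MT-16 assembly meet both thresholds.

```python
# Illustrative check against the quoted certification thresholds:
# IL <= 0.35 dB and RL <= -60 dB (reflectance convention) on all 16 fibers.
# Names and structure are hypothetical, for illustration only.

IL_MAX_DB = 0.35   # maximum Insertion Loss per fiber, in dB
RL_MAX_DB = -60.0  # maximum reflectance per fiber, in dB

def assembly_passes(il_db, rl_db):
    """Return True if all 16 fibers meet both IL and RL thresholds."""
    if len(il_db) != 16 or len(rl_db) != 16:
        raise ValueError("an MMC MT-16 connector carries 16 fibers")
    return all(il <= IL_MAX_DB for il in il_db) and \
           all(rl <= RL_MAX_DB for rl in rl_db)

# A single out-of-spec fiber fails the whole assembly.
print(assembly_passes([0.20] * 16, [-65.0] * 16))               # True
print(assembly_passes([0.20] * 15 + [0.40], [-65.0] * 16))      # False
```

Note that return loss sign conventions vary: some instruments report a positive RL figure (e.g. 60 dB), others a negative reflectance (-60 dB) as used here.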

VSFF MT connectors support the highest functional fiber density, optimizing space in data centers. This certification delivers superior performance: excellent signal quality, crucial for high-speed transmission, and reliability through certified termination processes that guarantee performance.

This demonstrates ADTEK's commitment to innovation through rigorous certification processes and continuous improvement in production techniques, ensuring top-tier connectivity solutions and setting industry standards in high-density, low-loss connectivity.

Read More »

The 800Gb and beyond connectivity conundrum

Over the last couple of months there has been a lot of noise about the expected boom of 400Gb, 800Gb and 1.6Tb in the next 2-3 years. Yet it seems only yesterday that we made the jump from 40Gb to 100Gb. It is similar with latency, where requirements have tightened from milliseconds to microseconds, yet latency in some of my latest projects, related to the gateways to the cloud, was still measured in milliseconds. And I thought we were quite advanced in these things; was I so wrong or behind in my assumptions?

Figure 1: 2021 Lightcounting study on transceiver speed market growth

I think there are a few aspects currently making noise that need to be put in the right perspective. There are indeed advanced requirements around AI, with increased bandwidth demand and low latency expectations. But is this going to impact every aspect of the data center?

First of all, AI clusters will become the brain of IT, and you will still need the customer-facing applications that run in your DC or cloud environment. Secondly, not all applications need AI; the application responsible for paying your wages once a month, for example, does not necessarily have to be AI-driven. Third, there is the difference between training and inference: the number of AI clusters needed to train models is a multiple of the hardware needed to apply the inference. All this will affect the amount of AI hardware needed, so where do the sudden expectations of massive 800G-and-beyond transceiver sales over the next couple of years come from?

Contact Us

If you want to know more about us, fill out the form to contact us and we will answer your questions.