Silicon Photonics and the Future of Generative AI

With the advent of Generative AI (GenAI) and Machine Learning (ML), silicon photonics is poised to be a key beneficiary. Leading companies like NVIDIA are significantly investing in the silicon photonics ecosystem, recognizing its role in supporting large language models and generative AI. The integration of silicon photonics is expected to provide solutions to some of the challenges that GenAI poses for networks.

This article explores the burgeoning commercial activity around silicon photonics and the technical reasons behind this significant interest.

GenAI’s Technical Roadblocks

The AI revolution has brought a range of technical challenges in its wake, particularly around Large Language Models (LLMs) and the natural language processing they perform. Training these complex AI models demands immense computational power, often straining the capabilities of even the most advanced hardware. There is also a need for vast storage capacity to handle large datasets, along with rapid data transfer to minimize latency.

Silicon photonics technology is playing a role in addressing each of these challenges, clearing the way for further development in AI and ML applications.

Meeting High Compute Demands

Generative models require immense computational power, typically supplied by high-performance GPUs or TPUs within dedicated clusters in the data center.

The graphic below outlines NVIDIA’s architecture plans for emerging AI/ML clusters. Notably, an 8x H100 GPU node requires 28.8Tb/s of bandwidth, which works out to roughly 3.6Tb/s, or four to five 800Gb/s transceivers, per H100 GPU: a massive amount of bandwidth for every GPU deployed.
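As a back-of-the-envelope check on these figures (the 28.8Tb/s node bandwidth and the 800Gb/s transceiver rate come from the text; the calculation itself is purely illustrative):

```python
# Sanity check: per-GPU bandwidth implied by a 28.8 Tb/s, 8-GPU node.

node_bandwidth_tbps = 28.8      # total bandwidth for an 8x H100 node
gpus_per_node = 8
transceiver_gbps = 800          # aggregate rate of one 800G transceiver

per_gpu_gbps = node_bandwidth_tbps * 1000 / gpus_per_node
transceivers_per_gpu = per_gpu_gbps / transceiver_gbps

print(f"Per-GPU bandwidth: {per_gpu_gbps:.0f} Gb/s")
print(f"800G transceivers per GPU: {transceivers_per_gpu:.1f}")
```

Running this gives 3600 Gb/s per GPU, i.e. 4.5 transceivers, which is why the per-GPU count is usually quoted as "approximately four" 800G modules.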

[Figure: NVIDIA architecture plans for emerging AI/ML clusters. Source: Lightcounting, 2023]

The Need for Low Latency

Low latency is crucial for AI: it affects responsiveness, efficiency and even safety in human-AI interaction, and it reduces the amount of buffering (memory) required in AI systems. Optical technologies are well placed to meet this need. Optical signals travel through transparent media such as optical fiber, which has low dispersion and minimal signal distortion, allowing high-speed transmission without significant latency.
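To make the latency point concrete, here is a short sketch of fiber propagation delay. The group index of ~1.47 for standard single-mode fiber is a typical textbook value assumed for illustration, not a figure from the article:

```python
# One-way propagation delay through optical fiber: t = L * n / c.
# n ~= 1.47 is a typical group index for single-mode fiber (assumed).

C = 299_792_458          # speed of light in vacuum, m/s
GROUP_INDEX = 1.47       # assumed typical value at 1310/1550 nm

def fiber_delay_ns(length_m: float) -> float:
    """Return one-way propagation delay in nanoseconds."""
    return length_m * GROUP_INDEX / C * 1e9

# Representative intra-cluster and intra-data-center reaches:
for reach_m in (2, 30, 500):
    print(f"{reach_m:>4} m -> {fiber_delay_ns(reach_m):7.1f} ns")
```

The takeaway is that fiber itself contributes only about 5 ns per meter, so at cluster scale the propagation medium adds negligible delay compared with electronic processing and buffering.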

Shorter Reach Interconnects

Today’s AI solutions are often deployed in clusters, where internal communication occurs over relatively short reaches. As AI continues to expand, it is becoming an increasingly important part of data centers, possibly even leading to hyperscale data centers dedicated exclusively to AI applications. Silicon photonics can cost-effectively serve the short-reach links within these clusters, while retaining the advantage that the same technology also scales to longer-reach applications.

Achieving Low Cost, Low Latency, Short Reach Links

Historically, VCSELs have been used for short reach optical applications. However, silicon photonics is an increasingly competitive solution for these links, and for AI applications in particular.

Today, there is a significant shortage of 100G VCSELs on the market and a lack of supplier diversity. Silicon photonics offers a supply chain orthogonal to VCSELs, leveraging the several silicon fabs in the industry that have this capability.

There is also a broad effort in the optics industry to move to linear pluggable optics (LPO). By removing a key component in the transceiver (the DSP), LPO reduces latency, power and cost simultaneously. Silicon photonics is much better suited to LPO applications than VCSELs, because it provides a more linear transmit function. That linearity helps keep the Bit Error Rate (BER) low, which is also vital for minimizing latency, since it reduces the need for error compensation or retransmission within the network.
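One way to picture the BER budget in a DSP-less LPO link: the raw (pre-FEC) error rate must stay below what the host-side FEC can correct. The sketch below assumes the RS(544,514) "KP4" FEC used by 800G Ethernet, with its commonly cited pre-FEC BER threshold of about 2.4e-4 for random errors; these are general Ethernet figures, not numbers from the article:

```python
# Pre-FEC BER budget check for a linear (DSP-less) link.
# Assumes host-side RS(544,514) "KP4" FEC, whose commonly cited
# pre-FEC BER limit for random errors is ~2.4e-4 (general figure).

KP4_PRE_FEC_BER_LIMIT = 2.4e-4

def link_closes(measured_pre_fec_ber: float, margin: float = 2.0) -> bool:
    """True if the measured pre-FEC BER leaves `margin`x headroom to the FEC limit."""
    return measured_pre_fec_ber * margin <= KP4_PRE_FEC_BER_LIMIT

print(link_closes(1e-5))   # ample margin below the FEC threshold
print(link_closes(2e-4))   # too close to the FEC cliff
```

A more linear transmit path improves the eye quality, pushing the measured pre-FEC BER down and leaving the FEC with headroom, which is what allows the DSP to be dropped in the first place.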

DustPhotonics recently announced our 800G DR8 PIC, designed for hyperscale data centers and AI applications. This device provides 8 optical channels modulated at 100Gb/s each, supporting the growing bandwidth needs of AI applications in a compact package. The product is suitable for transceivers, Active Optical Cables (AOCs) and LPO applications, and meets the requirements of Ethernet 800GBASE-DR8.
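Tying this back to the cluster numbers above, a quick illustrative sketch of how many 8x100G devices an 8-GPU node would consume (the 28.8Tb/s node figure comes from the earlier section; the arithmetic is only a sketch):

```python
# How many 800G (8 x 100G) optical devices does one 8-GPU node consume?

channels = 8
lane_rate_gbps = 100
device_gbps = channels * lane_rate_gbps   # aggregate rate per device

node_tbps = 28.8                          # per-node bandwidth from the article
devices_per_node = node_tbps * 1000 / device_gbps

print(f"{device_gbps} Gb/s per device, "
      f"{devices_per_node:.0f} devices per 8-GPU node")
```

At 36 such devices per node, multiplied across a hyperscale AI cluster, the volume of 800G optics involved makes the cost and supply-chain arguments above very tangible.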

What’s Next for Silicon Photonics and GenAI?

According to Yole, the silicon photonics market is projected to reach $613 million by 2028, a CAGR of 48%, signaling growing confidence in its potential across a wide range of applications.

AI is one of the most prominent of these, with industry players exploring and implementing silicon photonics to drive the next wave of AI development. At DustPhotonics, we will continue to play our part in developing new solutions that make AI more efficient, scalable and cost effective. Get in touch with us to find out more about how our solutions are modernizing data centers and equipping them for the future of AI.





Ronnen Lovinger

CEO & Board Member
