When Did Entertainment Become More Fiber Than Fun?
Supporting the growing level of streaming media around the world is complex, requiring an ever-expanding telecommunications infrastructure that can handle rising data speeds and growing capacity in data centers.

Remember when high-quality home entertainment started with a trip to browse the selection at the local video store? Whether your medium of choice was cable television, DVDs, or a night out at the movies, the consistent delivery of content was never really in question.

Today, however, there is one clear winner in how we consume media, and that's streaming services. Most big players agree that there will be between three and five 'survivors' of the streaming wars, and with Disney+ announcing 60 million subscribers in its first year and Netflix leading the pack with almost 200 million subscribers in total, the spaces are filling up fast.

Reliance on streaming media grew considerably over the last decade and thrived during the COVID-19 lockdowns. Government orders and concern for health and well-being kept consumers at home and glued to their streaming subscriptions, and in a survey from the Consumer Technology Association, 26% of respondents said they tried a new video streaming service during the first weeks of the pandemic.

Ever Stop to Wonder… What’s Powering the Media Frenzy?

The telecommunications infrastructure required to support and maintain a seamless user experience is immense. When it isn't up to scratch, consumers quickly experience lag, buffering, and poor performance, and start looking to the competition.

Supporting the growing volume of streaming media around the world is complex and requires an ever-growing telecommunications infrastructure, one capable of handling increasing data speeds and growing capacity in data centers by pushing more information faster across fiber cables.

In fact, demand has become so significant in just a few years that data centers are running out of physical space and rely instead on cutting-edge components, such as optical transceivers, that plug directly into existing infrastructure, allowing operators to keep building better networks without the need for additional square footage.

Moving to the Cloud to Meet Demand

As streaming media continues to be the preferred way of consuming content, over-the-top (OTT) services such as Netflix must scale their business at rates that most companies can't support with their own infrastructure, especially as standards of content quality continue to rise. A few years ago, consumers were content with HD video, which quickly gave way to Full HD and then 4K Ultra HD, with 8K now on the horizon.

Better-quality viewing means much larger file sizes, and the growth in subscribers and business operations puts significant strain on even the largest data centers. So where do they turn? To the public cloud. The cloud, powered by giants such as AWS and Azure, provides on-demand access to computing resources, especially data storage and processing power, without direct, active management by the user.

Continuing with Netflix as an example: transitioning from a DVD rental platform to a streaming service with 193 million subscribers around the world (and constantly growing) was an enormous undertaking. It required the company to migrate from its own data centers to the Amazon Web Services (AWS) public cloud.

Essentially, Netflix shed the IT infrastructure for the operational side of its business but retained its own content delivery network (CDN), with servers residing inside Internet Service Providers' (ISPs') data centers. A content delivery network is a series of proxy servers, caching servers, and data centers that bring content close to different geographic locations; from there it is handed off to the respective ISPs, such as Spectrum or AT&T, before being sent to consumer devices.
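
To make that hand-off concrete, here is a toy sketch in Python of the core CDN idea: serve each request from the cache nearest the viewer and fall back to an origin data center on a miss. The regions, cached titles, and names are invented for illustration and don't reflect Netflix's actual CDN implementation.

```python
# Toy illustration of the CDN idea: serve each request from the cache
# closest to the viewer, falling back to an origin data center on a miss.
# Regions, cached titles, and names are invented for this example.

EDGE_CACHES = {
    "us-east": {"title_123"},
    "us-west": set(),
    "eu-central": {"title_123"},
}

ORIGIN = "origin-datacenter"

def serve(region: str, title: str) -> str:
    """Return where a viewer in `region` is served the requested title from."""
    cache = EDGE_CACHES.get(region)
    if cache is not None and title in cache:
        return f"edge cache in {region}"   # hit: short hop to the viewer's ISP
    if cache is not None:
        cache.add(title)                   # populate the cache for later viewers
    return ORIGIN                          # miss: fetch from the origin this time

print(serve("us-east", "title_123"))   # served from the nearby edge cache
print(serve("us-west", "title_123"))   # first request in the region hits the origin
print(serve("us-west", "title_123"))   # now cached close to the viewer
```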

Hyperscale operators such as AWS and Microsoft are essential for many of these streaming services, simply to handle the massive influx of data. They give companies more agility and flexibility to scale operations, and they support rapid growth by removing the need to procure and manually upgrade servers and equipment.

Expanding Data Center Capacity and Speed 

Although the name can be misleading, public cloud providers still operate physical data centers, so that their customers don't have to. Despite their enormous size (Range International Information Group's facility is the largest, at 6.3 million square feet), most of that square footage is already accounted for by servers, computers, and storage. Working with existing fiber, cloud providers and data center operators must therefore rely on equipment that improves the speed and capacity of data throughput to handle the increase in data. Enter optical fiber products, which are essential for continuing to add speed and capacity within the space that already exists.

This innovative technology also means that information can move between servers, and to and from the outside world, without noticeable latency. Until recently, the standard data-transfer speed inside data centers was 10 to 100 gigabits per second (Gbps). Now that video streaming accounts for nearly 80% of all internet bandwidth, new higher-speed interconnects are being developed to support 400 Gbps across the fiber, improving both the speed and the capacity of data throughput.
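
To put those speeds in perspective, here is a rough back-of-envelope sketch of how many concurrent video streams a single link could carry. The per-stream bitrates are illustrative assumptions, not figures from the article or from any particular streaming service.

```python
# Back-of-envelope: how many concurrent streams can one link carry?
# The per-stream bitrates below are illustrative assumptions, not
# measured figures from any streaming service.

LINK_SPEEDS_GBPS = {"100G": 100, "400G": 400}

STREAM_BITRATE_MBPS = {"HD (1080p)": 5, "4K": 25, "8K": 100}  # assumed averages

for link_name, gbps in LINK_SPEEDS_GBPS.items():
    link_mbps = gbps * 1000  # gigabits/s -> megabits/s
    for quality, mbps in STREAM_BITRATE_MBPS.items():
        streams = link_mbps // mbps
        print(f"{link_name} link, {quality}: ~{streams:,} concurrent streams")
```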

Think of fiber as a freeway: these technologies add more lanes and raise the speed limit. Specific equipment supports data center interconnect (connecting multiple data centers in different locations) and data center intra-connect (improving speeds between racks and servers within a data center).

Meeting the Challenges of 400G Speeds

Sounds great! However, any business looking to take advantage of 400G speeds needs to consider a few key hurdles before jumping in.

First, higher speeds mean higher power consumption, which drives up operational costs and demands more complex technology to handle those speeds. Second, 400G is still maturing as a technology, so the ease of use and interoperability of optical equipment from different vendors isn't yet at the level of 100G and 200G.

Here's where DustPhotonics thrives: we build hot-swappable optical transceivers designed for these challenges and suited to the needs of data centers, reducing CapEx and OpEx while improving scale. In some cases, our 400G optical transceivers have cut power dissipation by up to 30%. Armed with these new technologies, 400G will help data centers support entertainment streaming needs such as 4K video, reduce delays and buffering in live streaming for consumers, and sustain the massive growth of video streaming around the globe.
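
As a rough illustration of what an up-to-30% reduction in power dissipation can mean at scale, the sketch below multiplies it out across a hypothetical deployment. The baseline module wattage, port count, and switch count are assumptions chosen for the example, not published product specifications.

```python
# Illustrative only: what an up-to-30% reduction in transceiver power
# dissipation could mean across a deployment. Baseline wattage, port count,
# and switch count are assumptions for the example, not product specs.

BASELINE_WATTS_PER_400G_MODULE = 12.0   # assumed conventional module power draw
POWER_REDUCTION = 0.30                  # "up to 30%" figure from the article
PORTS_PER_SWITCH = 32                   # hypothetical 400G ports per switch
SWITCHES = 100                          # hypothetical deployment size

baseline_total_w = BASELINE_WATTS_PER_400G_MODULE * PORTS_PER_SWITCH * SWITCHES
reduced_total_w = baseline_total_w * (1 - POWER_REDUCTION)

print(f"Baseline optics power:      {baseline_total_w / 1000:.1f} kW")
print(f"With 30% lower dissipation: {reduced_total_w / 1000:.1f} kW")
print(f"Continuous savings:         {(baseline_total_w - reduced_total_w) / 1000:.1f} kW")
```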

Contact us to learn more about our 400G optical transceiver developments.