100G per Lambda Optics Paving a Fast Path to 25.6Tbps Switches with 100G Electrical I/O

With increased demand for global network bandwidth, the hyper-scale cloud providers are in a position to consume the next generation of higher-capacity, 100G SerDes enabled switches as soon as they can get their hands on them.  Cloud providers are already operating at extremely high rates of utilization and would welcome a higher-performance, more intelligent and more efficient infrastructure boost.  The key technology building blocks are in place to prototype 25.6Tbps switch based networks in 2H2019 and ramp in 2020.  This is a year or two ahead of popular thought.

Today’s ramp of 100G per lambda optics, which is enabling 50G SerDes based 12.8Tbps switches in the form of DR4/FR4 optical modules, is laying the groundwork for a rapid transition to 25.6Tbps switches based on 100G SerDes technology.  To understand why such a rapid transition is possible, it helps to look back at the optics transitions from 10G to 100G.  The move to 25G SerDes enabled switches required the optics to move from 10G to 25G per lambda.  This transition was challenging and caused a two-year delay in ramping 3.2Tbps switch enabled networks.  The next big move to 100G per lambda for the ramp of 12.8Tbps, 1RU switches requires a doubling in baud rate and a change in modulation (PAM2 to PAM4).  As a result, optics are again the gating factor in mass deployment of the latest generation of 12.8Tbps switches.  But this strategic, aggressive move to 100G per lambda as a mainstream technology in 2019 creates a unique inflection point: optics leading the next generation of switch silicon for the first time in recent memory.  Moving from in-module 50G-to-100G gearboxes to 100G-to-100G retimers to match the switch single-lane rate is generally recognized as straightforward.
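To make the per-lambda math concrete, here is a minimal sketch of how lane rate follows from baud rate times bits per symbol; the baud rates below are nominal line rates, glossing over the exact FEC and encoding overheads:

```python
# Minimal sketch: per-lane line rate = baud rate x bits per symbol.
# Baud rates are nominal line rates (including encoding/FEC overhead),
# not exact spec values.

def lane_rate_gbps(baud_gbd: float, bits_per_symbol: int) -> float:
    """Per-lane line rate in Gb/s."""
    return baud_gbd * bits_per_symbol

generations = [
    # (label, baud in GBd, bits/symbol: NRZ/PAM2 = 1, PAM4 = 2)
    ("25G per lambda, NRZ (PAM2)", 25.78125, 1),
    ("50G per lambda, PAM4",       26.5625,  2),
    ("100G per lambda, PAM4",      53.125,   2),
]

for label, baud, bps in generations:
    print(f"{label}: {lane_rate_gbps(baud, bps):.2f} Gb/s line rate")
```

Note that the move from 25G NRZ to 100G PAM4 is exactly the two doublings described above: twice the baud rate and twice the bits per symbol.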

The next step is moving to 100G end-to-end connectivity.  As discussed, the fundamental 100G per lambda optical PMDs are in place.  In parallel, Credo has been publicly demonstrating low-power, high-performance 100G single-lane electrical SerDes manufactured in mature 16nm technology since December 2017.  We as an industry simply need to agree on some common-sense items such as VSR/C2M reach and 800G optical module specifications, and execute on a few strategic silicon tape-outs in 1H2019, to bring 25.6Tbps switches into the light.

In my next blog I will lay out the foundational silicon steps to make 100G single-lane, end-to-end connectivity a mainstream reality in 2020. Stay tuned…

By Jeff Twombly, VP of Marketing and Business Development at Credo, @twoms63.

25/100 Gbps Port Shipments Set Records in 2Q18, 56 Gbps SerDes Ramping, Path to 112 Gbps SerDes in Sight

2Q18 saw record-setting shipments for both 25 Gbps and 100 Gbps ports.  In the Data Center Ethernet Switch market, 100 Gbps is now the largest contributor to revenue, surpassing 10 Gbps.  This is an important milestone, as 40 Gbps was never able to exceed 10 Gbps, nor was it ever able to break the $1B-a-quarter milestone.  This ends an almost ten-year run of 10 Gbps being the dominant technology in the data center.

Strength in 25/100 Gbps was broad-based in 2Q18.  Besides record-setting shipments into the US hyperscalers, demand from Chinese Cloud providers was robust for the first time and enterprise demand continued to ramp.

What is coming next is more exciting.  In the past few months multiple switch ASIC vendors began sampling and shipping next-generation 12.8 Tbps fabrics based on 56 Gbps SERDES.  Some hyperscalers are deploying this as 200 Gbps ports while others are waiting to deploy 400 Gbps later this year. A year from now, 400 Gbps will set records in port shipments and revenue compared to previous technologies.  We expect formal announcements from traditional OEMs to lag white box shipments by several quarters, which puts those announcements at the end of 2018.

In 2Q18, Cloud equipment CAPEX grew so significantly that the spend on DC equipment alone was higher than total CAPEX just two years ago.  This is an unprecedented investment in networking, compute, and storage, and these same cloud hyperscalers are currently investing in 400 Gbps. Simply put, CAPEX and networking spend will be larger in the 400 Gbps ramp than in the 100 Gbps ramp.  At the same time, server utilization and new Artificial Intelligence (AI) and Machine Learning (ML) workloads are increasing bandwidth demand and making a very robust network more important to Cloud design in the coming several years. These trends will help drive Cloud server connectivity from today’s 25 Gbps to 100 Gbps over the next few years.  The time for each Ethernet upgrade cycle is compressing, which can put some pressure on suppliers in the short term, but is an overall positive for the Ethernet market, as the installed base has multiple reasons to upgrade its networking infrastructure over the next two years.

Significant progress is being made throughout the 400 Gbps supply chain: not only are optics suppliers ramping, but the ecosystem is quickly moving toward 112 Gbps SERDES as well as chip disaggregation.  400 Gbps will quickly transform from 8x56 Gbps SERDES to 4x112 Gbps SERDES, with some hyperscalers already planning for 800 Gbps port speeds. We expect hyperscalers to take advantage of future 112 Gbps SERDES for server access (NICs and cables) as it will be a key building block for several generations of networking products.
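The lane arithmetic behind those port speeds is simple; here is a quick sketch, using the nominal payload rates of roughly 50 Gbps and 100 Gbps that 56 Gbps and 112 Gbps SERDES carry:

```python
# Sketch: port speed = lane count x per-lane payload rate.
# 56G and 112G SERDES carry roughly 50G and 100G of payload, respectively.

def port_gbps(lanes: int, lane_payload_gbps: int) -> int:
    return lanes * lane_payload_gbps

configs = [
    ("400G, first wave  (8 x 56G SERDES)",  8, 50),
    ("400G, second wave (4 x 112G SERDES)", 4, 100),
    ("800G              (8 x 112G SERDES)", 8, 100),
]

for label, lanes, rate in configs:
    print(f"{label}: {port_gbps(lanes, rate)} Gbps")
```

The same 400 Gbps port is delivered with half the lanes in the second wave, which is exactly what makes 112 Gbps SERDES the building block for 800 Gbps.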

By Alan Weckel, @AlanWeckel, Founder and Technology Analyst at @650Group.

OIF 100G Workshop in Santa Fe Pushing 400/800 Gbps Ports Ahead

The speed at which networking is evolving in the data center is accelerating.  The four-year cycles we saw in the transition from 10 Gbps to 25 Gbps are shrinking, and the 100 Gbps to 400 Gbps port cycle will occur even faster.  The market will move from 56 Gbps SERDES to 112 Gbps SERDES in less than two years. There are a number of reasons why we are in the midst of more rapid technology transitions, but they can be summed up as more intelligent and efficient infrastructure.  The introduction of the Smart NIC and years of accumulated data are allowing the cloud providers to run at extremely high rates of utilization, which is causing network bandwidth and topologies to evolve.

There were several key takeaways from the OIF 100G workshop in Santa Fe.  First, the Cloud providers are pushing all their suppliers to ship 56 Gbps SERDES in high volume today and to move to 112 Gbps SERDES as quickly as possible.  Cloud providers will move to 112 Gbps SERDES before 2021 if the industry can provide enough volume. Second, there are a number of ways to get the industry there faster and at volume, including the use of gearboxes and retimers to reuse existing optics.  What this means for the industry is that there is tremendous opportunity.

Also interesting was the continued discussion of different port densities.  It is possible to increase the density of a 1RU switch or a line card from 32 ports (25.6 Tb/s) to 36 ports (28.8 Tb/s).  While there is some work to be done with certain length optics and power budgets, the prospect of increased port density is promising.  It also opens the door to debate over what constitutes a top-of-rack switch versus an aggregation or end-of-row switch. One could see some cloud providers choose the more traditional 48-port 100 Gbps switch with 400/800 Gbps uplinks instead of using a splitter cable.  Also, by moving the top-of-rack switch to the middle of the row, we could see some unique deployments as 100 Gbps server connectivity arrives. 100 Gbps switches can also make their way into the enterprise as core/aggregation boxes and into traditional SPs, both of which will help drive additional port demand.
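For those who like the radix math spelled out, here is a quick sketch; the 32- and 36-port figures come from the discussion above, while the uplink count in the 48-port layout is purely illustrative:

```python
# Sketch: aggregate switch capacity = port count x port speed.

def capacity_tbps(ports: int, port_gbps: int) -> float:
    return ports * port_gbps / 1000.0

print(f"32 x 800G: {capacity_tbps(32, 800):.1f} Tb/s")  # 25.6 Tb/s, 1RU
print(f"36 x 800G: {capacity_tbps(36, 800):.1f} Tb/s")  # 28.8 Tb/s, denser 1RU

# A more traditional ToR alternative: 48 x 100G server-facing ports plus
# 400/800G uplinks (the uplink count of 4 is illustrative, not from the text).
total = capacity_tbps(48, 100) + capacity_tbps(4, 800)
print(f"48 x 100G + 4 x 800G: {total:.1f} Tb/s")
```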

As we look into demand for 2020/2021, it is also important to remember the size of the cloud, especially the US Top 5 hyperscalers (Amazon, Apple, Facebook, Google, and Microsoft), which grew their DC equipment CAPEX in aggregate by 32% in 2017.  It is likely that in three years (2020), their spend on networking will be nearly twice what it was in 2017. There is also potential, with optics pricing and increased use of DCI, for the spend on networking to be even higher.

By Alan Weckel, @AlanWeckel, Founder and Technology Analyst at @650Group.

112 Gbps SERDES Based Products Around the Corner

2018 has been off to an impressive start, with many 400 Gbps announcements and likely another record year for data center networking growth.  With shipments of 400 Gbps starting in late 2018 and widespread adoption in 2019, it is important to start looking at what is coming next as we look into 2H19 and 2020.  All current 400 Gbps announcements are based on 56 Gbps SERDES, i.e. 8 lanes of 50 Gbps. This is an interim technology; the next important one, already demonstrated both electrically and optically, is single-lane 100 Gbps via a 112 Gbps SERDES.  400 Gbps ports will ultimately come in two waves, with the second wave being the more important one for the market and the enabler of a key building block.

112 Gbps SERDES will be the next big building block for data center networks, and it is coming sooner rather than later.  First, hyperscalers will adopt it as a way to move towards 800 Gbps and beyond. Second, and shortly after this, enterprise networks, such as the campus core, and telco networks, such as backhaul, will benefit from the technology.  56 Gbps does not have these additional market drivers and is more of an incremental technology. In many ways 112 Gbps SERDES is like 28 Gbps SERDES, with widespread adoption beyond the hyperscalers.

The ability to use a gearbox and/or retimers to reuse existing optics, and the ability to rethink how a switch gets built, give the market multiple paths to serial 100 Gbps.  OFC 2018 also highlighted that multiple vendors in the ecosystem are looking to move quickly in this direction as well. Bringing the entire supply chain along will help mitigate the early supply shortages seen with 28 Gbps SERDES in 2016 and 2017.  Keeping in mind that the hyperscalers buy in units of 100K or 1M at a time, early volumes need to be large, with a strong set of suppliers underneath.
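At the lane-count level, the gearbox/retimer distinction is straightforward. Here is a minimal sketch using nominal payload rates; the specific 400G example is illustrative:

```python
# Sketch: a 2:1 gearbox converts N host-side lanes at rate R into 2N
# line-side lanes at R/2; a retimer is 1:1, re-driving lanes at the same
# rate. Rates are nominal payload rates.

def gearbox(host_lanes: int, host_rate_gbps: int, ratio: int = 2):
    return host_lanes * ratio, host_rate_gbps // ratio

def retimer(lanes: int, rate_gbps: int):
    return lanes, rate_gbps  # same lane count and rate, signal re-timed

# 400G port: 4 x 100G host lanes driving existing 8 x 50G optics
lanes, rate = gearbox(4, 100)
print(f"gearbox: host 4 x 100G -> line {lanes} x {rate}G")

# 400G port: 4 x 100G host lanes driving native 4 x 100G optics
lanes, rate = retimer(4, 100)
print(f"retimer: host 4 x 100G -> line {lanes} x {rate}G")
```

The gearbox path is what lets 112 Gbps SERDES switches ship with today's 50G-per-lambda optics while native 100G-per-lambda modules ramp.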

There are many factors in the data center that have caused bandwidth to increase more rapidly in the past several years.  Hyperscalers, using a combination of hardware acceleration (Smart NICs) and software (implementing SDN), are able to get higher utilization out of their infrastructure.  At the same time, hyperscalers are in the early stages of micro data center buildouts, DCI deployments, and Artificial Intelligence and Machine Learning offerings, all of which will quickly consume currently available networking pipes.  The increased demand from these new types of applications will require hyperscalers to move more quickly to next-generation speeds, something that shows up clearly in supply chain conversations as higher-speed offerings hit the market faster and faster.

It is likely that by the end of 2022 half the bandwidth shipping in the data center switching market will come from 112 Gbps SERDES based products.  With the hyperscalers by then almost twice the size they are today, it becomes very clear that the market is eagerly awaiting this next-generation technology.

By Alan Weckel, @AlanWeckel, Founder and Technology Analyst at @650Group.