400 Gbps Platforms Open the Door to Many New Topologies Even Without Widespread Optics Availability

Aggregation Layer Compression Can be Achieved Today

While Cloud Providers have always been willing to rethink their networks outside industry consensus (pioneering disaggregation, implementing white box switches, 25 Gbps, DAC, splitter cables, fixed aggregation and core boxes, etc.), one part of the network has remained consistent. Cloud providers continue to use fiber from the top-of-rack switch all the way to the core, with the common reasoning that distance and reliability in those tiers dictate fiber. Part of this rationale is that the aggregation part of the network is usually dispersed throughout a data center rather than centrally located.

Active Ethernet Cables (AEC), such as HiWire™ AEC, which has already gained the support of a consortium of 25 industry leaders in connectivity and data center technology, open the door to changing this part of the architecture. 400 Gbps optics lag the availability of 400 Gbps switches by almost a year, and the delay in optics is driving an increasing amount of traffic and tiers into existing 100 Gbps switches. AEC is similar in quality and reliability to fiber, but comes in a smaller-diameter copper form factor at a lower cost.

Cloud Providers of all sizes, enterprises, and Telco service providers can all benefit from rethinking the aggregation layers of their networks, especially now when 400 Gbps optics are still months away. Centralizing the aggregation switches in one part of the data center, and using AEC and 400 Gbps to connect them, allows for a reduction in the number of switches and the number of tiers in a data center. This can lead to significant savings in both OPEX and CAPEX. Customers worried about the blast radius can deploy multiple aggregation racks.

CAPEX savings come from reducing the number of switches and optics needed. OPEX savings come from reducing the number of ports and their power draw; given the industry's need for power savings, every little bit helps.

Customers today can get immediate cost savings by switching future builds to this new type of topology and decreasing their use of 400G optics, which will be expensive and in limited supply. Another advantage is that a customer can use the available optics only where needed, allowing broader adoption of 400G and 12.8 Tbps.

A single 12.8 Tbps switch can replace six or more 3.2 Tbps switches, as the higher radix requires fewer inter-switch links than low-radix 3.2 Tbps switches need at the same tier. The 6:1 compression ratio is compelling, especially when power is taken into account comparing one 12.8 Tbps switch to six fully loaded 3.2 Tbps switches. AEC helps bridge the gap until 400G optics are available, remains a good alternative even once 400G optics arrive, and, with a roadmap to 800G, will prove to be a long-term copper alternative to optics in the data center.
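As a sanity check on the compression claim, the port math can be sketched in a few lines. The configurations below (32x400G ports per 12.8 Tbps switch, 32x100G ports per 3.2 Tbps switch, and the share of ports a low-radix switch burns on inter-switch links) are illustrative assumptions, not vendor specifications:

```python
# Illustrative port math behind the 6:1 aggregation compression claim.
# Assumed configurations (not vendor specifications):
#   3.2 Tbps switch:  32 ports x 100 Gbps
#   12.8 Tbps switch: 32 ports x 400 Gbps
small_capacity = 32 * 100   # 3,200 Gbps
big_capacity = 32 * 400     # 12,800 Gbps

# Raw capacity alone gives only a 4:1 ratio...
print(big_capacity / small_capacity)  # 4.0

# ...but low-radix switches also burn ports on links to their peers at
# the same tier.  If each 3.2 Tbps switch reserves, say, a third of its
# ports for inter-switch links, its usable capacity drops accordingly:
usable_small = small_capacity * (1 - 1 / 3)   # ~2,133 Gbps usable
print(big_capacity / usable_small)            # ~6.0 -> the 6:1 ratio
```

The one-third overhead figure is a placeholder; the actual share depends on the topology, but the direction of the arithmetic is what makes the higher radix compelling.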

By Alan Weckel, Founder and Technology Analyst at 650 Group.

HiWire Consortium to Accelerate Adoption of 400 Gbps Switch Platforms

Next-generation Interconnect Technology Maintains Cost and Power Benefits of Copper at Higher Speeds

DAC has always had a tenuous relationship with the data center. Customers love the low cost, but it has always been the least reliable option and is limited in distance. Quality of manufacturing can have a big impact on the actual distance a cable can support. Reliability is one of the reasons enterprises love 10GBase-T so much compared to the splitter cables that hyperscalers use.

Today, 25 companies founded the HiWire™ Consortium to help accelerate and drive the industry migration towards 400 Gbps and address the need for high-speed server access over copper. Some of the largest suppliers of networking to the Cloud joined together to fill a gap in data center networking. Today, the distances that DAC can support reliably continue to shrink as servers move towards higher speeds, while the diameter of the DAC cable continues to grow. In some fully loaded scenarios, the cables on a switch can take up much of the faceplate, complicating top-of-rack installation and blocking airflow.

HiWire Active Electrical Cables (AECs) help extend the life of copper for server access technology. AECs are a new type of copper cable that competes against DAC and Active Optical Cables (AOCs). With integrated gearbox, retimer, and FEC functionality, AECs also allow for speed shifting, PAM4-to-NRZ mode conversion, and high-integrity, lossless connectivity within the cable, which can enable the industry to look at more efficient network topologies. For example, the cable can convert from 400G PAM4 to 4x100G NRZ, or a 100G SFP-DD port can be split into 50 Gbps ports on the server NIC; the latter is especially interesting with new 7.2-8.0 Tbps switches coming to market with SFP-DD ports.
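The lane math behind those conversions is simple to sketch. Nominal (payload) lane rates are used below and FEC/encoding overhead is ignored; this is an illustration of the arithmetic, not a specification:

```python
# Nominal lane rates, ignoring FEC and encoding overhead:
# PAM4 carries 2 bits per symbol, NRZ (PAM2) carries 1.
PAM4_LANE_GBPS = 50   # ~25 GBd x 2 bits/symbol
NRZ_LANE_GBPS = 25    # ~25 GBd x 1 bit/symbol

# 400G PAM4 host port -> four 100G NRZ breakouts:
host_side = 8 * PAM4_LANE_GBPS        # 8 lanes of 50G PAM4 = 400G
breakout = 4 * (4 * NRZ_LANE_GBPS)    # 4 ports x 4 lanes of 25G NRZ
assert host_side == breakout == 400

# 100G SFP-DD port -> two 50G server NIC ports:
sfp_dd = 2 * PAM4_LANE_GBPS           # 2 lanes of 50G PAM4 = 100G
nic_side = 2 * 50                     # two 50 Gbps NIC ports
assert sfp_dd == nic_side == 100
```

The gearbox/retimer silicon inside the AEC is what reconciles the differing lane counts and modulation on the two ends of the cable.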

The HiWire Consortium is dedicated to providing data center Ethernet customers something that consumers already enjoy – plug and play functionality. In the consumer world, if you pick up a cable with a USB-C mark or an HDMI mark, it just works – no evaluation, no tinkering. The USB community accomplishes this through two groups – the USB Promoters Group, which does the technical heavy lifting of developing electrical and mechanical standards, and the USB Implementers Forum (USB-IF), which manages a 3rd-party test infrastructure and licenses the USB mark to those cables that meet the requirements.

In the Ethernet world, the work in IEEE and the many MSAs is analogous to the USB Promoters Group, but we have nothing like the USB-IF – this is the gap the HiWire Consortium has been formed to fill: to assemble the building blocks from IEEE and the many MSAs into a specific set of implementations that meet user needs, then to enable a 3rd party to test AECs to this specification and license a trusted mark. The goal is to push much of the qualification burden of integrators, OEMs, and ODMs upstream to AEC manufacturers and ensure a consistent, high-quality plug & play product experience.

HiWire is interesting for the market as it can be used today in several use cases ahead of 400G optics availability. It can provide a path forward with similar capabilities to 800 Gbps and the 25.6 Tbps switches that are about to come to market. Cloud Providers have the opportunity to stay with the copper technologies and splitter cables they have used for nearly a decade. Not having to move to fiber also helps the industry concentrate on on-board optics and silicon photonics without the need for an interim technology, since DAC distances shrink too much as servers move to higher speeds.

Given the rapid adoption of new servers for Artificial Intelligence (AI) and Machine Learning (ML) as well as the use of PCI Gen 4 and delay in 400 Gbps optics availability, we expect a lot of interest in AEC cables.

By Alan Weckel, Founder and Technology Analyst at 650 Group.

Google Stadia – Next Gen Gaming Utilizing Cloud Compute

Another Important Workload Moves Towards the Cloud Pushing for Higher Speed Networking


Google recently announced its gaming Cloud service, Stadia. Stadia is the start of an important trend of moving the rendering of games and other video content to the Cloud and away from devices. As the technology evolves, the Cloud will be capable of 8K video games that are seamless to most users.

To help reduce latency and ensure a premium gaming experience, Google's approach includes about 7,500 edge nodes and a specialized controller that talks directly to Google's Cloud. Stadia represents a dramatic and significant shift in gaming, one that will allow casual gamers to play on familiar hardware devices without having to buy a new dedicated gaming system. It will also allow games to be updated without a user downloading patches. This will potentially open the market to additional game developers, and it could also affect the fundamental business structure of the industry by moving it towards a subscription model. A Cloud approach will also let developers roll out or try different versions of games regionally. This could trigger complementary, highly targeted advertising revenue opportunities. Imagine, for example, a pizza shop in the game being rendered as a local business for advertising purposes.

Bringing this back to networking, the move towards Cloud rendered gaming is another new use case that will put additional networking demand and ports into the Cloud.  The bandwidth and GPU intensity will only increase as developers and Google learn, grow and optimize the platform. The Cloud will continue to move rapidly towards higher speed technologies.  This is a prime use case of 400 Gbps and why 800 Gbps is so important and needs to follow quickly.  The networking industry will not only enjoy an increase in demand for overall bandwidth but will also benefit from the secondary high-speed network which will exist to connect these gaming clusters together internally. 

Cloud rendered gaming will create incremental new high-speed TAM for networking suppliers. It is only one of several new use cases that Cloud companies can engage as their infrastructure becomes more robust and ubiquitous, and one that drives both core and edge computing.

Credo is committed to leading the networking industry’s transition to higher speed by being first to market with next generation SerDes technologies.  Credo’s 112Gbps SerDes are being deployed in a variety of forms including IP, chiplets, line card components, optical components, and Active Ethernet cables. The Credo technology supports nearly every connection made within a Data Center and provides the foundation to move to the 800G performance node.


By Alan Weckel, Founder and Technology Analyst at 650 Group, and Jeff Twombly, VP of Marketing & Business Development at Credo.

7 nanometer and Chiplets to Drive Ethernet Switch Market in 2019

Will Enable Second Generation 400 Gbps Capable of Longer Distances

In late 2018, Barefoot Networks publicly announced the first ever 7nm-plus-chiplet switch ASIC product – the Tofino 2 chip. This is the start of a market transformation, as Ethernet Switch design will begin to embrace disaggregation (such as the chiplet type of design) much like the rest of the data center market. We will see strong product demonstrations at OFC, OCP, and numerous other shows throughout 2019 as the ecosystem gears up for this important architecture shift. With 7nm available from major foundries, the market will begin to move in this heterogeneous direction as a way to higher-capacity switch silicon. Not only will this pave the way for 25.6 Tbps and 51.2 Tbps fabrics, we will also see increased product agility, lower cost, better power, and more product offerings from chiplet design.

It is important to note that switch ASICs have had both the analog and logic designs on the same semiconductor chip for over a decade, which forced designers to shrink the analog portion of the design to the same process geometry as the logic design. However, since the analog part of chip design is fundamentally different from the logic design, it moves on a different design timetable. Chiplets have a huge advantage in that the analog part of the design does not have to shrink at the same pace as the logic component. This disaggregation of technology allows older, proven analog chip components to be packaged along with cutting-edge process-geometry logic chips. As we saw in the Tofino 2 chip, the analog component is provided by a different vendor and is on a 28nm or 16nm process geometry, while the Barefoot logic is in 7nm.

Second generation 12.8 Tbps fabrics (defined as 7nm and chiplet architectures vs. 16nm single chip solutions) will also enable Ethernet Switches to take on metro deployments that today are primarily served by stand-alone optical transport gear.  This will significantly increase the addressable market for Ethernet Switch products and vendors, something that is generally a good thing for the Ethernet ecosystem, customers, and industry.

The speed of product innovation in 2019 will be fast-paced. With 56 Gbps SERDES chips just now starting to ship, we will see many next-generation 112 Gbps SERDES announcements in 2019. This, in turn, will help set up 2020 to be a transition year to the shipment of higher speeds, just in time to meet the demand of new high-bandwidth workloads such as Artificial Intelligence (AI) and video game streaming. AI will continue to see massive investment dollars in 2019 and beyond, increasing demands on the network, while game streaming will come to life as Microsoft and Sony deliver their next consoles. These new workloads will significantly impact the network and will drive changes in network evolution, data center speeds, and network programmability.

One of the first use cases for 400 Gbps will be in the aggregation/core part of the network.  Cloud providers will look at 400 Gbps and above for connecting their data center properties together.  This will cause Data Center Interconnect (DCI) to become a larger part of Cloud CAPEX in 2019 and 2020.

By Alan Weckel, Founder and Technology Analyst at 650 Group.

100G per Lambda Optics Paving Fast Path to 25Tbps Switches with 100G Electrical I/O

With increased demand for global network bandwidth, the hyperscale cloud providers are in a position to consume the next generation of higher-capacity, 100G SerDes enabled switches as soon as they can get their hands on them. Cloud providers are already operating at extremely high rates of utilization and would welcome a higher-performance, more intelligent, and more efficient infrastructure boost. The key technology building blocks are in place to prototype 25.6 Tbps switch-based networks in 2H2019 and ramp in 2020. This is a year or two ahead of popular thought.

Today's ramp of 100G per lambda optics, which is enabling 50G SerDes based 12.8 Tbps switches in the form of DR4/FR4 optical modules, is laying the groundwork for a rapid transition to 25.6 Tbps switches based on 100G SerDes technology. To understand how rapid this transition could be, it is important to look back at the optics transitions from 10G to 100G. The move to 25G SerDes enabled switches required the optics to move from 10G to 25G per lambda. This transition was challenging and caused a two-year delay in ramping 3.2 Tbps switch enabled networks. The next big move, to 100G per lambda for the ramp of 12.8 Tbps 1RU switches, requires a doubling in baud rate and a change in modulation (PAM2 to PAM4). As a result, optics are again the bottleneck to mass deployment of the latest generation of 12.8 Tbps switches. But this strategic, aggressive move to 100G per lambda as a mainstream technology in 2019 creates a unique inflection point: optics leading next-generation switch silicon for the first time in recent memory. Moving from in-module 50G-to-100G gearboxes to 100G-to-100G retimers to match the switch single-lane rate is generally recognized as straightforward.
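The per-lambda arithmetic behind that transition can be sketched as baud rate times bits per symbol. Nominal figures are used below; real line rates carry FEC overhead (for example, roughly 53.125 GBd for 100G PAM4), so this is an illustration rather than a spec:

```python
# Per-lambda rate = baud rate x bits per symbol (nominal figures,
# ignoring FEC overhead on the actual line rates).
nrz_25g = 25 * 1     # ~25 GBd NRZ (PAM2): 25G per lambda
pam4_100g = 50 * 2   # ~50 GBd PAM4:       100G per lambda

# Doubling the baud rate AND the bits per symbol quadruples the lambda:
assert pam4_100g == 4 * nrz_25g

# A DR4/FR4 module aggregates four 100G lambdas into one 400G port:
assert 4 * pam4_100g == 400
```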

The next step is moving to 100G end-to-end connectivity. As discussed, the fundamental 100G per lambda optical PMDs are in place. In parallel, Credo has been publicly demonstrating low-power, high-performance 100G single-lane electrical SerDes manufactured in mature 16nm technology since December 2017. We as an industry simply need to agree on some common-sense items, such as VSR/C2M reach and 800G optical module specifications, and execute on a few strategic silicon tape-outs in 1H2019 to bring 25.6 Tbps switches into the light.

In my next blog I will lay out the foundational silicon steps to make 100G single-lane, end-to-end connectivity a mainstream reality in 2020. Stay tuned…

By Jeff Twombly, VP of Marketing and Business Development at Credo, @twoms63.

25/100 Gbps Record Port Setting Shipments in 2Q18, 56Gbps SerDes ramping, path to 112Gbps SerDes in sight

2Q18 saw record-setting shipments for both 25 Gbps and 100 Gbps ports. In the Data Center Ethernet Switch market, 100 Gbps is now the largest contributor to revenue, surpassing 10 Gbps. This is an important milestone, as 40 Gbps was never able to exceed 10 Gbps, nor to break the $1B-a-quarter milestone. This ends an almost ten-year run of 10 Gbps as the dominant technology in the data center.

Strength in 25/100 Gbps was broad-based in 2Q18. Besides record-setting shipments into the US hyperscalers, Chinese Cloud providers' demand was robust for the first time, and enterprise demand continued to ramp.

What is coming next is more exciting. In the past few months, multiple switch ASIC vendors began sampling and shipping next-generation 12.8 Tbps fabrics based on 56 Gbps SERDES. Some hyperscalers are deploying these as 200 Gbps ports, while others are waiting to deploy 400 Gbps later this year. A year from now, 400 Gbps will set records in port shipments and revenue compared to previous technologies. We expect formal announcements by traditional OEMs to lag white box shipments by several quarters, beginning at the end of 2018.

In 2Q18, Cloud equipment CAPEX grew so significantly that spending on DC equipment alone was higher than total CAPEX just two years ago. This is an unprecedented investment in networking, compute, and storage, and these same cloud hyperscalers are currently investing in 400 Gbps. Simply put, CAPEX and networking spend will be larger in the 400 Gbps ramp than in the 100 Gbps ramp. At the same time, server utilization and new Artificial Intelligence (AI) and Machine Learning (ML) workloads are increasing bandwidth demand and making a very robust network more important to Cloud design in the coming several years. These trends will help drive Cloud server connectivity from today's 25 Gbps to 100 Gbps over the next few years. The time for each Ethernet upgrade cycle is compressing, which can cause some pressure on suppliers in the short term, but it is an overall positive for the Ethernet market, as the installed base has multiple reasons to upgrade its networking infrastructure over the next two years.

Significant progress is being made throughout the 400 Gbps supply chain: not only are optics suppliers ramping, but the ecosystem is quickly moving toward 112 Gbps SERDES as well as chip disaggregation. 400 Gbps will quickly transform from 8x56 Gbps SERDES to 4x112 Gbps SERDES, with some hyperscalers already planning for 800 Gbps port speeds. We expect hyperscalers to take advantage of future 112 Gbps SERDES for server access (NICs and cables), as it will be a key building block for several generations of networking products.

By Alan Weckel, @AlanWeckel, Founder and Technology Analyst at @650Group.

100G Workshop in Santa Fe Pushing 400/800 Gbps Ports Ahead

The speed at which networking is evolving in the data center is accelerating. The four-year cycles we saw in the transition from 10 Gbps to 25 Gbps are shrinking, and the 100 Gbps to 400 Gbps port cycle will occur even faster. The market will move from 56 Gbps SERDES to 112 Gbps SERDES in less than two years. There are a number of reasons why we are in the midst of more rapid technology transitions, but it can be summed up as more intelligent and efficient infrastructure. The introduction of the Smart NIC and years of data are allowing the cloud providers to run at extremely high rates of utilization, which is causing network bandwidth and topologies to evolve.

There were several key takeaways from the OIF 100G workshop in Santa Fe. First, the Cloud providers are pushing all their suppliers to ship 56 Gbps SERDES today in high volumes and to move towards 112 Gbps SERDES as quickly as possible. Cloud providers will move to 112 Gbps SERDES before 2021 if the industry can provide enough volume. Second, there are a number of ways to get the industry there faster and at volume, including the use of gearboxes and retimers to take advantage of existing optics. What this means for the industry is tremendous opportunity.

What was also interesting was the continued discussion of different port densities. It is possible to increase the density of a 1RU switch or a line card from 32 ports (25.6 Tb/s) to 36 ports (28.8 Tb/s). While there is some work to be done with certain length optics and power budgets, the prospect of increased port density is promising. It also opens the door to debate over what constitutes a top-of-rack switch vs. an aggregation or end-of-row switch. One could see some cloud providers choose the more traditional 48-port 100 Gbps switch with 400/800 Gbps uplinks instead of using a splitter cable. Also, by moving the top-of-rack switch to the middle of the row, we could see some unique deployments as 100 Gbps server connectivity arrives. 100 Gbps switches can also make their way into the enterprise as a core/aggregation box and into traditional SPs, both of which will help drive additional port demand.
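The density figures quoted above follow directly from the per-port speed; a quick sketch, assuming 800 Gbps ports (which is what the quoted Tb/s totals imply):

```python
# Faceplate density of a 1RU switch at an assumed 800 Gbps per port:
PORT_GBPS = 800
for ports in (32, 36):
    print(f"{ports} ports -> {ports * PORT_GBPS / 1000} Tb/s")
# 32 ports -> 25.6 Tb/s
# 36 ports -> 28.8 Tb/s
```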

As we look into demand for 2020/2021, it is also important to remember the size of the cloud, especially the US Top 5 hyperscalers (Amazon, Apple, Facebook, Google, and Microsoft), which grew their DC equipment CAPEX in aggregate by 32% in 2017. It is likely that in three years (2020), their spend on networking will be nearly twice what it was in 2017. There is also potential, with optics pricing and the increased use of DCI, for the spend on networking to be even higher.

By Alan Weckel, @AlanWeckel, Founder and Technology Analyst at @650Group.

112 Gbps SERDES Based Products Around the Corner

2018 has been off to an impressive start, with many 400 Gbps announcements and likely another record year for data center networking growth. With shipments of 400 Gbps starting in late 2018 and widespread adoption in 2019, it is important to start looking at what is coming next as we look into 2H19 and 2020. All current 400 Gbps announcements are based on 56 Gbps SERDES, i.e., 8 lanes of 50 Gbps. This is an interim technology; the next important technology, which has already been demonstrated electrically and optically, is single-lane 100 Gbps via a 112 Gbps SERDES. 400 Gbps ports will ultimately come in two waves, with the second wave being the more important one for the market and the enabler of a key building block.
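The two waves differ only in how the lane math reaches 400 Gbps; a minimal sketch:

```python
# Two electrical implementations of a 400 Gbps port:
first_wave = {"lanes": 8, "gbps_per_lane": 50}    # 56G-class SERDES
second_wave = {"lanes": 4, "gbps_per_lane": 100}  # 112G-class SERDES

for wave in (first_wave, second_wave):
    assert wave["lanes"] * wave["gbps_per_lane"] == 400

# The same 112G-class lane also scales to 800 Gbps with 8 lanes:
assert 8 * 100 == 800
```

The second wave halves the lane count per port, which is what frees faceplate and SerDes resources for 800 Gbps and beyond.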

112 Gbps SERDES will be the next big building block for data center networks, and it is coming sooner rather than later. First, hyperscalers will adopt it as a way to move towards 800 Gbps and beyond. Second, and shortly after, enterprise networks, such as the campus core, and telco networks, such as backhaul, will benefit from the technology. 56 Gbps does not have these additional market drivers and is more of an incremental technology. In many ways 112 Gbps SERDES is like 28 Gbps SERDES, with widespread adoption beyond the hyperscalers.

The ability to use a gearbox and/or retimers to reuse existing optics, and the ability to rethink how a switch gets built, give the market multiple paths to serial 100 Gbps. OFC 2018 also highlighted that multiple vendors in the ecosystem are looking to move quickly in this direction as well. Bringing the entire supply chain along will help mitigate the early supply shortages seen with 28 Gbps SERDES in 2016 and 2017. Keeping in mind that hyperscalers buy in units of 100K or 1M at a time, early volumes need to be large, with a strong set of suppliers underneath.

There are many factors in the data center that have caused bandwidth to increase more rapidly in the past several years. Hyperscalers, using a combination of hardware acceleration (Smart NICs) and software (implementing SDN), are able to get higher utilization out of their infrastructure. At the same time, hyperscalers are in the early stages of micro data center buildouts, DCI deployments, and Artificial Intelligence and Machine Learning offerings, all of which will quickly consume currently available networking pipes. The increased demand from these new types of applications will require hyperscalers to move more quickly to next-generation speeds, something easily picked up in supply chain conversations from the increasing speed at which higher-speed offerings are hitting the market.

It is likely that by the end of 2022, half the bandwidth shipping in the data center switching market will come from 112 Gbps SERDES based products. With the hyperscalers being almost twice the size they are today, it becomes very clear that the market is eagerly awaiting this next-generation technology.

By Alan Weckel, @AlanWeckel, Founder and Technology Analyst at @650Group.