The evolution of network architectures from mainframes to cloud
Once upon a time, mainframes were the epicentres of the digital universe. Monolithic applications ran over direct I/O connections, keeping everything tightly coupled. Then came the birth of Local Area Networks (LANs), spearheaded by the likes of Banyan VINES, which allowed CPUs in separate chassis to interact. Layered standards such as the OSI model then set the stage for the decentralised Web era. Today's landscape is filled with generic switches and servers making up both cloud and bare-metal clusters, marking a seismic shift from application-specific servers to virtualisation.
In the traditional Access-Aggregation-Core (AAC) design, the focus was on hardware, and the switch-to-server ratio was exceedingly high. These networks relied primarily on hardware-based packet switching, which brought its own set of problems, such as congestion and address conflicts. In practice, the AAC layout became a bottleneck rather than a facilitator.
The contemporary cloud network introduces a spine-and-leaf architecture that uses inexpensive, uniform switches and servers. Every leaf switch connects to every spine switch, dramatically reducing the risks of congestion. The spotlight shifts away from hardware limitations and towards resolving computational challenges, epitomising the agile and scalable nature of modern cloud infrastructures.
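A quick way to see why uniform leaf-and-spine switches matter is to do the capacity arithmetic for a single leaf. The sketch below is a back-of-the-envelope sizing exercise; the port counts and speeds are illustrative assumptions, not figures from the text.

```python
# Back-of-the-envelope sizing for a two-tier spine-and-leaf fabric.
# All port counts and speeds below are illustrative assumptions.

def leaf_oversubscription(server_ports: int, server_gbps: float,
                          uplinks: int, uplink_gbps: float) -> float:
    """Ratio of server-facing (downlink) to spine-facing (uplink) capacity
    on one leaf switch. 1.0 means non-blocking; higher means oversubscribed."""
    return (server_ports * server_gbps) / (uplinks * uplink_gbps)

# A hypothetical leaf with 48 x 25 GbE server ports and 6 x 100 GbE uplinks:
ratio = leaf_oversubscription(48, 25, 6, 100)
print(f"oversubscription: {ratio:.1f}:1")  # 1200 Gb/s down vs 600 Gb/s up
```

Because every leaf reaches every spine, adding spines raises uplink capacity uniformly, which is why operators can dial this ratio towards 1:1 with commodity hardware rather than bigger boxes.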
Modern applications generate heavy server-to-server (east-west) traffic, something the AAC layouts never anticipated. The result was a host of challenges: oversubscribed uplinks, unpredictable latency between servers, and redundant links sitting idle under spanning tree.
Switch disaggregation is the mantra for effective cloud network designs. By decoupling the network operating system (software) from the switching silicon (hardware), cloud architects pave the way for more standard and budget-friendly switch equipment. This results in more agile networks, simpler upgrades, and an almost invisible network presence.
Despite these rapid transformations, one constant remains: routing. While the basic principle—moving packets from source to destination using IP addresses—is simple, the devil is in the details. Packet forwarding typically takes place to the next hop rather than directly to the final destination, making the routing algorithm's efficiency crucial.
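The hop-by-hop principle can be made concrete with a toy longest-prefix-match lookup: the router consults its table, picks the most specific matching prefix, and forwards to that route's next hop rather than to the final destination. The routing table below is invented for illustration, using documentation address ranges.

```python
import ipaddress

# Toy longest-prefix-match lookup. Prefixes and next-hop addresses are
# illustrative only (drawn from the RFC 5737 documentation ranges).
ROUTES = {
    "0.0.0.0/0":         "203.0.113.1",  # default route
    "198.51.100.0/24":   "203.0.113.2",
    "198.51.100.128/25": "203.0.113.3",  # more specific prefix wins
}

def next_hop(dst: str) -> str:
    """Return the next-hop address for a destination: the packet is handed
    to the most specific matching route, not sent straight to dst."""
    addr = ipaddress.ip_address(dst)
    best = max((ipaddress.ip_network(p) for p in ROUTES
                if addr in ipaddress.ip_network(p)),
               key=lambda n: n.prefixlen)
    return ROUTES[str(best)]

print(next_hop("198.51.100.200"))  # falls in the /25 -> 203.0.113.3
print(next_hop("198.51.100.5"))    # only in the /24 -> 203.0.113.2
print(next_hop("192.0.2.9"))       # default route   -> 203.0.113.1
```

Real routers implement this lookup in hardware (TCAM or trie structures), but the selection rule, most specific prefix first, is exactly the one sketched here.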
In today's cloud networks, multicast routing is becoming increasingly relevant. It allows a single packet to serve multiple servers, but only those interested in receiving it. This eliminates the inefficiency found in broadcast packets, which overwhelm all nodes in the network. With multicast, only the designated Network Interface Cards (NICs) process the packets, making it a perfect fit for scalable operations like software updates or database refreshes.
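The broadcast-versus-multicast distinction above can be sketched as a toy delivery model: broadcast interrupts every NIC on the segment, while multicast reaches only the NICs that joined the group. The `Fabric` class and NIC names are hypothetical, invented purely to illustrate the semantics.

```python
from collections import defaultdict

# Toy model of delivery semantics: broadcast hits every NIC, multicast
# only the NICs that subscribed. Class and host names are illustrative.
class Fabric:
    def __init__(self):
        self.nics = set()
        self.groups = defaultdict(set)  # group address -> subscriber NICs

    def attach(self, nic: str):
        self.nics.add(nic)

    def join(self, nic: str, group: str):
        # In a real network this would be an IGMP/MLD membership report.
        self.groups[group].add(nic)

    def broadcast(self, payload: str) -> set:
        return set(self.nics)           # every NIC must process the packet

    def multicast(self, group: str, payload: str) -> set:
        return set(self.groups[group])  # only interested NICs see it

fabric = Fabric()
for nic in ("web1", "web2", "db1", "db2"):
    fabric.attach(nic)
fabric.join("db1", "239.0.0.10")        # only the databases subscribe
fabric.join("db2", "239.0.0.10")

print(sorted(fabric.broadcast("update")))                # all four NICs
print(sorted(fabric.multicast("239.0.0.10", "update")))  # just db1, db2
```

This is why multicast suits one-to-many jobs like software updates: the sender transmits once, and uninterested servers never burn a cycle on the packet.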
For example, IPv6 has embraced multicast at its core, replacing broadcast-based ARP with the Neighbour Discovery protocol. This not only makes the network more efficient but also prepares it for the scaling challenges that future technologies might bring.
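Neighbour Discovery avoids broadcast by sending its address-resolution query to a solicited-node multicast address, which is formed by appending the last 24 bits of the target address to the well-known prefix ff02::1:ff00:0/104 (RFC 4291). A minimal sketch of that derivation:

```python
import ipaddress

# Derive the solicited-node multicast address for an IPv6 unicast address:
# the last 24 bits of the target are OR-ed onto the prefix ff02::1:ff00:0/104.
SOLICITED_PREFIX = int(ipaddress.IPv6Address("ff02::1:ff00:0"))

def solicited_node(addr: str) -> str:
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    return str(ipaddress.IPv6Address(SOLICITED_PREFIX | low24))

# Example addresses are illustrative (documentation and link-local ranges).
print(solicited_node("2001:db8::1"))               # ff02::1:ff00:1
print(solicited_node("fe80::0202:b3ff:fe1e:8329")) # ff02::1:ff1e:8329
```

Only hosts whose addresses share those low 24 bits join that group, so an address-resolution query wakes a handful of NICs instead of every node on the link.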