This book aims to provide insight into the OpenFlow protocol and its fundamentals by walking step by step through the technology.
Once OpenFlow is described, I move on to consider Software Defined Networking (SDN) in terms of controllers and their applications. As a Network Engineer who configures networks every day, this is the book that I would want to read so I could learn a bit about how and why OpenFlow/SDN works, and what impact it will have on the marketplace, vendors and the next generation product cycle.
It's early in the technology cycle for OpenFlow/Software Defined Networking, and there are few products or even companies implementing these technologies. Even so, the impact on the networking market and vendor products has been significant in terms of future products and strategy.
This book looks at the fundamentals of OpenFlow without any particular product or technology approach in mind. It is a vendor-neutral look at the good and bad of OpenFlow networking. We discuss hardware limitations, the impact of silicon designs, and what to consider when evaluating OpenFlow.
This book assumes that you have some networking experience and knowledge, such as TCP/IP, Ethernet and broad networking concepts, and some familiarity with the day-to-day operation of networks.
Why OpenFlow is Important - A Brief History
It’s likely that many people have not heard about OpenFlow and Software Defined Networking, and that’s not surprising. The technology seems to have arrived with a bang from nowhere, and many people are clamouring for it with unrestrained enthusiasm.
Why are we talking about OpenFlow at all?
The Data Networking industry has been mostly static since the early 2000’s, when major technologies like OSPF and BGP in routing, MPLS for tag switching and cross-bar fabrics for Ethernet switches were all in their early stages. The last decade has seen these technologies mature and move into common usage, but *not much else*. Performance using 1 Gigabit Ethernet was acceptable, and there was little customer demand for 10 Gigabit Ethernet.
Most people agree that Networking is responding to three infrastructure changes:
The first driver is Server Virtualisation, which means that a single virtualisation server (such as VMware ESXi) can host many “guest” servers. While ten virtual servers per host is typical today, the density is rapidly increasing. This led to two upgraded (not new) requirements: speed and reliability. Firstly, ten servers sharing one physical cable required more speed and greater bandwidth (which are not the same) and drove the demand for 10 Gigabit Ethernet.
The second requirement is reliability. Data Centres have relied on Spanning Tree and routing for path control of Ethernet frames and IP packets respectively, but their reliability is relatively poor. Spanning Tree can take five to fifteen seconds to converge after a failure, and routing protocols like OSPF can take anywhere from five seconds up to thirty seconds in traditional installations to converge around a topology change. When a path change for a single physical server can impact twenty or more guests, the scale of that impact is greatly magnified.
The second driver is End Point mobility, which virtualisation makes possible. Networking was designed for reasonably static endpoints - desktop computers were wired to the wall, servers were installed in fixed locations. Increasingly, end user computing is dynamic - there are more phones and tablets using WiFi and constantly moving around the network, and there are more laptops than ever. Even the data centre, home of the “non-movable server”, has now changed since server virtualisation arrived. There is a lot of demand for change in networking to support mobility and improved reliability.
The final driver is Scalability. In simple terms, there are more computers than ever before. The ‘Cloud’ data centre is enabling enormous increases in server density, storage is moving to Ethernet and IP networking, and users can easily have three devices, such as a tablet, a phone and a laptop.
Table of Contents (as at 20130818)