
Archive for the ‘NG Networking’ Category

In a bit pipe world, do we need telecom standards?

GSMA, ETSI, etc. have been defining standards for the telecom world for years. However, outside of the telecom industry these standards have found little or no adoption. In a world where telecom operators are fast becoming bit pipes, do we really need telecom standards? Why can't the telecom industry just use SIP, WebRTC, REST, etc., like everybody else?

Current telecom systems assume that calls and SMS need to be billed for. What would happen if the starting point were that data insights, network apps and connectivity are the only things that get billed? Connectivity is likely to become unlimited, or close to it, over time. New revenue would have to come from selling data insights, either individually with consent or aggregated and anonymised, and from apps that run inside the network: on CPEs, DSLAMs, mobile base stations, etc. So for the purpose of this blog, let's focus on a world where calls and SMS can no longer be charged for and connectivity is close to unlimited for most normal use cases.

To move bits quickly through a network, you want the smallest possible number of protocol converters, so using many different standards makes things slow and expensive. Additionally, telecom operators have for years overpaid for standards and the software that supports them, often without ever using them. Finally, implementing a standard is very costly because often only 20% of the functionality is really used, while the other 80% needs to be there just to pass compliance tests.

The nonstandard or empty networking appliance
In a world where software can define networks and any missing functionality is just a networking app away, it would be far better to start from an empty networking appliance, i.e. networking hardware without software, and then buy only what you need. If you need a standard, you might buy a minimal or light implementation, the 20% that is actually used, and see if you can live with it. Chances are you will still have too many unused functions.

Facebook open sourced its top-of-rack networking solution and, surprise surprise, the interface is Thrift based. Thrift is used across all of Facebook's other services as a standard high-throughput interface. Google most likely uses Protocol Buffers. Apache Avro would be another alternative, and the most openly licensed of them all. So instead of focusing on slow public standards, it would be better to standardise on a highly throughput-optimised interface technology. Inside a telecom operator this would work very efficiently, and for those systems that talk to legacy or outside-world systems, adding a standard is just a networking app away. This would simplify a telecom network substantially, saving enormous costs and accelerating the speed of change: less code needs to be written and maintained, and integrations become easier.

These ideas all assume there are actual appliances that are software defined. As soon as general-purpose compute becomes fast enough for heavy data plane traffic, the reality will be software defined networking in a virtualized way, with autoscaling and all the other cloud goodies. However, this reality is unfortunately still some years off. In the short run, virtualization of the control plane plus software defined networking appliances (SDNA) for the data plane is the most realistic option…
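To make the interface argument concrete, here is a minimal sketch of why compact binary framing, the kind of wire format that Thrift, Protocol Buffers and Avro generate, beats verbose text protocols for moving bits fast. The record layout and field names are illustrative only, not taken from any of these projects.

```python
import json
import struct

def encode_counter(port_id, rx_bytes):
    """Pack a port counter as a fixed 12-byte record: u32 port id, u64 byte count."""
    return struct.pack("!IQ", port_id, rx_bytes)

def decode_counter(frame):
    """Unpack the 12-byte record back into (port_id, rx_bytes)."""
    return struct.unpack("!IQ", frame)

binary = encode_counter(7, 1_234_567_890)
text = json.dumps({"port_id": 7, "rx_bytes": 1_234_567_890}).encode()

# The binary frame is a fixed 12 bytes; the JSON equivalent is ~3x larger
# and still needs to be parsed character by character on the receiving side.
print(len(binary), len(text))
```

Multiply that size and parsing overhead by billions of chatty Internet of Things messages and the case for a throughput-optimised interface inside the operator's network makes itself.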


The future of networking

Today's networks are mainly based on hardware acceleration of a limited set of low-level routing rules. This was perfect in a well-established world of a few well-known protocols traveling mainly from servers to clients. The new reality is that software has become elastic, data has grown big, Internet of Things clients are growing exponentially and are very chatty, streaming 4K video will be the norm, and so on. Networks now need to become software defined if they want to survive this new reality.

Is SDN on top of private cloud the solution?
The answer is: only partially. There are just too many data-plane-heavy use cases that still need hardware acceleration in the short term. Too much networking software is not ready for scale-out, and too many layers of hypervisors and other complex abstractions make current solutions too slow.

The solution is divide and conquer. Just as transactional databases are still around after a wave of NoSQL, NewSQL, streaming analytics, graph databases, etc., next-generation networking will need a mix of best-of-breed virtualized and appliance-based solutions for each problem.

Networking White Boxes or SDNA
Software defined networking appliances will be the first big innovation of 2015 to be adopted at scale. These boxes combine hardware network acceleration with software-defined networking applications. In the beginning this will mean that standard networking ASICs can be reprogrammed in more dynamic ways; good examples are the Facebook Wedge and Six Pack. This is, however, just the start. Expect multi-core processors that can handle many ports at the same time, or new versions of ASICs and FPGAs that are more optimised for SDN logic.
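The "reprogrammable in more dynamic ways" part boils down to the match-action model: software pushes prioritised rules into a flow table, and the ASIC applies the highest-priority matching rule to each packet. A toy sketch of that model (field names, actions and the class itself are illustrative, not any vendor's API):

```python
class FlowTable:
    """Toy model of the prioritised match-action table an SDN controller
    programs into a switch ASIC. Purely illustrative."""

    def __init__(self):
        self.rules = []  # (priority, match_fields, action)

    def add_rule(self, priority, match, action):
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])  # highest priority wins

    def lookup(self, packet):
        """Return the action of the highest-priority rule matching the packet."""
        for _prio, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"  # table-miss default

table = FlowTable()
table.add_rule(10, {"dst_port": 443}, "forward:uplink")
table.add_rule(100, {"src_ip": "10.0.0.5"}, "drop")  # blocklist overrides

print(table.lookup({"src_ip": "10.0.0.9", "dst_port": 443}))  # forward:uplink
print(table.lookup({"src_ip": "10.0.0.5", "dst_port": 443}))  # drop
```

The innovation in SDNA is not this logic, which is trivial in software, but executing it at line rate in silicon while letting software rewrite the table on the fly.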

Network Orchestration
Just as IT and cloud got a big DevOps boost through orchestration tooling, network orchestration tools will allow virtualized SDNs and physical SDNA to be orchestrated seamlessly. These will likely be extensions of existing DevOps and orchestration tools rather than a new set of tools.
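The core pattern those orchestration tools share is desired-state reconciliation: compare the configuration you want against the state you observe, and emit the minimal set of changes, regardless of whether the target is a virtual network function or a physical SDNA box. A hedged sketch (the VLAN data and function are made up for illustration):

```python
def plan_changes(desired, observed):
    """Return (action, key, value) steps that bring observed state to desired.
    Illustrative sketch of the reconciliation loop inside orchestration tools."""
    steps = []
    for key, value in desired.items():
        if key not in observed:
            steps.append(("create", key, value))
        elif observed[key] != value:
            steps.append(("update", key, value))
    for key in observed:
        if key not in desired:
            steps.append(("delete", key, None))
    return steps

desired = {"vlan100": {"ports": [1, 2]}, "vlan200": {"ports": [3]}}
observed = {"vlan100": {"ports": [1]}, "vlan300": {"ports": [4]}}

steps = plan_changes(desired, observed)
print(steps)
```

Because the planner only ever sees dictionaries of desired and observed state, the same loop can drive a KVM-hosted virtual function and a rack of white boxes, which is exactly why extending existing DevOps tools is more plausible than inventing new ones.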

Software Defined Radio
Instead of using cables and broadband solutions, which are expensive to install, expect the next generation of software defined radio to allow end devices to communicate with their peers and hubs far more freely, in optimised and ad hoc ways. This means that the next generation of networking needs to take into account a reality of both wired and wireless links. Also expect protocols to evolve. With software defined radio it will be possible for new wireless standards to emerge from small projects on GitHub that become overnight successes. YAML, JSON, Node.js, WebRTC, etc. were not born in a standardisation group; they became standards through usage. Expect the next generation of networking protocols to come from small, super-smart startups.

Networking Apps & App Stores
At the moment you buy an appliance that comes with an embedded operating system and a set of pre-installed networking logic; at most you can install plugins. The near future will see networking apps sold via networking app stores and installed on third-party SDNA. Expect new networking startups to become overnight successes because they no longer need to ship atoms worldwide; they just need to upload bits to an app store.

Reshuffling of the networking and telecom market
With software, operating system and hardware being separated, the old networking industry rules get rewritten. It used to be enough to be good at two, or at least one, out of three: excellent hardware, an OK operating system and some good software was enough to be market leader. No longer. Best-in-class can now be obtained in each category separately. The problem comes when revenue streams depend on software maintenance but the supplier was actually only excellent in hardware. These suppliers will see their revenue disappear overnight when others start winning the networking software battle. Expect the networking services business to change as well. If hardware, an operating system and many networking apps substitute for the single-vendor approach, then IT system integrators will very likely have a better shot at dominating the services market than traditional network vendors. Networking will have become more like IT, even telecom networking. So IT system integrators have everything to win by accelerating this trend.

Expect many new technologies and business changes to create new winners and old losers over the next 24 months. My money is on the network software innovators to come out winning…

Giving legacy networking software another life

In many networking and telecom software companies you can find great networking and telecom software that has been in production for 5 to 10 years. Normally this software is sold on top of an appliance that is bought from another company or another department. The bad thing about this is that each piece of software takes up at least one unit in a data centre rack, and when the box vendor declares end of life, the software vendor needs to phone its customers and deliver the unhappy news.

Telecom operators have told the market that they want to see NFV. The market has interpreted NFV in two ways. The first: take a costly operating system like Wind River, put a costly VMware hypervisor on top, add a costly Red Hat guest, and then deploy the legacy networking software. The word costly is repeated on purpose. The visionaries, by contrast, have interpreted NFV as networking logic running on OpenStack, and they are right that this is the way to go. However, today's product managers can't wave a magic wand and convert a legacy solution written to run on a box into a scale-out, cloud-enabled solution. Data plane virtualization is still more of an art than a science.

What if there were a cheap (Snappy Ubuntu Core is open source), innovative alternative that could give your legacy networking stack another three to five years of life? Just enough time for your R&D to master the art of data plane virtualization and get a next-generation product on the market. The same solution would also give network board manufacturers a chance to make money on the software that runs on top of their silicon, so they would be crazy to declare end of life on a box that makes the cash register ring every month. Finally, if your company is lucky and your legacy solution runs on Ubuntu, then you will get bare-metal performance with NFV flexibility.

So how does this magic work? Canonical, the company behind Ubuntu, released Snappy Ubuntu Core, the smallest Ubuntu ever. It incorporates innovations from the Ubuntu Phone that allow software to be packaged like a mobile app, called a snap, and sold through a snap store. A snap can either contain third-party executables directly or boot a KVM virtual machine in which a third-party solution runs. Executables give you bare-metal speed, especially if network acceleration cards are present. KVM gives a legacy solution an extended life and makes it run a lot faster than the Wind River + VMware + Red Hat stack. Both snaps and KVM guests can be assigned to one or multiple ports, and technically network chaining solutions are possible.
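To give a feel for how little packaging stands between a legacy binary and a snap store, here is a hypothetical sketch of a snap declaration for a legacy network function. The product name, version, command and port arguments are all invented for illustration; consult the snapcraft documentation for the exact schema in your Ubuntu Core release.

```yaml
# Hypothetical snap declaration for a legacy networking function.
# All names and values are illustrative, not a real product.
name: legacy-session-border-controller
version: "5.2"
summary: Legacy SBC packaged as a snap for Snappy Ubuntu Core
apps:
  sbc:
    command: bin/sbc-daemon --ports eth1,eth2   # existing legacy binary
    daemon: simple                              # supervised as a service
    plugs: [network, network-bind, network-control]
```

The point is that the legacy daemon itself is untouched; the declaration around it is what turns a rack appliance into an app that a board vendor's store can sell and update.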

So if you are responsible for a legacy networking or telecom software product, if you are a vendor of networking acceleration boards [although chances are high somebody in your organisation is already working with us], or if you are a customer that still does not believe the slideware about data planes on OpenStack but needs to put multiple networking functions into one unit, then you should contact Canonical now, because our open source solutions can solve your problems. If you don't have this problem but know somebody who does, why not forward this blog post? If anything, they might pay for your drink next time you meet…
