
In a bit pipe world, do we need telecom standards?


GSMA, ETSI, etc. have been defining standards for the telecom world for years. However, outside the telecom industry these standards have found little or no adoption. In a world where telecom operators are fast becoming bit pipes, do we really need telecom standards? Why can’t the telecom industry just use SIP, WebRTC, REST, etc. like everybody else?

Current telecom systems assume that calls and SMS need to be billed. What would happen if the starting point were that data insights, network apps and connectivity are the only things that get billed? Over time, connectivity is likely to be unlimited or come with very high caps. New revenue would have to come from selling data insights, either individually with consent or aggregated and anonymised, as well as from apps that run inside the network: on CPEs, DSLAMs, mobile base stations, etc. So for the purpose of this blog, let’s focus on a world where calls and SMS can no longer be charged for and connectivity is close to unlimited for most normal use cases.

To move bits quickly through a network, you want as few protocol converters as possible, so using many different standards makes things slow and expensive. Additionally, telecom operators have for years overpaid for standards, and for the software that supports them, without ever using them. Finally, implementing a standard is very costly: often only 20% of the functionality is actually used, but the other 80% still needs to be there to pass compliance tests.

The nonstandard or empty networking appliance
In a world where software can define networks and any missing functionality is just a networking app away, it would be a lot better to start from an empty networking appliance, i.e. networking hardware without software, and then buy only what you need. If you need a standard, you might want to buy the minimal/light implementation, the 20% that is actually used, and see if you can live with it. Chances are you will still have too many functions that go unused.

Facebook open-sourced its top-of-rack networking solution and, surprise surprise, the interface is Thrift-based. Thrift is used across all of Facebook’s other services as a standard high-throughput interface. Google probably uses protocol buffers. Apache Avro would be another alternative, and the most openly licensed of them all. So rather than focusing on public, slow standards, it would be better to standardise on a highly throughput-optimised interface technology. Inside a telecom operator this would work very efficiently, and for those systems that talk to legacy or outside-world systems, adding a standard is just a networking app away, as the sketch at the end of this post illustrates. This would simplify a telecom network substantially, saving enormous costs and accelerating the speed of change, because less code needs to be written and maintained and integrations become easier.

These are all ideas that assume there are actual appliances that are software defined. As soon as general-purpose compute becomes fast enough for heavy data-plane traffic, the reality will be software-defined networking in a virtualised way, with autoscaling and all the other cloud goodies. However, this reality is unfortunately still some years off. In the short run, virtualisation of the control plane plus software-defined networking appliances [SDNA] for the data plane is the most realistic option…
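
To make the interface-technology point concrete, here is a minimal sketch of what it could look like in practice. It uses Apache Avro through the Python fastavro library purely as an example (Thrift or protocol buffers would look very similar), and the “flow stats” message, its fields and the JSON bridge are made-up illustrations, not anyone’s real interface: internal systems exchange one compact binary format, and support for a legacy or outside-world interface is just a small adapter app that re-expresses the same record.

```python
# Illustrative sketch only -- assumes Apache Avro via the third-party
# fastavro library (pip install fastavro). The "FlowStats" record, its
# fields and the sample values are invented for this example.
import io
import json

from fastavro import parse_schema, schemaless_reader, schemaless_writer

# One schema shared by all internal services: the "standard" is the
# interface technology, not a heavyweight telecom specification.
FLOW_STATS_SCHEMA = parse_schema({
    "type": "record",
    "name": "FlowStats",
    "fields": [
        {"name": "subscriber_id", "type": "string"},
        {"name": "bytes_up", "type": "long"},
        {"name": "bytes_down", "type": "long"},
    ],
})


def encode_internal(record: dict) -> bytes:
    """Serialise a record into Avro's compact binary form for internal hops."""
    buf = io.BytesIO()
    schemaless_writer(buf, FLOW_STATS_SCHEMA, record)
    return buf.getvalue()


def decode_internal(raw: bytes) -> dict:
    """Read the binary form back into a plain dict, using the same schema."""
    return schemaless_reader(io.BytesIO(raw), FLOW_STATS_SCHEMA)


def legacy_adapter(raw: bytes) -> str:
    """The 'networking app' that bridges to a legacy or outside-world system:
    it simply re-expresses the same record as JSON, e.g. for a REST endpoint."""
    return json.dumps(decode_internal(raw))


if __name__ == "__main__":
    sample = {"subscriber_id": "sub-42", "bytes_up": 1024, "bytes_down": 4096}
    wire = encode_internal(sample)
    print(len(wire), "bytes on the internal interface")
    print(legacy_adapter(wire))
```

The point of the sketch is that every internal interface agrees on a schema and a fast wire format; anything slower or more standards-shaped lives at the edge, in an adapter that can be added or removed like any other networking app.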
