70% of the Top 1000 companies are expected to no longer be around in the next decade. Big companies are not adapting to change. Digital Darwinism does the rest.
What is the reason behind Digital Darwinism?
Why can’t companies adapt to change? The ideal sector to see disruptive innovation at work is the technology sector. Many billions are spent on bringing products to market that fail. Many giants of yesterday are no more. Five smart guys and a dotcom name can make a multi-billion empire tremble.
Often the disrupted are very well-managed companies. Companies that have put top-quality processes in place. Listened to their customers. Continuously cut costs to offer a compelling, quality product. Still, along comes a new technology and what looked so great yesterday is called legacy today. Cloud is killing x86 servers, x86 servers killed mainframes, and so on.
You can go and read the books about disruptive innovation. However, there is a more substantial reason why innovation can kill companies so quickly. In most companies there are three categories of people: the weird, the cost centres and the cool. The weird guys are the techies, the geeks, the nerds, etc. You need them, but please don’t let them come out of their cubicle. Everyone who is not directly bringing in new revenue goes in the cost centre category, e.g. finance, legal, HR, etc. Some CFOs tried to join the cool group but ended up in jail. The cool gang are all the sales, pre-sales and marketing folks. They do the really hot and difficult stuff. When projects sold by the cool gang cannot be delivered, it is the project managers and solution architects who are not doing their job well.
If this is the reality in your company, then you are likely to have to search for another company in the future. The reason is very simple. If your company does not value technical talent, HR is seen as a cost centre, and sales and this quarter are the only things that matter, then there will be nobody to tell top management that the right technical people are not being hired and that the current solutions are fast becoming legacy.
Disruptive innovations kill old business models. Many sales forces are good at selling established products. Most do a poor job at selling innovative new ideas. Expect an innovation that kills your old business model every 2 to 10 years. The technical experts are often the first to see those changes coming. The salespeople are the last. The technical expert will tell you Mongo is cool. The salesperson will tell you that Oracle is best bought as an appliance and not through the cloud for performance reasons. The salesperson cannot understand that there are other companies that use open source or SaaS to gain market share. It looks very bad on your quarterly results if you give your software away or only charge a small monthly fee instead of an upfront licence.
How can you survive Digital Darwinism?
The main step is to stop organising companies around job functions and to see the value in each job function. Yes, you need a sales force that manages the customer relationships and can sell many products. However, you don’t need pre-sales, business development and marketing to be part of it. It is much better to organise the rest of the organisation around product offerings, with pre-sales, business development, marketing, finance, operations, delivery, R&D and support all forming part of the same product team. To make the best products you need to understand what customers want, how to reach them, how to develop the product, how to price, how to segment and how to support customers. This is the reason early startups are so successful. They don’t have to queue to ask for a project manager to be assigned to their project. Modern organisations are full of queues and buffers. This creates slowness. It is a lot better to make people responsible for a product and to combine different people from different groups. As soon as the group reaches 100 people, you have to split; otherwise they become slow again. But you can split by customer segment, not by job function. This way it is possible to have different products that compete against one another inside one organisation. Sales will be continuously challenged to learn new things.
Another important point is to hire generalists and people who understand both technology and business. The world moves so fast that any expert will become obsolete within a few years. It is better to have generalists who are quick learners.
Failure is the best option for future success. As soon as an organisation realises that it cannot win every battle, it substantially increases its chance of winning the war. Failure should be part of all processes.
Finally, you need the discipline to sell market-leading products to others. This is the only way to get overpaid, and it guarantees that the rest of the organisation does not fall asleep. People love to become millionaires when their company sells out. Why should only startups have this privilege? Take away the reason why people want to suffer in a five-person company and you will attract top talent, independent of your size.
Every day a new orchestration solution is being presented to the world. This post is not about which one is better but about what will happen if you embrace these new technologies.
The traditional scale-up architecture
Before understanding the new solutions, let’s understand what is broken with the current ones. Enterprise IT vendors have traditionally made software that was sold based on the number of processors. If you were a small company you would have 5 servers; if you were big you would have 50-1000 servers. With the cloud, anybody can boot up 50 servers in minutes, so reality has changed. Small companies can easily manage 10,000 servers, e.g. think of successful social or mobile startups.
Software was also written and optimised for performance per CPU. Much traditional software comes with a long list of exact specifications that need to be followed for you to get enterprise support.
Big bloated frameworks are used to manage the thousands of features that are found in traditional enterprise solutions.
The container micro services future
Enterprise software is often hard to use, integrate, scale, etc. This is all the consequence of creating a big monolithic system that contains solutions for as many use cases as possible.
In come cloud, containers, micro-services, orchestration, etc. and all rules change.
The best micro services architecture is one where each important use case is reflected in one service, e.g. the shopping cart service deals with your list of purchases but relies on the session storage service and the identity service to do its work.
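The shopping cart example can be sketched as three services with clear boundaries. The names and interfaces below (`IdentityService`, `SessionStore`, `ShoppingCart`) are invented for illustration; in a real deployment each would live in its own container and talk over HTTP or gRPC rather than in-process calls:

```python
class IdentityService:
    """Maps auth tokens to user ids (stand-in for a real identity provider)."""
    def __init__(self):
        self._tokens = {"tok-123": "alice"}

    def resolve(self, token):
        return self._tokens.get(token)


class SessionStore:
    """Keeps per-user session state (stand-in for e.g. a Redis-backed store)."""
    def __init__(self):
        self._sessions = {}

    def get(self, user_id):
        return self._sessions.setdefault(user_id, {"items": []})


class ShoppingCart:
    """Owns the 'list of purchases' use case; delegates identity and state."""
    def __init__(self, identity, sessions):
        self.identity = identity
        self.sessions = sessions

    def add_item(self, token, item):
        user = self.identity.resolve(token)
        if user is None:
            raise PermissionError("unknown token")
        session = self.sessions.get(user)
        session["items"].append(item)
        return session["items"]


cart = ShoppingCart(IdentityService(), SessionStore())
print(cart.add_item("tok-123", "book"))  # ['book']
```

The point is the dependency direction: the cart never stores identities or sessions itself, so each of the three services can be scaled, swapped or upgraded independently.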
Each service is run in a container, and services can be integrated and scaled in minutes or even seconds.
What benefits do micro services and orchestration bring?
In a monolithic world change means long regression tests and risks. In a micro services world, change means innovation and fast time to market. You can easily upgrade a single service. You can make it scale elastically. You can implement alternative implementations of a service and see which one beats the current implementation. You can do rolling upgrades and rolling rollbacks.
So if enterprise solutions were available as many reusable services that can all be instantly integrated, upgraded, scaled, etc., then time to market would become incredibly fast. You have an idea. You implement five alternative versions. You test them. You combine the best three into a new alternative, or you use two implementations based on a specific customer segment. All this is impossible with monolithic solutions.
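Running two implementations side by side, as described above, comes down to a small routing layer in front of them. This is a minimal sketch; the segment names, the 20% canary weight and the `checkout_v1`/`checkout_v2` services are illustrative assumptions, not part of any real product:

```python
import random

def checkout_v1(order):
    # Current production implementation of the service.
    return f"v1:{order}"

def checkout_v2(order):
    # Challenger implementation being trialled.
    return f"v2:{order}"

# Pin a customer segment to one implementation (assumption: enterprise stays on v1).
SEGMENT_OVERRIDES = {"enterprise": checkout_v1}
CANARY_WEIGHT_V2 = 0.2  # 20% of remaining traffic tries v2

def route(order, segment, rng=random.random):
    """Pick an implementation per request: segment override first, then canary split."""
    impl = SEGMENT_OVERRIDES.get(segment)
    if impl is None:
        impl = checkout_v2 if rng() < CANARY_WEIGHT_V2 else checkout_v1
    return impl(order)

print(route("order-1", "enterprise"))  # always 'v1:order-1'
```

Rolling upgrades and rollbacks then become a matter of moving the weight between implementations instead of redeploying a monolith.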
This sounds like we reinvented SOA
Not quite. SOA focused on reusable services but it never embraced containers, orchestration and cloud. By having a container like Docker or a service in the form of a Juju Charm, people can exchange best practices instantly. They can be deployed, integrated, scaled, upgraded, etc. SOA only focused on the way services were discovered and consumed. Micro services additionally focus on global reuse, scaling, integration, upgrading, etc.
We are not quite there yet. Standards are still being defined, not in the traditional standardisation bodies but via market adoption. However, expect in the next 12 months to see micro services being orchestrated at large scale via open source solutions. As soon as the IT world has the solution, industry-specific solutions will emerge. You will see communication solutions, retail solutions, logistics solutions, etc. Traditional vendors will not be able to keep pace with the innovation speed of a micro services-orchestrated, industry-specific solution. Expect the SAPs, Oracles, etc. of this world to be in shock when all of a sudden nimble HR, recruiting, logistics, inventory and supplier relationship management solutions emerge that are offered as SaaS and on-premise, often open source. Super easy to use, integrate, manage, extend, etc. It will be like LEGO starting a war against custom-made toys. You already know who will be more nimble and flexible…
Cisco came up with the term Fog Computing and The Wall Street Journal has endorsed it, so I guess Fog Computing will become the next hype.
What is Fog Computing?
The Internet of Things will embed connectivity into billions of devices. Common thinking says your IoT device is connected to the cloud and shares data for Big Data analytics. However, if your Fitbit starts sending your heartbeat every 5 seconds, your thermometer tells the cloud every minute that it is still 23.4 degrees, your car tells the manufacturer its hourly statistics, farmers measure thousands of acres and hospitals measure remote patients’ health continuously, then your telecom operator will go bankrupt because their network is not designed for this IoT Data Tsunami.
Fog Computing is about taking decisions as close to the data as possible. Hadoop and other Big Data solutions started the trend of bringing processing close to where the data is, and not the other way around. Now Fog Computing is about doing the same on a global scale. You want decisions to be taken as close as possible to where the data is generated, and you want to stop that data from reaching global networks. Only valuable data should be travelling on global networks. Your Fitbit could send average heartbeat reports every hour or day and only send alerts when your heartbeat passes a threshold for some amount of time.
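The heartbeat example amounts to a small filter running on the device: buffer raw readings, upload only a periodic average, and escalate immediately when a threshold is sustained. This is a sketch; the window size, threshold and alert logic are illustrative assumptions, not Fitbit behaviour:

```python
class HeartbeatFilter:
    """Edge-side filter: keep raw readings local, upload only averages and alerts."""

    def __init__(self, report_every=720, alert_bpm=180, sustain=12):
        self.window = []                  # readings since last report (720 x 5s = 1h)
        self.report_every = report_every  # readings per uploaded average
        self.alert_bpm = alert_bpm        # heartbeat threshold for alerting
        self.sustain = sustain            # consecutive high readings before alerting
        self.high_streak = 0

    def ingest(self, bpm):
        """Return a message worth uploading, or None to keep the data local."""
        self.window.append(bpm)
        self.high_streak = self.high_streak + 1 if bpm > self.alert_bpm else 0
        if self.high_streak == self.sustain:
            return ("alert", bpm)
        if len(self.window) >= self.report_every:
            avg = sum(self.window) / len(self.window)
            self.window.clear()
            return ("average", avg)
        return None

# Tiny demo with a 3-reading window so the average shows up quickly.
f = HeartbeatFilter(report_every=3, alert_bpm=100, sustain=2)
print([f.ingest(b) for b in (60, 62, 64)])  # [None, None, ('average', 62.0)]
```

With 5-second samples, 720 readings per upload turns a constant stream into one message per hour, which is exactly the traffic reduction the paragraph argues for.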
How to implement Fog Computing?
Fog Computing is best done via machine learning models that get trained on a fraction of the data in the Cloud. Once a model is considered adequate, it gets pushed to the devices. Having a Decision Tree, some Fuzzy Logic or even a Deep Belief Network run locally on a device to take a decision is a lot cheaper than setting up an infrastructure in the Cloud that needs to deal with raw data from millions of devices. So there are economic advantages to using Fog Computing. What is needed are easy-to-use solutions to train models and send them to highly optimised, low-resource execution engines that can be easily embedded in devices, mobile phones and smart hubs/gateways.
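The train-in-the-cloud, run-on-the-device split can be shown end to end with a toy model. The "model" below is a single decision stump and the helper names are invented; a real pipeline would use a proper ML library, but the shape is the same: train centrally, ship a few bytes, decide locally:

```python
import json

def train_stump(samples):
    """Cloud side: pick the threshold that best separates the labelled samples."""
    best = None
    for t in sorted(v for v, _ in samples):
        errors = sum((v > t) != label for v, label in samples)
        if best is None or errors < best[1]:
            best = (t, errors)
    return {"threshold": best[0]}

def push_to_device(model):
    """Transport step: the model travels as a few bytes of JSON, not raw data."""
    return json.dumps(model)

def device_predict(serialized_model, reading):
    """Device side: one lookup and one comparison -- cheap enough for a sensor hub."""
    model = json.loads(serialized_model)
    return reading > model["threshold"]

# The cloud trains on a small labelled sample (value, is_anomalous)...
samples = [(20.1, False), (21.0, False), (35.5, True), (40.2, True)]
blob = push_to_device(train_stump(samples))
# ...and the device decides locally which readings are worth uploading.
print(device_predict(blob, 38.0))  # True -> send upstream
```

The economic argument is visible in the sizes: the cloud sees a sample of labelled data once, while the millions of raw readings only ever meet a serialized threshold on the device.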
Fog Computing is also useful for Non-IoT
Network elements should also become a lot more intelligent. When was the last time you were at a large event with many people around you? Can you imagine any event in the last 24 months where WiFi was working brilliantly? Most of the time WiFi works in the morning when people are still arriving, but soon after it stops working. Fog Computing can be the answer here. You only need to analyse data patterns and take decisions on what takes up lots of bandwidth. Chances are that all the mobiles, tablets and laptops connected to the event WiFi have Dropbox or some other large file-sharing service enabled. You take some pictures at the event, and since you are on WiFi the network gets saturated by a photo-sharing service that is not really critical for the event. Fog Computing would detect this type of bandwidth abuse and would limit or even block it. At the moment this has to be done manually, but computers would do a much better job of it. So Software Defined Networking should be all over Fog Computing.
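The pattern analysis described above can be sketched as a per-application byte count with a dominance check. The application names, the critical-app allowlist and the 40% threshold are all illustrative assumptions; a real access point would get the flow data from its SDN controller:

```python
from collections import Counter

# Assumption: the venue decides which apps are critical for the event.
CRITICAL_APPS = {"event-streaming", "dns"}
DOMINANCE_THRESHOLD = 0.4  # flag apps eating over 40% of observed bytes

def flag_bandwidth_hogs(flows):
    """flows: iterable of (app_name, bytes_transferred) observed on the WiFi.

    Returns the non-critical apps that dominate the link and are candidates
    for rate limiting or blocking.
    """
    usage = Counter()
    for app, nbytes in flows:
        usage[app] += nbytes
    total = sum(usage.values())
    return [app for app, nbytes in usage.items()
            if app not in CRITICAL_APPS and nbytes / total > DOMINANCE_THRESHOLD]

flows = [("dropbox", 900), ("event-streaming", 300), ("dns", 5), ("web", 100)]
print(flag_bandwidth_hogs(flows))  # ['dropbox']
```

The output of such a function would feed the SDN layer, which applies the actual throttling; the detection itself is exactly the kind of decision that can run locally on the access point.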
Telecom Operators and Equipment Manufacturers Should Embrace Fog Computing
Telecom operators should invest heavily in Fog Computing by creating open source standards that can be easily embedded in any device and managed from any cloud. When I say standards, I don’t mean ETSI. I mean organise a global Fog Computing competition with a $10 million award for the best open source Fog Computing solution. Create a foundation around it with a very open licence, e.g. the Apache License. Invite, and if necessary oblige, all telecom and general network suppliers to embed it.
The alternatives are…
Not solving this problem will provoke heavy investment in global networks that carry 90% junk data, and an IoT Data Tsunami. Solving it via network traffic shaping is a dangerous play in which privacy and net neutrality concerns will come up sooner rather than later. You cannot block Dropbox, YouTube or Netflix traffic globally. It is a lot easier if everybody blocks, or at least minimises, unneeded traffic themselves. Most people have no idea how to do that. Creating easy-to-use open source tools would be a good first step…