MIT is running an article about a new technology that allows printing electronics at commercial scale.
A 30-cent sticker can contain electronics to measure temperature. Imagine the possibilities of combining printed electronics with RFID/NFC or even longer-distance transmission. Being able to transmit information from thousands of stickers to nearby routers about temperature, electricity usage, identification, speed, etc. could revolutionize a lot of markets: energy monitoring, transport/logistics, gaming, sports, security, clothing, etc. Just like 3D printing, this is a technology that needs to be followed because of its disruptive character…
Too many people in the telecom industry are still discussing which API is the best: Parlay, JAIN SLEE, Sip Servlets, GSMA OpenAPI, etc.
However, even these APIs are too complex for some people. In that case you can use graphical drag-and-drop environments like QuickFuseApps. You can also opt for Flash modules that give you all the functionality you need; Ribbit has some nice ones.
Also on the phone side, drag-and-drop is coming on strong. Google's App Inventor for Android is a good example.
What does this mean? More and more developers and end-users will be able to create Net Apps themselves. These Net Apps will quickly become complex applications that will often bridge the gap between mobile devices and server & cloud solutions. They will very likely also span every aspect of daily life, e.g. social networking, business, entertainment, etc.
What does this mean for an operator? All the effort that is now put into creating attractive services will no longer be useful. A one-hundred-person marketing team cannot launch more and better services than a community of one million net app creators. So instead of focusing on finding and developing the next killer app, operators should focus on two aspects:
- Making sure all the building blocks are in place for the net app creators community to be productive.
- Connecting end-users with the output of the net app creators community. In other words: make sure people find the right net apps.
The stakes are high because this is a winner-takes-all game. Speed, ease of use and direct community feedback will be key. What are you waiting for?
Google Voice has changed the mobile broadband industry in just three months. Who would have thought that Google would start offering free mobile broadband and even give away 10,000 free mobile phones and access points?
It all started with a small governmental change in the summer of 2011. After years of lobbying, the New American Foundation convinced the US government to open some previously military spectrum to free wireless communication. The New American Foundation's chairman, Eric Schmidt, declared the act a step towards universal broadband access.
Two days before the new spectrum was opened in January 2012, Google surprised the world with the announcement that it would give away 10,000 free Nexus Goomax phones to people who installed a new kind of device at home called the GooPoint.
The GooPoint turned out to be a new generation of femtocell network device: on one side it was connected to fixed broadband, and on the other side it was a Goomax antenna.
Goomax, the next generation of wireless connectivity, improves on the WiFi and WiMAX standards by allowing Google's servers to remotely and dynamically control the network and the different GooPoints, a.k.a. cloud-based network management.
The end result is that in two months' time the US had an extra mobile network provider. However, this network provider did not install any antennas, nor did it pay expensive spectrum licenses. The new network was formed by home devices that allowed people within 5 kilometers to connect to mobile broadband for free. GooPoint owners that contributed fixed broadband capacity could earn points and exchange them afterwards for Android apps, among other things.
Disclaimer: This is an invented story but could one day become reality.
P2P, short for peer-to-peer, allows computers to communicate directly instead of having a central server manage the communication. P2P became (in)famous for (illegal) file sharing. Skype was the first company that understood the potential and used it to revolutionize PC-to-PC communication.
A recent wave of dotcoms is starting to find new uses for P2P. Wuala is an example in which cloud computing and P2P are combined to offer an innovative backup solution.
Academic research into P2P is also very active; the Chord project is one example.
Why is P2P interesting?
P2P runs locally on a computer and allows direct communication with other computers. From an operator's perspective this means that communication flows directly between PCs, which reduces long-distance routing to central servers.
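The direct-communication idea can be sketched with two peers exchanging a message over a plain socket, with no server in between; the port and messages below are invented for illustration:

```python
import socket
import threading

def serve_once(port, reply, ready):
    """A peer that answers one incoming message directly."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()                       # signal: listening, safe to connect
    conn, _ = srv.accept()
    conn.recv(1024)                   # message from the other peer
    conn.sendall(reply.encode())      # answer directly, no central server
    conn.close()
    srv.close()

def ask_peer(port, message):
    """Connect straight to the other peer and exchange one message."""
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(message.encode())
        return c.recv(1024).decode()

# One peer answers in a background thread; the other connects directly.
ready = threading.Event()
t = threading.Thread(target=serve_once, args=(9400, "pong", ready))
t.start()
ready.wait()
answer = ask_peer(9400, "ping")
t.join()
print(answer)  # -> pong
```

In a real P2P network every node plays both roles at once and peers discover each other without a directory server; the sketch only shows the direct link itself.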
For the user, the big advantage is the redundancy and resilience of a P2P network. Have you ever uploaded content by mistake? On a cloud computing or client-server solution it can easily be removed. If the content becomes popular on a P2P network, it is virtually impossible to remove.
What are the disadvantages of P2P?
Too much file sharing means that a small set of broadband users monopolizes the operator's network. Operators prevent this by reducing the bandwidth an individual user can take up with P2P connections. This bandwidth reduction results in unreliable speeds for legitimate uses of P2P.
How can the operator make money with P2P?
If P2P bandwidth can be reduced, then it can also be increased. These changes in quality of service can be a premium offering. Who would be interested in purchasing them?
By themselves they are useless. However, if the extra speed resolves real business and consumer problems, then there is a market.
Peercling = Peer2Peer + Cloud Computing
Cloud computing is an over-hyped term. Software-as-a-Service is a subset of cloud computing and focuses on providing services in a remote manner on a pay-as-you-use basis. A good example is Salesforce that offers customer relationship management, among other solutions, via remote access on a pay-as-you-use basis.
Remote access is important here. What if a sales director is not connected, e.g. in the car, on a plane, etc.? Cloud computing is often useless if you are not connected.
Peercling to the rescue. Peercling is the combination of P2P and Cloud Computing. A local P2P client allows users to access a local copy of the data while not connected. Afterwards this data is synchronized with other peers, servers or cloud computing solutions.
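The offline-then-sync behavior can be sketched in a few lines of Python; the class and the simple key-value “cloud” below are invented for illustration:

```python
class PeerclingClient:
    """Minimal sketch: work on a local copy offline, sync when connected.
    `cloud` stands in for any remote store (a real system would talk to
    a cloud API or to other peers)."""

    def __init__(self, cloud):
        self.cloud = cloud             # remote key -> value store
        self.local = dict(cloud)       # local replica for offline work
        self.pending = {}              # edits made while disconnected
        self.online = True

    def write(self, key, value):
        self.local[key] = value
        if self.online:
            self.cloud[key] = value    # write through when connected
        else:
            self.pending[key] = value  # queue for later synchronization

    def read(self, key):
        return self.local[key]         # always served locally, even offline

    def reconnect(self):
        self.online = True
        self.cloud.update(self.pending)  # push queued edits to the cloud
        self.pending.clear()

# Offline edit, then sync once the connection is back.
cloud = {"report.doc": "v1"}
client = PeerclingClient(cloud)
client.online = False
client.write("report.doc", "v2")       # edit while on a plane
print(cloud["report.doc"])             # -> v1 (not synced yet)
client.reconnect()
print(cloud["report.doc"])             # -> v2
```

A real client would also resolve conflicting edits from multiple peers; the sketch only shows the local-copy-plus-sync principle.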
Why can't we use client-server?
P2P is important to Peercling. A lot of server or cloud computing solutions hold vast quantities of data; a single PC might not be big enough to store it all. P2P comes to the rescue when applications cannot connect to a remote system but can connect to local PCs. When does this happen? When the internet connection is down but the LAN is still working, or when the server or cloud computing solution itself is down.
Having access to multiple peers allows the vast amounts of data stored in cloud computing solutions to be distributed over a large number of peers. If the cloud computing solution is not available, the user can continue working because the information can be retrieved from other peers. Changes are synchronized afterwards, when the cloud computing solution is accessible again.
P2P also allows computers to communicate and collaborate without the need for a server. Sharing documents within a group of people to work on them collaboratively is possible via P2P. Afterwards, cloud computing can be used as a backup for these collaboratively edited documents.
Show me the money
A lot of cloud computing solutions would benefit from having a local client available in case the user is not able to connect. Instead of having individual fat clients, there is room for a general platform that isolates the complexity of P2P and of connecting to external services like clouds. Given that these applications need a good internet connection, the safest bet is to have the operator offer this platform. The operator can make sure network speeds are optimal. On this platform, third parties can then offer their “P2P-Cloud Applications”. Think of it as an iPhone (= Peercling client) and an App Store for P2P apps.
If you like the idea but need some more details, why don't you contact the author at maarten at telruptive dot com?
In recent years there have been a lot of startups that successfully adopted the freemium business model. The freemium business model focuses on giving the basic service away for free and charging for the usage of premium features. The basic service should be understood in a wide sense: it can be a web application but also digital content like software, games, etc.
Telecom services tend to be paid for, all the time, by everybody. There is no such thing as a free lunch in telecom. However, Google has chosen the freemium business model for Google Voice. You get a free voicemail application with voice transcription, visual voicemail, etc. If you want to make international calls, you pay.
Why is freemium better than premium?
More and more internet offerings rely on high volumes of users. Even if you got a copy of the Facebook platform, you would never be able to compete with them because they already have the established user base and all the social graphs linked to it.
If you launch a new service and charge only €0.01 to use it, then uptake is drastically lower than if it were free. Imagine you had a great business idea but service uptake is slow due to the initial signup charge; this gives competitors a window of opportunity to copy the service. If the initial service is free, then your uptake will probably be fast enough that competitors cannot copy the service before it becomes mainstream.
If your new free service is successful, then you can launch premium features and bring in advertising revenues, hence converting it into a freemium service.
Where does freemium apply?
Freemium is not a hammer, and not all services are nails. Freemium applies best to innovative services that people are not yet familiar with. If it does not cost anything, you are willing to try it.
Imagine a hypothetical service for improved network quality of service (QoS) on demand called BoostMe. All users with a slow ADSL connection (e.g. 1-3 Mbps) would be able to install a PC client that lets them decide which applications or down/uploads they want to boost the network speed for. The normal telecom thinking would be: “if you want faster ADSL, why don't you contract the 10 Mbps or 20 Mbps plan and pay extra?”. So this new application would cost money the moment you press the “Turbo Boost Me” button.
The freemium entrepreneur would think differently. If I have to develop an application to boost QoS and make the necessary changes in my network, then this service is only profitable if at least 20% of users download it. To make sure I get to 20%, I can make usage of the application free, at least until you reach certain limits. Limits could be hours per month or megabytes per month.
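Metering such a free-until-a-limit scheme is straightforward. A minimal sketch, with invented numbers (500 MB of boosted traffic free per month, €2 per extra GB):

```python
def boostme_cost(used_mb, free_mb=500, price_per_gb=2.0):
    """Freemium metering sketch for the hypothetical BoostMe service:
    boosting is free up to free_mb per month, then billed per GB.
    All numbers are illustrative, not real tariffs."""
    over = max(0.0, used_mb - free_mb)   # only the part above the free tier
    return round(over / 1024 * price_per_gb, 2)

print(boostme_cost(300))    # -> 0.0  (inside the free tier)
print(boostme_cost(1524))   # -> 2.0  (1 GB over the limit)
```

The same function shape works for hour-based limits; only the unit and price change.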
What would be the net effect? In the beginning, some early adopters would download BoostMe and see that it really works when they are uploading their heavy YouTube video. They would tell their friends and viral marketing would quickly do its work. However, speed is addictive, so pretty soon people would get used to the faster speeds and either pay for extra BoostMe credits or switch to a premium ADSL plan. Additionally, a service with mass adoption would mean that advertisers become interested. Special advertising deals could be offered whereby you get BoostMe credits if you sign up for a music service, game service, social network, etc., or if you buy new devices like WiFi routers. The net effect would be that advertisers pay for a large chunk of the free service while the premium services guarantee the profits.
If you had launched the BoostMe application as pay-as-you-go, then you would probably never have reached mass adoption and the service would have been killed some months later.
From the cost side, things look similar. In both cases you would have all the development, marketing and operation costs. If the service is not successful when it is free, then it is definitely not successful if you have to pay for it. You would have lost your investment either way.
If the service is successful, then after some time you can still convert it to a pay-as-you-go model. The trick is to call the initial service a “beta” and to tell people that while in beta the service is free. This gives you the option to make the service paid when it comes out of beta, or to keep it as a freemium service if the model works…
When the words “technology” and “innovation” come to mind, most people think of Google, Apple, Amazon, Facebook, Salesforce, etc. Just a few think of telecom operators. The biggest telecom innovation has been mobile voice. SMS was never a technological innovation but an unplanned surprise success. MMS never got close to SMS. The iPhone and Android did not come from any telecom operator or provider.
Why is it that five people with a limited budget are able to stun the world in months, whereas massive multinationals with deep pockets are not?
The reason is simple: “To innovate you have to try often and kill quickly”.
Google launched Wave at the beginning of this year. Google “killed” Wave about six months later. Every day Google makes a change to its search algorithm.
The current process
The cost for a telecom operator to innovate is massive. A simplified process would be the following:
- The marketing department receives calls and visits from every possible telecom provider on a daily basis. New ideas are thrown on the table to see if they stick.
- The marketing department selects the best ideas.
- These ideas are scanned by the different other departments, e.g. operations, finance, legal, IT, etc.
- A multi-disciplinary team is assembled to write the requirements for the new service, a.k.a. an RFI (request for information).
- Several possible telecom providers receive the RFI and provide a response.
- A budget is allocated based on the responses and an RFQ, request for quotation, is organized.
- Several telecom providers respond to the RFQ.
- A bidding war is started and one or two winners are selected. If there are no clear winners then a proof of concept is requested.
- The winner develops the solution. Operations, IT, marketing, legal, accounting, etc. all work together to launch the new service.
- The service is launched.
The whole process can easily take a year or more and costs multiple millions. If this were your money, you would be very careful how you spent it as well. The end result is that only a handful of new services are launched: only those that are expected to be immediate successes.
This is a very “useful” process for driving down large integration and network equipment costs. However, it is not a process that stimulates innovation.
How can you bring innovation back to telecom?
The first step is to stop letting a small set of marketing people decide what is a good service and what is not. The only person that can legitimately decide whether a service is good is the end-user.
The dotcoms therefore launch incremental and new services very often. They monitor in detail which ones users like. Different alternatives may even be launched in parallel to see which of them users prefer. Direct feedback is critical. If a service is not picking up or users complain about it, it gets killed quickly. Services that get good feedback are continuously improved based on users' feedback.
How to apply “try often, kill quickly” to the telecom world?
The major show-stopper is telecom architectural complexity. Even when a marketing person has a good idea, it often takes months to update all the systems. The reason is that network operations, business logic and user data are scattered over multiple systems and departments.
To solve this problem, services and data should be separated from the network. Google's technological differentiator is its generic data store, a.k.a. Bigtable. Bigtable is an in-house-developed generic high-volume, always-available data store. More than 60 services are reported to be using this common data store: services as different as Docs, Maps, App Engine, etc.
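To illustrate the idea of one generic store serving many services, here is a toy sketch of rows, columns and timestamped versions. It is only loosely inspired by Bigtable's data model and does not reflect its real API:

```python
import time

class GenericStore:
    """Toy sketch of a shared, schema-less data store: values are
    addressed by (row, column) and every write is kept as a
    timestamped version. Purely illustrative."""

    def __init__(self):
        self._cells = {}   # (row, column) -> list of (timestamp, value)

    def put(self, row, column, value, ts=None):
        stamp = ts if ts is not None else time.time()
        self._cells.setdefault((row, column), []).append((stamp, value))

    def get(self, row, column):
        versions = self._cells.get((row, column), [])
        return max(versions)[1] if versions else None  # newest version wins

# Different "services" sharing the same store, each with its own columns.
store = GenericStore()
store.put("user:42", "maps:last_query", "pizza near me")
store.put("user:42", "docs:last_open", "q3-report")
store.put("user:42", "docs:last_open", "q4-report")
print(store.get("user:42", "docs:last_open"))  # -> q4-report
```

The point is not the twenty lines of code but the organizational consequence: a new service needs no new database project, only new column names in the shared store.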
Google has over a million servers. Maintenance and operations are fully automated. Software is written in such a way that hardware failure is assumed. The hardware is not top-end but rather commodity, low-end servers. Software can easily extend over hundreds of servers.
Applications are isolated and use the servers and data through standard interfaces.
I can't throw away my legacy
Of course an established operator cannot throw away its legacy systems. So until we have a common data store and isolation between software and hardware, what can we do?
The trick is to start small, move quickly and use asset exposure. Isolate the legacy systems and expose simple APIs to the telecom assets. Via asset exposure a lot of the “hard-coded” SS7 services can be substituted with network intelligence in the cloud. Mini applications can be written by anybody, from a large multinational to an individual developer, as long as users can pick their preferred services and applications from the “net app store”.
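As a sketch of what such asset exposure could look like, here is a toy facade in Python. The legacy interface and method names are invented for illustration, not any real SMPP or SS7 API:

```python
class LegacySmsc:
    """Stand-in for a legacy SMS center behind the exposure layer
    (hypothetical interface)."""
    def submit_sm(self, msisdn, payload):
        # A real SMSC would speak SMPP over SS7/SIGTRAN here.
        return {"status": "SUBMITTED", "to": msisdn}

class AssetExposure:
    """Sketch of an exposure layer: legacy systems stay isolated behind
    a few simple calls that any net app developer can use."""
    def __init__(self, smsc):
        self._smsc = smsc

    def send_sms(self, to, text):
        # One simple call hides all the protocol details underneath.
        result = self._smsc.submit_sm(to, text.encode("utf-8"))
        return result["status"] == "SUBMITTED"

api = AssetExposure(LegacySmsc())
print(api.send_sms("+34600000001", "Your net app says hi"))  # -> True
```

A developer writing against `send_sms` never touches the SMSC; the operator can later swap the legacy box for a cloud service without breaking a single net app.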
Data should also be transitioned to a common data store. In the beginning this might mean nightly synchronization of the different silos, but little by little the common data store should become the master of the data. Dotcoms no longer use SQL databases as a one-size-fits-all solution. Google, Yahoo, Facebook, Twitter, etc. all developed in-house solutions, and some even open-sourced them.
Applications should run on public or private clouds, scaling the top applications up during the day and down during the night. This keeps hardware CAPEX under control. Too much logic is packed into proprietary hardware. Software should be separated from hardware and written in such a way that it can scale to hundreds of servers.
Development teams should not have a 12-month contract with a waterfall of requirements. Teams should be small (5-6 people) and have short iterations to deliver small incremental innovations. The dotcoms tend to release new features multiple times a week, some even multiple times a day. Get immediate feedback and kill what is not successful. For telecom innovation teams to do the same, they should be multi-disciplinary, ideally a mix of people from the operator and strategic partners. It pays off to have a common architecture to deploy the individual services quickly. It pays off even better to have an open API, so the innovation team works on infrastructure for others to innovate on.
The small teams should have at least one person that is business- and marketing-focused and that has the commercial responsibility to make the service a success. Different small teams under high time pressure innovate quickly. Time pressure is important: if there is no external time pressure, then it has to be built internally. A simple technique is to allow people from all over the organization to take a break from their day job and take part in an innovation team. They should have clear milestones: one month to come up with an idea, two months for a prototype, three months to launch the first beta, two months to get user traction. Any milestone missed means the project is stopped, or at least potentially stopped. Failure is not a shame; quite the opposite. People will go back to their day-to-day job with new ideas and new energy. After a while they might try again and have success. Innovation and failure go hand in hand. If you cannot afford failure, you cannot get innovation…
Every time I use Google or Amazon, they become a bit smarter. They know which type of information I search for, which products I like, which messages I write, etc. Afterwards they can find similar users and recommend products and services I might like.
The usage of data warehouses is common in the telecom domain. However, the collective intelligence held in them hardly sees daylight. I will receive the odd call if my profile fits a potential churner. I might even get notified about a new tariff. But this is where it normally stops.
Why are telecom operators not replicating what made the Amazons and Googles big: collective intelligence? You can easily cluster users, categorize their behavior, help them search for what they need, recommend services they might want, etc.
Let's take an easy example: tariff plans.
They come in all colors and sizes, change frequently and have a direct impact on my happiness. So why don't I get a recommendation like:
Similar users of our services:
- subscribed to the 500Mb data plan (73%)
- added the “call-for-free-in-the-weekend” option (65%)
- removed the 300 free SMS option (43%)
Even better would be:
Based on your last 3 months behavior and the behavior of users similar to you, you can save €21/month if you change your tariff plan from “expensive-tariff-A” to “cheap-tariff-B”.
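The saving in that suggestion is just arithmetic over usage data and the tariff catalogue. A sketch, with invented plans and prices chosen so the example reproduces the €21/month figure:

```python
def tariff_advice(usage, current, tariffs):
    """Pick the cheapest plan for this usage and quote the monthly
    saving versus the current plan. All plans and prices are invented
    examples, not real tariffs."""
    def cost(t):
        extra_min = max(0, usage["minutes"] - t["incl_min"]) * t["eur_min"]
        extra_mb = max(0, usage["mb"] - t["incl_mb"]) * t["eur_mb"]
        return t["fee"] + extra_min + extra_mb
    best = min(tariffs, key=cost)
    return best["name"], round(cost(current) - cost(best), 2)

usage = {"minutes": 120, "mb": 400}   # averaged over the last 3 months
plans = [
    {"name": "expensive-tariff-A", "fee": 40.0,
     "incl_min": 500, "eur_min": 0.10, "incl_mb": 2000, "eur_mb": 0.05},
    {"name": "cheap-tariff-B", "fee": 19.0,
     "incl_min": 150, "eur_min": 0.10, "incl_mb": 500, "eur_mb": 0.05},
]
name, saving = tariff_advice(usage, plans[0], plans)
print(name, saving)  # -> cheap-tariff-B 21.0
```

The “users similar to you” part would come from clustering usage profiles in the data warehouse; once the cluster is found, the recommendation itself is this cheap to compute.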
Yes, of course the operators would be losing all the money they are overcharging. So, in order to avoid lowering ARPU, how can we use collective intelligence to increase sales or create new customized services?
“Congratulations on your new iPhone. Other users that purchased an iPhone also:
- subscribed to visual voicemail (54%)
- contracted a theft insurance (39%)
“We have noticed that you call these 5 persons most frequently. Two of them are not users of our services. You would save €5.43/month if they joined our services. Additionally, for €3/month you would be able to call these 5 persons unlimited.”
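The call-circle suggestion only needs a frequency count over the call records. A sketch with invented data (the contact names and the on-net set are illustrative):

```python
from collections import Counter

def call_circle(calls, on_net, top=5):
    """From a call log, find the most-called contacts and which of
    them are not yet on the operator's network. Input data is
    invented for illustration."""
    favourites = [num for num, _ in Counter(calls).most_common(top)]
    off_net = [n for n in favourites if n not in on_net]
    return favourites, off_net

# Each entry is one call to that contact.
calls = ["A", "B", "A", "C", "D", "A", "B", "E", "E", "B", "F"]
on_net = {"A", "C", "D"}              # contacts already on our network
fav, off = call_circle(calls, on_net)
print(off)  # -> ['B', 'E']
```

From here the marketing message writes itself: the off-net favourites are exactly the people worth a join-us incentive or a friends-and-family bundle.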
Are all my examples too complex to handle technology-wise? I don't think they are more complex than what Google, Facebook and Amazon are doing. You just need to make sure you use their technology…