I have been looking into virtualization, but what I find is mainly operating system-based virtualization. What I am looking for are application, integration and datastore virtualization solutions. Google’s App Engine and Oracle’s JRockit Virtual Edition come closest to what I am looking for in application virtualization. Why do you need an operating system if you can virtualize your application directly? It would save resources and be more secure. My ideal solution lets developers write applications and run them on a virtual application server. This virtual app server can scale applications horizontally over multiple machines. Each application runs in a sandbox, so badly written or insecure applications simply run out of resources and cannot impact other applications. We would need a similar solution for integration. Both would need out-of-the-box support for multi-tenancy, in which either each tenant gets a separate instance or multiple tenants share one instance if the software supports it. Integration should be separated from the application logic, and so should data storage.
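To make the sandbox idea concrete, here is a minimal sketch of per-application resource quotas as such a virtual app server might enforce them. All names (`Sandbox`, `charge`) are hypothetical; a real platform would meter CPU, memory and network rather than abstract units.

```python
# Hypothetical sketch: a sandbox that cuts off any application exceeding
# its resource quota, without affecting the other applications.

class Sandbox:
    """Tracks resource usage per application against a fixed quota."""

    def __init__(self, quota_units):
        self.quota = quota_units
        self.used = {}
        self.killed = set()

    def charge(self, app, units):
        """Account `units` of resource use; returns False once an app is cut off."""
        if app in self.killed:
            return False                 # app already ran out of resources
        self.used[app] = self.used.get(app, 0) + units
        if self.used[app] > self.quota:
            self.killed.add(app)         # badly written app is stopped...
            return False
        return True                      # ...while well-behaved apps keep running


sandbox = Sandbox(quota_units=100)
assert sandbox.charge("well_behaved", 10)
assert not sandbox.charge("runaway", 150)   # exceeds quota, gets cut off
assert sandbox.charge("well_behaved", 10)   # unaffected by the runaway app
```

The point of the sketch is isolation: the runaway application only exhausts its own quota, never the well-behaved one's.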
Integration is key because the virtual applications could be running on a public cloud but would have to interact with on-site systems. Extremely high throughput, security, multi-tenancy and resistance to failure are key. One API can be linked to multiple back-office systems or to different versions of one. Different versions of an API can be linked to the same back-office system to prepare applications before a major back-office upgrade.
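The version-to-backend mapping described above can be sketched as a simple routing table. Everything here is illustrative: the API names, versions and backend identifiers are invented.

```python
# Hypothetical sketch of the integration layer's version-aware routing:
# one API version stays wired to the current back-office system while a
# newer version points at the upgraded one, so applications can migrate
# step by step before the major upgrade.

routes = {
    ("billing", "v1"): "legacy_billing_system",
    ("billing", "v2"): "upgraded_billing_system",  # pre-upgrade target
    ("crm", "v1"): "crm_system",
}

def route(api, version):
    """Resolve an (api, version) pair to the back-office system serving it."""
    try:
        return routes[(api, version)]
    except KeyError:
        raise LookupError(f"no backend for {api} {version}")


assert route("billing", "v1") == "legacy_billing_system"
assert route("billing", "v2") == "upgraded_billing_system"
```

In a real integration layer the table would be managed by the administration component and consulted per request, but the routing decision is the same lookup.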
A distributed multi-tenant data store should hold all the end-user and application data. Ideally in a schema-less manner that avoids having to migrate data for data schema changes.
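A minimal sketch of such a schema-less, multi-tenant store: records are opaque JSON documents keyed by tenant and id, so adding a field requires no schema migration. This uses an in-memory SQLite table purely for illustration; the store described above would of course be distributed.

```python
import json
import sqlite3

# Sketch: schema-less multi-tenant document store. Documents are JSON
# blobs; the only fixed schema is the (tenant, id) key.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE docs (tenant TEXT, id TEXT, body TEXT, "
           "PRIMARY KEY (tenant, id))")

def put(tenant, doc_id, doc):
    """Store or overwrite a document for one tenant."""
    db.execute("INSERT OR REPLACE INTO docs VALUES (?, ?, ?)",
               (tenant, doc_id, json.dumps(doc)))

def get(tenant, doc_id):
    """Fetch a document, scoped to its tenant."""
    row = db.execute("SELECT body FROM docs WHERE tenant=? AND id=?",
                     (tenant, doc_id)).fetchone()
    return json.loads(row[0]) if row else None


put("tenant_a", "u1", {"name": "Ann"})
put("tenant_a", "u1", {"name": "Ann", "phone": "+321"})  # new field, no ALTER TABLE
assert get("tenant_a", "u1")["phone"] == "+321"
assert get("tenant_b", "u1") is None                     # tenants are isolated
```

The second `put` shows the point: the document gained a field without any data migration, and tenant B never sees tenant A's data.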
All these virtual elements should be managed by an automated, highly distributed administration layer that scales applications up or down based on demand, assures integration links are always up and re-establishes them if they fail, stores data without practical limits, and so on. But there is more. The administration should allow deploying different versions of the same application or integration, with step-wise migration to new versions and fast roll-backs.
Why do we need all this?
The first company that has such elements at its disposal will have an enormous competitive advantage in delivering innovative services quickly. They can launch new applications quickly and scale them to millions of users in hours. They can integrate diverse sources and make them universally available for re-use by multiple applications. They can store data without an army of DBAs for every application. They can try out new features and quickly scale them up or kill them. In short, they can innovate on a daily basis.
The Googles of this world understood years ago that a good architecture is a very powerful competitive weapon. There is a valid trend to offshore technical work. However, technical work should be separated into extremely high-value and routine. Never offshore high-value work. Also, never assume that because the resources are expensive, the work must be high-value. Defining and implementing this innovation architecture is extremely high-value. Writing applications on top of it is routine, at least from application number five onwards.
Launching thousands of services in a long tail marketplace might not be as hard as it used to be. However, supporting millions of users across these thousands of services definitely is. Technology does not seem to be the limitation of the telco long tail; support and monetization are.
What support is needed?
Consumers as well as small, medium and large enterprises have different support needs. For simplicity, let’s focus on small and medium businesses. There are hundreds of thousands of those in most countries. Their IT skills are, at best, basic. No dedicated IT staff, just a helpful colleague, if any. Time and resources are in short supply.
Before reaping the benefit of any long tail service, people have to learn about what is being offered: product awareness. Once the product is purchased, they need help with configuration and customization, product training, product integration, consultancy, product questions, and so on. Finally, when things go wrong, they need rapid workarounds and bug fixes.
Traditionally, telecom operators have used sales teams, help desks and support organizations to offer the more basic types of support. Scaling these organizations up to provide the items listed above is often not possible. And even if it were, it would be economically unviable.
Why is long tail support different?
Google, among others, promotes a services-based marketplace inside its Google Apps Marketplace. Although a step in the right direction, it will not resolve all the issues.
These long tail services could be an answer for established brands and the more straightforward support tasks like product training. However, a developer who builds a cool app on a Sunday afternoon and is suddenly surprised that 50,000 companies downloaded it on Monday is not able to offer any reasonable support.
What support services do small mom & pop businesses need?
Specialization and economies of scale are two key factors. The “lucky developer” has specialized skills in application development. However, does he or she have the knowledge to integrate a corporate single sign-on solution into the app? Probably not. Also, while the developer is helping one company, he will not have time to help another.
So our “lucky developer” will need people with additional skills and a way to increase his or her bandwidth.
Option 1: Community Support
By offering tools on the marketplace for an online support community to build around this “lucky app”, companies can help one another instead of repeatedly asking the developer the same type of questions. Some communities have proven to offer faster and better support than most commercial support organizations. However, there is a problem here: bug fixes can only be provided by the “lucky developer”. (S)he can choose to open source the application code, but that would very likely allow others to quickly copy and extend the app and destroy all market advantage.
Option 2: Commercial Product Support
The “lucky developer” can foresee potential success and hire an external company that gets trained on the app and is able to resolve most bug fixes: a trusted third party that can hold an escrow agreement with the “lucky developer” and take over development in case something happens to him or her.
However, this takes time and would only happen for apps that grow steadily toward success, not for an overnight craze.
Some tools could be beneficial here: version control to share proprietary code with authorized third parties so they can generate patches, and, for a deployed application, access to a mechanism to test and deploy an updated version. Standardized CRM solutions and multi-channel helpdesk access can also offer a unified, high-quality service, even for one-person support companies.
Option 3: Commercial Specialized Services
Even if a third-party company gets trained on a product, this does not take away the fact that customers will demand specialized services outside the scope of product support. Examples include security audits, SLAs on service availability, integration support and consultancy, performance benchmarking, commercial volume discounts and pricing, marketing, legal support, etc.
By itself this could be a totally new services marketplace in which both the “lucky programmer” and his or her customers can contract these services.
The tools differ completely depending on which service is offered, so standardized tools are difficult. They would probably come as SaaS offerings from third parties.
Option 4: Reputation
Bringing together community support, commercial product support and commercial specialized services is not enough by itself. All these tools will not help without one key ingredient: reputation.
How can the “lucky programmer” differentiate between 50 lawyers, 350 security experts, 20 performance benchmarking firms, 30 SLA validation agencies, 120 technical support help-desks, etc.? The answer is reputation.
If a security expert has found security holes in some of the most famous Internet sites and he certifies your application, then your application earns a reputation for being safe. The higher the expert’s reputation, the higher the fees the “lucky programmer” probably has to pay. So not everybody will be able to afford the best, especially in the beginning. But then again, companies with a top reputation might sometimes offer their services for free to those “lucky programmers” that are likely to get them free press.
The same is true for buyers. If a generally trusted SLA validation authority certifies that a service was up 99.9999% of the time over the last 24 months, then you will probably buy that service over one that is slightly cheaper but has no reputation for reliability. Likewise, you will want to buy bug-fixing support from an organization that met a very tough SLA over the last 24 months and has all its customers raving about it.
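A quick back-of-the-envelope calculation shows how demanding six nines is (using 365-day years for simplicity):

```python
# 99.9999% availability over 24 months: how much downtime does that allow?
total_seconds = 2 * 365 * 24 * 3600              # seconds in 24 months
allowed_downtime = total_seconds * (1 - 0.999999)

# Roughly one minute of total downtime across two full years, which is
# why such a certification from a trusted authority carries real weight.
```

One unplanned reboot can blow the entire two-year budget, so very few providers can honestly claim this level.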
With the world looking more at XML, SOAP and REST these days, it may seem counterintuitive to think binary again. However, with Protocol Buffers [Protobuf], Thrift, Avro and BSON being used by the large dotcoms, thinking binary feels modern again…
How can we apply binary to telecom? Binary SIP?
SIP is a protocol for handling sessions for voice, video and instant messaging. It is a text-based protocol whose messages resemble HTTP. Setting up a SIP session requires a lot of communication between the different parties. What if that communication were replaced by a binary protocol based, for instance, on Protocol Buffers? Google’s Protocol Buffers can dramatically reduce network load and parsing time, reportedly by a factor of 10 to 100 compared to regular XML.
What would be the advantages and drawbacks:
- Latency – faster parsing and smaller messages reduce latency, which is key in real-time communication.
- Performance – faster parsing and lower load mean that more can be done with less. One server can handle more clients.
- Scalability – distributing the handling of SIP sessions over more machines becomes easier if each transaction can be handled faster.
- No easy debugging – SIP is human-readable, hence debugging is “easier”. In practice, however, tools could be written that allow binary debugging.
- Syncing client and server – client and server libraries need to be in sync, otherwise parsing fails. Protocol Buffers ignore unknown extensions, so there is some freedom for an old client to connect to a newer server or vice versa.
- Firewalls/existing equipment – a new binary protocol cannot interoperate with existing equipment. A SIP-to-binary-SIP proxy is necessary.
Google has changed very little in its basic architecture building blocks over the years. Everything runs on top of the Google File System and Bigtable. Except for Google Instant, which reverses the usual MapReduce approach, new services have been reusing the existing architecture.
Similar observations can be made for the rest of the main players. So why have telecom operators not invested in one architecture to launch multiple services? No idea.
One architecture for VAS
The concept is simple. Create one common architecture. This architecture should have multiple components:
- A highly available real-time data store – stores all application and user data
- A right-time data analytics service – allows collective intelligence and data mining
- An asset exposure layer – applications can re-use network assets and get isolated from internal complexities
- Presentation layer – facilitates mobile GUI and Web 2.0 development
- Application Engine – lets applications run and focus on business logic instead of scaling and integration
- Continuous Deployment – instead of monthly big-bang deployments, incremental daily or weekly releases are possible, even hourly like some dotcoms.
- Unified Administration – one place to know what is happening both technically and business-wise with the applications.
- Long-Tail Business Link – all business and accounting transactions for customers, partners, providers, etc. are centralized.
Having such an architecture in place would allow telco innovations to be brought to market at least ten times faster. Application and service designers could focus on business logic and nothing else. Administrators would have one platform to manage instead of a puzzle of systems. Integrations would have to be done once, against a common integration layer.
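An entirely hypothetical sketch of what writing against such an architecture could feel like: the designer registers pure business logic, while the engine stands in for everything the common platform would provide (scaling, integration, administration). The `AppEngine` class and its methods are invented for illustration.

```python
# Hypothetical sketch: an application engine where services register only
# their business logic; plumbing belongs to the platform.

class AppEngine:
    """Stand-in for the common architecture's Application Engine."""

    def __init__(self):
        self.handlers = {}

    def expose(self, name):
        """Decorator registering a business-logic handler under a service name."""
        def register(fn):
            self.handlers[name] = fn
            return fn
        return register

    def call(self, name, *args):
        # In the real platform this call would be routed across machines,
        # metered, and reported to the unified administration.
        return self.handlers[name](*args)


engine = AppEngine()

@engine.expose("greet")
def greet(user):                      # pure business logic, no plumbing
    return f"Hello, {user}!"

assert engine.call("greet", "world") == "Hello, world!"
```

The contrast with today's telco stacks is the point: the service author writes the three-line `greet` function, not the routing, storage or accounting around it.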
Building such an architecture should be done dotcom-style, not through a telco RFQ. Only by running iterative projects that bring the components together can you build an architecture that is really used, rather than a side project that takes on a life of its own.
It even makes sense to open source the architecture. A telco’s business is not building architectures, so a common platform started by one would benefit the whole industry. It would even give a competitive advantage to the telco that started it, for knowing it better than any competitor. Of course, for this to happen, a telco has to recognize that its future major competitors are not the neighboring telcos but the global dotcoms…