Posts Tagged ‘facebook’

In a bit pipe world, do we need telecom standards?

GSMA, ETSI, etc. have been defining standards for the telecom world for years. However, outside the telecom industry these standards have found little or no adoption. In a world where telecom operators are fast becoming bit pipes, do we really need telecom standards? Why can't the telecom industry just use SIP, WebRTC, REST, etc., like everybody else?

Telecom systems today assume that calls and SMS need to be billed for. What would happen if the starting point were instead that data insights, network apps and connectivity are the only things that are billed? Connectivity is likely to become unlimited, or close to it, over time. New revenue would have to come from selling data insights, either individually with consent or aggregated and anonymised, and from apps that run inside the network: on CPEs, DSLAMs, mobile base stations, etc. So for the purpose of this blog, let's focus on a world where calls and SMS can no longer be charged for and connectivity is close to unlimited for most normal use cases.

To move bits quickly through a network, you want as few protocol converters as possible, so using many different standards makes things slow and expensive. Additionally, telecom operators have overpaid for years for standards and the software that supports them, often without ever using them. Finally, implementing a standard is very costly because often only 20% of the functionality is really used, while the other 80% needs to be there just to pass compliance tests.

The nonstandard or empty networking appliance
In a world where software can define networks and any missing functionality is just a networking app away, it would be a lot better to start from an empty networking appliance, i.e. networking hardware without software, and then buy everything you need. If you need a standard, you might want to buy the minimal/light implementation, the 20%, and see if you can live with it. Chances are you will still have too many unused functions. Facebook open sourced its top-of-rack networking solution and, surprise surprise, the interface is Thrift-based. Thrift is used across all the other Facebook services as a standard high-throughput interface for its software. Google probably uses protocol buffers. Apache Avro would be another alternative, and the most openly licensed of them all. So instead of focusing on slow, public standards, it would be better to standardise on a throughput-optimised interface technology. Inside a telecom operator this would work very efficiently, and for those systems that talk to legacy or outside-world systems, adding a standard is just a networking app away. This would simplify a telecom network substantially, saving enormous costs and accelerating the speed of change: less code needs to be written and maintained, making integrations easier.

These are all ideas that assume there are actual appliances that are software defined. As soon as general-purpose compute becomes fast enough for heavy data-plane traffic, the reality will be software-defined networking in a virtualised way, with autoscaling and all the other cloud goodies. Unfortunately this reality is still some years off. In the short run, virtualisation of the control plane plus software-defined networking appliances [SDNA] for the data plane is the most realistic option…
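The throughput argument is easy to see in miniature: Thrift, protocol buffers and Avro all compile messages down to compact binary layouts instead of self-describing text. A stdlib-only Python sketch of the idea (the record layout is invented for illustration, not an actual Thrift schema):

```python
import json
import struct

# Hypothetical call-detail record: (subscriber_id, duration_s, cell_id)
record = {"subscriber_id": 423987001, "duration_s": 181, "cell_id": 7731}

# Self-describing text encoding, the style of many "public slow standards"
text_payload = json.dumps(record).encode("utf-8")

# Fixed binary layout, the style Thrift/protobuf/Avro compile down to:
# one unsigned 64-bit int + two unsigned 32-bit ints, big-endian
binary_payload = struct.pack(">QII", record["subscriber_id"],
                             record["duration_s"], record["cell_id"])

print(len(text_payload), len(binary_payload))  # binary is several times smaller
```

On top of the smaller payload, a fixed layout avoids text parsing entirely, which is where most of the speed difference comes from.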

Several telecom operators to run into financial problems in the next three years…

November 21, 2014

In 2017 several telecom operators will run into financial problems, with Vodafone being the best known, unless they start changing today. Why?

The telecom business is very capital intensive. Buying spectrum, rolling out next-generation mobile networks and bringing fiber connections to each home and business costs enormous amounts of money. Traditionally operators were the main users of their networks and earned large margins on the services that ran on top of them. The truth today is that telecom operators have been completely sidetracked. They no longer control the mobile devices used on their networks, nor the services. Data is growing exponentially and is already clogging their networks. A data tsunami is on the horizon. Operators see costs ballooning and ARPU shrinking. There is no way they can start asking substantially more for broadband access. Obama just killed any hope of adding a speed tax on the Internet. The EU wants to kill juicy roaming charges. And the future looks even worse.

New disruptive competitors have entered the market in recent years. Google Fiber is offering gigabit speeds both for uploading and downloading. YouTube and Netflix generate the majority of Internet traffic in most countries. Most streaming video is still broadcast in SD quality, but Netflix is already broadcasting in 4K, or ultra-high-definition, quality on Google Fiber. This means traffic volumes of 7 to 19 GB per hour, depending on the codec used. Take into account that different family members are often watching two or more programmes at the same time. The end result is that today's networks and spectrum are completely insufficient. Now add the nascent IoT revolution. Every machine on earth will get an IP address and be able to "share its feelings with the world". Every vital sign of each person in the richer parts of the world will be collected by smart watches and tweeted about on social networks. 90% of the communication running inside Facebook's data centre is machine-to-machine, not user-related. And Facebook hasn't even introduced IoT or wearables yet. You can easily imagine them helping even the biggest geek with suggestions on which girl to talk to and what to talk about, via augmented-reality goggles and smart watches. Yes, it is a crazy example, but which telecom marketing department would have given even $1 to Zuckerberg had he pitched Facebook to them when it was still known as TheFacebook? It is the perfect example of how "crazy entrepreneurs" make telecom executives look like dinosaurs.
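The 7 to 19 GB per hour figure follows directly from typical 4K bitrates of roughly 16 to 42 Mbit/s, depending on codec efficiency (illustrative values). A quick back-of-the-envelope check:

```python
def gb_per_hour(mbit_per_s: float) -> float:
    """Convert a streaming bitrate in Mbit/s to GB (10^9 bytes) per hour."""
    return mbit_per_s * 3600 / 8 / 1000

# Rough 4K bitrate range, depending on the codec (illustrative values)
low, high = gb_per_hour(16), gb_per_hour(42)
print(round(low, 1), round(high, 1))  # 7.2 and 18.9 GB per hour
```

Two family members streaming 4K simultaneously would already consume tens of GB per evening, which is the point about networks clogging.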

This brings us to the internals of how telecom operators are run. Marketing departments decide what customers MUST like, often based on more than doubtful market studies and business plans. In contrast, the mobile app stores of this world just let customers decide. Angry Birds might not be the most intelligent app but it sure is a money maker. Procurement departments decide which network and IT infrastructure is best for the company. Ask them what NFV or SDN means and the only thing they can sensibly respond with is an RFP identifier. Do you really think any procurement department can make a sensible decision on which network technology will be able to compete with Google? More importantly, can they make sure these solutions are deployed at Google speed, integrated at Google speed and scaled out at Google speed? If they pick a "Telecom-Grade Feature Monster" that takes years to integrate, then they have killed any chance of that operator being innovative. With all the telecom-grade solutions operators have, why is it that Google's solutions are more responsive, offer better quality of service and are always available? Vittorio Colao, the Vodafone CEO, was quoted in a financial newspaper yesterday saying Vodafone will have to participate in the crazy price war around digital content because BT has moved into mobile. So one of the biggest telecom operators in the world has executive strategies like launching new tariff plans [think RED in 2013], paying crazy money to broadcast football matches, bundling mobile with fixed to discount overall monthly tariffs and erode ARPU even more, etc. If you can get paid millions to just look at what competitors and dotcoms are doing and badly copy them [the list is long: hosted email, portals, mobile portals, social networks, virtual desktops, IaaS, streaming video, etc.] then please allow me to put your long-term viability into question.

So can it actually be done differently? YES, for sure. What if operators enabled customers to customise communication solutions to their needs? Communication needs have not gone away; if anything, they have grown. WhatsApp, Google Hangouts, etc. are clear examples of how SMS and phone calls can be improved. However, they are just the tip of the iceberg of what is possible and what should be done. Network-integrated apps via Telco App Stores would give innovators a chance to launch services that customers really like. Hands up, who would pay to get rid of their current voicemail? Hands up, who really loves their operator's conference bridge and thinks it is state of the art? Hands up, who is of the opinion a bakery is absolutely not interested in knowing what its customers think about its products after they have left the shop?

Last week the TAD Summit in Turkey had a very special presentation from Truphone, one of the few disruptive mobile operators in the world. No wonder it won the best presentation award. Truphone, with the help of partners, deployed a telecom solution in minutes that included key components like IMS, SDP, HLR integration, one hundred numbers, dashboards, interactive voice responses, etc. Once deployed, the audience could immediately start calling and participating. The numbers of the people in the audience, their home operator, the operator that originally sold them their SIM, their age and their responses to interactive questions were all registered, with the results shown on a real-time dashboard. If the audience had been in different locations, they could have been put on an interactive map as well. The whole solution took only a few weeks to build, with a team of people who all had day jobs. The surprising thing is that it was all built with open source software. It is technically possible to innovate big time in telecom and bring new services to market daily, all at a fraction of today's cost. The technology is no longer a limiting factor. Old-school thinking, bureaucracy and incompetence are the only things that hold operators back from changing their destiny. Whatever they do, they shouldn't end up like former Nokia executives, telling the world a few years from now that Android and the iPhone took them by surprise. Dear mister operator, you have been warned. You have been given good advice and examples of how to do it better. Now it is time to act upon them…

The Cloud Winners and Losers?

October 15, 2014

The cloud is revolutionising IT. However, there are two sides to every story: the winners and the losers. Who are they going to be and why? If you can't wait, here are the losers: HP, Oracle, Dell, SAP, RedHat, Infosys, VMWare, EMC, Cisco, etc. Survivors: IBM, Accenture, Intel, Apple, etc. Winners: Amazon, Salesforce, Google, CSC, Workday, Canonical, Metaswitch, Microsoft, ARM, ODMs.

Now the question is: why? And is this list written in stone?

What has cloud changed?
If you are in the hardware business (storage, networking, etc. included), then cloud computing is a value destroyer. Your organisation assumes that small, medium and large enterprises have always run, and always will run, their own data centre. You have been blown out of the water by the fact that cloud has changed this fundamental rule. All of a sudden Amazon, Google and Facebook buy specialised webscale hardware directly from your suppliers, the ODMs. Facebook even open sources hardware, networking, rack and data centre designs, making it possible for anybody to compete with you. Cloud is all about scale-out and open source, hence commodity storage, software-defined networks and network virtualisation functions are converting your portfolio into commodity products. If you are an enterprise software vendor, then you always assumed that companies would buy an instance of your product, customise it and manage it themselves. You did not expect that software could be offered as a service and that one platform could offer individual solutions to millions of enterprises. You also did not expect that software could be sold by the hour instead of licensed forever. If you are an outsourcing company, then you assume that companies that have invested in customising Siebel will want you to run it forever and will never move to Salesforce.

Reviewing the losers
HP’s Cloud Strategy
HP has been living off printers and hardware. Meg Whitman has rightly decided to separate the cash cow, stop subsidising the other, less profitable divisions and let it be milked until it dies. The other group will focus on Cloud, Big Data, etc. However, HP Cloud is more expensive and slower moving than any of the big three, so economies of scale will push it into niche areas or make it die. HP's OpenStack is a product that came 2-3 years late to a market which, as we will see later, is about to be commoditised. HP's Big Data strategy? Overpay for Vertica and Autonomy and focus your marketing on the lawsuits with the former owners, not on any unique selling proposition. Also, Big Data can only be sold if you have an open source solution that people can test. Big Data customers are small startups that quickly became large dotcoms. Most enterprises would not know what to do with Hadoop even if they could download it for free [YES, you can actually download it for free!!!].
Oracle’s Cloud Strategy
Oracle denied the cloud existed until even its most laggard customers started asking questions. Until very recently you could only buy Oracle databases by the hour from Amazon. Oracle has been milking the enterprise software market for years, paying surprise visits to audit your usage of their database and sending you an unexpected bill. Recently they have started to cloud-wash [and Big Data-wash] their software portfolio, but Salesforce and Workday are already too far ahead to catch. A good Christmas book Larry could buy from Amazon would be "The Innovator's Dilemma".
Dell’s Cloud Strategy
Go to the main Dell page and you will not find the words Big Data or Cloud. I rest my case.
SAP’s Cloud Strategy
Salesforce overtook Siebel; Workday is working hard on doing the same to SAP. People don't want to manage their ERP themselves.
RedHat’s Cloud Strategy
[I work for their biggest competitor] A RedHat salesperson to its customers: there are three versions. Fedora if you need innovation but don't want support. CentOS if you want free but no security updates. RHEL, which is expensive and old but comes with support. Compare this to Canonical: there is only one Ubuntu, it is innovative, free to use, and if you want support you can buy it separately.
For cloud, the story is that RedHat is three times cheaper than VMWare and your old stuff can be made to work for as long as you want, provided you follow a prescribed recipe. Compare this with an innovator that wants to completely commoditise OpenStack [ten times cheaper] and bring the most innovative and flexible solution [any SDN, any storage, any hypervisor, etc.] that instantly solves your problems [deploy different flavours of OpenStack in minutes without needing any help].
Infosys or any outsourcing company
If the data centre is going away, then the first thing to go with it is that CRM solution bought in the 90s from a company that no longer exists.
VMWare's Cloud Strategy
For the company that brought virtualisation into the enterprise, it is hard to admit that by putting a REST API in front of it, their solution is no longer needed in each individual enterprise.
EMC's Cloud Strategy
Commodity storage means that scale-out storage can be offered at a fraction of the price of a regular EMC SAN solution. However, the big killer is Amazon's S3, which can give you unlimited storage in minutes without worries.
Cisco's Cloud Strategy
A Cisco router is an extremely expensive device that is hard to manage and built on top of proprietary hardware, a proprietary OS and proprietary software. What do you think will happen in a world where cheap ASICs plus commodity CPUs, a general-purpose OS and many thousands of network apps from an app store become available? Or worse, where a network no longer needs many physical boxes because most of it is virtualised?
What does a cloud loser mean?
A cloud loser is a company whose existing cash cows will be crunched by disruptive innovations. Does this mean that losers will disappear or cannot recover? Some might disappear. However, if smart executives in these losing companies are given the freedom to bring to market new solutions that build on the new reality, they might come out stronger. IBM has shown it can do so many times.

Let’s look at the cloud survivors.
IBM has shown over and over again that it can reinvent itself. It sold its x86 servers to show its employees and the world that the future is no longer there. In the past it bought PWC's consultancy, which will keep reinventing new service offerings for customers that are lost in the cloud.
Just like PWC's consultancy arm within IBM, Accenture will have consultants that help people make the transition from data centre to cloud. Accenture will not lead the revolution but will be a "me-too" player that can field more people, faster, than others.
x86 is not going to die soon. The cloud just means others will be buying it. Intel will keep trying to innovate in software and go nowhere [e.g. Intel's Hadoop was going to eat the world], but at least its processors will keep it above water.
Apple knows what consumers want but they still need to prove they understand enterprises. Having a locked-in world is fine for consumers but enterprises don’t like it. Either they come up with a creative solution or the billions will not keep on growing.
What does a cloud survivor mean?
Being a cloud survivor means that your key cash cows will not be killed by the cloud. It does not guarantee that the company will grow. It just means that in this revolution, the eye of the tornado rushed over your neighbour's house, not yours. You can still suffer lots of collateral damage…

IaaS = Amazon. No further words needed. Amazon will extend Gov Cloud into Health Cloud, Bank Cloud, Energy Cloud, etc. and remove the main laggard's argument: "for legal & security reasons I can't move to the cloud". Amazon currently has 40-50 Anything-as-a-Service offerings; in 36 months it will have 500.
PaaS & SaaS = Salesforce. Salesforce will become more than a CRM on steroids, it will be the world’s business solutions platform. If there is no business solution for it on Salesforce then it is not a business problem worth solving. They are likely to buy competitors like Workday.
Google is the king of the consumer cloud. Google Apps has taken the SME market by storm. The enterprise cloud is not going anywhere soon, however. Google was too late with IaaS and, unlike its competitors, is not solving on-premise transitional problems. With Kubernetes, Google will re-educate today's star programmers, and over time it will revolutionise the way software is written and managed, so it might win in the long run. Google's cloud future will be decided in 5-10 years. They invented most of this technology, and only showed the world five years later in a paper.
CSC has moved away from being a body shop to having several strategically important products for cloud orchestration and big data. They have a long-term future focus, employing cloud visionaries like Simon Wardley, that few others match. You don't win a cloud war in the next quarter. It took Simon 4 years to take Ubuntu from 0% to 70% on public clouds.
What Salesforce did to Oracle’s Siebel, Workday is doing to SAP. Companies that have bought into Salesforce will easily switch to Workday in phase 2.
Since RedHat is probably reading this blog post, I can’t be explicit. But a company of 600 people that controls up to 70% of the operating systems on public clouds, more than 50% of OpenStack, brings out a new server OS every 6 months, a phone OS in the next months, a desktop every 6 months, a complete cloud solution every 6 months, can convert bare-metal into virtual-like cloud resources in minutes, enables anybody to deploy/integrate/scale any software on any cloud or bare-metal server [Intel, IBM Power 8, ARM 64] and is on a mission to completely commoditise cloud infrastructure via open source solutions in 2015 deserves to make it to the list.
Metaswitch has been developing network software for the big network guys for years. These big network guys would put it in a box and sell it at an extremely high price. In a world of commodity hardware, open source and scale-out, Clearwater and Calico have catapulted Metaswitch onto the list of most innovative telecom suppliers. Telecom providers will behave like cloud providers: they will go to the ODM that really knows how things work and ignore the OEM that just puts a brand on the box. The cloud still needs WAN networks. Google Fiber will not rule the world in one day. Telecom operators will have to spend their billions with somebody.
If you are into Windows you will be on Azure and it will be business as usual for Microsoft.
In an ODM-dominated world, ARM processors are likely to move from smart phones into networking and cloud hardware.
Nobody knows them but they are the ones designing everybody’s hardware. Over time Amazon, Google and Microsoft might make their own hardware but for the foreseeable future they will keep on buying it “en masse” from ODMs.
What does a cloud winner mean?
Billions and fame for some, large take-overs or IPOs for others. But the cloud war is not over yet. It is not because the first battles were won that enemies can’t invent new weapons or join forces. So the war is not over, it is just beginning. History is written today…

Amazon AWS will continue to compete with its best customers…

If you thought Amazon's Prime Instant Films is just an exception, a one-off case of Amazon competing with its best customers, then you are wrong. This is not the exception but the rule. Simon Wardley just explained why Amazon is fast-following its best customers and why more companies should do it too, even in the physical world. The summary is:

If you don’t want to launch a 100 new services and assume failure on 90-95, then let others launch thousands and you commoditise the successful innovations.

So what does this mean?

It means that if you are a young startup that builds everything on AWS, then Amazon will simply look at the traffic that goes through your servers. If all of a sudden they see that you are picking up more traffic than anybody else, then they will shortly launch a competing solution that commoditises your business. Since they have access to your solution, they can actually look inside, see how it works and design a more optimised solution.

How do you avoid your service being commoditised by a fast follower?

First of all, move faster than anybody else. Full automation is key. If you are faster to respond to customers' needs, then you will attract all the customers in a winner-takes-all market. Also follow lean startup principles and A/B testing: run continuous experiments and only scale up engineering on a new feature after it has been demonstrated to be successful with customers in a small-scale test.
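As a rough sketch of what "demonstrated to be successful in a small-scale test" can mean in practice, here is a minimal two-proportion z-test in stdlib Python (the traffic numbers are invented):

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 5000 users per arm, variant B converts better
z = two_proportion_z(400, 5000, 480, 5000)
print(round(z, 2))  # |z| > 1.96 means significant at the 5% level
```

Anything beyond |z| ≈ 1.96 is significant at the 5% level; a real experimentation platform would also handle sample sizing and repeated peeking, but the decision rule is this simple at its core.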

Second, don’t build for one cloud, build for multiple clouds. If you use cloud orchestration solutions that allow your solution to be moved from one cloud to another one then you are less likely to be trackable by one cloud provider. Treat the cloud providers like they are commodity and move your workloads where it makes more financial sense. Whatever you do, don’t get locked-in by some proprietary services because you will have a hard time moving out. Just ask Netflix how they feel about having their platform ran on top of their biggest competitor’s infrastructure without a chance of moving a way soon. Don’t want to be in their shoes? Use a cloud orchestration solution. Don’t know any open source? Check out Juju

Third, assume from the start that you will have fast followers, so try to put barriers to entry in place. A good strategy is to build a business on top of a network effect. For example: Facebook has over 1 billion users, and the more users, the more synergies. Even if you stole all the code from Facebook and launched Headbook, you would not be successful. Network-effect businesses consequently tend to operate in winner-takes-all markets. The other, counter-intuitive strategy is to strategically open source parts of your solution. If you open source parts of your solution, then nobody can offer a "cheaper" alternative to your freely available solution, so the incentive to build a competing product is low. Additionally, you will get contributions from others, hence your team will be able to run faster than anybody else. Finally, open source does not mean zero revenue. Netflix has open sourced its architecture. This lowers its costs and increases its innovation speed, but since you don't have access to its content library and the original content it creates itself, it is extremely hard to compete with. So open source those parts that help your strategy…

Presto – Facebook's Exabyte-Scale Query Engine

Presto is Facebook's answer to Cloudera's Impala, Hortonworks' Stinger and Google's Dremel. Presto is an ANSI-SQL-compatible real-time data warehouse query engine, so existing data tools should work with it, unlike Hive, which needed special integration. Presto runs in-memory, executing simple queries in a few hundred milliseconds and complex queries in a few minutes. Ideal for interactive data warehousing. Unfortunately Presto will not be open sourced until later this year [probably fall], so the Big Data community will have to be patient.
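The practical benefit of ANSI-SQL compatibility is that ordinary SQL tooling keeps working. As a loose sketch of the interaction style, using Python's built-in sqlite3 as a stand-in (the table and query are invented, not Facebook's schema):

```python
import sqlite3

# In-memory stand-in for a warehouse table (illustrative data)
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE page_views (country TEXT, views INTEGER)")
db.executemany("INSERT INTO page_views VALUES (?, ?)",
               [("US", 120), ("US", 80), ("DE", 50)])

# Plain ANSI SQL — the same kind of statement a Presto client would send,
# just against a distributed in-memory engine instead of a local file
rows = db.execute(
    "SELECT country, SUM(views) FROM page_views "
    "GROUP BY country ORDER BY SUM(views) DESC").fetchall()
print(rows)  # [('US', 200), ('DE', 50)]
```

Because the dialect is standard SQL, the same BI dashboards and reporting tools that speak to a traditional warehouse can, in principle, point at Presto unchanged.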

Open source real-time massive-scale data warehousing is likely to disrupt existing players like Teradata, Oracle, etc., who until recently were able to charge $100K per terabyte…

Update: Facebook has open sourced Presto. You can now download it at

If you want to make a Juju Charm of Presto please contact me…

Why is Europe no longer innovating?

A little test: name a European dotcom that has changed people's lives in recent years. No clue? With one minor change, substituting American for European, the list would be long: Google, Facebook, Twitter, LinkedIn, Zynga, etc.

Even when innovations make it over the ocean, Europe is limited to business development, sales and some limited support. Look at the job pages of the big dotcoms and you will see the VP of Engineering in California and the business development manager in Europe. So the future is defined in California, and Europe is just a market to sell the innovations that have been tried and certified in the USA.

Most people could not care less whether Zynga comes from the USA or Tonga [no harm meant to anybody from Tonga]. However, Europe is missing out on some major innovations that can boost the productivity of any small or medium enterprise. Think of Square, Quickbooks, Dwolla, etc. as examples.

Some years ago Europe led the mobile and telecom industry, with Ericsson, Vodafone, Telefonica, Orange, Deutsche Telekom and Nokia as clear examples. Nowadays it is Apple, Google, Facebook, etc. that lead the mobile and telecom revolution. Many might not realize it, but Google has not only disrupted the mobile operating system market. Google has the first global software-defined network in the world. Google is writing history as a major driver behind OpenFlow. The USA, together with Britain, is also leading in White Spaces and other future wireless innovations.

What needs to change in Europe?

The European Union and local governments have always tended to over-protect the communication industry. Many laws protect former state monopolies from real competition. The European Union should really look at White Spaces as a way to bring much-needed innovation back into the industry. Instead of selling White Space licenses to the usual suspects, the European Union should declare White Spaces a "free" WiFi-on-steroids alternative to LTE. White Spaces can be the solution for rural areas that want 21st-century broadband connectivity.

The laws that oblige telecom companies to provide universal national service are also outdated. We do not have gigabit fiber-to-the-home in big cities because competitors are obliged to provide universal service. Why not let 10 competitors fight it out without the obligation to connect everybody? The free market will connect those people and companies that are economically viable. By obliging universal connectivity, everybody is connected to a slow network, leading to European broadband mediocrity.

Telecom companies that have started to set up venture capital offerings are moving in the right direction. Unfortunately, too little money is poured into new ventures. Telefonica's Wayra offers $30-70K during a 6-month incubation. That means €46K to €109K on an annual basis as seed capital. What can you buy for this kind of money? Virtually nothing; one- or two-person teams at most. Great people would earn more in their day jobs, so they are unlikely to jump on Wayra. More realistic numbers would be €150-200K, which would allow teams of 3-10 people plus scope for hardware and other types of innovation. The chances that a two-person team on a small budget makes a world-changing impact are very slim, because you need multiple skills to really innovate.

Crowdfunding should also be high on the list of the European Union. Let people participate in ventures as very small minority stakeholders via collective seed investment. Give Europe some chance of building a European Kickstarter on steroids. Pan-European laws would need to be put in place for this.

We need European Entrepreneur Heroes as well. Europe needs a European version of Steve Jobs, Jeff Bezos, Larry Page, Mark Zuckerberg and Marc Benioff. People who can convert a vision into a multi-billion industry. People who will be role models for future generations.

If Europe wants to leave the current recession behind, it needs to move away from farming subsidies and invest in innovation. We need modern digital laws and a general legal simplification that allows more entrepreneurs to start innovative companies. European corporations should set up more venture capital funds, and crowdfunding should be high on everybody's agenda.

Big Data Apps and Big Data PaaS

March 21, 2012

Enterprises no longer lack data. Data can be obtained from everywhere. The hard part is converting data into valuable information that can trigger positive actions. The problem is that you currently need four experts to get this process up and running:

1) Data ETL expert – is able to extract, transform and load data into a central system.

2) Data Mining expert – is able to suggest great statistical algorithms and able to interpret the results.

3) Big Data programmer – is an expert in Hadoop, Map-Reduce, Pig, Hive, HBase, etc.

4) Business expert – is able to guide the other experts in extracting the right information and taking the right actions based on the results.

A Big Data PaaS should focus on making sure that the first three are needed as little as possible. Ideally they are not needed at all.

How could a business expert be enabled in Big Data?

The answer is Big Data Apps and Big Data PaaS. What if a Big Data PaaS were available, ideally open source as well as hosted, that came with a community marketplace for Big Data ETL connectors and Big Data Apps? You would have Big Data ETL connectors to all major databases, Excel, Access, web server logs, Twitter, Facebook, LinkedIn, etc. For a fee, different data sources could be accessed in order to enhance the quality of the data. Companies should be able to easily buy access to each other's data on a pay-as-you-use basis.

The next step is Big Data Apps. Business experts often have very simple questions: "Which age group is buying my product?", "Which products are also bought by my customers?", etc. Small reusable Big Data Apps could be built by experts and reused by business experts.

A Big Data App example

A medium-sized company is selling household appliances. This company has a database with all its customers, and another database with all its product sales. What if a Big Data App could find which products tend to be sold together, and whether any specific customer features (age, gender, customer since, hobbies, income, number of children, etc.) or other features (e.g. time of the year) are significant? Customer data in the company's database could be enhanced with publicly available information (from Facebook, Twitter, LinkedIn, etc.). Perhaps the Big Data App could find out that parents (number of children > 0) whose children like football (Facebook) are 90% more likely to buy waffle makers, pancake makers, oil fryers, etc. three times a year. Local football clubs might organise events three times a year to gain extra funding. Sponsorship, direct mailing, special offers, etc. could all help to attract more parents of football-loving kids to the shop.
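At its core, the "products sold together" question in the example above is a market-basket count of item co-occurrences. A toy stdlib-Python sketch (the sales data is invented):

```python
from collections import Counter
from itertools import combinations

# Toy transaction log: each row is one customer's basket (invented data)
baskets = [
    {"waffle maker", "pancake maker", "oil fryer"},
    {"waffle maker", "pancake maker"},
    {"oil fryer", "kettle"},
    {"waffle maker", "pancake maker", "kettle"},
]

# Count every unordered product pair appearing in the same basket
pairs = Counter()
for basket in baskets:
    pairs.update(combinations(sorted(basket), 2))

top_pair, count = pairs.most_common(1)[0]
print(top_pair, count)  # ('pancake maker', 'waffle maker') 3
```

A real Big Data App would run the same counting logic as a distributed job over millions of baskets and hide it behind a simple wizard for the business expert.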

Each Big Data App would focus on solving one specific problem: "finding products that are sold together", "clustering customers based on social aspects", etc. As long as a simple wizard can guide a non-technical expert in selecting the right data sources and understanding the results, it could be packaged up as a Big Data App. A marketplace could exist for the best Big Data Apps. External Big Data PaaS platforms could also allow data from different enterprises to be brought together and generate extra revenue, as long as individual persons cannot be identified.
