
Archive for the ‘Cloud Computing’ Category

Disruptive operator Truphone shows the future to other operators

December 16, 2014

Truphone showed other operators how “open source”, “telecom solutions” and “available in minutes” can now be combined in one sentence. Check out the details at: https://insights.ubuntu.com/2014/12/16/truphone-uses-juju-to-demo-worlds-first-telecom-solution-in-minutes/

Snappy Ubuntu for Business People

December 9, 2014

Canonical is the company behind Ubuntu. Ubuntu powers up to 70% of the public cloud, and 64% of OpenStack private clouds run on top of Ubuntu. Today, Canonical launched Snappy Ubuntu Core! Snappy Ubuntu is a revolution in how software gets packaged, deployed, upgraded and rolled back. So what is it, and why should you and your business care?

What is Snappy Ubuntu?
Snappy allows developers to build Snappy apps – called Snaps – much like mobile apps, and deploy them to the cloud, to any x86 computer, or to a fast-growing list of IoT and ARM v7/v8 boards. For more info on IoT, see ubuntu.com/things. In the past, developers would make a software solution, and afterwards a maintainer would often take weeks or months to create a packaged version. This meant that fast-moving projects like Docker would never be up to date inside any of the big Linux distributions. Snappy changes this by allowing developers to package their solution themselves and publish it through the Snap Store to all users in minutes. Since all Snaps run in a secure, confined environment, they cannot harm other Snaps or the operating system itself. Quality, speed and security can now all be combined.

Snappy upgrades are transactional. This means that you install a new version in one go, but can also easily roll back to the previous version if required. Snappy manages a super small version of Ubuntu called Ubuntu Core. This means you can run it very cost-efficiently and fast, everywhere. Since Ubuntu Core is a lightweight version of Ubuntu, teams don’t have to be retrained when they want to go from the cloud to embedded: it all works the same.
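To make the transactional model a bit more tangible, here is a minimal Python sketch of an A/B-style upgrade with rollback. The class and method names are invented for illustration; this is not Snappy’s actual implementation.

```python
# Illustrative sketch of a transactional (A/B) upgrade with rollback.
# Not Snappy's real implementation; all names are hypothetical.

class TransactionalSystem:
    def __init__(self, current_version):
        self.slots = {"a": current_version, "b": None}  # two system images
        self.active = "a"                               # slot we boot from

    def upgrade(self, new_version, health_check):
        """Install into the inactive slot, switch, and roll back on failure."""
        inactive = "b" if self.active == "a" else "a"
        self.slots[inactive] = new_version   # write the new image in one go
        previous = self.active
        self.active = inactive               # flip the boot pointer
        if not health_check():
            self.active = previous           # roll back: flip the pointer back
            return False
        return True


system = TransactionalSystem("os-1.0")
ok = system.upgrade("os-1.1", health_check=lambda: True)
print("running:", system.slots[system.active], "upgrade ok:", ok)
```

The point of the sketch is that the previous version never gets overwritten, so a rollback is just switching a pointer rather than reinstalling anything.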

Why is Snappy important for Businesses?

Snappy allows solutions to be packaged and published by software vendors in minutes instead of months. Users can deploy and roll back very easily. Trying new innovations becomes cheap and fast.

Snaps can use any license. Snappy Ubuntu was born as a spin-off of the Ubuntu Phone operating system. Canonical is working on commercial Snap Stores with different groups like the ROS Robot Store, the Ninjasphere Store, etc. Unlike traditional mobile app stores, the Ubuntu Snap Stores are a lot more open. You can use the generic Ubuntu Snap Store, but you can also get your own branded Snap Store and govern it yourself. For large companies there will even be an OEM version that they can manage locally and use to host federated Snap Stores for their own large customers. Be sure to reach out to Canonical if this is of interest to you.

With Snappy, the vendor packages the complete application, including its dependencies. Fewer moving parts mean fewer chances of something going wrong and cheaper customer support. Updates are incremental, so only what changes gets pushed, saving bandwidth and time. Urgent security patches can be distributed easily and with high confidence.
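As a rough illustration of why incremental updates save bandwidth, here is a small Python sketch that ships only the chunks that differ between two versions. The chunk size and hashing scheme are assumptions for the example, not Snappy’s real delta mechanism.

```python
# Illustrative delta-update sketch: push only the chunks that changed.
import hashlib

def chunks(data, size=4):
    return [data[i:i + size] for i in range(0, len(data), size)]

def delta(old, new, size=4):
    """Return (index, chunk) pairs the client does not have yet."""
    old_hashes = {hashlib.sha256(c).hexdigest() for c in chunks(old, size)}
    return [(i, c) for i, c in enumerate(chunks(new, size))
            if hashlib.sha256(c).hexdigest() not in old_hashes]

old_image = b"AAAABBBBCCCCDDDD"
new_image = b"AAAABBBBXXXXDDDD"   # only one chunk differs
changed = delta(old_image, new_image)
print(changed)                                      # -> [(2, b'XXXX')]
print("bytes pushed:", sum(len(c) for _, c in changed),
      "instead of", len(new_image))
```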

Existing Docker containers and other containerised apps can be deployed on Snappy. Building your Docker containers or other commercial Snaps on top of Snappy Ubuntu makes good business sense. In the future you can get optional commercial support from a company that has been supporting Linux for 10 years and is trusted by Amazon AWS, Google and Microsoft Azure with the large majority of their Linux workloads.

Snappy Ubuntu is open source and has some great example Snaps, so make sure your teams don’t get “Snapsassinated” by a competitor…

Proximity Cloud: what is it and why should you care…

November 29, 2014

Many people are still getting their heads around public and private clouds. Even fewer know about the Internet of Things. However, the real revolution will be bringing the two together, and for that you will need a proximity cloud.

What is a proximity cloud?
In five years’ time there will be billions of sensors everywhere. You will be wearing them on your body. They will be in your house, at your job, in your hospital, in your city, in your car/bus/plane, etc.

Now, in a world where 1 billion people will have access to tens or even hundreds of connected devices around them, it is easy to understand that you don’t want to send all the data they generate to a public or private cloud. There is just not enough mobile spectrum or fiber capacity to cater for this.

What we need is to put intelligence close to the data generators, to determine whether your heartbeat, that video camera stream, the electricity consumption of your boiler, the braking capacity of your car, etc. are within normal limits or are outliers that deserve some more attention. Especially where video is involved, you don’t want to send it over a public Internet connection if there is nothing interesting on it, e.g. the neighbours’ cat just strolled onto your lawn.

So houses, cars, companies, telecom operators, hospitals, manufacturers, etc. will need a new type of equipment close to the data avalanches in order to filter out the 99.999999% of useless data.
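As a sketch of the idea, assuming invented sensor names and thresholds, a proximity cloud essentially runs a filter like this close to the devices and only forwards the outliers upstream:

```python
# Minimal edge-filtering sketch: forward only abnormal readings upstream.
# The sensor names and "normal" ranges are invented for illustration.

NORMAL_RANGES = {
    "heart_rate_bpm": (50, 110),
    "boiler_kwh": (0.0, 3.5),
}

def forward_to_cloud(reading):
    print("UPLOAD:", reading)          # in reality: send over the network

def handle_reading(reading):
    low, high = NORMAL_RANGES[reading["sensor"]]
    if not (low <= reading["value"] <= high):
        forward_to_cloud(reading)      # outlier: worth central attention
    # otherwise: drop or aggregate locally, saving bandwidth

handle_reading({"sensor": "heart_rate_bpm", "value": 72})    # stays local
handle_reading({"sensor": "heart_rate_bpm", "value": 180})   # gets uploaded
```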

An example: if you are a security company that manages tens of thousands of video cameras in thousands of companies, then you want to know when a thief walks by or an assault happens. You will train machines to decide what the difference is between a human walking past and a cat. However, burglars will soon find out that the computer vision has a flaw, and that when they wear a cat disguise they can get past it. It is this type of event that will trigger a central cloud platform to request all videos of a local business from the last 24 hours, retrain its visual model so it knows that humans can wear animal suits, and then push the updated model to the proximity clouds on all its customer premises. The alternative is storing all video streams in the cloud, which would require enormous bandwidth, or even worse, not knowing what happened and being in the press the next week as the “cat-suit” flop.

Eliminating RFPs to make enterprise software sexy

November 28, 2014

Today I had a meeting that could be the beginning of the end of RFPs for buying software. RFPs are the tool established buyers and vendors use to keep new entrants at bay. However, I haven’t met anybody who says they love writing or responding to them. The effect of RFPs on software is perverse. The main problem is that you can’t ask whether the software is beautiful, easy to use, fast to integrate, efficient, effective at solving a business problem, secure, etc. Instead you ask whether the vendor provides training, because you assume the software is ugly and difficult. You ask whether they offer consultancy services and an SDK or connector library, because you assume it is difficult to integrate. You assume you will need to customise it for months, because it will not be effective out of the box. But most importantly, since you will be stuck with the software for years, you ask whether it supports every potential feature that perhaps in five years might be needed for five minutes. It is this last set of questions that kills any innovation and ease of use in business software. A product manager on the receiving end will get funding to add those absurd features when customers ask for them. A career-limiting move would be to ask for budget to remove useless features, or to point out that your product looks worse than Frankenstein.

So how can you make sure that software is beautiful, does what it is supposed to do efficiently and effectively, is fast, nimble, easy to use, secure, scalable, fast to integrate, future-proof, etc.? You do what you do when you buy a car: you ask for the keys of different models, take them for a serious spin and push them to their limits.

So what do you propose, a three-month PoC for each potential solution?
No. What I propose is being able to get your hands on all the different alternative software solutions, deploy, integrate and scale them in hours or even minutes, and then let loose a bunch of automated performance tests and rough end-users, even some ethical hackers or competitors.

If the software does what it says on the tin – is effective, efficient, beautiful, secure, fast, scalable, easy, etc. – then you negotiate pricing or use it for a minimum viable product.

It used to be impossible to do all of this in hours, but with solutions for quickly deploying private clouds and cloud orchestration solutions like Juju, we are actually planning to try this approach with a real customer and real suppliers. To be continued…
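As a very rough sketch of what such an automated evaluation could look like, assuming hypothetical endpoints for each candidate solution, you could run the same quick benchmark against every deployment and compare the results side by side:

```python
# Hypothetical sketch: run the same quick benchmark against each candidate
# solution once it is deployed, and compare the results side by side.
import time
import urllib.request

CANDIDATES = {                       # invented endpoints, for illustration only
    "vendor-a": "http://10.0.0.11:8080/health",
    "vendor-b": "http://10.0.0.12:8080/health",
}

def average_latency(url, attempts=10):
    latencies = []
    for _ in range(attempts):
        start = time.time()
        try:
            urllib.request.urlopen(url, timeout=5).read()
            latencies.append(time.time() - start)
        except OSError:
            latencies.append(float("inf"))   # count failures as worst case
    return sum(latencies) / len(latencies)

for name, url in CANDIDATES.items():
    print(name, "average latency:", average_latency(url))
```

In practice you would add functional checks, load generators and security scans on top, but the principle stays the same: measure every candidate the same way instead of reading their RFP answers.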

Several telecom operators to run into financial problems in the next three years…

November 21, 2014

In 2017 several telecom operators will run into financial problems, with Vodafone being the best known, unless they start changing today. Why?

The telecom business is very capital intensive. Buying spectrum, rolling out next-generation mobile networks and bringing fiber connections to every home and business is extremely expensive. Traditionally, operators were the main users of their own networks and enjoyed large margins on the services that ran on top of them. The truth today is that telecom operators have been completely sidetracked. They no longer have any control over the mobile devices used on their networks, nor over the services. Data is growing exponentially and is already clogging their networks. A data tsunami is on the horizon. Operators see costs ballooning and ARPU shrinking. There is no way they can start asking substantially more for broadband access. Obama just killed any hope of adding a speed tax on the Internet. The EU wants to kill juicy roaming charges. And the future will be even worse.

New disruptive competitors have entered the market in recent years. Google Fiber is offering gigabit speeds for both uploading and downloading. YouTube and Netflix generate the majority of Internet traffic in most countries. Most streaming video is still broadcast in SD quality, but Netflix is already broadcasting in 4K, or ultra-high-definition, quality on Google Fiber. This means traffic volumes of between 7 and 19 GB per hour, depending on the codec that is used. Take into account that different family members are often watching two or more programmes at the same time. The end result is that today’s networks and spectrum are completely insufficient. Now add the nascent IoT revolution. Every machine on earth will get an IP address and be able to “share its feelings with the world”. Every vital sign of each person in the richer parts of the world will be collected by smart watches and tweeted about on social networks. 90% of the communication running inside Facebook’s data centres is machine-to-machine communication, not user-related communication, and Facebook hasn’t even introduced IoT or wearables yet. You can easily imagine them helping even the biggest geek with suggestions on which girl to talk to and what to talk about, via augmented-reality goggles and with the help of smart watches. Yes, it is a crazy example, but which telecom marketing department would have given even $1 to Zuckerberg if he had pitched Facebook to them when it was still known as TheFacebook? It is the perfect example of how “crazy entrepreneurs” make telecom executives look like dinosaurs.
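For a quick sanity check on those volumes: a stream’s bitrate converts to hourly data volume as bitrate × 3600 / 8. The bitrates below are assumptions picked to bracket the range quoted above, not official Netflix figures.

```python
# Rough conversion from stream bitrate to data volume per hour.
def gb_per_hour(mbit_per_s):
    return mbit_per_s * 3600 / 8 / 1000   # Mbit/s -> GB per hour (decimal GB)

# Assumed 4K bitrates for illustration; real codecs and encoders vary.
for mbps in (16, 25, 42):
    print(f"{mbps} Mbit/s  ->  {gb_per_hour(mbps):.1f} GB/hour")
# ~16 Mbit/s gives roughly 7 GB/hour and ~42 Mbit/s roughly 19 GB/hour,
# which is the range quoted above.
```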

This brings us to the internals of how telecom operators are run. Marketing departments decide what customers MUST like, often based on more than doubtful market studies and business plans. In contrast, the mobile app stores of this world just let customers decide. Angry Birds might not be the most intelligent app, but it sure is a money maker. Procurement departments decide which network and IT infrastructure is best for the company. Ask them what NFV or SDN means and the only thing they can sensibly respond with is an RFP identifier. Do you really think any procurement department can make a sensible decision on which network technology will be able to compete with Google? More importantly, can they make sure these solutions are deployed at Google speed, integrated at Google speed and scaled out at Google speed? If they pick a “Telecom-Grade Feature Monster” that takes years to integrate, then they have killed any chance of that operator being innovative. With all the telecom-grade solutions operators have, why is it that Google’s solutions are more responsive, offer better quality of service and are always available? Vittorio Colao, the Vodafone CEO, was quoted in a financial newspaper yesterday saying Vodafone is going to have to participate in the crazy price war around digital content because BT has moved into mobile. So one of the biggest telecom operators in the world has executive strategies like launching new tariff plans [think RED in 2013], paying crazy money to broadcast football matches, bundling mobile with fixed to be able to discount overall monthly tariffs and erode ARPU even more, etc. If you can get paid millions to just look at what competitors and dotcoms are doing and badly copy them [the list is long: hosted email, portals, mobile portals, social networks, virtual desktops, IaaS, streaming video, etc.], then please allow me to put your long-term viability into question.

So can it actually be done differently? YES, for sure. What if operators enabled customers to customise communication solutions to their needs? Communication needs have not gone away; if anything, they have grown. WhatsApp, Google Hangouts, etc. are clear examples of how SMS and phone calls can be improved. However, they are just the tip of the iceberg of what is possible and what should be done. Network-integrated apps via telco app stores would give innovators a chance to launch services that customers really like. Hands up, who would pay to get rid of their current voicemail? Hands up, who really loves their operator’s conference bridge and thinks it is state of the art? Hands up, who is of the opinion that a bakery is absolutely not interested in knowing what its customers think about its products after they have left the shop?

Last week the TAD Summit in Turkey had a very special presentation from Truphone, one of the few disruptive mobile operators in the world. No wonder it won the best presentation award. Truphone, with the help of partners, deployed a telecom solution in minutes that included key components like IMS, SDP, HLR integration, one hundred phone numbers, dashboards, interactive voice response, etc. Once it was deployed, the audience could immediately start calling in and participating. The numbers of all the people in the audience, their home operator, the operator that originally sold them their SIM, their age and their responses to interactive questions were registered, and the results were shown on a real-time dashboard. If the audience had been spread over different locations, they could have been put on an interactive map as well. The whole solution took only a few weeks to build, with a team of people who all had day jobs. The surprising thing is that it was all built with open source software. It is technically possible to innovate big time in telecom and bring new services to market daily, all at a fraction of today’s cost. The technology is no longer the limiting factor. Old-school thinking, bureaucracy and incompetence are the only things holding operators back from changing their destiny. Whatever they do, they shouldn’t act like former Nokia executives in a few years’ time and tell the world that Android and the iPhone took them by surprise. Dear mister operator, you have been warned. You have been given good advice and examples of how to do it better. Now it is time to act upon them…

A Layman’s Guide to the Big Data Ecosystem

November 19, 2014

Charles – Chuck – Butler, a colleague at Canonical, wrote a very nice blog post explaining the basics of Big Data. It not only explains the basics, it also shows how anybody can set up Big Data solutions in minutes via Juju. Highly recommended reading:

http://blog.dasroot.net/a-laymans-guide-to-the-big-data-ecosystem/

This is a good example of the power of cloud orchestration. An expert creates the charms and bundles for Juju, and afterwards anybody can easily deploy, integrate and scale the Big Data solution in minutes.

Samuel Cozannet, another colleague, used some of these components to create an open source Tweet sentiment analysis solution that can be deployed in 14 minutes and includes autoscaling, a dashboard, Hadoop, Storm, Kafka, etc. He presented it at the OpenStack Developer Summit in Paris and will shortly be providing instructions for everybody to set it up.

LXD and Docker

November 11, 2014

This blog post is not about the technical details of LXC, LXD, Docker, Kubernetes, etc. It focuses on the different use cases LXD and Docker solve and should help non-experts understand them.
Canonical demoed a prototype of LXD last week at ODS. Several journalists incorrectly concluded that LXD is a competitor to Docker. The truth is that LXD solves a completely different use case from Docker. Ubuntu 14.04 was the first operating system to provide commercial support for Docker. Six times more Docker images are powered by Ubuntu than by all other operating systems combined. Ubuntu loves Docker.

Different Use Cases?

Docker is focused on being the universal container for applications. Developers love Docker because it allows them to prototype solutions quickly and share them with others. Docker is best compared to an onion. All internal layers are read-only and only the last layer is writeable. This means that people can quickly reuse Docker containers made by others and add their own layers on top if desired. Afterwards you upload your “personalised onion” to the Docker Hub so others can benefit from your work. Docker is ideal for boosting developer productivity and showing off innovations.
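To make the onion analogy concrete, here is a purely conceptual Python model of layered lookup: reads fall through the read-only layers, writes only ever touch the top one. It is not how Docker’s storage drivers are actually implemented.

```python
# Conceptual model of layered images: reads fall through read-only layers,
# writes go only to the topmost, writable layer. Not Docker's real storage code.

class LayeredImage:
    def __init__(self, *base_layers):
        self.base_layers = list(base_layers)   # read-only, shared, reusable
        self.top_layer = {}                    # writable, per-container

    def read(self, path):
        if path in self.top_layer:
            return self.top_layer[path]
        for layer in reversed(self.base_layers):   # newest base layer wins
            if path in layer:
                return layer[path]
        raise FileNotFoundError(path)

    def write(self, path, content):
        self.top_layer[path] = content         # base layers never change


ubuntu = {"/etc/os-release": "Ubuntu 14.04"}
nginx = {"/etc/nginx/nginx.conf": "worker_processes 4;"}

container = LayeredImage(ubuntu, nginx)        # reuse layers built by others
container.write("/srv/index.html", "hello")    # personalise only the top layer
print(container.read("/etc/os-release"), container.read("/srv/index.html"))
```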
Canonical is the company behind Ubuntu, and Ubuntu powers 64% of all OpenStack clouds in production – OpenStack being the fastest growing open source project in the world. In OpenStack, as in VMware or on AWS, you run a hypervisor on a host operating system and then install a guest operating system on top. Because you have two layers of operating systems, one server can host many applications on multiple operating systems at the same time. This greatly optimises resource usage compared to running without virtualization. However, because you need to duplicate operating systems, you are also wasting a lot of resources. Ideally you could put Docker directly inside OpenStack and run all applications from inside containers. The problem is that Docker does not give an administrator the possibility to remotely log into the container and just add monitoring, backups and the other normal things administrators do to guarantee SLAs.

In comes LXD. LXD builds on top of a container technology called LXC, which Docker also used before. However, LXD gives you access to what looks like a virtual server, just as you would have with a hypervisor. The big difference is that LXD does not require operating systems to be duplicated. Instead, it partitions the host operating system and ensures fair and secure usage between the different applications that run inside different containers. The result is that the same server can pack many more applications, and both start-up and migration of applications between servers become extremely fast. This idea is not new. Mainframes already had containers. Solaris had containers. LXD just makes sure that your favourite private cloud has containers that are easy to manage.
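A back-of-the-envelope illustration of why dropping the duplicated guest operating system matters for density follows; every figure in this sketch is an assumption chosen for the example, not a benchmark.

```python
# Back-of-the-envelope density comparison; every figure here is an assumption.
HOST_RAM_GB = 128
APP_RAM_GB = 1.0               # memory one application actually needs
GUEST_OS_RAM_GB = 1.0          # extra RAM a full guest OS consumes per VM
CONTAINER_OVERHEAD_GB = 0.05   # bookkeeping overhead per container

vms = int(HOST_RAM_GB // (APP_RAM_GB + GUEST_OS_RAM_GB))
containers = int(HOST_RAM_GB // (APP_RAM_GB + CONTAINER_OVERHEAD_GB))

print(f"hypervisor VMs per host:  {vms}")        # ~64
print(f"LXD-style containers:     {containers}") # ~121
```

With the guest operating system gone, the per-application overhead shrinks to almost nothing, which is where the “pack many more applications” claim comes from.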

Can a hypervisor, Docker and LXD coexist?

Yes. The hypervisor can make sure Windows runs on top of an Ubuntu host [Linux containers cannot run Windows on top]. Docker containers can host next-generation scale-out solutions that are either purpose-built for Docker or have been adapted to the new paradigms Docker introduces. LXD will be best for all the standard Linux workloads that you just want to move as-is, with no need to update the applications or the tools that are integrated with them.
Since LXD has an Apache licence and is available on GitHub, it is very likely that the future will evolve into a world where the advantages of LXD and Docker are combined in some shape or form, hopefully with new innovations added as well. That is the power of open source innovation and exactly the reason why Canonical has shared LXD with the world…
