Truphone showed other operators how “open source”, “telecom solutions” and “available in minutes” can now be combined in one sentence. Check out the details at: https://insights.ubuntu.com/2014/12/16/truphone-uses-juju-to-demo-worlds-first-telecom-solution-in-minutes/
Canonical is the company behind Ubuntu. Ubuntu powers up to 70% of public cloud workloads, and 64% of OpenStack private clouds run on top of Ubuntu. Today, Canonical launched Snappy Ubuntu Core! Snappy Ubuntu is a revolution in how software gets packaged, deployed, upgraded and rolled back. So what is it, and why should you and your business care?
What is Snappy Ubuntu?
Snappy allows developers to build Snappy apps – called Snaps – like mobile apps and deploy them to the cloud, to any x86 computer, or to a fast-growing list of IoT and ARM v7/v8 boards (for more info on IoT, see ubuntu.com/things). In the past, a developer would build a software solution, after which a maintainer would often take weeks or months to create a packaged version. This meant that fast-moving projects like Docker could never be up to date inside any of the big Linux distributions. Snappy changes this by allowing developers to package their solutions themselves and publish them through the Snap Store to all users in minutes. Since all Snaps run in a secure and confined environment, they cannot harm other Snaps or the operating system itself. Quality, speed and security can now all be combined.
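For a flavour of what a vendor publishes, a Snap's metadata boils down to a small declarative file; the sketch below is illustrative only (the field names and values are a hypothetical reconstruction, not taken from Snappy's documentation):

```yaml
# Hypothetical Snap metadata: the vendor declares the app and its
# entry points, and the packaging tool bundles everything else.
name: hello-world
version: 1.0.2
vendor: Example Dev <dev@example.com>
services:
  - name: hello-service      # a long-running daemon shipped by the Snap
    start: bin/hello-daemon
binaries:
  - name: hello              # a command exposed to the user
    exec: bin/hello
```

Because the vendor ships the dependencies inside the Snap, the file above is essentially all a maintainer used to spend weeks producing.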
Snappy upgrades are transactional. This means you install a new version in one go, but can also easily roll back to the previous version if required. Snappy manages a super-small version of Ubuntu called Ubuntu Core. This means you can run it very cost-efficiently and fast everywhere. Since Ubuntu Core is a lightweight version of Ubuntu, teams don’t have to be retrained when they want to go from the cloud to embedded; it all works the same.
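The transactional model is easy to picture: new versions are installed alongside the old ones and a single pointer decides which one is active, so rollback is just moving the pointer back. A minimal Python sketch of that idea (an illustration of the concept, not Snappy’s actual code):

```python
# Simplified sketch of transactional upgrades: versions are kept side
# by side and a single "current" pointer decides which one is active.
class TransactionalApp:
    def __init__(self, version):
        self.versions = [version]   # all installed versions, oldest first
        self.current = version      # the active version

    def upgrade(self, new_version):
        # Install alongside the old version, then switch in one step.
        self.versions.append(new_version)
        self.current = new_version

    def rollback(self):
        # The previous version is still on disk, so switching back is cheap.
        if len(self.versions) > 1:
            self.versions.pop()
            self.current = self.versions[-1]

app = TransactionalApp("1.0")
app.upgrade("2.0")
app.rollback()   # back to "1.0" in one step
```

Because the old version is never overwritten, the switch either happens completely or not at all – that is what makes the upgrade “transactional”.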
Why is Snappy important for Businesses?
Snappy allows solutions to be packaged and published by the software vendors in minutes instead of months. Users can deploy and roll back very easily. Trying new innovations becomes cheap and fast.
Snaps can use any license. Snappy Ubuntu was born as a spin-off of the Ubuntu Phone operating system. Canonical is working on commercial Snap Stores with different groups, like the ROS Robot Store, the Ninjasphere Store, etc. Unlike traditional mobile app stores, the Ubuntu Snap Stores are a lot more open. You can use the generic Ubuntu Snap Store, but you can also get your own branded Snap Store and govern it yourself. For large companies there will even be an OEM version that they can manage locally and use to host federated Snap Stores for their large customers. Be sure to reach out to Canonical if this is of interest to you.
With Snappy, the vendor packages the complete application, including its dependencies. Fewer moving parts mean fewer chances of something going wrong and cheaper customer support. Updates are incremental, so only what changes gets pushed, saving bandwidth and time. Urgent security patches can be distributed easily, with high confidence.
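The incremental-update idea can be sketched by fingerprinting files and shipping only the ones whose content changed (a toy illustration; real delta updates work at a much finer granularity than whole files):

```python
import hashlib

def fingerprint(files):
    # Map each file name to a hash of its content.
    return {name: hashlib.sha256(data).hexdigest()
            for name, data in files.items()}

def delta(old, new):
    # Only files whose content changed (or that are new) get pushed.
    old_fp = fingerprint(old)
    new_fp = fingerprint(new)
    return {name: new[name] for name in new
            if old_fp.get(name) != new_fp[name]}

v1 = {"app.bin": b"v1 code", "assets.dat": b"images"}
v2 = {"app.bin": b"v2 code", "assets.dat": b"images"}
print(delta(v1, v2))   # only app.bin changed, so only it is transferred
```

The unchanged assets never leave the server, which is where the bandwidth and time savings come from.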
Existing Docker or other container apps can be deployed with Snappy. Building your Docker containers or other commercial Snaps on top of Snappy Ubuntu makes good business sense. In the future you can get optional commercial support from a company that has been supporting Linux for 10 years and is trusted by Amazon AWS, Google and Microsoft Azure with the vast majority of their Linux workloads.
Snappy Ubuntu is open source and has some great example Snaps, so make sure your teams don’t get “Snapsassinated” by a competitor…
Many people are still getting their heads around public and private clouds. Even fewer know about the Internet of Things. However, the real revolution will be bringing both together, and for that you will need a proximity cloud.
What is a proximity cloud?
In five years’ time there will be billions of sensors everywhere. You will wear them on your body. They will be in your house, at your job, in your hospital, in your city, in your car, bus or plane, etc.
Now, in a world where a billion people have access to tens or even hundreds of connected devices around them, it is easy to understand that you don’t want to send all the data they generate to a public or private cloud. There is just not enough mobile spectrum or fiber capacity to cater for this.
What we need is to put intelligence close to the data generators to determine whether your heartbeat, that video camera stream, the electricity consumption of your boiler, the brake capacity of your car, etc. are within normal limits or are outliers that deserve more attention. Especially if video is involved, you don’t want to send it over a public Internet connection if there is nothing interesting on it, e.g. the neighbours’ cat just got onto your lawn.
So houses, cars, companies, telecoms, hospitals, manufacturers, etc. will need a new type of equipment close to these data avalanches in order to filter out the 99.999999% of useless data.
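The kind of filtering this equipment performs can be pictured with a toy example: readings within normal limits stay local, and only outliers are forwarded upstream (a hypothetical sketch; the thresholds and data are made up, and a real system would use trained models rather than fixed limits):

```python
def forward_outliers(readings, low, high):
    """Keep normal readings local; forward only values outside [low, high]."""
    return [r for r in readings if r < low or r > high]

# A heart-rate stream: almost everything is normal and never leaves the device.
heartbeats = [72, 70, 75, 71, 180, 74, 69]
alerts = forward_outliers(heartbeats, low=40, high=160)
print(alerts)   # only the one anomalous reading is sent upstream
```

Seven readings in, one reading out: that ratio, multiplied across billions of sensors, is what makes the proximity cloud necessary.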
An example: if you are a security company that manages tens of thousands of video cameras in thousands of companies, you want to know when a thief walks by or an assault takes place. You will train machines to decide what the difference is between a passing human and a cat. However, burglars will soon find out that the computer vision has a flaw: wearing a cat disguise gets them past it. It is this type of event that will trigger a central cloud platform to request all videos from a local business in the last 24 hours, retrain its visual model to know that humans can wear animal suits, and then push the update to all proximity clouds on all its customers’ premises. The alternative is storing all video streams in the cloud, which would require enormous bandwidth – or even worse, not knowing what happened and being in the press the next week as the “cat-suit” flop.
Today I had a meeting that could be the beginning of the end of RFPs to buy software. RFPs are the tool established buyers and vendors use to keep new entrants at bay. However, I haven’t met anybody who says they love writing or responding to them. The effect of RFPs on software is perverse. The main problem is that you can’t ask if software is beautiful, easy to use, fast to integrate, efficient, effective at solving a business problem, secure, etc. Instead you ask if the vendor provides training, because you assume the product is ugly and difficult. You ask if they offer consultancy services and an SDK or connector library, because you assume it is hard to integrate. You assume you will need to customise it for months because it will not be effective out of the box. But most importantly, since you will be stuck with the software for years, you ask if it supports any potential feature that perhaps in five years might be needed for five minutes. It is this last set of questions that kills any innovation and ease of use in business software. A product manager on the receiving end will get funding to add those absurd features when customers ask for them. A career-limiting move would be to ask for budget to remove useless features, or to admit that your product looks worse than Frankenstein.
So how can you make sure that software is beautiful, does what it is supposed to do efficiently and effectively, is fast, nimble, easy to use, secure, scalable, fast to integrate, future proof, etc.? You do what you do when you buy a car: you ask for the keys of different models, take them for a serious spin and push them to their limits.
So what you propose is a three months PoC for each potential solution?
No, what I propose is being able to get your hands on all the alternative software solutions, deploy, integrate and scale them in hours or even minutes, and then let loose a bunch of automated performance tests and rough end-users – even some ethical hackers or competitors.
If the software does what it says on the tin – is effective, efficient, beautiful, secure, fast, scalable, easy, etc. – then you negotiate pricing or use it for a minimum viable product.
It used to be impossible to do all of this in hours, but with solutions for deploying private clouds quickly and cloud orchestration tools like Juju, we are actually planning to try this approach with a real customer and real suppliers. To be continued…
Charles – Chuck – Butler, a colleague at Canonical, wrote a very nice blog post explaining the basics of Big Data. It not only explains them but also shows how anybody can set up Big Data solutions in minutes via Juju. Highly recommended reading:
This is a good example of the power of cloud orchestration. Some expert creates charms and bundles them with Juju, and afterwards anybody can easily deploy, integrate and scale this Big Data solution in minutes.
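For a flavour of what such a bundle looks like, here is a hypothetical Juju bundle in the spirit of the post (the charm names, series and unit counts are illustrative, not a tested deployment):

```yaml
# Hypothetical Juju bundle: the expert declares services, unit counts
# and relations once; anybody can then deploy the whole topology.
services:
  hadoop-master:
    charm: cs:trusty/hadoop
    num_units: 1
  hadoop-slave:
    charm: cs:trusty/hadoop
    num_units: 3
relations:
  - [hadoop-master, hadoop-slave]
```

Scaling out is then a matter of raising `num_units`, rather than hand-configuring another node.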
Samuel Cozannet, another colleague, used some of these components to create an open source Tweet sentiment analysis solution that can be deployed in 14 minutes and includes autoscaling, a dashboard, Hadoop, Storm, Kafka, etc. He presented it at the OpenStack Developer Summit in Paris and will shortly be providing instructions for everybody to set it up.
This blog post is not about the technical details around LXC, LXD, Docker, Kubernetes, etc. It focuses on the different use cases LXD and Docker are solving and should help non-experts understand them.
Canonical demoed a prototype of LXD last week at ODS. Several journalists incorrectly concluded that LXD is a competitor of Docker. The truth is that LXD addresses a completely different use case from Docker. Ubuntu 14.04 was the first operating system to provide commercial support for Docker. Six times more Docker images are powered by Ubuntu than by all other operating systems combined. Ubuntu loves Docker.
Different Use Cases?
Docker is focused on being the universal container for applications. Developers love Docker because it allows them to prototype solutions quickly and share them with others. Docker is best compared to an onion: all the inner layers are read-only and only the outermost layer is writeable. This means people can quickly reuse Docker containers made by others and add their own layers on top if desired. Afterwards you upload your “personalised onion” to the Docker Hub so that others can benefit from your work. Docker is ideal for boosting developer productivity and showing off innovations.
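The onion analogy maps directly onto a Dockerfile: each instruction adds a read-only layer on top of a base image, and only the running container gets a writeable layer. A minimal sketch (the image tag, package and file names are illustrative):

```dockerfile
# Reuse a base image as the inner, read-only layers of the onion.
FROM ubuntu:14.04

# Each instruction below adds one more read-only layer on top.
RUN apt-get update && apt-get install -y python
COPY app.py /opt/app.py

# Only the running container gets a writeable layer above all of these.
CMD ["python", "/opt/app.py"]
```

Anyone who pulls this image reuses the shared lower layers and can keep stacking their own on top.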
Canonical is the company behind Ubuntu, and Ubuntu powers 64% of all OpenStack clouds in production – OpenStack being the fastest-growing open source project in the world. In OpenStack, as with VMware or AWS, you run a hypervisor on a host operating system and then install a guest operating system on top. Because you have two layers of operating systems, one server can host many applications on multiple operating systems at the same time. This greatly optimises resource usage compared with non-virtualised deployments. However, because you need to duplicate operating systems, you are still wasting a lot of resources. Ideally you could put Docker directly inside OpenStack and run all applications from inside containers. The problem is that Docker does not give an administrator the possibility to remotely log into the container and just add monitoring, backup and the other routine things administrators do to guarantee SLAs.

In comes LXD. LXD builds on a container technology called LXC, which Docker also used in the past. LXD gives you access to a virtual server, just as a hypervisor would. The big difference is that LXD does not require operating systems to be duplicated. Instead it partitions the host operating system and ensures fair and secure sharing between the applications that run inside different containers. The result is that the same server can pack many more applications, and start-up as well as migration of applications between servers becomes extremely fast. The idea is not new: mainframes had containers, and Solaris had containers. LXD just makes sure that your favourite private cloud has containers that are easy to manage.
Can a hypervisor, Docker and LXD coexist?
Yes. The hypervisor can make sure Windows runs on top of an Ubuntu host (Linux containers cannot run Windows on top). Docker containers can host some next-generation scale-out solution that is either purpose-built for Docker or has been changed to support some of the new paradigms Docker introduces. LXD will be best for all your standard Linux workloads that you just want to move as-is. No need to update the applications or the tools that are integrated with them.
Since LXD has an Apache licence and is available on GitHub, it is very likely that the future will actually evolve into a world where the advantages of LXD and Docker get combined in some shape or form – hopefully with new innovations added as well. That is the power of open source innovation and exactly the reason why Canonical has shared LXD with the world…