Showing posts with label docker.

2019-02-25

Goodbye Docker and Thanks for all the Fish

Back in July 2018, I started to write a blog post about the upcoming death of Docker as a company (and also perhaps as a technology) but I never got round to completing and publishing the post. It is time to actually get that post out.





So here you go....


Of course Docker is still here, and of course everyone is still using Docker and will continue to do so for the near and foreseeable future (how far that foreseeable future extends is yet to be determined). The reason I chose this title for the blog post is because, in my humble opinion, the days for Docker as a company are numbered - and maybe as a technology as well. If you would indulge me with a few minutes of your time, I will share with you the basis for my thoughts.

A number of years ago, Docker was the company that changed the world - and we can safely say it is still changing the world today. Containers and the technology behind them have been around for many years, long before the word "docker" was even thought of, let alone turned into a verb ("Dockerize all the things"), but Docker was the company that enabled the masses to consume container technology in an easy and simple fashion. Most technology companies (or at least companies that consider themselves to be modern tech companies) will be using Docker or containers as part of their product or their pipeline - because it makes so much sense and brings so much benefit to the whole process.

Over the past 12-24 months, people have come to the realization that Docker has run its course and, as a technology, is not going to provide additional value beyond what they have today - and they have decided to start looking elsewhere for that extra edge.

Kubernetes has won the container orchestration war - I don't think anyone can deny that fact. Docker itself has adopted Kubernetes. There will always be niche players with specific use cases for Docker Swarm, Mesos, Marathon, or Nomad - but the de-facto standard is Kubernetes. All three big cloud providers now have a managed Kubernetes solution that they offer to their customers (and as a result will eventually sunset the home-made solutions they built over the years - because there can be only one). Everyone is building more services and providing more solutions, to bring in more customers and increase their revenue.

Story is done. Nothing to see here. Next shiny thing please..

At the moment, Kubernetes uses Docker as the underlying container engine. I think that the Kubernetes community understood that Docker as a container runtime (and I use this term deliberately) was the obvious solution for getting a product out of the gate as soon as possible. They also (wisely) understood quite early on that they needed the option of switching out that container runtime - allowing the consumers of Kubernetes to make a choice.

The Open Container Initiative brought with it the Runtime Spec - which opened the door for us all to use something else besides Docker as the runtime. And the alternatives are growing - steadily. Docker is no longer the only runtime being used. There is growth in the community that is slowly sharing the knowledge of how to use something else besides Docker. Kelsey Hightower has updated his Kubernetes the Hard Way (amazing work - honestly) over the years from CRI-O to containerd to gVisor. All the cool kids on the block are no longer using Docker as the underlying runtime. There are many other options out there today - Clear Containers, Kata Containers - and the list is continuously growing.

Most people (including myself) do not have enough knowledge and expertise to swap out the runtime for whatever they would like, and usually just go with the default out of the box. When people understand that they can easily make the choice to swap out the container runtime, and the knowledge is out there, easily and readily available, I do not think there is any reason for us to use Docker any more - and therefore Docker as a technology, and as a company, will slowly vanish. The other container runtimes that are coming out will be faster, more secure, smarter, and more feature-rich (some of them already are) compared to what Docker has to offer. If you have a better, smarter, more secure product - why would people continue to use technology that no longer suits their ever-increasing needs?
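Swapping the runtime is, at its heart, a kubelet configuration change. Purely as an illustration (the flag names and socket path here are the common defaults of this era - verify them against your Kubernetes version before relying on them), pointing the kubelet at containerd instead of the Docker engine looks roughly like this:

```shell
# Illustrative sketch only: run the kubelet against containerd's CRI
# socket instead of the built-in dockershim. Flag names and the socket
# path are assumptions based on common defaults - check your docs.
kubelet \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```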

For Docker to avert this outcome, I would advise investing as much energy as possible into creating the best-of-breed runtime for any workload - so that Docker remains the de-facto standard that everyone uses. The problem with this statement is that there is no money in a container runtime. Docker never made money on their runtime; they looked for their revenue in the enterprise features on top of the container runtime. How they are going to solve this problem is beyond me - and beyond the scope of this post.

The Docker community has been steadily declining; the popularity of the events has been declining; the number of new features and announcements is on the decline - and has been for the past year or two.

Someone told me a while back that speaking badly about things or giving bad news is usually very easy. We can easily say that this is wrong, this is not useful, this should change. But without providing a positive twist on something, you become the "doom and gloom" person. The "grim reaper". Don't be that person.

I would like to heed that advice, and so let me add what this means for you today. You should start investing in understanding how these other runtimes can help you and where they fit - increase your knowledge and expertise, so that you are prepared and not surprised when everyone else stops using Docker and you find yourself rushing to adapt all your infrastructure. I think it is inevitable.

That was the post I wanted to write 8 months ago...

What triggered me to finish this post today was a post from Scott McCarty about the upcoming RHEL 8 beta - Enterprise Linux 8 Beta: A new set of container tools - and my tweet that followed.

Lo and behold - no more docker package available in RHEL 8.
If you’re a container veteran, you may have developed a habit of tailoring your systems by installing the “docker” package. On your brand new RHEL 8 Beta system, the first thing you’ll likely do is go to your old friend yum. You’ll try to install the docker package, but to no avail. If you are crafty, next, you’ll search and find this package:
podman-docker.noarch : "package to Emulate Docker CLI using podman."
What is this Podman we speak of? The docker package is replaced by the Container Tools module, which consists of Podman, Buildah, Skopeo and several other tidbits. There are a lot of new names packed into that sentence so, let’s break them down.
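In practice, the podman-docker package just maps the familiar docker CLI name onto podman. A rough one-line equivalent of that shim (assuming podman is installed) would be:

```shell
# Minimal stand-in for the podman-docker shim: the familiar CLI name,
# backed by podman. Podman is daemonless, so no Docker service runs.
alias docker='podman'

# Existing muscle memory then keeps working, e.g.:
# docker run -it ubuntu /bin/bash   # actually invokes podman
```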









(Source - Tutorial by Doug Tidwell - https://youtu.be/bJDI_QuXeCE)

I think a picture is worth more than a thousand words..

Please feel free to share this post and share your feedback with me on Twitter (@maishsk)

2018-01-25

The #AWS PowerShell Docker Container

I cannot believe it is over 3 years since I created the openstack-git-env container. At the time I was really frustrated at how hard it was to set up an environment to start contributing to OpenStack.

Well I have now moved on - focused primarily on AWS - and I have a good amount of PowerShell experience under my belt - but since I moved off a Windows laptop 3 years ago - I hardly use PowerShell anymore. Which is a shame.

Luckily, Microsoft has released a version of PowerShell that works on Mac and Linux - so I can start getting back on the horse.

I looked at the instructions for setting up the PowerShell commands for AWS - which led me to the AWS documentation page. But the missing link there is how you install PowerShell on your Mac/Linux machine - there is no documentation for that. This is complicated and error prone.

So I was thinking - there must be a container already available for PowerShell - it can’t be that everyone goes through the hoops of installing everything locally.

And lo and behold - there is one - https://hub.docker.com/r/microsoft/powershell/

So I built on top of this - the AWS PowerShell container.

All you need to do is set an alias on your machine, add a script that will launch the container - and Bob's your uncle - you are ready to go.
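As a sketch of what that alias-plus-script combination can look like (the function name, image name, and the ~/.aws mount are my illustrative assumptions - the repository has the authoritative version):

```shell
# Hypothetical launcher for the AWS PowerShell container.
# The image name and the ~/.aws mount are assumptions for illustration;
# the repository README is the authoritative source.
aws_powershell() {
  docker run -it --rm \
    -v "$HOME/.aws:/root/.aws" \
    maishsk/aws-powershell "$@"
}
```

Drop the function into your shell profile and `aws_powershell` behaves like a local PowerShell prompt with your AWS credentials mounted in.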

All the information is located on the repository.

Screenshot at Jan 25 08-59-05

Please let me know if you think this is useful - and if there are any improvements you would like to see.

The code is on Github - feel free to contribute or raise any issues when/if you find them.

2014-11-27

Start Contributing to OpenStack - The Easy Way #docker

One of the most daunting and complicated things people find when trying to provide feedback and suggestions to the OpenStack community, projects and code – is the nuts and bolts of actually getting this done.

There are a number of tutorials around. The official kind - HowTo for FirstTimers, Documentation HowTo, Gerrit Workflow.

Scott Lowe also posted a good tutorial on Setting up the Tools for Contributing to OpenStack Documentation. But the process itself is still clunky, complicated and for someone who has never used git or gerrit before – highly intimidating.

That is why I embarked on providing a really simple way of starting to contribute to the OpenStack code. I was planning on writing a step-by-step guide on how exactly this should be done - but Scott's post was more than enough - so there is no need to repeat what has already been said.

Despite that there are still some missing pieces in there which I would like to fill in here in this post.

Before we get started, there are a few requirements/bits of information that you must have, and some things that you need to do beforehand, in order for this process to work.

They are as follows:

  1. A launchpad account
  2. An OpenStack Foundation account. (Use the same email address for both step 1 and step 2.)
  3. A signed Contributor License Agreement (CLA).
  4. A gerrit http password.
  5. Somewhere to run Docker (I wrote a post about this - The Quickest Way to Get Started with Docker)

Let me walk you through each of the steps.

1. A Launchpad Account

Sign up for a launchpad account – http://www.launchpad.net

Launchpad

Register for a new account – you will need to provide some information of course

Create account

You will of course need to verify your email address. Go to your inbox, click on the link in the email you received, and validate your address.

Validate

2. Join the OpenStack Foundation

Sign up for an OpenStack Foundation account – https://www.openstack.org/join

Join OpenStack

And fill in the details

Details

Details2

*Remember* – use the same email address you used to sign up for the launchpad account.

3. Sign the CLA

Go to https://review.openstack.org/ and sign in with your Launchpad ID (from Step 1)

cla

If you have not logged out of Launchpad, you should be presented with a screen like the one below.

confirm

Some of the information will already be populated for you. You will need to choose a unique username.

gerrit

We will not choose an SSH key at the moment. Scroll to the bottom of the screen and choose New Contributor Agreement.

CLA2

You should choose the ICLA

ICLA

Review the agreement and understand what you are signing and then fill in the details below.

details2

If everything is Kosher then you will be presented with the following screen to confirm

Signed

4. A gerrit http password


Remember the username you chose in the previous step? That is the one you should use.

http password

On that same Settings screen, choose HTTP Password, enter your username, and click Generate Password.

http password2

http password3

Don’t worry – the password has already been changed – the minute I published this post.

And we have finished all the registration and administrative things.

Just to recap – you will need these details for later (you need to replace them with your relevant details instead)

  1. Your Name – Maish Saidel-Keesing
  2. Email Address – [email protected]
  3. Gerrit Username – maish_sk
  4. HTTP Password - zwZW0X5NAGVP

Running the Container


Now that we have all the parts – it is really simple to get started.

The steps are as follows:

docker pull maishsk/openstack-git-env

This will retrieve the container from the Docker Hub. Once the container has been retrieved you can launch the container.

A few points to note beforehand.

  1. The container will always start a bash shell. The aim of this environment is to allow you to contribute to the OpenStack Project – so it has to be interactive.
  2. You have to provide 4 variables to the run command - it has to be all four - otherwise the container will not launch.
  3. The container will automatically upload an SSH key to gerrit – to allow you to connect and contribute your code upstream. It does not remove the SSH keys when done – this you will have to do manually.
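Since all four variables are mandatory, a small pre-flight check can save you a failed launch. This helper is hypothetical - only the variable names come from the run command below:

```shell
# Hypothetical pre-flight check: confirms the four mandatory variables
# are set before you launch the container.
check_git_env() {
  for v in GIT_USERNAME GIT_EMAIL GERRIT_USERNAME GERRIT_HTTP_PASSWORD; do
    # Indirect lookup of the variable named in $v (POSIX-safe).
    eval "val=\${$v:-}"
    if [ -z "$val" ]; then
      echo "Missing required variable: $v" >&2
      return 1
    fi
  done
  echo "All four variables are set"
}
```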

The command to launch the container is as follows - and remember, you need to take the values from above.

docker run --name="git-container" \
  -e GIT_USERNAME="\"Maish Saidel-Keesing\"" \
  -e GIT_EMAIL="[email protected]" \
  -e GERRIT_USERNAME=maish_sk \
  -e GERRIT_HTTP_PASSWORD=zwZW0X5NAGVP \
  -i -t maishsk/openstack-git-env

A few words about the variables

--name="git-container" – this is just to identify the launched container easily
-e GIT_USERNAME="\"Maish Saidel-Keesing\"" – the quotes have to be escaped with \"
-e GIT_EMAIL="[email protected]" – don't forget to put in your real email address!

Once the container is launched - provided you have followed all the steps correctly and the variables are correct - you will see some output printed to the screen with the SSH key that was just created, and you will also be able to see that key in the gerrit web interface.

run container

ssh key

You can see that the comment on the web is the same as the hostname of the container.

Embedded below is a screencast of the launching of the container.

In the next post – I will show you how to actually contribute some code.

If you have any feedback, comments or questions, please feel free to leave them below.

2014-10-28

Nova-Docker on Juno

Containers are hot. It is the latest buzzword. Unfortunately buzzwords are not always the right way to go, but I have been wanting to use containers as a first-class citizen on OpenStack for a while.

In Icehouse, Heat has support for containers but only in the sense that you can launch an instance and then launch a container within that instance (Scott Lowe – has a good walkthrough for this – it is a great read).

First a bit of history.

The Docker driver is a hypervisor driver for OpenStack Nova Compute. It was introduced with the Havana release, but lives out-of-tree for Icehouse and Juno. Being out-of-tree has allowed the driver to reach maturity and feature-parity faster than would be possible should it have remained in-tree. It is expected the driver will return to mainline Nova in the Kilo release.

The Docker driver was removed from Nova – due to CI issues and migrated to Stackforge for the Icehouse release.

From the announcement for Juno

Many operational updates were also made this cycle including improvements for rescue mode that users requested as well as allowing per-network setting on nova-network code. Key drivers were added such as bare metal as a service (Ironic) and Docker support through StackForge.

I set out to try it out. This is my environment:

  • Fedora 20 (x64)
  • All in one RDO installation of OpenStack (2014.2)

First things first was to get OpenStack up and running (that I am not going to go into how that is done in this post).

The stages are as follows:

  1. Install Docker on the compute node
  2. Install required packages to install nova-docker driver
  3. Config file changes
  4. Dockerize all the things!!

Install Docker on the compute Node

Following the documentation (do so for your Linux distribution)

yum -y remove docker
yum -y install docker-io

Then start the docker services and set them to run at startup

systemctl start docker
systemctl enable docker

Now to test that Docker is working correctly without OpenStack

docker run -i -t ubuntu /bin/bash

If all is good then you should see something similar to the screenshots below.

docker run

docker ps

Now we know that Docker is working correctly.

Install required packages to install nova-docker driver

Following the OpenStack documentation for Docker.

There are two packages needed to start: pip (python-pip) and git.

yum install -y python-pip git

Then we get the nova-docker driver from Stackforge and install it.

pip install -e git+https://github.com/stackforge/nova-docker#egg=novadocker
cd src/novadocker/
python setup.py install

This will pull the files from GitHub and place them under your current working directory; the setup step then installs the modules required for the driver.

Config file changes

The default compute driver needs to be changed, edit your /etc/nova/nova.conf and change the following option.

[DEFAULT]
compute_driver = novadocker.virt.docker.DockerDriver

Create the directory /etc/nova/rootwrap.d, if it does not already exist, and inside that directory create a file "docker.filters" with the following content:

# nova-rootwrap command filters for setting up network in the docker driver
# This file should be owned by (and only-writeable by) the root user

[Filters]
# nova/virt/docker/driver.py: 'ln', '-sf', '/var/run/netns/.*'
ln: CommandFilter, /bin/ln, root

Glance is the place where all the images are stored. It used to be the case that you needed a private docker registry, but this is no longer so - images can be added directly.

Edit the /etc/glance/glance-api.conf file and add docker to the supported container_formats value like the following example.

# Supported values for the 'container_format' image attribute
container_formats=ami,ari,aki,bare,ovf,ova,docker

We now need to restart the services for the new setting to take effect.

systemctl restart openstack-nova-compute
systemctl restart openstack-glance-api

If all is well and there were no configuration errors – then you are good to go.
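One quick way to confirm the nova.conf change actually landed is a simple grep; the helper below is just a sketch (the path default is the standard location used above):

```shell
# Sketch: verify the Docker compute driver line is present in nova.conf.
# Defaults to the standard path; pass another path as the first argument.
check_nova_driver() {
  conf="${1:-/etc/nova/nova.conf}"
  grep -q '^compute_driver[[:space:]]*=[[:space:]]*novadocker\.virt\.docker\.DockerDriver' "$conf"
}
```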

Dockerize all the things!!

No demonstration is ever complete without showing the deployment of a Wordpress application (why in the hell is it always Wordpress???).

We pull the Wordpress container into the host and then push it into Glance (assuming you have already sourced the credentials for Keystone/Glance)

docker pull tutum/wordpress
docker save tutum/wordpress | glance image-create --is-public=True --container-format=docker --disk-format=raw --name tutum/wordpress

** The image name has to be the same as the container name.

docker pull

glance image-create

image

And in the GUI

Horizon

And now to boot the new instance

nova boot --image "tutum/wordpress" --flavor m1.tiny test

nova boot

Here is the Console log

console log

Opening a web browser to the instance that received an IP from Neutron.

And hey presto – Wordpress!

Hey - Wordpress

This was a preliminary test – still many things to check…

  • Automation (Heat)
  • Bug problems
  • and so on…

Happy Dockerizing!! (and yes it seems that is actually a word)

2014-07-17

The Return of the Container

This is an excerpt of a post published elsewhere. A link to the original is at the bottom of this excerpt

Containers are not a new concept – there are several implementations that have been around for quite a number of years, be it Solaris Containers, Linux-V-Server, OpenVZ, or LXC.

So why has this become a hot topic, something that has many people turning their heads and looking at it once more? Well, that is quite simple: a huge amount of interest in Docker.

[Read full article … ]

2014-05-27

The Quickest Way to Get Started with Docker

Containers are not only those things that are used for shipping stuff around - or for storing the things you will never use or don't want to spoil - they are also used (and if you ask me, might even replace virtual machines in the not too distant future) as a platform to run applications / services / stacks.

Docker is one that has been getting a large amount of focus lately.

Docker is an open-source engine that automates the deployment of any application as a lightweight, portable, self-sufficient container that will run virtually anywhere.

Docker containers can encapsulate any payload, and will run consistently on and between virtually any server. The same container that a developer builds and tests on a laptop will run at scale, in production*, on VMs, bare-metal servers, OpenStack clusters, public instances, or combinations of the above.

Common use cases for Docker include:

  • Automating the packaging and deployment of applications
  • Creation of lightweight, private PAAS environments
  • Automated testing and continuous integration/deployment
  • Deploying and scaling web apps, databases and backend services

(Source – Docker)

A quick introduction presentation to docker is embedded below

I wanted to share with you a quick and easy way to start looking at docker and what you can do with it. And the most basic part of it is installing and configuring docker.

If you are just looking to play with it, you could install a VM with Ubuntu and go through the motions of installation on your platform of choice.

But I would like to get you up and running as fast as can be.

Enter boot2docker

Boot2docker demo

boot2docker is a lightweight Linux distribution based on Tiny Core Linux made specifically to run Docker containers. It runs completely from RAM, weighs ~24MB and boots in ~5s.

And to make it even easier, Mitchell Hashimoto (the author of Vagrant) has created a Vagrant box so you can start it up in even less time.
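A minimal Vagrantfile for that box might look like this (the box name is my assumption from memory - check the Vagrant box catalogue for the current name before relying on it):

```ruby
# Sketch of a minimal Vagrantfile for the boot2docker box.
# The box name "mitchellh/boot2docker" is an assumption for illustration.
Vagrant.configure("2") do |config|
  config.vm.box = "mitchellh/boot2docker"
end
```

Then a `vagrant up` followed by `vagrant ssh` should drop you into a Docker-ready shell.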

I can understand why this is going to take off – and why containers will have a good use case for a number of reasons and applications.

It is unbelievably fast!!

It takes less than a second – microseconds even, to start a container (i.e. a VM/OS/instance)

Just to emphasize how fast – let me show you a small example

I have a base ubuntu image and I ran a simple test:

  1. list running images
  2. ping the container (to show it is not up)
  3. start the container
  4. ping the container again.

#!/bin/sh

# List running containers - should be empty at this point
docker ps
# Ping the container's address - expected to fail, nothing is running yet
ping 172.17.0.2 -c 2
# Start the container in the background
docker run -d phusion/baseimage
# Ping again - the container should now respond
ping 172.17.0.2 -c 2
exit

At the same time I followed the docker log.

2014/05/27 14:02:27 GET /v1.10/containers/json
2014/05/27 14:02:38 POST /v1.10/containers/create
2014/05/27 14:02:38 POST /v1.10/containers/17029e...c45f1d1f92f899/start
[libcontainer] 2014/05/27 14:02:38 created sync pipe parent fd 14 child fd 13
[libcontainer] 2014/05/27 14:02:38 attach terminal to command
[libcontainer] 2014/05/27 14:02:38 starting command
[libcontainer] 2014/05/27 14:02:38 writting pid 1947 to file
[libcontainer] 2014/05/27 14:02:38 setting cgroups
[libcontainer] 2014/05/27 14:02:38 setting up network
[libcontainer] 2014/05/27 14:02:38 closing sync pipe with child

14:02:27 - I queried to see that there were no containers running.
The ping timed out over the next 11 seconds.
14:02:38 - the container was created and was available within 1 second.

How do you like them apples?

Are you ready to try out docker?

And of course this is a simple demonstration – you can get much more creative than this!