With Docker being a very new technology here at tombola, we were extremely excited to be visiting Austin, TX for DockerCon 17.
With big companies like Capital One, Netflix, PayPal and AWS speaking, we were looking forward to seeing how the big boys deliver services using Docker and the benefits it gives them. Being new to the Docker ecosystem, we were also keen to learn some new tricks and practices for streamlining and strengthening our (small) Docker fleet even further.

The Noteworthy

My day one was a workshop on modernising .NET applications using Docker. Microsoft are making a big deal of Docker on Windows, and there were a surprising number of talks and demos relating to it. The workshop took me through migrating a .NET app into a Docker container and expanding its capabilities using other Docker images (MQ, logging etc.). What it highlighted for me is that traditional Windows-based software is not well suited to running in containers in production. This is mainly down to Windows as an OS: container size and Windows feature installation can both be an issue. Having said that, there is a lot to be said for making deployment pipelines consistent, so Windows containers should still be considered on balance. Throwaway concept environments may be a great candidate for Windows containers at tombola.
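To give a flavour of the approach (a minimal sketch, not the workshop’s actual code — the image tag and paths are illustrative), a legacy ASP.NET app can be containerised with little more than a Windows base image and a copy of the published output:

```dockerfile
# escape=`
# Windows Server Core base image with IIS and ASP.NET pre-installed -
# note the size compared to typical Linux base images
FROM microsoft/aspnet:4.6.2

# Drop the published output of the legacy app into the default IIS site
COPY ./published/ /inetpub/wwwroot
```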

Netflix’s Brendan Gregg talked about performance analysis of containers. He talked briefly about Titus and how Netflix use it to manage over 1 million container deployments a week. He then went on to discuss in depth the challenges you can face in identifying performance bottlenecks in containers, particularly having two perspectives on the kernel: the host’s and the container’s. His talk was mainly about the tools and methodologies he uses to analyse container performance. It was quite in depth and can be found here: https://youtu.be/bK9A5ODIgac?list=PLkA60AVN3hh_nihZ1mh6cO3n-uMdF7UlV
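As a quick illustration of the two-perspectives problem using plain Docker tooling (nothing Netflix-specific, and the cgroup path below varies by distro and cgroup version):

```sh
# Inside a container /proc is the host's, so tools like free report
# host-wide totals rather than the container's 256MB cgroup limit
docker run --rm -m 256m alpine free -m

# From the host, the container's real accounting lives in its cgroup
CID=$(docker run -d -m 256m alpine sleep 60)
cat /sys/fs/cgroup/memory/docker/$(docker inspect -f '{{.Id}}' $CID)/memory.usage_in_bytes
```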

Capital One talked about an interesting application of Docker: they use it to streamline their browser testing. Using Selenium, they faced typical UI testing issues such as “works on my machine”, slow tests and testing different browser versions. They moved on to investigating Selenium Grid to manage their tests, but this required a relatively complicated infrastructure setup. Docker hugely simplified this architecture for them. It also solved the speed issue by improving concurrency, as this setup allowed them to scale specific browser-version nodes as they wished (a sketch of the pattern follows).
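This isn’t Capital One’s actual configuration, but the pattern is easy to sketch with the official Selenium images and Docker Compose (image tags and environment details are illustrative):

```yaml
# docker-compose.yml - a minimal Selenium Grid
version: "2"
services:
  hub:
    image: selenium/hub:3.4.0
    ports:
      - "4444:4444"
  chrome:
    image: selenium/node-chrome:3.4.0
    depends_on:
      - hub
    environment:
      # Tell the browser node where to register itself
      - HUB_PORT_4444_TCP_ADDR=hub
      - HUB_PORT_4444_TCP_PORT=4444
```

Running `docker-compose scale chrome=5` then gives five Chrome nodes behind one hub, and pinning a different node image tag pins a different browser version.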

AWS talked about running microservices on AWS using ECS. They covered all the basic concepts of ECS, how they fit into the Docker ecosystem and, inversely, how Docker fits into the AWS platform. Interestingly, AWS have taken a slightly different approach to all the others in that ECS has only very basic task scheduling features out of the box (none for scheduling batch jobs or time-based jobs). They were careful to highlight that ECS allows you to build your own scheduler that does anything you would like it to do (a typical AWS approach in my opinion).
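The building block for rolling your own scheduler is simply the ECS API. A one-off, batch-style task can be started directly (cluster and task definition names here are hypothetical), so a cron job or Lambda function making this call is already a rudimentary scheduler:

```sh
# Start a single run of a registered task definition on an existing cluster
aws ecs run-task \
  --cluster my-cluster \
  --task-definition nightly-report:1 \
  --count 1
```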

What was interesting about the talks that I attended is that Docker was used for a whole host of different reasons, not just deploying applications: people used it for building, testing, prototyping and more.

Some Interesting Projects and Tools

The first two were the winners of the Docker hacks awards:

Play with Docker (http://labs.play-with-docker.com/)
This is a hugely powerful browser-based tool for provisioning temporary Docker infrastructure to run tests or POCs on. Seriously, if you haven’t looked at it you should. You can set up swarms, share sessions, SSH and everything in this tool. It’s awesome.
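For instance, standing up a multi-node swarm inside Play with Docker is just the standard Swarm workflow — run this on one instance, then paste the join command into the others:

```sh
# On the first instance: become a swarm manager
docker swarm init --advertise-addr eth0

# This prints a join command; run it on each additional instance:
# docker swarm join --token SWMTKN-1-... <manager-ip>:2377

# Back on the manager: spread a test service across the swarm
docker service create --name web --replicas 3 -p 80:80 nginx
```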

Docker FaaS (https://github.com/alexellis/faas)
This is essentially a framework that lets anyone running Docker on any hardware (Linux or Windows) build and run Lambda-style functions, improving on Lambda in the range of languages supported.
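The reason almost any language works is the watchdog model: each function is a container whose watchdog process hands the request body to your program on stdin and returns its stdout as the response. A trivial (hypothetical) function body could be as simple as:

```sh
#!/bin/sh
# A FaaS function: the watchdog pipes the request body to stdin
# and serves whatever we write to stdout back as the HTTP response
read input
echo "Hello, $input"
```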

LinuxKit (https://blog.docker.com/2017/04/introducing-linuxkit-container-os-toolkit/)
This is essentially the Docker solution to the multi-platform issue, released as open source. Docker shipped a number of editions, such as Docker for Mac and Docker for Windows, and realised that in each case they were essentially shipping a different Linux subsystem to provide container capabilities. They also realised that there were many, many different requirements for this type of custom subsystem, so they open-sourced the project. What it is is an entirely pluggable, read-only and lightweight OS (only 35MB) with a minimal boot time, in which all system services are containers that can be swapped out. Apparently, Docker are considering donating this project to the Linux Foundation.
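A LinuxKit image is assembled from a YAML file that lists those swappable pieces; the outline below follows the shape of the examples in the project repo, with illustrative image tags:

```yaml
# linuxkit.yml - every component, including system services, is a container image
kernel:
  image: linuxkit/kernel:4.9.x
  cmdline: "console=tty0"
init:
  - linuxkit/init:v0.1
onboot:
  - name: dhcpcd
    image: linuxkit/dhcpcd:v0.1
services:
  - name: sshd
    image: linuxkit/sshd:v0.1
```

The project’s build tool turns this file into a bootable image, so swapping a system service is just swapping an image reference.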

Round Up

Overall, it was a fascinating conference. It was energising to be immersed in an ecosystem like Docker, and it was obvious that Docker is not going anywhere now. There are even signs that Docker are breaking through barriers and changing time-honoured ways of thinking at a very low level (LinuxKit). All of the big players (MS, Google, AWS, even Alibaba) had a dominating presence and were keen to sell their platform as the best route into the Docker ecosystem. If you ask me, Docker seemed to revel in this; often the Docker engineers were more clued up on the vendor solutions than the vendors were.

While Docker were leaving the big boys to fight it out amongst themselves, they were putting out a very strong message of their own: security and enterprise. The keynotes were heavily biased towards Docker Enterprise, a supported version of Docker, probably born out of companies like Visa and Capital One requiring pretty stringent support contracts before they can run in production. They talked a lot about Docker Cloud, a management UI for repositories and swarms that represents a single point of management for builds and deployments.

They also talked a lot about security: how the latest version of Docker brings about complete container isolation, and about vulnerability scanning and digitally signing images within your CI/CD process to validate that your images do not have any vulnerabilities and that they are the images you think they are. Alongside this sits the new Docker Store, a repository of signed and verified images, so you can be sure the images you run are built by the vendor you think they are and are often supported by them. Oracle announced they now ship Oracle to the Docker Store and will support any deployments of Oracle using Docker with this image (you can also run it free in dev/stage environments).
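The signing side of that story is already usable today through Docker Content Trust, which is a one-line switch (the image name below is hypothetical):

```sh
# With content trust enabled, pushes sign images and pulls verify signatures;
# an unsigned or tampered tag is refused outright
export DOCKER_CONTENT_TRUST=1
docker pull tombola/some-service:1.0
```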
I have no doubt that introducing Docker infrastructure at tombola was a good idea, even though we only have some simple use cases at present. I am sure that in future we will have a lot more use cases and a much larger Docker cluster. Already we are seeing the benefits (on a small scale) of a unified deployment process, with much improved scalability, flexibility and, therefore, cost management.

Roll on DockerCon 18…