I used to think I learned better through lectures than reading. Then, during the fall semester of my junior year of college, I took the class "ECE311 Hardware Design," also known as FPGAs (field-programmable gate arrays), since that is primarily what we discussed. In this class, we used a book that all of us, including the professor, hated: Digital Design of Signal Processing Systems: A Practical Approach by Shoab Ahmed Khan. The book was full of typos, its content barely made sense, and it often lacked vital information.
In addition to the textbook issue, I was also the youngest person in the class: a junior, while the rest of my classmates were seniors and master's students. The upperclassmen would belittle me for being the only junior and, by extension, for having the least prior knowledge coming into the class. So I became driven to prove my worth to my classmates and started preparing for class. One day I was so prepared that the professor (note: this professor eventually became my project manager at my first job) asked me, "David, why do you seem to be on top of the material today?" I responded with what is probably one of my favorite sentences I've ever said:
"I read the chapter until it made sense."
I had ended up reading the chapter relevant to that class session about 8-10 times the weekend before, trying to figure out what the author was talking about. In that chapter, he introduced terms he never fully defined, so I had to deduce their meaning from his examples. He referenced future chapters, so I had to read sections from later in the textbook. I had to research elsewhere to fully understand much of the rest. In the end, I learned the importance of reading and comprehending, no matter the cost. I now take that attitude into every problem I approach, especially when reading documentation.
I find the ability to read documentation to be an invaluable tool when it comes to software engineering, and I wanted to share a part of my journey learning Docker as a client-facing software developer at Enigma. In particular, I want to describe a one-month period in which I learned how to deploy Docker Swarm, a container orchestration platform similar to Kubernetes. As someone who knew the basics of Docker and had some familiarity with Unix chroot, Linux namespaces, and cgroups, this really became an exercise in reading documentation.
In this specific case at Enigma, our Fortune 500 client restricted us to Docker 1.12 in production (we had more freedom with development servers). The changes in 1.12 are fairly significant: it was the first version of Docker to include swarm mode, bundled with the engine itself, meaning no extra dependencies had to be installed. Our deployment strategy for this client was implemented with server-specific shell scripts because we did not have the time to adopt a more robust orchestration tool. However, the difficulty of setting up new environments with fewer servers and the effort required to maintain these complex shell scripts became too much of a hindrance. When we realized we would need to set up a new environment soon, we decided to embark on a mission:
- Learn swarm.
- Port our deployment to swarm.
- Enjoy a (hopefully) easier life.
To learn how swarm works, I did what I generally do first: I dove deep into the documentation as a means to expose myself to the swarm vocabulary. This brings up my first strategy for reading documentation:
1. Do not be afraid to dive into the documentation. Use familiar terms to guide you and **expose yourself** to what you are trying to learn.
I began my journey by reading about Swarm's key concepts. The two main concepts I learned were:
- Nodes: This is a term for each box your swarm is running on.
- Services: This is a term for a specific container running on the swarm.
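Those two concepts map directly onto CLI commands. Below is a minimal sketch of that vocabulary in command form; the service name "web" and the image are hypothetical, and since the commands need a running Docker daemon, they are captured in a helper script rather than executed:

```shell
# Sketch: the swarm vocabulary as docker commands (hypothetical example).
cat > swarm-basics.sh <<'EOF'
#!/bin/sh
docker swarm init                 # make this machine a manager node
docker node ls                    # nodes: the boxes the swarm runs on
docker service create --name web --replicas 3 nginx:alpine
docker service ls                 # services: what is running on the swarm
EOF
chmod +x swarm-basics.sh
```

Running the script on a fresh machine would turn it into a one-node swarm and start three replicas of the hypothetical "web" service.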
There were other terms in this article that were useful, but these two terms were the ones my gut said were most important. Now I had a vocabulary to read the swarm documentation with. I wanted to know how to deploy services to nodes that make up a swarm. So instead of actually continuing to read the articles, I went to Google and started searching for variants of, "how to deploy services to nodes, docker swarm." I eventually found a page about deploying a stack with compose. I skimmed it and was immediately overwhelmed by the unfamiliar terms. However, I read it anyway and tried to make guesses about what I would need to use going forward. In this case, I discovered the command docker stack deploy and the concept of using a docker-compose file for deploying the stack (note—a stack is all the services that you are deploying from your compose file). I had no clue what a stack was in this context. From experience, I assumed it was a subcommand that was used to handle a group of services—maybe those from a compose file?
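My guess at the time looked roughly like the sketch below: a compose file describing services, handed to docker stack deploy. The service name, image, and stack name are hypothetical, and the deploy command itself needs a swarm manager, so it appears as a comment:

```shell
# Sketch: a minimal compose file for docker stack deploy (hypothetical names).
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 2
EOF
# On a manager node you would then run:
#   docker stack deploy -c docker-compose.yml mystack
```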
This brings me to the second action I do when reading documentation:
2. Guess the meaning of unfamiliar terms in documentation. Keep note and research those terms. Come back later and confirm.
After learning about the command stack deploy, I wanted to know what the subcommand stack was actually doing. After a little digging, I found the definition of the stack command. The documentation on that page is full of valuable information, including the child command deploy. From the child commands, I saw that the stack command is used for controlling a group of services, and I went back to the previous documentation to figure out how to incorporate this knowledge into our stack.
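The child commands I came to rely on can be sketched as a small cheat sheet, one line per operation on the stack as a unit (the stack name "mystack" is hypothetical, and the commands need a swarm manager, so they are collected in a script rather than run):

```shell
# Sketch: the stack child commands that treat a group of services as one unit.
cat > stack-cheatsheet.sh <<'EOF'
#!/bin/sh
docker stack deploy -c docker-compose.yml mystack  # create or update the stack
docker stack services mystack                      # list the stack's services
docker stack ps mystack                            # list the stack's tasks
docker stack rm mystack                            # remove the whole stack
EOF
```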
At this point, I went back to basics with my third reading strategy:
3. Find a tutorial. Use the files generated in that tutorial to create your actual implementation. Read the tutorial line by line and type out every line of code you see.
Among the tutorials I found, my favorite came from Docker. This tutorial teaches you how to bring up a swarm locally, though it didn't teach me how to write a docker-compose file. A docker-compose file—which I was aware of before I started this project—is a configuration YAML file that defines all the services running on your stack. It's what you use with docker stack deploy. The alternative to a docker-compose file is using shell scripts with docker service to deploy your stack, at which point you are rewriting docker stack deploy—why reinvent the wheel? Use compose files if you're sticking to a pure docker stack with no other orchestration.
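For contrast, here is a sketch of the shell-script alternative that paragraph describes; the image names and the overlay network are hypothetical. Every option that would be one line of YAML in a compose file becomes a CLI flag you have to maintain by hand:

```shell
# Sketch: deploying services with raw docker service commands instead of
# a compose file (hypothetical images and network).
cat > deploy-by-hand.sh <<'EOF'
#!/bin/sh
docker network create --driver overlay appnet
docker service create --name api --network appnet --replicas 2 myorg/api:latest
docker service create --name db  --network appnet --replicas 1 postgres:9.6
EOF
```

Multiply this by every service, flag, and environment, and you are effectively reimplementing docker stack deploy in bash.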
The tutorial used shell commands for every swarm operation it could and explained swarm vocabulary. I went back to the webpage describing a Docker stack deploy with a compose file, read it again, and this time it made sense. I mocked up a simple compose file to deploy the services created in the tutorial; now I had all my tools. I then decided to make a compose file using all the services we used for our deployment and figured out the appropriate compose file syntax for constructing it.
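The result looked something like the sketch below: multiple services joined by an overlay network. The service names and images are hypothetical, not our actual stack:

```shell
# Sketch: a multi-service compose file for a swarm stack (hypothetical names).
cat > stack-compose.yml <<'EOF'
version: "3"
services:
  api:
    image: myorg/api:latest
    networks: [appnet]
    deploy:
      replicas: 2
  db:
    image: postgres:9.6
    networks: [appnet]
networks:
  appnet:
    driver: overlay
EOF
# On a manager node: docker stack deploy -c stack-compose.yml mystack
```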
Docker provides a lot of tooling that may be hidden, though, which brings me to strategy #4:
4. Explore the website, even sections you think are unimportant. I'm personally a fan of reading release notes and mapping them back to the documentation. Everything is interesting.
I developed the swarm on Docker 17.06 out of convenience (eventually we will have to move back to Docker 1.12 because of customer requirements). My compose file initially came from the one we used for bringing up the entire stack on our personal laptops. A few requirements remained: the stack had to work in our customer's environment; config files, environment lists, and certificates had to be distributed across the network; and all the containers had to be networked together.
From reading the release notes for 17.06, I discovered Docker secrets and configs. It took a while to figure out how to integrate them into my deployment, but I finally found how to use them via the compose file spec. These features allow you to distribute files and secret keys across the swarm from your swarm manager. Before they existed, you had to find alternate ways of distributing your configuration files, such as copying the files manually to other machines and then using volume mounts, or baking the configs into the images. Imagine a deployment bash script such as: