
Is Your Road to Kubernetes Clear?


Should you use Kubernetes? The answer, in short, depends. But let me walk you through my journey to understanding Kubernetes, and perhaps it might help you find your own answer. 

I wrote a whole book to help guide you on your Road to Kubernetes (get the early-release MEAP from Manning here). But before we talk about the book, let’s talk a bit about deploying software and my journey leading up to it. 

Nearly two decades ago, I embarked on my first endeavor to deploy software on the Internet: publishing my first website. At the time, I had no clue where to even start. A couple of family friends were kind enough to offer me some advice:

The entrepreneur suggested: 

“Pick up this book on Dreamweaver and HTML and you won’t need to spend a dime developing your first website.”

The system administrator and developer took a different approach: 

“Here, let me set up an FTP server for you to host your website; I do not recommend learning how to manage servers.”

Both of these people helped get me to where I am today, but one of them unintentionally delayed my learning for a long time. Can you guess who?

Looking back on my early days, I can see how crucial both foundational knowledge and hands-on experience were in shaping my journey. The entrepreneur’s advice pushed me to dive into web development myself, while the system administrator’s guidance taught me the value of tools that could simplify complex tasks. 

However, one key lesson that emerged over time was the importance of understanding foundational principles, even when using high-level tools. This early learning experience was the genesis of my interest in Kubernetes, a technology that would significantly impact my professional trajectory. Understanding Kubernetes, like any other tool, requires a firm grasp of the underlying technology. 

Is Kubernetes complex? Yes. Is it perfect? No. Is it right for all situations? No. But the question of whether or not you should use Kubernetes will often come down to your understanding of the underlying technology that makes Kubernetes so powerful and how you can use it to manage and protect your software.

To those of you wondering, “What is Kubernetes?” or who are not yet familiar with the term “Docker” (beyond the type of pants) and its association with containers, allow me to explain. 

Applications are constantly evolving, with each component, from database to programming language, frequently releasing new versions. Keeping track of multiple versions of each application and numerous iterations of third-party components can become a pain to manage, especially when it comes to older or legacy versions without mainstream support. Even supported versions of software have a lot of system-level and third-party installation dependencies that can add complexity to running the software, let alone trying to use it as a component of your application.

Over time, all software eventually becomes obsolete and is replaced by newer versions. The challenge lies in running older software when necessary. Even seemingly small changes in software can have a huge impact on modern applications. For example, Python 2.7 (compared to the most recent Python 3.11) used to ship built-in on Apple’s Mac OS X. Python 2.7 had a strange syntax for outputting text to the command line: `print "this thing"` instead of the more logical `print("this thing")` used in newer versions of Python. That one piece of syntax can break an entire legacy Python application because of the missing parentheses.
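If you happen to have both interpreters available on the same machine (an assumption; Python 2 no longer ships by default on modern systems), the difference is easy to see from the command line:

```bash
# Python 2 accepts the old statement form of print
python2 -c 'print "this thing"'
# this thing

# Python 3 rejects that same line outright
python3 -c 'print "this thing"'
# SyntaxError: Missing parentheses in call to 'print' (exact wording varies by version)

# Python 3 requires the function-call form
python3 -c 'print("this thing")'
# this thing
```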

While using older versions of software can be impractical, there are certainly situations in which we need to run older software. But how? 

We could spend the time finding a piece of hardware or a VM image from a specific point in time that would allow us to run an old piece of software. Or we can turn to containers, a concept popularized by Docker. Containers are self-contained applications packaged with their dependencies that we can modify as we see fit. Their unique selling point is their ability to move seamlessly from one system to another.

Here’s an example Dockerfile that uses Python 2.7 as the building block of a container:

```dockerfile
# Start from an official Python 2.7 base image
FROM python:2.7.7-slim

# Copy our local source code into the image and work from that directory
COPY ./src /app/
WORKDIR /app

# Install the application's third-party dependencies
RUN python -m pip install -r requirements.txt

# Bind to 0.0.0.0 so the app is reachable from outside the container
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

This Dockerfile tells Docker what is needed to build this new container using code that exists on our local machine under the local path `./src`. Dockerfiles can get a lot more complicated, but this example shows just how easy using Docker can be. 

To build and run this containerized application, it’s as simple as:

```bash
docker build -f Dockerfile -t hello-python:v1 .
docker run -p 8000:8000 hello-python:v1
```

Without containerization, we’d have to install Python 2.7 directly on a machine, which is almost never straightforward. Docker and other containers can make our applications portable, and you can replace Python 2.7 in this example with almost any open-source language or tool.

However, the problem arises when we want to update a containerized application, especially in production. Locally, updating a container is simple: stop the running container, rebuild the image, and run it again. In production, updating a container can be done the same way, but we run the risk of major downtime if the build fails.
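A rough sketch of that local workflow, reusing the image tag from the earlier example (the `<container-id>` placeholder comes from `docker ps`):

```bash
# Find the running container's ID, then stop and remove it
docker ps
docker stop <container-id> && docker rm <container-id>

# Rebuild the image under a new tag
docker build -f Dockerfile -t hello-python:v2 .

# Run the updated image
docker run -p 8000:8000 hello-python:v2
```

Doing those same steps by hand against a live production server is exactly where the downtime risk creeps in.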

That’s where Kubernetes comes in. Kubernetes helps manage traffic routing to specific containers and oversees the number of containers running at any given time. If a container is failing, Kubernetes facilitates easy rollback to previous versions with minimal or no downtime whatsoever.

The configuration for deploying a container on Kubernetes is called a manifest. Here’s an example of a relatively simple manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-py-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-py-deploy
  template:
    metadata:
      labels:
        app: hello-py-deploy
    spec:
      containers:
        - name: hello-py-container
          image: jmitchel3/hello-python:v1
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-py-service
spec:
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP
  selector:
    app: hello-py-deploy
```

In the example above, the YAML-formatted Kubernetes manifest provisions two resources: a Deployment and a load-balancing Service. The load balancer routes traffic to our deployment, and `replicas: 3` tells Kubernetes to run three copies of our declared container image `jmitchel3/hello-python:v1`.
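Assuming the manifest above is saved to a file such as `deployment.yaml` (the filename is just an example), creating and inspecting these resources might look like this:

```bash
# Create (or update) everything defined in the manifest
kubectl apply -f deployment.yaml

# Confirm the three replicas are running
kubectl get deployments,pods

# Find the external IP the load balancer was assigned
kubectl get service hello-py-service
```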

Now, when we want to update the deployed version, we can just change `hello-python:v1` to `hello-python:v2` and Kubernetes will gracefully update our application. And if something goes wrong, it can roll back to `hello-python:v1`. Kubernetes makes this process painless and easy to manage. Manifests can easily be version-controlled with git, so we can get very granular with our rollback capabilities. Kubernetes is integral to deployment because it provides a framework for automating, scaling, and managing containerized applications, ensuring resilience and efficiency in our increasingly complex software landscapes.
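As a sketch, that update-and-rollback flow can also be driven directly with kubectl, using the deployment and container names from the manifest above:

```bash
# Point the deployment at the new image tag (equivalent to editing the manifest and re-applying it)
kubectl set image deployment/hello-py-deploy hello-py-container=jmitchel3/hello-python:v2

# Watch the rollout progress
kubectl rollout status deployment/hello-py-deploy

# If something goes wrong, roll back to the previous revision
kubectl rollout undo deployment/hello-py-deploy
```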

I wanted to go over just some of the methods that led me to discover how Kubernetes could simplify my application deployments. Getting the most from Kubernetes took several steps for me, but it was a worthwhile journey. In Road to Kubernetes, we’ll deploy applications using a variety of technologies that are important to understand before jumping into Kubernetes and other modern deployment practices.

In Road to Kubernetes we’ll walk through how to:

  • Manage git repositories on self-hosted or cloud platforms
  • Deploy Python and Node.js apps via cloud-based VMs with git
  • Automate VM configuration and deployment with Ansible
  • Containerize and deploy apps with Docker and Docker Compose
  • Run containers directly on VMs without orchestration
  • Push and host containers with the Docker Hub registry
  • Deploy containerized apps on Kubernetes
  • Implement public and private apps on Kubernetes
  • Configure load balancers for HTTP & HTTPS traffic
  • Use CI/CD techniques with GitHub Actions and the open-source alternative act by nektos
  • and more!

Deployment is the ultimate test of your software. Road to Kubernetes condenses fifteen years of deployment lessons into one accessible and practical guide. It takes you from deploying software from scratch all the way up to harnessing the power of Kubernetes. You’ll learn sustainable deployment practices you can use with any language and any kind of web app, create portable applications that can move across deployment options and cloud providers, and see that Kubernetes can work for projects of any size.

To start your road to understanding Kubernetes, get the early-release MEAP from Manning here.
