Some knowledge of containers is helpful for fully grasping the key concepts of Kubernetes covered in this blog post. See two of my previous blog posts for some background:


In an architecture where you have multiple containers that run on multiple hosts, you need a way to manage all the containers and hosts.

Kubernetes, sometimes shortened to k8s, is an 'open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications'[1]. Collaborators on the project include Google, Microsoft, IBM, HP and Red Hat. Kubernetes is cloud platform independent and so can run almost anywhere!

The big idea of Kubernetes is to think about a data center or a cluster as one big computing resource.

Kubernetes is a way to describe and launch containers in a cluster. It monitors the state of containers and allows an administrator to increase or reduce the number of copies of a container that run in a cluster. If any containers stop because of software or hardware faults, the k8s scheduler can reschedule them on another node.

[Figure: Kubernetes overview (source: kubernetes.io)]

There are a few key concepts in Kubernetes that are important to outline.

A pod is a collection of co-located containers and is the smallest deployable unit managed by k8s. This means that creation, scheduling and management happen at the pod level. Typically, though, a pod contains only one container. If two containers must always run together on the same physical host, they can be placed in the same pod.
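
As a rough sketch, a single-container pod could be described with a manifest like the one below (the names and image are placeholders I've chosen for illustration, not something from the Kubernetes docs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # hypothetical name for illustration
  labels:
    app: web               # label used later to select this pod
spec:
  containers:
    - name: web
      image: nginx:1.9     # any container image would work here
      ports:
        - containerPort: 80
```

You could create it with `kubectl create -f web-pod.yaml`, and k8s would schedule it onto one of the Nodes in the cluster.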

Replication controllers manage the lifecycle of pods. Pods can stop for many reasons, e.g. host failure, so the purpose of a replication controller is to guarantee that a specified number of pod copies are running in the cluster at any given time.
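
A minimal sketch of a replication controller that keeps three copies of the pod above running (names are again hypothetical) might look like this:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-rc             # hypothetical name for illustration
spec:
  replicas: 3              # keep three pod copies running at all times
  selector:
    app: web               # manage any pod carrying this label
  template:                # pod template used to create replacement pods
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.9
          ports:
            - containerPort: 80
```

If a host fails and takes its pods with it, the controller notices the shortfall and creates replacement pods elsewhere in the cluster.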

A service provides a single name and address for a set of pods. It therefore acts as a load balancer for all the pods behind that service. This ties in nicely with the concept of microservices.
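
Continuing the same hypothetical example, a service that load balances across the pods labelled `app: web` could be sketched as:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical name for illustration
spec:
  selector:
    app: web               # route traffic to any pod with this label
  ports:
    - port: 80             # port the service exposes
      targetPort: 80       # port the containers listen on
```

Clients inside the cluster can then reach the pods through the single `web-service` name, regardless of which Nodes the pods end up on.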

In a Kubernetes cluster, one host (or potentially more than one in a future update) is assigned as the Master. All other hosts are called Nodes.

Nodes run one or more Pods. A Node also runs a Kubelet, a component that provides node-level pod management.

The Master node manages all other Nodes in the cluster. It has three main components:

  1. Kubernetes API server - provides the API you use to interact with the cluster.
  2. Scheduler - schedules Pods onto Nodes.
  3. Controller Manager Server - responsible for much of the heavy lifting (e.g. Node discovery and monitoring).

Kubectl is the command-line tool you can use to control the Kubernetes cluster manager. It communicates with the API server described above. Typically, this component runs on the Master (where the API server also runs). A few typical commands are sketched below.
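
For illustration, some common kubectl commands might look like this (the file and object names refer to the hypothetical manifests sketched earlier):

```
kubectl create -f web-rc.yaml           # create the replication controller from its manifest
kubectl get pods                        # list the pods running in the cluster
kubectl scale rc web-rc --replicas=5    # change how many pod copies the controller maintains
kubectl describe service web-service    # inspect a service and the pods behind it
```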

The best way I found to learn Kubernetes was to try it out for myself!
Below are some helpful resources to get started.


  1. Kubernetes GitHub https://github.com/kubernetes/kubernetes ↩︎