Kubernetes


Kubernetes is an open source container orchestration platform that automates deployment, management, and scaling of applications. Learn how Kubernetes enables cost-effective cloud-native development.

What is Kubernetes?

Kubernetes — also known as “k8s” or “kube” — is a container orchestration platform for scheduling and automating the deployment, management, and scaling of containerized applications.

Kubernetes was first developed by engineers at Google before being open sourced in 2014. It is a descendant of Borg, a container orchestration platform used internally at Google. Kubernetes is Greek for helmsman or pilot, hence the helm in the Kubernetes logo (link resides outside IBM).

Today, Kubernetes and the broader container ecosystem are maturing into a general-purpose computing platform and ecosystem that rivals — if not surpasses — virtual machines (VMs) as the basic building blocks of modern cloud infrastructure and applications. This ecosystem enables organizations to deliver a high-productivity Platform-as-a-Service (PaaS) that addresses multiple infrastructure-related and operations-related tasks and issues surrounding cloud-native development so that development teams can focus solely on coding and innovation.     

In the following video, Sai Vennam explains the basics of Kubernetes (10:59):

What are containers?

Let’s start with a definition: a container is an executable unit of software in which application code is packaged, together with its libraries and dependencies, in standard ways so that it can run anywhere, whether on the desktop, in traditional IT, or in the cloud.

Containers take advantage of a form of operating system (OS) virtualization that lets multiple applications share the OS by isolating processes and controlling the amount of CPU, memory, and disk those processes can access.
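
To make that resource control concrete, here is a minimal sketch using the Docker SDK for Python (the docker package; an assumed tool, since this section names no specific one). It launches a throwaway container with hard CPU and memory limits that the OS kernel enforces:

```python
# A minimal sketch of OS-level virtualization, assuming the Docker SDK for
# Python ("pip install docker") and a running Docker engine.
import docker

client = docker.from_env()

# Run a container with hard limits on memory and CPU; the kernel isolates
# the container's processes and enforces these caps.
output = client.containers.run(
    "alpine",                  # a small Linux image, used here for illustration
    "echo hello from an isolated process",
    mem_limit="256m",          # cap memory at 256 MB
    nano_cpus=500_000_000,     # cap CPU at half of one core
    remove=True,               # delete the container once it exits
)
print(output.decode())
```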

Containers vs. virtual machines vs. traditional infrastructure

It may be easiest to understand containers as the latest point on the continuum of IT infrastructure automation and abstraction.

In traditional infrastructure, applications run on a physical server and grab all the resources they can get. This leaves you a choice: run multiple applications on a single server and hope one doesn’t hog resources at the expense of the others, or dedicate one server per application, which wastes resources and doesn’t scale.

Virtual machines (VMs) are servers abstracted from the actual computer hardware, enabling you to run multiple VMs on one physical server or a single VM that spans more than one physical server. Each VM runs its own OS instance, and you can isolate each application in its own VM, reducing the chance that applications running on the same underlying physical hardware will impact each other. VMs make better use of resources and are much easier and more cost-effective to scale than traditional infrastructure. And, they’re disposable — when you no longer need to run the application, you take down the VM.

For more information on VMs, see "Virtual Machines: An Essential Guide."

Containers take this abstraction to a higher level — specifically, in addition to sharing the underlying virtualized hardware, they share an underlying, virtualized OS kernel as well. Containers offer the same isolation, scalability, and disposability as VMs, but because they don’t carry the payload of their own OS instance, they’re lighter weight (that is, they take up less space) than VMs. They’re more resource-efficient — they let you run more applications on fewer machines (virtual and physical), with fewer OS instances. Containers are more easily portable across desktop, data center, and cloud environments. And they’re an excellent fit for Agile and DevOps development practices.

"Containers: An Essential Guide" gives a complete explanation of containers and containerization. And the blog post "Containers vs. VMs: What's the difference?" gives a full rundown of the differences.

What is Docker?

Docker is the most popular tool for creating and running Linux® containers. While early forms of containers were introduced decades ago (with technologies such as FreeBSD Jails and AIX Workload Partitions), containers were democratized in 2013 when Docker brought them to the masses with a new developer-friendly and cloud-friendly implementation.

Docker began as an open source project, but today it also refers to Docker Inc., the company that produces Docker — a commercial container toolkit that builds on the open source project (and contributes those improvements back to the open source community).

Docker was built on traditional Linux container (LXC) technology, but enables more granular virtualization of Linux kernel processes and adds features to make containers easier for developers to build, deploy, manage, and secure.
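
As an illustration of that developer-friendliness, the following hedged sketch uses the Docker SDK for Python to build an image from a Dockerfile and run it; the ./app path and myapp tag are hypothetical:

```python
# A hedged sketch of Docker's build-and-run workflow, assuming the Docker
# SDK for Python and a Dockerfile in the hypothetical ./app directory.
import docker

client = docker.from_env()

# Build an image from the Dockerfile; Docker caches unchanged layers, so
# repeat builds are fast.
image, build_logs = client.images.build(path="./app", tag="myapp:1.0")

# Run the freshly built image as a background container.
container = client.containers.run("myapp:1.0", detach=True)
print(container.short_id, container.status)
```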

While alternative container tools exist today (such as CoreOS rkt and Canonical (Ubuntu) LXD, most of which conform to Open Container Initiative (OCI) standards), Docker is so widely preferred that it is virtually synonymous with containers and is sometimes mistaken as a competitor to complementary technologies such as Kubernetes (see the video “Kubernetes vs. Docker: It’s Not an Either/Or Question” further below).

Container orchestration with Kubernetes

As containers proliferated — today, an organization might have hundreds or thousands of them — operations teams needed to schedule and automate container deployment, networking, scalability, and availability. And so, the container orchestration market was born.

While other container orchestration options — most notably Docker Swarm and Apache Mesos — gained some traction early on, Kubernetes quickly became the most widely adopted (in fact, at one point, it was the fastest-growing project in the history of open source software).

Developers chose (and continue to choose) Kubernetes for its breadth of functionality, its vast and growing ecosystem of open source supporting tools, and its support and portability across the leading cloud providers (some of whom now offer fully managed Kubernetes services).

For more information on container orchestration, see the video “Container Orchestration Explained” (08:59):

What does Kubernetes do?

Kubernetes schedules and automates these and other container-related tasks:

  • Deployment: Deploy a specified number of containers to a specified host and keep them running in a desired state (see the sketch after this list).
  • Rollouts: A rollout is a change to a deployment. Kubernetes lets you initiate, pause, resume, or roll back rollouts.
  • Service discovery: Kubernetes can automatically expose a container to the internet or to other containers using a DNS name or IP address.
  • Storage provisioning: Set Kubernetes to mount persistent local or cloud storage for your containers as needed.
  • Load balancing and scaling: When traffic to a container spikes, Kubernetes can employ load balancing and scaling to distribute it across the network to maintain stability.
  • Self-healing for high availability: When a container fails, Kubernetes can restart or replace it automatically; it can also take down containers that don’t meet your health-check requirements.
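
A minimal sketch of declaring such a desired state, using the official Kubernetes Python client (an illustrative choice; the demo-deployment name, labels, and nginx image are hypothetical):

```python
# A hedged sketch: ask Kubernetes to keep three replicas of a container
# running. Assumes the official "kubernetes" Python client and a kubeconfig
# that points at a cluster; all names are illustrative.
from kubernetes import client, config

config.load_kube_config()          # read credentials from ~/.kube/config
apps_v1 = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="demo-deployment"),
    spec=client.V1DeploymentSpec(
        replicas=3,                # desired state: three running pods
        selector=client.V1LabelSelector(match_labels={"app": "demo"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "demo"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.21")]
            ),
        ),
    ),
)

# Kubernetes schedules the pods onto nodes and restarts them if they fail.
apps_v1.create_namespaced_deployment(namespace="default", body=deployment)
```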

Kubernetes vs. Docker

If you’ve read this far, you already understand that while Kubernetes is an alternative to Docker Swarm, it is not (contrary to persistent popular misconception) an alternative or competitor to Docker itself.

In fact, if you’ve enthusiastically adopted Docker and are creating large-scale Docker-based container deployments, Kubernetes orchestration is a logical next step for managing these workloads. To learn more, watch “Kubernetes vs. Docker: It’s Not an Either/Or Question” (08:03):

Kubernetes architecture

The chief components of Kubernetes architecture include the following:

Clusters and nodes (compute)

Clusters are the building blocks of Kubernetes architecture. Each cluster is made up of nodes, each of which represents a single compute host (a virtual or physical machine).

Each cluster consists of multiple worker nodes that deploy, run, and manage containerized applications and one master node that controls and monitors the worker nodes.

The master node runs a scheduler service that automates when and where the containers are deployed based on developer-set deployment requirements and available computing capacity. Each worker node includes the tool that is being used to manage the containers — such as Docker — and a software agent called a Kubelet that receives and executes orders from the master node.
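
As a hedged illustration of this division of labor, the official Kubernetes Python client (assuming a configured kubeconfig) can list a cluster’s nodes and show where the scheduler placed each pod:

```python
# A minimal sketch of inspecting the scheduler's decisions, assuming the
# official "kubernetes" Python client and a configured kubeconfig.
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

# Each node is a compute host whose kubelet reports back to the master.
for node in core_v1.list_node().items:
    print("node:", node.metadata.name)

# The scheduler chose spec.node_name for every pod it placed.
for pod in core_v1.list_namespaced_pod(namespace="default").items:
    print(f"pod {pod.metadata.name} runs on {pod.spec.node_name}")
```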

For a deeper dive on Kubernetes clusters, check out this blog post: “Kubernetes Clusters: Architecture for Rapid, Controlled Cloud App Delivery.”

Pods and deployments (software)

Pods are groups of containers that share the same compute resources and the same network. They are also the unit of scalability in Kubernetes: if a container in a pod is getting more traffic than it can handle, Kubernetes will replicate the pod to other nodes in the cluster. For this reason, it’s a good practice to keep pods compact so that they contain only containers that must share resources.

A deployment controls the creation and state of the containerized application and keeps it running. It specifies how many replicas of a pod should run on the cluster. If a pod fails, the deployment creates a new one.
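
For example, scaling the hypothetical deployment from the earlier sketch is a one-line patch to its desired replica count (again using the official Kubernetes Python client as an illustrative choice):

```python
# A minimal sketch of the replica guarantee, assuming the "kubernetes"
# Python client and the hypothetical "demo-deployment" created earlier.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

# Patch only the desired replica count; Kubernetes creates or removes pods
# until the observed state matches it. If any pod later dies, the
# deployment replaces it automatically.
apps_v1.patch_namespaced_deployment(
    name="demo-deployment",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```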

For more on Kubernetes deployments, watch “Kubernetes Deployments: Get Started Fast” (03:54):

For a more detailed understanding of the elements of Kubernetes architecture, try this self-paced online course: “Kubernetes 101”.

You can also take a deeper dive with the blog post "Kubernetes Architecture: Four Approaches to Container Solutions."

Istio service mesh

Kubernetes can deploy and scale pods, but it can’t manage or automate routing between them, and it doesn’t provide any tools to monitor, secure, or debug these connections. As the number of containers in a cluster grows, the number of possible connection paths between them escalates quadratically (n pods have n × (n − 1) potential directed connections, so two have 2 but 10 have 90), creating a potential configuration and management nightmare.

Enter Istio, an open source service mesh layer for Kubernetes clusters. To each pod in a Kubernetes cluster, Istio adds a sidecar container — essentially invisible to the programmer and the administrator — that configures, monitors, and manages interactions between the other containers.

With Istio, you set a single policy that configures connections between containers so that you don’t have to configure each connection individually. This makes connections between containers easier to debug.
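
As a hedged sketch of what such a policy can look like, the following uses the Kubernetes Python client to create an Istio VirtualService that splits traffic between two versions of a service (the reviews name and the v1/v2 subsets are hypothetical and would be defined in an Istio DestinationRule):

```python
# A hedged sketch: one Istio VirtualService policy routes 90% of traffic to
# v1 of a service and 10% to v2. Assumes Istio is installed in the cluster
# and the "kubernetes" Python client is configured; names are illustrative.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews-route"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ]
        }],
    },
}

# The sidecars enforce this single policy on every connection to "reviews".
custom.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="default",
    plural="virtualservices",
    body=virtual_service,
)
```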

Istio also provides a dashboard that DevOps teams and administrators can use to monitor latency, time-in-service errors, and other characteristics of the connections between containers. And, it builds in security — specifically, identity management that keeps unauthorized users from spoofing a service call between containers — and authentication, authorization and auditing (AAA) capabilities that security professionals can use to monitor the cluster.

See the article “What is Istio?” for more detail, including video and some examples of Istio in use.

Knative and serverless computing

Knative (pronounced ‘kay-native’) is an open source platform that sits on top of Kubernetes and provides two important classes of benefits for cloud-native development:

Knative provides an easy onramp to serverless computing

Serverless computing is a relatively new way of deploying code that makes cloud-native applications more efficient and cost-effective. Instead of deploying an ongoing instance of code that sits idle while waiting for requests, serverless brings up the code as needed — scaling it up or down as demand fluctuates — and then takes the code down when not in use. Serverless prevents wasted computing capacity and power and reduces costs because you pay to run the code only when it’s actually running.

Knative enables developers to build a container once and run it as a software service or as a serverless function. It’s all transparent to the developer: Knative handles the details in the background, and the developer can focus on code.

Knative simplifies container development and orchestration

For developers, containerizing code requires lots of repetitive steps, and orchestrating containers requires lots of configuration and scripting (such as generating configuration files, installing dependencies, managing logging and tracing, and writing continuous integration/continuous deployment (CI/CD) scripts).

Knative makes these tasks easier by automating them through three components:

  • Build: Knative’s Build component automatically transforms source code into a cloud-native container or function. Specifically, it pulls the code from the repository, installs the required dependencies, builds the container image, and puts it in a container registry for other developers to use. Developers need to specify the location of these components so that Knative can find them, but once that’s done, Knative automates the build.
  • Serve: The Serve component runs containers as scalable services; it can scale up to thousands of container instances or scale down to none (called scaling to zero; a sketch of deploying such a service follows this list). In addition, Serve has two very useful features:
    • Configuration, which saves versions of a container (called snapshots) every time you push the container to production and lets you run those versions concurrently.
    • Service routing, which lets you direct different amounts of traffic to these versions. You can use these features together to gradually phase a container rollout or to stage a canary test of a containerized application before putting it into global production.
  • Event: Event enables specified events to trigger container-based services or functions. This is especially integral to Knative’s serverless capabilities; something needs to tell the system to bring up a function when needed. Event allows teams to express interest in types of events, and it then automatically connects to the event producer and routes the events to the container, eliminating the need to program these connections.
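
To make Serve concrete, here is a hedged sketch that deploys a scale-to-zero Knative Service with the Kubernetes Python client (an illustrative choice; the hello name and image URL are hypothetical, and Knative Serving must already be installed in the cluster):

```python
# A hedged sketch of Knative Serving's "build once, scale to zero" model: a
# Knative Service wraps a container image, and Knative adds or removes
# instances with demand. Assumes the "kubernetes" Python client; names and
# the image URL are illustrative.
from kubernetes import client, config

config.load_kube_config()
custom = client.CustomObjectsApi()

knative_service = {
    "apiVersion": "serving.knative.dev/v1",
    "kind": "Service",
    "metadata": {"name": "hello"},
    "spec": {
        "template": {
            "spec": {
                "containers": [{"image": "example.registry.io/hello:1.0"}]
            }
        }
    },
}

# Knative routes requests to the service, adds instances under load, and
# removes all of them (scale to zero) when traffic stops.
custom.create_namespaced_custom_object(
    group="serving.knative.dev",
    version="v1",
    namespace="default",
    plural="services",
    body=knative_service,
)
```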

You can learn more about Knative by reading "Knative: An Essential Guide."

Kubernetes GitHub commits and more evidence of surging popularity

Kubernetes is one of the fastest-growing open source projects in history, and growth is accelerating. Adoption continues to soar among developers and the companies that employ them. A few data points worth noting:

  • At this writing, over 86,200 commits have been made to the Kubernetes repository on GitHub (link resides outside IBM) — including nearly 6,000 commits in the last four months — and there are more than 2,300 active contributors to the project. According to the Cloud Native Computing Foundation (link resides outside IBM), there have been more than 148,000 commits across all Kubernetes-related repositories (including Kubernetes Dashboard and Kubernetes MiniKube).
  • More than 1,500 companies use Kubernetes in their production software stacks. These include well-known enterprises such as Airbnb, Bose, Capital One, Intuit, Nordstrom, Philips, Reddit, Slack, Spotify, Tinder, and, of course, IBM. Read these and other adoption case studies (link resides outside IBM).
  • A July 2019 survey cited in Container Journal (link resides outside IBM) found a 51% increase in Kubernetes adoption during the previous six months.
  • More than 12,000 people attended the KubeCon + CloudNativeCon North America 2019 (link resides outside IBM) conference — up more than 3,000 from the previous year’s record-setting attendance.
  • According to ZipRecruiter (link resides outside IBM), the average annual salary (in North America) for a Kubernetes-related job is USD 144,628. At this writing, more than 21,000 Kubernetes-related positions are listed on LinkedIn (link resides outside IBM).

Kubernetes tutorials

If you're ready to start working with Kubernetes, or want to build your skills with Kubernetes and its ecosystem tools, try one of these tutorials:

Kubernetes and IBM Cloud

A managed container orchestration solution, IBM Cloud® Kubernetes Service automates deployment, operation, scaling, and monitoring of containerized apps in a cluster of compute hosts, while adding in IBM-specific capabilities. It enables the rapid delivery of applications and can bind to advanced services like blockchain and IBM Watson®.

For an overview of how a managed Kubernetes service can help you on your cloud journey, watch our video, "Advantages of Managed Kubernetes" (03:14):

Red Hat® OpenShift® on IBM Cloud is a comprehensive service that offers fully managed OpenShift clusters on the IBM Cloud platform. (OpenShift is an enterprise Kubernetes platform running on Red Hat Enterprise Linux.)

Read more about OpenShift in the new Forrester Wave: Multicloud Container Development Platforms report (PDF, 415 KB).

To get started, sign up for an IBMid and create your IBM Cloud account.