Kubernetes

Kubernetes is an open source container orchestration platform that automates the deployment, management, and scaling of applications. Learn how Kubernetes enables cost-effective cloud-native development.

What is Kubernetes?

Kubernetes — also known as “k8s” or “kube” — is a container orchestration platform for scheduling and automating the deployment, management, and scaling of containerized applications.

Kubernetes was first developed by engineers at Google before being open sourced in 2014. It is a descendant of Borg, a container orchestration platform used internally at Google. Kubernetes is Greek for helmsman or pilot, hence the helm in the Kubernetes logo (link resides outside IBM).

Today, Kubernetes and the broader container ecosystem are maturing into a general-purpose computing platform and ecosystem that rivals — if not surpasses — virtual machines (VMs) as the basic building blocks of modern cloud infrastructure and applications. This ecosystem enables organizations to deliver a high-productivity Platform-as-a-Service (PaaS) that addresses multiple infrastructure-related and operations-related tasks and issues surrounding cloud-native development so that development teams can focus solely on coding and innovation.     

The following video (10:59) provides a great introduction to Kubernetes basics.

What are containers?

Containers are lightweight, executable application components that combine application source code with all the operating system (OS) libraries and dependencies required to run the code in any environment.

Containers take advantage of a form of OS virtualization that lets multiple applications share a single instance of an OS by isolating processes and controlling the amount of CPU, memory, and disk those processes can access. Because they are smaller, more resource-efficient and more portable than virtual machines (VMs), containers have become the de facto compute units of modern cloud-native applications.
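To make that resource control concrete, here is a minimal sketch of how per-container CPU and memory controls are expressed declaratively in Kubernetes (covered in depth below); the pod name and image are hypothetical choices for illustration:

```yaml
# A minimal pod specification showing per-container resource controls.
apiVersion: v1
kind: Pod
metadata:
  name: web-app            # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image would work here
      resources:
        requests:          # the scheduler reserves at least this much
          cpu: "250m"      # 0.25 of a CPU core
          memory: "64Mi"
        limits:            # the kernel enforces this ceiling
          cpu: "500m"
          memory: "128Mi"
```

The requests tell the scheduler how much capacity to reserve when placing the container on a host, while the limits are enforced at the kernel level as the process runs.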

In a recent IBM study (PDF, 1.4 MB), users reported several specific technical and business benefits resulting from their adoption of containers and related technologies.

Download the full report, Containers in the enterprise (PDF, 1.4 MB)

Containers vs. virtual machines vs. traditional infrastructure

It may be easier or more helpful to understand containers as the latest point on the continuum of IT infrastructure automation and abstraction.

In traditional infrastructure, applications run on a physical server and grab all the resources they can get. This leaves you the choice of running multiple applications on a single server and hoping one doesn’t hog resources at the expense of the others, or dedicating one server per application, which wastes resources and doesn’t scale.

Virtual machines (VMs) are servers abstracted from the actual computer hardware, enabling you to run multiple VMs on one physical server or a single VM that spans more than one physical server. Each VM runs its own OS instance, and you can isolate each application in its own VM, reducing the chance that applications running on the same underlying physical hardware will impact each other. VMs make better use of resources and are much easier and more cost-effective to scale than traditional infrastructure. And, they’re disposable — when you no longer need to run the application, you take down the VM.

For more information on VMs, see "Virtual Machines: An Essential Guide."

Containers take this abstraction to a higher level — specifically, in addition to sharing the underlying virtualized hardware, they share an underlying, virtualized OS kernel as well. Containers offer the same isolation, scalability, and disposability as VMs, but because they don’t carry the payload of their own OS instance, they’re lighter weight (that is, they take up less space) than VMs. They’re more resource-efficient — they let you run more applications on fewer machines (virtual and physical), with fewer OS instances. Containers are more easily portable across desktop, data center, and cloud environments. And they’re an excellent fit for Agile and DevOps development practices.

"Containers: An Essential Guide" gives a complete explanation of containers and containerization. And the blog post "Containers vs. VMs: What's the difference?" gives a full rundown of the differences.

What is Docker?

Docker is the most popular tool for creating and running Linux® containers. While early forms of containers were introduced decades ago (with technologies such as FreeBSD Jails and AIX Workload Partitions), containers were democratized in 2013 when Docker brought them to the masses with a new developer-friendly and cloud-friendly implementation.

Docker began as an open source project, but today it also refers to Docker Inc., the company that produces Docker — a commercial container toolkit that builds on the open source project (and contributes those improvements back to the open source community).

Docker was built on traditional Linux container (LXC) technology, but enables more granular virtualization of Linux kernel processes and adds features to make containers easier for developers to build, deploy, manage, and secure.

While alternative container platforms exist today (such as CoreOS rkt and Canonical (Ubuntu) LXD, along with runtimes built on Open Container Initiative (OCI) standards), Docker is so widely preferred that it is virtually synonymous with containers and is sometimes mistaken as a competitor to complementary technologies such as Kubernetes (see the video “Kubernetes vs. Docker: It’s Not an Either/Or Question” further below).

Container orchestration with Kubernetes

As containers proliferated — today, an organization might have hundreds or thousands of them — operations teams needed to schedule and automate container deployment, networking, scalability, and availability. And so, the container orchestration market was born.

While other container orchestration options — most notably Docker Swarm and Apache Mesos — gained some traction early on, Kubernetes quickly became the most widely adopted (in fact, at one point, it was the fastest-growing project in the history of open source software).

Developers chose and continue to choose Kubernetes for its breadth of functionality, its vast and growing ecosystem of open source supporting tools, and its support and portability across cloud service providers. All leading public cloud providers — including Amazon Web Services (AWS), Google Cloud, IBM Cloud and Microsoft Azure — offer fully managed Kubernetes services.

For more info on container orchestration, see the video “Container Orchestration Explained” (08:59):

What does Kubernetes do?

Kubernetes schedules and automates container-related tasks throughout the application lifecycle, including:

  • Deployment: Deploy a specified number of containers to a specified host and keep them running in a desired state (see the sketch after this list).
  • Rollouts: A rollout is a change to a deployment. Kubernetes lets you initiate, pause, resume, or roll back rollouts.
  • Service discovery: Kubernetes can automatically expose a container to the internet or to other containers using a DNS name or IP address.
  • Storage provisioning: Set Kubernetes to mount persistent local or cloud storage for your containers as needed.
  • Load balancing: Based on CPU utilization or custom metrics, Kubernetes load balancing can distribute the workload across the network to maintain performance and stability. 
  • Autoscaling: When traffic spikes, Kubernetes autoscaling can spin up new containers as needed to handle the additional workload.
  • Self-healing for high availability: When a container fails, Kubernetes can restart or replace it automatically to prevent downtime. It can also take down containers that don’t meet your health-check requirements.
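To ground several of these tasks at once, here is a minimal sketch (with hypothetical names) of a Deployment that keeps three replicas of a container running and restarts any that fail a health check, paired with a Service that gives those replicas a stable DNS name and load-balances across them:

```yaml
# Deployment: declares the desired state (three healthy replicas).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                    # desired state: three copies
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
          livenessProbe:         # health check used for self-healing
            httpGet:
              path: /
              port: 80
            periodSeconds: 10
---
# Service: stable in-cluster DNS name that load-balances across the pods.
apiVersion: v1
kind: Service
metadata:
  name: web-app                  # reachable in-cluster as "web-app"
spec:
  selector:
    app: web-app                 # routes to any pod with this label
  ports:
    - port: 80
      targetPort: 80
```

Applying this file (for example, with kubectl apply -f) declares the desired state; Kubernetes then works continuously to keep the cluster’s actual state in line with it.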

Kubernetes vs. Docker

If you’ve read this far, you already understand that while Kubernetes is an alternative to Docker Swarm, it is not (contrary to persistent popular misconception) an alternative or competitor to Docker itself.

In fact, if you’ve enthusiastically adopted Docker and are creating large-scale Docker-based container deployments, Kubernetes orchestration is a logical next step for managing these workloads. To learn more, watch “Kubernetes vs. Docker: It’s Not an Either/Or Question” (08:03):

Kubernetes architecture

The chief components of Kubernetes architecture include the following:

Clusters and nodes (compute)

Clusters are the building blocks of Kubernetes architecture. Clusters are made up of nodes, each of which represents a single compute host (virtual or physical machine).

Each cluster consists of a master node that serves as the control plane for the cluster, and multiple worker nodes that deploy, run, and manage containerized applications. The master node runs a scheduler service that automates when and where the containers are deployed based on developer-set deployment requirements and available computing capacity. Each worker node includes the tool that is being used to manage the containers — such as Docker — and a software agent called a kubelet that receives and executes orders from the master node.

Developers manage cluster operations using kubectl, a command-line interface (CLI) that communicates directly with the Kubernetes API.

For a deeper dive into Kubernetes clusters, check out this blog post: “Kubernetes Clusters: Architecture for Rapid, Controlled Cloud App Delivery.”

Pods and deployments (software)

Pods are groups of containers that share the same compute resources and the same network. They are also the unit of scalability in Kubernetes: if a container in a pod is getting more traffic than it can handle, Kubernetes will replicate the pod to other nodes in the cluster. For this reason, it’s a good practice to keep pods compact so that they contain only containers that must share resources.
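As a sketch of what “containers that must share resources” looks like in practice, the hypothetical pod below runs an application container alongside a logging sidecar; the two containers share a scratch volume and the pod’s network namespace:

```yaml
# A two-container pod: the app writes logs to a shared volume,
# and a sidecar reads them. Both containers also share the pod's
# network namespace, so they could talk over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar     # hypothetical name
spec:
  volumes:
    - name: logs
      emptyDir: {}               # scratch space shared within the pod
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date >> /logs/app.log; sleep 5; done"]
      volumeMounts:
        - name: logs
          mountPath: /logs
    - name: log-reader
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /logs
```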

The deployment controls the creation and state of the containerized application and keeps it running. It specifies how many replicas of a pod should run on the cluster. If a pod fails, the deployment will create a new one.

For more on Kubernetes deployments, watch “Kubernetes Deployments: Get Started Fast” (03:54):

For a more detailed understanding of the elements of Kubernetes architecture, try this self-paced online course: “Kubernetes 101”.

You can also take a deeper dive with the blog post "Kubernetes Architecture: Four Approaches to Container Solutions."

Istio service mesh

Kubernetes can deploy and scale pods, but it can’t manage or automate routing between them and doesn’t provide any tools to monitor, secure, or debug these connections. As the number of containers in a cluster grows, the number of possible connection paths between them escalates quadratically (n services have n × (n − 1) potential directed connections, so two containers have two, but 10 have 90), creating a potential configuration and management nightmare.

Enter Istio, an open source service mesh layer for Kubernetes clusters. To each pod in a Kubernetes cluster, Istio adds a sidecar container — essentially invisible to the programmer and the administrator — that configures, monitors, and manages interactions between the other containers.

With Istio, you set a single policy that configures connections between containers so that you don’t have to configure each connection individually. This makes connections between containers easier to debug.
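For example, a single routing policy might look like the following sketch, which assumes Istio’s VirtualService resource and uses hypothetical service and subset names; it sends 90% of traffic to the stable version of a service and 10% to a canary without configuring any individual connection:

```yaml
# One Istio routing policy governing all callers of "web-app".
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-app
spec:
  hosts:
    - web-app                    # the in-cluster service name
  http:
    - route:
        - destination:
            host: web-app
            subset: stable       # subsets are defined in a DestinationRule
          weight: 90
        - destination:
            host: web-app
            subset: canary
          weight: 10
```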

Istio also provides a dashboard that DevOps teams and administrators can use to monitor latency, time-in-service errors, and other characteristics of the connections between containers. And, it builds in security — specifically, identity management that keeps unauthorized users from spoofing a service call between containers — and authentication, authorization and auditing (AAA) capabilities that security professionals can use to monitor the cluster.
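As a sketch of that built-in security, the following Istio PeerAuthentication policy (the namespace is hypothetical) requires mutual TLS for every service-to-service call in a namespace, so an unauthenticated workload cannot spoof a call to another service:

```yaml
# Require mutual TLS for all workloads in the "production" namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: production          # hypothetical namespace
spec:
  mtls:
    mode: STRICT                 # reject plaintext, unauthenticated traffic
```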

See the article “What is Istio?” for more detail, including video and some examples of Istio in use.

Knative and serverless computing

Knative (pronounced ‘kay-native’) is an open source platform that sits on top of Kubernetes and provides two important classes of benefits for cloud-native development:

Knative provides an easy onramp to serverless computing

Serverless computing is a relatively new way of deploying code that makes cloud-native applications more efficient and cost-effective. Instead of deploying an ongoing instance of code that sits idle while waiting for requests, serverless brings up the code as needed — scaling it up or down as demand fluctuates — and then takes the code down when not in use. Serverless prevents wasted computing capacity and power and reduces costs because you only pay to run the code when it’s actually running.

Knative enables developers to build a container once and run it as a software service or as a serverless function. It’s all transparent to the developer: Knative handles the details in the background, and the developer can focus on code.
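A minimal sketch of what this looks like (the service name is hypothetical; the image is Knative’s public sample): a single Knative Service object stands in for the separate Deployment, Service, and autoscaler objects you would otherwise manage, and by default it scales to zero when idle:

```yaml
# A Knative Service: one object that deploys, exposes, and autoscales
# a container, scaling down to zero between requests by default.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello                    # hypothetical name
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go   # Knative sample image
          env:
            - name: TARGET
              value: "Kubernetes"
```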

Knative simplifies container development and orchestration

For developers, containerizing code requires lots of repetitive steps, and orchestrating containers requires lots of configuration and scripting (such as generating configuration files, installing dependencies, managing logging and tracing, and writing continuous integration/continuous deployment (CI/CD) scripts).

Knative makes these tasks easier by automating them through three components:

  • Build: Knative’s Build component automatically transforms source code into a cloud-native container or function. Specifically, it pulls the code from the repository, installs the required dependencies, builds the container image, and puts it in a container registry for other developers to use. Developers need to specify the location of these components so Knative can find them, but once that’s done, Knative automates the build.
  • Serve: The Serve component runs containers as scalable services; it can scale up to thousands of container instances or scale down to none (called scaling to zero). In addition, Serve has two very useful features: configuration, which saves versions of a container (called snapshots) every time you push the container to production and lets you run those versions concurrently; and service routing, which lets you direct different amounts of traffic to these versions (see the sketch after this list). You can use these features together to gradually phase in a container rollout or to stage a canary test of a containerized application before putting it into global production.
  • Event: Event enables specified events to trigger container-based services or functions. This is especially integral to Knative’s serverless capabilities; something needs to tell the system to bring up a function when needed. Event allows teams to express interest in types of events, and it then automatically connects to the event producer and routes the events to the container, eliminating the need to program these connections.
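As a sketch of Serve’s configuration and service routing features, the traffic block below splits requests between two saved revisions to stage a canary test; the revision names follow Knative’s default numbering scheme and are hypothetical here:

```yaml
# Splitting traffic between two revisions of the same Knative Service.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
  traffic:
    - revisionName: hello-00001  # the current production revision
      percent: 90
    - revisionName: hello-00002  # the canary revision
      percent: 10
```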

You can learn more about Knative by reading "Knative: An Essential Guide."

Kubernetes GitHub commits and more evidence of surging popularity

Kubernetes is one of the fastest-growing open source projects in history, and growth is accelerating. Adoption continues to soar among developers and the companies that employ them. A few data points worth noting:

  • At this writing, over 120,190 commits have been made to the Kubernetes repository on GitHub (link resides outside IBM) — an increase of nearly 34,000 commits in the past 18 months — and there are more than 3,100 active contributors to the project. According to the Cloud Native Computing Foundation (CNCF) there have been more than 148,000 commits across all Kubernetes-related repositories (including Kubernetes Dashboard and Kubernetes MiniKube). You can read all the stats here (link resides outside IBM).
  • More than 2,000 companies use Kubernetes in their production software stacks. These include well-known enterprises such as Airbnb, Ancestry, Bose, CapitalOne, Intuit, Nordstrom, Philips, Reddit, Slack, Spotify, Tinder, and, of course, IBM. Read these and other adoption case studies (link resides outside IBM).
  • A 2021 survey cited in Container Journal (link resides outside IBM) found that 68% of IT professionals increased use of Kubernetes during the COVID-19 pandemic.
  • According to ZipRecruiter (link resides outside IBM), the average annual salary (in North America) for a Kubernetes-related job is USD 147,732. At this writing, there are currently more than 57,000 Kubernetes-related positions listed on LinkedIn (link resides outside IBM), as compared to 21,000 positions listed just 18 months ago.

Kubernetes tutorials

If you're ready to start working with Kubernetes or looking to build your skills with Kubernetes and Kubernetes ecosystem tools, try one of these tutorials:

Kubernetes and IBM Cloud

Containers are ideal for modernizing your applications and optimizing your IT infrastructure. Built on Kubernetes and other tools in the open-source Kubernetes ecosystem, container services from IBM Cloud can facilitate and accelerate your path to cloud-native application development, and to an open hybrid cloud approach that integrates the best features and functions from private cloud, public cloud and on-premises IT infrastructure.

Take the next step:

  • Learn how you can deploy highly available, fully managed Kubernetes clusters for your containerized applications with a single click using Red Hat OpenShift on IBM Cloud.
  • Deploy and manage containerized applications consistently across on-premises, edge computing and public cloud environments from any vendor with IBM Cloud Satellite.
  • Run container images, batch jobs or source code as a serverless workload (no sizing, deploying, networking or scaling required) with IBM Cloud Code Engine.
  • Deploy secure, highly available applications in a native Kubernetes experience using IBM Cloud Kubernetes Service.

To get started right away, sign up for an IBM Cloud account.