What is Google's Kubernetes project about?

Kubernetes (k8s) explained

Kubernetes, also called k8s (the "8" stands for the eight letters between the "k" and the "s"), or "kube" for short, is an open source platform that automates the operation of Linux® containers, eliminating many of the manual processes involved in deploying and scaling containerized applications. In other words, with Kubernetes you can cluster groups of hosts running Linux containers and manage those clusters simply and efficiently. These clusters can run in public, private or hybrid clouds. This makes Kubernetes an ideal platform for hosting cloud-native applications that need to scale quickly, such as real-time data streaming with Apache Kafka.


A brief history of Kubernetes

Kubernetes was originally developed and designed by Google engineers. Google was one of the early adopters of Linux container technology and has stated publicly that everything at Google runs in containers. (This technology is the basis of Google's cloud services.) Google spins up over 2 billion container deployments per week, all powered by one internal platform: Borg. Borg was the predecessor of Kubernetes, and the lessons learned from developing it over the years became the foundation of today's Kubernetes technology.

An anecdote on the side: the seven spokes of the Kubernetes logo refer to the project's original name, "Project Seven" (a reference to Seven of Nine, the former Borg drone in the Star Trek series "Voyager").

Red Hat® was one of Google's first partners in developing Kubernetes, even before its release, and is now the second largest contributor to the upstream Kubernetes project. In 2015, Google donated the Kubernetes project to the newly formed Cloud Native Computing Foundation.


What can you use Kubernetes for?

The main advantage of Kubernetes is that it gives you a platform to schedule and run containers on clusters of physical or virtual machines. This is especially valuable if you are optimizing your app development for the cloud. More broadly, Kubernetes lets you implement a fully container-based infrastructure in your production environments that you can rely on. And because Kubernetes is all about automating operational tasks, it lets you do much of what other application platforms or management systems do, but for your containers.

With Kubernetes you can:

  • Orchestrate containers across multiple hosts
  • Make better use of hardware to maximize the resources available to your business applications
  • Control and automate application deployments and updates
  • Mount and add storage to run stateful apps
  • Scale containerized applications and their resources on the fly
  • Manage services declaratively, ensuring that deployed applications always run the way you intended
  • Health-check and self-heal your apps with autoplacement, autorestart, autoreplication and autoscaling
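The self-healing behavior in the last point boils down to a reconciliation loop: continuously compare the desired state with the observed state and repair any drift. Here is a minimal, hypothetical Python sketch of that idea (the function and its arguments are invented for illustration, not Kubernetes' actual controller code):

```python
# Hypothetical sketch of a Kubernetes-style reconciliation loop: the
# controller compares the desired replica count with the pods actually
# running and starts or stops pods to close the gap.

def reconcile(desired_replicas, running_pods, start_pod, stop_pod):
    """Bring the list of running pods to the desired count."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        for _ in range(diff):       # too few copies: start replacements
            running_pods.append(start_pod())
    elif diff < 0:
        for _ in range(-diff):      # too many copies: stop the extras
            stop_pod(running_pods.pop())
    return running_pods

# Example: two pods crashed; one pass of the loop restores all three.
pods = ["pod-a"]
pods = reconcile(3, pods, start_pod=lambda: "pod-new", stop_pod=lambda p: None)
print(len(pods))  # 3
```

In real Kubernetes this comparison runs continuously, so a crashed container is replaced without any administrator involvement.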

However, Kubernetes relies on other projects to fully deliver these orchestrated services. With the help of additional open source projects, you can realize the full power of Kubernetes. Necessary components include:

  • Registry - via projects like Atomic Registry or Docker Registry
  • Networking - via projects such as OpenvSwitch and intelligent edge routing
  • Telemetry - via projects like Heapster, Kibana, Hawkular and Elastic
  • Security - via projects such as LDAP, SELinux, RBAC and OAUTH with multi-tenancy layers
  • Automation - through Ansible® playbooks for installation purposes and cluster lifecycle management
  • Services - via a rich catalog of predefined content from popular app templates

Learn the Kubernetes language

As with any technology, there is specialized jargon that can be a barrier to entry. Here is a list of the most common terms to help you understand Kubernetes better.

Control plane: The machine that controls the Kubernetes nodes. This is where all task assignments originate.

Nodes: These machines perform the requested, assigned tasks. They are controlled by the Kubernetes control plane.

Pod: A group of one or more containers deployed to a single node. All containers in a pod share an IP address, IPC, hostname and other resources. Pods abstract networking and storage away from the underlying container, which makes it easier to move containers around the cluster.

Replication controller: This controls how many identical copies of a pod should be running, and where on the cluster.

Service: This decouples work definitions from the pods. Kubernetes service proxies automatically route service requests to the right pod, no matter where it is in the cluster or whether it has been replaced.
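This decoupling can be pictured with a small, hypothetical Python sketch (the class and its methods are invented for illustration): clients always address the service, and the service forwards each request to whichever pod currently backs it, even after a pod has been replaced.

```python
# Hypothetical sketch of what a Kubernetes service does conceptually:
# requests target the service, and a proxy forwards each one to whichever
# pod currently backs that service, wherever it runs in the cluster.

class Service:
    def __init__(self, pods):
        self.pods = list(pods)          # current backing pods (endpoints)

    def replace_pod(self, old, new):
        """A pod died and was replaced; clients never notice."""
        self.pods[self.pods.index(old)] = new

    def route(self, request):
        # Naive round-robin: rotate the endpoint list per request.
        pod = self.pods.pop(0)
        self.pods.append(pod)
        return f"{pod} handled {request}"

svc = Service(["pod-1", "pod-2"])
print(svc.route("GET /"))   # pod-1 handled GET /
svc.replace_pod("pod-2", "pod-3")
print(svc.route("GET /"))   # pod-3 handled GET /
```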

Kubelet: This service runs on the nodes, reads the container manifests, and ensures that the defined containers are started and running.

kubectl: This is the command line configuration tool for Kubernetes.


How does Kubernetes work?

A functioning Kubernetes deployment is known as a cluster. You can imagine a Kubernetes cluster in two parts: the control plane and the computing machines or nodes.

Each node is its own Linux® environment and can be either a physical or a virtual machine. Pods, which are made up of containers, run on each node.

The control plane is responsible for maintaining the desired state of the cluster, e.g. which applications are running and which container images they use. The compute machines run the applications and workloads.

Kubernetes runs on an operating system (e.g. Red Hat® Enterprise Linux®) and interacts with pods from containers that run on the nodes.

The Kubernetes control plane receives commands from an administrator (or DevOps team) and relays these instructions to the compute machines.

This relaying works with a variety of services to automatically decide which node is best suited for the task. Resources are then allocated, and the pods on that node are assigned the requested work.
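The node-selection step can be sketched in a few lines of hypothetical Python (real Kubernetes scheduling weighs many more factors, such as affinity rules and taints; this only shows the basic fit-and-score idea):

```python
# Hypothetical sketch of the scheduling decision: pick a node with enough
# free capacity for the pod, preferring the least-loaded candidate.

def pick_node(nodes, pod_cpu):
    """nodes: {name: free CPU in millicores}. Returns best node or None."""
    candidates = {name: free for name, free in nodes.items() if free >= pod_cpu}
    if not candidates:
        return None                 # no node fits; the pod stays pending
    return max(candidates, key=candidates.get)

nodes = {"node-1": 500, "node-2": 1500, "node-3": 200}
print(pick_node(nodes, 400))  # node-2
```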

The desired state of a Kubernetes cluster defines what applications or other workloads to run, what images to use, what resources to make available to them, and similar configuration details.

From an infrastructure perspective, the way you manage containers changes little. Control over the containers is just at a higher level, which gives you greater control without having to micromanage every single container or node.

Your job consists of configuring Kubernetes and defining nodes, pods and the containers in them. Kubernetes takes over the orchestration of the containers.

Where you run Kubernetes is up to you. This can be on bare metal servers, virtual machines, public cloud providers, in private clouds and hybrid cloud environments. One of the main advantages of Kubernetes is that it works on many different types of infrastructures.


What are the advantages of Kubernetes?

Real production apps span multiple containers, and those containers must be deployed across multiple server hosts. Security for containers is multi-layered and can therefore be very complex. This is exactly where Kubernetes helps. It gives you the orchestration and management capabilities you need to deploy containers at scale for these workloads. With Kubernetes orchestration, you can build application services that span multiple containers, schedule and scale those containers across a cluster, and monitor their health over time. With Kubernetes, you take the first real steps toward better IT security.

Kubernetes also needs to be integrated with networking, storage, security, telemetry and other services to create a comprehensive container infrastructure.

Of course, that depends on how you use containers in your environment. In a rudimentary application of Linux containers, these are treated as efficient, fast virtual machines. Once this environment is scaled to the size of a production environment and multiple applications, you need the power of multiple, side-by-side containers that provide the individual services. This increases the number of containers in your environment significantly, and as the number of containers increases, so does complexity.

Kubernetes addresses many of the common problems that arise from the growing number of containers by grouping them into so-called "pods." Pods add a layer of abstraction to the grouped containers, which helps you schedule workloads and provide those containers with the services they need, such as networking and storage. Other parts of Kubernetes help you balance the load across these pods and ensure that the right number of containers is running for your workloads.

With the right implementation of Kubernetes and the support of other open source projects such as Open vSwitch, OAuth and SELinux, you can orchestrate all parts of your container infrastructure.


The use of Kubernetes in production

Kubernetes is an open source technology. As such, it has no formal support structure, at least not one you would trust your business to. If you ran into problems with your Kubernetes implementation in production, you would not be pleased. Neither would your customers.

And this is exactly where Red Hat® OpenShift® comes in. OpenShift is Kubernetes for businesses and much more. OpenShift integrates all of the additional technology components that make Kubernetes powerful and enterprise-ready, including registry, networking, telemetry, security, automation and services. With OpenShift, your engineers can build new containerized apps, host them, and deploy them in the cloud with the scalability, control, and orchestration to turn a good idea into business opportunities quickly and easily.

And the best thing is: OpenShift is supported and developed by Red Hat, the world's leading provider of open source solutions.

What about Docker?

Docker technology still does what it was designed to do. When Kubernetes schedules a pod to a node, the Kubelet on that node instructs Docker to start the specified containers. The Kubelet then continuously collects the status of these Docker containers and aggregates this information in the control plane. Docker pulls container images onto the node and starts and stops the containers as usual. The difference is that an automated system asks Docker to do these things instead of an administrator doing so manually on every node for all containers.
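The division of labor described above can be sketched as follows; this is a hypothetical Python illustration, and the runtime interface (`list_running`, `start`, `stop`) is invented for the example, not Docker's real API:

```python
# Hypothetical sketch of the Kubelet/runtime split: the Kubelet compares
# the pod manifest with what the container runtime reports as running,
# then issues start/stop commands to close the gap.

def sync_pod(manifest_containers, runtime):
    desired = set(manifest_containers)
    running = set(runtime.list_running())
    for name in desired - running:      # defined but not running: start it
        runtime.start(name)
    for name in running - desired:      # running but no longer defined: stop it
        runtime.stop(name)

class FakeRuntime:
    """Stand-in for a container runtime such as Docker."""
    def __init__(self, running):
        self.running = set(running)
    def list_running(self):
        return self.running
    def start(self, name):
        self.running.add(name)
    def stop(self, name):
        self.running.discard(name)

rt = FakeRuntime(["app", "old-sidecar"])
sync_pod(["app", "log-agent"], rt)
print(sorted(rt.running))  # ['app', 'log-agent']
```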

 

"With Red Hat OpenShift we get a Kubernetes framework for companies with everything we need in the areas of stability, lifecycle management, storage integration and authorization functions for our important pharmaceutical processes."

Clemens Utschig-Utschig
Head of IT Technology Strategy & CTO, Boehringer Ingelheim