Kubernetes Integration with Python-CGI
Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation.
It aims to provide a “platform for automating deployment, scaling, and operations of application containers across clusters of hosts”. It works with a range of container tools and runs containers in a cluster, often with images built using Docker.
Docker is an open platform for developing, shipping, and running applications. Docker enables you to separate your applications from your infrastructure so you can deliver software quickly. With Docker, you can manage your infrastructure in the same way you manage your applications. Developers can create containers without Docker, but the platform makes it easier and safer to build, deploy, and manage them. Docker is essentially a toolkit that enables developers to build, deploy, run, update, and stop containers using simple commands and work-saving automation through a single API.
With Kubernetes we can:
- Orchestrate (organize) containers across multiple hosts.
- Make better use of hardware to maximize the resources needed to run your enterprise apps.
- Control and automate application deployments and updates.
- Mount and add storage to run stateful apps.
- Scale containerized applications and their resources on the fly.
- Declaratively manage services, which guarantees that deployed applications always run the way you intended them to run.
- Health-check and self-heal your apps with auto-placement, auto-restart, auto-replication, and auto-scaling.
A Node, also known as a Worker or a Minion, is a machine where containers (workloads) are deployed. Every node in the cluster must run a container runtime such as Docker.
The basic scheduling unit in Kubernetes is a pod. A pod is a grouping of containerized components. A pod consists of one or more containers that are guaranteed to be co-located on the same node.
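Kubernetes accepts manifests in JSON as well as YAML. As a minimal sketch, a single-container pod can be described as a Python dict and serialized; the names (demo-pod, web, nginx) below are illustrative placeholders, not part of any fixed convention.

```python
import json

# A minimal single-container Pod manifest expressed as a Python dict.
# The metadata name, container name, and image are placeholders.
pod_manifest = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "demo-pod", "labels": {"app": "demo"}},
    "spec": {
        "containers": [
            {"name": "web", "image": "nginx", "ports": [{"containerPort": 80}]}
        ]
    },
}

# kubectl accepts JSON manifests too, so this output could be saved to a
# file and applied with: kubectl apply -f pod.json
print(json.dumps(pod_manifest, indent=2))
```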
A ReplicaSet declares the number of instances of a pod that are needed, and a Replication Controller manages the system so that the number of healthy running pods matches the number declared in the ReplicaSet. In other words, the replication controller controls how many identical copies of a pod should be running somewhere on the cluster.
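The controller's reconciliation logic can be sketched as a simple diff between the declared replica count and the observed pod count. This is a toy illustration of the idea, not the actual controller code:

```python
def reconcile(desired: int, current: int) -> str:
    """Return the action a replication controller would take so that the
    number of running pods matches the declared replica count."""
    if current < desired:
        return f"create {desired - current} pod(s)"
    if current > desired:
        return f"delete {current - desired} pod(s)"
    return "no action"

print(reconcile(3, 1))  # create 2 pod(s)
print(reconcile(3, 5))  # delete 2 pod(s)
print(reconcile(3, 3))  # no action
```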
The most basic type of load balancing in Kubernetes is actually load distribution, which is easy to implement at the dispatch level. Load balancing is defined as the methodical and efficient distribution of network or application traffic across multiple servers in a server farm. Each load balancer sits between client devices and backend servers, receiving and then distributing incoming requests to any available server capable of fulfilling them.
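Round-robin dispatch, the simplest form of load distribution, can be sketched in a few lines: each incoming request is handed to the next backend in turn. The pod names here are made up for illustration:

```python
from itertools import cycle

# Toy round-robin dispatcher: requests are assigned to backend pods in
# turn, wrapping around when the list is exhausted.
backends = ["pod-a", "pod-b", "pod-c"]
dispatcher = cycle(backends)

# Simulate dispatching six incoming requests.
assignments = [next(dispatcher) for _ in range(6)]
print(assignments)  # ['pod-a', 'pod-b', 'pod-c', 'pod-a', 'pod-b', 'pod-c']
```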
Minikube is an open-source tool that enables you to run Kubernetes on your local machine using a virtualization manager. Minikube runs on Linux, macOS, and Windows, selecting a virtualization manager appropriate for each operating system.
- Kubelet: This service runs on each node, reads the container manifests, and ensures the defined containers are started and running.
- kubectl: The command-line configuration tool for Kubernetes. It enables interaction with the cluster: creating pods, services, and other components.
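Since the app drives the cluster through kubectl, its core work is assembling kubectl argument lists from user input and handing them to `subprocess.run`. As a sketch (deployment names, images, and ports are user-supplied placeholders), the commands could be built like this:

```python
# Sketch: building kubectl argument lists that a Python program could pass
# to subprocess.run(). Only the command construction is shown here.
def kubectl_create_deployment(name: str, image: str) -> list:
    """kubectl create deployment NAME --image IMAGE"""
    return ["kubectl", "create", "deployment", name, "--image", image]

def kubectl_expose(name: str, port: int) -> list:
    """kubectl expose deployment NAME --port PORT --type NodePort"""
    return ["kubectl", "expose", "deployment", name,
            "--port", str(port), "--type", "NodePort"]

def kubectl_scale(name: str, replicas: int) -> list:
    """kubectl scale deployment NAME --replicas N"""
    return ["kubectl", "scale", "deployment", name,
            "--replicas", str(replicas)]

print(kubectl_create_deployment("myweb", "httpd"))
```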
This app will help the user run common Kubernetes commands. It can launch pods with a specific name given by the user.
- Run a deployment using the image and name given by the user.
- Expose services on a port number given by the user.
- Scale the replicas according to user need.
- Delete the complete environment created.
- Delete specific resources given by the user.
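Tying the pieces together, the CGI handler can parse the browser's query string and map it to one of the actions above. The parameter names below (cmd, name, image) are this sketch's own convention, not a fixed API, and `urllib.parse.parse_qs` is used instead of the deprecated `cgi` module:

```python
from urllib.parse import parse_qs

def build_command(query: str) -> list:
    """Translate a CGI query string into a kubectl argument list."""
    params = parse_qs(query)
    cmd = params.get("cmd", [""])[0]
    name = params.get("name", ["demo"])[0]
    if cmd == "launch":
        image = params.get("image", ["nginx"])[0]
        return ["kubectl", "create", "deployment", name, "--image", image]
    if cmd == "delete":
        return ["kubectl", "delete", "deployment", name]
    raise ValueError(f"unknown command: {cmd}")

# In the real CGI script the command would then be executed, e.g.:
#   import os, subprocess
#   print("Content-type: text/plain\n")
#   result = subprocess.run(build_command(os.environ["QUERY_STRING"]),
#                           capture_output=True, text=True)
#   print(result.stdout or result.stderr)
print(build_command("cmd=launch&name=web&image=httpd"))
```

Note that the web server's user typically needs permission (e.g. a kubeconfig or sudo access) to run kubectl on the host for this to work.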