Kubernetes¶
Overview¶
Kubernetes (often abbreviated K8s) is an open source container orchestration platform used to automate the deployment, scaling, and management of containerized applications. Helm is often used as a package manager for Kubernetes applications, similar to Yum for CentOS, APT for Ubuntu, etc. Note that Kubernetes is a complex technology that requires some effort to learn, just like all infrastructure solutions.
There are plenty of resources available on the Internet that cover the basics, so we won't rehash the details here. You might find the Kubernetes site useful.
Genesis Public Cloud is an OpenStack-based cloud and uses the Magnum Project to deploy Kubernetes clusters, specifically the Kubernetes Masters and Nodes. You can use a single command to deploy a small or large Kubernetes cluster that is ready to manage via the Kubernetes API.
When OpenStack deploys the cluster, drivers are included to configure various Kubernetes components, including volumes, load balancers, and Ingress load balancers, using the respective OpenStack components. For example, when a volume is created in Kubernetes, a volume is created in OpenStack and mapped to the container via a driver. When an Ingress load balancer is created, an OpenStack load balancer is created and mapped appropriately to Kubernetes, so you, the user, never notice.
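As a minimal sketch of the volume mapping, assuming kubectl access to a cluster built with the cinder volume driver used later in this guide (the storage class name is illustrative; the provisioner shown is the in-tree Cinder provisioner):
kubectl apply -f - <<EOF
# Illustrative storage class backed by the in-tree Cinder provisioner
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-demo
provisioner: kubernetes.io/cinder
---
# Binding this claim provisions a matching OpenStack (Cinder) volume
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: cinder-demo
  resources:
    requests:
      storage: 1Gi
EOF
Once the claim is bound, openstack volume list shows the backing OpenStack volume.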
In addition to deployments, OpenStack can upgrade, destroy, and scale Kubernetes clusters, automatically creating and destroying VMs to expand and contract the cluster based on usage.
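As a quick sketch of a scaling operation (fuller documentation on resizing is planned; see "More to come..." below), the following asks Magnum to resize a hypothetical cluster named k8s-cluster001 to five Nodes, creating or destroying VMs as needed:
openstack coe cluster resize k8s-cluster001 5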
Cluster templates¶
To deploy a Kubernetes cluster on OpenStack, a Container Orchestration Engine (COE) template is required. The template defines what a cluster will consist of, such as the Kubernetes version, which image to use for the Master and Node VMs, which network overlay driver to use, etc. This allows different cluster definitions to be used when creating clusters, depending on the need. Magnum supports the following Container Orchestration Engines: Kubernetes, Swarm, and Mesos.
To get started with Kubernetes, create a Kubernetes cluster template using the following. Replace the respective parameters as appropriate:
- keypair with an appropriate public key that you have imported into OpenStack
- master-flavor and flavor that are appropriate for your Master and Node VMs
- fixed-network where your cluster VMs will be connected
- fixed-subnet where your cluster VMs will get their IP addresses from
- name of the template - we have provided “k8s-cluster-template-1.15.7-production-private”, but you can name it something that is more appropriate
And run:
openstack coe cluster template create \
--image Fedora-AtomicHost-29-20191126.0.x86_64_raw \
--keypair userkey \
--external-network ext-net \
--dns-nameserver 1.1.1.1 \
--master-flavor c5sd.xlarge \
--flavor m5sd.large \
--coe kubernetes \
--network-driver flannel \
--volume-driver cinder \
--docker-storage-driver overlay2 \
--docker-volume-size 50 \
--registry-enabled \
--master-lb-enabled \
--floating-ip-disabled \
--fixed-network KubernetesProjectNetwork001 \
--fixed-subnet KubernetesProjectSubnet001 \
--labels kube_tag=v1.15.7,cloud_provider_tag=v1.15.0,heat_container_agent_tag=stein-dev,master_lb_floating_ip_enabled=true \
k8s-cluster-template-1.15.7-production-private
You can list the available cluster templates using:
openstack coe cluster template list
Building a cluster¶
To build a cluster, you will need a key pair to log in to the individual Master and Node VMs, as described in Storing a key pair. You will also need a network and subnet created; a minimal sketch of these prerequisites is shown below.
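If you have not created these prerequisites yet, the following is a minimal sketch; the public key path, network and subnet names, and subnet range are illustrative and should match what you reference in your cluster template:
# Import an existing SSH public key (path is illustrative)
openstack keypair create --public-key ~/.ssh/id_rsa.pub userkey
# Create the network and subnet the cluster VMs will attach to
openstack network create KubernetesProjectNetwork001
openstack subnet create \
--network KubernetesProjectNetwork001 \
--subnet-range 10.0.0.0/24 \
--dns-nameserver 1.1.1.1 \
KubernetesProjectSubnet001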
Next, simply specify a cluster template, the OpenStack subnet where the cluster will be located, the number of Master and Node VMs, and the cluster name, and sit back for about 5 minutes while the cluster is created:
openstack coe cluster create \
--cluster-template k8s-cluster-template-1.15.7-production-private \
--keypair userkey \
--master-count 3 \
--node-count 3 \
--timeout 15 \
k8s-cluster001
Note that the process of creating the cluster not only consists of creating VMs, load balancers, load balancer policies, security groups, etc., but also downloading updates to the operating system, downloading Docker and Etcd components, configuring the Masters to work in a cluster, etc. - so the operation is a bit involved and can take a while. For the above example, it is typically 5 minutes or less.
You can override various parameters in the template when creating a cluster, such as the following (an example is shown after the list):
--keypair
--master-flavor
--flavor
--docker-volume-size
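For example, the following sketch reuses the template above but overrides the Node flavor and the Docker volume size (the flavor name is illustrative):
openstack coe cluster create \
--cluster-template k8s-cluster-template-1.15.7-production-private \
--keypair userkey \
--flavor m5sd.xlarge \
--docker-volume-size 100 \
--master-count 3 \
--node-count 3 \
k8s-cluster002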
Using the common OpenStack command format, view the list of clusters by running:
openstack coe cluster list
If you wish to monitor the status of the cluster creation process every 10 seconds, use:
watch -n 10 openstack coe cluster list
Once the cluster reaches a HEALTHY status, review the properties of the cluster using:
openstack coe cluster show k8s-cluster001
This will show the IP addresses associated with the individual VMs as well as the Kubernetes API address.
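To interact with the cluster using kubectl, you can fetch its credentials with Magnum's config subcommand, which writes a kubeconfig file into the given directory (for a private cluster, run this from somewhere with network access to the API, such as the Jump Host described below):
mkdir -p ~/k8s-cluster001
openstack coe cluster config --dir ~/k8s-cluster001 k8s-cluster001
export KUBECONFIG=~/k8s-cluster001/config
kubectl get nodes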
The template provided above creates a private cluster, meaning that the only way to access the Kubernetes API is from the local network. If you wish to connect to it from the Internet, you will need to create a Jump Host on the "fixed-network" specified and connect to this Jump Host via a floating IP, as sketched below.
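A minimal Jump Host sketch follows; the image and flavor names are illustrative and should be replaced with ones available in your project:
# Create the Jump Host on the cluster's fixed network
openstack server create \
--image "Ubuntu 18.04" \
--flavor m5sd.large \
--network KubernetesProjectNetwork001 \
--key-name userkey \
jumphost001
# Allocate a floating IP from the external network and attach it
openstack floating ip create ext-net
openstack server add floating ip jumphost001 <FLOATING_IP>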
The VMs for the cluster are created in your account, so they are no different from any other VM. Simply use:
openstack server list
to view the VMs that OpenStack created.
Server groups are created for the Master and Node groups, which can be seen using:
openstack server group list
You can also view the load balancers that were created for the Kubernetes Master nodes for both API access (TCP port 6443) and its etcd cluster (TCP port 2379):
openstack loadbalancer list
If you have the ability to filter traffic by source IP, you can create a cluster with a public interface and then add security groups to the load balancer.
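As a sketch, assuming a cluster with a public API and that you have identified the load balancer's VIP port, you could restrict API access to a trusted range (the names and CIDR are illustrative, and whether the VIP port enforces security groups depends on your deployment):
# Security group allowing the Kubernetes API only from a trusted range
openstack security group create k8s-api-allowlist
openstack security group rule create \
--protocol tcp --dst-port 6443 \
--remote-ip 203.0.113.0/24 \
k8s-api-allowlist
# Apply the security group to the load balancer's VIP port
openstack port set --security-group k8s-api-allowlist <VIP_PORT_ID>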
Note that OpenStack manages all of these components as part of the cluster, so do NOT adjust the components outside of OpenStack or parts of the cluster could fail.
More to come…¶
We will add additional documentation soon including:
- Resizing a cluster
- Autoscaling a cluster
- Deleting a cluster
- Ingress controller
- Rolling upgrade
- Deploying a development cluster
- Using HELM
- Persistent volumes