Kubernetes

Containers have been helping teams of all sizes solve issues with consistency, scalability, and security. Containers, such as those run with Docker, allow you to separate the application from the underlying infrastructure. Getting the most value out of that separation requires some new tooling, and one of the most popular tools for container management and orchestration is Kubernetes.

What is Kubernetes?

Kubernetes (also known as k8s or “kube”) is an open-source container orchestration platform that automates many of the manual processes involved in deploying, managing, and scaling containerized applications. Kubernetes is most often used with Docker, the most popular containerization platform, but it works with any container system that conforms to the Open Container Initiative (OCI) standards for container image formats and runtimes. Because Kubernetes is open source, with relatively few restrictions on how it can be used, anyone who wants to run containers can use it freely, anywhere they want to run them: on-premises, in the public cloud, or both.

The central component of Kubernetes is the cluster. A cluster is made up of virtual or physical machines that each serve a specialized function, either as a master or as a node. Each node hosts groups of one or more containers (which contain your applications), and the master communicates with nodes about when to create or destroy containers. At the same time, the master tells nodes how to re-route traffic based on new container placements.

The Kubernetes master

The Kubernetes master is the access point (or the control plane) from which administrators and other users interact with the cluster to manage the scheduling and deployment of containers. A cluster will always have at least one master but may have more depending on the cluster’s replication pattern.

The master stores the state and configuration data for the entire cluster in etcd, a persistent and distributed key-value data store. Each node has access to etcd and through it, nodes learn how to maintain the configurations of the containers they are running. You can run etcd on the Kubernetes master, or in standalone configurations.

Masters communicate with the rest of the cluster through the kube-apiserver, the main access point to the control plane. For example, the kube-apiserver makes sure that configurations in etcd match the configurations of containers deployed in the cluster.

The kube-controller-manager handles control loops that manage the state of the cluster via the Kubernetes API server. The kube-controller-manager also handles controls for deployments, replicas, and nodes. For example, the node controller is responsible for registering a node and monitoring its health throughout its lifecycle.

Node workloads in the cluster are tracked and managed by the kube-scheduler. This service keeps track of the capacity and resources of nodes and assigns work to nodes based on their availability.

The cloud-controller-manager is a service running in Kubernetes that helps keep it “cloud-agnostic.” The cloud-controller-manager serves as an abstraction layer between the APIs and tools of a cloud provider (for example, storage volumes or load balancers) and their representational counterparts in Kubernetes.

Nodes

All nodes in a Kubernetes cluster must be configured with a container runtime, which is typically Docker. The container runtime starts and manages the containers as they are deployed to nodes in the cluster by Kubernetes. Your applications (web servers, databases, API servers, etc.) run inside the containers.

Each Kubernetes node runs an agent process called a kubelet that is responsible for managing the state of the node: starting, stopping, and maintaining application containers based on instructions from the control plane. The kubelet collects performance and health information from the node, pods, and containers it runs, and then shares that information with the control plane to help it make scheduling decisions.

The kube-proxy is a network proxy that runs on each node in the cluster. It maintains network rules on the node and load balances traffic for services across the pods that back them.

The basic scheduling unit is a pod, which consists of one or more containers that are guaranteed to be co-located on the host machine and can share resources. Each pod is assigned a unique IP address within the cluster, allowing the application to use ports without conflict.

You describe the desired state of the containers in a pod through a YAML or JSON object called a Pod Spec. These objects are passed to the kubelet through the API server.
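
For example, a minimal Pod Spec for a single nginx container might look like the following (the names and image tag are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: my-nginx            # hypothetical pod name
      labels:
        app: my-nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25     # example image and tag
          ports:
            - containerPort: 80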

A pod can define one or more volumes, such as a local disk or network disk, and expose them to the containers in the pod, which allows different containers to share storage space. For example, volumes can be used when one container downloads content and another container uploads that content somewhere else.
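
As a minimal sketch of that pattern (the names, images, and URL here are illustrative), two containers in one pod can share an emptyDir volume; one container writes content into the volume and the other reads from it:

    apiVersion: v1
    kind: Pod
    metadata:
      name: shared-volume-pod         # hypothetical pod name
    spec:
      volumes:
        - name: shared-data
          emptyDir: {}                # scratch space that lives as long as the pod
      containers:
        - name: downloader
          image: busybox:1.36
          # Writes downloaded content into the shared volume
          command: ["sh", "-c", "wget -O /data/content http://example.com && sleep 3600"]
          volumeMounts:
            - name: shared-data
              mountPath: /data
        - name: uploader
          image: busybox:1.36
          # Would read /data/content and upload it somewhere else
          command: ["sh", "-c", "sleep 3600"]
          volumeMounts:
            - name: shared-data
              mountPath: /data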

Since containers inside pods are often ephemeral, Kubernetes offers a type of load balancer, called a service, to simplify sending requests to a group of pods. A service targets a logical set of pods selected based on labels (explained below). By default, services can be accessed only from within the cluster, but you can enable public access to them as well if you want them to receive requests from outside the cluster.
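
A minimal Service manifest, assuming pods labeled app: my-nginx as in the earlier sketch, might look like this:

    apiVersion: v1
    kind: Service
    metadata:
      name: my-nginx-service    # hypothetical service name
    spec:
      selector:
        app: my-nginx           # targets the set of pods carrying this label
      ports:
        - port: 80              # port exposed by the service
          targetPort: 80        # port the pods listen on
      type: ClusterIP           # default: in-cluster only; NodePort or LoadBalancer expose it externally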

Deployments and replicas

A deployment is a YAML object that defines the pods and the number of pod instances, called replicas, that should be running. You define the number of replicas you want running in the cluster via a ReplicaSet, which is part of the deployment object. So, for example, if a node running a pod dies, the ReplicaSet will ensure that a replacement pod is scheduled on another available node.
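
A minimal Deployment sketch (the names are illustrative) that keeps three replicas of an nginx pod running:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-nginx-deployment   # hypothetical deployment name
    spec:
      replicas: 3                 # the underlying ReplicaSet keeps three pods running
      selector:
        matchLabels:
          app: my-nginx
      template:                   # pod template used to stamp out the replicas
        metadata:
          labels:
            app: my-nginx
        spec:
          containers:
            - name: nginx
              image: nginx:1.25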

A DaemonSet deploys and runs a specific daemon (in a pod) on the nodes you specify. DaemonSets are most often used to provide services or maintenance to pods. A DaemonSet, for example, is how New Relic Infrastructure gets the Infrastructure agent deployed across all nodes in a cluster.
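
A minimal DaemonSet sketch (the agent name and image are placeholders) that runs one copy of a pod on every node:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: node-agent            # hypothetical monitoring agent
    spec:
      selector:
        matchLabels:
          app: node-agent
      template:
        metadata:
          labels:
            app: node-agent
        spec:
          containers:
            - name: agent
              image: example/agent:1.0   # placeholder image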

Namespaces

Namespaces allow you to create virtual clusters on top of a physical cluster. Namespaces are intended for use in environments with many users spread across multiple teams or projects. You can assign resource quotas to namespaces and use them to logically isolate cluster resources.
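
As a sketch (the names and limits are illustrative), a Namespace plus a ResourceQuota that caps what workloads in that namespace can consume:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a                # hypothetical team namespace
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        pods: "20"                # cap on the number of pods in this namespace
        requests.cpu: "4"         # total CPU the namespace may request
        requests.memory: 8Gi      # total memory the namespace may request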

Labels

Labels are key/value pairs that you can assign to pods and other objects in Kubernetes. Labels allow Kubernetes operators to organize and select a subset of objects. For example, when monitoring Kubernetes objects, labels let you quickly drill down to the information you are most interested in.

Stateful sets and persistent storage volumes

StatefulSets give you the ability to assign unique IDs to pods in case you need to move pods to other nodes, maintain networking between pods, or persist data between them. Similarly, persistent storage volumes provide storage resources for a cluster to which pods can request access as they’re deployed.
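
For example, a pod (or a StatefulSet, via its volume claim templates) can request storage through a PersistentVolumeClaim; a minimal sketch with illustrative names and sizes:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: data-claim            # hypothetical claim name
    spec:
      accessModes:
        - ReadWriteOnce           # mountable read-write by a single node
      resources:
        requests:
          storage: 10Gi           # amount of storage the pod is asking for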

Other useful components

These Kubernetes components are useful but not required for regular Kubernetes functionality:

  • Kubernetes DNS
    Kubernetes provides this mechanism for DNS-based service discovery between pods. For example, a service named my-service in the my-namespace namespace is resolvable inside the cluster at my-service.my-namespace.svc.cluster.local (assuming the default cluster domain). This DNS server works in addition to any other DNS servers you may use in your infrastructure.
  • Cluster-level logs
    If you have a logging tool, you can integrate it with Kubernetes to extract and store application and system logs written to standard output and standard error from within a cluster. Note that Kubernetes does not provide native log storage; if you want cluster-level logs, you must provide your own log storage solution.

How to deploy Kubernetes

Prerequisites

In order to deploy Kubernetes, you need one or more machines running one of the following operating systems:

  • Ubuntu 16.04 or higher
  • Debian 9 or higher
  • CentOS 7
  • Red Hat Enterprise Linux (RHEL) 7
  • Fedora 25 or higher
  • HypriotOS v1.0.1 or higher
  • Container Linux (tested with 1800.6.0)

Each machine must also have:

  • 2 GB or more of RAM (any less will leave little room for your apps)
  • 2 CPUs or more
  • Full network connectivity between all machines in the cluster (a public or private network is fine)
  • A unique hostname, MAC address, and product_uuid for every node (see the Kubernetes documentation for details)
  • The required ports open (see the Kubernetes documentation for details)
  • Swap disabled; you MUST disable swap in order for the kubelet to work properly

All containers in Kubernetes are scheduled as pods, which are groups of co-located containers that share some resources. Furthermore, in a realistic application, we almost never create individual pods; instead, most of our workloads are scheduled as deployments, which are scalable groups of pods maintained automatically by Kubernetes. Lastly, all Kubernetes objects can and should be described in manifests called Kubernetes YAML files. These YAML files describe all the components and configurations of your Kubernetes app, and can be used to easily create and destroy your app in any Kubernetes environment.

Deployments represent a set of multiple, identical Pods with no unique identities. A Deployment runs multiple replicas of your application and automatically replaces any instances that fail or become unresponsive. In this way, Deployments help ensure that one or more instances of your application are available to serve user requests. Deployments are managed by the Kubernetes Deployment controller.

Deployments use a Pod template, which contains a specification for its Pods. The Pod specification determines what each Pod should look like: which applications should run inside its containers, which volumes the Pods should mount, its labels, and more.

When a Deployment’s Pod template is changed, new Pods are automatically created to replace the old ones.
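
The pace of that replacement is controlled by the Deployment’s update strategy. This fragment of a Deployment spec (the values are illustrative) shows the rolling-update knobs:

    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1             # at most one extra pod during the rollout
          maxUnavailable: 0       # keep all replicas serving while new pods come up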

The example below shows a Deployment with the basic config parameters required to run the BIG-IP Controller in Kubernetes.

basic_deployment.yaml
     apiVersion: extensions/v1beta1
     kind: Deployment
     metadata:
       name: k8s-bigip-ctlr-deployment
       namespace: kube-system
     spec:
       # DO NOT INCREASE REPLICA COUNT
       replicas: 1
       template:
         metadata:
           name: k8s-bigip-ctlr
           labels:
             app: k8s-bigip-ctlr
         spec:
           # Name of the Service Account bound to a Cluster Role with the required
           # permissions
           serviceAccountName: bigip-ctlr
           containers:
             - name: k8s-bigip-ctlr
               image: "f5networks/k8s-bigip-ctlr"
               env:
                 - name: BIGIP_USERNAME
                   valueFrom:
                     secretKeyRef:
                       # Replace with the name of the Secret containing your login
                       # credentials
                       name: bigip-login
                       key: username
                 - name: BIGIP_PASSWORD
                   valueFrom:
                     secretKeyRef:
                       # Replace with the name of the Secret containing your login
                       # credentials
                       name: bigip-login
                       key: password
               command: ["/app/bin/k8s-bigip-ctlr"]
               args: [
                 # See the k8s-bigip-ctlr documentation for information about
                 # all config options
                 # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
                 "--bigip-username=$(BIGIP_USERNAME)",
                 "--bigip-password=$(BIGIP_PASSWORD)",
                 "--bigip-url=<ip_address-or-hostname>",
                 "--bigip-partition=<name_of_partition>",
                 "--pool-member-type=nodeport",
                 "--agent=as3",
                 ]
           imagePullSecrets:
             # Secret that gives access to a private docker registry
             - name: f5-docker-images
             # Secret containing the BIG-IP system login credentials
             - name: bigip-login

Various configuration options are available in the CIS (Container Ingress Services) controller:

args:
    - "--bigip-username=$(BIGIP_USERNAME)"
    - "--bigip-password=$(BIGIP_PASSWORD)"
    - "--bigip-url=192.168.200.92"
    - "--bigip-partition=k8s"
    - "--namespace=default"
    # This option can be either cluster or nodeport
    - "--pool-member-type=nodeport"
    - "--log-level=DEBUG"
    - "--insecure=true"
    - "--manage-ingress=false"
    - "--manage-routes=false"
    - "--manage-configmaps=true"
    - "--agent=as3"
    - "--as3-validation=true"
    - "--node-label-selector=f5role=worker"

Quickstart

  1. Create CIS Controller, BIG-IP credentials, and RBAC Authentication.

    Configuration options available in the CIS controller:

    args: [
            # See the k8s-bigip-ctlr documentation for information about
            # all config options
            # https://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest
            "--bigip-username=$(BIGIP_USERNAME)",
            "--bigip-password=$(BIGIP_PASSWORD)",
            # Replace with the IP address or hostname of your BIG-IP device
            "--bigip-url=192.168.200.91",
            "--bigip-partition=k8s",
            "--namespace=default",
            "--pool-member-type=cluster",
            "--flannel-name=fl-vxlan",
            # Logging level
            "--log-level=DEBUG",
            "--log-as3-response=true",
            # AS3 override functionality
            "--override-as3-declaration=default/f5-as3-configmap",
            # Self-signed cert
            "--insecure=true",
            "--agent=as3",
        ]
    

    Note

    The CIS controller is configured with the override-as3-declaration option. This allows the BIG-IP administrator to add global policies and profiles to the virtual server without the need for an additional annotation. The example below adds a WAF policy and a logging profile. Create this ConfigMap for the configuration to be applied. The ConfigMap name, namespace, tenant, and AS3 application all need to match the override reference, and all of the objects need to be defined under the virtual server.

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: f5-as3-configmap
      namespace: default
    data:
      template: |
        {
            "declaration": {
                "k8s_AS3": {
                    "Shared": {
                        "ingress_10_192_75_108_80": {
                            "securityLogProfiles": [
                                {
                                    "bigip": "/Common/Log all requests"
                                }
                            ],
                            "policyWAF": {
                                "bigip": "/Common/WAF_Policy"
                            }
                        }
                    }
                }
            }
        }
    
  2. Add BIG-IP credentials and RBAC Authentication:

    #create kubernetes bigip container connector, authentication and RBAC
    kubectl create secret generic bigip-login -n kube-system --from-literal=username=admin --from-literal=password=f5PME123
    kubectl create serviceaccount k8s-bigip-ctlr -n kube-system
    kubectl create clusterrolebinding k8s-bigip-ctlr-clusteradmin --clusterrole=cluster-admin --serviceaccount=kube-system:k8s-bigip-ctlr
    kubectl create -f f5-cluster-deployment.yaml
    kubectl create -f f5-bigip-node.yaml
    
  3. Create Ingress and configmap:

    kubectl create -f f5-as3-configmap.yaml
    kubectl create -f f5-k8s-ingress.yaml
    
  4. Enable logging for AS3:

    kubectl get pods -n kube-system
    kubectl logs -f f5-server### -n kube-system | grep -i 'as3'
    
  5. Delete the Kubernetes BIG-IP container connector, authentication, and RBAC:

    #delete kubernetes bigip container connector, authentication and RBAC
    kubectl delete node bigip1
    kubectl delete deployment k8s-bigip-ctlr-deployment -n kube-system
    kubectl delete clusterrolebinding k8s-bigip-ctlr-clusteradmin
    kubectl delete serviceaccount k8s-bigip-ctlr -n kube-system
    kubectl delete secret bigip-login -n kube-system
    

Repository

See the repository on GitHub for more examples.


Note

To provide feedback on Container Ingress Services or this documentation, you can file a GitHub Issue.