BIG-IP Controller Modes
The pool-member-type option determines the mode the Controller runs in: nodeport or cluster. This document describes each option to help you decide which Controller mode is best for your deployment.
Nodeport mode is the default mode of operation for the BIG-IP Controller in Kubernetes. This mode is easier to set up since it supports all Kubernetes Cluster Networks, and has no specific BIG-IP licensing requirements.
Nodeport mode uses 2-tier load balancing:
- The BIG-IP Platform load balances requests to Nodes (kube-proxy).
- Nodes (kube-proxy) load balance requests to Pods.
- The Kubernetes Services you manage must use type: NodePort.
- The BIG-IP system can't load balance directly to Pods, which means:
  - Some BIG-IP services, like L7 persistence, won't behave as expected.
  - Requests incur additional network latency.
  - The BIG-IP Controller has limited visibility into Pod health.
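For reference, a Service the Controller could pool in nodeport mode might look like the following sketch (the name, labels, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  type: NodePort          # required in nodeport mode
  selector:
    app: my-app           # illustrative Pod label
  ports:
  - port: 80              # Service port
    targetPort: 8080      # container port
    # nodePort: 30080     # optional; Kubernetes assigns one if omitted
```

The BIG-IP system load balances to the allocated NodePort on each Node, and kube-proxy forwards the traffic on to the Pods.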
To use nodeport mode, continue on to Install the BIG-IP Controller in Kubernetes.
You should use cluster mode if you intend to integrate your BIG-IP device into the Kubernetes cluster network.
OpenShift users must run the BIG-IP Controller in cluster mode.
Cluster mode requires a Better or Best license that includes SDN services and advanced routing. While cluster mode requires additional networking configuration, it has distinct benefits over nodeport mode:
- You can use any type of Kubernetes Services.
- The BIG-IP system can load balance directly to any Pod in the Cluster, providing:
  - Full BIG-IP services, including L7 persistence.
- The BIG-IP Controller has full visibility into Pod health via the Kubernetes API.
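Because cluster mode does not restrict the Service type, a plain ClusterIP Service works as well. A minimal sketch (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # illustrative name
spec:
  type: ClusterIP         # any Service type works in cluster mode
  selector:
    app: my-app           # illustrative Pod label
  ports:
  - port: 80
    targetPort: 8080
```

Since the BIG-IP system participates in the cluster network, it load balances straight to the Pod endpoints behind this Service rather than to a NodePort.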
To run BIG-IP Controller in cluster mode, continue on to Network considerations.
When deciding how to integrate your BIG-IP device into the cluster network, consider which operations you must perform manually and which the BIG-IP Controller handles automatically. In general, the manual operations occur far less frequently than the automatic ones. The list below shows common operations for a typical Kubernetes cluster, from most frequent to least frequent.
- Add or remove Pods from an existing Service, or expose a Service with Pods.
- Add or remove a Node from the Cluster.
- Create a new Kubernetes Cluster from scratch.
The BIG-IP Controller always manages BIG-IP system configurations for Pods automatically. For Nodes and Clusters, you may have to perform some actions manually (or automate them using a different system, like Ansible). Take these factors into account when deciding how to set up your cluster network, or how to integrate the BIG-IP Controller and a BIG-IP device into an existing cluster.
BIG-IP platforms support several overlay networks, like VXLAN, NVGRE, and IPIP. The manual steps noted in the table apply when integrating a BIG-IP device into any overlay network, not just the examples shown here.
The examples below are for instructional purposes only.
| Network Type | Add Cluster | Add Node(s) |
|--------------|-------------|-------------|
| **Layer 2 networks** | | |
| OpenShift SDN | Create a new OpenShift HostSubnet for the BIG-IP self IP. | None. The BIG-IP Controller automatically detects OpenShift Nodes and makes the necessary BIG-IP system configurations. |
| flannel VXLAN | | None. The BIG-IP Controller automatically detects Kubernetes Nodes and makes the necessary BIG-IP system configurations. |
| **Layer 3 networks** | | |
| Calico | Set up BGP peering between the BIG-IP device and Calico. | None. Managed by BGP. Note: Depending on the BGP configuration, you may need to update the BGP neighbor table. |
| flannel host-gw | Configure routes in flannel and on the BIG-IP device for per-node subnet(s). | Add/update per-node subnet routes on the BIG-IP device. |

Notes:

- Review the k8s-bigip-ctlr configuration parameters.
- See Publishing Services - Service Types in the Kubernetes documentation.
- See the f5-ansible repo on GitHub for Ansible modules that can manipulate F5 products.
- Be sure to use the correct encapsulation format for your network.
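For flannel host-gw, the per-node subnet routes can be added on the BIG-IP device from the tmsh command line. A hedged sketch follows; the route name, subnet, and gateway IP are illustrative, and your flannel configuration determines the real values:

```shell
# Route the flannel subnet assigned to a Node (e.g. 10.244.1.0/24)
# through that Node's host IP. Repeat for each Node, and add or
# remove routes as Nodes join or leave the Cluster.
tmsh create net route k8s-node1-subnet network 10.244.1.0/24 gw 192.168.10.11
```

These are the routes you must keep in sync manually (or with an automation tool such as Ansible) whenever Nodes change.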