What is OCI Container Engine for Kubernetes?
- A fully-managed, scalable, and highly available Kubernetes service.
- Abbreviated as OKE.
- OKE is ISO-compliant (ISO/IEC 27001, 27017, and 27018).
- You can access your OKE clusters using:
- The Kubernetes command line — kubectl.
- The Kubernetes Dashboard.
- The Kubernetes API.
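As a rough illustration, a kubeconfig for an OKE cluster can be generated with the OCI CLI and then used by kubectl; the cluster OCID, region, and file path below are placeholders, and the exact create-kubeconfig options depend on your CLI version (check `oci ce cluster create-kubeconfig --help`):

```
# Placeholders throughout; assumes the OCI CLI is installed and configured.
# Generate a kubeconfig for the cluster:
oci ce cluster create-kubeconfig \
  --cluster-id ocid1.cluster.oc1..aaaa... \
  --file $HOME/.kube/config \
  --region us-phoenix-1

# Use kubectl against the cluster:
kubectl get nodes
kubectl get pods --all-namespaces

# Access the Kubernetes Dashboard (if deployed) through a local proxy:
kubectl proxy
```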
OKE Concepts
Clusters & Nodes
- A Kubernetes cluster is a group of nodes.
- The nodes are the machines running applications.
- Each node can be a physical machine or a virtual machine.
- The node’s capacity (CPU & memory) is defined when the node is created.
- A cluster comprises:
- 1 or more master nodes
- 1 or more worker nodes a.k.a. minions
- A Kubernetes cluster can be organized into namespaces.
- Initially, a cluster has the following namespaces:
- default, for resources with no other namespace
- kube-system, for resources created by the Kubernetes system
- kube-node-lease, for one lease object per node to help determine node availability
- kube-public, usually used for resources that have to be accessible across the cluster
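A quick way to see these namespaces, and the system resources they hold, is with standard kubectl commands (output varies by cluster):

```
# List the namespaces created with the cluster:
kubectl get namespaces

# Inspect resources Kubernetes itself runs, e.g. in kube-system:
kubectl get pods --namespace kube-system

# One lease object per node lives in kube-node-lease:
kubectl get leases --namespace kube-node-lease
```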
Kubernetes Processes
- The master nodes in a cluster run a number of processes:
- kube-apiserver to support API operations via kubectl & REST API
- includes admission controllers required for advanced Kubernetes operations
- kube-controller-manager to manage different Kubernetes components e.g.:
- replication controller
- endpoints controller
- namespace controller
- serviceaccounts controller
- kube-scheduler to decide where in the cluster to run pods
- etcd to store the cluster’s configuration data
- Each worker node runs two Kubernetes processes:
- kubelet to communicate with the master nodes
- kube-proxy to handle networking
- Each worker node also runs a container runtime (Docker in older Kubernetes versions; CRI-O in newer ones).
The Kubernetes processes running on the master nodes are collectively referred to as the Kubernetes Control Plane. Together, the Control Plane processes monitor & record the state of the cluster & distribute requested operations between the nodes in the cluster.
Pods
- Where an application running on a worker node comprises multiple containers, Kubernetes groups the containers into a single logical unit called a pod for easy management and discovery.
- The containers in the pod share the same networking namespace & the same storage space, and can be managed as a single object by the Kubernetes Control Plane.
- A number of pods providing the same functionality can be grouped into a single logical set known as a service.
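To make the pod concept concrete, here is a minimal sketch of a two-container pod whose containers share the pod’s network namespace and a common emptyDir volume; the names and images are illustrative only:

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar        # illustrative name
spec:
  volumes:
    - name: shared-logs         # storage shared by both containers
      emptyDir: {}
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-tailer          # sidecar; shares the pod's IP with the web container
      image: busybox:1.36
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs
EOF
```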
Pod Spec / Manifest
- A Kubernetes manifest file comprises instructions in a YAML or JSON file that specify how to deploy an application to the node or nodes in a Kubernetes cluster.
- The instructions include information about the Kubernetes deployment, the Kubernetes service, & other Kubernetes objects to be created on the cluster.
- The manifest is commonly also referred to as a pod spec, or as a deployment.yaml file (although other filenames are allowed).
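For instance, a minimal deployment.yaml combining a Deployment and a Service might look like the sketch below; the names, image, and replica count are illustrative, and on OKE a Service of type LoadBalancer provisions an OCI load balancer:

```
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app               # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: LoadBalancer            # OKE provisions an OCI load balancer
  selector:
    app: hello-app
  ports:
    - port: 80
      targetPort: 80
EOF

# Apply the manifest to the cluster:
kubectl apply -f deployment.yaml
```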
Node Pools
- A node pool is a subset of machines within a cluster that all have the same configuration.
- Node pools enable you to create pools of machines within a cluster that have different configurations.
- e.g. you might create one pool of nodes in a cluster as virtual machines, and another pool of nodes as bare metal machines.
- A cluster must have a minimum of one node pool, but a node pool need not contain any worker nodes.
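As a sketch, a node pool can be added to an existing cluster with the OCI CLI; the OCIDs, pool name, shape, and Kubernetes version below are placeholders, and the exact flags (for example, for subnet placement and pool size) vary by CLI version, so check `oci ce node-pool create --help`:

```
# Placeholders throughout; additional placement and size flags are required
# in practice (see `oci ce node-pool create --help` for your CLI version).
oci ce node-pool create \
  --cluster-id ocid1.cluster.oc1..aaaa... \
  --compartment-id ocid1.compartment.oc1..aaaa... \
  --name bare-metal-pool \
  --node-shape BM.Standard2.52 \
  --kubernetes-version v1.26.2

# List the node pools in a cluster:
oci ce node-pool list \
  --compartment-id ocid1.compartment.oc1..aaaa... \
  --cluster-id ocid1.cluster.oc1..aaaa...
```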
Admission Controllers
- A Kubernetes admission controller intercepts authenticated & authorized requests to the Kubernetes API server before admitting an object (such as a pod) to the cluster.
- An admission controller can validate an object, or modify it, or both.
- Many advanced features in Kubernetes require an enabled admission controller.
- The Kubernetes version you select when you create an OKE cluster determines the admission controllers supported by that cluster.
CIDR Blocks in OKE
- When configuring the VCN & the worker node & load balancer subnets for use with OKE, you specify CIDR blocks to indicate the network addresses that can be allocated to the resources.
- When creating an OKE cluster, you specify:
- CIDR blocks for the Kubernetes services
- CIDR blocks that can be allocated to pods running in the cluster
- The VCN CIDR block must not overlap with the CIDR block you specify for the Kubernetes services.
- The CIDR blocks you specify for pods running in the cluster must not overlap with CIDR blocks you specify for worker node & load balancer subnets.
- Each pod running on a worker node is assigned its own network address.
- OKE allocates a /24 CIDR block for each worker node in a cluster, to assign to pods running on that node.
- A /24 CIDR block equates to 256 distinct IP addresses, of which one is reserved.
- So 255 addresses are available to assign to pods running on each worker node.
- When you create a cluster, you specify a CIDR block for pods.
- This cannot be changed after cluster creation.
- This constrains the maximum total number of network addresses available for allocation to pods running on all the nodes in the cluster, & therefore effectively limits the number of nodes in the cluster.
- By default, a /16 CIDR block is used, making 65,536 network addresses available for all the nodes in the cluster.
- Since 256 network addresses are allocated for each node, specifying a /16 CIDR block limits the number of nodes in the cluster to 256.
- To support more than 256 nodes in a cluster, specify a larger CIDR block.
- e.g. specify a /14 CIDR block to support 262,144 network addresses.
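The arithmetic can be checked directly: a /16 pod CIDR yields 2^(32-16) = 65,536 addresses and, at a /24 (256 addresses) per node, a 256-node limit, while a /14 yields 262,144 addresses and a 1,024-node limit.

```
# Addresses in the pod CIDR divided by the 256 addresses allocated per node:
echo $(( 2**(32-16) / 256 ))   # /16 -> 65536 / 256 = 256 nodes
echo $(( 2**(32-14) / 256 ))   # /14 -> 262144 / 256 = 1024 nodes
```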
- OKE gives names to worker nodes in the format oke-c<part-of-cluster-OCID>-n<part-of-node-pool-OCID>-s<part-of-subnet-OCID>-<slot>, where <slot> is the ordinal number of the node in the subnet (e.g. 0, 1).
- Do not change the worker nodes’ auto-generated names.
- To ensure HA, OKE:
- creates the Kubernetes Control Plane on multiple Oracle-managed master nodes, distributing the master nodes across different ADs in a region
- creates worker nodes in each of the FDs in an AD, distributing the worker nodes as evenly as possible across the FDs
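You can see how OKE has spread the worker nodes by inspecting node labels; the zone label below is the standard Kubernetes one (set to the AD), while the exact fault-domain label name may vary by OKE version, so `--show-labels` is the safe fallback:

```
# Show each node's availability-domain (zone) placement:
kubectl get nodes -L topology.kubernetes.io/zone

# Inspect all labels (including any fault-domain label OKE applies):
kubectl get nodes --show-labels
```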
kubeconfig
- A single kubeconfig file can include the details for multiple clusters, as multiple contexts.
- The cluster on which operations will be performed is specified by the current-context: element in the kubeconfig file.
- A kubeconfig file includes an OCI CLI command that dynamically generates an auth token & inserts it when you run a kubectl command.
- The OCI CLI must be available on your shell’s executable path $PATH.
- The auth tokens generated by the OCI CLI command in the kubeconfig file are short-lived, cluster-scoped, & specific to individual users.
- You cannot share kubeconfig files between users to access Kubernetes clusters.
- The OCI CLI command in the kubeconfig file uses your current CLI profile when generating an auth token.
- If you have defined multiple profiles in different tenancies in the CLI configuration file ~/.oci/config, specify which profile to use when generating the auth token with either the --profile argument or the OCI_CLI_PROFILE environment variable (see the sketch at the end of this section).
- The auth tokens generated by the OCI CLI command in the kubeconfig file are appropriate for authenticating individual users accessing the cluster using kubectl & the Kubernetes Dashboard.
- The generated auth tokens are unsuitable if you want other processes & tools to access the cluster, such as CI/CD tools.
- In this case, consider creating a Kubernetes service account & adding its associated auth token to the kubeconfig file.
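Two common kubeconfig-related tasks are sketched below with placeholder names: selecting a non-default CLI profile for the token-generating command, and creating a service account token for a CI/CD tool. The cluster-admin binding is shown only for brevity (grant a narrower role in practice), and `kubectl create token` requires Kubernetes 1.24 or later:

```
# 1. Point the OCI CLI command embedded in the kubeconfig at a specific
#    profile (the exec plugin inherits kubectl's environment):
export OCI_CLI_PROFILE=DEV                 # placeholder profile name
kubectl get nodes

# 2. Create a service account for a CI/CD tool and add its token to a
#    kubeconfig (names are placeholders):
kubectl --namespace kube-system create serviceaccount ci-deployer
kubectl create clusterrolebinding ci-deployer-binding \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:ci-deployer

# Request a token for the service account (Kubernetes 1.24+):
TOKEN=$(kubectl --namespace kube-system create token ci-deployer)

# Register the token as a credential and a context for the tool to use
# (<cluster-name> is the cluster entry already present in the kubeconfig):
kubectl config set-credentials ci-deployer --token="$TOKEN"
kubectl config set-context ci-context --cluster=<cluster-name> --user=ci-deployer
```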