
Kubernetes


This Ansible role deploys a K3s Kubernetes cluster. Without any cluster configuration, the role installs K3s as a single-node cluster. To deploy a multi-node cluster, add your cluster configuration and K3s will be installed on all nodes. On completion you will have a fully configured cluster in a ready-to-use state. This role can be used with our playbooks, and it is also included, along with the playbook, within our Ansible Execution Environment.
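
If you are not using our playbooks, a minimal play applying the role directly might look like the following sketch (the role name and host group are assumptions, adjust them to match your inventory and collection; note the role requires fact gathering and become):

- name: Deploy K3s
  hosts: kubernetes          # hypothetical inventory group
  gather_facts: true         # required by the role
  become: true               # required by the role
  roles:
    - nfc_kubernetes         # assumed role name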

Role Details

| Item            | Value                | Description                                 |
|-----------------|----------------------|---------------------------------------------|
| Dependent Roles | None                 |                                             |
| Optional Roles  | nfc_firewall         | Used to set up the firewall for kubernetes. |
| Idempotent      | Yes                  |                                             |
| Stats Available | Not Yet              |                                             |
| Tags            | Nil                  |                                             |
| Requirements    | Gather Facts, become |                                             |

Features

  • CNI setup: Calico, including the calicoctl plugin

    kubectl calico .... instead of calicoctl ....

  • Configurable:

    • Container Registries

    • etcd deployment

      • etcd snapshot cron schedule

      • etcd snapshot retention

    • Cluster Domain

    • System-reserved CPU, storage and memory

    • Node Labels

    • Node Taints

    • Service Load Balancer Namespace

  • Encryption between nodes (Wireguard)

  • Firewall configured for kubernetes host

  • Multi-node Deployment

  • OpenID Connect SSO Authentication

  • Basic RBAC ClusterRoles and Bindings

  • ToDo-#5 Restore backup on fresh install of a cluster

  • Install OLM for operator subscriptions

  • Install MetalLB

  • Install KubeVirt including virtctl plugin

    kubectl virt .... instead of virtctl ....

  • Install the Helm Binary

  • Upgrade cluster

Role Workflow

For the best chance of success, this role installs and configures the prime master first, then the other master(s), then the worker nodes, using the following simplified workflow:

  1. Download both the install script and the k3s binary to the Ansible controller

  2. Copy the install script and k3s binary to each host

  3. Create the config files required for installation

  4. (kubernetes prime master only) Add the additional config files required for installation

  5. Install kubernetes

  6. (kubernetes prime master only) Wait for kubernetes to be ready. The playbook pauses until the cluster is ready

  7. Configure Kubernetes

  8. Install KubeVirt
If the playbook is set up as per our recommendation, steps 2 onward are performed first on the master nodes and then on the worker nodes.
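
One way to achieve that ordering, assuming hypothetical inventory groups kubernetes_master and kubernetes_worker, is to target the masters and workers in separate, consecutive plays:

- name: Install K3s on the master nodes
  hosts: kubernetes_master   # hypothetical group containing all master nodes
  gather_facts: true
  become: true
  roles:
    - nfc_kubernetes         # assumed role name

- name: Install K3s on the worker nodes
  hosts: kubernetes_worker   # hypothetical group containing all worker nodes
  gather_facts: true
  become: true
  roles:
    - nfc_kubernetes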

Tip

If you prefer to restart the kubernetes service manually, the following variables can be set to true to prevent the role from restarting it:

nfc_kubernetes_no_restart: true
nfc_kubernetes_no_restart_master: true
nfc_kubernetes_no_restart_prime: true
nfc_kubernetes_no_restart_slave: true
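
These variables can also be set at run time; for example (the playbook name is illustrative):

ansible-playbook deploy-kubernetes.yaml --extra-vars "nfc_kubernetes_no_restart=true"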
See the default variables below for an explanation of each variable.

Default Variables

On viewing these variables you will notice that there are single keys prefixed with nfc_role_kubernetes_ and a dictionary of dictionaries, kubernetes_config. The variables prefixed with nfc_role_kubernetes_ are intended for single-node installs, while the kubernetes_config dictionary contains the configuration for an entire cluster. The kubernetes_config variables take precedence. Even if you are installing a cluster on multiple nodes, you are still advised to review the variables prefixed with nfc_role_kubernetes_ as some may still be needed; for example, to set a node's type use the keys nfc_role_kubernetes_prime, nfc_role_kubernetes_master and nfc_role_kubernetes_worker.
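
For example, node types for a multi-node cluster could be set in each node's host_vars (a sketch; values are illustrative):

# host_vars for the prime master node
nfc_role_kubernetes_prime: true
nfc_role_kubernetes_master: true

# host_vars for any other master node
nfc_role_kubernetes_master: true

# host_vars for a worker node
nfc_role_kubernetes_worker: true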

defaults/main.yaml
# Deprecated:
#      Calico is being migrated to use the calico operator.
#      In a near-future release, this method of deploying calico
#      will be removed. Use tag `operator_migrate_calico` to migrate
calico_image_tag: v3.25.0 # Deprecated
# EoF Deprecated
# SoF New Variables
nfc_role_kubernetes_calico_version: v3.27.0
# nfc_kubernetes_tigera_operator_registry: quay.io
# nfc_kubernetes_tigera_operator_image: tigera/operator
# nfc_kubernetes_tigera_operator_tag: v1.32.3               # Calico v3.27.0
# EoF New Variables, EoF Deprecated


nfc_kubernetes_enable_metallb: false
nfc_kubernetes_enable_servicelb: false


nfc_role_kubernetes_container_images:

  kubevirt_operator:
    name: Kubevirt Operator
    registry: quay.io
    image: kubevirt/virt-operator
    tag: v1.2.0

  tigera_operator:
    name: Tigera Operator
    registry: quay.io
    image: tigera/operator
    tag: v1.32.3    # Calico v3.27.0


nfc_role_kubernetes_cluster_domain: cluster.local

nfc_role_kubernetes_configure_firewall: true

nfc_role_kubernetes_etcd_enabled: false

nfc_role_kubernetes_install_olm: false

nfc_role_kubernetes_install_helm: true

nfc_role_kubernetes_install_kubevirt: false

nfc_role_kubernetes_kubevirt_operator_replicas: 1

nfc_role_kubernetes_node_labels: {}     # Optional, Dict. Node labels.
nfc_role_kubernetes_node_taints: {}     # Optional, Dict. Node taints.
# nfc_role_kubernetes_node_prime: ''    # Mandatory*, string. the inventory_hostname of the prime node. ONLY required for multi-node deployments

nfc_role_kubernetes_oidc_enabled: false

nfc_role_kubernetes_resolv_conf_file: /etc/resolv.conf

nfc_role_kubernetes_pod_subnet: 172.16.248.0/21
nfc_role_kubernetes_service_subnet: 172.16.244.0/22

nfc_role_kubernetes_prime: false                         # Mandatory for a node designated as the prime master node
nfc_role_kubernetes_master: false                        # Mandatory for a node designated as a master node and the prime master node
nfc_role_kubernetes_worker: false                       # Mandatory for a node designated as a worker node

############################################################################################################
#
#                     Old Vars requiring refactoring
#
# ############################################################################################################


KubernetesVersion: '1.26.12'                                # must match the repository release version
kubernetes_version_olm: '0.27.0'



KubernetesVersion_k3s_prefix: '+k3s1'

kubernetes_private_container_registry: []                  # Optional, Array. if none use `[]`

kubernetes_etcd_snapshot_cron_schedule: '0 */12 * * *'
kubernetes_etcd_snapshot_retention: 5

# host_external_ip: ''                                     # Optional, String. External IP Address for host.

kube_apiserver_arg_audit_log_maxage: 2

kubelet_arg_system_reserved_cpu: 450m
kubelet_arg_system_reserved_memory: 512Mi
kubelet_arg_system_reserved_storage: 8Gi


nfc_kubernetes:
  enable_firewall: false             # Optional, bool. Enable firewall rules from role 'nfc_firewall'

nfc_kubernetes_no_restart: false         # Set to true to prevent role from restarting kubernetes on the host(s)
nfc_kubernetes_no_restart_master: false  # Set to true to prevent role from restarting kubernetes on master host(s)
nfc_kubernetes_no_restart_prime: false   # Set to true to prevent role from restarting kubernetes on prime host
nfc_kubernetes_no_restart_slave: false   # Set to true to prevent role from restarting kubernetes on slave host(s)


k3s:
  files:

    - name: audit.yaml
      path: /var/lib/rancher/k3s/server
      content: |
        apiVersion: audit.k8s.io/v1
        kind: Policy
        rules:
        - level: Request
      when: "{{ nfc_role_kubernetes_master }}"

    - name: 90-kubelet.conf
      path: /etc/sysctl.d
      content: |
        vm.panic_on_oom=0
        vm.overcommit_memory=1
        kernel.panic=10
        kernel.panic_on_oops=1
        kernel.keys.root_maxbytes=25000000

    - name: psa.yaml
      path: /var/lib/rancher/k3s/server
      content: ""
        # apiVersion: apiserver.config.k8s.io/v1
        # kind: AdmissionConfiguration
        # plugins:
        # - name: PodSecurity
        #   configuration:
        #     apiVersion: pod-security.admission.config.k8s.io/v1beta1
        #     kind: PodSecurityConfiguration
        #     defaults:
        #       enforce: "restricted"
        #       enforce-version: "latest"
        #       audit: "restricted"
        #       audit-version: "latest"
        #       warn: "restricted"
        #       warn-version: "latest"
        #     exemptions:
        #       usernames: []
        #       runtimeClasses: []
        #       namespaces: [kube-system]
      when: "{{ nfc_role_kubernetes_prime | bool }}"


#############################################################################################
# Cluster Config when stored in Inventory
#
# One required per cluster. recommend creating one ansible host group per cluster.
#############################################################################################
# kubernetes_config:                    # Dict. Cluster Config
#   cluster:
#     access:                           # Mandatory. List, DNS host name or IPv4/IPv6 Address.
#                                       # if none use '[]'
#       - 'my.dnshostname.com'
#       - '2001:4860:4860::8888'
#       - '192.168.1.1'
#     domain_name: earth                # Mandatory, String. Cluster Domain Name
#     group_name:                       # Mandatory, String. Name of the ansible inventory group containing all cluster hosts
#     prime:
#       name: k3s-prod                  # Mandatory, String. Ansible inventory_host that will
#                                       # act as the prime master node.
#     networking:
#       encrypt: true                   # Optional, Boolean. default `false`. Install wireguard for inter-node encryption
#       podSubnet: 172.16.70.0/24       # Mandatory, String. CIDR
#       ServiceSubnet: 172.16.72.0/24   # Mandatory, String. CIDR
#
#
#  helm:
#    enabled: true       # Optional, Boolean. default=false. Install Helm Binary
#
#
#  kube_virt:
#    enabled: false      # Optional, Boolean. default=false. Install KubeVirt
#
#    nodes: []           # Optional, List of String. default=inventory_hostname. List of nodes to install kubevirt on.
#
#    operator:
#      replicas: 2       # Optional, Integer. How many virt_operators to deploy.
#
#
#  oidc:                                                    # Used to configure Kubernetes with OIDC Authentication.
#    enabled: true                                          # Mandatory, Boolean. Speaks for itself.
#    issuer_url: https://domainname.com/realms/realm-name   # Mandatory, String. URL of OIDC Provider
#    client_id: kubernetes-test                             # Mandatory, string. OIDC Client ID
#    username_claim: preferred_username                     # Mandatory, String. Claim name containing username.
#    username_prefix: oidc                                  # Optional, String. What to prefix to username
#    groups_claim: roles                                    # Mandatory, String. Claim name containing groups
#    groups_prefix: ''                                      # Optional, String. What to prefix to groups
#
#   hosts:
#
#     my-host-name:
#       labels:
#         mylabel: myvalue
#
#       taints:
#         - effect: NoSchedule
#           key: taintkey
#           value: taintvalue
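
Putting the commented reference above together, a filled-in kubernetes_config for a small cluster might look like the following sketch (all host names, domains and subnets are illustrative):

kubernetes_config:
  cluster:
    access:
      - 'k3s.example.com'
    domain_name: example
    group_name: kubernetes_prod        # hypothetical inventory group for this cluster
    prime:
      name: k3s-master-01              # hypothetical inventory_hostname of the prime master
    networking:
      encrypt: true
      podSubnet: 172.16.70.0/24
      ServiceSubnet: 172.16.72.0/24
  helm:
    enabled: true
  oidc:
    enabled: true
    issuer_url: https://sso.example.com/realms/kubernetes
    client_id: kubernetes
    username_claim: preferred_username
    username_prefix: oidc
    groups_claim: roles
  hosts:
    k3s-worker-01:                     # hypothetical worker node
      labels:
        mylabel: myvalue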

About:

This page forms part of our Kubernetes Ansible Collection project.

Page Metadata
Version: ToDo: place files short git commit here
Date Created: 2023-10-24
Date Edited: 2024-03-29

Contribution:

Would you like to contribute to our Kubernetes Ansible Collection project? You can assist in the following ways:

 

ToDo: Add the page list of contributors