Merge branch 'feat-cni-calico-operator' into 'development'

feat: cni migration to calico operator

Closes #3

See merge request nofusscomputing/projects/ansible/kubernetes!17
2024-02-02 08:24:07 +00:00
22 changed files with 30500 additions and 69 deletions


@@ -19,3 +19,25 @@ Ansible-roles.Submodule.Deploy:
GIT_COMMIT_TYPE: feat
GIT_COMMIT_TYPE_CATEGORY: $CI_PROJECT_NAME
GIT_CONFIG_SUBMODULE_NAME: nfc_kubernetes
Website.Submodule.Deploy:
extends: .submodule_update_trigger
variables:
SUBMODULE_UPDATE_TRIGGER_PROJECT: nofusscomputing/infrastructure/website
environment:
url: https://nofusscomputing.com/$PAGES_ENVIRONMENT_PATH
name: Documentation
rules:
- if: # condition_dev_branch_push
$CI_COMMIT_BRANCH == "development" &&
$CI_PIPELINE_SOURCE == "push"
exists:
- '{docs/**,pages/**}/*.md'
changes:
paths:
- '{docs/**,pages/**}/*.md'
compare_to: 'master'
when: always
- when: never


@@ -13,7 +13,7 @@
![GitHub forks](https://img.shields.io/github/forks/NofussComputing/ansible_role_homeassistant?logo=github&style=plastic&color=000000&label=Forks) ![GitHub stars](https://img.shields.io/github/stars/NofussComputing/ansible_role_homeassistant?color=000000&logo=github&style=plastic) ![GitHub Watchers](https://img.shields.io/github/watchers/NofussComputing/ansible_role_homeassistant?color=000000&label=Watchers&logo=github&style=plastic)
![GitHub forks](https://img.shields.io/github/forks/NofussComputing/ansible_role_nfc_kubernetes?logo=github&style=plastic&color=000000&label=Forks) ![GitHub stars](https://img.shields.io/github/stars/NofussComputing/ansible_role_nfc_kubernetes?color=000000&logo=github&style=plastic) ![GitHub Watchers](https://img.shields.io/github/watchers/NofussComputing/ansible_role_nfc_kubernetes?color=000000&label=Watchers&logo=github&style=plastic)
<br>
This project is hosted on [GitLab](https://gitlab.com/nofusscomputing/projects/ansible/kubernetes) and has a read-only copy hosted on [GitHub](https://github.com/NofussComputing/ansible_role_nfc_kubernetes).


@@ -1,5 +1,29 @@
KubernetesPodSubnet: 10.85.0.0/16
KubernetesServiceSubnet: 10.86.0.0/16
# Deprecated:
# Calico is being migrated to use the Calico operator.
# In a near-future release, this method of deploying Calico
# will be removed. Use tag `operator_migrate_calico` to migrate.
calico_image_tag: v3.25.0 # Deprecated
# EoF Deprecated
# SoF New Variables
nfc_kubernetes_calico_version: v3.27.0
nfc_kubernetes_tigera_operator_registry: quay.io
nfc_kubernetes_tigera_operator_image: tigera/operator
nfc_kubernetes_tigera_operator_tag: v1.32.3 # Calico v3.27.0
# EoF New Variables
nfc_kubernetes_enable_metallb: false
nfc_kubernetes_enable_servicelb: false
############################################################################################################
#
# Old Vars requiring refactoring
#
# ############################################################################################################
# KubernetesPodSubnet: 10.85.0.0/16
# KubernetesServiceSubnet: 10.86.0.0/16
Kubernetes_Prime: false # Optional, Boolean. Is the current host the Prime master?
@@ -9,6 +33,9 @@ ContainerDioVersion: 1.6.20-1
KubernetesVersion: '1.26.2' # must match the repository release version
kubernetes_version_olm: '0.26.0'
KubernetesVersion_k8s_prefix: '-00'
KubernetesVersion_k3s_prefix: '+k3s1'


@@ -6,13 +6,11 @@ template: project.html
about: https://gitlab.com/nofusscomputing/projects/ansible/roles/kubernetes
---
This Ansible role's purpose is to install and configure Kubernetes with configuration from code. You can also use [our playbooks](../../playbooks/index.md) to deploy using this role. This is especially useful if you are also using [our Ansible Execution Environment](../../execution_environment/index.md)
This Ansible role is designed to deploy a K3s Kubernetes cluster. After adding your configuration, the cluster will deploy and have a configured CNI (Calico), ready to use. This role can be used with [our playbooks](../../playbooks/index.md), or comes bundled, along with the playbook, within our [Ansible Execution Environment](../../execution_environment/index.md).
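A minimal invocation sketch; `inventory.yaml` and `site.yaml` are placeholder names for your inventory and a playbook that includes this role, while the tags come from this role's `tasks/main.yaml`:

``` bash
# Deploy or update a cluster with this role.
ansible-playbook -i inventory.yaml site.yaml --tags install

# Migrate an existing manifest-based Calico deployment to the operator.
ansible-playbook -i inventory.yaml site.yaml --tags operator_migrate_calico
```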
## Features
This role deploys a K3s cluster. In addition, it has the following features:
- CNI Setup
- Configurable:
@@ -33,11 +31,11 @@ This role deploys a K3s cluster. In addition, it has the following features:
- Service Load Balancer Namespace
- _[ToDo-#3](https://gitlab.com/nofusscomputing/projects/ansible/kubernetes/-/issues/3)_ Encryption between nodes (Wireguard)
- Encryption between nodes (Wireguard)
- [Firewall configured for kubernetes host](firewall.md)
- _[ToDo-#2](https://gitlab.com/nofusscomputing/projects/ansible/kubernetes/-/issues/2)_ Multi-node Deployment
- Multi-node Deployment
- OpenID Connect SSO Authentication
@@ -47,6 +45,8 @@ This role deploys a K3s cluster. In addition, it has the following features:
- Installs OLM for operator subscriptions
- Installs MetalLB
## Role Workflow
@@ -70,6 +70,7 @@ If the playbook is set up as per [our recommendation](ansible.md), step 2 onwards
!!! tip
If you prefer to restart the Kubernetes service manually, the following variables can be set to prevent the role from restarting it:
``` yaml
nfc_kubernetes_no_restart: false
nfc_kubernetes_no_restart_master: false


@@ -0,0 +1,24 @@
---
title: Release Notes
description: No Fuss Computing's Ansible role nfc_kubernetes
date: 2024-01-31
template: project.html
about: https://gitlab.com/nofusscomputing/projects/ansible/roles/kubernetes
---
This document details any changes that have occurred that may impact users of this role. It's a rolling document and will be amended from time to time.
## Changes with an impact
- _**31 Jan 2024**_ Calico CNI deployment has been migrated to use the Calico operator.
    - All new cluster installations will be deployed with the operator.
    - Existing deployments are required to run a deployment with job tag `operator_migrate_calico` to migrate to the operator.
    - If an issue occurs with the migration, it can be rolled back by running `kubectl delete -f` against all manifests in the `/var/lib/rancher/k3s/ansible` directory and redeploying with job tag `calico_manifest`, which re-deploys Calico using the original manifest. See the sketch below.
    - This tag will be removed in the future, at no set date.
- `ServiceLB` / `klipperLB` no longer deploys by default. To deploy it, variable `nfc_kubernetes_enable_servicelb` must be set to `true`.
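A rollback sketch for the above; the manifest directory is the one used by this role, while `inventory.yaml` and `site.yaml` are placeholder names for your inventory and a playbook that includes this role:

``` bash
# Remove every manifest the operator migration applied.
for manifest in /var/lib/rancher/k3s/ansible/*.yaml; do
    kubectl delete -f "${manifest}"
done

# Re-deploy Calico from the original bundled manifest.
ansible-playbook -i inventory.yaml site.yaml --tags calico_manifest
```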


@@ -39,13 +39,13 @@
)
or
(
inventory_hostname == kubernetes_config.config.prime.name
inventory_hostname == kubernetes_config.cluster.prime.name
and
nfc_kubernetes_no_restart_prime
)
or
(
inventory_hostname in kubernetes_worker
inventory_hostname in groups['kubernetes_worker']
and
nfc_kubernetes_no_restart_slave
)


@@ -43,6 +43,8 @@ nav:
- projects/ansible/roles/kubernetes/rbac.md
- projects/ansible/roles/kubernetes/release_notes.md
- Operations:


@@ -4,16 +4,26 @@
- name: K3s Install
ansible.builtin.include_tasks:
file: k3s/install.yaml
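# 'apply' attaches the 'always' tag to the included tasks so tag-filtered runs do not skip them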
apply:
tags:
- always
when: >
install_kubernetes | default(true) | bool
and
not kubernetes_installed | default(false) | bool
tags:
- always
- name: K3s Configure
ansible.builtin.include_tasks:
file: k3s/configure.yaml
apply:
tags:
- always
when: >
install_kubernetes | default(true) | bool
and
kubernetes_installed | default(false) | bool
tags:
- always


@@ -1,9 +1,25 @@
---
- name: Check for calico deployment manifest
ansible.builtin.stat:
name: /var/lib/rancher/k3s/server/manifests/calico.yaml
become: true
register: file_calico_yaml_metadata
- name: Check for calico Operator deployment manifest
ansible.builtin.stat:
name: /var/lib/rancher/k3s/ansible/deployment-manifest-calico_operator.yaml
become: true
register: file_calico_operator_yaml_metadata
- name: Install Software
ansible.builtin.include_role:
name: nfc_common
vars:
common_gather_facts: false
initial_common_tasks: true # Don't run init tasks
aptInstall:
- name: curl
- name: iptables
@@ -68,19 +84,58 @@
- name: /var/lib/rancher/k3s/server/manifests
state: directory
mode: '0700'
- name: /var/lib/rancher/k3s/ansible
state: directory
mode: '0700'
- name: Add sysctl net.ipv4.ip_forward
ansible.posix.sysctl:
name: net.ipv4.ip_forward
value: '1'
name: "{{ item.name }}"
value: "{{ item.value }}"
sysctl_set: true
state: present
reload: true
notify: reboot_host
loop: "{{ settings }}"
notify: reboot_host # On change reboot
vars:
settings:
- name: net.ipv4.ip_forward
value: '1'
- name: fs.inotify.max_user_watches
value: '524288'
- name: fs.inotify.max_user_instances
value: '512'
when:
- ansible_os_family == 'Debian'
# On change reboot
- name: Check for Network Manager Directory
ansible.builtin.stat:
name: /etc/NetworkManager/conf.d
become: true
register: directory_network_manager_metadata
- name: Network Manager Setup
ansible.builtin.copy:
content: |-
#
# K3s Configuration for Network Manager
#
# Managed By ansible/role/nfc_kubernetes
#
# Don't edit this file directly, as it will be overwritten.
#
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*;interface-name:vxlan.calico;interface-name:vxlan-v6.calico;interface-name:wireguard.cali;interface-name:wg-v6.cali
dest: /etc/NetworkManager/conf.d/calico.conf
mode: '770'
owner: root
group: root
become: true
diff: true
when: directory_network_manager_metadata.stat.exists
- name: Check if K3s Installed
@@ -108,6 +163,7 @@
- 304
dest: "{{ item.dest }}"
mode: "744"
changed_when: false
register: k3s_download_script
delegate_to: localhost
run_once: true
@@ -139,6 +195,7 @@
- 304
dest: "/tmp/k3s.{{ cpu_arch.key }}"
mode: "744"
changed_when: false
register: k3s_download_files
delegate_to: localhost
run_once: true
@@ -183,6 +240,7 @@
mode: '755'
owner: root
group: root
changed_when: false
loop: "{{ install_scripts }}"
vars:
install_scripts:
@@ -223,7 +281,22 @@
notify: kubernetes_restart
- src: "calico.yaml.j2"
dest: /var/lib/rancher/k3s/server/manifests/calico.yaml
when: "{{ kubernetes_config.cluster.prime.name == inventory_hostname }}"
when: >
{{
kubernetes_config.cluster.prime.name == inventory_hostname
and
(
(
not file_calico_operator_yaml_metadata.stat.exists
and
file_calico_yaml_metadata.stat.exists
and
k3s_installed.rc == 0
)
or
'calico_manifest' in ansible_run_tags
)
}}
- src: k3s-registries.yaml.j2
dest: /etc/rancher/k3s/registries.yaml
notify: kubernetes_restart
@@ -252,9 +325,53 @@
cmd: |
INSTALL_K3S_SKIP_DOWNLOAD=true \
INSTALL_K3S_VERSION="v{{ KubernetesVersion }}{{ KubernetesVersion_k3s_prefix }}" \
/tmp/install.sh
/tmp/install.sh --cluster-init
changed_when: false
when: kubernetes_config.cluster.prime.name == inventory_hostname
when: >
kubernetes_config.cluster.prime.name == inventory_hostname
and
k3s_installed.rc == 1
- name: Install Calico Operator
ansible.builtin.include_tasks:
file: migrate_to_operator.yaml
apply:
tags:
- always
when: >-
(
(
'operator_migrate_calico' in ansible_run_tags
or
'operator_calico' in ansible_run_tags
)
or
not file_calico_yaml_metadata.stat.exists
)
and
'calico_manifest' not in ansible_run_tags
and
kubernetes_config.cluster.prime.name == inventory_hostname
- name: Install MetalLB Operator
ansible.builtin.include_tasks:
file: manifest_apply.yaml
apply:
tags:
- always
loop: "{{ manifests }}"
loop_control:
loop_var: manifest
vars:
manifests:
- name: MetalLB Operator
template: Deployment-manifest-MetalLB_Operator.yaml
when: >-
nfc_kubernetes_enable_metallb | default(false) | bool
and
kubernetes_config.cluster.prime.name == inventory_hostname
- name: Wait for kubernetes prime to be ready
@@ -296,14 +413,54 @@
and
kubernetes_olm_install | default(false) | bool
- name: Uninstall OLM
ansible.builtin.shell:
cmd: |
kubectl delete -n olm deployment packageserver;
kubectl delete -n olm deployment catalog-operator;
kubectl delete -n olm deployment olm-operator;
kubectl delete crd catalogsources.operators.coreos.com;
kubectl delete crd clusterserviceversions.operators.coreos.com;
kubectl delete crd installplans.operators.coreos.com;
kubectl delete crd olmconfigs.operators.coreos.com;
kubectl delete crd operatorconditions.operators.coreos.com;
kubectl delete crd operatorgroups.operators.coreos.com;
kubectl delete crd operators.operators.coreos.com;
kubectl delete crd subscriptions.operators.coreos.com;
kubectl delete namespace operators --force;
kubectl delete namespace olm --force;
changed_when: false
failed_when: false
register: uninstall_olm
when: >
kubernetes_config.cluster.prime.name == inventory_hostname
and
'olm_uninstall' in ansible_run_tags
- name: Enable Cluster Encryption
ansible.builtin.command:
cmd: kubectl patch felixconfiguration default --type='merge' -p '{"spec":{"wireguardEnabled":true,"wireguardEnabledV6":true}}'
changed_when: false
failed_when: false # New cluster will fail
when: >
kubernetes_config.cluster.prime.name == inventory_hostname
and
kubernetes_config.cluster.networking.encrypt | default(false) | bool
and
(
'calico_manifest' in ansible_run_tags
or
(
'operator_migrate_calico' not in ansible_run_tags
and
'operator_calico' not in ansible_run_tags
)
)
- name: Fetch Join Token
@@ -337,6 +494,8 @@
Kubernetes_Master | default(false) | bool
and
not kubernetes_config.cluster.prime.name == inventory_hostname
and
k3s_installed.rc == 1
- name: Install K3s (worker nodes)
@@ -353,6 +512,8 @@
changed_when: false
when: >
not Kubernetes_Master | default(false) | bool
and
k3s_installed.rc == 1
- name: Set Kubernetes Final Install Fact


@@ -0,0 +1,49 @@
---
# Save the manifests in a dir so that diffs can be shown for changes
- name: Copy Manifest for addition - {{ manifest.name }}
ansible.builtin.template:
src: "{{ manifest.template }}"
dest: "/var/lib/rancher/k3s/ansible/{{ manifest.template | lower | replace('.j2', '') }}"
mode: '744'
become: true
diff: true
- name: Try / Catch
block:
# Try to create first; if that fails, fall back to replace.
- name: Apply Manifest Create - {{ manifest.name }}
ansible.builtin.command:
cmd: "kubectl create -f /var/lib/rancher/k3s/ansible/{{ manifest.template | lower | replace('.j2', '') }}"
become: true
changed_when: false
failed_when: >
'Error from server' in manifest_stdout.stderr
register: manifest_stdout
rescue:
- name: TRACE - Manifest Create - {{ manifest.name }}
ansible.builtin.debug:
msg: "{{ manifest_stdout }}"
- name: Replace Manifests - "Rescue" - {{ manifest.name }}
ansible.builtin.command:
cmd: "kubectl replace -f /var/lib/rancher/k3s/ansible/{{ manifest.template | lower | replace('.j2', '') }}"
become: true
changed_when: false
failed_when: >
'Error from server' in manifest_stdout.stderr
and
'ensure CRDs are installed first' in manifest_stdout.stderr
register: manifest_stdout
- name: TRACE - Replace Manifest - "Rescue" - {{ manifest.name }}
ansible.builtin.debug:
msg: "{{ manifest_stdout }}"


@@ -0,0 +1,198 @@
---
# Reference: https://docs.tigera.io/calico/3.25/operations/operator-migration
# Scripted creation of an imageset: https://docs.tigera.io/calico/latest/operations/image-options/imageset#create-an-imageset
# The above may pull the SHA for the architecture of the machine that ran the script.
- name: Try / Catch
vars:
operator_manifests:
- Deployment-manifest-Calico_Operator.yaml.j2
- Installation-manifest-Calico_Cluster.yaml.j2
- FelixConfiguration-manifest-Calico_Cluster.yaml
- IPPool-manifest-Calico_Cluster.yaml.j2
- APIServer-manifest-Calico_Cluster.yaml
- ConfigMap-manifest-Calico_Service_Endpoint.yaml.j2
block:
- name: Move Calico Manifest from addons directory
ansible.builtin.command:
cmd: mv /var/lib/rancher/k3s/server/manifests/calico.yaml /tmp/
become: true
changed_when: false
when: file_calico_yaml_metadata.stat.exists
- name: Remove addon from Kubernetes
ansible.builtin.command:
cmd: kubectl delete addon -n kube-system calico
become: true
changed_when: false
when: file_calico_yaml_metadata.stat.exists
- name: Uninstall Calico
ansible.builtin.command:
cmd: kubectl delete -f /tmp/calico.yaml
become: true
changed_when: false
when: file_calico_yaml_metadata.stat.exists
# Save the manifests in a dir so that diffs can be shown for changes
- name: Copy Manifest for addition
ansible.builtin.template:
src: "{{ item }}"
dest: "/var/lib/rancher/k3s/ansible/{{ item | lower | replace('.j2', '') }}"
mode: '744'
become: true
diff: true
loop: "{{ operator_manifests }}"
- name: Try / Catch
block:
- name: Apply Operator Manifests
ansible.builtin.command:
cmd: "kubectl create -f /var/lib/rancher/k3s/ansible/{{ item | lower | replace('.j2', '') }}"
become: true
changed_when: false
failed_when: >
'Error from server' in operator_manifest_stdout.stderr
loop: "{{ operator_manifests }}"
register: operator_manifest_stdout
rescue:
- name: TRACE - Operator manifest apply
ansible.builtin.debug:
msg: "{{ operator_manifest_stdout }}"
- name: Apply Operator Manifests - "Rescue"
ansible.builtin.command:
cmd: "kubectl replace -f /var/lib/rancher/k3s/ansible/{{ item | lower | replace('.j2', '') }}"
become: true
changed_when: false
failed_when: >
'Error from server' in operator_manifest_stdout.stderr
and
'ensure CRDs are installed first' in operator_manifest_stdout.stderr
loop: "{{ operator_manifests }}"
register: operator_manifest_stdout
- name: TRACE - Operator manifest apply. Rescued
ansible.builtin.debug:
msg: "{{ operator_manifest_stdout }}"
- name: Fetch Calico Kubectl Plugin
ansible.builtin.uri:
url: |-
https://github.com/projectcalico/calico/releases/download/{{ nfc_kubernetes_calico_version }}/calicoctl-linux-
{%- if cpu_arch.key == 'aarch64' -%}
arm64
{%- else -%}
amd64
{%- endif %}
status_code:
- 200
- 304
dest: "/tmp/kubectl-calico.{{ cpu_arch.key }}"
mode: '777'
owner: root
group: 'root'
changed_when: false
become: true
delegate_to: localhost
loop: "{{ nfc_kubernetes_install_architectures | dict2items }}"
loop_control:
loop_var: cpu_arch
vars:
ansible_connection: local
- name: Add calico Plugin
ansible.builtin.copy:
src: "/tmp/kubectl-calico.{{ ansible_architecture }}"
dest: /usr/local/bin/kubectl-calico
mode: '770'
owner: root
group: 'root'
become: true
when: inventory_hostname in groups['kubernetes_master']
- name: Setup Automagic Host Endpoints
ansible.builtin.shell:
cmd: |-
kubectl calico \
patch kubecontrollersconfiguration \
default --patch='{"spec": {"controllers": {"node": {"hostEndpoint": {"autoCreate": "Enabled"}}}}}'
executable: bash
become: true
changed_when: false
failed_when: false # fixme
- name: Remove calico migration label
ansible.builtin.shell:
cmd: |-
kubectl label node \
{{ item }} \
projectcalico.org/operator-node-migration-
executable: bash
become: true
delegate_to: "{{ kubernetes_config.cluster.prime.name }}"
changed_when: false
failed_when: false
loop: "{{ groups[kubernetes_config.cluster.group_name] }}"
# kubectl label node ip-10-229-92-202.eu-west-1.compute.internal projectcalico.org/operator-node-migration-
# migration started
rescue:
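# Rollback: remove the operator manifests and restore the original manifest-based Calico deployment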
- name: Remove Operator Manifests
ansible.builtin.command:
cmd: "kubectl delete -f /var/lib/rancher/k3s/ansible/{{ item | lower | replace('.j2', '') }}"
become: true
changed_when: false
failed_when: false
loop: "{{ operator_manifests }}"
when: file_calico_yaml_metadata.stat.exists # Only rescue if it was a migration
- name: Move Calico Manifest from addons directory
ansible.builtin.command:
cmd: mv /tmp/calico.yaml /var/lib/rancher/k3s/server/manifests/
become: true
changed_when: false
when: file_calico_yaml_metadata.stat.exists
- name: Re-install Calico
ansible.builtin.command:
cmd: kubectl apply -f /var/lib/rancher/k3s/server/manifests/calico.yaml
become: true
changed_when: false
when: file_calico_yaml_metadata.stat.exists
always:
- name: Clean-up Temp File
ansible.builtin.file:
name: /tmp/calico.yaml
state: absent
become: true
when: file_calico_yaml_metadata.stat.exists


@@ -1,14 +1,38 @@
---
- name: Firewall Rules
ansible.builtin.include_role:
name: nfc_firewall
vars:
nfc_firewall_enabled_kubernetes: "{{ nfc_kubernetes.enable_firewall | default(false) | bool }}"
tags:
- never
- install
# FIXME: reload the firewall using `iptables-reloader`
- name: Reload iptables
ansible.builtin.command:
cmd: bash -c /usr/bin/iptables-reloader
changed_when: false
tags:
- never
- install
- name: K8s Cluster
ansible.builtin.include_tasks: k8s.yaml
when: kubernetes_type == 'k8s'
tags:
- never
- install
- name: K3s Cluster
ansible.builtin.include_tasks: k3s.yaml
when: kubernetes_type == 'k3s'
tags:
- never
- install
- operator_calico
- operator_migrate_calico


@@ -0,0 +1,6 @@
---
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
name: default
spec: {}


@@ -0,0 +1,11 @@
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kubernetes-services-endpoint
namespace: tigera-operator
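# KUBERNETES_SERVICE_HOST below is derived from the service subnet's first host address (x.y.z.1)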
data:
KUBERNETES_SERVICE_HOST: "
{%- set octet = kubernetes_config.cluster.networking.ServiceSubnet | split('.') -%}
{{- octet[0] }}.{{- octet[1] }}.{{- octet[2] }}.1"
KUBERNETES_SERVICE_PORT: '443'

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -0,0 +1,17 @@
---
apiVersion: crd.projectcalico.org/v1
kind: FelixConfiguration
metadata:
name: default
spec:
bpfConnectTimeLoadBalancing: TCP
bpfExternalServiceMode: DSR
bpfHostNetworkedNATWithoutCTLB: Enabled
bpfLogLevel: ""
floatingIPs: Disabled
healthPort: 9099
logSeverityScreen: Info
reportingInterval: 0s
wireguardEnabled: true
wireguardEnabledV6: true


@@ -0,0 +1,16 @@
---
apiVersion: crd.projectcalico.org/v1
kind: IPPool
metadata:
name: default-ipv4-ippool
spec:
allowedUses:
- Workload
- Tunnel
blockSize: 26
cidr: {{ kubernetes_config.cluster.networking.podSubnet }}
ipipMode: Never
natOutgoing: true
nodeSelector: all()
vxlanMode: Always


@@ -0,0 +1,45 @@
---
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
name: default
spec:
calicoNetwork:
bgp: Disabled
containerIPForwarding: Enabled
hostPorts: Enabled
ipPools:
- blockSize: 26
cidr: {{ kubernetes_config.cluster.networking.podSubnet }}
disableBGPExport: false
encapsulation: VXLAN
natOutgoing: Enabled
nodeSelector: all()
# linuxDataplane: Iptables
linuxDataplane: BPF
mtu: 0
multiInterfaceMode: None
nodeAddressAutodetectionV4:
kubernetes: NodeInternalIP
cni:
ipam:
type: Calico
type: Calico
componentResources:
- componentName: Node
resourceRequirements:
requests:
cpu: 250m
controlPlaneReplicas: 3
flexVolumePath: None
kubeletVolumePluginPath: None
nodeUpdateStrategy:
rollingUpdate:
maxSurge: 0
maxUnavailable: 1
type: RollingUpdate
nonPrivileged: Disabled
serviceCIDRs:
- {{ kubernetes_config.cluster.networking.ServiceSubnet }}
variant: Calico


@@ -1,4 +1,11 @@
---
# Deprecated:
# Calico is being migrated to use the Calico operator.
# In a near-future release, this method of deploying Calico
# will be removed. Use tag `operator_migrate_calico` to migrate
# and tag `operator_calico` to keep using the operator.
#
#
# URL: https://github.com/projectcalico/calico/blob/8f2548a71ddc4fbe2497a0c20a3b24fc7a165851/manifests/calico.yaml
# Source: calico/templates/calico-kube-controllers.yaml
# This manifest creates a Pod Disruption Budget for Controller to allow K8s Cluster Autoscaler to evict
@@ -4774,13 +4781,13 @@ spec:
value: "autodetect"
# Enable IPIP
- name: CALICO_IPV4POOL_IPIP
value: "Always"
value: "Never"
# Enable or Disable VXLAN on the default IP pool.
- name: CALICO_IPV4POOL_VXLAN
value: "Never"
value: "Always"
# Enable or Disable VXLAN on the default IPv6 IP pool.
- name: CALICO_IPV6POOL_VXLAN
value: "Never"
value: "Always"
# Set MTU for tunnel device used if ipip is enabled
- name: FELIX_IPINIPMTU
valueFrom:


@@ -114,7 +114,12 @@
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-embedded-etcd -s ' + master_host + ' -j ACCEPT'] -%}
{# {%- set data.firewall_rules = data.firewall_rules + ['-I INPUT -s ' + master_host + ' -p tcp -m multiport --dports 2380 -j ACCEPT'] -%} #}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-api -s ' + master_host + ' -j ACCEPT'] -%}
{%- if '-I kubernetes-api -s ' + master_host + ' -j ACCEPT' not in data.firewall_rules -%}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-api -s ' + master_host + ' -j ACCEPT'] -%}
{%- endif -%}
{%- endif -%}
@@ -158,7 +163,18 @@
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-flannel-wg-four -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-flannel-wg-six -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-calico-bgp -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-calico-typha -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- if nfc_kubernetes_enable_metallb | default(false) -%}
{%- set data.firewall_rules = data.firewall_rules + ['-I metallb-l2-tcp -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- set data.firewall_rules = data.firewall_rules + ['-I metallb-l2-udp -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- endif -%}
{%- endif -%}


@@ -6,70 +6,217 @@
# Don't edit this file directly, as it will be overwritten.
#
{% if Kubernetes_Master | default(false) -%}cluster-cidr: "{{ KubernetesPodSubnet }}"
{%- if inventory_hostname in groups['kubernetes_master'] -%}
{%
{% if
set kube_apiserver_arg = [
"audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log",
"audit-log-maxage=" + kube_apiserver_arg_audit_log_maxage | string,
"audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml",
]
-%}
{%
set servers_config = {
"cluster-cidr": KubernetesPodSubnet,
"disable": [
"traefik"
],
"disable-network-policy": true,
"etcd-snapshot-retention": kubernetes_etcd_snapshot_retention | int,
"etcd-snapshot-schedule-cron": kubernetes_etcd_snapshot_cron_schedule | string,
"flannel-backend": "none",
"service-cidr": KubernetesServiceSubnet
}
-%}
{%- if
kubernetes_config.cluster.domain_name is defined
and
kubernetes_config.cluster.domain_name | default('') != ''
-%}
cluster-domain: {{ kubernetes_config.cluster.domain_name }}
{%- endif %}
cluster-init: true
disable-network-policy: true
disable:
- traefik
# - metrics-server
etcd-snapshot-retention: {{ kubernetes_etcd_snapshot_retention | int }}
etcd-snapshot-schedule-cron: "{{ kubernetes_etcd_snapshot_cron_schedule }}"
flannel-backend: none
kube-apiserver-arg:
- audit-log-path=/var/lib/rancher/k3s/server/logs/audit.log
- audit-log-maxage={{ kube_apiserver_arg_audit_log_maxage }}
- audit-policy-file=/var/lib/rancher/k3s/server/audit.yaml
# - admission-control-config-file=/var/lib/rancher/k3s/server/psa.yaml
{% if kubernetes_config.cluster.oidc.enabled | default(false) | bool -%}
- oidc-issuer-url={{ kubernetes_config.cluster.oidc.issuer_url }}
- oidc-client-id={{ kubernetes_config.cluster.oidc.client_id }}
- oidc-username-claim={{ kubernetes_config.cluster.oidc.username_claim }}
{% if kubernetes_config.cluster.oidc.oidc_username_prefix | default('') != '' -%} - oidc-username-prefix={{ kubernetes_config.cluster.oidc.oidc_username_prefix }}{% endif %}
- oidc-groups-claim={{ kubernetes_config.cluster.oidc.groups_claim }}
{% if kubernetes_config.cluster.oidc.groups_prefix | default('') != '' %} - oidc-groups-prefix={{ kubernetes_config.cluster.oidc.groups_prefix }}{% endif %}
{% endif %}
{% endif %}
{%- set servers_config = servers_config | combine({
"cluster-domain": kubernetes_config.cluster.domain_name
}) -%}
kubelet-arg:
- system-reserved=cpu={{ kubelet_arg_system_reserved_cpu }},memory={{ kubelet_arg_system_reserved_memory }},ephemeral-storage={{ kubelet_arg_system_reserved_storage }}
{% if host_external_ip | default('') %}node-external-ip: "{{ host_external_ip }}"{% endif %}
{%- endif -%}
node-name: {{ inventory_hostname }}
{%- if kubernetes_config.cluster.oidc.enabled | default(false) | bool -%}
{% if
groups[kubernetes_config.cluster.group_name] | default([]) | list | length > 0
{%-
set kube_apiserver_arg = kube_apiserver_arg + [
"oidc-client-id=" + kubernetes_config.cluster.oidc.client_id,
"oidc-groups-claim=" + kubernetes_config.cluster.oidc.groups_claim,
"oidc-issuer-url=" + kubernetes_config.cluster.oidc.issuer_url,
"oidc-username-claim=" + kubernetes_config.cluster.oidc.username_claim
] -%}
{%- if kubernetes_config.cluster.oidc.oidc_username_prefix | default('') != '' -%}
{%- set kube_apiserver_arg = kube_apiserver_arg + [
"oidc-username-prefix=" + kubernetes_config.cluster.oidc.oidc_username_prefix
] -%}
{%- endif -%}
{%- if kubernetes_config.cluster.oidc.groups_prefix | default('') != '' -%}
{%- set kube_apiserver_arg = kube_apiserver_arg + [
"oidc-groups-prefix=" + kubernetes_config.cluster.oidc.groups_prefix
]
-%}
{%- endif -%}
{%- endif -%}
{%- if (
nfc_kubernetes_enable_metallb | default(false)
or
not nfc_kubernetes_enable_servicelb | default(false)
) -%}
{%- set disable = servers_config.disable + [ "servicelb" ] -%}
{%
set servers_config = servers_config | combine({
"disable": disable
})
-%}
{%- endif -%}
{%- if (
not nfc_kubernetes_enable_metallb | default(false)
and
nfc_kubernetes_enable_servicelb | default(false)
) -%}
{%- set servers_config = servers_config | combine({
"servicelb-namespace": kubernetes_config.cluster.networking.service_load_balancer_namespace | default('kube-system')
}) -%}
{%- endif -%}
{# Combine Remaining Server Objects #}
{%
set servers_config = servers_config | combine({
"kube-apiserver-arg": kube_apiserver_arg
})
-%}
server: {% for cluster_node in groups[kubernetes_config.cluster.group_name] +%}
{% if
cluster_node in groups['kubernetes_master']
-%}
- https://
{%- endif -%}
{# EoF Server Nodes #}
{# SoF All Nodes #}
{%
set all_nodes_config = {
"kubelet-arg": [
"system-reserved=cpu=" + kubelet_arg_system_reserved_cpu + ",memory=" + kubelet_arg_system_reserved_memory +
",ephemeral-storage=" + kubelet_arg_system_reserved_storage
],
"node-name": inventory_hostname,
"node-ip": ansible_default_ipv4.address
}
-%}
{%- if groups[kubernetes_config.cluster.group_name] | default([]) | list | length > 0 -%}
{%- if k3s_installed.rc == 0 -%}
{%- set ns = namespace(server=[]) -%}
{%- for cluster_node in groups[kubernetes_config.cluster.group_name] -%}
{%- if cluster_node in groups['kubernetes_master'] -%}
{%- if hostvars[cluster_node].host_external_ip is defined -%}
{{ hostvars[cluster_node].host_external_ip }}
{%- if
hostvars[cluster_node].host_external_ip != ansible_default_ipv4.address
and
cluster_node == inventory_hostname
-%} {# Server self, use internal ip if external ip exists #}
{%- set server_node = ansible_default_ipv4.address -%}
{%- else -%}
{%- set server_node = hostvars[cluster_node].host_external_ip -%}
{%- endif -%}
{%- else -%}
{{ hostvars[cluster_node].ansible_host }}
{%- set server_node = hostvars[cluster_node].ansible_host -%}
{%- endif -%}
:6443
{%- endif -%}
{%- endfor %}
{%- set ns.server = (ns.server | default([])) + [
"https://" + server_node + ":6443"
] -%}
{%- endif %}
{%- endif -%}
{% if Kubernetes_Master | default(false) | bool -%}
servicelb-namespace: {{ kubernetes_config.cluster.networking.service_load_balancer_namespace | default('kube-system') }}
service-cidr: "{{ KubernetesServiceSubnet }}"
{% endif %}
{%- endfor -%}
{%- set all_nodes_config = all_nodes_config | combine({
"server": ns.server,
}) -%}
{%- elif
kubernetes_config.cluster.prime.name != inventory_hostname
and
k3s_installed.rc == 1
-%}
{%- set server = (server | default([])) + [
"https://" + hostvars[kubernetes_config.cluster.prime.name].ansible_host + ":6443"
] -%}
{%- set all_nodes_config = all_nodes_config | combine({
"server": server,
}) -%}
{%- endif -%}
{%- endif -%}
{%- if
host_external_ip is defined
and
ansible_default_ipv4.address != host_external_ip
-%}
{%- set all_nodes_config = all_nodes_config | combine({
"node-external-ip": host_external_ip,
}) -%}
{%- endif -%}
{# EoF All Nodes #}
{%- if inventory_hostname in groups['kubernetes_master'] -%}
{%- set servers_config = servers_config | combine( all_nodes_config ) -%}
{{ servers_config | to_nice_yaml(indent=2) }}
{%- else -%}
{{ all_nodes_config | to_nice_yaml(indent=2) }}
{%- endif -%}