Compare commits

...

74 Commits

Author SHA1 Message Date
e0035d88df build: bump version 1.8.0 -> 1.8.1-a1
!64
2024-05-02 01:29:53 +00:00
Jon
52c4ee12fa Merge branch 'feat-workaround-15161' into 'development'
fix: workaround 15161

See merge request nofusscomputing/projects/ansible/collections/kubernetes!64
2024-05-02 01:27:52 +00:00
Jon
b4d5031b0a fix(nfc_kubernetes): correct url build to loop through all cpu arch
!64 https://github.com/ansible/awx/issues/15161
2024-05-02 10:45:58 +09:30
3cf2a2e169 build: bump version 1.7.2 -> 1.8.0
!61
2024-05-02 00:26:59 +00:00
Jon
358891e1cc Merge branch 'feat-workaround-15161' into 'development'
feat: workaround 15161

Closes #27

See merge request nofusscomputing/projects/ansible/collections/kubernetes!61
2024-05-02 00:20:25 +00:00
Jon
9fa3b233a9 feat(nfc_kubernetes): build url and on use cast as string
!61 https://github.com/ansible/awx/issues/15161 closes #27
2024-05-02 09:42:59 +09:30
9ec1ba4c51 build: bump version 1.7.1 -> 1.7.2
!59
2024-04-25 07:22:05 +00:00
Jon
bb707149f6 Merge branch '26-nfc_kubernetes-check-mode-support' into 'development'
fix: install fails in check mode

Closes #26

See merge request nofusscomputing/projects/ansible/collections/kubernetes!59
2024-04-25 07:09:32 +00:00
Jon
f622228493 ci(tests): correct so they are always available on all branches intended
!59
2024-04-25 16:27:38 +09:30
Jon
5efd9807f6 fix(nfc_kubernetes): adjust some tasks to run during checkmode
these tasks make no change and are required for checkmode to function as it's intended

!59 fixes #26
2024-04-25 16:24:48 +09:30
f09a71ef77 build: bump version 1.7.0 -> 1.7.1
!58
2024-04-24 03:23:27 +00:00
Jon
9d9cffb03a fix: add role readme
!58
2024-04-24 12:40:48 +09:30
50c89c9f00 build: bump version 1.6.0 -> 1.7.0
!56
2024-04-24 02:24:40 +00:00
Jon
325b0e51d0 Merge branch 'netbox_role' into 'development'
feat: Netbox role

See merge request nofusscomputing/projects/ansible/collections/kubernetes!56
2024-04-24 02:11:07 +00:00
Jon
1068223abd chore: add default mr template with linked items
!56
2024-04-24 11:27:28 +09:30
Jon
241c737647 feat(kubernetes_netbox): custom field bug work around
!56
2024-04-24 11:20:19 +09:30
Jon
33a40d0ba9 docs(navigation): add role kubernetes_netbox
!56
2024-04-24 11:19:31 +09:30
Jon
0ce3ed1245 fix(nfc_kubernetes): ensure install tasks run when job_tags specified
!56
2024-04-24 11:18:00 +09:30
Jon
0097556730 fix(facts): gather required facts if not already available
!56
2024-04-24 11:06:21 +09:30
Jon
6faee04b39 docs: add note about netbox fileds bug
!56
2024-04-23 23:19:59 +09:30
Jon
ef8255cca6 fix(install): correct template installed var
!56
2024-04-23 23:05:19 +09:30
Jon
725e8dbfec fix(install): as part of install check, confirm service
!56
2024-04-23 23:02:35 +09:30
Jon
c5b9420ed9 feat(services): add netbox service fields
!56
2024-04-23 23:01:40 +09:30
Jon
c5b4add4c7 feat(role): New role kubernetes_netbox
!56
2024-04-19 22:07:41 +09:30
Jon
aa3735f271 Merge branch '14-k3s-upgrades' into 'development'
test: fixing of deb 12

See merge request nofusscomputing/projects/ansible/collections/kubernetes!55
2024-03-29 20:03:03 +00:00
Jon
0ccb121955 ci(build): build job must occur on dev and master branch for test results
!55
2024-03-30 05:20:55 +09:30
Jon
98a9e6dcdf test(debian12): set PIP_BREAK_SYSTEM_PACKAGES=1
!55
2024-03-30 04:34:31 +09:30
Jon
7271e28c76 test(debian12): fix iptables missing rules.v6
!54
2024-03-30 04:32:42 +09:30
70a350bf56 build: bump version 1.5.0 -> 1.6.0
!54
2024-03-29 18:51:00 +00:00
Jon
af10814791 fix(docs): use correct badge query url
!54
2024-03-30 04:07:42 +09:30
Jon
f139827554 Merge branch '14-k3s-upgrades' into 'development'
feat: Support upgrading cluster

Closes #14

See merge request nofusscomputing/projects/ansible/collections/kubernetes!53
2024-03-29 18:02:51 +00:00
Jon
5980123e7a feat(test): add integration test. playbook install
!53
2024-03-30 03:20:36 +09:30
Jon
7ef739d063 feat: add retry=3 delay=10 secs to all ansible url modules
!53
2024-03-30 03:08:17 +09:30
Jon
4d44c01b32 refactor(galaxy): for dependent collections prefix with >= so as to not cause version lock
!53
2024-03-29 20:03:43 +09:30
Jon
c5371b8ff4 feat(upgrade): If upgrade occurs, dont run remaining tasks
!53
2024-03-29 19:44:38 +09:30
Jon
7c20146660 chore: fix yaml schema paths for vscode
!53
2024-03-29 19:18:42 +09:30
Jon
6c4616873e feat: support upgrading cluster
In place binary upgrades was chosen as its just a matter of changing binary and restarting the service

!53 closes #14
2024-03-29 19:18:28 +09:30
3243578951 build: bump version 1.4.0 -> 1.5.0
!52
2024-03-21 17:42:16 +00:00
Jon
0fd15f2195 feat(collection): nofusscomputing.firewall update 1.0.1 -> 1.1.0
!52
2024-03-22 03:08:51 +09:30
03e48c7031 build: bump version 1.3.0 -> 1.4.0
!50
2024-03-20 11:22:53 +00:00
Jon
11756037a3 Merge branch '22-check-mode' into 'development'
feat: check mode

Closes #22

See merge request nofusscomputing/projects/ansible/collections/kubernetes!50
2024-03-20 11:19:33 +00:00
Jon
6498a48e82 feat(install): "ansible_check_mode=true" no hostname check
!50 fixes #22
2024-03-20 20:46:46 +09:30
053d1f17ec build: bump version 1.2.0 -> 1.3.0
!48
2024-03-18 10:05:36 +00:00
Jon
17ff472577 Merge branch '2024-03-18' into 'development'
fix: couple of fixes

Closes #19 and #20

See merge request nofusscomputing/projects/ansible/collections/kubernetes!48
2024-03-18 10:00:49 +00:00
Jon
ec94414383 docs: add warning for not configuring firewall before install
!48 fixes #19
2024-03-18 19:18:02 +09:30
Jon
1faae0327e fix(handler): add missing 'reboot_host' handler
!48 fixes #20
2024-03-18 19:11:25 +09:30
Jon
17e3318c3c fix(firewall): ensure slave nodes can access ALL masters API point
!48
2024-03-18 19:09:17 +09:30
Jon
89b5593abf fix(firewall): dont add rules for disabled features
!48
2024-03-18 19:08:33 +09:30
Jon
10eae79a74 feat: dont attempt to install if already installed
!48
2024-03-18 19:07:55 +09:30
0be7080089 build: bump version 1.1.2 -> 1.2.0
!46
2024-03-16 13:58:16 +00:00
Jon
d3666c6825 Merge branch 'firewall' into 'development'
feat: migrate to firewall collection

Closes firewall#4

See merge request nofusscomputing/projects/ansible/collections/kubernetes!46
2024-03-16 13:54:29 +00:00
Jon
4af31ff3ac feat(firewall): use collection nofusscomputing.firewall to configure kubernetes firewall
!46
2024-03-16 23:05:01 +09:30
Jon
74187c7023 fix(config): use correct var name when setting node name
!46
2024-03-16 22:13:20 +09:30
47ac3095b6 Merge branch 'automated-tasks' into 'development'
chore(gitlab-ci): Automated update of git sub-module

See merge request nofusscomputing/projects/ansible/collections/kubernetes!45
2024-03-16 11:35:31 +00:00
dd4638bc93 chore(git): updated submodule gitlab-ci
Automation Data:
{
    "branch": "development",
    "current_commit": "9afa68d1f3849e491fa8ca034749388808531b74)",
    "name": "gitlab-ci",
    "path": "/builds/nofusscomputing/projects/ansible/collections/kubernetes/_automation_/gitlab-ci",
    "remote_head": "a24f352ca3d82b8d0f02f5db20173fe2c3f71a4a)",
    "remote_name": "origin",
    "url": "https://gitlab.com/nofusscomputing/projects/gitlab-ci.git"
}

Changes: Submodule path gitlab-ci: checked out a24f352ca3d82b8d0f02f5db20173fe2c3f71a4a

MR !45
2024-03-16 11:34:50 +00:00
3ed6fd0f4c Merge branch 'automated-tasks' into 'development'
chore(gitlab-ci): Automated update of git sub-module

See merge request nofusscomputing/projects/ansible/collections/kubernetes!44
2024-03-14 12:46:56 +00:00
beb1bd2006 chore(git): updated submodule gitlab-ci
Automation Data:
{
    "branch": "development",
    "current_commit": "41eeb7badd582175b371cd4a5b2192decbcb0210)",
    "name": "gitlab-ci",
    "path": "/builds/nofusscomputing/projects/ansible/collections/kubernetes/_automation_/gitlab-ci",
    "remote_head": "9afa68d1f3849e491fa8ca034749388808531b74)",
    "remote_name": "origin",
    "url": "https://gitlab.com/nofusscomputing/projects/gitlab-ci.git"
}

Changes: Submodule path gitlab-ci: checked out 9afa68d1f3849e491fa8ca034749388808531b74

MR !44
2024-03-14 12:46:14 +00:00
4a83550530 build: bump version 1.1.1 -> 1.1.2
!42
2024-03-13 16:16:32 +00:00
Jon
7c54b19b64 Merge branch 'dont-rush-to-release-next-time-jon' into 'development'
fix: Dont rush to release next time jon

Closes #17 and #18

See merge request nofusscomputing/projects/ansible/collections/kubernetes!42
2024-03-13 16:13:02 +00:00
Jon
173c840121 fix(readme): update gitlab links to new loc
!42
2024-03-14 01:37:59 +09:30
Jon
f0f5d686fa chore: correct galaxy build ignores
!42
2024-03-14 01:36:31 +09:30
Jon
536c6e7b26 fix(configure): dont attempt to configure firewall if install=false
!42
2024-03-14 01:15:08 +09:30
Jon
a23bc5e9ee fix(handler): remove old k8s code causing handler to fail
!42
2024-03-14 01:00:20 +09:30
Jon
5444f583e5 fix(handler): kubernetes restart handler now using updated node type vars
!42
2024-03-14 00:55:09 +09:30
Jon
b4ad0a4e61 fix(config): if hostname=localhost use hostname command to fetch hostname
!42
2024-03-14 00:51:37 +09:30
Jon
c9961973e1 fix: limit the use of master group
!42
2024-03-14 00:39:22 +09:30
Jon
622338e497 fix: add missing dependency ansible.posix
!42 fixes #17
2024-03-14 00:32:59 +09:30
Jon
dec65ed57c fix(install): use correct var type for packages
!42 fixes #18
2024-03-14 00:31:26 +09:30
71d1dd884e build: bump version 1.1.0 -> 1.1.1
!38
2024-03-13 14:38:06 +00:00
Jon
7a077dabe0 fix: don't check hostname for localhost
!38 !39 !40 !41
2024-03-14 00:04:38 +09:30
16add8a5b8 build: bump version 1.0.1 -> 1.1.0
!38
2024-03-13 14:20:36 +00:00
Jon
1bbbdd23c3 feat: add role readme and fix gitlab release job
!38 !39 !40
2024-03-13 23:47:06 +09:30
9552ed7703 build: bump version 1.0.0 -> 1.0.1
!38
2024-03-13 14:02:37 +00:00
Jon
05fc3455da fix(ci): ensure correct package name is used
!38 !39
2024-03-13 23:28:56 +09:30
40 changed files with 1539 additions and 238 deletions


@ -1 +0,0 @@
galaxy.yml galaxy[version-incorrect]


@ -4,5 +4,5 @@ commitizen:
prerelease_offset: 1
tag_format: $version
update_changelog_on_bump: false
version: 1.0.0
version: 1.8.1-a1
version_scheme: semver

5
.gitignore vendored Normal file

@ -0,0 +1,5 @@
artifacts/
build/
test_results/
test_results.json
*.tar.gz


@ -1,7 +1,7 @@
---
variables:
ANSIBLE_GALAXY_PACKAGE_NAME: phpipam_scan_agent
ANSIBLE_GALAXY_PACKAGE_NAME: kubernetes
MY_PROJECT_ID: "51640029"
GIT_SYNC_URL: "https://$GITHUB_USERNAME_ROBOT:$GITHUB_TOKEN_ROBOT@github.com/NoFussComputing/ansible_collection_kubernetes.git"
PAGES_ENVIRONMENT_PATH: projects/ansible/collection/kubernetes/
@ -9,6 +9,7 @@ variables:
include:
- local: .gitlab/integration_test.gitlab-ci.yml
- project: nofusscomputing/projects/gitlab-ci
ref: development
file:
@ -21,6 +22,40 @@ include:
- automation/.gitlab-ci-ansible.yaml
Build Collection:
extends: .ansible_collection_build
needs:
- job: Ansible Lint
optional: true
- job: Ansible Lint (galaxy.yml)
optional: true
rules:
- if: $CI_COMMIT_TAG
when: always
# Needs to run, even by bot as the test results need to be available
# - if: "$CI_COMMIT_AUTHOR =='nfc_bot <helpdesk@nofusscomputing.com>'"
# when: never
- if: # Occur on merge
$CI_COMMIT_BRANCH
&&
$CI_PIPELINE_SOURCE == "push"
when: always
# - if:
# $CI_COMMIT_BRANCH != "development"
# &&
# $CI_COMMIT_BRANCH != "master"
# &&
# $CI_PIPELINE_SOURCE == "push"
# when: always
- when: never
Update Git Submodules:
extends: .ansible_playbook_git_submodule
@ -31,6 +66,29 @@ Github (Push --mirror):
needs: []
Gitlab Release:
extends: .ansible_collection_release
needs:
- Stage Collection
release:
tag_name: $CI_COMMIT_TAG
description: ./artifacts/release_notes.md
name: $CI_COMMIT_TAG
assets:
links:
- name: 'Ansible Galaxy'
url: https://galaxy.ansible.com/ui/repo/published/${ANSIBLE_GALAXY_NAMESPACE}/${ANSIBLE_GALAXY_PACKAGE_NAME}/?version=${CI_COMMIT_TAG}
- name: ${ANSIBLE_GALAXY_NAMESPACE}-${ANSIBLE_GALAXY_PACKAGE_NAME}-${CI_COMMIT_TAG}.tar.gz
url: https://galaxy.ansible.com/api/v3/plugin/ansible/content/published/collections/artifacts/${ANSIBLE_GALAXY_NAMESPACE}-${ANSIBLE_GALAXY_PACKAGE_NAME}-${CI_COMMIT_TAG}.tar.gz
link_type: package
- name: Documentation
url: https://nofusscomputing.com/${PAGES_ENVIRONMENT_PATH}
milestones:
- $CI_MERGE_REQUEST_MILESTONE
Website.Submodule.Deploy:
extends: .submodule_update_trigger
variables:


@ -0,0 +1,217 @@
.integration_test:
stage: test
needs:
- "Build Collection"
image:
name: nofusscomputing/docker-buildx-qemu:dev
pull_policy: always
variables:
DOCKER_HOST: tcp://docker:2375/
DOCKER_DRIVER: overlay2
# GIT_STRATEGY: none
services:
- name: docker:23-dind
entrypoint: ["env", "-u", "DOCKER_HOST"]
command: ["dockerd-entrypoint.sh"]
before_script:
- | # start test container
docker run -d \
--privileged \
-v ${PWD}:/workdir \
-v ${PWD}/artifacts/galaxy:/collection \
--workdir /workdir \
--rm \
--env "ANSIBLE_FORCE_COLOR=true" \
--env "CI_COMMIT_SHA=${CI_COMMIT_SHA}" \
--env "ANSIBLE_LOG_PATH=/workdir/ansible.log" \
--env "PIP_BREAK_SYSTEM_PACKAGES=1" \
--name test_image_${CI_JOB_ID} \
nofusscomputing/ansible-docker-os:dev-${test_image}
- | # enter test container
docker exec -i test_image_${CI_JOB_ID} ps aux
- docker ps
- docker exec -i test_image_${CI_JOB_ID} apt update
- docker exec -i test_image_${CI_JOB_ID} apt install -y --no-install-recommends python3-pip net-tools dnsutils iptables
- |
if [ "${test_image}" == 'debian-12' ]; then
echo "Debian 12":
docker exec -i test_image_${CI_JOB_ID} pip install ansible-core --break-system-packages;
docker exec -i test_image_${CI_JOB_ID} mkdir -p /etc/iptables;
docker exec -i test_image_${CI_JOB_ID} touch /etc/iptables/rules.v6;
docker exec -i test_image_${CI_JOB_ID} update-alternatives --set iptables /usr/sbin/iptables-legacy;
else
echo " Not Debian 12":
docker exec -i test_image_${CI_JOB_ID} pip install ansible-core;
fi
- docker exec -i test_image_${CI_JOB_ID} cat /etc/hosts
- docker exec -i test_image_${CI_JOB_ID} cat /etc/resolv.conf
- | # check if DNS working
docker exec -i test_image_${CI_JOB_ID} nslookup google.com
script:
- | # inside container?
docker exec -i test_image_${CI_JOB_ID} ls -l /collection;
docker exec -i test_image_${CI_JOB_ID} echo $PWD;
- | # Show Network Interfaces
docker exec -i test_image_${CI_JOB_ID} ifconfig;
- | # Install the collection
docker exec -i test_image_${CI_JOB_ID} bash -c 'ansible-galaxy collection install $(ls /collection/*.tar.gz)'
- | # output ansible vars
docker exec -i test_image_${CI_JOB_ID} ansible -m setup localhost
- | # run the collection
docker exec -i test_image_${CI_JOB_ID} \
${test_command} \
--extra-vars "nfc_role_firewall_policy_input=ACCEPT" \
--extra-vars "nfc_role_firewall_policy_forward=ACCEPT" \
-vv
- | # Create test.yaml
mkdir -p test_results;
cat <<EOF > test_results/${test_image}.json
{
"$( echo ${test_image} | sed -e 's/\./_/')": "Pass"
}
EOF
after_script:
- | # Create test.yaml if not exists
if [ ! -f test_results/${test_image}.json ]; then
echo "[TRACE] Test has failed"
mkdir -p test_results;
cat <<EOF > test_results/${test_image}.json
{
"$( echo ${test_image} | sed -e 's/\./_/')": "Fail"
}
EOF
fi
- | # Run trace script for debugging
chmod +x ./.gitlab/integration_test_trace.sh;
./.gitlab/integration_test_trace.sh;
artifacts:
untracked: false
paths:
- ansible.log
- test_results/*
when: always
rules:
- if: $CI_COMMIT_TAG
allow_failure: true
when: on_success
# Needs to run, even by bot as the test results need to be available
# - if: "$CI_COMMIT_AUTHOR =='nfc_bot <helpdesk@nofusscomputing.com>'"
# when: never
- if: # Occur on merge
$CI_COMMIT_BRANCH
&&
$CI_PIPELINE_SOURCE == "push"
allow_failure: true
when: on_success
# - if:
# $CI_COMMIT_BRANCH != "development"
# &&
# $CI_COMMIT_BRANCH != "master"
# &&
# $CI_PIPELINE_SOURCE == "push"
# allow_failure: true
# when: always
- when: never
Playbook - Install:
extends: .integration_test
parallel:
matrix:
- test_image: debian-11
test_command: ansible-playbook nofusscomputing.kubernetes.install
- test_image: debian-12
test_command: ansible-playbook nofusscomputing.kubernetes.install
- test_image: ubuntu-20.04
test_command: ansible-playbook nofusscomputing.kubernetes.install
- test_image: ubuntu-22.04
test_command: ansible-playbook nofusscomputing.kubernetes.install
test_results:
stage: test
extends: .ansible_playbook
variables:
ansible_playbook: .gitlab/test_results.yaml
ANSIBLE_PLAYBOOK_DIR: $CI_PROJECT_DIR
needs:
- Playbook - Install
artifacts:
untracked: false
when: always
access: all
expire_in: "3 days"
paths:
- test_results.json
rules:
- if: $CI_COMMIT_TAG
allow_failure: true
when: on_success
# Needs to run, even by bot as the test results need to be available
# - if: "$CI_COMMIT_AUTHOR =='nfc_bot <helpdesk@nofusscomputing.com>'"
# when: never
- if: # Occur on merge
$CI_COMMIT_BRANCH
&&
$CI_PIPELINE_SOURCE == "push"
allow_failure: true
when: on_success
# - if:
# $CI_COMMIT_BRANCH != "development"
# &&
# $CI_COMMIT_BRANCH != "master"
# &&
# $CI_PIPELINE_SOURCE == "push"
# allow_failure: true
# when: always
- when: never


@ -0,0 +1,42 @@
#!/bin/bash
# colour ref: https://stackoverflow.com/a/28938235
NC='\033[0m' # Text Reset
# Regular Colors
Black='\033[0;30m' # Black
Red='\033[0;31m' # Red
Green='\033[0;32m' # Green
Yellow='\033[0;33m' # Yellow
Blue='\033[0;34m' # Blue
Purple='\033[0;35m' # Purple
Cyan='\033[0;36m' # Cyan
cmd() {
echo -e "${Yellow}[TRACE] ${Green}executing ${Cyan}'$1'${NC}"
docker exec -i test_image_${CI_JOB_ID} $1 || true
}
cmd "journalctl -xeu netfilter-persistent.service";
cmd "journalctl -xeu iptables.service"
cmd "journalctl -xeu k3s.service"
cmd "systemctl status netfilter-persistent.service"
cmd "systemctl status iptables.service"
cmd "systemctl status k3s.service"
cmd "kubectl get po -A -o wide"
cmd "kubectl get no -o wide"
cmd "iptables -nvL --line-numbers"


@ -0,0 +1,22 @@
### :books: Summary
<!-- your summary here emojis ref: https://github.com/yodamad/gitlab-emoji -->
### :link: Links / References
<!-- use a list for any links to other references, as required. If relevant, describe the link/reference -->
### :construction_worker: Tasks
- [ ] Add your tasks here if required (delete)
<!-- don't remove the tasks below; strike them through, including the checkbox, by enclosing in double tilde '~~' -->
- [ ] Playbook Update
This collection has a [corresponding playbook](https://gitlab.com/nofusscomputing/projects/ansible/ansible_playbooks/-/blob/development/role.yaml) that may need to be updated (Ansible Role), specifically [Role Validation](https://gitlab.com/nofusscomputing/projects/ansible/ansible_playbooks/-/blob/development/tasks/role/validation/nfc_kubernetes.yaml).
- [ ] NetBox Rendered Config Update
This Collection has a [NetBox Rendered Config template](https://gitlab.com/nofusscomputing/infrastructure/configuration-management/netbox/-/blob/development/templates/cluster.json.j2) that may need to be updated. Specifically Section `cluster.type == 'kubernetes'`

19
.gitlab/test_results.yaml Normal file

@ -0,0 +1,19 @@
---
- name: Create Test Results File
hosts: localhost
gather_facts: false
tasks:
- name: Load Test Results
ansible.builtin.include_vars:
dir: ../test_results
name: test_results
- name: Create Results file
ansible.builtin.copy:
content: "{{ (test_results) | to_nice_json }}"
dest: ../test_results.json

15
.vscode/settings.json vendored Normal file

@ -0,0 +1,15 @@
{
"yaml.schemas": {
"https://raw.githubusercontent.com/ansible/ansible-lint/main/src/ansiblelint/schemas/ansible.json#/$defs/tasks": [
"roles/nfc_kubernetes/tasks/*.yaml",
"roles/nfc_kubernetes/tasks/*/*.yaml",
"roles/nfc_kubernetes/tasks/*/*/*.yaml"
],
"https://raw.githubusercontent.com/ansible/ansible-lint/main/src/ansiblelint/schemas/vars.json": [
"roles/nfc_kubernetes/variables/**.yaml"
],
"https://raw.githubusercontent.com/ansible/ansible-lint/main/src/ansiblelint/schemas/ansible.json#/$defs/playbook": ".gitlab/test_results.yaml"
},
"gitlab.aiAssistedCodeSuggestions.enabled": false,
"gitlab.duoChat.enabled": false,
}


@ -1,87 +1,85 @@
## 1.0.0 (2024-03-13)
### BREAKING CHANGE
- Repository restructure from Ansible Role to Ansible Collection
### Feat
- **playbook**: add the install playbook
- restructure repository as ansible collection
### Refactor
- **nfc_kubernetes**: update meta file
- remove dependency on role nfc_common
- **nfc_kubernetes**: layout role ingress to install prime -> master -> worker nodes as separate groups
- **docs**: restructure docs
## 0.3.0 (2024-03-13)
### Feat
- remove old var and update kube version
- install helm binary
- disable node ipv6 support
- **kubevirt**: install virtctl plugin
- **kubevirt**: optionally specify which nodes within a cluster to install kubeviirt
- **kubevirt**: Default to live migration for update strategy
- Optionally Install KubeVirt
- **install**: dont allow installation to continue if the hostname does not match inventory_hostname
- **variables**: remove depreciated variables
- **install**: etcd deployment now optional
## 1.8.1-a1 (2024-05-02)
### Fix
- remove depreciated worker var
- **configure**: if firewall rules dir does not exist, dont add firewall rules
- **nfc_kubernetes**: correct url build to loop through all cpu arch
### Refactor
- image var update for calico
## 0.2.0 (2024-02-03)
## 1.8.0 (2024-05-02)
### Feat
- **calico**: turn bpf off
- **calico**: set tolerations for typha "CriticalAddonsOnly"
- **config**: for server self. use internal ip to connect instead of external
- **config**: dont set external-ip if it matches node-ip
- **config**: set value `node-ip`
- **calico**: use vxlan instead of ipip
- **olm**: uninstall olm if tag `olm_uninstall` specified
- **calico**: add job tag calico_manifest to enable rollback
- **install**: enable k3s module metrics-server
- **olm**: dont install by default
- **calico**: disable vxlan
- **calico**: use vxlan overlay
- **calico**: IP AUTO-detection set to kubernetes-internal-ip
- feature gate added to prevent restart of kubernetes service
- **node**: ability to configure node taints
- **config**: set node name to inventory_hostname
- **firewall**: add vxlan rules
- **audit_logs**: keep two days by default
- **firewall**: allow hosts external IP
- **nfc_kubernetes**: build url and on use cast as string
## 1.7.2 (2024-04-25)
### Fix
- **config**: set external ip if set or node ip if not set
- **install**: don't attempt to reinstall the cluster if already installed
- **prime_install**: requires cluster init for prime install
- **restart_k3s**: use correct group var
- **token_fetch**: only fetch token after prime installed
- **handler**: kubernetes restart when clause corrected
- **audit_log**: max age not backup
- **config**: ensure server var is list not csv string
- **handler**: restart kubernetes implementation was flawed
- **config**: ensure join token is included in config
- **k3s_multi_master**: adjusted config so multi-master install works
- **olm**: dont fail if already installed
- **config**: ensure config option servicelb-namespace only deployed to prime node
- **nfc_kubernetes**: adjust some tasks to run during checkmode
## 1.7.1 (2024-04-24)
### Fix
- add role readme
## 1.7.0 (2024-04-24)
### Feat
- **kubernetes_netbox**: custom field bug work around
- **services**: add netbox service fields
- **role**: New role kubernetes_netbox
### Fix
- **nfc_kubernetes**: ensure install tasks run when job_tags specified
- **facts**: gather required facts if not already available
- **install**: correct template installed var
- **install**: as part of install check, confirm service
## 1.6.0 (2024-03-29)
### Feat
- **test**: add integration test. playbook install
- add retry=3 delay=10 secs to all ansible url modules
- **upgrade**: If upgrade occurs, dont run remaining tasks
- support upgrading cluster
### Fix
- **docs**: use correct badge query url
### Refactor
- **config**: use jinja to construct data then pretty print it
- **tasks**: ensure module FQCN is used
- **node_labels**: removed from config.yaml and set to be a manifest on prime node
- **galaxy**: for dependent collections prefix with `>=` so as to not cause version lock
## 1.5.0 (2024-03-21)
### Feat
- **collection**: nofusscomputing.firewall update 1.0.1 -> 1.1.0
## 1.4.0 (2024-03-20)
### Feat
- **install**: "ansible_check_mode=true" no hostname check
## 1.3.0 (2024-03-18)
### Fix
- **handler**: add missing 'reboot_host' handler
- **firewall**: ensure slave nodes can access ALL masters API point
- **firewall**: dont add rules for disabled features
## 1.2.0 (2024-03-16)
### Feat
- **firewall**: use collection nofusscomputing.firewall to configure kubernetes firewall
### Fix
- **config**: use correct var name when setting node name


@ -14,26 +14,31 @@
<br>
![Gitlab forks count](https://img.shields.io/badge/dynamic/json?label=Forks&query=%24.forks_count&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2F&color=ff782e&logo=gitlab&style=plastic) ![Gitlab stars](https://img.shields.io/badge/dynamic/json?label=Stars&query=%24.star_count&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2F&color=ff782e&logo=gitlab&style=plastic) [![Open Issues](https://img.shields.io/badge/dynamic/json?color=ff782e&logo=gitlab&style=plastic&label=Open%20Issues&query=%24.statistics.counts.opened&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fissues_statistics)](https://gitlab.com/nofusscomputing/projects/ansible/kubernetes/-/issues)
![Gitlab forks count](https://img.shields.io/badge/dynamic/json?label=Forks&query=%24.forks_count&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2F&color=ff782e&logo=gitlab&style=plastic) ![Gitlab stars](https://img.shields.io/badge/dynamic/json?label=Stars&query=%24.star_count&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2F&color=ff782e&logo=gitlab&style=plastic) [![Open Issues](https://img.shields.io/badge/dynamic/json?color=ff782e&logo=gitlab&style=plastic&label=Open%20Issues&query=%24.statistics.counts.opened&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fissues_statistics)](https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes/-/issues)
![GitHub forks](https://img.shields.io/github/forks/NofussComputing/ansible_collection_kubernetes?logo=github&style=plastic&color=000000&labell=Forks) ![GitHub stars](https://img.shields.io/github/stars/NofussComputing/ansible_collection_kubernetes?color=000000&logo=github&style=plastic) ![Github Watchers](https://img.shields.io/github/watchers/NofussComputing/ansible_collection_kubernetes?color=000000&label=Watchers&logo=github&style=plastic)
<br>
This project is hosted on [gitlab](https://gitlab.com/nofusscomputing/projects/ansible/kubernetes) and has a read-only copy hosted on [Github](https://github.com/NofussComputing/ansible_collection_kubernetes).
This project is hosted on [gitlab](https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes) and has a read-only copy hosted on [Github](https://github.com/NofussComputing/ansible_collection_kubernetes).
----
**Stable Branch**
![Gitlab build status - stable](https://img.shields.io/badge/dynamic/json?color=ff782e&label=Build&query=0.status&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fpipelines%3Fref%3Dmaster&logo=gitlab&style=plastic) ![branch release version](https://img.shields.io/badge/dynamic/yaml?color=ff782e&logo=gitlab&style=plastic&label=Release&query=%24.commitizen.version&url=https%3A//gitlab.com/nofusscomputing/projects/ansible/kubernetes%2F-%2Fraw%2Fmaster%2F.cz.yaml)
![Gitlab build status - stable](https://img.shields.io/badge/dynamic/json?color=ff782e&label=Build&query=0.status&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fpipelines%3Fref%3Dmaster&logo=gitlab&style=plastic) ![branch release version](https://img.shields.io/badge/dynamic/yaml?color=ff782e&logo=gitlab&style=plastic&label=Release&query=%24.commitizen.version&url=https%3A//gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes%2F-%2Fraw%2Fmaster%2F.cz.yaml)
![Debian 11](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fmaster%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'debian-11'%5D&style=plastic&logo=debian&logoColor=a80030&label=Debian%2011&color=a80030) ![Debian 12](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fmaster%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'debian-12'%5D&style=plastic&logo=debian&logoColor=a80030&label=Debian%2012&color=a80030) ![Ubuntu 20.04](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fmaster%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'ubuntu-20_04'%5D&style=plastic&logo=ubuntu&logoColor=dd4814&label=Ubuntu%2020&color=dd4814) ![Ubuntu 22.04](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fmaster%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'ubuntu-22_04'%5D&style=plastic&logo=ubuntu&logoColor=dd4814&label=Ubuntu%2022&color=dd4814)
----
**Development Branch**
![Gitlab build status - development](https://img.shields.io/badge/dynamic/json?color=ff782e&label=Build&query=0.status&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fpipelines%3Fref%3Ddevelopment&logo=gitlab&style=plastic) ![branch release version](https://img.shields.io/badge/dynamic/yaml?color=ff782e&logo=gitlab&style=plastic&label=Release&query=%24.commitizen.version&url=https%3A//gitlab.com/nofusscomputing/projects/ansible/kubernetes%2F-%2Fraw%2Fdevelopment%2F.cz.yaml)
![Gitlab build status - development](https://img.shields.io/badge/dynamic/json?color=ff782e&label=Build&query=0.status&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fpipelines%3Fref%3Ddevelopment&logo=gitlab&style=plastic) ![branch release version](https://img.shields.io/badge/dynamic/yaml?color=ff782e&logo=gitlab&style=plastic&label=Release&query=%24.commitizen.version&url=https%3A//gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes%2F-%2Fraw%2Fdevelopment%2F.cz.yaml)
![Debian 11](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fdevelopment%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'debian-11'%5D&style=plastic&logo=debian&logoColor=a80030&label=Debian%2011&color=a80030) ![Debian 12](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fdevelopment%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'debian-12'%5D&style=plastic&logo=debian&logoColor=a80030&label=Debian%2012&color=a80030) ![Ubuntu 20.04](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fdevelopment%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'ubuntu-20_04'%5D&style=plastic&logo=ubuntu&logoColor=dd4814&label=Ubuntu%2020&color=dd4814) ![Ubuntu 22.04](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fdevelopment%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'ubuntu-22_04'%5D&style=plastic&logo=ubuntu&logoColor=dd4814&label=Ubuntu%2022&color=dd4814)
----
<br>
@ -42,14 +47,14 @@ This project is hosted on [gitlab](https://gitlab.com/nofusscomputing/projects/a
links:
- [Issues](https://gitlab.com/nofusscomputing/projects/ansible/kubernetes/-/issues)
- [Issues](https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes/-/issues)
- [Merge Requests (Pull Requests)](https://gitlab.com/nofusscomputing/projects/ansible/kubernetes/-/merge_requests)
- [Merge Requests (Pull Requests)](https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes/-/merge_requests)
## Contributing
All contributions for this project must conducted from [Gitlab](https://gitlab.com/nofusscomputing/projects/ansible/kubernetes).
All contributions for this project must conducted from [Gitlab](https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes).
For further details on contributing please refer to the [contribution guide](CONTRIBUTING.md).


@ -0,0 +1 @@
linked to


@ -13,13 +13,15 @@ about: https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernete
![Gitlab build status - stable](https://img.shields.io/badge/dynamic/json?color=ff782e&label=Build&query=0.status&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fpipelines%3Fref%3Dmaster&logo=gitlab&style=plastic) ![Gitlab build status - development](https://img.shields.io/badge/dynamic/json?color=ff782e&label=Build&query=0.status&url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fpipelines%3Fref%3Ddevelopment&logo=gitlab&style=plastic)
![Debian 11](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fmaster%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'debian-11'%5D&style=plastic&logo=debian&logoColor=a80030&label=Debian%2011&color=a80030) ![Debian 12](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fmaster%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'debian-12'%5D&style=plastic&logo=debian&logoColor=a80030&label=Debian%2012&color=a80030) ![Ubuntu 20.04](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fmaster%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'ubuntu-20_04'%5D&style=plastic&logo=ubuntu&logoColor=dd4814&label=Ubuntu%2020&color=dd4814) ![Ubuntu 22.04](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgitlab.com%2Fapi%2Fv4%2Fprojects%2F51640029%2Fjobs%2Fartifacts%2Fmaster%2Fraw%2Ftest_results.json%3Fjob%3Dtest_results&query=%24%5B'ubuntu-22_04'%5D&style=plastic&logo=ubuntu&logoColor=dd4814&label=Ubuntu%2022&color=dd4814)
[![Downloads](https://img.shields.io/badge/dynamic/json?url=https%3A%2F%2Fgalaxy.ansible.com%2Fapi%2Fv3%2Fplugin%2Fansible%2Fcontent%2Fpublished%2Fcollections%2Findex%2Fnofusscomputing%2Fkubernetes%2F&query=%24.download_count&style=plastic&logo=ansible&logoColor=white&label=Galaxy%20Downloads&labelColor=black&color=cyan)](https://galaxy.ansible.com/ui/repo/published/nofusscomputing/kubernetes/)
</span>
This Ansible Collection is for installing a K3s Kubernetes cluster, both single and multi-node cluster deployments are supported.
This Ansible Collection is for installing a K3s Kubernetes cluster; both single and multi-node cluster deployments are supported. It also installs and configures the firewall for the node; for further information on the firewall config please see the [firewall docs](../firewall/index.md).
## Installation
@ -29,14 +31,20 @@ To install this collection use `ansible-galaxy collection install nofusscomputin
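For reference, a minimal sketch of the install command referenced above (namespace and collection name per `galaxy.yml`; a version can optionally be appended):

``` bash
# Install the collection from Ansible Galaxy.
ansible-galaxy collection install nofusscomputing.kubernetes
```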
## Features
Most of the features of this collection are from the included role `nfc_kubernetes`, please [view its page for feature details](roles/nfc_kubernetes/index.md).
- Install k3s cluster. Both Single and multi-node clusters
- Configure the cluster
- Upgrade a cluster
For a more detailed list of features, check out the role's [documentation](roles/nfc_kubernetes/index.md).
## Using this collection
This collection has been designed to be a complete and self-contained management tool for a K3s kubernetes cluster.
## K3s Kubernetes Installation
## Cluster Installation
By default the install playbook will install to localhost.
@ -46,6 +54,11 @@ ansible-playbook nofusscomputing.kubernetes.install
```
!!! danger
By default, when the install task is run, the firewall is also configured. The default sets the `FORWARD` and `INPUT` chains to a policy of `DROP`. Failing to add any required additional rules before installing/configuring Kubernetes will leave you without remote access to the machine.
You are encouraged to run `ansible-playbook nofusscomputing.firewall.install` with your rules configured within your inventory first. See the [firewall docs](../firewall/index.md) for more information.
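As an illustrative sketch only, the collection's own CI integration test relaxes these default policies by passing extra-vars; something similar can be done in a disposable lab environment:

``` bash
# Lab/testing only: override the default DROP policies via the firewall role
# variables that the CI integration test also passes.
ansible-playbook nofusscomputing.kubernetes.install \
  --extra-vars "nfc_role_firewall_policy_input=ACCEPT" \
  --extra-vars "nfc_role_firewall_policy_forward=ACCEPT"
```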
The install playbook has a dynamic `hosts` key. This has been done to specifically support running the playbook from AWX and being able to populate the field from the survey feature. Order of precedence for the host variable is as follows:
- `nfc_pb_host` set to any valid value that a playbook `hosts` key can accept
@ -59,4 +72,12 @@ The install playbook has a dynamic `hosts` key. This has been done to specifical
For the available variables please view the [nfc_kubernetes role docs](roles/nfc_kubernetes/index.md#default-variables)
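As a sketch of how the dynamic `hosts` precedence described above plays out (host and cluster names below are placeholders, mirroring the hosts template used by the collection's playbooks):

``` bash
# Highest precedence: nfc_pb_host, any valid hosts pattern (placeholder value).
ansible-playbook nofusscomputing.kubernetes.install -e nfc_pb_host=k3s_prime

# Next: a cluster name, which targets group kubernetes_cluster_<name, lower-cased>.
ansible-playbook nofusscomputing.kubernetes.install -e nfc_pb_kubernetes_cluster_name=Prod

# Otherwise the --limit value (ansible_limit) is used, falling back to localhost.
ansible-playbook nofusscomputing.kubernetes.install --limit my_cluster_hosts
```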
## Cluster Upgrade
[In-place cluster upgrades](https://docs.k3s.io/upgrades/manual#upgrade-k3s-using-the-binary) are the method used to conduct cluster upgrades. The upgrade logic first confirms that K3s is installed and that the local binary and running k3s versions match the desired version. If they do not, they are updated to the desired version. On completion, the node has its `k3s` service restarted, which completes the upgrade process.
!!! info
If an upgrade occurs, no other task within the play will run. This is by design. If you have further tasks to run in addition to the upgrade, run the play again.
!!! danger
Not following the [Kubernetes version skew policy](https://kubernetes.io/releases/version-skew-policy/) when upgrading may break your cluster.
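As an illustrative sketch only (not the role's actual tasks), the version comparison described above amounts to something like the following, with `K3S_DESIRED_VERSION` as a hypothetical placeholder:

``` bash
# Hypothetical placeholder for the version the cluster should be on.
K3S_DESIRED_VERSION="v1.29.4+k3s1"

# Compare the installed/running binary version against the desired one; only
# if they differ is the binary replaced and the k3s service restarted.
CURRENT="$(k3s --version | awk 'NR==1 {print $3}')"
if [ "${CURRENT}" != "${K3S_DESIRED_VERSION}" ]; then
  echo "upgrade required: ${CURRENT} -> ${K3S_DESIRED_VERSION}"
  # (binary replacement happens here in the role)
  systemctl restart k3s
fi
```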


@ -0,0 +1,46 @@
---
title: NetBox Kubernetes
description: No Fuss Computings Ansible role kubernetes_netbox
date: 2023-10-24
template: project.html
about: https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes
---
This Ansible role, part of our collection `nofusscomputing.kubernetes`, is intended to set up NetBox so that the settings for deploying a Kubernetes cluster can be stored within NetBox.
## Role Details
| Item| Value | Description |
|:---|:---:|:---|
| Dependent Roles | _None_ | |
| Optional Roles | _None_ | |
| Idempotent | _Yes_ | |
| Stats Available | _Not Yet_ | |
| Tags | _Nil_ | |
| Requirements | _None_ | |
## Features
- Adds custom fields to `cluster` object within NetBox that this collection can use to deploy a kubernetes cluster.
!!! info
Due to a bug in the Ansible module `netbox.netbox.netbox_custom_field`, the fields are not created exactly as they should be. For example, the fields are supposed to be set to only display when not empty. For more information see [Github #1210](https://github.com/netbox-community/ansible_modules/issues/1210). We have [added a workaround](https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes/-/merge_requests/56#note_1876912267) so the fields are still created.
Other than that, the fields are created as they should be.
## Usage
To configure NetBox, ensure that the NetBox access variables are set and run playbook `nofusscomputing.netbox.kubernetes_netbox`. This will set up NetBox with the required fields that the [nfc_kubernetes](../nfc_kubernetes/index.md) role uses.
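A minimal usage sketch, assuming NetBox access is supplied via the environment variables the role reads (`NETBOX_API`, `NETBOX_TOKEN`, `NETBOX_VALIDATE_CERT`); the URL and token values are placeholders:

``` bash
# Placeholders: point these at your NetBox instance.
export NETBOX_API="https://netbox.example.org"
export NETBOX_TOKEN="0123456789abcdef0123456789abcdef01234567"
export NETBOX_VALIDATE_CERT="true"

# Run the playbook named above to create the custom fields.
ansible-playbook nofusscomputing.netbox.kubernetes_netbox
```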
## Default Variables
``` yaml title="defaults/main.yaml" linenums="1"
--8<-- "roles/kubernetes_netbox/defaults/main.yaml"
```


@ -10,7 +10,7 @@ This role include logic to generate firewall rules for iptables. Both IPv4 and I
Rules generation workflow:
- itertes over all kubernetes hosts
- iterates over all kubernetes hosts
- adds rules if host is masters for worker access


@ -70,6 +70,8 @@ This Ansible role is designed to deploy a K3s Kubernetes cluster. Without adding
- Install the Helm Binary
- Upgrade cluster
## Role Workflow


@ -8,7 +8,7 @@ namespace: nofusscomputing
name: kubernetes
# The version of the collection. Must be compatible with semantic versioning
version: 1.0.0
version: 1.8.1-a1
# The path to the Markdown (.md) readme file. This path is relative to the root of the collection
readme: README.md
@ -44,7 +44,10 @@ tags:
# L(specifiers,https://python-semanticversion.readthedocs.io/en/latest/#requirement-specification). Multiple version
# range specifiers can be set and are separated by ','
dependencies:
kubernetes.core: '3.0.0'
ansible.posix: '>=1.5.4'
kubernetes.core: '>=3.0.0'
nofusscomputing.firewall: '>=1.1.0'
netbox.netbox: '>=3.16.0'
# The URL of the originating SCM repository
@ -64,17 +67,17 @@ issues: https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernet
# uses 'fnmatch' to match the files or directories. Some directories and files like 'galaxy.yml', '*.pyc', '*.retry',
# and '.git' are always filtered. Mutually exclusive with 'manifest'
build_ignore:
- .vscode/
- artifacts/
- docs/
- .gitlab*
- includes/
- website-template/
- .vscode
- artifacts
- docs
- .git*
- gitlab-ci
- website-template
- .ansible-lint-ignore
- .cz.yaml
- .nfc_automation.yaml
- dockerfile
- mkdocs.yaml
- mkdocs.yml
# A dict controlling use of manifest directives used in building the collection artifact. The key 'directives' is a
# list of MANIFEST.in style


@ -49,6 +49,9 @@ nav:
- projects/ansible/collection/kubernetes/roles/nfc_kubernetes/release_notes.md
- Role kubernetes_netbox:
- projects/ansible/collection/kubernetes/roles/kubernetes_netbox/index.md
- Operations:

64
playbooks/netbox.yaml Normal file

@ -0,0 +1,64 @@
---
- name: Install K3s Kubernetes
hosts: |-
{%- if nfc_pb_host is defined -%}
{{ nfc_pb_host }}
{%- elif nfc_pb_kubernetes_cluster_name is defined -%}
kubernetes_cluster_{{ nfc_pb_kubernetes_cluster_name | lower }}
{%- else -%}
{%- if ansible_limit is defined -%}
{{ ansible_limit }}
{%- else -%}
localhost
{%- endif -%}
{%- endif %}
become: false
gather_facts: false
tasks:
- name: Configure NetBox for Kubernetes Deployment(s)
ansible.builtin.include_role:
name: kubernetes_netbox
tags:
- always
# vars:
#
# Future feature, add playbook to import to awx
#
# nfc_pb_awx_tower_template:
# - name: "Collection/NoFussComputing/Kubernetes/NetBox/Configure"
# ask_credential_on_launch: true
# ask_job_type_on_launch: true
# ask_limit_on_launch: true
# ask_tags_on_launch: true
# ask_variables_on_launch: true
# description: |
# Playbook to Install/Configure Kubernetes using configuration
# from code.
# execution_environment: "No Fuss Computing EE"
# job_type: "check"
# labels:
# - cluster
# - k3s
# - kubernetes
# verbosity: 2
# use_fact_cache: true
# survey_enabled: false

2
requirements.txt Normal file

@ -0,0 +1,2 @@
pynetbox
pytz

9
roles/defaults/main.yaml Normal file

@ -0,0 +1,9 @@
---
#
# NetBox Access Variables. Required
#
# nfc_pb_api_netbox_url: # ENV [NETBOX_API]
# nfc_pb_api_netbox_token: # ENV [NETBOX_TOKEN]
# nfc_pb_api_netbox_validate_cert: true # ENV [NETBOX_VALIDATE_CERT]


@ -0,0 +1,3 @@
## No Fuss Computing - Ansible Role kubernetes_netbox
Nothing to see here


@ -0,0 +1,30 @@
galaxy_info:
role_name: kubernetes_netbox
author: No Fuss Computing
description: Configure the required items within Netbox to support deploying kubernetes from Netbox configuration.
issue_tracker_url: https://gitlab.com/nofusscomputing/projects/ansible/collections/kubernetes
license: MIT
min_ansible_version: '2.15'
platforms:
- name: Debian
versions:
- bullseye
- bookworm
- name: Ubuntu
versions:
- 21
galaxy_tags:
- cluster
- k3s
- kubernetes
- netbox


@ -0,0 +1,255 @@
---
# add cluster type kubernetes
- name: Create Custom Field - Configure Firewall
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: Configure Firewall
name: nfc_role_kubernetes_configure_firewall
type: boolean
ui_visibility: 'hidden-ifunset'
# is_cloneable: false
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - ETCD Enabled
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: ETCD Enabled
name: nfc_role_kubernetes_etcd_enabled
type: boolean
ui_visibility: hidden-ifunset
# is_cloneable: false
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - Install OLM
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: Install OLM
name: nfc_role_kubernetes_install_olm
type: boolean
ui_visibility: hidden-ifunset
# is_cloneable: false
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - Install Helm
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: Install Helm
name: nfc_role_kubernetes_install_helm
type: boolean
ui_visibility: hidden-ifunset
# is_cloneable: false
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - Install KubeVirt
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: Install KubeVirt
name: nfc_role_kubernetes_install_kubevirt
type: boolean
ui_visibility: hidden-ifunset
# is_cloneable: false
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - KubeVirt Operator Replicas
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: KubeVirt Operator Replicas
name: nfc_role_kubernetes_kubevirt_operator_replicas
type: integer
ui_visibility: hidden-ifunset
# is_cloneable: false
validation_minimum: 1
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - Enable MetalLB
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: Enable MetalLB
name: nfc_kubernetes_enable_metallb
type: boolean
ui_visibility: hidden-ifunset
# is_cloneable: false
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - Enable ServiceLB (klipper)
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: Enable ServiceLB (klipper)
name: nfc_kubernetes_enable_servicelb
type: boolean
ui_visibility: hidden-ifunset
# is_cloneable: false
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - Pod Subnet
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: Pod Subnet
name: nfc_role_kubernetes_pod_subnet
object_type: ipam.prefix
type: object
ui_visibility: hidden-ifunset
# is_cloneable: false
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - Service Subnet
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- virtualization.cluster
default: null
group_name: Kubernetes
label: Service Subnet
name: nfc_role_kubernetes_service_subnet
object_type: ipam.prefix
type: object
ui_visibility: hidden-ifunset
# is_cloneable: false
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp


@ -0,0 +1,21 @@
---
- name: Setup NetBox for Kubernetes Cluster Deployments
ansible.builtin.include_tasks:
file: cluster.yaml
apply:
tags:
- always
tags:
- always
- name: Setup NetBox for Kubernetes Service Deployments
ansible.builtin.include_tasks:
file: services.yaml
apply:
tags:
- always
tags:
- never
- services


@ -0,0 +1,50 @@
---
- name: Create Custom Field - Instance
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- ipam.service
group_name: Kubernetes
label: Instance Name
description: "Name of the Instance to be deployed"
name: service_kubernetes_instance
type: text
ui_visibility: hidden-ifunset
# is_cloneable: true
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp
- name: Create Custom Field - Namespace
netbox.netbox.netbox_custom_field:
netbox_url: "{{ lookup('env', 'NETBOX_API') | default(nfc_pb_api_netbox_url) }}"
netbox_token: "{{ lookup('env', 'NETBOX_TOKEN') | default(nfc_pb_api_netbox_token) }}"
data:
content_types:
- ipam.service
group_name: Kubernetes
label: Service Namespace
description: "Deployment Namespace"
name: service_kubernetes_namespace
type: text
ui_visibility: hidden-ifunset
# is_cloneable: true
weight: 100
state: present
validate_certs: "{{ lookup('env', 'NETBOX_VALIDATE_CERT') | default(nfc_pb_api_netbox_validate_cert) | default(true) | bool }}"
delegate_to: localhost
failed_when: >
custom_field_tmp.msg != 'ui_visibility does not exist on existing object. Check to make sure valid field.'
and
custom_field_tmp.diff is not defined
register: custom_field_tmp


@ -0,0 +1,3 @@
## No Fuss Computing - Ansible Role nfc_kubernetes
Nothing to see here


@ -34,6 +34,8 @@ nfc_role_kubernetes_container_images:
nfc_role_kubernetes_cluster_domain: cluster.local
nfc_role_kubernetes_configure_firewall: true
nfc_role_kubernetes_etcd_enabled: false
nfc_role_kubernetes_install_olm: false
@ -46,6 +48,8 @@ nfc_role_kubernetes_kubevirt_operator_replicas: 1
nfc_role_kubernetes_oidc_enabled: false
nfc_role_kubernetes_resolv_conf_file: /etc/resolv.conf
nfc_role_kubernetes_pod_subnet: 172.16.248.0/21
nfc_role_kubernetes_service_subnet: 172.16.244.0/22


@ -1,31 +1,20 @@
---
- name: "restart ContainerD"
service:
name: containerd
state: restarted
when: >
containerd_config.changed | default(false) | bool
and
containerd_installed.rc | default(1) | int == 0
and
kubernetes_type == 'k8s'
tags:
- configure
- install
- name: Reboot Node
ansible.builtin.reboot:
reboot_timeout: 300
listen: reboot_host
when: ansible_connection == 'ssh'
- name: Restart Kubernetes
ansible.builtin.service:
name: |-
{%- if kubernetes_type == 'k3s' -%}
{%- if Kubernetes_Master | default(false) | bool -%}
k3s
{%- else -%}
k3s-agent
{%- endif -%}
{%- if nfc_role_kubernetes_master | default(false) | bool -%}
k3s
{%- else -%}
kubelet
{%- endif %}
k3s-agent
{%- endif -%}
state: restarted
listen: kubernetes_restart
when: |-
@ -33,21 +22,20 @@
nfc_kubernetes_no_restart
or
(
inventory_hostname in groups['kubernetes_master']
nfc_role_kubernetes_master
and
nfc_kubernetes_no_restart_master
)
or
(
inventory_hostname == kubernetes_config.cluster.prime.name
inventory_hostname == kubernetes_config.cluster.prime.name | default(inventory_hostname)
and
nfc_kubernetes_no_restart_prime
)
or
(
inventory_hostname in groups['kubernetes_worker']
nfc_role_kubernetes_worker
and
nfc_kubernetes_no_restart_slave
)
)


@ -5,6 +5,9 @@
url: https://baltocdn.com/helm/signing.asc
dest: /usr/share/keyrings/helm.asc
mode: 740
changed_when: not ansible_check_mode
delay: 10
retries: 3
- name: Add Helm Repository


@ -4,7 +4,10 @@
ansible.builtin.command:
cmd: hostname
changed_when: false
check_mode: false
register: hostname_to_check
tags:
- always
- name: Hostname Check
@ -12,35 +15,72 @@
that:
- hostname_to_check.stdout == inventory_hostname
msg: The hostname must match the inventory_hostname
tags:
- always
when: >
inventory_hostname != 'localhost'
- name: Testing Env Variables
ansible.builtin.set_fact:
ansible_default_ipv4: {
"address": "127.0.0.1"
}
check_mode: false
tags:
- always
when: >
lookup('ansible.builtin.env', 'CI_COMMIT_SHA') | default('') != ''
- name: Gather Facts required by role
ansible.builtin.setup:
gather_subset:
- all_ipv4_addresses
- os_family
- processor
tags:
- always
when: >
ansible_architecture is not defined
or
ansible_default_ipv4 is not defined
or
ansible_os_family is not defined
- name: Check Machine Architecture
ansible.builtin.set_fact:
nfc_kubernetes_install_architectures: "{{ nfc_kubernetes_install_architectures | default({}) | combine({ansible_architecture: ''}) }}"
tags:
- always
- name: Firewall Rules
- name: Configure Kubernetes Firewall Rules
ansible.builtin.include_role:
name: nfc_firewall
name: nofusscomputing.firewall.nfc_firewall
vars:
nfc_firewall_enabled_kubernetes: "{{ nfc_kubernetes.enable_firewall | default(false) | bool }}"
nfc_role_firewall_firewall_type: iptables
nfc_role_firewall_additional_rules: "{{ ( lookup('template', 'vars/firewall_rules.yaml') | from_yaml ).kubernetes_chains }}"
tags:
- never
- install
- always
when: >
nfc_role_kubernetes_configure_firewall
# FIXME: reload the firewall via `iptables-reloader`
- name: Reload iptables
ansible.builtin.command:
cmd: bash -c /usr/bin/iptables-reloader
changed_when: false
- name: Install required software
ansible.builtin.apt:
name: python3-pip
install_recommends: false
state: present
when: >
install_kubernetes | default(true) | bool
and
not kubernetes_installed | default(false) | bool
tags:
- never
- install
- always
# kubernetes_installed
- name: K3s Install
ansible.builtin.include_tasks:
file: k3s/install.yaml
@ -65,6 +105,8 @@
install_kubernetes | default(true) | bool
and
kubernetes_installed | default(false) | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
tags:
- always
@ -81,6 +123,8 @@
kubernetes_config.kube_virt.enabled | default(nfc_role_kubernetes_install_kubevirt)
and
inventory_hostname in kubernetes_config.kube_virt.nodes | default([ inventory_hostname ]) | list
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
tags:
- always
@ -97,5 +141,7 @@
kubernetes_config.helm.enabled | default(nfc_role_kubernetes_install_helm)
and
nfc_role_kubernetes_master
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
tags:
- always

View File

@ -14,7 +14,7 @@
- name: Check if FW dir exists
ansible.builtin.stat:
name: /etc/iptables.rules.d
name: /etc/iptables-reloader/rules.d
register: firewall_rules_dir_metadata
@ -37,9 +37,18 @@
when: "{{ kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname }}"
- src: iptables-kubernetes.rules.j2
dest: "/etc/iptables.rules.d/iptables-kubernetes.rules"
dest: "/etc/iptables-reloader/rules.d/iptables-kubernetes.rules"
notify: firewall_reloader
when: "{{ firewall_rules_dir_metadata.stat.exists }}"
when: |-
{%- if firewall_installed -%}
{{ firewall_rules_dir_metadata.stat.exists }}
{%- else -%}
false
{%- endif %}
- name: Add Kubernetes Node Labels

View File

@ -1,5 +1,16 @@
---
- name: Install required python modules
ansible.builtin.pip:
name: "{{ item }}"
state: present
loop: "{{ pip_packages }}"
vars:
pip_packages:
- kubernetes>=12.0.0
- PyYAML>=3.11
- name: Check for calico deployment manifest
ansible.builtin.stat:
name: /var/lib/rancher/k3s/server/manifests/calico.yaml
@ -21,10 +32,11 @@
loop_var: package
vars:
packages:
- name: curl
- name: iptables
- name: jq
- name: wireguard
- wget
- curl
- iptables
- jq
- wireguard
- name: Remove swapfile from /etc/fstab
@ -41,14 +53,29 @@
- install
- name: Disable swap
ansible.builtin.command:
cmd: swapoff -a
changed_when: false
when:
- ansible_os_family == 'Debian'
tags:
- install
- name: Testing Environment try/catch
block:
- name: Disable swap
ansible.builtin.command:
cmd: swapoff -a
changed_when: false
when:
- ansible_os_family == 'Debian'
tags:
- install
rescue:
- name: Check if inside Gitlab CI
ansible.builtin.assert:
that:
- lookup('ansible.builtin.env', 'CI_COMMIT_SHA') | default('') != ''
success_msg: "Inside testing environment, 'Disable swap' error OK"
fail_msg: "You should figure out what went wrong"
- name: Check for an Armbian OS system
ansible.builtin.stat:
@ -138,28 +165,245 @@
when: directory_network_manager_metadata.stat.exists
- name: Check if K3s Installed
- name: File Metadata - k3s binary
ansible.builtin.stat:
checksum_algorithm: sha256
name: /usr/local/bin/k3s
register: metadata_file_k3s_existing_binary
- name: File Metadata - k3s[-agent].service
ansible.builtin.stat:
checksum_algorithm: sha256
name: |-
/etc/systemd/system/k3s
{%- if not nfc_role_kubernetes_master | default(false) | bool -%}
-agent
{%- endif -%}
.service
register: metadata_file_k3s_service
- name: Directory Metadata - /etc/rancher/k3s/k3s.yaml
ansible.builtin.stat:
name: /etc/rancher/k3s/k3s.yaml
register: metadata_dir_etc_k3s
- name: File Metadata - /var/lib/rancher/k3s/server/token
ansible.builtin.stat:
checksum_algorithm: sha256
name: /var/lib/rancher/k3s/server/token
register: metadata_file_var_k3s_token
- name: Config Link
ansible.builtin.shell:
cmd: |
if [[ $(service k3s status) ]]; then exit 0; else exit 1; fi
executable: /bin/bash
changed_when: false
failed_when: false
register: k3s_installed
cmd: >
ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config
executable: bash
creates: ~/.kube/config
when: >
nfc_role_kubernetes_master | default(false) | bool
and
metadata_dir_etc_k3s.stat.exists | default(false) | bool
- name: Check if K3s Installed
- name: Fetch Kubernetes Node Object
kubernetes.core.k8s_info:
kind: Node
name: "{{ inventory_hostname }}"
register: kubernetes_node
when: >
metadata_file_k3s_existing_binary.stat.exists | default(false) | bool
and
metadata_file_k3s_service.stat.exists | default(false) | bool
and
metadata_dir_etc_k3s.stat.exists | default(false) | bool
and
metadata_file_var_k3s_token.stat.exists | default(false) | bool
- name: Fetch Installed K3s Metadata
ansible.builtin.shell:
cmd: |
if [[ $(service k3s-agent status) ]]; then exit 0; else exit 1; fi
export installed_version=$(k3s --version | grep k3s | awk '{print $3}');
export installed="
{%- if
metadata_file_k3s_existing_binary.stat.exists | default(false) | bool
and
metadata_file_k3s_service.stat.exists | default(false) | bool
and
metadata_dir_etc_k3s.stat.exists | default(false) | bool
and
metadata_file_var_k3s_token.stat.exists | default(false) | bool
-%}
true
{%- else -%}
false
{%- endif -%}";
if ! service k3s status > /dev/null; then
export installed='false';
fi
export running_version="{{ kubernetes_node.resources[0].status.nodeInfo.kubeletVersion | default('0') }}";
export correct_hash=$(wget -q https://github.com/k3s-io/k3s/releases/download/v
{{-KubernetesVersion + KubernetesVersion_k3s_prefix | urlencode -}}
/sha256sum-
{%- if ansible_architecture | lower == 'x86_64' -%}
amd64
{%- elif ansible_architecture | lower == 'aarch64' -%}
arm64
{%- endif %}.txt -O - | grep -v 'images' | awk '{print $1}');
cat <<EOF
{
"current_hash": "{{ metadata_file_k3s_existing_binary.stat.checksum | default('') }}",
"current_version": "${installed_version}",
"desired_hash": "${correct_hash}",
"desired_version": "v{{ KubernetesVersion + KubernetesVersion_k3s_prefix | default('') }}",
"installed": ${installed},
"running_version": "${running_version}"
}
EOF
executable: /bin/bash
changed_when: false
check_mode: false
failed_when: false
register: k3s_installed
register: k3s_metadata
- name: K3s Metadata Fact
ansible.builtin.set_fact:
node_k3s: "{{ k3s_metadata.stdout | from_yaml }}"
- name: Cached K3s Binary Details
ansible.builtin.stat:
path: "/tmp/k3s.{{ ansible_architecture }}"
checksum_algorithm: sha256
delegate_to: localhost
register: file_cached_k3s_binary
vars:
ansible_connection: local
- name: Remove Cached K3s Binaries
ansible.builtin.file:
path: "/tmp/k3s.{{ ansible_architecture }}"
state: absent
delegate_to: localhost
vars:
ansible_connection: local
when: >
not nfc_role_kubernetes_worker | default(false) | bool
file_cached_k3s_binary.stat.checksum | default('0') != node_k3s.desired_hash
# Workaround. See: https://github.com/ansible/awx/issues/15161
- name: Build K3s Download URL
ansible.builtin.set_fact:
cacheable: false
url_download_k3s: |-
[
{%- for key, value in nfc_kubernetes_install_architectures | dict2items -%}
"https://github.com/k3s-io/k3s/releases/download/
{{- node_k3s.desired_version | urlencode -}}
/k3s
{%- if key == 'aarch64' -%}
-arm64
{%- endif %}",
{%- endfor -%}
]
changed_when: false
check_mode: false
delegate_to: localhost
loop: "{{ nfc_kubernetes_install_architectures | dict2items }}"
loop_control:
loop_var: cpu_arch
vars:
ansible_connection: local
- name: Download K3s Binary
ansible.builtin.uri:
url: "{{ url | string }}"
method: GET
return_content: false
status_code:
- 200
- 304
dest: "/tmp/k3s.{{ ansible_architecture }}"
mode: "744"
changed_when: not ansible_check_mode
check_mode: false
delay: 10
retries: 3
register: k3s_download_files
delegate_to: localhost
failed_when: >
(lookup('ansible.builtin.file', '/tmp/k3s.' + ansible_architecture) | hash('sha256') | string) != node_k3s.desired_hash
and
(
k3s_download_files.status | int != 200
or
k3s_download_files.status | int != 304
)
run_once: true
when: ansible_os_family == 'Debian'
loop: "{{ url_download_k3s }}"
loop_control:
loop_var: url
vars:
ansible_connection: local
- name: Copy K3s binary to Host
ansible.builtin.copy:
src: "/tmp/k3s.{{ ansible_architecture }}"
dest: "/usr/local/bin/k3s"
mode: '741'
owner: root
group: root
register: k3s_binary_copy
when: >
node_k3s.current_hash != node_k3s.desired_hash
- name: K3s Binary Upgrade
ansible.builtin.service:
name: |-
{%- if nfc_role_kubernetes_master | default(false) | bool -%}
k3s
{%- else -%}
k3s-agent
{%- endif %}
state: restarted
register: k3s_upgrade_service_restart
when: >
(
k3s_binary_copy.changed | default(false) | bool
and
node_k3s.installed | default(false) | bool
)
or
(
node_k3s.running_version != node_k3s.desired_version
and
node_k3s.installed | default(false) | bool
)
- name: Create Fact - cluster_upgraded
ansible.builtin.set_fact:
nfc_role_kubernetes_cluster_upgraded: true
cacheable: true
when: >
k3s_upgrade_service_restart.changed | default(false) | bool
- name: Download Install Scripts
@ -172,7 +416,10 @@
- 304
dest: "{{ item.dest }}"
mode: "744"
check_mode: false
changed_when: false
delay: 10
retries: 3
register: k3s_download_script
delegate_to: localhost
run_once: true
@ -181,6 +428,8 @@
ansible_os_family == 'Debian'
and
item.when | default(true) | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
loop: "{{ download_files }}"
vars:
ansible_connection: local
@ -192,61 +441,6 @@
when: "{{ nfc_role_kubernetes_install_olm }}"
- name: Download K3s Binary
ansible.builtin.uri:
url: |-
https://github.com/k3s-io/k3s/releases/download/v
{{- KubernetesVersion + KubernetesVersion_k3s_prefix | urlencode -}}
/k3s
{%- if cpu_arch.key == 'aarch64' -%}
-arm64
{%- endif %}
method: GET
return_content: false
status_code:
- 200
- 304
dest: "/tmp/k3s.{{ cpu_arch.key }}"
mode: "744"
changed_when: false
register: k3s_download_files
delegate_to: localhost
run_once: true
# no_log: true
when: ansible_os_family == 'Debian'
loop: "{{ nfc_kubernetes_install_architectures | dict2items }}"
loop_control:
loop_var: cpu_arch
vars:
ansible_connection: local
- name: "[TRACE] Downloaded File SHA256"
ansible.builtin.set_fact:
hash_sha256_k3s_downloaded_binary: "{{ lookup('ansible.builtin.file', '/tmp/k3s.' + cpu_arch.key) | hash('sha256') | string }}"
delegate_to: localhost
loop: "{{ nfc_kubernetes_install_architectures | dict2items }}"
loop_control:
loop_var: cpu_arch
- name: Existing k3s File hash
ansible.builtin.stat:
checksum_algorithm: sha256
name: /usr/local/bin/k3s
register: hash_sha256_k3s_existing_binary
- name: Copy K3s binary to Host
ansible.builtin.copy:
src: "/tmp/k3s.{{ ansible_architecture }}"
dest: "/usr/local/bin/k3s"
mode: '741'
owner: root
group: root
when: hash_sha256_k3s_existing_binary.stat.checksum | default('0') != hash_sha256_k3s_downloaded_binary
- name: Copy install scripts to Host
ansible.builtin.copy:
src: "{{ item.path }}"
@ -263,6 +457,8 @@
when: "{{ nfc_role_kubernetes_install_olm }}"
when: >
item.when | default(true) | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Required Initial config files
@ -276,7 +472,8 @@
loop: "{{ k3s.files }}"
when: >
item.when | default(true) | bool
# kubernetes_config.cluster.prime.name == inventory_hostname
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Copy Initial required templates
@ -291,6 +488,8 @@
diff: true
when: >
item.when | default(true) | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
vars:
templates_to_apply:
- src: k3s-config.yaml.j2
@ -308,7 +507,7 @@
and
file_calico_yaml_metadata.stat.exists
and
k3s_installed.rc == 0
not node_k3s.installed | bool
)
or
'calico_manifest' in ansible_run_tags
@ -335,19 +534,23 @@
ansible.builtin.command:
cmd: update-alternatives --set iptables /usr/sbin/iptables-legacy
changed_when: false
when: >
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Install K3s (prime master)
ansible.builtin.shell:
cmd: |
INSTALL_K3S_SKIP_DOWNLOAD=true \
INSTALL_K3S_VERSION="v{{ KubernetesVersion }}{{ KubernetesVersion_k3s_prefix }}" \
INSTALL_K3S_VERSION="{{ node_k3s.desired_version }}" \
/tmp/install.sh {% if nfc_role_kubernetes_etcd_enabled %}--cluster-init{% endif %}
changed_when: false
when: >
kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
and
k3s_installed.rc == 1
not node_k3s.installed | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Install Calico Operator
@ -370,6 +573,8 @@
'calico_manifest' not in ansible_run_tags
and
kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Install MetalLB Operator
@ -389,6 +594,8 @@
nfc_kubernetes_enable_metallb | default(false) | bool
and
kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Wait for kubernetes prime to be ready
@ -413,6 +620,22 @@
kubernetes_ready_check.rc != 0
changed_when: false
failed_when: kubernetes_ready_check.rc != 0
when: >
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
and
not ansible_check_mode
- name: Config Link
ansible.builtin.shell:
cmd: >
ln -s /etc/rancher/k3s/k3s.yaml ~/.kube/config
executable: bash
creates: ~/.kube/config
when: >
nfc_role_kubernetes_master | default(false) | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Install olm
@ -429,6 +652,8 @@
kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
and
nfc_role_kubernetes_install_olm | default(false) | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Uninstall OLM
@ -457,6 +682,8 @@
kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
and
'olm_uninstall' in ansible_run_tags
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Enable Cluster Encryption
@ -469,6 +696,8 @@
and
kubernetes_config.cluster.networking.encrypt | default(false) | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
and
(
'calico_manifest' in ansible_run_tags
or
@ -487,6 +716,8 @@
run_once: true
register: k3s_join_token
no_log: true # Value is sensitive
when: >
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Create Token fact
@ -495,6 +726,8 @@
delegate_to: "{{ kubernetes_config.cluster.prime.name | default(inventory_hostname) }}"
run_once: true
no_log: true # Value is sensitive
when: >
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Install K3s (master nodes)
@ -502,7 +735,7 @@
cmd: |
INSTALL_K3S_EXEC="server" \
INSTALL_K3S_SKIP_DOWNLOAD=true \
INSTALL_K3S_VERSION="v{{ KubernetesVersion }}{{ KubernetesVersion_k3s_prefix }}" \
INSTALL_K3S_VERSION="{{ node_k3s.desired_version }}" \
K3S_TOKEN="{{ k3s_join_token }}" \
/tmp/install.sh
executable: /bin/bash
@ -512,7 +745,9 @@
and
not kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
and
k3s_installed.rc == 1
not node_k3s.installed | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Install K3s (worker nodes)
@ -521,7 +756,7 @@
set -o pipefail
INSTALL_K3S_EXEC="agent" \
INSTALL_K3S_SKIP_DOWNLOAD=true \
INSTALL_K3S_VERSION="v{{ KubernetesVersion }}{{ KubernetesVersion_k3s_prefix }}" \
INSTALL_K3S_VERSION="v{{ node_k3s.desired_version }}" \
K3S_TOKEN="{{ k3s_join_token }}" \
K3S_URL="https://{{ hostvars[kubernetes_config.cluster.prime.name | default(inventory_hostname)].ansible_host }}:6443" \
/tmp/install.sh -
@ -532,7 +767,9 @@
and
not kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
and
k3s_installed.rc == 1
not node_k3s.installed | bool
and
not nfc_role_kubernetes_cluster_upgraded | default(false) | bool
- name: Set Kubernetes Final Install Fact

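The Fetch Installed K3s Metadata step above emits a small JSON document (current and desired hash and version, plus an installed flag) that is parsed into node_k3s and then drives the remaining logic: the binary is copied only when the hashes differ, the service is restarted only on an already-installed node that received a new binary or runs the wrong version, and the later install steps are skipped once nfc_role_kubernetes_cluster_upgraded is set. The Build K3s Download URL step renders the per-architecture URLs into a single string fact as a workaround for the linked AWX issue, and that fact is expanded again by the loop in the download task. A minimal, hypothetical guard in the same style (not part of the collection) that re-checks the staged binary before it is copied:

- name: Example - verify the staged k3s binary before copying it
  ansible.builtin.assert:
    that:
      - (lookup('ansible.builtin.file', '/tmp/k3s.' + ansible_architecture) | hash('sha256') | string) == node_k3s.desired_hash
    fail_msg: "staged /tmp/k3s.{{ ansible_architecture }} does not match the expected sha256sum"
  when: >
    node_k3s.current_hash != node_k3s.desired_hash
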
View File

@ -110,6 +110,7 @@
owner: root
group: 'root'
changed_when: false
check_mode: false
become: true
delegate_to: localhost
loop: "{{ nfc_kubernetes_install_architectures | dict2items }}"

View File

@ -9,6 +9,8 @@
kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
and
nfc_role_kubernetes_prime | bool
and
not kubernetes_installed | default(false)
- name: Install/Configure Kubernetes on remaining Master Nodes
@ -20,6 +22,8 @@
kubernetes_config.cluster.prime.name | default(inventory_hostname) != inventory_hostname
and
nfc_role_kubernetes_master | bool
and
not kubernetes_installed | default(false)
- name: Install/Configure Kubernetes on Worker Nodes
@ -33,3 +37,5 @@
not nfc_role_kubernetes_prime | bool
and
not nfc_role_kubernetes_master | bool
and
not kubernetes_installed | default(false)

View File

@ -1,7 +1,7 @@
#
# IP Tables Firewall Rules for Kubernetes
#
# Managed By ansible/role/nfc_kubernetes
# Managed By ansible/collection/kubernetes
#
# Don't edit this file directly as it will be overwritten. To grant a host API access
# edit the cluster config, adding the hostname/ip to path kubernetes_config.cluster.access
@ -61,7 +61,7 @@
{%- if kubernetes_host != '' -%}
{%- for master_host in groups['kubernetes_master'] -%}
{%- for master_host in groups['kubernetes_master'] | default([]) -%}
{%- if master_host in groups[kubernetes_config.cluster.group_name | default('me_is_optional')] | default([]) -%}
@ -149,8 +149,13 @@
{#- All cluster Hosts -#}
{%- if nfc_role_kubernetes_master | default(false) | bool -%}
{%- if
nfc_role_kubernetes_master | default(false) | bool
and
kubernetes_host not in groups['kubernetes_master']
and
'-I kubernetes-api -s ' + kubernetes_host + ' -j ACCEPT' not in data.firewall_rules
-%}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-api -s ' + kubernetes_host + ' -j ACCEPT'] -%}
@ -162,9 +167,17 @@
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-flannel-wg-four -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-flannel-wg-six -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- if false -%}{# IPv6 is disabled; see install.yaml #}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-flannel-wg-six -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- endif -%}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-calico-bgp -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- if false -%}{# BGP is disabled; see Installation-manifest-Calico_Cluster.yaml.j2 #}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-calico-bgp -s ' + kubernetes_host + ' -j ACCEPT'] -%}
{%- endif -%}
{%- set data.firewall_rules = data.firewall_rules + ['-I kubernetes-calico-typha -s ' + kubernetes_host + ' -j ACCEPT'] -%}

View File

@ -7,7 +7,7 @@
#
{%- if
inventory_hostname in groups['kubernetes_master']
nfc_role_kubernetes_master
or
kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
-%}
@ -128,6 +128,16 @@
{# SoF All Nodes #}
{%- if inventory_hostname == 'localhost' -%}
{%- set node_name = hostname_to_check.stdout -%}
{%- else -%}
{%- set node_name = inventory_hostname -%}
{%- endif -%}
{%
set all_nodes_config = {
@ -135,7 +145,8 @@
"system-reserved=cpu=" + kubelet_arg_system_reserved_cpu + ",memory=" + kubelet_arg_system_reserved_memory +
",ephemeral-storage=" + kubelet_arg_system_reserved_storage
],
"node-name": inventory_hostname,
"node-name": node_name,
"resolv-conf": nfc_role_kubernetes_resolv_conf_file,
}
-%}
@ -143,13 +154,13 @@
{%- if groups[kubernetes_config.cluster.group_name | default('make_me_optional')] | default([]) | list | length > 0 -%}
{%- if k3s_installed.rc == 0 -%}
{%- if node_k3s.installed -%}
{%- set ns = namespace(server=[]) -%}
{%- for cluster_node in groups[kubernetes_config.cluster.group_name] -%}
{%- if cluster_node in groups['kubernetes_master'] -%}
{%- if cluster_node in groups['kubernetes_master'] | default([]) -%}
{%- if hostvars[cluster_node].host_external_ip is defined -%}
@ -188,7 +199,7 @@
{%- elif
kubernetes_config.cluster.prime.name != inventory_hostname
and
k3s_installed.rc == 1
not node_k3s.installed
-%}
{%- set server = (server | default([])) + [
@ -228,7 +239,7 @@
{%- if
inventory_hostname in groups['kubernetes_master']
nfc_role_kubernetes_master
or
kubernetes_config.cluster.prime.name | default(inventory_hostname) == inventory_hostname
-%}

View File

@ -0,0 +1,90 @@
---
kubernetes_chains:
- name: kubernetes-embedded-etcd
chain: true
table: INPUT
protocol: tcp
dest:
port:
- '2379'
- '2380'
comment: etcd. Servers only
when: "{{ nfc_role_kubernetes_etcd_enabled }}"
- name: kubernetes-api
chain: true
table: INPUT
protocol: tcp
dest:
port: '6443'
comment: Kubernetes API access. All Cluster hosts and end users
- name: kubernetes-calico-bgp
chain: true
table: INPUT
protocol: tcp
dest:
port: '179'
comment: Kubernetes Calico BGP. All Cluster hosts and end users
when: false # currently hard set to false. see Installation-manifest-Calico_Cluster.yaml.j2
- name: kubernetes-flannel-vxlan
chain: true
table: INPUT
protocol: udp
dest:
port: '4789'
comment: Flannel. All cluster hosts
- name: kubernetes-kubelet-metrics
chain: true
table: INPUT
protocol: tcp
dest:
port: '10250'
comment: Kubernetes Metrics. All cluster hosts
- name: kubernetes-flannel-wg-four
chain: true
table: INPUT
protocol: udp
dest:
port: '51820'
comment: Flannel Wireguard IPv4. All cluster hosts
- name: kubernetes-flannel-wg-six
chain: true
table: INPUT
protocol: udp
dest:
port: '51821'
comment: Flannel Wireguard IPv6. All cluster hosts
when: false # IPv6 is disabled. See install.yaml sysctl
- name: kubernetes-calico-typha
chain: true
table: INPUT
protocol: tcp
dest:
port: '5473'
comment: Calico networking with Typha enabled. Typha agent hosts.
- name: metallb-l2-tcp
chain: true
table: INPUT
protocol: tcp
dest:
port: '7946'
comment: MetalLB Gossip
when: "{{ nfc_kubernetes_enable_metallb }}"
- name: metallb-l2-udp
chain: true
table: INPUT
protocol: udp
dest:
port: '7946'
comment: MetalLB Gossip
when: "{{ nfc_kubernetes_enable_metallb }}"
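
These chain definitions are not applied directly; the main task file earlier in this change renders this template and passes the result to the firewall role. A condensed sketch of that consumption, using the same variable names as above:

- name: Example - hand the chain list to the firewall role
  ansible.builtin.include_role:
    name: nofusscomputing.firewall.nfc_firewall
  vars:
    nfc_role_firewall_firewall_type: iptables
    nfc_role_firewall_additional_rules: "{{ (lookup('template', 'vars/firewall_rules.yaml') | from_yaml).kubernetes_chains }}"
  when: >
    nfc_role_kubernetes_configure_firewall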