# kubernetes
Kubernetes is an open-source platform that automates the deployment, scaling, and management of containerized applications. It acts as an orchestrator, ensuring your containers run reliably across clusters of machines while handling networking, storage, and updates without downtime.
- [kubectl](#kubectl)
- [Get Pod](#get-pod)
- [Pod delete](#pod-delete)
- [OOMKilled](#oomkilled)
- [Rollout](#rollout)
- [Custom Resource Definitions](#custom-resource-definitions)
- [Helper pods](#helper-pods)
- [network testing](#network-testing)
- [Set Replicas](#set-replicas)
- [taint nodes](#taint-nodes)
- [control plane - NoSchedule](#control-plane---noschedule)
- [Resources](#resources)
- [Persistent volumes claims](#persistent-volumes-claims)
- [Services Accounts](#services-accounts)
- [Namespaces](#namespaces)
- [Secrets](#secrets)
- [Manifest - Opaque / Base64](#manifest---opaque--base64)
- [Manifest - StringData](#manifest---stringdata)
- [Inline with heredoc and environment variables](#inline-with-heredoc-and-environment-variables)
- [substr](#substr)
- [get certificate end date](#get-certificate-end-date)
- [service accounts](#service-accounts)
- [core-dns](#core-dns)
- [Services DNS Name](#services-dns-name)
- [k3s](#k3s)
- [Install / Setup](#install--setup)
- [prune old image](#prune-old-image)
- [check system logs](#check-system-logs)
- [Workarounds \& Fixes](#workarounds--fixes)
- [Failed unmounting var-lib-rancher.mount on reboot](#failed-unmounting-var-lib-ranchermount-on-reboot)
## kubectl
kubectl is the command-line tool used to interact with Kubernetes clusters. Think of it as the “remote control” for Kubernetes: it lets you deploy applications, inspect resources, and manage cluster operations directly from your terminal.
**Create namespace:**
``` bash
kubectl create namespace tests
```
### Get Pod
**Get pod name by label `app`:**
```bash
POD_NAME=$(kubectl get pod -l app=borg-backup-sidekick -n git-limbosolutions-com -o jsonpath='{.items[0].metadata.name}')
echo $POD_NAME
```
### Pod delete
**Restart local-path-provisioner:**
``` bash
kubectl delete pod -n kube-system -l app=local-path-provisioner
```
### OOMKilled
**list all OOMKilled pods:**
``` bash
kubectl get events --all-namespaces | grep -i "OOMKilled"
```
``` bash
kubectl get pods --all-namespaces \
-o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}' \
| grep OOMKilled
```
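To get a quick count of OOMKills per namespace, the `namespace pod reason` output of the query above can be aggregated with awk. Shown here against hypothetical sample lines so the pipeline runs without a cluster:

```shell
# Hypothetical sample lines in the same "namespace pod reason" format
# that the jsonpath query above emits:
printf '%s\n' \
  'media jellyfin-0 OOMKilled' \
  'mqtt zigbee2mqtt-0 OOMKilled' \
  'media jellyfin-0 OOMKilled' |
  awk '{count[$1]++} END {for (ns in count) print count[ns], ns}' |
  sort -rn
# → 2 media
#   1 mqtt
```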
### Rollout
**rollout coredns:**
``` bash
kubectl rollout restart deployment coredns -n kube-system
```
### Custom Resource Definitions
- **Definition:** A Custom Resource Definition (CRD) is an extension of the Kubernetes API.
- **Purpose:** They allow you to define new resource kinds (e.g., Database, Backup, FooBar) that behave like native Kubernetes objects.
- **Analogy:** By default, Kubernetes understands objects like Pods and Services. With CRDs, you can add your own object types and manage them with kubectl just like built-in resources.
**List traefik CRDS:**
```bash
kubectl get crds | grep traefik
```
### Helper pods
#### network testing
``` bash
kubectl run -i --tty dns-test --namespace tests --image=busybox --restart=Never -- sh
kubectl delete pod dns-test --namespace tests || true
```
**Example using yaml and hostNetwork:**
- Create Pod
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
  namespace: tests
spec:
  hostNetwork: true
  containers:
    - name: dns-test
      image: busybox
      command: ["sh"]
      stdin: true
      tty: true
```
- Attach to Pod
```bash
kubectl attach -it dns-test -n tests
```
- Execute command inside pod.
``` bash
nslookup google.com
```
- Delete pod
```bash
kubectl delete pod dns-test --namespace tests
```
### Set Replicas
**Set deployment replicas to 0:**
```bash
kubectl patch deployment <deployment-name> \
-n <namespace> \
-p '{"spec":{"replicas":0}}'
```
**Set statefulset replicas to 0:**
```bash
kubectl patch statefulset zigbee2mqtt \
-n mqtt \
-p '{"spec":{"replicas":0}}'
```
### taint nodes
#### control plane - NoSchedule
``` bash
MASTER_NODE_NAME="master-node-name"
kubectl taint nodes ${MASTER_NODE_NAME} node-role.kubernetes.io/control-plane=:NoSchedule
```
### Resources
**List all resources (filtered for traefik):**
```bash
kubectl get all -n kube-system | grep traefik
```
**List service accounts:**
```bash
kubectl get serviceAccount --all-namespaces
```
### Persistent volumes claims
**Patch pvc to retain policy:**
``` bash
PVC_NAME="????"
NAMESPACE="????"
PV_NAME=$(kubectl get pvc "$PVC_NAME" -n "$NAMESPACE" -o jsonpath='{.spec.volumeName}')
kubectl patch pv $PV_NAME \
-p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
```
### Services Accounts
**List all:**
```bash
kubectl get serviceAccount --all-namespaces
```
**Get Service Account Token:**
```bash
kubectl get secret <secret_name> -o jsonpath='{.data.token}' | base64 -d
```
```bash
kubectl get secret <secret_name> -o jsonpath='{.data.token}' | base64 -d > ./service-account-secret-base64
```
**Get Cluster certificate Base64:**
```bash
kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}'
```
## Namespaces
``` yaml
apiVersion: v1
kind: Namespace
metadata:
  name: namespace-name
  labels:
    name: namespace-name
```
## Secrets
### Manifest - Opaque / Base64
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: namespace-name
type: Opaque
data:
  SERVER_ADDRESS: MTI3LjAuMC4x # "127.0.0.1" base64-encoded
```
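The base64 value for `data:` can be produced and verified on the shell. The `-n` flag matters: without it, a trailing newline gets encoded into the secret.

```shell
echo -n '127.0.0.1' | base64        # encode: prints MTI3LjAuMC4x
echo -n 'MTI3LjAuMC4x' | base64 -d  # decode: prints 127.0.0.1
```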
### Manifest - StringData
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: namespace-name
stringData:
  SERVER_ADDRESS: 127.0.0.1
```
### Inline with heredoc and environment variables
``` bash
SERVER_ADDRESS=127.0.0.1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: namespace-name
stringData:
  SERVER_ADDRESS: ${SERVER_ADDRESS}
EOF
```
### substr
**yaml secret template:**
``` yaml
# ./secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: namespace-name
stringData:
  SERVER_ADDRESS: ${SERVER_ADDRESS}
```
``` bash
export SERVER_ADDRESS="127.0.0.1"
envsubst < ./secret.yaml | kubectl apply -f -
```
**env file and envsubst:**
``` bash
#---
# ./.env
# content:
# SERVER_ADDRESS=127.0.0.1
#---
set -a
source ./.env
set +a
envsubst < ./secret.yaml | kubectl apply -f -
```
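The `set -a` / `set +a` pair is what makes this work: it marks every variable assigned while sourcing as exported, so child processes such as envsubst can see them. A small self-contained sketch, using a hypothetical temp file:

```shell
# Write a demo env file (hypothetical path).
cat > /tmp/demo.env <<'EOF'
SERVER_ADDRESS=127.0.0.1
EOF

set -a            # auto-export all assignments from here on
. /tmp/demo.env
set +a            # stop auto-exporting

# A child process now sees the variable:
sh -c 'echo "$SERVER_ADDRESS"'   # → 127.0.0.1
```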
### get certificate end date
``` bash
kubectl get secret certificate-name-tls -o "jsonpath={.data['tls\.crt']}" | base64 --decode | openssl x509 -enddate -noout
```
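The same `openssl x509 -enddate` check can be tried without a cluster by generating a throwaway self-signed certificate (hypothetical CN, 1-day validity):

```shell
# Generate a self-signed cert valid for 1 day; discard the key and
# pipe the PEM certificate straight into the enddate check.
openssl req -x509 -newkey rsa:2048 -keyout /dev/null -nodes \
  -subj '/CN=test.example.com' -days 1 2>/dev/null |
  openssl x509 -enddate -noout
# prints: notAfter=<timestamp one day from now>
```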
## service accounts
**Get service account token:**
```bash
kubectl get secret continuous-deploy -o jsonpath='{.data.token}' | base64 -d
```
## core-dns
Kubernetes automatically provides DNS names for Services and Pods, and CoreDNS serves these records. This allows workloads to communicate using stable, predictable names instead of changing IP addresses.
### Services DNS Name
```text
<service-name>.<namespace>.svc.<cluster-domain>
```
*Example: `test-services.services.svc.cluster.local`.*
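With the default cluster domain `cluster.local`, the pattern expands like this (assuming a service `test-services` in namespace `services`):

```shell
SERVICE=test-services
NAMESPACE=services
CLUSTER_DOMAIN=cluster.local   # default for k3s and kubeadm clusters
echo "${SERVICE}.${NAMESPACE}.svc.${CLUSTER_DOMAIN}"
# → test-services.services.svc.cluster.local
```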
**Remove CoreDNS warnings from the logs:**
```log
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.override
```
Apply the following ConfigMap on Kubernetes:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  log.override: |
    #
  stub.server: |
    #
```
## k3s
K3s is a lightweight, certified Kubernetes distribution designed to run in resource-constrained environments such as edge devices, IoT appliances, and small servers. It simplifies installation and operation by packaging Kubernetes into a single small binary while remaining fully compliant with the Kubernetes API.
🌐 What K3s Is
- Definition: K3s is a simplified Kubernetes distribution created by Rancher Labs (now part of SUSE) and maintained under the CNCF.
- Purpose: It's built for environments where full Kubernetes (K8s) is too heavy, such as Raspberry Pis, edge servers, or CI pipelines.
- Size: The entire distribution is packaged into a single binary of roughly 70 MB.
### Install / Setup
**Default master installation:**
``` bash
curl -sfL https://get.k3s.io | sh -
```
Install specific version and disable:
- flannel (alternative: e.g. Calico)
- servicelb (alternative: e.g. MetalLB)
- traefik (then install it via Helm chart or custom manifests for better control)
```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.33.3+k3s1 INSTALL_K3S_EXEC="--flannel-backend=none \
--disable-network-policy \
--cluster-cidr=10.42.0.0/16 \
--disable=servicelb \
--disable=traefik" \
sh -
```
### prune old image
Prune old images (run on the Kubernetes host node):
```bash
crictl rmi --prune
```
### check system logs
```bash
sudo journalctl -u k3s-agent --since "1h ago" --reverse --no-pager | more
sudo journalctl -u k3s-agent --since "1 hour ago" --reverse | grep -i "Starting k3s-agent.service"
sudo journalctl -u k3s --reverse | grep -i "Starting k3s.service"
```
### Workarounds & Fixes
#### Failed unmounting var-lib-rancher.mount on reboot
When running K3s with /var/lib/rancher on a separate disk.
K3s and containerd often leave behind mount namespaces and overlay layers that block clean unmounting during shutdown.
This causes slow reboots and errors like:
``` text
Failed unmounting var-lib-rancher.mount
```
1. Create the cleanup service
``` bash
nano /etc/systemd/system/rancher-cleanup.service
```
Paste:
``` ini
[Unit]
Description=Clean up /var/lib/rancher mounts before shutdown
DefaultDependencies=no
Before=shutdown.target umount.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c '/bin/umount -l /var/lib/rancher || true'

[Install]
WantedBy=shutdown.target
```
**Why this works:**
- DefaultDependencies=no ensures the service runs early.
- Before=umount.target guarantees it executes before systemd tries to unmount anything.
- umount -l detaches the filesystem immediately, even if containerd still holds namespaces.
- || true prevents harmless “not mounted” errors from blocking shutdown.
2. Reload systemd
``` bash
systemctl daemon-reload
```
3. Enable the cleanup service
```bash
systemctl enable rancher-cleanup.service
```
4. Reboot to test:
``` bash
reboot
```