Kubernetes

Kubernetes is an open-source platform that automates the deployment, scaling, and management of containerized applications. It acts as an orchestrator, ensuring your containers run reliably across clusters of machines, handling networking, storage, and updates without downtime.

kubectl

kubectl is the command-line tool used to interact with Kubernetes clusters. Think of it as the “remote control” for Kubernetes: it lets you deploy applications, inspect resources, and manage cluster operations directly from your terminal.

Create namespace:

kubectl create namespace tests

Get Pod

Get pod name by label app:

POD_NAME=$(kubectl get pod -l app=borg-backup-sidekick -n git-limbosolutions-com -o jsonpath='{.items[0].metadata.name}')

echo $POD_NAME
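
The captured name can then be reused with other commands, e.g. to tail that pod's logs:

kubectl logs -n git-limbosolutions-com "$POD_NAME" --tail=50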

Pod delete

Restart the local-path-provisioner:

kubectl delete pod -n kube-system -l app=local-path-provisioner

OOMKilled

List all OOMKilled pods:

kubectl get events --all-namespaces | grep -i "OOMKilled"
kubectl get pods --all-namespaces \
-o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{" "}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}' \
| grep OOMKilled
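
Once an offender is found, inspect its memory settings and raise the limit if appropriate. A minimal sketch (pod/deployment names and the 512Mi value are placeholders):

kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.spec.containers[*].resources}'
kubectl set resources deployment <deployment-name> -n <namespace> --limits=memory=512Mi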

Rollout

Restart the coredns deployment:

kubectl rollout restart deployment coredns -n kube-system
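
Watch the restart complete:

kubectl rollout status deployment coredns -n kube-system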

Custom Resource Definitions

  • Definition: A Custom Resource Definition (CRD) is an extension of the Kubernetes API.

  • Purpose: They allow you to define new resource kinds (e.g., Database, Backup, FooBar) that behave like native Kubernetes objects.

  • Analogy: By default, Kubernetes understands objects like Pods and Services. With CRDs, you can add your own object types and manage them with kubectl just like built-in resources.

List Traefik CRDs:

kubectl get crds | grep traefik
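
Inspect a single CRD's full definition (the CRD name below is an example; actual names depend on the installed Traefik version):

kubectl get crd ingressroutes.traefik.io -o yaml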

Helper pods

network testing

kubectl run -i --tty dns-test --namespace tests --image=busybox --restart=Never -- sh
kubectl delete pod dns-test --namespace tests || true

Example using YAML and hostNetwork:

  • Create Pod
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
  namespace: tests
spec:
  hostNetwork: true
  containers:
  - name: dns-test
    image: busybox
    command: ["sh"]
    stdin: true
    tty: true
  • Attach to Pod
kubectl attach -it dns-test -n tests
  • Execute a command inside the pod
nslookup google.com
  • Delete pod
kubectl delete pod dns-test --namespace tests
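
For a one-shot check that cleans up after itself, kubectl run with --rm also works (the pod is deleted when the command exits):

kubectl run dns-test --namespace tests --image=busybox --restart=Never --rm -i -- nslookup google.com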

Set Replicas

Set deployment replicas to 0:

kubectl patch deployment <deployment-name> \
  -n <namespace> \
  -p '{"spec":{"replicas":0}}'

Set statefulset replicas to 0:

kubectl patch statefulset zigbee2mqtt \
  -n mqtt \
  -p '{"spec":{"replicas":0}}'
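
kubectl scale achieves the same without a JSON patch:

kubectl scale deployment <deployment-name> -n <namespace> --replicas=0
kubectl scale statefulset zigbee2mqtt -n mqtt --replicas=0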

taint nodes

control plane - NoSchedule

MASTER_NODE_NAME="master-node-name"
kubectl taint nodes ${MASTER_NODE_NAME} node-role.kubernetes.io/control-plane=:NoSchedule
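
To remove the taint again, append a dash to the effect:

kubectl taint nodes ${MASTER_NODE_NAME} node-role.kubernetes.io/control-plane:NoSchedule-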

Resources

List all resources in a namespace (filtered here for traefik):

kubectl get all -n kube-system | grep traefik

List service accounts:

kubectl get serviceAccount --all-namespaces

Persistent Volume Claims

Patch the PVC's underlying PV to the Retain reclaim policy:

PVC_NAME="????"
NAMESPACE="????"
PV_NAME=$(kubectl get pvc $PVC_NAME -n $NAMESPACE -o jsonpath='{.spec.volumeName}')
kubectl patch pv $PV_NAME \
  -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
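
Verify the change took effect:

kubectl get pv $PV_NAME -o jsonpath='{.spec.persistentVolumeReclaimPolicy}'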

Service Accounts

List all:

kubectl get serviceAccount --all-namespaces

Get Service Account Token:

kubectl get secret <secret_name> -o jsonpath='{.data.token}' | base64 -d
kubectl get secret <secret_name> -o jsonpath='{.data.token}' | base64 -d > ./service-account-secret-base64
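
On Kubernetes 1.24+ token secrets are no longer created automatically; a short-lived token can be requested instead (service account name is a placeholder):

kubectl create token <service-account-name> -n <namespace>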

Get cluster CA certificate (Base64):

kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' 
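
Sanity-check the decoded CA certificate:

kubectl config view --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d | openssl x509 -noout -subject -enddate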

Namespaces

apiVersion: v1
kind: Namespace
metadata:
  name: namespace-name
  labels:
    name: namespace-name
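
The equivalent manifest (minus the label) can be generated with a client-side dry run:

kubectl create namespace namespace-name --dry-run=client -o yaml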

Secrets

Manifest - Opaque / Base64

apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: namespace-name
type: Opaque
data:
  SERVER_ADDRESS: MTI3LjAuMC4x # 127.0.0.1 BASE64
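
Producing the Base64 value (-n avoids encoding a trailing newline):

echo -n '127.0.0.1' | base64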

Manifest - StringData

apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: namespace-name
stringData:
  SERVER_ADDRESS: 127.0.0.1

Inline with heredoc and environment variables

SERVER_ADDRESS=127.0.0.1
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: namespace-name
stringData:
  SERVER_ADDRESS: ${SERVER_ADDRESS}
EOF

envsubst

YAML secret template:

# ./secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-name
  namespace: namespace-name
stringData:
  SERVER_ADDRESS: ${SERVER_ADDRESS}

Render and apply:

export SERVER_ADDRESS="127.0.0.1"
envsubst < ./secret.yaml | kubectl apply -f -

env file and envsubst:

#---
# ./.env
# content:
# SERVER_ADDRESS=127.0.0.1
#---
set -a
source ./.env
set +a
envsubst < ./secret.yaml | kubectl apply -f -
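
Note: plain envsubst substitutes every $VAR it finds in the template. GNU gettext's envsubst accepts a shell-format argument to restrict substitution to specific variables:

envsubst '$SERVER_ADDRESS' < ./secret.yaml | kubectl apply -f -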

certs

List all certs:

kubectl get cert -n default

Get cert end date:

kubectl get secret certificate-name-tls -o "jsonpath={.data['tls\.crt']}" | base64 --decode | openssl x509 -enddate -noout
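
openssl can also answer "does it expire soon?" directly (2592000 s = 30 days; exit code 0 means still valid):

kubectl get secret certificate-name-tls -o "jsonpath={.data['tls\.crt']}" | base64 --decode | openssl x509 -checkend 2592000 -noout && echo OK || echo "expires within 30 days"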

service accounts

Get service account token:

kubectl get secret continuous-deploy -o jsonpath='{.data.token}' | base64 -d

core-dns

Kubernetes automatically provides DNS names for Services and Pods, and CoreDNS serves these records. This allows workloads to communicate using stable, predictable names instead of changing IP addresses.

Services DNS Name

<service-name>.<namespace>.svc.<cluster-domain>

Example: test-services.services.svc.cluster.local.
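
A quick in-cluster check (one-shot busybox pod; assumes the tests namespace from above, and kubernetes.default always resolves):

kubectl run dns-check --namespace tests --image=busybox --restart=Never --rm -i -- nslookup kubernetes.default.svc.cluster.local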

Remove these warnings from the CoreDNS logs:

[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.server
[WARNING] No files matching import glob pattern: /etc/coredns/custom/*.override
  1. Apply a stub coredns-custom ConfigMap; the Corefile imports these globs, so empty files satisfy them:
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  log.override: |
    #
  stub.server: |
    #
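  2. Restart CoreDNS so it picks up the new ConfigMap (same command as in the Rollout section above):
kubectl rollout restart deployment coredns -n kube-system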

k3s

K3s is a lightweight, certified Kubernetes distribution designed to run in resource-constrained environments such as edge devices, IoT appliances, and small servers. It simplifies installation and operation by packaging Kubernetes into a single small binary, while still being fully compliant with the Kubernetes API.

🌐 What K3s Is

  • Definition: K3s is a simplified Kubernetes distribution created by Rancher Labs (now part of SUSE) and maintained under the CNCF.
  • Purpose: It's built for environments where full Kubernetes (K8s) is too heavy — like Raspberry Pis, edge servers, or CI pipelines.
  • Size: The entire distribution is packaged into a binary under ~70MB.

Install / Setup

Default master installation:

curl -sfL https://get.k3s.io | sh -

Install a specific version and disable:

  • flannel (alternative: e.g. Calico)
  • servicelb (alternative: e.g. MetalLB)
  • traefik (then install it via Helm chart or custom manifests for better control)
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.33.3+k3s1 INSTALL_K3S_EXEC="--flannel-backend=none \
--disable-network-policy \
--cluster-cidr=10.42.0.0/16 \
--disable=servicelb \
--disable=traefik" \
 sh -
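
After installation the kubeconfig is written to /etc/rancher/k3s/k3s.yaml; verify the node is up with:

sudo k3s kubectl get nodes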

prune old images

Prune old images (execute on the Kubernetes host node):

crictl rmi --prune
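
To review what is present before pruning:

crictl images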

check system logs

sudo journalctl -u k3s-agent --since "1h ago" --reverse --no-pager | more
sudo journalctl -u k3s-agent --since "1 hour ago" --reverse | grep -i "Starting k3s-agent.service" 
sudo journalctl -u k3s --reverse | grep -i "Starting k3s.service"
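
Follow logs live while reproducing an issue:

sudo journalctl -u k3s -f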


Workarounds & Fixes

Failed unmounting var-lib-rancher.mount on reboot

This applies when running K3s with /var/lib/rancher on a separate disk.

K3s and containerd often leave behind mount namespaces and overlay layers that block clean unmounting during shutdown. This causes slow reboots and errors like:

Failed unmounting var-lib-rancher.mount
  1. Create the cleanup service

    nano /etc/systemd/system/rancher-cleanup.service
    

    Paste:

    
    [Unit]
    DefaultDependencies=no
    Before=shutdown.target umount.target
    
    [Service]
    Type=oneshot
    ExecStart=/bin/sh -c '/bin/umount -l /var/lib/rancher || true'
    
    [Install]
    WantedBy=shutdown.target
    
    

    Why this works

    • DefaultDependencies=no ensures the service runs early.
    • Before=umount.target guarantees it executes before systemd tries to unmount anything.
    • umount -l detaches the filesystem immediately, even if containerd still holds namespaces.
    • || true prevents harmless “not mounted” errors from blocking shutdown.
  2. Reload systemd

    systemctl daemon-reload
    
  3. Enable the cleanup service

    systemctl enable rancher-cleanup.service
    
  4. Reboot to test:

    reboot
    
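  5. Verify after the next boot that the unit ran and the error is gone (previous-boot journal):

    journalctl -b -1 -u rancher-cleanup.service
    journalctl -b -1 | grep -i "var-lib-rancher"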