Kubernetes

FIXME: All the systemd configurations deserve a rework - they reference the configuration files of the individual daemons, where variables are frequently duplicated. Fix them so that only what should be invoked is invoked, and nothing extra that does not even exist in the application's own configuration file.

Download

Download the K8s version referenced in the documentation. Use the KB repository - Nexus3.

The K8s version is defined in the file platform-infra/ansible/roles/master/defaults/main.yml.

The download link is in the file platform-infra/ansible/roles/master/tasks/download.
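
A minimal download sketch - the real URL must be taken from the Ansible task above; NEXUS_URL below is only a hypothetical placeholder, and the version shown is the one reported by /version later on this page:

# NEXUS_URL is a hypothetical placeholder - take the real link from
# platform-infra/ansible/roles/master/tasks/download
K8S_VERSION="v1.6.0-beta.1"   # authoritative value is in roles/master/defaults/main.yml
NEXUS_URL="https://nexus3.example/repository/k8s/${K8S_VERSION}"
for bin in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    curl -fsSLo "/usr/local/bin/${bin}" "${NEXUS_URL}/${bin}"
    chmod +x "/usr/local/bin/${bin}"
done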

Tokens

FIXME GENERATE TOKENS
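
Until the token workflow is documented here, the sketch below generates random tokens into the files referenced later on this page (/etc/kubernetes/tokens/known_tokens.csv in the token,user,uid CSV format expected by --token-auth-file). The user list is an assumption - align it with the users in the kubeconfigs and the ABAC policy below:

# sketch: one random 32-character token per component user
mkdir -p /etc/kubernetes/tokens
for user in admin kubelet kube_proxy controller-manager scheduler; do
    token=$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d '=+/\n' | head -c 32)
    # CSV line consumed by kube-apiserver --token-auth-file
    echo "${token},${user},${user}" >> /etc/kubernetes/tokens/known_tokens.csv
    # per-user token file, e.g. system:kubelet-master01.token referenced below
    echo "${token}" > "/etc/kubernetes/tokens/system:${user}-master01.token"
done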

Kube-api Server

Install only on the master server(s).

The API server is where most of the magic happens. It is stateless by design: it accepts API requests, processes them, stores the result in etcd when needed, and returns the result to the caller.

Certificates

Like everything in K8s, kube-apiserver also needs its own certificates. Please proceed through this manual step by step.

Generate the certificate for all FQDNs and IPs - including the service IP range defined in the API server configuration (see below)!
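
A minimal sketch of generating the apiserver server certificate with SANs. It assumes the file names used in KUBE_API_ARGS below (apiserver-server*.pem and the CA pair ca-master01.pem/ca-master01-key.pem), the bind address 192.168.56.101 and the first service IP of the 10.6.142.0/24 range; adjust names and IPs to your environment:

# SAN list: master FQDN, in-cluster service names, node IP, first service IP
printf 'subjectAltName=DNS:master01,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.k8s-cluster.local,IP:192.168.56.101,IP:10.6.142.1\n' > apiserver-san.ext

openssl genrsa -out apiserver-server-key.pem 2048
openssl req -new -key apiserver-server-key.pem -subj "/CN=master01" -out apiserver-server.csr
openssl x509 -req -in apiserver-server.csr -CA ca-master01.pem -CAkey ca-master01-key.pem \
    -CAcreateserial -extfile apiserver-san.ext -out apiserver-server.pem -days 3650
# confirm the SANs made it into the signed certificate
openssl x509 -noout -text -in apiserver-server.pem | grep -A1 "Subject Alternative Name"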

Configuration

Configuration files are placed in /etc/kubernetes.

  • Master01
    • /etc/kubernetes/config/config
      ###
      # kubernetes system config
      #
      # The following values are used to configure various aspects of all
      # kubernetes services, including
      #
      #   kube-apiserver.service
      #   kube-controller-manager.service
      #   kube-scheduler.service
      #   kubelet.service
      #   kube-proxy.service
      
      # logging to stderr means we get it in the systemd journal
      KUBE_LOGTOSTDERR="--logtostderr=true"
      
      # journal verbosity level, the higher is the more verbose
      KUBE_LOG_LEVEL="--v=0"
      
      # Should this cluster be allowed to run privileged docker containers
      KUBE_ALLOW_PRIV="--allow-privileged=true"
      
      # How the controller-manager, scheduler, and proxy find the apiserver
      MASTER_HOSTNAME_NONE=master01
      KUBE_MASTER="--master=https://master01:443"
    • /etc/kubernetes/apiserver/apiserver
      ###
      # kubernetes system config
      #
      # The following values are used to configure the kube-apiserver
      #
      
      # The address on the local server to listen to.
      KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
      
      # The port on the local server to listen on.
      KUBE_API_PORT="--secure-port=443"
      
      # Port nodes listen on
      # KUBELET_PORT="--kubelet-port=10250"
      
      # Address range to use for services
      KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.6.142.0/24"
      
      # Location of the etcd cluster
      KUBE_ETCD_SERVERS="--etcd-servers=https://master01:2379 --etcd-cafile=/etc/kubernetes/certs/etcd/ca-master01.pem --etcd-certfile=/etc/kubernetes/certs/etcd/etcd-client-master01.pem --etcd-keyfile=/etc/kubernetes/certs/etcd/etcd-client-master01-key.pem"
      
      # default admission control policies
      #KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"
      KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
      #KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ResourceQuota"
      
      KUBE_LOG_LEVEL="--v=2"
      
      # Add your own!
      ##KUBE_API_ARGS="--storage-backend=etcd2 --requestheader-client-ca-file=/etc/kubernetes/certs/ca.pem --tls-cert-file=/etc/kubernetes/certs/apiserver.pem --tls-private-key-file=/etc/kubernetes/certs/apiserver-key.pem --client-ca-file=/etc/kubernetes/certs/ca.pem --runtime-config=batch/v2alpha1 --token-auth-file=/etc/kubernetes/tokens/known_tokens.csv --allow-privileged=True --authorization-mode=ABAC --authorization-policy-file=/etc/kubernetes/certs/abac-authz-policy.jsonl  --bind-address=192.168.56.101"
      #KUBE_API_ARGS="--storage-backend=etcd2 --requestheader-client-ca-file=/etc/kubernetes/certs/ca-master01.pem --tls-cert-file=/etc/kubernetes/certs/apiserver-server.pem --tls-private-key-file=/etc/kubernetes/certs/apiserver-server-key.pem --client-ca-file=/etc/kubernetes/certs/ca-master01.pem --runtime-config=batch/v2alpha1 --token-auth-file=/etc/kubernetes/tokens/known_tokens.csv --allow-privileged=True --bind-address=192.168.56.101"
      KUBE_API_ARGS="--storage-backend=etcd2 --requestheader-client-ca-file=/etc/kubernetes/certs/ca-master01.pem --tls-cert-file=/etc/kubernetes/certs/apiserver-server.pem --tls-private-key-file=/etc/kubernetes/certs/apiserver-server-key.pem --client-ca-file=/etc/kubernetes/certs/ca-master01.pem --bind-address=0.0.0.0 --runtime-config=batch/v2alpha1 --token-auth-file=/etc/kubernetes/tokens/known_tokens.csv --allow-privileged=True --authorization-mode=ABAC --authorization-policy-file=/etc/kubernetes/certs/abac-authz-policy.jsonl"
    • /etc/kubernetes/certs/abac-authz-policy.jsonl
      {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"admin", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
      {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubelet", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
      {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kube_proxy", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
      {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"kubecfg", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
      {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"user":"client", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
      {"apiVersion": "abac.authorization.kubernetes.io/v1beta1", "kind": "Policy", "spec": {"group":"system:serviceaccounts", "namespace": "*", "resource": "*", "apiGroup": "*", "nonResourcePath": "*"}}
  • kube-apiserver runs under the non-root user kubeadm on the privileged port 443, so you must set capabilities:
      # setcap cap_net_bind_service=+ep /usr/local/bin/kube-apiserver
      # getcap /usr/local/bin/kube-apiserver
      /usr/local/bin/kube-apiserver = cap_net_bind_service+ep
  • Minion01 - /etc/kubernetes/config:
    ###
    # kubernetes system config
    #
    # The following values are used to configure various aspects of all
    # kubernetes services, including
    #
    #   kube-apiserver.service
    #   kube-controller-manager.service
    #   kube-scheduler.service
    #   kubelet.service
    #   kube-proxy.service
    
    # logging to stderr means we get it in the systemd journal
    KUBE_LOGTOSTDERR="--logtostderr=true"
    
    # journal verbosity level, the higher is the more verbose
    KUBE_LOG_LEVEL="--v=0"
    
    # Should this cluster be allowed to run privileged docker containers
    KUBE_ALLOW_PRIV="--allow-privileged=true"
    
    # How the controller-manager, scheduler, and proxy find the apiserver
    MASTER_HOSTNAME_NONE=master01
    KUBE_MASTER="--master=https://master01:443"

Systemd

  • /etc/systemd/system/kube-apiserver.service
    [Unit]
    Description=Kubernetes API Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    After=etcd.service
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/apiserver
    User=kubeadm
    ExecStart=/usr/local/bin/kube-apiserver \
       $KUBE_LOGTOSTDERR \
       $KUBE_LOG_LEVEL \
       $KUBE_ETCD_SERVERS \
       $KUBE_API_ADDRESS \
       $KUBE_API_PORT \
       $KUBELET_PORT \
       $KUBE_ALLOW_PRIV \
       $KUBE_SERVICE_ADDRESSES \
       $KUBE_ADMISSION_CONTROL \
       $KUBE_API_ARGS
    Restart=on-failure
    Type=notify
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target

Enable by default:

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver


You can check whether the kube-apiserver is listening with cURL:

curl --cacert /etc/kubernetes/certs/ca.pem --cert /etc/kubernetes/certs/apiserver-client-master01.pem --key /etc/kubernetes/certs/apiserver-client-master01-key.pem --cert-type pem https://master01:443/version
{
  "major": "1",
  "minor": "6+",
  "gitVersion": "v1.6.0-beta.1",
  "gitCommit": "23cded36d1d20a538f97e0da05c1d2b62a6be700",
  "gitTreeState": "clean",
  "buildDate": "2017-03-02T22:52:08Z",
  "goVersion": "go1.7.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}


Kubelet

Run on every master and minion.

Certificates

The kubelet connects to the kube-apiserver. You must generate a client certificate for every kubelet on every K8s server (a quick verification sketch follows the list):

  • master
    openssl genrsa -out apiserver-client-master01-key.pem 2048
    openssl req -new -key apiserver-client-master01-key.pem -subj "/CN=master01" -out apiserver-client-master01.csr
    openssl x509 -req -in apiserver-client-master01.csr -CA ../ca.pem -CAkey ../ca-key.pem -CAcreateserial -out apiserver-client-master01.pem -days 3650
    openssl x509 -noout -text -in apiserver-client-master01.pem
  • node01
    openssl genrsa -out apiserver-client-node01-key.pem 2048
    openssl req -new -key apiserver-client-node01-key.pem -subj "/CN=node01" -out apiserver-client-node01.csr
    openssl x509 -req -in apiserver-client-node01.csr -CA ../../ca.pem -CAkey ../../ca-key.pem -CAcreateserial -out apiserver-client-node01.pem -days 3650
    openssl x509 -noout -text -in apiserver-client-node01.pem
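
Optionally verify that each client certificate chains back to the CA before distributing it (paths as in the commands above):

openssl verify -CAfile ../ca.pem apiserver-client-master01.pem
openssl verify -CAfile ../../ca.pem apiserver-client-node01.pem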

Configuration

  • /etc/kubernetes/kubelet
    • Master01:
      # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
      KUBELET_ADDRESS="--address=0.0.0.0"
      
      # The port for the info server to serve on
      # KUBELET_PORT="--port=10250"
      
      # You may leave this blank to use the actual hostname
      KUBELET_HOSTNAME="--hostname-override=master01"
      
      # location of the api-server
      # # KUBELET_API_SERVER="--api-servers=https://master01:443"
      # 
      
      KUBELET_ARGS="--v=2 --allow-privileged=True --authorization-mode=AlwaysAllow --cgroup-driver=cgroupfs --require-kubeconfig=true --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --pod-manifest-path=/etc/kubernetes/manifests --client-ca-file=/etc/kubernetes/certs/ca.pem --tls-cert-file=/etc/kubernetes/certs/apiserver-client-master01.pem --tls-private-key-file=/etc/kubernetes/certs/apiserver-client-master01-key.pem --enable-debugging-handlers=true --register-node=true --register-schedulable=false --cadvisor-port=0 --hairpin-mode=none --pod-infra-container-image=nexus3.kb.cz:18443/k8s/pause-amd64:3.0 --cluster-dns=10.6.142.10 --cluster-domain=k8s-cluster.local"
    • Node01:
      ###
      # kubernetes kubelet (node) config
      
      # The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
      KUBELET_ADDRESS="--address=0.0.0.0"
      
      # The port for the info server to serve on
      # KUBELET_PORT="--port=10250"
      
      # You may leave this blank to use the actual hostname
      KUBELET_HOSTNAME="--hostname-override=node01"
      
      # location of the api-server
      # # KUBELET_API_SERVER="--api-servers=https://master01:443"
      # 
      KUBELET_ARGS="--v=2 --allow-privileged=True --authorization-mode=AlwaysAllow --cgroup-driver=cgroupfs --require-kubeconfig=true --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --pod-manifest-path=/etc/kubernetes/manifests --client-ca-file=/etc/kubernetes/certs/ca.pem --tls-cert-file=/etc/kubernetes/certs/apiserver-client-node01.pem --tls-private-key-file=/etc/kubernetes/certs/apiserver-client-node01-key.pem --cpu-cfs-quota=true --eviction-hard=memory.available<100Mi,nodefs.available<10% --enable-debugging-handlers=true --cadvisor-port=0 --hairpin-mode=none --pod-infra-container-image=nexus3.kb.cz:18443/k8s/pause-amd64:3.0 --cluster-dns=10.6.142.10 --cluster-domain=k8s-cluster.local "
  • /etc/kubernetes/kubelet.kubeconfig - Set the token according to your server (master/minion). The token must match the one in /etc/kubernetes/tokens/{known_tokens.csv,system:kubelet-master01.token}; a sketch of generating this kubeconfig with kubectl follows the list.
    apiVersion: v1
    kind: Config
    current-context: kubelet-to-k8s-cluster.local
    preferences: {}
    clusters:
    - cluster:
        certificate-authority: /etc/kubernetes/certs/ca.pem
        server: https://master01:443
      name: k8s-cluster.local
    contexts:
    - context:
        cluster: k8s-cluster.local
        user: kubelet
      name: kubelet-to-k8s-cluster.local
    users:
    - name: kubelet
      user:
        token: J486qZv3McsBbA4EjybNbR8qPEd9JmTM
    
        client-certificate: /etc/kubernetes/certs/apiserver-client-master01.pem
        client-key: /etc/kubernetes/certs/apiserver-client-master01-key.pem
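
The same kubeconfig can also be generated with kubectl instead of editing it by hand - a sketch for the master01 kubelet, assuming kubectl is installed and the token matches known_tokens.csv:

kubectl config set-cluster k8s-cluster.local \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --server=https://master01:443 \
    --certificate-authority=/etc/kubernetes/certs/ca.pem
kubectl config set-credentials kubelet \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --token=J486qZv3McsBbA4EjybNbR8qPEd9JmTM \
    --client-certificate=/etc/kubernetes/certs/apiserver-client-master01.pem \
    --client-key=/etc/kubernetes/certs/apiserver-client-master01-key.pem
kubectl config set-context kubelet-to-k8s-cluster.local \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
    --cluster=k8s-cluster.local --user=kubelet
kubectl config use-context kubelet-to-k8s-cluster.local \
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig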

Systemd

  • Prepare app directory structure:
    mkdir -p /appl/kubeadm/kubernetes
    chown -R kubeadm:kubeadm /appl/kubeadm/kubernetes
    ln -s /appl/kubeadm/kubernetes /var/lib/kubelet
  • /lib/systemd/system/kubelet.service
    [Unit]
    Description=Kubernetes Kubelet Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=docker.service
    Requires=docker.service
    
    [Service]
    WorkingDirectory=/var/lib/kubelet
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/kubelet
    ExecStart=/usr/local/bin/kubelet \
    	    $KUBE_LOGTOSTDERR \
    	    $KUBE_LOG_LEVEL \
    	    $KUBELET_API_SERVER \
    	    $KUBELET_ADDRESS \
    	    $KUBELET_PORT \
    	    $KUBELET_HOSTNAME \
    	    $KUBE_ALLOW_PRIV \
    	    $KUBELET_ARGS
    Restart=on-failure
    KillMode=process
    
    [Install]
    WantedBy=multi-user.target
    • FIXME: Why is a symlink to the directory /appl/kubeadm/kubernetes created here when it could be referenced directly?

Enable by default:

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
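
Once the kubelet is running, check that the node registered with the API server - a sketch assuming kubectl is installed on the master and reusing the kubelet kubeconfig from above:

kubectl --kubeconfig=/etc/kubernetes/kubelet.kubeconfig get nodes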


Kube-proxy

Install only on the minion(s).

Configuration

  • /etc/kubernetes/proxy
    ###
    # kubernetes proxy config
    
    # default config should be adequate
    
    # Add your own!
    KUBE_PROXY_ARGS="--v=2 --kubeconfig=/etc/kubernetes/proxy.kubeconfig --cluster-cidr=10.6.142.0/24 "
  • /etc/kubernetes/proxy.kubeconfig
    apiVersion: v1
    kind: Config
    current-context: proxy-to-k8s-cluster.local
    preferences: {}
    contexts:
    - context:
        cluster: k8s-cluster.local
        user: proxy
      name: proxy-to-k8s-cluster.local
    clusters:
    - cluster:
        certificate-authority: /etc/kubernetes/certs/ca.pem
        server: https://master01:443
      name: k8s-cluster.local
    users:
    - name: proxy
      user:
        token: 5nd3bT6w0bAc9uDIU5lnEW1MGlzdT0PT
    
        client-certificate: /etc/kubernetes/certs/apiserver-client-node01.pem
        client-key: /etc/kubernetes/certs/apiserver-client-node01-key.pem

Systemd

  • /lib/systemd/system/kube-proxy.service
    [Unit]
    Description=Kubernetes Kube-Proxy Server
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    After=network.target
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/proxy
    ExecStart=/usr/local/bin/kube-proxy \
    	    $KUBE_LOGTOSTDERR \
    	    $KUBE_LOG_LEVEL \
    	    $KUBE_MASTER \
    	    $KUBE_PROXY_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target
  • /lib/systemd/system.conf.d/kubernetes-accounting.conf (see the verification sketch after this list)
    [Manager]
    DefaultCPUAccounting=yes
    DefaultMemoryAccounting=yes
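
After the next systemctl daemon-reload (or daemon-reexec) you can verify that the manager picked up the accounting defaults:

systemctl show | grep -E 'DefaultCPUAccounting|DefaultMemoryAccounting'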

Enable by default:

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
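
In the default iptables proxy mode kube-proxy programs KUBE-* chains into the nat table, so a quick sanity check is:

iptables -t nat -L KUBE-SERVICES -n | head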


Kube-controller-manager

Configuration

  • /etc/kubernetes/controller-manager
    ###
    # The following values are used to configure the kubernetes controller-manager
    
    # defaults from config and apiserver should be adequate
    
    # Add your own!
    KUBE_CONTROLLER_MANAGER_ARGS="--v=2 --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig --service-account-private-key-file=/etc/kubernetes/certs/apiserver-key.pem --root-ca-file=/etc/kubernetes/certs/ca.pem --enable-hostpath-provisioner --cluster-cidr=10.6.142.0/24 --terminated-pod-gc-threshold 100"
  • /etc/kubernetes/controller-manager.kubeconfig
    apiVersion: v1
    kind: Config
    current-context: controller-manager-to-k8s-cluster.local
    preferences: {}
    clusters:
    - cluster:
        certificate-authority: /etc/kubernetes/certs/ca.pem
        server: https://master01:443
      name: k8s-cluster.local
    contexts:
    - context:
        cluster: k8s-cluster.local
        user: controller-manager
      name: controller-manager-to-k8s-cluster.local
    users:
    - name: controller-manager
      user:
        token: Uz5M0LHZYYas3i04TTecEYJfqXXesH8l
    
        client-certificate: /etc/kubernetes/certs/apiserver-client-master01.pem
        client-key: /etc/kubernetes/certs/apiserver-client-master01-key.pem

Systemd

  • /lib/systemd/system/kube-controller-manager.service
    [Unit]
    Description=Kubernetes Controller Manager
    Documentation=https://github.com/GoogleCloudPlatform/kubernetes
    
    [Service]
    EnvironmentFile=-/etc/kubernetes/config
    EnvironmentFile=-/etc/kubernetes/controller-manager
    User=kubeadm
    ExecStart=/usr/local/bin/kube-controller-manager \
    	    $KUBE_LOGTOSTDERR \
    	    $KUBE_LOG_LEVEL \
    	    $KUBE_MASTER \
    	    $KUBE_CONTROLLER_MANAGER_ARGS
    Restart=on-failure
    LimitNOFILE=65536
    
    [Install]
    WantedBy=multi-user.target

Enable by default:

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager


Kube-scheduler

Configuration

  • /etc/kubernetes/scheduler
    ###
    # kubernetes scheduler config
    
    # default config should be adequate
    
    # Add your own!
    KUBE_SCHEDULER_ARGS="--v=2 --kubeconfig=/etc/kubernetes/scheduler.kubeconfig "
  • /etc/kubernetes/scheduler.kubeconfig
    apiVersion: v1
    kind: Config
    current-context: scheduler-to-k8s-cluster.local
    preferences: {}
    clusters:
    - cluster:
        certificate-authority: /etc/kubernetes/certs/ca.pem
        server: https://master01:443
      name: k8s-cluster.local
    contexts:
    - context:
        cluster: k8s-cluster.local
        user: scheduler
      name: scheduler-to-k8s-cluster.local
    users:
    - name: scheduler
      user:
        token: wJP9YL4lMPZ5lRgpsxCop6TCqWKe3j1f
    
        client-certificate: /etc/kubernetes/certs/apiserver-client-master01.pem
        client-key: /etc/kubernetes/certs/apiserver-client-master01-key.pem

Systemd

Service unit file: /lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
User=kubeadm
ExecStart=/usr/local/bin/kube-scheduler \
	    $KUBE_LOGTOSTDERR \
	    $KUBE_LOG_LEVEL \
	    $KUBE_MASTER \
	    $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Enable by default:

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
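
With all control-plane services running, a final health check from the master - a sketch assuming kubectl is installed and reusing the kubelet kubeconfig (the kubelet user is granted full access in the ABAC policy above):

kubectl --kubeconfig=/etc/kubernetes/kubelet.kubeconfig get componentstatuses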