Tuesday, November 7, 2017

Kubernetes v1.8 Installation


I have configured a single-master, single-node Kubernetes cluster. I followed the Kubernetes installation document, but ran into many errors during the installation. I fixed them one by one and finally configured my Kubernetes cluster successfully.




Pre-installation steps:

You must disable swap in order for the kubelet to work properly.
[root@kubemaster ~]# swapoff -a
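
To keep swap off after a reboot, you can also comment out the swap entry in /etc/fstab (a quick sketch; the exact entry differs per system):
[root@kubemaster ~]# sed -i '/ swap / s/^/#/' /etc/fstab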

To pass bridged IPv4 traffic to iptables' chains, run the command below. This is a requirement for CNI plugins to work.
[root@kubemaster ~]# sysctl net.bridge.bridge-nf-call-iptables=1
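
To make this setting survive a reboot, you can also drop it into a sysctl config file (a minimal sketch; the file name is my own choice):
[root@kubemaster ~]# echo "net.bridge.bridge-nf-call-iptables = 1" > /etc/sysctl.d/k8s.conf
[root@kubemaster ~]# sysctl --system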

Add the Kubernetes yum repo before running your yum install commands.
[root@kubemaster ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
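
If you prefer to create that repo file in a single step, a heredoc with the same content works:
[root@kubemaster ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF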

Configure Kubernetes Master server:
Install Docker engine
[root@kubemaster ~]#yum install -y docker

Start the Docker service and enable it at startup
[root@kubemaster ~]# systemctl enable docker && systemctl start docker

Install Kubernetes packages
[root@kubemaster ~]#yum install -y kubelet kubeadm kubectl

Start the kubelet service and enable it at startup

[root@kubemaster ~]#systemctl enable kubelet && systemctl start kubelet
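
At this point the kubelet may keep restarting in a loop until kubeadm init writes its configuration; that is expected. A quick check (not part of the original steps) that both services are enabled:
[root@kubemaster ~]# systemctl is-enabled docker kubelet
[root@kubemaster ~]# systemctl status kubelet -l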



[root@kubemaster ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "kubemaster" could not be reached
[preflight] WARNING: hostname "kubemaster" lookup kubemaster on 75.75.75.75:53: no such host
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubemaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.5]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 221.003059 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubemaster as master by adding a label and a taint
[markmaster] Master kubemaster tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: fb5e91.098fadceaeca79f9
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy



Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token xcxceaec9abc9 10.0.2.5:6443 --discovery-token-ca-cert-hash sha256:200f2aefdd3ea9xcxcxx380a8afb607ee36ce9f5178ae59

---------------------------
[root@kubemaster ~]# ls -lrt  /etc/kubernetes/admin.conf
-rw------- 1 root root 5448 Oct 22 16:37 /etc/kubernetes/admin.conf

Copy the admin.conf file to the user's home directory.
[root@kubemaster ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kubemaster ~]#
[root@kubemaster ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@kubemaster ~]#
[root@kubemaster ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
kubemaster   NotReady   master    1h        v1.8.1

Configure the Weave overlay network for the Kubernetes pods.
[root@kubemaster ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@kubemaster ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created
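
Once the DaemonSet is created, you can watch the Weave pods come up before checking the nodes again (the name=weave-net label selector is my assumption based on the Weave manifest):
[root@kubemaster ~]# kubectl get pods -n kube-system -l name=weave-net -o wide
[root@kubemaster ~]# kubectl get nodes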

[root@kubemaster ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                  READY     STATUS    RESTARTS   AGE
kube-system   etcd-kubemaster                       1/1       Running   1          1d
kube-system   kube-apiserver-kubemaster            1/1       Running   4          1d
kube-system   kube-controller-manager-kubemaster   1/1       Running   2          1d
kube-system   kube-dns-545bc4bfd4-5675m            3/3       Running   0          1d
kube-system   kube-proxy-hnkcn                     1/1       Running   1          1d
kube-system   kube-scheduler-kubemaster            1/1       Running   2          1d
kube-system   weave-net-2ztgt                      2/2       Running   2          1d

=====================================================


Join nodes to the Kubernetes cluster
--------------------------------------------------

# kubeadm join --token xcxceaec9abc9 192.168.56.103:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: Running with swap on is not supported. Please disable swap or set kubelet's --fail-swap-on flag to false.
[validation] WARNING: using token-based discovery without DiscoveryTokenCACertHashes can be unsafe (see https://kubernetes.io/docs/admin/kubeadm/#kubeadm-join).
[validation] WARNING: Pass --discovery-token-unsafe-skip-ca-verification to disable this warning. This warning will become an error in Kubernetes 1.9.
[discovery] Trying to connect to API Server "192.168.56.103:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.56.103:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.56.103:6443"
[discovery] Successfully established connection with API Server "192.168.56.103:6443"
[bootstrap] Detected server version: v1.8.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.
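
To avoid the token-discovery warning shown above, you can pass the CA certificate hash printed by kubeadm init to the join command. If you no longer have it, it can be recomputed on the master (this is the openssl pipeline from the kubeadm documentation):
[root@kubemaster ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# kubeadm join --token <token> 192.168.56.103:6443 --discovery-token-ca-cert-hash sha256:<hash>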

[root@kubemaster ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
kubemaster   Ready      master    1d        v1.8.1
kubenode1    NotReady   <none>    48s       v1.8.1
==============================================================

Troubleshooting node-joining errors:
ERROR#1:
After adding the new node to the cluster, the command above showed kubenode1 in the "NotReady" state. I ran the command below to check the node configuration.

Solution:
[root@kubemaster ~]# kubectl describe nodes kubenode1
Name:               kubenode1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=kubenode1
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Wed, 25 Oct 2017 21:53:03 -0400
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 25 Oct 2017 21:56:33 -0400   Wed, 25 Oct 2017 21:53:03 -0400   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 25 Oct 2017 21:56:33 -0400   Wed, 25 Oct 2017 21:53:03 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 25 Oct 2017 21:56:33 -0400   Wed, 25 Oct 2017 21:53:03 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Wed, 25 Oct 2017 21:56:33 -0400   Wed, 25 Oct 2017 21:53:53 -0400   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.56.105
  Hostname:    kubenode1
Capacity:
 cpu:     1
 memory:  1883560Ki
 pods:    110
Allocatable:
 cpu:     1
 memory:  1781160Ki
 pods:    110
System Info:
 Machine ID:                 efb5b81510e4442180fc2de6090cf1a6
 System UUID:                76093D8D-4224-4FEC-B88C-572B4C45DA0E
 Boot ID:                    2db689d1-ea6f-48b3-a939-2c5a1311612e
 Kernel Version:             3.10.0-693.2.2.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.8.1
 Kube-Proxy Version:         v1.8.1
PodCIDR:                     10.244.1.0/24
ExternalID:                  kubenode1
Non-terminated Pods:         (3 in total)
  Namespace                  Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                ------------  ----------  ---------------  -------------
  default                    hello-pod           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-flppd    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-qrkh4     20m (2%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  20m (2%)      0 (0%)      0 (0%)           0 (0%)
Events:
  Type    Reason                   Age              From                   Message
  ----    ------                   ----             ----                   -------
  Normal  Starting                 3m               kubelet, kubenode1     Starting kubelet.
  Normal  NodeHasSufficientDisk    3m (x2 over 3m)  kubelet, kubenode1     Node kubenode1 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet, kubenode1     Node kubenode1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet, kubenode1     Node kubenode1 status is now: NodeHasNoDiskPressure
  Normal  NodeAllocatableEnforced  3m               kubelet, kubenode1     Updated Node Allocatable limit across pods
  Normal  Starting                 3m               kube-proxy, kubenode1  Starting kube-proxy.
  Normal  NodeReady                2m               kubelet, kubenode1     Node kubenode1 status is now: NodeReady


[root@kubemaster ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
kubemaster   Ready     master    1d        v1.8.1
kubenode1    Ready     <none>    3m        v1.8.1


ERROR#2: The kubelet refuses to start while swap is enabled (see the pre-flight warning above). One workaround is to pass --fail-swap-on=false to the kubelet through a drop-in under /etc/systemd/system/kubelet.service.d:
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
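
A minimal sketch of that drop-in (the file name here is my own choice); reload systemd and restart the kubelet afterwards:
[root@kubemaster ~]# cat <<EOF > /etc/systemd/system/kubelet.service.d/90-fail-swap-on.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"
EOF
[root@kubemaster ~]# systemctl daemon-reload && systemctl restart kubelet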

Solution:

Disable swap on the Linux server. Follow these steps to remove swap permanently:
https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-swap-removing.html
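
After removing the swap entry from /etc/fstab, you can confirm swap is really off (a quick check, not from the original post):
[root@kubemaster ~]# swapoff -a
[root@kubemaster ~]# free -m | grep -i swap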

ERROR#3: The kubelet service does not start because of the error below:

kubemaster kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

Solution:
The kubelet looks for the CA certificate under /etc/kubernetes/pki when it starts, so I created a symlink pointing to /var/lib/kubelet/pki:
[root@kubemaster kubernetes]# ln -s /var/lib/kubelet/pki pki
[root@kubemaster kubernetes]# ls -lrt
total 0
drwxr-xr-x 2 root root  6 Oct 12 02:05 manifests
lrwxrwxrwx 1 root root 20 Oct 22 16:36 pki -> /var/lib/kubelet/pki
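
After creating the symlink, restart the kubelet and confirm it stays up (my own quick check):
[root@kubemaster kubernetes]# systemctl restart kubelet
[root@kubemaster kubernetes]# systemctl status kubelet -l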


ERROR#4: Pod network configuration error.
The document gives the commands below; they detect the running Kubernetes version and download the matching Weave YAML. At the time there was no Weave YAML for v1.8, so it did not work.
[root@kubemaster ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@kubemaster ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

[root@kubemaster ~]# kubectl get pods --all-namespaces

NAMESPACE     NAME                                 READY     STATUS              RESTARTS   AGE
kube-system   etcd-kubemaster                      1/1       Running             1          1d
kube-system   kube-apiserver-kubemaster            1/1       Running             4          1d
kube-system   kube-controller-manager-kubemaster   1/1       Running             2          1d
kube-system   kube-dns-545bc4bfd4-5675m            0/3       ContainerCreating   0          1d
kube-system   kube-proxy-hnkcn                     0/1       Error               0          1d
kube-system   kube-scheduler-kubemaster            1/1       Running             2          1d
kube-system   weave-net-2ztgt                      0/2       Error               0          1d

Solution:
The v1.8 YAML was not available on the Weave site, so I used the v1.7 one. Run the command below to fix the overlay network error:
[root@kubemaster ~]#kubectl apply -f https://cloud.weave.works/k8s/net?k8s-version=v1.7
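
After applying the v1.7 manifest, you can watch the failed pods until they recover (not from the original post):
[root@kubemaster ~]# kubectl get pods -n kube-system -w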





Important commands used for cluster troubleshooting.
The command below gives you detailed log files for the cluster and its objects.
[root@kubemaster log]# kubectl cluster-info dump --output-directory=/ts/cluster-log
Cluster info dumped to /ts/cluster-log

[root@kubemaster log]# cd /ts/cluster-log

[root@kubemaster cluster-log]# ls
default  kube-system  nodes.json
[root@kubemaster cluster-log]# ls -lrt
total 24
-rw-r--r--  1 root root 19517 Oct 24 19:01 nodes.json
drwxr-xr-x 11 root root  4096 Oct 24 19:01 kube-system
drwxr-xr-x  2 root root   170 Oct 24 19:01 default
[root@kubemaster cluster-log]# cd kube-system
[root@kubemaster kube-system]# ls
daemonsets.json   kube-apiserver-kubemaster           kube-proxy-ktl82           replication-controllers.json
deployments.json  kube-controller-manager-kubemaster  kube-scheduler-kubemaster  services.json
etcd-kubemaster   kube-dns-545bc4bfd4-d55z7           pods.json                  weave-net-l2jjj
events.json       kube-proxy-hzf2x                    replicasets.json           weave-net-wtr22
[root@kubemaster kube-system]# ls -lrt
total 436
-rw-r--r-- 1 root root 195397 Oct 24 19:01 events.json
-rw-r--r-- 1 root root    124 Oct 24 19:01 replication-controllers.json
-rw-r--r-- 1 root root   1877 Oct 24 19:01 services.json
-rw-r--r-- 1 root root  34921 Oct 24 19:01 daemonsets.json
-rw-r--r-- 1 root root  19891 Oct 24 19:01 deployments.json
-rw-r--r-- 1 root root  19685 Oct 24 19:01 replicasets.json
-rw-r--r-- 1 root root 160395 Oct 24 19:01 pods.json
drwxr-xr-x 2 root root     22 Oct 24 19:01 etcd-kubemaster
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-apiserver-kubemaster
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-controller-manager-kubemaster
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-dns-545bc4bfd4-d55z7
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-proxy-hzf2x
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-proxy-ktl82
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-scheduler-kubemaster
drwxr-xr-x 2 root root     22 Oct 24 19:01 weave-net-l2jjj
drwxr-xr-x 2 root root     22 Oct 24 19:01 weave-net-wtr22

To get logs from a component in the kube-system namespace:

 kubectl --namespace="kube-system" logs kube-apiserver-kubemaster
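
The same command can follow the log stream or pull the previous container's log when a pod is crash-looping (standard kubectl flags):

 kubectl --namespace="kube-system" logs -f kube-apiserver-kubemaster
 kubectl --namespace="kube-system" logs --previous kube-apiserver-kubemaster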

To check cluster token details:

[root@kubemaster Cluster-install]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
ebdf74.ewewerwad7b   23h       2017-10-24T23:33:45-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
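
Tokens expire after 24 hours by default (see the kubeadm init warning above), so if you join a node later you can generate a fresh one on the master; as far as I recall, --ttl 0 makes it non-expiring:
[root@kubemaster Cluster-install]# kubeadm token create
[root@kubemaster Cluster-install]# kubeadm token create --ttl 0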

To allow application pod deployment on the master node (not recommended):

[root@kubemaster pki]# kubectl taint nodes --all node-role.kubernetes.io/master-


node "kubemaster" untainted
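
To put the taint back later and stop regular pods from being scheduled on the master:
[root@kubemaster pki]# kubectl taint nodes kubemaster node-role.kubernetes.io/master=:NoSchedule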

Thursday, October 5, 2017

Rancher 2.0 - Kubernetes-ready container environment

Rancher 2.0

I have been experimenting with new technologies, especially containers, in my lab for more than a year. These days, technology architectures are lightweight (microservices) and distributed, which makes it easy to configure and learn them yourself on your laptop.

I tested Rancher 0.8.0 a few months ago, and now it is time to test Rancher 2.0.
It is a cluster-ready container environment. It enables you to manage your existing Kubernetes clusters in the public cloud as well as those already running on premises. The diagram below describes the Rancher 2.0 architecture. You can create and manage multiple Kubernetes clusters in Rancher 2.0; it is now user-friendly and easily configurable.


Run a single docker command and you get a container cluster in a container. Rancher recommends that your computer meet the following requirements:
Ubuntu 16.04 (kernel v3.10+) or RHEL/CentOS 7.3
At least 4 GB of RAM
80 GB of disk space
Latest version of the Docker engine installed

 #docker run -d --restart=unless-stopped -p 8080:8080 rancher/server:preview


[root@ansible-server ~]# docker ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                              NAMES
e2422b8acb64        rancher/server:preview   "/usr/bin/entry /usr/"   3 weeks ago         Up 58 seconds       3306/tcp, 0.0.0.0:8080->8080/tcp   zen_tesla
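
While the server boots, you can follow the container's logs to see when the UI is ready (using the container ID from the docker ps output above):
[root@ansible-server ~]# docker logs -f e2422b8acb64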

Wait for a while and you will have your Rancher Kubernetes cluster environment on your server. Now I can access my Rancher console at http://192.168.99.101:8080 (use your local host's IP).


Adding an additional host to my default cluster.

Copy the above command and run it on the target host. This will join the new host to the cluster.

[root@ansible-client ~]# sudo docker run --rm --privileged -v /var/run/docker.sock:/var/run/docker.sock -v /var/lib/rancher:/var/lib/rancher rancher/agent:v2.0-alpha4 http://192.168.99.101:8080/v3/scripts/64EA5E2D127357876F60:1483142400000:xxXYZXXxxdcWReA5Ir6g
Unable to find image 'rancher/agent:v2.0-alpha4' locally
v2.0-alpha4: Pulling from rancher/agent
b3e1c725a85f: Pull complete
4daad8bdde31: Pull complete
63fe8c0068a8: Pull complete
4a70713c436f: Pull complete
bd842a2105a8: Pull complete
3f7d6fd71888: Pull complete
16914729cfd3: Pull complete
8c02e557c7ff: Pull complete
a2bbb798dbc8: Pull complete
a621cbb2db05: Pull complete
Digest: sha256:63b71388b4c4907394a103dc8abf57xxxxxxxxxxxxxxxxx5e79df24b084a0ae1
Status: Downloaded newer image for rancher/agent:v2.0-alpha4

INFO: Running Agent Registration Process, CATTLE_URL=http://192.168.99.101:8080/v1
INFO: Attempting to connect to: http://192.168.99.101:8080/v1
INFO: http://192.168.99.101:8080/v1 is accessible
INFO: Inspecting host capabilities
INFO: Boot2Docker: false
INFO: Host writable: true
INFO: Token: xxxxxxxx
INFO: Running registration
INFO: Printing Environment
INFO: ENV: CATTLE_ACCESS_KEY=DA7B5FXXXXXXXXXXXXXXA
INFO: ENV: CATTLE_HOME=/var/lib/cattle
INFO: ENV: CATTLE_REGISTRATION_ACCESS_KEY=registrationToken
INFO: ENV: CATTLE_REGISTRATION_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_SECRET_KEY=xxxxxxx
INFO: ENV: CATTLE_URL=http://192.168.99.101:8080/v3
INFO: ENV: DETECTED_CATTLE_AGENT_IP=192.168.99.104
INFO: ENV: RANCHER_AGENT_IMAGE=rancher/agent:v2.0-alpha4
INFO: Launched Rancher Agent: c2034dd35324127715ef49xxxxxxxxxxxxxxxxxxxxxxx6

The host has been added to the Kubernetes cluster and is ready for application container deployment.

Go to the container tab and deploy your new container.

In this window you can see your application container's resource consumption, health, configuration, and console. You can scale containers up on demand.
Learn more about Rancher at http://rancher.com/rancher2-0/
