Thursday, May 24, 2018



AWS Autoscaling demo lab

1. Configure VPC, public/private subnets, Internet Gateway (IGW), NAT Gateway, etc.
Created VPC with CIDR block 10.10.0.0/16

Created two public subnets in AZ1 and AZ2:
Rengs-Public-1A: 10.10.1.0/24
Rengs-Public-1B: 10.10.2.0/24
Created two private subnets in AZ3 and AZ4:
Rengs-Private-1C: 10.10.3.0/24
Rengs-Private-1D: 10.10.4.0/24
Created two route tables [Rengs-Public-RT and Rengs-Private-RT]

Created an Internet Gateway (Rengs-IGW) and attached it to the VPC (Rengs-VPC).

Created a NAT Gateway in Rengs-Public-1A with an Elastic IP and added it to the private route table (Rengs-Private-RT):
                0.0.0.0/0 -> NAT GW
                Associate the Rengs-Private-RT route table with the two private subnets.

Add 0.0.0.0/0 -> IGW in the Rengs-Public-RT route table.
                Associate the two public subnets with the Rengs-Public-RT route table.
aws ec2 attach-internet-gateway --vpc-id "vpc-bfbc90c4" --internet-gateway-id "igw-d1e294a9" --region us-east-1
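
For reference, the rest of the build can be scripted the same way with the AWS CLI. This is a minimal sketch, not the exact commands used in this lab; apart from the VPC ID above, every resource ID below is a placeholder for the ID returned by the preceding call:

aws ec2 create-vpc --cidr-block 10.10.0.0/16
aws ec2 create-subnet --vpc-id "vpc-bfbc90c4" --cidr-block 10.10.1.0/24 --availability-zone us-east-1a
aws ec2 create-route-table --vpc-id "vpc-bfbc90c4"
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id subnet-xxxxxxxx --allocation-id eipalloc-xxxxxxxx
aws ec2 create-route --route-table-id rtb-xxxxxxxx --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxxxxxx
aws ec2 associate-route-table --route-table-id rtb-xxxxxxxx --subnet-id subnet-xxxxxxxx
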
2. Create EC2 instance with the user data/bootstrapping script below:
#!/bin/bash
yum install httpd -y
service httpd start
chkconfig httpd on
yum install wget -y
yum install php php-mysql mysql -y
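
A hedged example of launching this instance from the CLI, assuming the script above is saved as bootstrap.sh; the AMI, key name, and subnet/SG IDs are placeholders:

aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro --key-name mykey --subnet-id subnet-xxxxxxxx --security-group-ids sg-xxxxxxxx --associate-public-ip-address --user-data file://bootstrap.sh
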
Create webserver-SG and allow the SSH and HTTP ports (see the CLI sketch below).
Create an IAM role for EC2 with S3 access (Rengs-S3-Fullaccess).

Create an SNS topic and subscribe your email address to it.
Attach the IAM role (Rengs-S3-Fullaccess) to your new webserver.
Create the RDS-SG security group and allow access from webserver-SG.
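
A minimal CLI sketch of the two security groups (group IDs are placeholders; the same rules can be added as Inbound Rules in the console):

aws ec2 create-security-group --group-name webserver-SG --description "Web tier" --vpc-id "vpc-bfbc90c4"
aws ec2 authorize-security-group-ingress --group-id sg-webxxxxx --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-webxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0
aws ec2 create-security-group --group-name RDS-SG --description "DB tier" --vpc-id "vpc-bfbc90c4"
aws ec2 authorize-security-group-ingress --group-id sg-rdsxxxxx --protocol tcp --port 3306 --source-group sg-webxxxxx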

Create a Classic Load Balancer (Rengs-ELB), open port 80, and map both public subnets.
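
Roughly equivalent CLI (Classic ELB API; the subnet, SG, and instance IDs are placeholders):

aws elb create-load-balancer --load-balancer-name Rengs-ELB --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" --subnets subnet-pub1a subnet-pub1b --security-groups sg-webxxxxx
aws elb register-instances-with-load-balancer --load-balancer-name Rengs-ELB --instances i-xxxxxxxx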

5. Install WordPress
[ec2-user@ip-10-10-1-77 ~]$
[ec2-user@ip-10-10-1-77 ~]$ sudo su -
[root@ip-10-10-1-77 ~]# cd /var/www/html
[root@ip-10-10-1-77 html]# wget https://wordpress.org/latest.tar.gz
--2018-05-14 03:06:07--  https://wordpress.org/latest.tar.gz
Resolving wordpress.org (wordpress.org)... 198.143.164.252
Connecting to wordpress.org (wordpress.org)|198.143.164.252|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8565154 (8.2M) [application/octet-stream]
Saving to: ‘latest.tar.gz’
latest.tar.gz       100%[===================>]   8.17M  20.1MB/s    in 0.4s
2018-05-14 03:06:08 (20.1 MB/s) - ‘latest.tar.gz’ saved [8565154/8565154]
[root@ip-10-10-1-77 html]# ls
latest.tar.gz
[root@ip-10-10-1-77 html]# tar -xvf latest.tar.gz
[root@ip-10-10-1-77 html]# cd wordpress
[root@ip-10-10-1-77 wordpress]# ls
index.php    wp-activate.php     wp-comments-post.php  wp-cron.php        wp-load.php   wp-settings.php   xmlrpc.php
license.txt  wp-admin            wp-config-sample.php  wp-includes        wp-login.php  wp-signup.php
readme.html  wp-blog-header.php  wp-content            wp-links-opml.php  wp-mail.php   wp-trackback.php
[root@ip-10-10-1-77 wordpress]# mv * /var/www/html/

[root@ip-10-10-1-77 html]# cd /etc/httpd/conf
[root@ip-10-10-1-77 conf]# ls
httpd.conf  magic
[root@ip-10-10-1-77 conf]# vi httpd.conf
(search for "AllowOverride None" and change None to All)

Change ownership of the html directory:
                chown -R apache:apache /var/www/

Test the RDS instance connection from your webserver EC2 instance:
mysql -h rengsdb.cidtkwwtk7hg.us-east-1.rds.amazonaws.com -u rengsdb -p
Enter password:
mysql> CREATE DATABASE rengsdb;
Query OK, 1 row affected (0.01 sec)
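
With the database reachable, point WordPress at it by copying the sample config and editing the DB constants. A short sketch using the database and endpoint above (the password is whatever you set on the RDS instance):

[root@ip-10-10-1-77 html]# cp wp-config-sample.php wp-config.php
[root@ip-10-10-1-77 html]# vi wp-config.php
define('DB_NAME', 'rengsdb');
define('DB_USER', 'rengsdb');
define('DB_PASSWORD', '<your-rds-password>');
define('DB_HOST', 'rengsdb.cidtkwwtk7hg.us-east-1.rds.amazonaws.com');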

7. Create new IAM role and S3 bucket
8. Assign IAM role to EC2 instance and copy application data to S3
9. Create Route53 hosted zone to map with domain

Route53 Configuration
Create a hosted zone and record set for your domain www.rengscloud.com.

Then go to GoDaddy.com, where you registered your domain name, and add the AWS DNS details:
Manage DNS -> Name servers -> Custom -> add the AWS DNS name servers
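
The hosted zone can also be created from the CLI; the name servers to copy into GoDaddy come back in the DelegationSet section of the output (the caller reference is just any unique string):

aws route53 create-hosted-zone --name rengscloud.com --caller-reference rengs-$(date +%s)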

10. Create AMI for EC2 instance
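
CLI equivalent (the instance ID is a placeholder; drop --no-reboot if a consistent file system matters more than uptime):

aws ec2 create-image --instance-id i-xxxxxxxx --name "Rengs-Webserver-AMI" --description "WordPress webserver" --no-reboot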

13. Enable ACM (AWS Certificate Manager) on the ELB
14. Secure the infrastructure at the Security Group and VPC level

Monday, December 11, 2017

Minishift v3.6


Install and configure Minishift using VirtualBox


C:\Users\Rengs>minishift config set memory 4096
No Minishift instance exists. New memory setting will be applied on next 'minishift start'

C:\Users\Rengs>minishift config set openshift-version v3.6.0


C:\Users\Rengs>minishift --show-libmachine-logs start --vm-driver=virtualbox
-- Starting local OpenShift cluster using 'virtualbox' hypervisor ...
-- Minishift VM will be configured with ...
   Memory:    4 GB
   vCPUs :    2
   Disk size: 20 GB

   Downloading ISO 'https://github.com/minishift/minishift-b2d-iso/releases/download/v1.2.0/minishift-b2d.iso'
 40.00 MiB / 40.00 MiB 

[============================================================] 100.00% 0s
-- Starting Minishift VM ....(minishift) Trying to access option engine-install-url which does not exist
(minishift) THIS ***WILL*** CAUSE UNEXPECTED BEHAVIOR
(minishift) Type assertion did not go smoothly to string for key engine-install-url
(minishift) Trying to access option swarm-master which does not exist
(minishift) THIS ***WILL*** CAUSE UNEXPECTED BEHAVIOR
(minishift) Type assertion did not go smoothly to bool for key swarm-master
(minishift) Trying to access option swarm-host which does not exist
(minishift) THIS ***WILL*** CAUSE UNEXPECTED BEHAVIOR
(minishift) Type assertion did not go smoothly to string for key swarm-host
(minishift) Trying to access option swarm-discovery which does not exist
(minishift) THIS ***WILL*** CAUSE UNEXPECTED BEHAVIOR
(minishift) Type assertion did not go smoothly to string for key swarm-discovery
(minishift) Type assertion did not go smoothly to bool for key virtualbox-host-dns-resolver
Creating CA: C:\Users\Rengs\.minishift\certs\ca.pem
(minishift) Type assertion did not go smoothly to bool for key virtualbox-hostonly-no-dhcp
(minishift) Type assertion did not go smoothly to bool for key virtualbox-no-share
(minishift) Type assertion did not go smoothly to bool for key virtualbox-no-dns-proxy
(minishift) Type assertion did not go smoothly to bool for key virtualbox-no-vtx-check
Creating client certificate: C:\Users\Rengs\.minishift\certs\cert.pem
Running pre-create checks...
Creating machine...
(minishift) Downloading C:\Users\Rengs\.minishift\cache\boot2docker.iso from file://C:/Users/Rengs/.minishift/cache/iso/minishift-b2d.iso...
(minishift) Creating VirtualBox VM...
(minishift) Creating SSH key...
........(minishift) Starting the VM...
(minishift) Check network to re-create if needed...
..(minishift) Waiting for an IP...
.................Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
.Docker is up and running!
 OK
-- Checking for IP address ... OK
-- Checking if external host is reachable from the Minishift VM ...
   Pinging 8.8.8.8 ... OK
-- Checking HTTP connectivity from the VM ...
   Retrieving http://minishift.io/index.html ... OK
-- Checking if persistent storage volume is mounted ... OK
-- Checking available disk space ... 0% OK
-- Downloading OpenShift binary 'oc' version 'v3.6.0'
 33.92 MiB / 33.92 MiB 

[====================================================================] 100.00% 0s
- Downloading OpenShift v3.6.0 checksums ... OK
-- OpenShift cluster will be configured with ...
   Version: v3.6.0
-- Checking `oc` support for startup flags ...
   host-data-dir ... OK
   host-pv-dir ... OK
   host-volumes-dir ... OK
   routing-suffix ... OK
   host-config-dir ... OK
Starting OpenShift using openshift/origin:v3.6.0 ...
Pulling image openshift/origin:v3.6.0
Pulled 1/4 layers, 26% complete
Pulled 1/4 layers, 37% complete
Pulled 1/4 layers, 49% complete
Pulled 1/4 layers, 62% complete
Pulled 1/4 layers, 73% complete
Pulled 2/4 layers, 82% complete
Pulled 3/4 layers, 90% complete
Pulled 3/4 layers, 98% complete
Pulled 4/4 layers, 100% complete
Extracting
Image pull complete
OpenShift server started.

The server is accessible via web console at:
    https://192.168.99.102:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin


C:\Users\Rengs>oc login https://192.168.99.102:8443 -u developer -p developer
'oc' is not recognized as an internal or external command,
operable program or batch file.

C:\Users\Rengs>

C:\Users\Rengs>minishift oc-env
SET PATH=C:\Users\Rengs\.minishift\cache\oc\v3.6.0;%PATH%
REM Run this command to configure your shell:
REM     @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

C:\Users\Rengs>

C:\Users\Rengs>SET PATH=C:\Users\Rengs\.minishift\cache\oc\v3.6.0;%PATH%

C:\Users\Rengs>@FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i

C:\Users\Rengs>
C:\Users\Rengs>oc login https://192.168.99.102:8443 -u developer -p developer
Login successful.

You have access to the following projects and can switch between them with 'oc project <projectname>':

    my-launcher
  * myproject

Using project "myproject".

C:\Users\Rengs>


C:\Users\Rengs>oc get services -n default
NAME              CLUSTER-IP       EXTERNAL-IP   PORT(S)                   AGE
docker-registry   172.30.1.1       <none>        5000/TCP                  1d
kubernetes        172.30.0.1       <none>        443/TCP,53/UDP,53/TCP     1d
router            172.30.200.222   <none>        80/TCP,443/TCP,1936/TCP   1d

https://appdev.openshift.io/docs/minishift-installation.html#installing-a-openshiftlocal

https://appdev.openshift.io/docs/getting-started.html


Set the Docker environment variables to work with the Docker daemon running in the Minishift VM.

C:\Users\Rengs>
C:\Users\Rengs>minishift docker-env
SET DOCKER_TLS_VERIFY=1
SET DOCKER_HOST=tcp://192.168.99.100:2376
SET DOCKER_CERT_PATH=C:\Users\Rengs\.minishift\certs
SET DOCKER_API_VERSION=1.24
REM Run this command to configure your shell:
REM     @FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i

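After applying the variables with the FOR loop from the REM hint above, the local docker client talks to the daemon inside the Minishift VM; a quick sanity check:

C:\Users\Rengs>@FOR /f "tokens=*" %i IN ('minishift docker-env') DO @call %i
C:\Users\Rengs>docker ps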

C:\Users\Rengs>docker pull openshift/jenkins-2-centos7
C:\Users\Rengs>docker pull openshiftio/launchpad-jenkins-slave

C:\Users\Rengs>docker pull sonatype/nexus


C:\Users\Rengs>oc new-project mynexus
Now using project "mynexus" on server "https://192.168.99.100:8443".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-22-centos7~https://github.com/openshift/ruby-ex.git

to build a new example application in Ruby.

C:\Users\Rengs>
C:\Users\Rengs>oc new-app sonatype/nexus
warning: Cannot find git. Ensure that it is installed and in your path. Git is required to work with git repositories.
--> Found Docker image 45945e5 (3 months old) from Docker Hub for "sonatype/nexus"

    * An image stream will be created as "nexus:latest" that will track this image
    * This image will be deployed in deployment config "nexus"
    * Port 8081/tcp will be load balanced by service "nexus"
      * Other containers can access this service through the hostname "nexus"
    * This image declares volumes and will default to use non-persistent, host-local storage.
      You can add persistent volumes later by running 'volume dc/nexus --add ...'

--> Creating resources ...
    imagestream "nexus" created
    deploymentconfig "nexus" created
    service "nexus" created
--> Success
    Run 'oc status' to view your app.

C:\Users\Rengs>
C:\Users\Rengs>oc status
In project mynexus on server https://192.168.99.100:8443

svc/nexus - 172.30.5.23:8081
  dc/nexus deploys istag/nexus:latest
    deployment #1 pending 2 minutes ago


View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

C:\Users\Rengs>oc expose svc/nexus

route "nexus" exposed
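
The hostname that the route was given can be read back with the standard oc command:

C:\Users\Rengs>oc get route nexus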


Tuesday, November 7, 2017

Kubernetes v1.8 Installation


I configured a single-master, single-node Kubernetes cluster. I followed the Kubernetes installation document, but hit many errors during the installation; I fixed them one by one and finally got the cluster configured successfully.




Pre-installation steps:

You must disable swap in order for the kubelet to work properly.
[root@kubemaster ~]# swapoff -a

To pass bridged IPv4 traffic to iptables' chains, run the command below. This is a requirement for CNI plugins to work.
[root@kubemaster ~]# sysctl net.bridge.bridge-nf-call-iptables=1
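
To make this setting survive a reboot, it can also be written to a sysctl drop-in file (a standard approach, not part of the original doc):

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system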

Set up the Kubernetes yum repo before running the yum install commands.
[root@kubemaster ~]# cat /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
       https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg

Configure Kubernetes Master server:
Install Docker engine
[root@kubemaster ~]#yum install -y docker

Start the docker service and enable it at startup.
[root@kubemaster ~]# systemctl enable docker && systemctl start docker

Install Kubernetes packages
[root@kubemaster ~]#yum install -y kubelet kubeadm kubectl

Start the kubelet service and enable it at startup.

[root@kubemaster ~]#systemctl enable kubelet && systemctl start kubelet



[root@kubemaster ~]# kubeadm init
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.8.1
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: hostname "kubemaster" could not be reached
[preflight] WARNING: hostname "kubemaster" lookup kubemaster on 75.75.75.75:53: no such host
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [kubemaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.2.5]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 221.003059 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node kubemaster as master by adding a label and a taint
[markmaster] Master kubemaster tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: fb5e91.098fadceaeca79f9
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy



Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token xcxceaec9abc9 10.0.2.5:6443 --discovery-token-ca-cert-hash sha256:200f2aefdd3ea9xcxcxx380a8afb607ee36ce9f5178ae59

---------------------------
[root@kubemaster ~]# ls -lrt  /etc/kubernetes/admin.conf
-rw------- 1 root root 5448 Oct 22 16:37 /etc/kubernetes/admin.conf

Copy admin.conf to the user's home directory.
[root@kubemaster ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@kubemaster ~]#
[root@kubemaster ~]# chown $(id -u):$(id -g) $HOME/.kube/config
[root@kubemaster ~]#
[root@kubemaster ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
kubemaster   NotReady   master    1h        v1.8.1

Configure the Weave overlay network for Kubernetes pods.
[root@kubemaster ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@kubemaster ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"
serviceaccount "weave-net" created
clusterrole "weave-net" created
clusterrolebinding "weave-net" created
daemonset "weave-net" created

[root@kubemaster ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS    RESTARTS   AGE
kube-system   etcd-kubemaster                      1/1       Running   1          1d
kube-system   kube-apiserver-kubemaster            1/1       Running   4          1d
kube-system   kube-controller-manager-kubemaster   1/1       Running   2          1d
kube-system   kube-dns-545bc4bfd4-5675m            3/3       Running   0          1d
kube-system   kube-proxy-hnkcn                     1/1       Running   1          1d
kube-system   kube-scheduler-kubemaster            1/1       Running   2          1d
kube-system   weave-net-2ztgt                      2/2       Running   2          1d

=====================================================


Join nodes to the Kubernetes cluster
--------------------------------------------------

# kubeadm join --token xcxceaec9abc9 192.168.56.103:6443
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] WARNING: Running with swap on is not supported. Please disable swap or set kubelet's --fail-swap-on flag to false.
[validation] WARNING: using token-based discovery without DiscoveryTokenCACertHashes can be unsafe (see https://kubernetes.io/docs/admin/kubeadm/#kubeadm-join).
[validation] WARNING: Pass --discovery-token-unsafe-skip-ca-verification to disable this warning. This warning will become an error in Kubernetes 1.9.
[discovery] Trying to connect to API Server "192.168.56.103:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.56.103:6443"
[discovery] Cluster info signature and contents are valid and no TLS pinning was specified, will use API Server "192.168.56.103:6443"
[discovery] Successfully established connection with API Server "192.168.56.103:6443"
[bootstrap] Detected server version: v1.8.2
[bootstrap] The server supports the Certificates API (certificates.k8s.io/v1beta1)

Node join complete:
* Certificate signing request sent to master and response
  received.
* Kubelet informed of new secure connection details.

Run 'kubectl get nodes' on the master to see this machine join.

[root@kubemaster ~]# kubectl get nodes
NAME         STATUS     ROLES     AGE       VERSION
kubemaster   Ready      master    1d        v1.8.1
kubenode1    NotReady   <none>    48s       v1.8.1
==============================================================

Troubleshooting node-join errors:
ERROR#1:
After adding the new node to the cluster, the above command showed kubenode1 in the "NotReady" state. I ran the command below to check the node configuration; the Events section at the bottom shows it eventually transitioned to NodeReady.

Solution:
[root@kubemaster ~]# kubectl describe nodes kubenode1
Name:               kubenode1
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=kubenode1
Annotations:        node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
Taints:             <none>
CreationTimestamp:  Wed, 25 Oct 2017 21:53:03 -0400
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 25 Oct 2017 21:56:33 -0400   Wed, 25 Oct 2017 21:53:03 -0400   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 25 Oct 2017 21:56:33 -0400   Wed, 25 Oct 2017 21:53:03 -0400   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 25 Oct 2017 21:56:33 -0400   Wed, 25 Oct 2017 21:53:03 -0400   KubeletHasNoDiskPressure     kubelet has no disk pressure
  Ready            True    Wed, 25 Oct 2017 21:56:33 -0400   Wed, 25 Oct 2017 21:53:53 -0400   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.56.105
  Hostname:    kubenode1
Capacity:
 cpu:     1
 memory:  1883560Ki
 pods:    110
Allocatable:
 cpu:     1
 memory:  1781160Ki
 pods:    110
System Info:
 Machine ID:                 efb5b81510e4442180fc2de6090cf1a6
 System UUID:                76093D8D-4224-4FEC-B88C-572B4C45DA0E
 Boot ID:                    2db689d1-ea6f-48b3-a939-2c5a1311612e
 Kernel Version:             3.10.0-693.2.2.el7.x86_64
 OS Image:                   CentOS Linux 7 (Core)
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://1.12.6
 Kubelet Version:            v1.8.1
 Kube-Proxy Version:         v1.8.1
PodCIDR:                     10.244.1.0/24
ExternalID:                  kubenode1
Non-terminated Pods:         (3 in total)
  Namespace                  Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                ------------  ----------  ---------------  -------------
  default                    hello-pod           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-flppd    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                weave-net-qrkh4     20m (2%)      0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ------------  ----------  ---------------  -------------
  20m (2%)      0 (0%)      0 (0%)           0 (0%)
Events:
  Type    Reason                   Age              From                   Message
  ----    ------                   ----             ----                   -------
  Normal  Starting                 3m               kubelet, kubenode1     Starting kubelet.
  Normal  NodeHasSufficientDisk    3m (x2 over 3m)  kubelet, kubenode1     Node kubenode1 status is now: NodeHasSufficientDisk
  Normal  NodeHasSufficientMemory  3m (x2 over 3m)  kubelet, kubenode1     Node kubenode1 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    3m (x2 over 3m)  kubelet, kubenode1     Node kubenode1 status is now: NodeHasNoDiskPressure
  Normal  NodeAllocatableEnforced  3m               kubelet, kubenode1     Updated Node Allocatable limit across pods
  Normal  Starting                 3m               kube-proxy, kubenode1  Starting kube-proxy.
  Normal  NodeReady                2m               kubelet, kubenode1     Node kubenode1 status is now: NodeReady


[root@kubemaster ~]# kubectl get nodes
NAME         STATUS    ROLES     AGE       VERSION
kubemaster   Ready     master    1d        v1.8.1
kubenode1    Ready     <none>    3m        v1.8.1


ERROR#2: kubelet refuses to start while swap is enabled. A temporary workaround is to add the flag below to a kubelet drop-in under /etc/systemd/system/kubelet.service.d:
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

Solution:

Disable swap on the Linux server. Follow these steps to remove swap permanently:
https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-swap-removing.html
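
In short (assuming the swap entry lives in /etc/fstab; the sed keeps a backup copy of the file):

swapoff -a
sed -i.bak '/ swap / s/^/#/' /etc/fstab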

ERROR#3: The kubelet service does not start because of the error below:

kubemaster kubelet: error: unable to load client CA file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory

Solution:
Kubelet looks for the certificate files in the /etc/kubernetes/pki folder when it starts, so I created a symlink pointing to /var/lib/kubelet/pki:
[root@kubemaster kubernetes]# ln -s /var/lib/kubelet/pki pki
[root@kubemaster kubernetes]# ls -lrt
total 0
drwxr-xr-x 2 root root  6 Oct 12 02:05 manifests
lrwxrwxrwx 1 root root 20 Oct 22 16:36 pki -> /var/lib/kubelet/pki


ERROR#4: Pod network configuration error
The document gives the commands below; they detect the current Kubernetes version and download the matching Weave YAML file. For v1.8 there was no Weave YAML yet, so it won't work.
[root@kubemaster ~]# export kubever=$(kubectl version | base64 | tr -d '\n')
[root@kubemaster ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$kubever"

[root@kubemaster ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY     STATUS              RESTARTS   AGE
kube-system   etcd-kubemaster                      1/1       Running             1          1d
kube-system   kube-apiserver-kubemaster            1/1       Running             4          1d
kube-system   kube-controller-manager-kubemaster   1/1       Running             2          1d
kube-system   kube-dns-545bc4bfd4-5675m            0/3       ContainerCreating   0          1d
kube-system   kube-proxy-hnkcn                     0/1       Error               0          1d
kube-system   kube-scheduler-kubemaster            1/1       Running             2          1d
kube-system   weave-net-2ztgt                      0/2       Error               0          1d

Solution:
The v1.8 YAML was not on the Weave page, so I used v1.7. Run the command below to fix the overlay network error (quote the URL so the shell does not interpret the '?'):
[root@kubemaster ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=v1.7"





Important commands used for cluster troubleshooting.
The command below writes detailed log files for the cluster and its objects:
[root@kubemaster log]# kubectl cluster-info dump --output-directory=/ts/cluster-log
Cluster info dumped to /ts/cluster-log

[root@kubemaster log]# cd /ts/cluster-log

[root@kubemaster cluster-log]# ls
default  kube-system  nodes.json
[root@kubemaster cluster-log]# ls -lrt
total 24
-rw-r--r--  1 root root 19517 Oct 24 19:01 nodes.json
drwxr-xr-x 11 root root  4096 Oct 24 19:01 kube-system
drwxr-xr-x  2 root root   170 Oct 24 19:01 default
[root@kubemaster cluster-log]# cd kube-system
[root@kubemaster kube-system]# ls
daemonsets.json   kube-apiserver-kubemaster           kube-proxy-ktl82           replication-controllers.json
deployments.json  kube-controller-manager-kubemaster  kube-scheduler-kubemaster  services.json
etcd-kubemaster   kube-dns-545bc4bfd4-d55z7           pods.json                  weave-net-l2jjj
events.json       kube-proxy-hzf2x                    replicasets.json           weave-net-wtr22
[root@kubemaster kube-system]# ls -lrt
total 436
-rw-r--r-- 1 root root 195397 Oct 24 19:01 events.json
-rw-r--r-- 1 root root    124 Oct 24 19:01 replication-controllers.json
-rw-r--r-- 1 root root   1877 Oct 24 19:01 services.json
-rw-r--r-- 1 root root  34921 Oct 24 19:01 daemonsets.json
-rw-r--r-- 1 root root  19891 Oct 24 19:01 deployments.json
-rw-r--r-- 1 root root  19685 Oct 24 19:01 replicasets.json
-rw-r--r-- 1 root root 160395 Oct 24 19:01 pods.json
drwxr-xr-x 2 root root     22 Oct 24 19:01 etcd-kubemaster
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-apiserver-kubemaster
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-controller-manager-kubemaster
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-dns-545bc4bfd4-d55z7
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-proxy-hzf2x
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-proxy-ktl82
drwxr-xr-x 2 root root     22 Oct 24 19:01 kube-scheduler-kubemaster
drwxr-xr-x 2 root root     22 Oct 24 19:01 weave-net-l2jjj
drwxr-xr-x 2 root root     22 Oct 24 19:01 weave-net-wtr22

To get logs from a cluster namespace:

 kubectl --namespace="kube-system" logs kube-apiserver-kubemaster
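
Two more standard commands that helped while troubleshooting (kubelet runs under systemd on these servers):

kubectl get events --namespace=kube-system
journalctl -u kubelet -f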

To check cluster token details:

[root@kubemaster Cluster-install]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
ebdf74.ewewerwad7b   23h       2017-10-24T23:33:45-04:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

To allow application pod deployment on the master node (not recommended):

[root@kubemaster pki]# kubectl taint nodes --all node-role.kubernetes.io/master-


node "kubemaster" untainted
