The next thing I’d like to play with is manually creating a cluster with kubeadm, running a simple web service there, and accessing it via an AWS LoadBalancer.
The main issue I faced during this setup was the lack of full-fledged documentation and up-to-date examples, so I had to figure out almost everything by trial and error.
Just to finally see a message saying:
WARNING: aws built-in cloud provider is now deprecated. The AWS provider is deprecated and will be removed in a future release
The example below uses Kubernetes v1.15.2 and EC2 instances running Ubuntu 18.04.
Preparing AWS
VPC
Create a VPC with the 10.0.0.0/16 CIDR:
Add a tag named kubernetes.io/cluster/kubernetes with the value owned – Kubernetes will use it for auto-discovery of the AWS resources related to this cluster, and it will add the same tag itself when creating new resources:
Enable DNS hostnames:
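If you prefer the AWS CLI over the console, the whole VPC part can be sketched like this – just a sketch, with the IDs captured into shell variables so they can be reused in the steps below:

VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 --query 'Vpc.VpcId' --output text)
# tag the VPC so Kubernetes can auto-discover the cluster's resources
aws ec2 create-tags --resources $VPC_ID --tags Key=kubernetes.io/cluster/kubernetes,Value=owned
# enable DNS hostnames
aws ec2 modify-vpc-attribute --vpc-id $VPC_ID --enable-dns-hostnames '{"Value":true}'

The owned value tells Kubernetes this cluster has exclusive use of the tagged resources; shared can be used instead if several clusters live in the same VPC.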
Subnet
Create a new subnet in this VPC:
Enable Public IPs for EC2 instances which will be placed in this subnet:
Add the tag:
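The CLI equivalent might look like this (the 10.0.0.0/24 CIDR is an assumption – use the block you actually picked):

SUBNET_ID=$(aws ec2 create-subnet --vpc-id $VPC_ID --cidr-block 10.0.0.0/24 --query 'Subnet.SubnetId' --output text)
# auto-assign public IPs to instances launched in this subnet
aws ec2 modify-subnet-attribute --subnet-id $SUBNET_ID --map-public-ip-on-launch
aws ec2 create-tags --resources $SUBNET_ID --tags Key=kubernetes.io/cluster/kubernetes,Value=owned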
Internet Gateway
Create an IGW to route traffic from the subnet into the Internet:
Add the tag to the IGW as well, just in case:
Attach this IGW to your VPC:
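The same steps with the AWS CLI, reusing the VPC_ID variable from above (again, just a sketch):

IGW_ID=$(aws ec2 create-internet-gateway --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 create-tags --resources $IGW_ID --tags Key=kubernetes.io/cluster/kubernetes,Value=owned
aws ec2 attach-internet-gateway --internet-gateway-id $IGW_ID --vpc-id $VPC_ID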
Route Table
Create a routing table:
Add the tag here:
Click on the Routes tab, add a new route to the 0.0.0.0/0 network via the IGW we created above:
Attach this table to the subnet – Edit subnet association:
Choose your subnet created earlier:
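And a CLI sketch for the routing part, reusing the variables from the previous steps:

RTB_ID=$(aws ec2 create-route-table --vpc-id $VPC_ID --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-tags --resources $RTB_ID --tags Key=kubernetes.io/cluster/kubernetes,Value=owned
# default route to the Internet via the IGW
aws ec2 create-route --route-table-id $RTB_ID --destination-cidr-block 0.0.0.0/0 --gateway-id $IGW_ID
# associate the route table with the subnet
aws ec2 associate-route-table --route-table-id $RTB_ID --subnet-id $SUBNET_ID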
IAM role
To make Kubernetes work with AWS, we need to create two IAM EC2 roles – one for the master and one for the worker nodes.
You can also use ACCESS/SECRET instead.
IAM Master role
Go to IAM > Policies, click Create policy, and on the JSON tab add a new policy definition (see cloud-provider-aws):
Save it as k8s-cluster-iam-master-policy (obviously, any name can be used):
And create a k8s-cluster-iam-master-role:
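For reference, a CLI sketch of the same – the file names here are assumptions, with k8s-cluster-iam-master-policy.json holding the JSON from the cloud-provider-aws documentation; note that the console creates an instance profile for the role automatically, while with the CLI it has to be created explicitly:

# trust policy allowing EC2 to assume the role
cat > ec2-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

POLICY_ARN=$(aws iam create-policy --policy-name k8s-cluster-iam-master-policy \
  --policy-document file://k8s-cluster-iam-master-policy.json \
  --query 'Policy.Arn' --output text)
aws iam create-role --role-name k8s-cluster-iam-master-role \
  --assume-role-policy-document file://ec2-trust-policy.json
aws iam attach-role-policy --role-name k8s-cluster-iam-master-role --policy-arn $POLICY_ARN
# the instance profile is what actually gets attached to an EC2 instance
aws iam create-instance-profile --instance-profile-name k8s-cluster-iam-master-role
aws iam add-role-to-instance-profile --instance-profile-name k8s-cluster-iam-master-role \
  --role-name k8s-cluster-iam-master-role

The worker policy and the k8s-cluster-iam-worker-role are created in the same way.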
Running EC2
Create an EC2 instance of the t2.medium type (the minimal suitable type, as a Kubernetes master needs at least 2 CPU cores) in your VPC, and set k8s-cluster-iam-master-role as its IAM role:
Add tags:
Create a Security Group:
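A CLI sketch of the same launch – the AMI ID, key pair name, security group name, and the Name tag below are placeholders:

# security group for the cluster; SSH is opened here, add whatever other ports your cluster needs
SG_ID=$(aws ec2 create-security-group --group-name k8s-cluster-sg \
  --description "k8s cluster" --vpc-id $VPC_ID --query 'GroupId' --output text)
aws ec2 create-tags --resources $SG_ID --tags Key=kubernetes.io/cluster/kubernetes,Value=owned
aws ec2 authorize-security-group-ingress --group-id $SG_ID --protocol tcp --port 22 --cidr 0.0.0.0/0

# the Master instance with the master IAM role (instance profile)
aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.medium \
  --key-name my-key --subnet-id $SUBNET_ID --security-group-ids $SG_ID \
  --iam-instance-profile Name=k8s-cluster-iam-master-role \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=k8s-master},{Key=kubernetes.io/cluster/kubernetes,Value=owned}]'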
While the Master is spinning up, create a Worker Node in the same way, just using the k8s-cluster-iam-worker-role:
Tags:
Attach existing SG:
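The CLI version of the Worker launch differs only in the instance profile and tags (assuming the worker role got its own instance profile as described above; the AMI ID, key name, and Name tag are placeholders again):

aws ec2 run-instances --image-id ami-xxxxxxxx --count 1 --instance-type t2.medium \
  --key-name my-key --subnet-id $SUBNET_ID --security-group-ids $SG_ID \
  --iam-instance-profile Name=k8s-cluster-iam-worker-role \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=k8s-worker},{Key=kubernetes.io/cluster/kubernetes,Value=owned}]'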
Connect to any of the instances and check that the network is working:
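The output below comes from running kubeadm init on the Master. A minimal sketch of an init configuration that enables the in-tree AWS cloud provider could look like this – the file name is an assumption, the node name has to match the instance's private DNS name (hence hostname -f), and the serviceSubnet matches the 10.100.0.1 service IP seen in the output:

cat > kubeadm-init.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  # the node name must be the EC2 private DNS name, e.g. ip-10-0-0-102.eu-west-3.compute.internal
  name: $(hostname -f)
  kubeletExtraArgs:
    cloud-provider: aws
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.2
networking:
  serviceSubnet: "10.100.0.0/16"
apiServer:
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
EOF

sudo kubeadm init --config kubeadm-init.yaml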
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
...
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-10-0-0-102.eu-west-3.compute.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.100.0.1 10.0.0.102]
...
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.502303 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
...
[mark-control-plane] Marking the node ip-10-0-0-102.eu-west-3.compute.internal as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ip-10-0-0-102.eu-west-3.compute.internal as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
...
Your Kubernetes control-plane has initialized successfully!
...
Then you can join any number of worker nodes by running the following on each as root:
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
While setting up this cluster and the ELB, I faced an issue where Worker Nodes were not added to the AWS LoadBalancer when creating a Kubernetes LoadBalancer Service.
In such a case, check whether the ProviderID (--provider-id) is present in the node’s settings:
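A quick way to check it with kubectl (the node name in the second command is a placeholder):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.providerID}{"\n"}{end}'
# or for a single node
kubectl get node <node-name> -o jsonpath='{.spec.providerID}'

If the providerID is empty, the kubelet on that node was most likely started without the --cloud-provider=aws flag, and such a node will not be added to the ELB.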