Let's assume we are going to deploy a production k8s cluster. Such a solution needs a domain name, so we will start by getting one. For this exercise, I picked the subdomain docks-www-tutorials.sherzod.com (the registrar is non-AWS) and will delegate its NS records to Route53 in the AWS account that we will operate the cluster in. kOps will have control of that Route53 zone, and will add DNS records as necessary.
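Since we are already going to use Terraform below, the hosted zone itself can be managed there too. A minimal sketch (the zone name is the one from this post; the resource layout is an assumption, not the repo's actual code):

```hcl
# Sketch: Route53 hosted zone for the delegated subdomain.
# After `terraform apply`, take the name servers from the output below
# and enter them as NS records for the subdomain at your (non-AWS) registrar
# to complete the delegation.
resource "aws_route53_zone" "kops" {
  name = "docks-www-tutorials.sherzod.com"

  tags = {
    application = "kops"
  }
}

output "kops_zone_name_servers" {
  value = aws_route53_zone.kops.name_servers
}
```
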
Create an AWS account, create IAM credentials (see https://kops.sigs.k8s.io/getting_started/aws/#setup-iam-user), and configure your AWS CLI to use them.
While we are at it, let's track the cost of our k8s cluster and all the resources around it by using cost-allocation tags. We are going to declare tags in all configurations - Terraform, cluster config, etc. Once the tags are activated, we will create a budget for them and track our spending (get detailed usage, alarm when approaching a threshold). We will come back to this. In the meantime, we will use the application=kops tag.
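For the Terraform side, one low-effort way to apply the cost-allocation tag to everything is the AWS provider's default_tags block, so per-resource tagging can't be forgotten. A sketch (the region is an assumption - use your own):

```hcl
# Sketch: tag every resource this provider creates with the
# cost-allocation tag.
provider "aws" {
  region = "us-east-1" # assumption: pick your region

  default_tags {
    tags = {
      application = "kops"
    }
  }
}
```

Note that resources created by kOps itself do not go through this provider; for those, kOps offers the cloudLabels field in the cluster spec for the same purpose.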
Creating a VPC for kOps using Terraform
After creating the AWS account and setting up the credentials (which will be used by Terraform), next we are going to create a VPC for our Kubernetes cluster. The cluster nodes will reside in private subnets, while the master(s) will be in public subnet(s). We can limit access to the master via security groups. If we had a network path to the private subnets via a private network (e.g. a VPN), we could deploy the master into the private subnets as well.
Check out the Terraform code and adjust the values in main.tf - the VPC CIDR, the AZs to deploy into, and the subnet sizes. In practice, those values are better kept in a variables file.
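As an illustration of moving those values out of main.tf, a hypothetical variables.tf might look like this (the CIDRs and AZs below are example values, not the ones from the repo):

```hcl
# Sketch of a variables file, so main.tf doesn't hard-code these values.
variable "vpc_cidr" {
  type    = string
  default = "10.0.0.0/16"
}

variable "azs" {
  type    = list(string)
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

variable "private_subnet_cidrs" {
  type    = list(string)
  default = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

variable "public_subnet_cidrs" {
  type    = list(string)
  default = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}
```
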
Once we are satisfied with the values, we run the configuration via terraform init, terraform plan -out myplan, and terraform apply myplan. After the last command, Terraform should output the actual identifiers for the VPC and subnets. Note those down, as we will need them in our cluster configuration. You can store them like so:
$ terraform output > output.txt
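When you later need a single value back out of output.txt, simple text tools suffice. The IDs in the sample below are hypothetical placeholders, just to show the shape of the file your real IDs will differ:

```shell
# Hypothetical sample of what `terraform output > output.txt` produces;
# your actual IDs will differ.
cat > output.txt <<'EOF'
private_subnet_ids = ["subnet-0aaa", "subnet-0bbb"]
public_subnet_ids = ["subnet-0ccc"]
vpc_id = "vpc-0abc123"
EOF

# Pull a single value back out, e.g. the VPC ID, stripping the quotes:
VPC_ID=$(grep '^vpc_id' output.txt | cut -d'"' -f2)
echo "$VPC_ID"
```
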
S3 buckets for kOps
Now we need two S3 buckets for our cluster. One will house the kOps cluster state store, and the other will provide the OIDC server discovery endpoint for IAM authn. Use the S3 console to create those buckets; use the defaults, except uncheck the Block Public Access option for the second bucket. Use any names for the buckets, as long as they are valid S3 bucket names. For brevity, we will call them bucket1 and bucket1-pub.
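We will reference the state store bucket constantly, and kOps can pick its location up from an environment variable, so you don't have to pass --state to every kops command. A minimal sketch, assuming the example bucket name from above:

```shell
# kOps reads the state store location from this variable, so we don't
# need to pass --state to every kops command (bucket1 is our example name).
export KOPS_STATE_STORE=s3://bucket1
echo "$KOPS_STATE_STORE"
```

The second bucket (bucket1-pub) comes into play later: the cluster spec's serviceAccountIssuerDiscovery section points the OIDC discovery store at a publicly readable bucket like it.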
SSH key pair
We are going to use an Ubuntu image for the cluster master and nodes. kOps can provision our SSH public key stored in AWS, so let's go ahead and create one.
$ ssh-keygen -t rsa -f kops-id-rsa
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in kops-id-rsa
Your public key has been saved in kops-id-rsa.pub
The key fingerprint is:
SHA256:<fingerprint hash> <your local hostname>
The key's randomart image is:
...
Log onto the AWS Management Console, navigate to the EC2 service dashboard, and in the left navigation bar, click on "Key Pairs". On the next page, click on Actions > Import key pair. On the resulting page, give your key a name (for example, kops-ssh-key), browse to the kops-id-rsa.pub file, and click on the "Import key pair" button.
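If you prefer to stay in the terminal, the same import can be done with the AWS CLI (assuming it is configured as above, and using the same key name as the console example):

```shell
# Import the public key we just generated as an EC2 key pair.
# fileb:// tells the CLI to read the file as a raw binary blob.
aws ec2 import-key-pair \
  --key-name kops-ssh-key \
  --public-key-material fileb://kops-id-rsa.pub
```
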
Install the needed k8s tools
Install the kOps binary using the installation instructions. I suggest using Homebrew for that - it will make updating the binaries easier in the future. While at it, install helm as well - we are going to need it.
$ brew install kops helm
Here are the binary versions at the time of this writing:
$ kops version
Version 1.22.3
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.3", GitCommit:"816c97ab8cff8a1c72eccca1026f7820e93e0d25", GitTreeState:"clean", BuildDate:"2022-01-25T21:17:57Z", GoVersion:"go1.17.6", Compiler:"gc", Platform:"linux/amd64"}
$ helm version
version.BuildInfo{Version:"v3.8.0", GitCommit:"d14138609b01886f544b2025f5000351c9eb092e", GitTreeState:"clean", GoVersion:"go1.17.6"}
I highly recommend installing the Lens IDE for Kubernetes for administering or interacting with the eventual k8s cluster. While kubectl is handy for some commands, Lens enables visualizing pods, nodes, their metrics - and much more. Plus, you won't have to deal with switching contexts, and can see everything at once. I thank my friend Alisher for introducing Lens to me; it is a great tool.
In our next post, we will install the cluster and start configuring it.