Deploying and Managing Kubernetes on AWS

March 18, 2019


Kubernetes, originally designed by Google, is an open-source container orchestration system that makes your life easier by automating application deployment, scaling, and management.

If you have ever played with Kubernetes, you will know that setting it up manually can be quite painful. Setting up an automated Kubernetes deployment the first time round gives you an environment that is easily replicated on future projects. DevOps best practice also recommends that you are able to create and destroy your Kubernetes infrastructure in an automated fashion.

The latter is very handy especially in situations where you need to upgrade your Kubernetes clusters.

Each Kubernetes deployment is quite different depending on its use case, and there are several ways to deploy Kubernetes on AWS. Picking an option and sticking to it can be challenging when there are so many to choose from.

Let’s recap some of the options:

eksctl (or EKS)

EKS offers you the possibility of going the manual route.

  • You could use the AWS Web Console to first prototype what you want to do, and set up EKS afterwards.
  • You could also use the AWS CLI for this (a rough sketch follows below).
  • Or deploy EKS with this nifty tool: eksctl
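For reference, the raw AWS CLI route looks roughly like this. Treat it as a minimal sketch: the cluster name, IAM role ARN, subnet IDs and security group below are placeholders you would replace with your own values.

<pre><code>

$ aws eks create-cluster \
    --name my-eks-cluster \
    --role-arn arn:aws:iam::111122223333:role/eks-service-role \
    --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222,securityGroupIds=sg-cccc3333

</code></pre>

Note that this only creates the EKS control plane; you still have to bring up the worker nodes and join them to the cluster yourself, which is exactly the kind of busywork eksctl handles for you.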

KOPS

Kubernetes Operations (KOPS) is an official Kubernetes project for managing clusters on AWS and other cloud providers.

Supergiant

Supergiant offers a managed, hosted Kubernetes that can be deployed to Google Cloud, AWS or DigitalOcean.

Supergiant Control automates the deployment of Kubernetes clusters on multiple clouds. Quite an interesting project in itself.

Terraform

Terraform tends to stay up to date, and it already has support for EKS built in, so you can deploy to EKS using Terraform as well.

Important considerations

Deciding on the best way forward could be quite overwhelming.

I’ve tried KOPS, which is one of my favourites: it is maintained by the Kubernetes organisation and has a very large user base, which means there is plenty of support for it. My personal choice is usually between KOPS and eksctl.

It really depends on your use case. If you prefer not to manage or upgrade Kubernetes infrastructure yourself, then EKS might be for you and eksctl would be a viable option. The only downside with EKS is that it is not yet widely available and is limited to a few specific AWS regions.

Getting to Grips with EKS

AWS provides a really cool workshop for people new to EKS or Kubernetes in the form of an interactive tutorial. Keep in mind that you need to run it locally.


To make things easier I suggest that you try using eksctl when playing around with EKS.

Potential obstacles:

I ran into one issue with eksctl when I first started playing around with it: I could only launch clusters with huge instance types for the nodes. I wanted to be able to use t2.medium or t2.large instances when playing around with EKS (just for the sake of frugality). For some reason I couldn’t launch t2.large instances in Stockholm, but when I tried one of the American regions it worked fine:

<pre><code>

$ eksctl create cluster --name my-eks-cluster --nodes 3 --nodes-min 3 --nodes-max 5 --node-type t2.medium --region us-west-2

</code></pre>

That command takes quite a while, as it creates a CloudFormation stack behind the scenes. If it completes without any problems, you will be able to run this command:

<pre><code>

$ kubectl get nodes

</code></pre>
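When you are done experimenting, you can tear the whole thing down again with eksctl (assuming the same cluster name and region as above):

<pre><code>

$ eksctl delete cluster --name my-eks-cluster --region us-west-2

</code></pre>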

Load Balancers and Ingress?

If you have ever worked with ECS (Amazon Elastic Container Service, not to be confused with EKS) or Kubernetes, you will be familiar either with ECS’s easy integration with AWS’s three types of load balancers, or with the Kubernetes habit of using an Nginx Ingress to expose an application to the outside world. EKS is not as tightly integrated with AWS as a more established service such as ECS, but you will still be able to create load balancers to expose your services, either internally or to the outside world.

As the Medium article “Setting up Amazon EKS and what you must know” points out:

“Most Kubernetes examples will set up an Ingress based on the nginx ingress controller to make themselves visible to the internet. This won’t do anything at all on EKS out of the box.”

This is good to know if you are new to Kubernetes, coming from ECS, and trying to figure out how to add load balancers to your services on EKS. EKS makes this pretty easy, as you can just define a load balancer in your service like this:

This is from the eks-workshop examples (type: LoadBalancer will set up an ELB on AWS):

<pre><code>

apiVersion: v1
kind: Service
metadata:
  name: ecsdemo-frontend
spec:
  selector:
    app: ecsdemo-frontend
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000

</code></pre>

You can then deploy your service to EKS and use the following command to get the endpoint of your ELB:

<pre><code>

$ kubectl get service ecsdemo-frontend -o wide

</code></pre>
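For completeness: assuming you saved the manifest above as ecsdemo-frontend-service.yaml (the filename is just an example), deploying it and pulling out only the ELB hostname would look something like this:

<pre><code>

$ kubectl apply -f ecsdemo-frontend-service.yaml

$ kubectl get service ecsdemo-frontend -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

</code></pre>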

Managing your own Kubernetes Infrastructure

If you decide to manage your own Kubernetes infrastructure, I would suggest setting up a staging environment to stage upgrades to the cluster as a whole. This is a good way to find issues that could occur when upgrading the Kubernetes version of your cluster. It is probably a good idea to stage these kinds of upgrades and keep them running for a few days without any problems before attempting them on production.
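With KOPS, which I cover below, such a staged upgrade typically boils down to the standard upgrade sequence. Treat this as a rough sketch to run against your staging cluster first, not a complete upgrade guide:

<pre><code>

$ kops upgrade cluster --yes

$ kops update cluster --yes

$ kops rolling-update cluster --yes

</code></pre>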

My tool of choice for managing your own infrastructure would be KOPS. Here are some of my notes on how to use KOPS to quickly deploy a small cluster for playing around with Kubernetes on EC2.

1. Setting up KOPS

Start by creating an S3 bucket. This bucket will be used to store some of the configuration information for Kubernetes (the KOPS state store):

<pre><code>

$ aws s3 mb s3://swipeix.digital  --profile swipeix

</code></pre>
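It is also worth enabling versioning on the state-store bucket so that you can recover earlier versions of your cluster config if something goes wrong. This is optional; the command below assumes the same bucket and profile as above:

<pre><code>

$ aws s3api put-bucket-versioning --bucket swipeix.digital --versioning-configuration Status=Enabled --profile swipeix

</code></pre>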

Now you will need to tell KOPS about this bucket by exporting it as an environment variable:

<pre><code>

$ export KOPS_STATE_STORE=s3://swipeix.digital

</code></pre>

If you have plenty of AWS profiles in ~/.aws/credentials then I suggest running this command too:

<pre><code>

$ export AWS_PROFILE=swipeix

</code></pre>
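To double-check that KOPS will pick up the right account, you can verify which credentials are currently active:

<pre><code>

$ aws sts get-caller-identity

</code></pre>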

2. Setting up Route53

I looked at several tutorials on using KOPS on AWS, but most of them made the part about setting up your domain or subdomain on Route53 seem rather confusing. I will try to explain what I did and what worked for me. Hopefully this makes more sense than the other KOPS tutorials you’ve read so far.

Most tutorials that I read suggested that you need to create a new hosted zone in order to use KOPS with Route53. I don’t agree with this; you don’t really need to do that to get KOPS working. (Also keep in mind that creating additional hosted zones can incur extra charges on your AWS bill.) If you already have a domain set up as a hosted zone on Route53, try using that; there is no need to do anything special like creating subdomains. Everything should work with your regular hosted zone from Route53. In my case my hosted zone is swipeix.digital.
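A quick sanity check before creating the cluster is to confirm that your domain actually resolves to the Route53 name servers. The domain below is just the example hosted zone from this post:

<pre><code>

$ dig ns swipeix.digital

</code></pre>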

3. Creating the Cluster

You will need to run the following to create the cluster configuration:

<pre><code>

$ kops create cluster --zones=eu-west-2b swipeix.digital

</code></pre>

That command only generates and previews the cluster configuration. To actually build the cluster from that configuration, run:

<pre><code>

$ kops update cluster swipeix.digital --yes

</code></pre>

After I ran this I was left with a cluster running a c4.large as the master node. This was a bit out of my budget, so I had to resize the master node to a smaller instance type. Start by checking the cluster and its instance groups by running:

<pre><code>

$ kops validate cluster

</code></pre>

The output lists your instance groups (the master and the nodes) along with the node status.
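You can also list just the instance groups directly, which shows the machine type for each group:

<pre><code>

$ kops get ig

</code></pre>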

To resize the master node we have to run:

<pre><code>

$ kops edit ig master-eu-west-2b

</code></pre>

This brings up a text editor with a config that you can edit; mine opened up in vim.

Change the machineType to a smaller size; in my case I changed it to a t2.large.
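For reference, the relevant part of the instance group spec looks roughly like this (most fields omitted, and the exact contents will differ for your cluster):

<pre><code>

kind: InstanceGroup
metadata:
  name: master-eu-west-2b
spec:
  machineType: t2.large   # changed from c4.large
  maxSize: 1
  minSize: 1
  role: Master

</code></pre>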

Now update the cluster to use this new config:

<pre><code>

$ kops update cluster --yes

</code></pre>


Now we need to run the following for the changes to be applied to the cluster:

<pre><code>

$ kops rolling-update cluster --yes

</code></pre>

After the rolling update completes, the master node will have been replaced with the new instance type; you can confirm this in the EC2 console or by running kops validate cluster again.

Need to start over?

If at some point you feel that the cluster you have tried creating with KOPS is causing too much of a headache, I suggest you delete it and start from scratch. If you are still just playing around with Kubernetes and don’t need to be set up in a particular AWS region, it is also worth deleting your cluster or cluster config and trying again in another region.

This is how you delete your cluster:

<pre><code>

$ kops delete cluster swipeix.digital --yes

</code></pre>
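If you are completely done, remember that the S3 state store sticks around after the cluster is gone. You can remove it too, but be careful: this deletes every cluster config stored in that bucket (same bucket and profile as earlier):

<pre><code>

$ aws s3 rb s3://swipeix.digital --force --profile swipeix

</code></pre>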

Next: Setting up Load Balancers

In an upcoming tutorial we will look at how to set up load balancers in front of a cluster created by KOPS, and at how to do a decent deployment from ECR to Kubernetes.

Are you thinking of moving to AWS? Swipe iX have an AWS specialist team who can assist with your migration and architecture. Speak to us today!

Jacques Fourie

Timo Goosen
