Migrating To Kubernetes


3 Minute Read
March 31, 2017
Company News

Quarter 4 of 2016 saw us embark on the first stage of our migration to the cloud, starting with planning the move of our eCommerce websites to Google Cloud Platform (GCP) Container Engine.

Firstly, a bit of background on our previous infrastructure: each client website had at least two compute instances, of varying spec depending on the traffic we expected, sitting behind a load balancer with a static IP. Each instance ran on GCP Compute Engine.

Our websites don’t connect to a persistence layer, nor do they connect directly with a back-end platform. Instead, they all utilise the Venditan Commerce API (VC-API) to obtain all the data they require to display the website to the end user. This abstraction removes some of the complexity of the migration, allowing us to focus primarily on the website itself and to control the cutover simply by switching DNS.

The previous infrastructure made it difficult to scale with demand, as adding another instance required several manual steps before it could be added to the load balancer.

Using GCP Container Engine removes this headache: you instruct Container Engine to manage the instances for you by creating a container cluster, a managed group of uniform VM instances for running Kubernetes. Container Engine lets you select how powerful you want each machine to be, which directly affects the resources available to each deployment. It’s fine to be fairly conservative with the machine specification at this point, as you can always increase the number of nodes in your cluster as required.
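As a rough sketch, creating and later growing a cluster comes down to a couple of `gcloud` commands (the cluster name and sizes here are illustrative, not our actual setup):

```shell
# Start conservatively: a small machine type and a handful of nodes.
gcloud container clusters create ecommerce-cluster \
    --machine-type n1-standard-1 \
    --num-nodes 3

# Scale out later by resizing the node pool, rather than
# re-provisioning instances and re-wiring a load balancer.
gcloud container clusters resize ecommerce-cluster --size 5
```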

GCP Container Engine recently released the ability to have your clusters automatically upgrade and repair. We have disabled both of these options to retain control over when upgrades happen. The upgrades themselves are easy to run, but we have noticed a few minutes of intermittent downtime during the process, so we prefer to do it in the early hours (GMT) to reduce the impact on our clients.

Within the container cluster are node pools, which you can easily find in the GCP console. A newer feature visible here is autoscaling, but it is currently still in beta and does not yet settle on the optimum number of nodes, so we have turned it off until the bugs are ironed out.

Cluster setup is dependent on your requirements, so the spec of your cluster and its nodes will differ according to your clients’ needs and what you are hosting (traffic, type of application, etc.). A cluster is useless without services and deployments, which allow you to expose external services, such as a web service, or internal services, such as Memcached, for use by your other services and deployments. We created services for ‘web’, Redis and Memcached.
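To illustrate the difference between the two kinds of service, here is a minimal sketch of an internal and an external Service manifest (names and ports are illustrative, not our actual manifests):

```yaml
# Internal service: ClusterIP (the default) makes Memcached reachable
# by other deployments in the cluster, but not from outside.
apiVersion: v1
kind: Service
metadata:
  name: memcached
spec:
  selector:
    app: memcached
  ports:
    - port: 11211
      targetPort: 11211
---
# External service: type LoadBalancer provisions a GCP load balancer
# with an external IP in front of the web pods.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 80
```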

The Memcached service is a step forward for us: on our previous infrastructure we had a single Compute Engine instance running Memcached to service all of our websites. With the move to Kubernetes, each website has its own Memcached service, improving resilience and delivering a more robust solution for our clients. Since the move we have not had any issues with any of the Memcached services, whereas on the previous infrastructure issues were a regular occurrence (mainly running out of memory).
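A per-website Memcached deployment might look roughly like the following (the site name and sizes are hypothetical). The key point is the explicit memory cap, which prevents the out-of-memory failure mode we saw when every site shared one instance:

```yaml
# One Memcached deployment per website (name illustrative).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: clientsite-memcached
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memcached
      site: clientsite
  template:
    metadata:
      labels:
        app: memcached
        site: clientsite
    spec:
      containers:
        - name: memcached
          image: memcached:1.4-alpine
          args: ["-m", "64"]   # cap the cache itself at 64 MB
          resources:
            limits:
              memory: 128Mi    # hard limit for the container
```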

A common area of concern is the deployment process, but with Kubernetes we have seen a big improvement. We build images that are pushed to a GCP bucket and then used by the containers. Each image is the complete environment for a website, including all Apache configuration and SSL certificates, plus the application itself. Deployment and rollback are therefore simply a case of swapping the image tag version currently used by the containers.

Kubernetes will then wind down the pods running the old image and spin up pods running the new one. This removes the potential for users to see an issue, and instead acts as a seamless switch between the two versions of the application you’re deploying.
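In `kubectl` terms, the tag swap and rollback look something like this (deployment and image names are illustrative):

```shell
# Roll forward by pointing the deployment at the new image tag;
# Kubernetes performs a rolling update of the pods.
kubectl set image deployment/web web=gcr.io/our-project/web:v1.2.0

# Watch the rollout replace old pods with new ones.
kubectl rollout status deployment/web

# If something looks wrong, roll back to the previous revision.
kubectl rollout undo deployment/web
```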

Although Kubernetes has given us the opportunity to improve our infrastructure, the migration itself depended on a number of other technologies.

We utilise Docker heavily to build the server environment from Alpine packages, setting up the Apache/NGINX web service and the configuration files containing the numerous environment settings our front-end applications use. Docker also gives our development team a stable, production-like environment to work in, in all scenarios, right on their local machines.
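In outline, such an image looks something like the Dockerfile below. This is a hedged sketch only: the paths, package choices and base image version are illustrative, and our real images also bake in full server configuration and SSL certificates.

```dockerfile
# Illustrative sketch of an Alpine-based web image.
FROM alpine:3.5

# Install the web server and PHP 7 from Alpine packages.
RUN apk add --no-cache nginx php7 php7-fpm

# Bake in the server configuration and the application itself,
# so deploying a site is just deploying a new image tag.
COPY conf/nginx.conf /etc/nginx/nginx.conf
COPY src/ /var/www/html/

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```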

With the migration to Kubernetes we upgraded our front-end application to PHP 7, and have seen marginal performance improvements as a result. Add NGINX into the mix and the collective improvement becomes larger. With NGINX we have seen average server connection time go up slightly and average page download time halve, with no difference in average server response time. From this you can determine that the end user sees a net benefit, and with every upgrade and every change in technology we put the end user first, to ensure we are delivering the most performant solution possible.

Final thoughts

As a whole, we’ve definitely had a successful migration to Kubernetes. The process has delivered a better service to our clients, and as developers we have more trust in the infrastructure. Developments since the migration have been easier, such as the move from Apache to NGINX and the shift towards HTTP/2 and HTTPS across all of our websites. We’re only six months into this journey with Kubernetes, and as it continues to develop we expect to deliver an even better solution to our clients.
