
Migrating To Kubernetes

Written by
Michael Simcoe
Published on
31/3/2017

Quarter 4 2016 saw us embark on the first stage of our migration to the cloud, starting with planning out the migration of our eCommerce websites to the Google Cloud Platform (GCP) Container Engine.

Firstly, a bit of background on our previous infrastructure: each client website had at least two compute instances, of varying specification depending on the traffic we expected, behind a load balancer with a static IP. Each instance ran on GCP Compute Engine.

Our websites don’t connect to a persistence layer, nor do they connect directly to a back-end platform. Instead, they all use the Venditan Commerce API (VC-API) to obtain the data they need to render the website for the end user. This abstraction removed some of the complexity of the migration, allowing us to focus primarily on the website itself and control the cutover simply by switching DNS.

The previous infrastructure made it difficult to scale with demand, as adding another instance would require several steps before you could add it to the load balancer.

Using the GCP Container Engine removes this headache: you instruct Container Engine to manage the instances for you by creating a container cluster. A container cluster is a managed group of uniform VM instances running Kubernetes. Container Engine lets you choose how powerful each machine should be, which directly affects the resources available to each deployment. It’s fine to be fairly conservative with the machine specification at this point, as you can always increase the number of nodes in the cluster as required.
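To illustrate, creating and later resizing a cluster are single gcloud commands. The cluster name, zone and machine type below are placeholders, not our actual configuration:

```shell
# Hypothetical values throughout; adjust name, zone and machine type to suit.
gcloud container clusters create web-cluster \
    --machine-type=n1-standard-1 \
    --num-nodes=3 \
    --zone=europe-west1-b

# Add nodes later if demand grows.
gcloud container clusters resize web-cluster --size=5 --zone=europe-west1-b
```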

GCP Container Engine recently added the ability to have clusters upgrade and repair themselves automatically. We have disabled both options to retain control over when upgrades happen. The upgrades themselves are easy to run, but we have noticed a few minutes of intermittent downtime during the process, so we carry them out in the early hours (GMT) to reduce the impact on our clients.

Within the container cluster are node pools, which you can find in the GCP console. A newer feature here is autoscaling, but it is still in beta and does not yet settle on the optimum number of nodes, so we have it turned off until the bugs are ironed out.

Cluster setup depends on your requirements, so the specification of your cluster and its nodes will vary with your clients’ needs and what you are hosting (traffic, type of application, and so on). A cluster is useless without services and deployments, which let you expose external services, such as a web service, and internal services, such as Memcached, that are consumed by your other services and deployments. We created services for ‘web’, Redis and Memcached.
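As a rough sketch, an internal service such as Memcached is described by a short manifest; the names and labels here are assumptions for illustration, not our production configuration:

```yaml
# Illustrative Service manifest; name and selector are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: memcached
spec:
  selector:
    app: memcached
  ports:
    - port: 11211
      targetPort: 11211
```

A service like this gets a stable cluster-internal address, so the web pods can reach Memcached by name without caring which node it happens to run on.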

The Memcached service is a step forward for us: on our previous infrastructure, a single Compute Engine instance ran Memcached for all of our websites.

With the move to Kubernetes, each website has its own Memcached service, improving resilience and delivering a more robust solution for our clients. Since the move we have had no issues with any of the Memcached services, whereas on the previous infrastructure problems, mainly running out of memory, were a regular occurrence.
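A per-site Memcached deployment can also carry an explicit memory cap, which is what guards against the out-of-memory problems we used to see. This manifest is a sketch with assumed names, image tag and sizes, not our production configuration:

```yaml
# Illustrative Deployment; names, image tag and memory figures are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: memcached
spec:
  replicas: 1
  selector:
    matchLabels:
      app: memcached
  template:
    metadata:
      labels:
        app: memcached
    spec:
      containers:
        - name: memcached
          image: memcached:1.5-alpine
          args: ["-m", "64"]     # cap the cache itself at 64 MB
          resources:
            limits:
              memory: 128Mi      # Kubernetes restarts the pod if it exceeds this
```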

One common area of concern is the deployment process, but with Kubernetes we have seen a big improvement. We build images that are pushed to a GCP bucket and then used by the containers. Each image is the complete environment for a website, including the Apache configuration, SSL certificates and the application itself. Deployment and rollback therefore become a matter of swapping the image tag version currently used by the containers.

Kubernetes then scales down the pods running the old image and spins up new pods running the new image. This removes the window in which users might see an issue and instead acts as a seamless switch between two versions of the application you’re deploying.
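In practice the switch is a pair of kubectl commands. The deployment, container and image names below are hypothetical:

```shell
# Swap the image tag to trigger a rolling update (names are placeholders).
kubectl set image deployment/web web=gcr.io/our-project/web:v2.0.1
kubectl rollout status deployment/web

# If something looks wrong, rolling back is just as simple.
kubectl rollout undo deployment/web
```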

Although Kubernetes gave us the opportunity to improve our infrastructure, the migration itself depended on several other technologies.

We use Docker heavily to build the server environment from Alpine packages, setting up the Apache/NGINX web service and configuration files containing the numerous environment settings our front-end applications use. Docker also gives our development team a stable, production-like environment to work in on their local machines.
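A minimal sketch of such an image, assuming Apache and PHP 7 from the Alpine repositories (the package names, paths and config locations are illustrative, not our exact build):

```dockerfile
# Illustrative Dockerfile; packages, paths and config locations are assumptions.
FROM alpine:3.5

RUN apk add --no-cache apache2 php7 php7-apache2

COPY apache/httpd.conf /etc/apache2/httpd.conf
COPY src/ /var/www/localhost/htdocs/

EXPOSE 80
CMD ["httpd", "-D", "FOREGROUND"]
```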

With the migration to Kubernetes we upgraded our front-end application to PHP 7 and have seen marginal performance improvements as a result. Add NGINX into the mix and the collective improvement becomes more significant: average server connection time has gone up slightly, average page download time has halved, and average server response time is unchanged. The end user therefore sees a clear benefit, and with every upgrade and every change in technology we put the end user first, to ensure we deliver the most performant solution possible.

Final thoughts

As a whole, we have had a successful migration to Kubernetes. The process has delivered a better service to our clients, and as developers we have more trust in the infrastructure. Subsequent developments have also been easier, such as the move from Apache to NGINX and the shift towards HTTP/2 and HTTPS across all of our websites. We are only six months into this journey with Kubernetes, and as the platform continues to develop we expect to deliver an even better omnichannel eCommerce solution to our clients.
