
Building a Multi-Cloud Strategy

Tom
June 28, 2017

This article has been adapted from a talk I gave at Google on 22nd June 2017. The associated slides are available here:

There are many benefits to adopting a multi-cloud strategy: avoiding vendor lock-in, increasing fault tolerance, and utilising the best services each provider has to offer.

However, planning such a strategy raises challenges, most notably: how do you increase portability between cloud providers?

Before we dive into the details, a prerequisite of any strategy is to understand the end goals, to identify what we are trying to achieve.

A multi-cloud strategy has 3 primary goals. We want our apps to:

  • Be more portable

  • Be more stable

  • Take advantage of more specialist services

Portability

The term ‘portability’ in this context refers to the ability to move your application from one platform to another with minimal friction.

Portability is the linchpin to executing a successful multi-cloud strategy. After all, without a highly portable app your multi-cloud strategy simply won’t work.

The need for portability stems from the high level of differentiation between cloud providers; no two clouds are the same. Despite operating in the same category, each cloud platform implements core computing quite differently. Their APIs are different, their CLIs are different (compare provisioning a VM with 'aws ec2 run-instances' versus 'gcloud compute instances create'), and the services they provide are often different.

It is as a result of these differences that multi-cloud as a strategy exists. If cloud platforms were highly commoditised, switching would be extremely simple. This, however, is not the case; the path to the hallowed ground of a one-command deploy is not straightforward.

For those of us working in technology, the one-command deploy is nirvana: the ability to deploy your app to any platform, irrespective of the provider, with a single command. Yet, although challenging, this nirvana is certainly achievable.
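
To make this concrete, here is a minimal, entirely hypothetical sketch of what such a wrapper could look like in Python: one entry point that dispatches to whichever provider-specific CLI you target. The provider-to-command mapping is an assumption for illustration, not a real tool.

    # deploy.py - a hypothetical one-command-deploy wrapper (illustrative only).
    # Each provider-specific deploy hides behind the same entry point, so
    # switching platforms is a flag change rather than a pipeline rewrite.
    import subprocess
    import sys

    # Assumed mapping from provider name to its deploy command.
    DEPLOY_COMMANDS = {
        "gcp": ["gcloud", "app", "deploy", "--quiet"],
        "aws": ["eb", "deploy"],  # AWS Elastic Beanstalk CLI
        "k8s": ["kubectl", "apply", "-f", "deployment.yaml"],
    }

    def deploy(provider):
        """Run the deploy command for the chosen provider."""
        command = DEPLOY_COMMANDS.get(provider)
        if command is None:
            print("Unknown provider: " + provider, file=sys.stderr)
            return 1
        return subprocess.call(command)

    if __name__ == "__main__":
        # Usage: python deploy.py gcp
        sys.exit(deploy(sys.argv[1] if len(sys.argv) > 1 else ""))

The caller never needs to know which platform-specific commands run underneath; in a real pipeline the per-provider steps would live in your CI/CD configuration rather than a script, but the principle is the same.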

By focusing on building an automated, repeatable deployment pipeline, with a consistent conceptual model, we can decouple our application from our infrastructure. This allows us to become platform agnostic and most importantly, allows our engineering team to focus their efforts above the value line.

Above all, a well-oiled pipeline allows you to ship code more frequently with higher levels of confidence.

This sounds awesome, right? However, with awesomeness comes complexity. Building a fully automated, multi-cloud pipeline is not easy, and it goes without saying, the bigger the application, the more complex the implementation.

Thankfully, there are a number of both open-source and enterprise tools to help us achieve our goals. Their suitability will depend on the architecture of your application, but here are a few that are worth looking at (a short container-based sketch follows the list).

  • Docker: although Docker in itself won't serve all your needs, its containers will almost certainly feature.

  • Kubernetes: an open-source project from Google. It is an extremely popular tool for deploying, managing and scaling containers.

  • Pivotal Cloud Foundry: Pivotal's adaptation of the Cloud Foundry platform, sitting very much at the enterprise end of the spectrum. PCF is aimed at highly complex, distributed microservice applications.

  • Spinnaker: focuses on creating a custom pipeline that allows you to dictate when, where and how to deploy your application.
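
To make the container angle concrete, here is a minimal sketch of the 'build once, run anywhere' flow driven from Python. The image name, manifest path and kubectl context names are assumptions for illustration, not a prescription.

    # build_and_ship.py - hypothetical sketch: build a Docker image once,
    # then deploy the identical artefact to any Kubernetes cluster.
    import subprocess

    IMAGE = "registry.example.com/myapp:1.0.0"  # assumed image name

    def build_image():
        # The container image is the portable artefact: it is identical
        # whether the cluster lives on GCP, AWS, Azure or on-premises.
        subprocess.check_call(["docker", "build", "-t", IMAGE, "."])
        subprocess.check_call(["docker", "push", IMAGE])

    def deploy(kube_context):
        # Point kubectl at a different cluster context to change clouds;
        # the manifest and the image stay exactly the same.
        subprocess.check_call(
            ["kubectl", "--context", kube_context, "apply", "-f", "deployment.yaml"]
        )

    if __name__ == "__main__":
        build_image()
        deploy("gke-production")  # assumed context names
        deploy("eks-production")

The design point is that the artefact and the manifest never change between clouds; only the cluster context does.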

Stability

This takes us to our next goal: stability. With multi-cloud, it's easy to fail over in the event of catastrophe. We aren't just talking about diverting traffic to a different region with the same provider, but about switching platforms entirely.

Now, it's very rare to experience a full platform outage, but this quote from Werner Vogels, CTO of Amazon Web Services, is a reminder that it can happen.

Everything fails, all of the time - Werner Vogels

And indeed it does. In 2015, AWS had a major EC2 outage, and more recently, at the beginning of this year, an outage of S3, its cloud storage service.

Portability doesn't guarantee stability. When building your pipeline, you need to take stability into consideration; the following are a few steps you can take to help:

  • Assess: ensure you assess the risk, considering both external factors such as DDoS attacks and regional/global failure, and internal factors such as human error

  • Plan: define a fault-tolerance target and agree a strategy that suits the needs of your clients and the resources you have available. Achieving 99.999% uptime comes at huge cost and simply isn't required by every product

  • Practise: carry out regular failover drills, making sure to test in a production environment. Refine and refactor (a minimal health-check probe follows this list)
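
As one illustration of the practise step, below is a minimal sketch of an automated probe that checks a primary endpoint and signals a failover to a secondary platform when it stops responding. The endpoint URLs are hypothetical placeholders, and the failover action is a stub where a real pipeline would update DNS or a load balancer.

    # failover_check.py - hypothetical sketch of an automated failover probe.
    import urllib.error
    import urllib.request

    PRIMARY = "https://app.example.com/healthz"      # assumed health endpoints
    SECONDARY = "https://app-dr.example.net/healthz"

    def healthy(url, timeout=3.0):
        """Return True if the endpoint answers with HTTP 200."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as response:
                return response.status == 200
        except (urllib.error.URLError, OSError):
            return False

    def check_and_failover():
        if healthy(PRIMARY):
            print("primary healthy - no action")
        elif healthy(SECONDARY):
            # Placeholder: a real implementation would repoint DNS or a
            # global load balancer at the secondary platform here.
            print("primary down - switching traffic to secondary")
        else:
            print("both platforms down - page a human")

    if __name__ == "__main__":
        check_and_failover()

Run on a schedule, a probe like this turns the failover plan from a document into something you exercise routinely.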

Now that we have a highly portable app, we are in a far better position to weather the storm.

Specialisation

The third benefit that a multi-cloud strategy brings to the table is the opportunity to take advantage of specialist services across widely diverse fields. No single platform can meet all your needs, or at least is unlikely to do so.

Here are just three of the awesome products available from two of the many cloud providers:

  • Google Spanner: a mission-critical, scalable relational database

  • Amazon Lex: a deep-learning service for speech recognition and natural-language understanding, built on the same technology that powers Alexa

  • Google Translate: real-time translation with outstanding accuracy

By choosing a cloud-native approach, you empower your engineering team to choose the product that best fits your application, and you are no longer restricted to services proprietary to a single cloud provider.
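
A common way to keep that freedom is to wrap each specialist service in a thin, provider-agnostic interface, so a vendor swap touches one adapter rather than the whole codebase. Below is a minimal sketch; the class names are hypothetical and neither adapter calls a real vendor SDK.

    # translate.py - hypothetical sketch of a provider-agnostic interface.
    from abc import ABC, abstractmethod

    class Translator(ABC):
        """What the application depends on - not any one vendor's API."""
        @abstractmethod
        def translate(self, text, target_language):
            ...

    class GoogleTranslator(Translator):
        def translate(self, text, target_language):
            # Placeholder: call the Google Translate client library here.
            raise NotImplementedError

    class EchoTranslator(Translator):
        """A stub that is handy for tests and local development."""
        def translate(self, text, target_language):
            return text

    def greet(translator):
        # Application code only ever sees the Translator interface,
        # so swapping providers is a one-line dependency change.
        return translator.translate("Hello, world", "fr")

    if __name__ == "__main__":
        print(greet(EchoTranslator()))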

Summary

In summary, by focusing on building a highly portable application, with an automated and repeatable deployment pipeline, you are not only in a better position to protect your application (and customers) from catastrophe, but can equally benefit from the economies of specialisation.
