Our first article explained the different terms and deployment models used in the cloud computing industry; in this one we will focus on the advantages the cloud has brought to most businesses, mainly in the SME sector.

With the majority of workloads now residing in the data centre, hosted on datacentre-grade equipment with redundancy mechanisms that reduce downtime to just a couple of minutes in case of hardware failure, the cloud has proved to be worth the cost.

SMEs can now focus on deploying their services rather than maintaining the underlying infrastructure, which is the backbone of cloud computing. The infrastructure service offered by cloud service providers (IaaS) is the core component most businesses will use to deploy their services.

The fact is, the majority of businesses simply need core cloud services: compute, block and/or object storage, backup and restore features, some level of OS automation and network connectivity.

This brings us to the choice of cloud service provider (CSP) and the approach businesses take to make that choice. The cloud computing market is dominated by the hyper-scalers (Amazon, Google, Microsoft), also referred to as the big cloud providers or the big 3, and most businesses choose between these CSPs when moving their workloads to the cloud.

One of the biggest myths about the big 3 is that they offer better networks and data centre hardware. For all intents and purposes, there is no major difference between the hardware in the racks of any cloud provider and what the big cloud providers have in theirs. Computing power and performance are essentially uniform, with the main components supplied by the same manufacturers.

Some companies choose the big 3 because of the plethora of services and images they offer, just in case you need them. In reality, what percentage of all workloads uses advanced features like computer vision, speech recognition, direct network circuits or serverless computing? There is definitely a need for such workloads, but once again, what is their share of all cloud workloads?

The market dominance of the big 3 has also brought with it what is commonly referred to as vendor lock-in: a managed “service” deployed on one of the three hyper-scalers, with customizations relevant only to that CSP, becomes quite a hurdle when migrating off that CSP, for large enterprises and even more so for small businesses (which might have no in-house IT team).


Cloud Agnostic


There’s no doubt the cloud computing industry loves its buzzwords. The cloud is all about innovation, and as cloud technology continues to evolve, so does the terminology.

Take the term “cloud agnostic” for example. In the strictest definition of the term, cloud-agnostic tools, services, and applications can be moved to and from any on-premises infrastructure, and to or from any public cloud platform, regardless of the underlying operating system or any other dependency. Businesses that employ a cloud-agnostic strategy are able to efficiently scale their use of cloud services and take advantage of different features and price structures.

A truly cloud-agnostic tool, service, or application assures organizations of consistent and standard features regardless of the platform it’s deployed on. A cloud-agnostic strategy helps you avoid being locked into a single cloud services provider. Vendor lock-in can be a problem if the vendor changes product offerings or discontinues service. It can also be a problem if a vendor drastically raises prices or goes out of business. Using a cloud-agnostic strategy, you are free to seamlessly move to other cloud providers as your needs dictate.

Another term used in the cloud computing industry is “cloud-native”, where functions underlying the virtual instance can be used to the deployment’s advantage. Applications and services built in a cloud environment are called cloud-native because they were designed to run using the tools and capabilities of that environment. Although this might seem a big advantage of cloud computing technology, it also has its disadvantages: the more CSP-specific capabilities a deployment relies on, the harder it becomes to move that deployment anywhere else.

So how can we choose any CSP and deploy workloads that are independent of that CSP, and that can be migrated between cloud providers, and even between cloud and on-premises, with little to no effort?


The Game Changer


There is only one variable here that can make the above possible, and that is the System Administrator (now under the new designation of Cloud Admin or DevOps team member 🙂).

The servers (Windows or Linux) are still the same ones that have been managed over the years; the boxes might have shifted as they migrated to the CSP, but the OS is still there to be managed and the applications are still there to be deployed. It is how the infrastructure, the OS and the applications are deployed that makes a business cloud-agnostic in the end.

Two of the main categories of tools used in this context are configuration management tools (like Ansible) and infrastructure management tools (Infrastructure as Code, like Terraform, which uses modules for the CSP of choice to provision parts of the infrastructure in larger deployments).
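
Terraform has its own language for this; purely as a rough sketch of the same provisioning idea kept in Ansible terms, the play below launches a single instance on AWS using the amazon.aws collection (the equivalent play for another CSP would swap in that provider’s collection). The instance name, AMI ID and key pair are placeholder assumptions, not values from any real deployment.

```yaml
# Sketch only: provision one compute instance on AWS with Ansible's
# amazon.aws collection; another CSP would use a different collection.
- name: Provision a small instance on the chosen CSP (AWS shown here)
  hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Launch an EC2 instance
      amazon.aws.ec2_instance:
        name: web-01                       # placeholder instance name
        instance_type: t3.micro
        image_id: ami-0123456789abcdef0    # placeholder AMI ID
        key_name: my-keypair               # placeholder key pair
        state: running
        wait: true
```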

The cloud admins or DevOps team will not need to know each CSP’s proprietary services, but will need training on how to use open-source tools to provide the services the business requires. After a couple of days of trial and error with the infrastructure management and configuration management tools, an admin/DevOps member would be able to deploy the required infrastructure, configuration and applications to multiple CSPs using mostly the same scripts/configuration.
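
As a minimal sketch of what “mostly the same scripts/configuration” looks like in practice, the play below configures a web server identically whether the target instance sits in a public cloud or on-premises; the only assumptions are SSH access, a hypothetical inventory group called web and nginx as the example application.

```yaml
# Sketch only: the same configuration play runs unchanged against instances
# in any cloud or on-premises, as long as they are reachable over SSH.
- name: Configure web servers the same way on any provider
  hosts: web            # hypothetical inventory group
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is enabled and running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```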

With the above approach, a business moves to a cloud-agnostic setup that is more future-proof and also provides continuity if an employee leaves the business, since the infrastructure/configuration/application code follows standards that can easily be handed over.

A couple of weeks ago, Ryan Calleja, a senior Systems Operations Engineer at Ixaris, wrote an article about business continuity and disaster recovery and explained how they use Ansible to do the following (a simplified sketch comes right after the list):

  • Redirect traffic
  • Reverse source and destination in their data replication
  • Switch off services at the impacted site
  • Switch on services at the new site
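
The Ixaris playbooks themselves are not public, so the snippet below is only a simplified sketch of the last two items in the list: stop services at the impacted site and start them at the new site. The group names (site_a, site_b) and the service name are assumptions.

```yaml
# Simplified failover sketch, not the Ixaris playbooks: stop the application
# at the impacted site, then start it at the new site.
- name: Switch off services at the impacted site
  hosts: site_a          # hypothetical group for the impacted site
  become: true
  tasks:
    - name: Stop the application service
      ansible.builtin.service:
        name: myapp      # placeholder service name
        state: stopped

- name: Switch on services at the new site
  hosts: site_b          # hypothetical group for the new site
  become: true
  tasks:
    - name: Start the application service
      ansible.builtin.service:
        name: myapp
        state: started
```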

Ansible is a cloud-agnostic tool and will not work any magic by itself (in fact, no one can 😛), but Ryan also explains how their software is designed to be performant, scalable, secure and reliable.

I am quite sure Ryan spent a couple of days working out the correct syntax and doing a couple of test runs of the playbooks, but in the end Ixaris is following an approach of using cloud-agnostic configuration management tools to configure/provision the required services and infrastructure.

Ryan’s article can be found here and will take just a couple of minutes to go through, as he describes the approach Ixaris takes for business continuity and disaster recovery.

In the first weeks of 2021, we will publish a post on Kubernetes describing a Kubernetes deployment done entirely with Ansible, and we will also share a playbook that can be used to provision a Kubernetes master and any number of workers, together with the Kubernetes dashboard.

This playbook can be run against instances from Zyla’s public cloud or instances from any of the big 3 with the same end result. If the cloud is not your choice, you can also deploy it to your on-premises hardware or virtual infrastructure.
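
The actual playbook will be shared in that post; purely as a preview of the shape such a playbook might take, here is a skeleton with one group for the master and one for the workers. The group names, the use of kubeadm and the pod network CIDR below are our own assumptions, not the final playbook.

```yaml
# Skeleton sketch only, not the playbook that will be published.
- name: Prepare all Kubernetes nodes
  hosts: k8s_master:k8s_workers    # hypothetical inventory groups
  become: true
  tasks:
    - name: Install container runtime, kubeadm and kubelet (details omitted)
      ansible.builtin.debug:
        msg: "package installation tasks would go here"

- name: Initialise the control plane
  hosts: k8s_master
  become: true
  tasks:
    - name: Run kubeadm init only once
      ansible.builtin.command: kubeadm init --pod-network-cidr=10.244.0.0/16
      args:
        creates: /etc/kubernetes/admin.conf   # skip if the cluster already exists

- name: Join the workers
  hosts: k8s_workers
  become: true
  tasks:
    - name: Join the cluster (the join command is generated on the master)
      ansible.builtin.debug:
        msg: "kubeadm join task would go here"
```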

Deploying your infrastructure and applications with Ansible takes the guesswork out of the process. You don’t have to spend time educating entire teams on how to work with each cloud vendor in your environment, and you can trust that every deployment meets all of your policies every time.

The modularity of Ansible’s codebase allows it to manage today’s infrastructure while also rapidly adapting to new IT needs and requirements from the clouds of tomorrow.

Until then, stay safe and all the best for the festive season 🙂