Kubernetes 101

Dan Noguerol, senior platform architect at VMware, explores some of the key considerations for organisations on their Kubernetes journey.

As business success increasingly depends on the ability of a company to quickly deliver digital services and software, enterprises are turning to containers and Kubernetes as essential tools for building, deploying, and running modern applications at scale. 

Whereas containers encapsulate applications and make them portable, Kubernetes operates at a higher level as a container management system, allowing large-scale, complex applications to run across multi-cloud environments.

By providing uniform deployment, management, scaling, and availability services for applications, Kubernetes offers significant advantages for IT teams. Already, 40% of the enterprise companies included in the Cloud Native Computing Foundation's biannual survey reported that they're running Kubernetes in production environments.

The case for Kubernetes

Companies may choose to containerise their applications for multiple reasons. Microservice architectures that deconstruct applications into smaller, independent services have benefits that include greater reusability, more frequent release cycles and improved testing.

Containers are lightweight and enable better utilisation of infrastructure resources. This makes them ideally suited for supporting microservices. 

Additionally, certain application workloads require elasticity: they must grow to support spikes in usage and shrink when idle. Because containers are lightweight and quick to start, they are well suited to these elasticity requirements.

Runtime dependencies can also be challenging to manage, especially in polyglot architectures. Containers encapsulate their runtime dependencies and provide isolation between application dependencies. This is especially beneficial to IT groups that must support multiple development teams with varying application runtime requirements.

Commercial off-the-shelf (COTS) software vendors are also increasingly embracing container images as a standard unit of deployment for their applications, which makes the ability to operate containers in production all the more important.

Container runtimes provide the low-level foundation needed to run containers. However, running and managing a few containers is one thing. The challenge comes when the number of containers scales to hundreds or thousands. This is where container orchestration technologies, such as Kubernetes, come into play.

They provide important capabilities such as ensuring the correct number of healthy containers are always running and scaling application containers up or down either manually or via utilisation-driven autoscaling. They can also automate application roll-out and roll-back with zero downtime and mount disk volumes to support stateful workloads.
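
To make this concrete, the sketch below shows how utilisation-driven autoscaling is expressed declaratively as a Kubernetes resource; the workload name and thresholds are hypothetical:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa              # hypothetical name
    spec:
      scaleTargetRef:            # the deployment whose replica count is managed
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 3             # never run fewer than three copies
      maxReplicas: 10            # cap growth during usage spikes
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # add pods when average CPU exceeds 70%

Kubernetes then continuously adjusts the replica count of the referenced deployment to keep average CPU utilisation near the target.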

Overcoming initial roadblocks

Kubernetes is an extremely flexible technology. It is, however, also extremely complex. There are three aspects for companies to consider when assessing that complexity:

Operational considerations

The first operational challenge is getting Kubernetes clusters up and running. There are many different approaches made available by the community, all with different strengths and weaknesses, so developers must identify an appropriate solution for their own specific requirements.

Application isolation requirements are an important aspect to consider here: by default, Kubernetes offers only a simple namespace mechanism, which may not provide adequate workload isolation.

To meet security requirements, one may need to either employ additional mechanism(s) to enforce isolation or consider running multiple Kubernetes clusters. The consensus in the community seems to have gravitated towards the latter.
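
For illustration, the namespace mechanism itself is simple to express. A minimal sketch with hypothetical names, pairing a namespace with a resource quota; note that quotas constrain resource consumption but do not by themselves constitute a hard isolation boundary:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: team-a               # hypothetical team namespace
    ---
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: team-a-quota
      namespace: team-a
    spec:
      hard:
        requests.cpu: "10"       # cap total CPU requested in this namespace
        requests.memory: 20Gi    # cap total memory requested
        pods: "50"               # cap the number of pods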

The second operational challenge is so-called 'day two' operations: ensuring the ongoing health of Kubernetes clusters. For example, operators need to identify and implement mechanisms for detection and recovery if a cluster node is lost.

In addition, operators need to create ways to scale clusters up or down to support changes in workloads, whilst keeping the underlying Kubernetes software, as well as the operating system it runs on, up to date.
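
Node-level detection and recovery is largely handled by the control plane and the chosen cluster tooling, but the health of the workloads themselves still has to be declared explicitly. A minimal sketch of container health checks, assuming a hypothetical application that exposes HTTP health endpoints:

    # Container spec fragment (endpoints and ports are hypothetical)
    livenessProbe:               # restart the container if this check fails
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 15
    readinessProbe:              # withhold traffic until this check passes
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5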

Application deployment 

Application deployment to Kubernetes requires two things: creation of container image(s) and creation of Kubernetes resources. Those resources, such as services, deployments and persistent volumes, are defined in text files (typically YAML). The learning curve here can be a bit steep, as authoring these files requires detailed knowledge of the underlying deployment infrastructure.
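
As a minimal, hypothetical sketch, a deployment resource for a single container image might look like the following (all names and the image reference are illustrative):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello                # hypothetical application name
    spec:
      replicas: 2                # desired number of running copies
      selector:
        matchLabels:
          app: hello
      template:
        metadata:
          labels:
            app: hello           # must match the selector above
        spec:
          containers:
          - name: hello
            image: registry.example.com/hello:1.0.0   # hypothetical image
            ports:
            - containerPort: 8080

Applying this file asks Kubernetes to keep two copies of the container running; the cluster continuously reconciles the actual state towards that declaration.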

Container images can be thought of as templates for creating containers. Kubernetes therefore needs access to those images, and they must live somewhere accessible to the cluster(s), typically a cloud-based or on-premises registry. There are several registry solutions available, but they are not part of core Kubernetes and so must be installed and maintained independently.
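
When images live in a private registry, the cluster also needs credentials to pull them. A hypothetical pod spec fragment, assuming a docker-registry secret has already been created in the cluster:

    # Pod spec fragment (hypothetical names throughout)
    spec:
      imagePullSecrets:
      - name: registry-credentials   # secret holding the registry login
      containers:
      - name: app
        image: registry.example.com/team/app:2.3.1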

Application runtime dependencies will often have bugs or vulnerabilities that need to be addressed over time. Because container images embed these runtime dependencies, addressing such vulnerabilities means identifying the affected images and rebuilding them with patched dependencies.

However, those steps ensure only that future deployments will run the new container images. Containers already running from the old image must still be replaced.
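
Where workloads are managed by a deployment, that replacement is typically driven by updating the image reference, which triggers a rolling update. A hypothetical deployment fragment:

    # Deployment fragment: changing the image tag rolls pods onto the patched image
    spec:
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0    # keep full capacity during the rollout
          maxSurge: 1          # add one replacement pod at a time
      template:
        spec:
          containers:
          - name: app
            image: registry.example.com/team/app:2.3.2   # patched tag (hypothetical)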

Implementing automated mechanisms to identify, track and address these concerns in production can be challenging. Leveraging the right combination of tools – whether commercial or open source – will be important for the sustainability of a potentially fast-growing number of Kubernetes deployments.

Service discovery

Finally, service discovery is an important consideration in maintaining any container-based application or service ecosystem.

Containers will frequently have dependencies on other containers and all of them may be started or stopped at any time. Kubernetes provides some basic capabilities in this area, but more sophisticated requirements can often involve load balancers, ingress routers, service registries or service meshes.
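
The basic capability Kubernetes itself provides is DNS-based discovery through services: clients address a stable service name rather than individual containers, whose addresses change as they come and go. A minimal sketch with hypothetical names:

    apiVersion: v1
    kind: Service
    metadata:
      name: orders               # hypothetical backend service
      namespace: shop            # hypothetical namespace
    spec:
      selector:
        app: orders              # route traffic to pods carrying this label
      ports:
      - port: 80
        targetPort: 8080
    # Clients inside the cluster can now resolve orders.shop.svc.cluster.local,
    # regardless of which pods are currently backing the service.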

Identifying an approach that can scale with an organisation’s needs while limiting unnecessary development and operations complexity is important. Service discovery has proven to be a shifting landscape over the past few years as different approaches were vetted, and although things have stabilised a bit, there are likely still changes on the horizon.

The future of Kubernetes 

The Kubernetes ecosystem is expanding quickly, enabling a wide variety of new capabilities that leverage it as a foundation. Enterprises considering Kubernetes, or those already evaluating and deploying it, can benefit from the hard-won wisdom of those who have gone before them.

The move to cloud native technologies, including Kubernetes, is not easy, but it's clearly worth it for organisations that prioritise software development and need to achieve faster development cycles, better resource utilisation, and access to best-in-class open source technology.
