Author: Matthew Simon, Technical Director, Cloud & Infrastructure Center of Excellence
In Part I of this series, we noted that there is much confusion about how containers work when it comes time to make critical decisions on solutions. We discussed the origin of that confusion and outlined the differences between containers and container orchestration. In this installment, we’ll take a closer look at applications, microservices, and the related infrastructure that supports containerization.
When it comes to applications running on containers, there are two basic patterns:
- Long-running applications
- Microservices
Each pattern requires a different level of infrastructure to be in place, something to keep in mind when making IT decisions.
Long-running applications
In this pattern, the container plays the same role as a virtual machine: it is always running. This is the most basic type of container, so it does not require the orchestration features needed to manage microservice-based containers. In this case, we can leverage simple Docker-based deployments like Elastic Container Service (ECS) or run our own Docker nodes on EC2 instances to support the deployment of these containers.
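As a minimal sketch of this pattern, the Docker SDK for Python can start a container that behaves like a small virtual machine: detached, restarted automatically if it exits, and left running indefinitely. The image name and port mapping below are hypothetical placeholders.

```python
import docker  # Docker SDK for Python (pip install docker)

client = docker.from_env()

# Start a long-running container; the image name is a hypothetical placeholder.
container = client.containers.run(
    "registry.example.gov/legacy-app:1.0",
    detach=True,                        # run in the background, like a VM
    ports={"8080/tcp": 80},             # expose container port 8080 on host port 80
    restart_policy={"Name": "always"},  # restart on failure or daemon restart
)
print(container.short_id, container.status)
```

The same container definition could just as easily be handed to ECS as a task definition rather than run on a single Docker host.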
Similar to how we lift and shift applications from on premises to the cloud, we can look at containerizing applications as part of the migration process. This simplifies the migration in that a container encapsulates all elements needed to support the application, ensuring that it will run in the cloud just as it did on premises.
From an O&M standpoint, these containers need to be managed in the same way we manage virtual machines, including monitoring, patching, and auditing. If high availability is required for an application, we run multiple containers instead of multiple virtual machines.
Microservices
In this pattern, containers are not long running. Rather, a container is activated (spun up) when the service is needed to process a request; once the request has been processed, the container is terminated. Because of this ephemeral nature, we need more capability to manage these containers than Docker alone can provide. Here, solutions like Kubernetes provide container orchestration features, such as:
- Service discovery – Service discovery is the process by which the system figures out how to connect to a service running in a container. One issue with containers is that they are ephemeral: when a container is stopped or terminated, its address disappears with it. Kubernetes addresses this by grouping containers into pods and exposing them through a Service, a stable name and virtual IP that routes traffic to whichever pods are currently backing it. Because the Service persists as containers come and go, clients always have a fixed endpoint to connect to (see the first sketch after this list).
- Invocation – Because the containers for these services are activated only when the service is invoked and are terminated once the work is completed, there needs to be a process to manage this lifecycle. Here, Kubernetes supports Job objects, which create one or more pods and ensure that a specified number of them successfully terminate. As pods complete, the Job tracks the successful completions; when the specified number is reached, the task (i.e., the Job) is complete, and the pods are cleaned up (see the Job sketch after this list).
- Elasticity – Elasticity is the ability to automatically grow or shrink the number of containers supporting a service based on demand. Kubernetes addresses this with ReplicaSets. A ReplicaSet controls the number of replicas – exact copies of a container – that should be running at any time (similar to Auto Scaling groups in AWS; see the ReplicaSet sketch after this list).
- Data resilience – As with service discovery above, Kubernetes lets pods mount persistent volumes that containers can write to. Each container, while it is running, has a persistent location to write and retrieve data, and that data lives on after the container has been terminated (see the final sketch after this list).
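To make these features concrete, the sketches below use the official Kubernetes Python client (pip install kubernetes); in practice the same objects are more often declared in YAML manifests and applied with kubectl. All names, namespaces, and images are hypothetical. First, service discovery: a Service gives any pods labeled app=orders a stable endpoint.

```python
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
core = client.CoreV1Api()

# A Service named "orders" that routes port 80 to port 8080 on any pod
# carrying the label app=orders. The Service outlives individual containers.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="orders"),
    spec=client.V1ServiceSpec(
        selector={"app": "orders"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
    ),
)
core.create_namespaced_service(namespace="default", body=svc)
# Other workloads can now reach the service through cluster DNS at
# orders.default.svc.cluster.local, regardless of which pods back it.
```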
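For invocation, a Job runs pods to completion and tracks how many finished successfully. This sketch, again with hypothetical names, asks for three successful completions:

```python
from kubernetes import client, config

config.load_kube_config()
batch = client.BatchV1Api()

job = client.V1Job(
    metadata=client.V1ObjectMeta(name="report-run"),
    spec=client.V1JobSpec(
        completions=3,  # the Job is done once three pods have succeeded
        template=client.V1PodTemplateSpec(
            spec=client.V1PodSpec(
                restart_policy="Never",  # Jobs require Never or OnFailure
                containers=[client.V1Container(
                    name="report",
                    image="registry.example.gov/report:1.0",  # hypothetical image
                )],
            ),
        ),
    ),
)
batch.create_namespaced_job(namespace="default", body=job)
```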
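For elasticity, a ReplicaSet declares how many identical pods should exist; changing the replica count scales the service up or down. (In practice a Deployment, which manages ReplicaSets and adds rolling updates, is the more common choice.)

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

rs = client.V1ReplicaSet(
    metadata=client.V1ObjectMeta(name="orders-rs"),
    spec=client.V1ReplicaSetSpec(
        replicas=3,  # Kubernetes keeps exactly three copies running
        selector=client.V1LabelSelector(match_labels={"app": "orders"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "orders"}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="orders", image="registry.example.gov/orders:1.0")]),
        ),
    ),
)
apps.create_namespaced_replica_set(namespace="default", body=rs)

# Scale out later by patching the replica count:
apps.patch_namespaced_replica_set(
    name="orders-rs", namespace="default", body={"spec": {"replicas": 5}})
```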
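Finally, for data resilience, a PersistentVolumeClaim requests storage that survives container termination, and a pod mounts the claim as a volume. This sketch passes a plain dict manifest, which the Python client also accepts:

```python
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Request 1 GiB of persistent storage (names are hypothetical).
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "orders-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "1Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# In the pod spec, the claim is mounted so containers write to durable storage:
volume = client.V1Volume(
    name="data",
    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
        claim_name="orders-data"),
)
mount = client.V1VolumeMount(name="data", mount_path="/data")
```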
While orchestration provides some of the key elements needed to run microservices, it is not a complete set of features for a service mesh that can manage how all the microservices interact to perform a workflow. Below is a list of other functions/features that are needed but are not built into Kubernetes:
- Tracing – Since each microservice represents only a small component of a work process and runs independently, we need tracing that lets us follow a transaction as it flows from microservice to microservice, so teams can identify where bottlenecks or issues arise within the process (see the tracing sketch after this list).
- Monitoring – Because containers are short-lived, we need a monitoring system that can track a service across the multiple containers that run it, monitoring the availability of and metrics for the service rather than any single container (see the metrics sketch after this list).
- Logging – While Kubernetes supports log generation, microservices run multiple replicas of the same application container, so it is important to aggregate those logs in one place where support teams can review them. Also, since the containers are short-lived, logs need to be stored externally so they persist after the container has been destroyed.
- Authentication – Security is an important aspect of all applications, so we need to ensure that service-to-service as well as end user-to-service communication is secure and in alignment with federal standards like Homeland Security’s requirement for transport layer security (TLS). TLS is the cryptographic protocol that replaced SSL (Secure Sockets Layer) for securing communications over a computer network. Other aspects of security require a key management system to automate key and certificate generation, distribution, rotation, and revocation.
- Fault tolerance – Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of (or one or more faults within) some of its components. This is crucial because microservices all run independently, so here we leverage development patterns like retry rules, circuit breakers, and pool ejection to provide fault tolerance (see the circuit-breaker sketch after this list).
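As an illustration of tracing, here is a minimal sketch using the OpenTelemetry Python SDK, one common way to emit spans that a backend like Jaeger can collect. The service and span names are hypothetical, and for brevity the spans are printed to the console rather than exported to Jaeger:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer; a real deployment would export to a collector or Jaeger
# instead of the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("orders-service")

# Each microservice adds its own spans; together they form one trace
# that follows the transaction across services.
with tracer.start_as_current_span("handle-order") as span:
    span.set_attribute("order.id", "12345")
    with tracer.start_as_current_span("call-inventory-service"):
        pass  # the downstream service call would happen here
```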
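For monitoring, each service typically exposes metrics that Prometheus scrapes, so the data outlives any one container. A minimal sketch with the prometheus_client library (metric names are hypothetical):

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("orders_requests_total", "Requests handled by the service")
LATENCY = Histogram("orders_request_seconds", "Request latency in seconds")

start_http_server(8000)  # serves /metrics for Prometheus to scrape

while True:
    with LATENCY.time():              # record how long the "work" takes
        time.sleep(random.random() / 10)  # stand-in for real request handling
    REQUESTS.inc()                    # count each handled request
```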
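Finally, a minimal sketch of the circuit-breaker pattern in plain Python: after repeated failures the breaker "opens" and fails fast, then allows a trial call after a cooldown. Tools like Istio implement this at the mesh level; the thresholds here are purely illustrative.

```python
import time

class CircuitBreaker:
    """Open the circuit after max_failures consecutive errors;
    allow a trial call again after reset_after seconds."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: let one trial call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success resets the failure count
        return result
```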
It’s important to note again that Kubernetes is a foundation for container orchestration, but it isn’t a service mesh. And while Kubernetes can be expanded, out of the box it does not support a full service mesh, which means users can’t just turn it on and expect to run microservices. When deciding on container solutions, decision makers need to understand that all pieces must be in place for an application to work. That said, there are open source tools that can be integrated into a Kubernetes environment to provide the missing pieces and form a full service mesh:
- Tracing: Jaeger
- Monitoring: Prometheus
- Logging: ELK
- Authentication: Istio
- Fault Tolerance: Istio
The issue with using these various tools is that each is an independent module that must be managed on top of the Kubernetes cluster. A better option is to leverage a tool like Red Hat’s open source product, OpenShift: a Kubernetes-based platform into which Red Hat has integrated all of the components listed above, offering the convenience of a fully functional service mesh running on Kubernetes infrastructure.
Other useful applications
Besides the platform used to deploy and manage the containers, there are other applications that can be set up to support the overall container structure. These include:
- Container registry – Rather than leveraging a public container registry like Docker Hub, it is sometimes advantageous to set up a private registry where all development groups push their container images, so the agency maintains ownership of all containers developed by the contractors it hires (see the sketch after this list).
- CI/CD Pipeline – Because containers greatly simplify the deployment process for applications, it is advantageous to use the continuous integration/continuous deployment (CI/CD) process to allow teams to easily push out and test incremental changes made to the application. Doing so enables development teams to more quickly deliver features needed by customers. A variety of pipeline tools are available to facilitate CI/CD.
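As a brief illustration of working with a private registry, this sketch uses the Docker SDK for Python to retag a locally built image and push it. The registry host and image names are hypothetical, and credentials are assumed to come from a prior docker login:

```python
import docker

client = docker.from_env()

# Retag a locally built image for the agency's private registry
# (registry host and image names are hypothetical).
image = client.images.get("orders:1.0")
image.tag("registry.agency.gov/orders", tag="1.0")

# Push to the private registry; auth comes from the local Docker credential
# store (e.g., a prior `docker login registry.agency.gov`).
for line in client.images.push(
        "registry.agency.gov/orders", tag="1.0", stream=True, decode=True):
    print(line)
```

In a CI/CD pipeline, this tag-and-push step typically runs automatically once an image build succeeds, feeding the deployment stages that follow.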
Decision-making time
As we said, many of these terms, functions, and processes are commonly confused; however, now that you better understand the complexity of containers and the architecture needed to support them, you can make more informed decisions on which solutions are right for your environment. When your agency is ready to discuss further, Octo is here for you.