It should be no surprise that a new generation of application architectures is accompanied by a new generation of load balancing. Since the birth of load balancing, just before the turn of the century, the technology has moved at a predictable pace, which means it is time for innovation.
Today we're seeing adoption of the fifth generation of application architectures accelerate rapidly. Cloud-native (microservices-based) applications are growing faster than expected. The latest StackRox Containers and Kubernetes Security research tells us that nearly 30% of respondents have containerized more than half their applications.
Growing, too, are the workloads that come from disaggregating applications into their composite business functions. This deconstruction is driven by a desire for velocity in delivering digital capabilities to consumers. By reducing the scope to individual business functions, each can be independently developed, tested, and delivered without significantly impacting the others. This allows businesses to scale more rapidly by delivering new digital capabilities faster and more frequently. The result is that one application is now five or more workloads, each containerized and scaled on its own.
In modern architectures those traditional layers have decomposed into multiple components. Over 80% of a modern application is composed of externally sourced components: presentation frameworks, local data, session data, transactional data. Even logic has been torn apart and distributed across workloads that represent individual business functions.
As a monolithic application is broken up into discrete functions, east-west traffic increases. The heavy reliance upon APIs demands further scaling, optimization, and low-latency access. Load balancing is still the primary means of achieving that scale, but it is not always delivered via a traditional proxy. Data paths are more complex and dynamic today. This has led to a new generation of load balancing that is as distributed as the workloads it scales.
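To make "load balancing without a traditional proxy" concrete, here is a minimal sketch of client-side load balancing: each caller picks the next replica itself rather than routing through a central device. The endpoint addresses are hypothetical placeholders; real implementations (gRPC pick-first/round-robin resolvers, mesh sidecars) add health checking and dynamic endpoint discovery on top of this idea.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// roundRobin is a minimal client-side load balancer. Instead of a
// central proxy, every caller advances a shared counter and picks
// the next endpoint in turn.
type roundRobin struct {
	endpoints []string
	next      uint64
}

// Pick returns the next endpoint. The atomic increment keeps
// concurrent callers balanced without a lock.
func (rr *roundRobin) Pick() string {
	n := atomic.AddUint64(&rr.next, 1) - 1
	return rr.endpoints[n%uint64(len(rr.endpoints))]
}

func main() {
	// Hypothetical replica addresses for one containerized workload.
	rr := &roundRobin{endpoints: []string{
		"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080",
	}}
	for i := 0; i < 4; i++ {
		fmt.Println(rr.Pick()) // cycles through the replicas, then wraps
	}
}
```

The point of the sketch is the shift in topology: the balancing decision moves out of a dedicated appliance and into (or right next to) the workload itself.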
Load balancing is evolving to deal with the landscape changes, and at F5 that means a focus on the application.
We will also see application services such as security follow load balancing into a new disaggregated model, because the applications they secure have themselves disaggregated. In this model, security and scale at the microservice (component) level amount to container-to-container security and scale. Hence the rise of service meshes associated with Kubernetes clusters: a service mesh is intended to address the need for secure, scalable container-to-container communication.
We see service mesh demands and needs changing. As adoption continues to grow, it is not without challenges. Complexity remains a significant one for those developing, deploying, and operating containers. To wit, research from Reflex found that nearly half (43%) of respondents cited "complexity" as their greatest challenge running containers in production environments.
These challenges will need to be solved with a management (control) layer that enables a variety of roles to deploy and operate this new generation of load balancing and application services. F5 is distinctly positioned to solve the problem of operating at scale without being overwhelmed by the complexity of modern architectures and environments.
For example, we offer Aspen Mesh to address the issue of complexity with Kubernetes and Istio deployments. We are also working on solutions based on the power of NGINX to control and provide visibility into modern application deployments.
The embrace of modern application architectures will continue to have a transformative effect on application services such as security and visibility as they follow load balancing. For F5 that means thinking outside the box and looking inside the container cluster.