This is the last blog in a series on the challenges arising from digital transformation.
One of the primary purposes of containers and the orchestration that manages them (effortlessly, so we’re told) is efficient scale. There are other, mostly development-oriented benefits that drive the choice to embrace containers, but from the perspective of NetOps, it’s about scale and (the lack of) security.
Traditional delivery architectures don’t work inside container environments. Communication is erratic, dynamic, and unpredictable. Communications are multiplied and the environment is highly volatile. What it looks like right now isn’t what it will look like in two minutes, let alone ten.
But you still need to provide core network and security services to the ‘apps’ inside the container environment. You aren’t going to be able to insert identity and access control inside, nor manage certs across a variable number of instances. Terminating SSL at the app becomes a nightmare for both you and the apps themselves. Even application security services like bot defense and OWASP Top Ten protections become problematic: where do you shim them into such a volatile environment? Plus, it’s not like you’re going to map every service in a container environment to a public IP. You need a secure inbound path that provides application services without interfering with the operation inside the environment.
Luckily, apps deployed as multiple microservices inside a container environment are still apps (even APIs) and have an “app-specific” endpoint that provides an opportunity to establish a secure inbound path without mucking with the container environment. The result is a bifurcated network architecture: traditional principles of stability and scale on one side, modern software-based scale for containers on the other. This two-tiered architecture provides a reliable, secure endpoint through which to transition from the N-S backbone to the E-W paths within a container environment.
A two-tier architecture provides the reliability and security necessary while supporting the speed and scale demanded by containerized apps. By constraining the chaos of containers, you maintain sanity and isolate shared infrastructure from the entropy inherent in containerized apps. This has the added benefit of supporting a production pipeline in which components change at different rates. While the apps in the containerized environment might change frequently, changes to identity and access control and other security-related services are likely less frequent. Updates can be performed independently of one another, freeing DevOps to move at a faster pace without overwhelming NetOps on the N-S side of the ‘house’.
The ingress point can manage the app-specific policies (performance and some security) while the rest of the secure inbound network handles everything else. This makes introducing containers into production environments that also serve legacy apps a smoother, less disruptive process.
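For a concrete (if simplified) picture of that app-specific entry point, here is what it might look like in Kubernetes, one common container orchestrator. This is a minimal sketch, not a prescription: the app name, hostname, and TLS secret are illustrative. The Ingress resource terminates TLS at the edge and forwards traffic to a single Service that fronts however many microservices live behind it.

```yaml
# Hypothetical app-specific ingress: one secure N-S entry point
# into the E-W world of the container environment.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront-ingress        # illustrative app name
spec:
  tls:
    - hosts:
        - storefront.example.com
      secretName: storefront-tls  # cert managed at the edge, not per-microservice
  rules:
    - host: storefront.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: storefront  # single Service fronting the microservices
                port:
                  number: 80
```

Note that the microservices behind the `storefront` Service can scale, churn, and change without this resource (or anything on the N-S side) being touched.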
It’s important to remember that for a two-tier architecture to work, the ingress effectively becomes the app, or at least its virtual equivalent.
Even if there are a hundred microservices on the other side of the container “wall”, the ingress is the strategic point of control that enables scale and accessibility and secures applications against external threats. Use the ingress to apply those services that are best handled before traffic enters the volatile, uncertain realm of container world.
...And this concludes our exploration of the surprising truth about digital transformation. There are many other gotchas and obstacles out there, but the four covered in this series (cloud chaos, skipping security, diseconomy of scale, and container confusion) are likely the most critical to address.
If you’re aware of the impacts of change, you’ll be able to harness the power of automation, software-based security and scale, and new architectures to successfully navigate your way to the new data center.