It’s been a while since I’ve posted something new here. I have admittedly been busy with consulting commitments, in most cases under NDA. This unfortunately leaves anyone visiting my site with the impression that I’m on a crusade against security appliances, which the IETF calls ‘middleboxes’. The truth is that I don’t oppose middleboxes themselves, but rather the constraints their current deployment models place on networks.
To properly scale network security to architectural limits, we need to think small to go big. I’m a big believer in unitized network security functions (NSFs), where a unit can be a container, a VM, or yes even a middlebox. The real trick is to make use of dedicated resources for NSFs in a way that has quantitatively predictable and stable performance.
We also need to consider the functions performed by the NSFs themselves. I recently had the privilege of working with a number of great co-authors on RFC 8329, the “Framework for Interfaces to Network Security Functions”. The section on preventing ossification of NSFs makes clear that NSFs should not be classified in ways that limit their use. With that in mind, I would suggest that, relative to packet processing, all NSFs fall into one of three categories:
1) Classifiers – NSFs that examine packets in a manner resulting in a forwarding action, without changing the original packet. This is a very broad definition that can include NSFs such as ACLs, firewalls, SDN switches, and even intrusion prevention systems to name a few.
2) Transformers – NSFs that modify transiting packets as a result of their function. Again, this can broadly include encryption/encapsulation functions such as VPNs, proxy services, and network address translation (NAT) to name a few.
3) Collectors – NSFs that copy packets for out-of-band collection and analysis. It seems these days that each week brings a new player into the field of packet and event analytics.
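As a sketch of how this taxonomy might be modeled, the three categories can be expressed as composable flags. The names below are my own illustration, not terminology from RFC 8329:

```python
from enum import Flag, auto

class NSFCategory(Flag):
    """Packet-processing roles an NSF can perform (hypothetical model)."""
    CLASSIFIER = auto()   # examines packets and decides a forwarding action
    TRANSFORMER = auto()  # modifies transiting packets (NAT, VPN, proxy)
    COLLECTOR = auto()    # copies packets for out-of-band analysis

# Example assignments: real NSFs often combine categories.
FIREWALL = NSFCategory.CLASSIFIER
NAT_GATEWAY = NSFCategory.CLASSIFIER | NSFCategory.TRANSFORMER
IPS_WITH_TAP = NSFCategory.CLASSIFIER | NSFCategory.COLLECTOR

def modifies_packets(nsf: NSFCategory) -> bool:
    """Only a Transformer alters the original packet in transit."""
    return bool(nsf & NSFCategory.TRANSFORMER)
```

Using flags rather than a plain enum captures the point that a single NSF can carry qualities from more than one category.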
Many NSFs will of course combine qualities from two or even all three of these categories. That is perfectly fine in my view, as long as the NSF can:
– Perform the same set of functions consistently on all ingress packets, so that its performance is quantitatively stable
– Integrate into a common policy management system via APIs
– Provide detailed event logging to a common reporting system via APIs.
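Concretely, those three properties suggest a minimal contract an NSF could implement. The sketch below is my own illustration under assumed names (none of these classes or methods come from RFC 8329 or any particular vendor API):

```python
from abc import ABC, abstractmethod
from typing import Optional

class ManagedNSF(ABC):
    """Minimal contract for a unitized NSF (hypothetical sketch)."""

    @abstractmethod
    def process(self, packet: bytes) -> Optional[bytes]:
        """Apply the same function set to every ingress packet.
        Returns the (possibly transformed) packet, or None to drop it."""

    @abstractmethod
    def apply_policy(self, policy: dict) -> None:
        """Accept policy from a common management system via API."""

    @abstractmethod
    def emit_event(self) -> dict:
        """Return a structured event record for a common reporting system."""

class AllowListClassifier(ManagedNSF):
    """Toy classifier NSF: forwards packets whose first byte is allow-listed."""

    def __init__(self) -> None:
        self.allowed: set = set()
        self.events: list = []

    def process(self, packet: bytes) -> Optional[bytes]:
        verdict = bool(packet) and packet[0] in self.allowed
        self.events.append({"len": len(packet), "forwarded": verdict})
        # Classifier behavior: forward or drop, never modify the packet.
        return packet if verdict else None

    def apply_policy(self, policy: dict) -> None:
        self.allowed = set(policy.get("allowed_first_bytes", []))

    def emit_event(self) -> dict:
        return self.events.pop(0) if self.events else {}
```

The point of the abstraction is that a container, a VM, or a middlebox could each sit behind the same policy and reporting interfaces, which is what makes the unitized model manageable at scale.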
This model moves beyond basic micro-segmentation, which is the evolving norm for deploying network-based security in cloud/datacenter environments. Micro-segmentation is a concept in which NSFs are deployed transparently between cloud-based assets and their users. Clumsily, public-cloud customers are expected to purchase security solutions from a marketplace and deploy them in a manner similar to the perimeter deployments of the past. I’ve heard terms like ‘software-defined perimeters’ or ‘rings-around-things’ used to describe this approach.
Unfortunately, cloud consumers are quickly learning that this crude level of micro-segmentation isn’t really saving them much time or resources, and often results in less than optimal performance. The emergence of so many security-as-a-service providers, offering aggregated security functions tailored to application and productivity requirements, is coming to the rescue in force. Initially the domain of DDoS defense, the field now includes CASBs, private access providers, web application proxies, and other services, all built on the simple realization that what customers really want are bespoke clean-pipe services scaled to their requirements, with associated reporting and service level agreements.
Curveball Networks is here to advise on how to scale network-based security in a vendor-neutral and interoperable way. Please reach out to discuss your ideas and requirements in detail.