Security Beyond the Perimeter

Influencer Blog
By Andrew Hay
Co-Founder and CTO, LEO Cyber Security

Whether we like it or not, the way we architect, utilize, and secure the networks and systems under our control has changed. When servers were safely tucked away behind corporate firewalls and perimeter-deployed intrusion prevention controls, organizations became complacent, leaning on the perimeter and neglecting host security. Unfortunately, inadequately architected security controls that rely solely on broad network-based protection can leave an organization’s systems even more exposed to attackers after migration to private, public, and hybrid cloud hosting than they were before.

How we used to see, sell, and consume security

Most have heard the “defense in depth” analogy that relates security to a medieval castle with controlled access to different locations of the castle and a defensive moat around the perimeter. This “hard outside” and “soft inside” model was designed to make it as difficult as possible to get past the perimeter. However, once inside the walls, the trusted individual had elevated access to resources within the network.

This umbrella-centric model offered the most sweeping protection for the maximum number of systems and users within a network. Unfortunately, as time went on, security professionals neglected the endpoints in favor of the broad coverage offered by network-based appliances. As a result, the moment a user left the protection of the enterprise network, they were faced with two options:

1) Navigate a complex VPN connection back to the home office before heading to the internet, or
2) Turn a blind eye in hopes that nothing happened while they were “off network.”

Going back to the analogy, we either forced our users to check in at the castle before letting them go back out into the world (an additional step that often upset them) or trusted them to make good choices beyond the castle walls.

The former sacrificed usability for increased security, whereas the latter sacrificed security for usability. Unfortunately, neither option offered a good balance. Making matters worse, security is often added to the technology stack much later in the deployment lifecycle.

Much like in castle building, security controls are often bolted on as previously unknown deficiencies are uncovered or new innovations in technology materialize. All too often, organizations implement security controls in reaction to security, privacy, or compliance-driven events – such as a data breach or a change in a particular mandate. As such, security is often implemented as a knee-jerk reaction with little to no thought given to the complexity involved or the impact on employees and the business.

Unsurprisingly, the medieval defense analogy has lost much of its relevance in a world where systems and users move effortlessly from within the confines of a walled corporation to a local coffee shop and, perhaps, even to a different country as part of normal business operations.

What changed the security model?

Virtualization was a huge technology shift that immediately added complexity to how networks are built and secured. Nearly half (45 percent) of the respondents to TechTarget's ninth annual IT Priorities Survey said server virtualization would be their top data center infrastructure priority in 2017. But how do you place a physical firewall appliance (or any appliance, for that matter) in-line with a network that is completely software based?

The explosion of the Bring Your Own Device (BYOD) movement has added yet another layer of complexity. No longer can organizations rely on trusted devices issued by the business to its users. From WiFi-enabled personal devices (tablets, phones, and other consumer goods) to personal computers and laptops, BYOD devices have become a common sight within the workplace. Mikako Kitagawa, principal research analyst at Gartner, stated in a recent press release that the low adoption of corporate-issued mobile devices underlines the fact that large numbers of personally owned mobile devices are used in the workplace. "In fact, more than half of employees who used smartphones at work rely solely on their personally owned smartphones," said Kitagawa.

But how do you secure something that is not owned by the business, yet is being used for business tasks?

Not to be outdone, cloud architectures add even more security complexity into the mix. From Software-as-a-Service (SaaS) file sharing and customer tracking platforms, to Platform-as-a-Service (PaaS) custom application deployment, to Infrastructure-as-a-Service (IaaS) server instance hosting, the cloud moved the processing and hosting of traditional software into another organization’s data center. In January 2017, RightScale conducted its sixth annual State of the Cloud Survey. According to the study, 85 percent of enterprises have a multi-cloud strategy, up from 82 percent in 2016. Among enterprises, 38 percent have more than one hundred VMs in AWS and 21 percent have more than one hundred in Azure. This raises the question: if the architecture is owned by someone else and located in a different geographic location outside of the company’s control, how can the organization extend its security controls beyond its own perimeter?

Containers, perhaps the newest architectural model to push into the enterprise, take the concept of ease-of-deployment (first observed in virtualization environments and perfected in cloud architectures) to the extreme. Containerized operating systems can be used for development environments and the testing of new software without bloating the host machine. To scale, container orchestration tools (Kubernetes, Swarm, and Mesos, among others) were introduced to help manage the dynamic nature of containers. In doing so, however, they also made the provisioning of containers quick, easy, and often hidden from the standard monitoring capabilities of the organization. According to the 2017 Portworx Annual Container Adoption survey, 69 percent of respondents with knowledge of their company’s financial investments said their company was investing in containers. That percentage increased from 52 percent in 2016.
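
To make that visibility gap concrete, the sketch below is an illustration only: it assumes a cluster reachable through a local kubeconfig and the official kubernetes Python client, and it shows that an inventory of running containers has to come from the orchestrator's API rather than from a perimeter appliance or a network tap.

# Minimal sketch: visibility into dynamically provisioned containers comes from
# the orchestrator's API, not from watching the wire at the perimeter.
# Assumes the official `kubernetes` Python client (pip install kubernetes)
# and credentials in the default kubeconfig; everything here is illustrative.
from kubernetes import client, config

def list_running_pods():
    config.load_kube_config()  # read cluster credentials from ~/.kube/config
    core = client.CoreV1Api()
    pods = core.list_pod_for_all_namespaces(watch=False)
    for pod in pods.items:
        print(f"{pod.metadata.namespace}/{pod.metadata.name} "
              f"on node {pod.spec.node_name} ({pod.status.phase})")

if __name__ == "__main__":
    list_running_pods()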

How do we bridge the past and the future?

Securing the next generation of hosting platforms requires a new approach that not every organization is ready for. Some industry analyst firms promote the idea of a “cloud first strategy” for all technology deployments. Though not a bad idea per se, this doesn’t mean that forklifting a company’s entire architecture into cloud or containerized environments should be its number one priority, especially if it’s being forced to choose between a new architecture and the traditional security controls it depends upon.

Thankfully, technology has evolved to allow for more seamless security in environments that need to span traditional data centers, virtualization, and cloud environments. The advent of “software defined” technology, such as Software Defined Networking (SDN), Software Defined Storage (SDS), and Software Defined Security, has allowed organizations to grow their capabilities without the need to choose between having security and having new technology stacks.
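
As a small illustration of what “software defined” security can look like in practice, the sketch below assumes AWS and the boto3 SDK, with a placeholder VPC ID and an illustrative group name and address range: the firewall rule becomes an API object that is created, audited, and destroyed alongside the workloads it protects, rather than a physical appliance racked at the perimeter.

# Minimal sketch of a "software defined" security control: the firewall rule is
# an API object rather than a physical appliance. Assumes AWS and the boto3 SDK;
# the group name, VPC ID, and CIDR range below are placeholders.
import boto3

ec2 = boto3.client("ec2")

# Create a security group that travels with the workloads it protects.
sg = ec2.create_security_group(
    GroupName="app-tier-sg",            # hypothetical name
    Description="Example app tier rules",
    VpcId="vpc-0123456789abcdef0",      # placeholder VPC ID
)

# Allow HTTPS in from a single trusted range; everything else stays closed.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "trusted range"}],
    }],
)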

Dynamic cloud environments, especially public IaaS clouds, have greatly reduced the ability to use network and hardware controls to create defined perimeters or security choke points. The shared, dynamic nature of cloud-hosting environments also makes them more difficult to secure. Cloud server security mechanisms need to dynamically scale up and scale down with the servers themselves, including the ability to operate on either the private or public side of a hybrid cloud.

The elastic nature of cloud-hosting environments can also lead to dramatic increases in exploitable server vulnerabilities. Cloud servers are easily replicated or cloned within seconds, typically to increase available computing power. If a server is vulnerable to exploit, cloning that server multiplies the vulnerable surface area available to attackers. Every single server has to be rigorously hardened before it can be exposed to public cloud threats; the speed and scale of cloud deployments, coupled with deteriorating change management, make automation absolutely critical.
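
As one small example of the kind of automation that becomes critical at that speed and scale, the sketch below is a toy baseline check with a hypothetical host and port policy: it refuses to expose a freshly cloned server if the server answers on anything outside an approved set of ports.

# Minimal sketch of an automated pre-launch check for a cloned cloud server:
# verify that only an approved set of ports answers before the clone is exposed.
# The host, port list, and policy are illustrative; a real hardening pipeline
# would also cover patch levels, configuration baselines, and credentials.
import socket

ALLOWED_PORTS = {22, 443}                    # hypothetical approved baseline
CANDIDATE_PORTS = [22, 80, 443, 3306, 6379]  # ports worth probing

def open_ports(host: str, ports, timeout: float = 1.0):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                found.append(port)
    return found

def passes_baseline(host: str) -> bool:
    unexpected = set(open_ports(host, CANDIDATE_PORTS)) - ALLOWED_PORTS
    if unexpected:
        print(f"{host}: unexpected open ports {sorted(unexpected)}; do not expose")
        return False
    print(f"{host}: baseline check passed")
    return True

if __name__ == "__main__":
    passes_baseline("198.51.100.10")  # placeholder address of the new clone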

The destruction of data in a multi-tenant cloud (and now containerized) environment is a concern often overlooked by service users. Vendors often assure users that data destruction upon user request (i.e., deleting a file, closing the account, etc.) takes place and that organizations should not be concerned about their data being “found” at a later date by another user. Beyond publicly available documentation and marketing materials stating that the data will be deleted, however, what assurances do organizations have that this process aligns with their own data disposal processes and procedures? The answer, unfortunately, is that the provider might not even know, as it may be relying on virtualization technology or an underlying IaaS provider that cannot offer such guarantees to pass along to its customers. Simply put, the data destruction promises might be entirely beyond the control of the provider.

Though the idea of utilizing a cloud or container platform may seem enticing from a cost, workflow, and availability perspective, the security granularity required for highly sensitive and regulated data may not be something that organizations are willing to sacrifice. Until providers can prove that new architectures, applications, and cloud hosting environments are adequately managed, protected, and monitored, companies should continue to poke and prod at them.

Integrating all of these disparate components across multiple platforms and providers requires a combination of orchestration, automation, and an understanding of what’s happening at any point in time. One example of the integration of the aforementioned disparate pieces is Juniper Networks’ automatic and dynamic detection and mitigation ecosystem known as the Software-Defined Secure Network (SDSN).

In Closing

So, how do we, as security professionals and business owners, decide which mitigation controls to deploy to future-proof our security? Using key points from the Juniper Networks Software-Defined Secure Network (SDSN) page as a guide, it’s actually much easier than it sounds.

Organizations should leverage technology that is truly agnostic and not tied to one particular vendor. Heterogeneous environments require heterogeneous and efficient solutions to ensure that what works today will also work as technology evolves.

Meaningful insights are key to understanding what is happening across an entire organization’s technology footprint – whether it’s on-premises, BYOD, cloud, or containerized. No organization wants to jump through hoops to understand its security posture and defense capabilities.

An organization’s ability to rapidly respond to and defend against new threats impacting its security, privacy, compliance, and operational state will serve to reduce the mean-time-to-resolution (MTTR) in the face of an incident and will free up key personnel to fight other fires.

Companies need to make deployment (and, by proxy, security and compliance) an easy task and not one fraught with complicated integration paths or consultant-heavy customized solutions. They need to leverage something that works so that they’re not spending time troubleshooting the solution instead of addressing the original problem.

An organization should also seek out the strongest partner ecosystem to help it transition from legacy architectures or expand into new architectures in a safe and secure manner. Remember, the journey to cloud or containers shouldn’t be taken without a proper map in hand.

I’ll leave you with these final words: Don’t trade the security, privacy, and compliance of the organization, stakeholders, and data just to save a few dollars. The savings now could result in the loss of millions (or perhaps tens of millions) of dollars down the road.