Narrowing the Attack Surface: A Strategic Approach to Security

With the sustained onslaught of ransomware and high-volume destructive attacks, it’s clear that attack operations are growing in complexity and threat actors’ skills are improving. What’s worse, security teams are often asked to do more with less, and there is little tolerance from organizational decision makers for any shortfalls.

Against this backdrop, is there any hope for a security operations team to keep up? Absolutely. It all starts with a comprehensive security strategy that addresses an organization’s unique threat landscape, followed by scrutiny of the IT architecture and a deliberate focus of resources where they are needed most, so threat actors can be identified and mitigated before they achieve their objectives.

Dissecting Threats

The core of any security strategy should be a reduced attack surface that limits where threat actors can succeed. To achieve this objective, the threat landscape as it applies to a particular organization must be fully understood. It typically consists of three major categories:

1)  Commodity threat:

The commodity threat is the group of threat actors common to everyone. Commodity actors don’t necessarily have a target in mind; they are truly fishing with dynamite as they recon the internet looking for low-hanging fruit. While the commodity threat does not normally build custom tools for its tradecraft, it has many open source attack frameworks to leverage for sophisticated attacks. The threat research community has a pretty good bead on the indicators associated with this group of actors. Normally, well-orchestrated signature-based security controls and a well-patched environment will drive down the success commodity actors have. This allows a security team to focus on the more sophisticated threats.

2)  Targeted threat:

Targeted threat actors are specifically seeking out an organization. Unlike the commodity threat that sends out millions of phishing emails, the targeted threat may send out 10 spear-phishing emails to specific individuals who have the access they want. These attacks come from spoofed email addresses that increase the likelihood a potential victim will open the email. It is with this targeted threat group that a good security team really earns its pay. These are the threat actors who mostly avoid signature-based detection and require sophisticated security operations to be caught. They are most commonly referred to as advanced persistent threats, a group that includes sophisticated criminal gangs and lower-tier nation-state actors.

3) Advanced targeted threat:

Finally, advanced targeted threats are the actors that aren’t discussed in open source reporting because they work very hard to stay out of the news. These are the high-end nation-state actors who like to move down the OSI model to gain access. They don’t just live at the application layer; they tap networks through both physical and remote means. The advanced targeted threat will coordinate close-access, on-premises operations with remote operations. They are after national secrets and sophisticated technology intellectual property. Not all organizations are targets of these actors; however, a good security strategy can bring some success in warding off these really bad guys as well.

Examining IT Architecture

In my opinion, the most significant challenge most security teams face is an architecture that was never designed with security in mind. After framing the threat landscape, the next step is to take a hard look at this important area. Based on 24 years of military experience, I’ve learned that understanding the physical terrain you want to protect in combat operations, and leveraging that terrain in your defensive plan, is critical. This basic security principle translates well to the “cyber terrain.”

The first place to start is with Active Directory. If a security team is not involved in how Active Directory is organized and managed, it will likely never have much success in protecting the environment. Active Directory infrastructure has a tendency to grow and organize itself based on ease of management rather than security principles. Some IT service management teams are downright negligent in how they have set up their environment. The first thing I look for in Active Directory is how Organizational Units (OUs) are organized. There should be a security strategy applied to how OUs are built out. This provides the ability to logically segment an environment and deny access to resources based on the risk posed by the members of each OU.

For example, a security team is always going to have that group of legacy business applications that, for one reason or another, cannot be patched in a timely manner. If all of these servers are placed into a high-risk OU, exposure to the rest of the network can be limited if they are compromised through a lagging patch. The same principle can be applied to user OUs. If there is a high-risk category of users, say users authenticating to your guest Wi-Fi, that group should be managed in a different OU than the normal user population. This ensures the high-risk and transient user group has limited access to the rest of the network’s resources.
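As a rough illustration of the idea (the OU names, risk tiers, and access policy below are hypothetical, not a reference implementation), a team might record a risk tier for each OU and derive network access rules from that tier:

```python
# Hypothetical sketch: deriving network access rules from OU risk tiers.
# OU names, tiers and zones are illustrative only.

OU_RISK = {
    "OU=LegacyApps,DC=example,DC=com": "high",      # unpatchable legacy servers
    "OU=GuestWiFiUsers,DC=example,DC=com": "high",  # transient guest accounts
    "OU=StandardUsers,DC=example,DC=com": "medium",
    "OU=CoreServices,DC=example,DC=com": "low",
}

# Which risk tiers may reach which resource zones (everything else is denied).
ALLOWED_ZONES = {
    "high": {"internet_egress"},
    "medium": {"internet_egress", "business_apps"},
    "low": {"internet_egress", "business_apps", "core_infrastructure"},
}

def can_reach(member_ou: str, zone: str) -> bool:
    """Return True if members of the OU are allowed to reach the zone."""
    tier = OU_RISK.get(member_ou, "high")  # unknown OUs are treated as high risk
    return zone in ALLOWED_ZONES.get(tier, set())

if __name__ == "__main__":
    print(can_reach("OU=GuestWiFiUsers,DC=example,DC=com", "business_apps"))  # False
    print(can_reach("OU=StandardUsers,DC=example,DC=com", "business_apps"))   # True
```

The point of the sketch is the default: anything not explicitly granted to an OU’s risk tier stays out of reach.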

The last thing to consider in an Active Directory security strategy is how to manage and create accounts with elevated privileges. At the end of the day, the most sophisticated actors are trying to elevate privileges so that they become an insider rather than an outsider. Closely monitoring who has elevated privileges, and requiring multifactor authentication to use them, will put up a significant barrier for most threat actors and limit the attack surface.
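A minimal sketch of the monitoring half of that idea (the account records and group names here are made up rather than pulled from a real directory) is to flag any account that holds elevated privileges without multifactor authentication enforced:

```python
# Hypothetical sketch: flag privileged accounts without MFA enforced.
# Account data is hard-coded; in practice it would come from the directory.

accounts = [
    {"name": "svc-backup",  "groups": ["Domain Admins"],     "mfa_enforced": False},
    {"name": "jdoe",        "groups": ["Domain Users"],      "mfa_enforced": False},
    {"name": "asmith-adm",  "groups": ["Enterprise Admins"], "mfa_enforced": True},
]

PRIVILEGED_GROUPS = {"Domain Admins", "Enterprise Admins", "Schema Admins"}

def risky_privileged_accounts(accounts):
    """Yield accounts that hold elevated privileges but lack MFA."""
    for acct in accounts:
        privileged = PRIVILEGED_GROUPS.intersection(acct["groups"])
        if privileged and not acct["mfa_enforced"]:
            yield acct["name"], sorted(privileged)

for name, groups in risky_privileged_accounts(accounts):
    print(f"REVIEW: {name} is in {groups} without MFA")
```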

The next step when assessing architecture is to look at the segmentation of datacenters. The NIST model of the three-tier datacenter architecture should be the goal. Whether it is segmentation between the web server, application and database tiers, or micro-segmentation of development, testing and production environments, this is a critical task to undertake; without it, a security team will fail.
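One way to make the tiering concrete (a sketch with assumed tier names and ports, not a NIST-prescribed format) is to write the permitted tier-to-tier flows down as data and check any proposed connection against that list, denying everything else:

```python
# Hypothetical sketch: three-tier datacenter flow policy, default deny.
# Only the listed (source_tier, destination_tier, port) flows are allowed.

ALLOWED_FLOWS = {
    ("web", "app", 8443),   # web tier talks to the application tier over TLS
    ("app", "db", 5432),    # application tier talks to the database
}

def flow_permitted(src_tier: str, dst_tier: str, port: int) -> bool:
    """Default deny: a flow is allowed only if it is explicitly listed."""
    return (src_tier, dst_tier, port) in ALLOWED_FLOWS

print(flow_permitted("web", "app", 8443))  # True
print(flow_permitted("web", "db", 5432))   # False: web may never reach the database directly
```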

The way software development happens today, engineers go to software libraries at repositories such as GitHub and download whatever they need to build the feature they are working on. Many times, the developer has no understanding of what ports and protocols the application actually requires, so servers end up provisioned in a “trust all” mode. Instead, every server should be provisioned with a “zero trust” approach: all ports and protocols closed, and only the ones required by the application left open.
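Translating that into practice (again a sketch; the server names and required ports are assumptions), a provisioning check can start from an empty allow-list and flag any server that exposes a port its application does not explicitly need:

```python
# Hypothetical sketch: "zero trust" provisioning check, everything closed by default.

REQUIRED_PORTS = {
    "orders-api": {443},     # the application only needs HTTPS
    "reporting-db": {5432},  # the database only needs PostgreSQL
}

def unexpected_ports(server: str, listening_ports: set) -> set:
    """Return any listening port not on the server's explicit allow-list."""
    allowed = REQUIRED_PORTS.get(server, set())  # unknown server -> nothing allowed
    return listening_ports - allowed

# Example: a scan finds the orders-api host also listening on SSH and an old FTP port.
print(unexpected_ports("orders-api", {443, 22, 21}))  # {21, 22} -> close or justify these
```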

Another common mistake is combining server functions on one host, for example, hosting an application on the web server. Cost pressures drive IT service managers to this strategy; however, the practice is high-risk and prevents the security team from maintaining a well-segmented environment.

The last point of discussion on architecture is remote access, and this is a very short conversation: all remote access into an environment must require multifactor authentication, without question. Users loathe this because it adds friction when accessing the environment remotely. However, without this security control, it is not a matter of if, but when, the environment will be compromised and owned by a targeted or advanced targeted actor.
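The rule is simple enough to express as a one-line policy check (a sketch; the session fields are invented for illustration):

```python
# Hypothetical sketch: reject any remote session that did not complete MFA.

def admit_remote_session(session: dict) -> bool:
    """Remote sessions are admitted only when multifactor authentication succeeded."""
    if not session.get("remote", True):          # local session: this policy does not apply
        return True
    return bool(session.get("mfa_passed", False))

print(admit_remote_session({"remote": True, "mfa_passed": False}))  # False: deny
print(admit_remote_session({"remote": True, "mfa_passed": True}))   # True: allow
```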

Refining Security

Once the architecture is optimized, the next step is to narrow the focus of security operations to get the best bang for the buck. In my experience, a threat actor is usually only interested in two percent of a network, but uses the other 98 percent to gain access to it. With limited resources available, security teams should purposefully focus those resources on the most important risks.

The first step, which is the one nobody wants to conduct, is classification of data and business applications to understand which are the most critical to protect. The old military saying, “Those who protect everything protect nothing,” really drives this point home. I have met with organizations that have twenty-plus security classification categories. That level of granularity is unnecessary. It is far more manageable and repeatable to start simple with three categories:

1) Low: The data or application is public knowledge, and there is no damage if it is compromised.

2) Medium: The workloads and data are critical to business operations and will have business impact if disrupted.

3) High: If compromised or disrupted, it could have substantial, if not existential, consequences.

Once these three classification buckets are determined, security dashboards and custom, risk-based views of the telemetry received from security controls can be established. On our security team, we refer to these as “Named Areas of Interest,” meaning we focus on these critical systems more than on systems of lower risk.
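As a rough sketch of what that looks like in telemetry tooling (the asset names, alerts and classifications below are hypothetical), each asset carries one of the three classifications and the “Named Areas of Interest” view simply filters alerts down to the highest-risk bucket:

```python
# Hypothetical sketch: filter security telemetry by asset classification.

ASSET_CLASSIFICATION = {
    "public-website": "low",
    "order-processing": "medium",
    "payment-vault": "high",
    "hr-records": "high",
}

alerts = [
    {"asset": "public-website", "signature": "port scan"},
    {"asset": "payment-vault", "signature": "unusual admin logon"},
    {"asset": "hr-records", "signature": "large outbound transfer"},
]

def named_areas_of_interest(alerts, min_class="high"):
    """Return only alerts touching assets at or above the requested classification."""
    rank = {"low": 0, "medium": 1, "high": 2}
    return [a for a in alerts
            if rank[ASSET_CLASSIFICATION.get(a["asset"], "high")] >= rank[min_class]]

for alert in named_areas_of_interest(alerts):
    print(f"{alert['asset']}: {alert['signature']}")
```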

Another strategy to consider is how much of your security team’s effort should be put towards trying to protect user terminals. Cutting-edge security teams assume their user base is already compromised. Instead of trying to monitor and instrument tens of thousands of user terminals, they closely monitor the ingress and egress points between the users and the business applications, looking for anomalous activity.
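A very simplified sketch of that ingress/egress focus (the traffic records and threshold are invented) is to baseline per-user egress volume at the boundary and flag large deviations, rather than instrumenting every endpoint:

```python
# Hypothetical sketch: flag anomalous egress volume at the user/application boundary.
from statistics import mean, stdev

# Bytes sent out of the user segment per user per day (illustrative history).
history = {
    "jdoe": [120_000, 95_000, 110_000, 130_000, 105_000],
    "asmith": [80_000, 75_000, 90_000, 85_000, 82_000],
}

today = {"jdoe": 118_000, "asmith": 4_200_000}  # asmith shows a large jump over baseline

def anomalous_egress(history, today, sigma=3.0):
    """Flag users whose egress today exceeds their mean by `sigma` standard deviations."""
    flagged = []
    for user, volumes in history.items():
        threshold = mean(volumes) + sigma * stdev(volumes)
        if today.get(user, 0) > threshold:
            flagged.append(user)
    return flagged

print(anomalous_egress(history, today))  # ['asmith']
```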

They are also putting in place architectural designs that treat identity as a security perimeter, such as requiring sandboxed browsers for web applications and multifactor authentication. Banks and popular cloud applications already use this exact strategy to protect infrastructure from customers they assume are compromised.

Keeping Pace

Even with these strategies in place, security teams will always struggle to keep up with evolving threats. Diligence is key to anticipating and staying ahead of what could potentially harm an organization. Another important element is to have a team in place that is not afraid to admit it doesn’t know it all.

An “ego-free” approach helps everyone acknowledge that teamwork is necessary to keep sophisticated hackers at bay.  This is a dynamic field in a constant state of flux; thus, a security strategy should be fluid and flexible, and most importantly regularly evaluated, assessed and adjusted.

 

Jeff Schilling is chief of operations and security for Armor’s cyber and physical security programs, covering the corporate environment and customer-hosted capabilities. For more information, visit www.armor.com.
