Artificial Intelligence & Machine Learning: Securing the Latest Wave of Innovation, Part 2

“Anything that could give rise to smarter-than-human intelligence—in the form of artificial intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement—wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league.”
Eliezer Yudkowsky

Our last blog, the first in this 2-part series covering artificial intelligence (AI) and machine learning (ML), reviewed the fundamental differences between the two technologies, clarified some common misconceptions about them, and discussed their popularity in today’s business world.

In fact, according to a 2018 survey conducted by MemSQL, 74% of respondents consider AI and ML to be game changers with the potential to transform their jobs and industries. Furthermore, 77% of respondents are actively using AI and ML, supporting the notion that these technologies are rapidly making their way into business operations around the world.

However, as with any new technology, great growth brings great risk. This blog will illustrate how cybercriminals are weaponizing AI and ML, how these same technologies are used to create more secure environments, and how Armor is employing AI and ML within our own platforms.

Cyberthreats Against AI & ML

According to Boston Consulting Group (BCG), for those responsible for corporate security—everyone from CIOs to CISOs to CROs—AI presents 2 types of risk that change the nature of their jobs:

  1. Criminals, bad actors, unscrupulous competitors, and insider threats can potentially manipulate fledgling internal AI programs.
  2. Attackers could also use AI in a variety of ways to exploit vulnerabilities in their victims’ defenses.

Alarmingly, more than 90% of cybersecurity professionals in the United States and Japan expect hackers to use AI against the companies they work for, indicating that threat actors will likely use AI to launch attacks against companies and individuals, and their respective data. And, as mentioned above, a company’s proprietary AI projects could themselves be targeted by hackers looking to manipulate them for their own purposes.

What are some possible scenarios of AI being used nefariously? BCG lists the following examples in its report:

  1. Manipulate the pricing algorithms of a competitor
  2. Control facial recognition software to target specific individuals or allow others to get away with various crimes
  3. Broadcast messages, make phone calls, and open malicious websites using virtual assistants, such as Amazon Echo, Google Assistant, and Apple’s Siri

As more and more of our world becomes connected through AI and the internet of things (IoT), there will be more vulnerabilities for threat actors to exploit, and a larger attack surface for cybersecurity professionals to defend.

Is the possibility of hackers compromising a company’s AI application something to actually worry about? What is the likelihood of this happening? A recent report found that 20% of companies have already incorporated AI into some offerings or processes, and 70% of executives expect AI to play a significant role at their companies within the next 5 years. As more companies turn to AI to fuel growth, the scale of the risk grows with them.

Now that we have established that AI will be a prevalent technology—and threat—in the years to come, let’s explore the real-world impact of cybersecurity attacks against these systems. According to the BCG report, companies’ AI initiatives present an array of potential vulnerabilities, including malicious corruption or manipulation of training data, implementation, and component configuration. No industry is immune, and ML and AI already present many risks to companies, such as:

  • Increased risk of credit card fraud
  • Appearance of discriminatory hiring practices
  • Compromise of systems that control cyber-physical devices managing traffic flow, train routing, or dam overflow
  • Interference with medical devices
  • Meddling with IoT-connected devices that rely on ML or AI systems

As you can see, exploiting vulnerabilities in AI and ML infrastructure carries many consequences, affecting brand reputation, finances, and even physical safety. These threats will be the next frontier of the asymmetrical battle between cybersecurity professionals and hackers.

AI + ML + Cybersecurity

Although the risks of AI and ML can seem frightening, that doesn’t mean these technologies can’t be used for good. Security professionals are realizing that AI and ML can help secure corporate and personal environments. As cybersecurity professionals look to level the playing field in this asymmetrical arms race, AI will be just another instrument in the data security toolkit.

A few ways that AI or ML technologies can help bolster cloud and cybersecurity initiatives include:

  1. Antimalware and other security technologies moving beyond the signature. Traditionally, malware has been identified by its signature, or unique identifier. However, with 230,000 new malware samples created daily, ML can be deployed to identify malware based on its behavioral patterns and attributes rather than a pre-defined signature. This allows malware researchers to quarantine and remediate malware that has never been seen before (a minimal sketch of this approach follows this list). Additionally, identifying unknown indicators of compromise (IOC) will be an area of growth in this space in years to come.
  2. Deception technology that lays intelligent honeypots, or traps, for hackers. As described in the BCG report, these offensive tactics “create a duplicate of the environment to be infiltrated to make attackers believe they are on the intended path and then use the deceit to identify the culprit.” AI-enabled response systems can segregate networks dynamically to isolate valuable assets in safe “places” or redirect attackers away from vulnerabilities or valuable data. These systems independently recognize hackers during approach and landing, trap them in a contrived environment, and direct them down paths security professionals have laid out for them—simultaneously securing your most valuable systems and identifying threat patterns and actors in your environment.
  3. Cybersecurity skills gap narrowing through automation and increased efficiency. AI helps alleviate pain points associated with the industry’s talent shortage. With cybersecurity teams hard to staff, it is imperative to use security resources efficiently, so companies are turning to AI to automate repetitive tasks. This frees human analysts for higher-priority, strategic activities. ML and AI can automate playbooks and tasks such as analyzing logs for IOC, checking threat lists, and consolidating information about compromises across a variety of tools and services. As a result, security orchestration platforms have become commonplace within security operations centers (SOCs) worldwide.
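
To make the first item above concrete, here is a minimal sketch of behavior-based malware classification. The behavioral feature set and the data are hypothetical and synthetic, for illustration only; a production system would train on real sandbox telemetry at far larger scale.

```python
# Minimal sketch: classify malware by behavior rather than signature.
# Feature set and data are hypothetical, for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical per-sample behavioral features, e.g., gathered from a sandbox run:
# [API calls/sec, files written, registry edits, outbound connections, payload entropy]
benign = rng.normal(loc=[20, 5, 2, 3, 4.0], scale=[5, 2, 1, 1, 0.5], size=(500, 5))
malicious = rng.normal(loc=[80, 40, 15, 25, 7.5], scale=[20, 10, 5, 8, 0.4], size=(500, 5))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# No signature database: the model learns what malicious *behavior* looks like,
# so it can flag samples it has never seen before.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The same principle underlies flagging unknown IOC: instead of matching a known hash, the model scores how far a sample’s behavior sits from the benign baseline.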

These are just a few of the many ways security professionals are using AI and ML to deliver stronger cloud security. As noted, AI and ML are double-edged swords when it comes to cybersecurity. These technologies offer new vectors and applications for hackers to target, while also enhancing the ability to defend critical data, applications, and systems.

AI, ML & Armor

So, how is Armor thinking about these technologies and the impact they have on our business and clients?

Armor recognizes that, as a security-as-a-service (SECaaS) provider, we have a responsibility to our clients to view AI applications as the next frontier of “cloud workloads” to be secured. Armor offers several solutions to help secure AI applications in the cloud.

Because many AI applications run on traditional client- or server-based virtual machine (VM) architectures, we provide host-based protections and controls for them. Our Armor Anywhere service offers file integrity monitoring (FIM), malware protection, intrusion detection (IDS), and vulnerability scanning. We also secure other parts of the network that host-based controls might not safeguard, including log management and web application firewalls (WAF) with dynamic threat blocking.
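
To illustrate the core idea behind FIM (this is a minimal sketch of the general technique, not how Armor Anywhere implements it): hash files against a trusted baseline and report anything added, removed, or modified.

```python
# Minimal sketch of file integrity monitoring (FIM): compare SHA-256 digests
# against a trusted baseline. Illustrative only; the path is an example.
import hashlib
from pathlib import Path

def snapshot(root: str) -> dict[str, str]:
    """Map each file under root to its SHA-256 digest."""
    return {
        str(p): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff(baseline: dict[str, str], current: dict[str, str]) -> None:
    """Report files added, removed, or modified since the baseline."""
    for path in current.keys() - baseline.keys():
        print(f"ADDED    {path}")
    for path in baseline.keys() - current.keys():
        print(f"REMOVED  {path}")
    for path in baseline.keys() & current.keys():
        if baseline[path] != current[path]:
            print(f"MODIFIED {path}")

# Usage: capture a baseline once, then compare on a schedule.
baseline = snapshot("/etc")
# ... later ...
diff(baseline, snapshot("/etc"))
```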

Armor is taking steps to either leverage existing AI and ML capabilities to improve our current cybersecurity toolkit or build them ourselves, particularly for playbook automation and response. For example, our security information and event management (SIEM) tool uses ML to power user behavior analytics (UBA). Applied to user event and network flow logs, UBA can distinguish between normal and abnormal traffic behavior, identify early warning signs of irregular actions, categorize risky behavior, and give security analysts the right tools to prevent costly damage.

Gartner states that by 2022, 95% of security incidents will stem from human error or misconfiguration. We believe this means the security incidents of the future will not resemble those of the past, when attacks came from outside the network; instead, hackers will use compromised credentials to access the network as credentialed users. Security analysts will therefore need UBA to move beyond the traditional signature-based approach to identifying IOC and focus instead on the behavioral profiles of users and data in an environment. For example, a hacker might present apparently legitimate credentials, but if they sign in at the wrong time, from the wrong location, or from a different IP than expected, those behaviors could flag the attempt as an IOC.
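
A minimal sketch of that example follows. The log fields, per-user baseline, and two-indicator threshold are hypothetical; a production UBA system would model many more signals.

```python
# Minimal sketch of rule-style UBA: valid credentials, but the login's time,
# location, or source IP deviates from the user's baseline.
# Field names and the threshold are hypothetical, for illustration only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Login:
    user: str
    hour: int       # 0-23, local time
    country: str
    source_ip: str

def build_baseline(history: list[Login]) -> dict:
    """Profile one user's typical login hours, countries, and source IPs."""
    return {
        "hours": Counter(l.hour for l in history),
        "countries": {l.country for l in history},
        "ips": {l.source_ip for l in history},
    }

def indicators(login: Login, baseline: dict) -> list[str]:
    """Return the behavioral indicators this login trips."""
    hits = []
    if baseline["hours"][login.hour] == 0:
        hits.append("unusual hour")
    if login.country not in baseline["countries"]:
        hits.append("new country")
    if login.source_ip not in baseline["ips"]:
        hits.append("unseen source IP")
    return hits

history = [Login("alice", h, "US", "203.0.113.10") for h in (9, 10, 11, 14)]
suspect = Login("alice", 3, "RO", "198.51.100.7")

tripped = indicators(suspect, build_baseline(history))
if len(tripped) >= 2:  # several deviations together are treated as an IOC
    print("possible IOC:", ", ".join(tripped))
```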

In addition to using out-of-the-box ML capabilities provided by our technology stack, Armor is also working on ways to use security orchestration, automation, and response (SOAR) platforms to automate playbooks that our SOC teams historically ran manually. This liberates the SOC to focus on higher-priority tasks, giving us an advantage over competitors that still run playbooks manually.
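
As a simplified illustration of the kind of playbook step a SOAR platform can automate, consider checking log entries against a threat list. The feed, log format, and response action below are hypothetical, not Armor’s implementation.

```python
# Minimal sketch of an automated playbook step: extract IPs from log lines,
# match them against a threat list, and trigger a response for hits.
# The threat list, log format, and open_ticket action are hypothetical.
import ipaddress
import re

THREAT_LIST = {"198.51.100.7", "203.0.113.99"}  # stand-in for a real feed

IP_RE = re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b")

def extract_ips(line: str) -> list[str]:
    """Pull syntactically valid IPv4 addresses out of a log line."""
    ips = []
    for candidate in IP_RE.findall(line):
        try:
            ipaddress.IPv4Address(candidate)
            ips.append(candidate)
        except ValueError:
            pass
    return ips

def open_ticket(ip: str, line: str) -> None:
    # Placeholder for the automated response (ticket, quarantine, notification).
    print(f"ALERT: {ip} is on the threat list -> {line.strip()}")

def run_playbook(log_lines: list[str]) -> None:
    for line in log_lines:
        for ip in extract_ips(line):
            if ip in THREAT_LIST:
                open_ticket(ip, line)

run_playbook([
    "2024-05-01T03:12:09Z accepted connection from 198.51.100.7",
    "2024-05-01T03:12:11Z accepted connection from 192.0.2.14",
])
```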

The result? We have an industry-leading dwell time of days vs. months.

Armor will continue to work on securing new AI applications and systems; we also commit to improving our security tools and processes to take advantage of the many security benefits AI and ML technologies offer.
