
As AI makes headlines and augments the way we work, there’s one area to be cautiously optimistic about: cybersecurity. Can it lend a hand? Certainly. But it can lend a hand to the bad guys, too. In the hands of bad actors, AI can help identify vulnerabilities and exploit paths faster, accelerating attacks and breaches of your defenses. It’s a double-edged sword, and as we learn to wield it, here are some challenges to consider when applying AI to data security.

1. AI doesn’t eliminate the human element from cybersecurity.

AI can solve some problems faster than humans can alone, but it’s not hands-off yet. As long as humans play a role in its implementation, the human element introduces vulnerabilities AI can’t solve for. (In some cases, such as model training, human error can even skew the models themselves.)

Worse: AI may make managing the human element even more challenging. As AI-based or AI-generated attacks increase, training people to recognize them and respond securely becomes more and more difficult.

2. There isn’t a silver bullet in security, and the false sense of security AI creates can be risky.

Security is a never-ending, always-evolving landscape. Just like a security-savvy enterprise, attackers and malware developers make it their business to modernize, update, and evolve their tools every single day. While AI can be a tool in your toolbox, it’s also in theirs. 

Enterprises need to exercise the same vigilance, adopting new tools and technologies, and applying a many-layered approach to stay up to date. AI should be part, but not all, of that effort.

3. AI is currently more reactive than proactive.

The very nature of learning models and pattern recognition is reactive—it’s a powerful tool for anomaly detection and spotting behavioral outliers that can indicate a breach or a lurking attacker. However, its application is often labeled as proactive.

Proactive technology and processes in your security stack help you build a better defensive posture—crucial when attackers are always learning new ways to evade reactive detection.

Read more: 5 Ways to Address Security Gaps Before an Attack >>

Reactive detection is important, but a security strategy has to cover the before, during, and after of a breach to be effective. That means good data hygiene, patch management, multifactor authentication, fast analytics on consistent logging, and plenty of training and tabletop exercises to ensure recovery objectives are thoroughly tested and can be met.

4. Pattern recognition that actually works requires models to change.

AI is advanced pattern recognition. But what happens when ordinary outliers trigger responses because the models don’t evolve?

Here’s the problem: Users on an enterprise network are generally quite predictable. Models learn this over time, but this means any irregularities can actually introduce problems into the security environment rather than solving them. (Say, a system that reacts and modifies firewall rules on the fly, locking legitimate users out of a system.) 

There still needs to be a solid degree of hands-on review—and the experts and reams of data to support it—to sift out false positives while keeping an eye on the model over time.
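To make the false-positive problem concrete, here’s a minimal sketch of threshold-based anomaly detection. The login counts, threshold, and scenario are invented for illustration—real systems use far richer features—but the failure mode is the same: a legitimate change in behavior looks identical to an attack until a human (or a retrained model) says otherwise.

```python
# Minimal z-score anomaly detector over a single behavioral feature.
# All data and thresholds below are hypothetical illustrations.
from statistics import mean, stdev

def is_anomalous(history, new_value, z_threshold=3.0):
    """Flag new_value if it deviates more than z_threshold standard
    deviations from the historical baseline."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_threshold

# A user's daily logins are highly predictable...
baseline = [9, 10, 11, 10, 9, 10, 11, 10]

# ...so a legitimate shift (say, 30 logins during an onboarding push)
# trips the detector just like an attack would: a false positive
# unless someone reviews it or the model learns the new pattern.
print(is_anomalous(baseline, 30))   # True: flagged
print(is_anomalous(baseline, 10))   # False: normal
```

The model has no way to know *why* the behavior changed—that judgment still belongs to a human reviewer.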

5. To avoid automation without oversight, AI at high levels of networking and security should always operate under “supervised learning.”

Putting too much trust in AI within a security system that can make its own changes surfaces key concerns around AI’s tendency toward bias, hallucinations, and error. By design, these systems can create new yes/no rules on the fly based on what they interpret as “patterns” or “outliers.” That means they could make changes you don’t intend, oversecuring or undersecuring a system. (A slight anomaly in user behavior could cause a lockout—even out of your physical office building.)

These model-driven adjustments could be even harder to detect, locate, or reverse than changes made by a human.

A bigger problem: If your permissions or access rules are complex, the AI may be unable to distinguish legitimate exceptions from genuine anomalies.
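One common way to keep oversight in the loop is to have the model propose rule changes rather than apply them directly. Here’s a minimal sketch of that pattern—the class and field names (`ProposedRule`, `ReviewQueue`) are hypothetical illustrations, not any product’s API:

```python
# Human-in-the-loop gate for model-proposed security changes.
# The model can only propose; a human reviewer must approve.
from dataclasses import dataclass, field

@dataclass
class ProposedRule:
    source_ip: str
    action: str          # e.g., "block"
    reason: str          # why the model flagged it

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    applied: list = field(default_factory=list)

    def propose(self, rule):
        # Called by the detection model: nothing changes yet.
        self.pending.append(rule)

    def approve(self, rule):
        # Called only by a human reviewer: the change takes effect.
        self.pending.remove(rule)
        self.applied.append(rule)

queue = ReviewQueue()
rule = ProposedRule("203.0.113.7", "block", "login outlier at 03:14")
queue.propose(rule)   # flagged, but no firewall change happens
queue.approve(rule)   # change applied only after human review
```

The design choice here is the important part: automated detection stays fast, but enforcement waits for a decision the model can’t make on its own.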

6. Human intervention will always be critical—and even more critical is teaching the AI what is good or bad.

Supervised learning reintroduces the human element into the models, which takes us back to point #1: Humans need to be part of the equation, and their interaction with the AI as users is one of the most important aspects of security. Users and AI in security have to go hand in hand.

What is a good result? What is a bad result? When should the AI take action? While it’s not wise to let AI run entirely on its own, the more you intervene, the more you risk reintroducing human bias.

We can’t fully rely on AI or assume it’s giving us a complete and accurate picture. A human will have to make sense of the information to make the right decision, cautiously.
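As a toy illustration of “instructing what is bad or good,” here’s a sketch where human-labeled alerts drive a minimal nearest-centroid classifier. All feature values and labels are invented, and real systems use many features—but it shows how the model’s every decision traces back to human judgments (and human biases) in the labels:

```python
# Supervised labeling in miniature: humans mark past alerts as benign
# or malicious; new alerts get the label of the nearest centroid.
# All data below is hypothetical illustration.
from statistics import mean

# Human-labeled history: (failed_logins_per_hour, label)
labeled = [(2, "benign"), (3, "benign"), (1, "benign"),
           (40, "malicious"), (55, "malicious"), (38, "malicious")]

# Average feature value per label, learned from the human labels.
centroids = {
    label: mean(v for v, l in labeled if l == label)
    for label in {"benign", "malicious"}
}

def classify(failed_logins):
    # Assign the label whose centroid is closest. The quality of this
    # decision depends entirely on the human labels above.
    return min(centroids, key=lambda lbl: abs(failed_logins - centroids[lbl]))

print(classify(4))    # benign
print(classify(47))   # malicious
```

If the humans mislabel the history—or the labels encode bias—the model faithfully reproduces those mistakes at scale.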

7. Anomaly detection at scale requires immense amounts of data to get right.

The data required to execute advanced pattern recognition is immense. To find that needle in the haystack, data must come from a multitude of sources. It’s not just the volume but also the breadth from which the data must be gathered and analyzed. 

Too many enterprises underestimate this, making AI as a security tool far from turnkey. Data platforms must offer incredible visibility, simplicity, and scale to meet these demands.

AI Won’t Solve Every Security Concern, but Your Data Storage Can

While AI may not solve all data security concerns, a data storage platform designed for resiliency can—especially one that delivers on recovery before and after AI picks up the scent. 

Pure responds rapidly to threats. We power analytics and AI platforms for higher-volume ingest and correlation, arming threat hunters and forensics teams with speedy insights so you can contain attackers before they plant malware. We also secure data with full immutability and protect it with strict access controls and granular, rapid data recovery.

Discover Pure Storage Disaster Recovery as a Service and our Ransomware SLA that guarantees shipment of clean arrays for recovery after an attack—just two ways you can ensure resiliency. And, if you are adopting and training AI models for security or otherwise, Pure Storage can also support you with storage optimized for AI workloads. As you integrate new data sources, new operational databases, new transformation workflows, and more, you may find you have storage requirements you didn’t account for. An agile storage system like Pure Storage can support such evolving demands, with strong integrations across the hardware and software solutions in the AI ecosystem.