Cybersecurity: Past, Present, and the AI Future
Heather Adkins and I sat down for a conversation with Garrett M. Graff at Verify 2024, a conference organized by the William and Flora Hewlett Foundation and Aspen Digital (The Aspen Institute) that brought together journalists, policymakers, and corporate leaders to discuss the state of cybersecurity, the lessons learned over the years, and where the ecosystem is headed next.

Since the days of floppy disks and dial-up, threat actors have been probing for vulnerabilities to exploit. Many of the techniques used in today’s attacks have existed since the early days of the internet. What’s changed is that cyber is now the “tool of first resort” for nation-state adversaries, which means the scale is far grander.

For our part, a major state-sponsored cyberattack nearly 15 years ago set us on a trajectory that has put us on the front foot in dealing with such attacks today.

AI presents another inflection point. And what governments and industry do next will determine if we can tilt the scales in favor of the defenders for the long run.

The Wake-Up Call: Operation Aurora and the Shift to Secure by Design

Operation Aurora took the entire tech sector by surprise. 

Attributed to China, Aurora was a highly sophisticated attack that targeted more than two dozen companies. We went public with what we found; other companies either hadn’t discovered the attacks or hadn’t wanted to disclose them.

And can you blame them? 

It’s uncomfortable work, telling the world that you’ve been breached and that intellectual property has been compromised. But that kind of transparency is key to security, which is why, in the ensuing years, we launched our Threat Analysis Group (TAG) to spot, disclose, and attribute threats.

It’s also what prompted us to abandon our old “perimeter defense” model of “crunchy on the outside, chewy in the middle” (high outer walls but limited interior defenses) in favor of a zero-trust model, in which all users, devices, and applications are continuously checked for security risks while security remains easy and natural for users.

Fast-forward to 2024 and we’re still advocating for technology to be secure by design, which is to say safe before it reaches people, before we start coding, and throughout its lifecycle.

That work to transition organizations to a zero-trust, secure-by-design security model becomes even more urgent as we look ahead to AI.

The Future: AI Will Change the Game

It’s been 15 years since Aurora, and nation-state attacks, once rare, are now more common than ever. At our fireside, Heather put it this way: “People come and go,” but when you’re talking about cyberattack campaigns from nation-states, “there’s a longevity to their mission.”

Threat actors can choose from a range of targets and need to succeed only once, while defenders must protect an increasingly complex surface and need to succeed every time.

The good news is that AI can reverse this paradigm that we call the “Defender’s Dilemma” by allowing defenders to process more data faster than ever and by putting the right information at their fingertips the moment they need it.

“We get trillions of data points a day for cybersecurity detection,” Heather explained. “We don’t have the tools to process all of that data in a meaningful way, and we’re never going to hire or train our way out of this. But that’s where AI comes in.”

Final Thoughts

At the end of our Verify 2024 conversation, Garrett asked me which lessons from cybersecurity governments should keep front of mind as they prepare to regulate and enable AI.

Two came to mind: 

1. Work together to minimize fragmentation. Whether you’re talking about security regulations or AI regulations, when governments work together to align on principles with input from diverse stakeholders, they have a better shot at developing cohesive frameworks that help increase the level of protection for everyone.

2. Optimize for innovation. From the G7’s Hiroshima AI Process to the United Nations AI Advisory Body to the Biden Administration’s Executive Order, governments are demonstrating that they’re serious about frameworks that manage AI risks but also set us up to seize AI opportunities. But we can’t take our foot off the gas: the last thing anyone wants is a future where attackers innovate using AI and defenders don’t.

