AI’s Wetware Problem

By Aaron Brantly, Director - Tech4Humanity Lab

When Mayhem won the DARPA Cyber Grand Challenge in 2016, the project lead from Carnegie Mellon, Dr. David Brumley, writing on his team’s success, quoted DARPA program manager Mike Walker’s assessment that the contest was just the “beginning of a revolution” in software security. What could be better than an AI white-hat hacker finding and fixing vulnerabilities at machine speed? Yet, nearly a decade later, AI has not solved the pervasive challenge of cyber defense. Although there have been substantial advances in intrusion detection and prevention systems, AI-based penetration testing, and the inclusion of all manner of AI in modern cybersecurity, the problems facing actors at all levels only seem to grow. If AI is improving cyber defenses, reducing the number of potential vulnerabilities in software and hardware, and lowering the relative costs of secure development and testing, why is the overall challenge of cybersecurity not abating?

Embedded within the challenge of cybersecurity, there remains a fundamental disconnect between software, hardware, and wetware. AI has substantially improved, and will continue to improve, the relative security of software and hardware, but the largest attack vector, wetware, still accounts for more than 90% of all successful cyber breaches. AI in cyber defense may be increasingly undermined by the role AI plays in cyber offense and criminality. As cyber defenders increasingly use AI to address the core technical challenges in code and hardware, the human attack vector becomes increasingly enticing. Where once Nigerian-prince emails and blanket scams purporting to offer millions were the norm, AI now enables social engineering attacks on the most vulnerable piece of modern networked infrastructure: human users.

The use of AI to craft targeted communications gives social engineering a degree of efficiency that wasn’t previously possible. While whaling (going after CEOs or other high-value targets) and occasional spear-phishing (going after specific low- to mid-value targets) were at times cost-effective, both methods are relatively expensive. By contrast, large-scale, impersonal phishing campaigns were low-effort, low-reward exploits with broad reach and limited effectiveness. LLMs have altered this dynamic. Where generic social engineering efforts were once broad-based, those same efforts, with the assistance of AI and leaked data, increasingly resemble the far more effective and lucrative, but time-consuming, spear-phishing campaigns. Just as the underlying technical systems are becoming more resilient in part due to AI, the control of those systems by their users is becoming increasingly vulnerable to a different type of AI.

The problem does not stop at a user’s keyboard or mouse. As AIs become increasingly “smart,” they will be able to leverage data on individuals to create digital twins. Already, AI is able to spoof human verification tests such as CAPTCHAs. In the near future, behavioral models trained on high-fidelity data will become accurate enough that nearly all human verification tests are potentially vulnerable. The paradox of cybersecurity resides in its socio-technical nature. The bridging of code, hardware, and wetware presents very distinct and divergent cybersecurity challenges. While AI has minimized vulnerabilities and improved the detection of technical exploits in the first two, it has fostered, and will likely continue to foster, more and more vulnerabilities in the latter.

AI will be increasingly used to manipulate and mimic human users to gain illegitimate access to systems. The consumption of user data by LLMs and related models will present growing cybersecurity challenges. At the root of the issue is not whether AIs will exist in the future. They will. Nor is it a question of the utility of AI models to cybersecurity or to science more broadly. AI has repeatedly demonstrated its value and importance in security and science. Rather, the use cases of AI, and the data those use cases are built upon, will be increasingly central to its effect on cybersecurity and on human security more broadly. AI does not need to be dystopian in its effects. It can improve security. Yet when left unregulated and at the whims of the market, with limited controls over what data it is allowed to ingest and process, or when its focus becomes behavioral modification, analysis, or mimicry, it poses an increasingly substantial security challenge.

The techno-libertarian views of the United States are likely to lead to more rather than fewer vulnerabilities in the wetware of systems and networks. The market and, in many cases, science more broadly follow the “move fast and break things” motto coined by Meta founder Mark Zuckerberg. Yet market and scientific endeavors transpire within the sovereign jurisdictions of states. Policymakers within those states have a responsibility to assert themselves and ask whether AI should be allowed to ingest certain types of data or perform certain types of actions. The splitting of the atom provided both enormous energy and the potential for violence and destruction. Following the initial use of atomic weapons during World War II, the global community began the arduous process of building norms, international legal frameworks, and market controls to regulate the building blocks of both atomic energy and atomic weapons. The road has been bumpy and filled with missteps and fear. The AIs being developed today are similarly amazing in their utility and their destructive potential.

Just as humanity didn’t want an atomic age without humans, it doesn’t want an AI age without humans. The power of AI can be harnessed to improve security or to undermine it. How we choose to control the building blocks of AI (its data, its algorithms, its computational power) will determine its potential impact. While it is sexy to think of AIs as black boxes from which solutions to many different problems are derived, the black-box concept is not conducive to human oversight. Rather, whether we seek to improve the security of software, hardware, or wetware, policymakers should push for transparency, oversight, and norms that safeguard the values and attributes critical to human security.