By: Kaysan Frueh
A quick Google search of ‘US Air Force’ instantly pulls up images of fighter pilots and stunning photos of fighter aircraft like Lockheed Martin’s F-22 Raptor streaking across a vast blue sky. The Air Force website greets visitors with the recruiting slogan “Aim High” and a montage of aviation content, including videos and images of proud pilots. When most people think of the Air Force, a fighter pilot is the first image that comes to mind. But as AI algorithms advance and self-piloted planes become plausible, the Air Force’s proud pilot reputation may be shattered, drastically changing the branch and eliminating a coveted career field.
We all associate the US Air Force with courageous fighter pilots, but in reality only four percent of Air Force personnel are pilots. The job is extremely competitive, and flight school accepts and graduates only the very best. Even so, the Air Force’s reputation as the branch dedicated to air power and flying is a significant driver of its recruiting success. If recruits no longer joined with the dream of one day flying in an adrenaline-pumping dogfight, a foundational element of Air Force recruiting, and a common passion, would disappear. Based on scholarly research, we will most likely see AI pilots in our future, and as they replace human pilots, a career field will slowly disappear. Proponents argue that AI pilots are cost-effective and reduce the potential loss of human life.
In August, the Defense Advanced Research Projects Agency, or DARPA, held a three-day competition called the AlphaDogfight Trials to test AI pilots against one another flying simulated F-16 Fighting Falcons in dogfights. The eight teams in the competition had one year to develop their AI programs, and each took a different approach. After three days and many simulations, the winning algorithm was the one developed by Heron Systems. The AI agent used deep reinforcement learning, a type of machine learning in which an algorithm learns the best actions to take toward a goal. It does this by attempting the task in a virtual environment over and over again, gradually building up a kind of understanding.
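Heron Systems has not published its training setup, and its agent used deep neural networks far beyond what fits here. As a rough illustration of the trial-and-error idea described above, the sketch below uses tabular Q-learning, the simplest form of reinforcement learning, on an invented toy “pursuit” task: an agent on a line learns, purely by repetition and reward, to close on a target position. Every name, number, and reward in it is illustrative, not anything from the actual trials.

```python
import random

# Toy stand-in for a dogfight: an agent on a 1-D line must reach a
# target position. States are positions 0..9; actions move left, stay,
# or move right. Reaching the target pays +1; every other step costs
# a small penalty, so shorter paths score higher.
N_STATES, TARGET = 10, 7
ACTIONS = (-1, 0, +1)

def step(state, action):
    """Apply an action, clamp to the line, return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    done = nxt == TARGET
    return nxt, (1.0 if done else -0.01), done

def train(episodes=2000, alpha=0.1, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning: repeat the task many times, nudging each
    state-action value toward reward + discounted best future value."""
    rng = random.Random(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = rng.randrange(N_STATES)
        for _ in range(50):  # cap episode length
            # Epsilon-greedy: mostly exploit the best-known action,
            # sometimes explore a random one.
            a = (rng.randrange(len(ACTIONS)) if rng.random() < epsilon
                 else max(range(len(ACTIONS)), key=lambda i: q[s][i]))
            s2, r, done = step(s, ACTIONS[a])
            target = r if done else r + gamma * max(q[s2])
            q[s][a] += alpha * (target - q[s][a])
            s = s2
            if done:
                break
    return q

q = train()
# The greedy policy the agent "understands" after training: which way
# to move from each position.
policy = [ACTIONS[max(range(len(ACTIONS)), key=lambda i: q[s][i])]
          for s in range(N_STATES)]
```

After training, the learned policy moves toward the target from either side, even though the agent was never told where the target is, only rewarded for reaching it. Deep reinforcement learning replaces the lookup table `q` with a neural network so the same scheme scales to the enormous state space of a real flight simulator.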
Heron Systems’ AI pilot was then put to the test against a human Air Force pilot, call sign ‘Banger’, an F-16 flight instructor with over 2,000 flight hours. The AI pilot beat Banger in every round. The Air Force pilot reflected that, “I may not be comfortable putting my aircraft in a position where I might run into something else – or take that high-aspect gunshot is a better way to say that. The AI would exploit that. It is able to have a very fine precision control, with perfect-state information between the two aircraft. It’s able to make adjustments on a nanosecond level”. An AI pilot doesn’t understand or rationalize the risks of a dogfight the way a human does; instead, through trial and error, it assigns numerical values to the risks of various maneuvers. While this often makes the AI a more skilled pilot and a difficult enemy, the argument remains that its automated decisions would fail in morally or ethically complex scenarios. On tactics alone, AI may outperform pilots, but war is complex and requires moral and ethical decisions that are presently beyond the scope of automated systems.
Interestingly, as the AI-versus-human dogfights progressed, Banger was able to, “significantly shift his tactics and last much longer”. Because the AI ‘thinks’ differently than a human pilot, Banger had to learn how an AI pilot fights and then adjust his tactics to combat this new kind of enemy. This suggests that, with practice, Banger might perform better against the AI algorithm.
AI is not without risks. Possible downsides of a fully AI-controlled fighter jet include cyber-attacks, software bugs, and predictability that an adversary could exploit. For example, if the underlying AI algorithm were attacked, its performance could be altered. Similarly, if the AI system encountered a “black swan,” a strategy it had never seen before, it might not employ optimal tactics and therefore fail to achieve its mission.
AI pilots, like human pilots, are not guaranteed to perform perfectly in every situation; each has strengths and weaknesses. AI pilots have a level of precision and accuracy that a human cannot replicate, and they can take risks and act in seemingly unpredictable ways to defeat a human opponent. Their “OODA loop”, short for “observe, orient, decide, and act”, is faster than a human’s. Human pilots, however, can often adapt to new or dynamic situations more effectively, since they do not rely on established programming and machine-learned experience.
A recent poll found that 71 percent of the public would “probably not” or “definitely not” fly on an aircraft without a pilot on board. And while riding in a commercial plane flown by an AI algorithm and commanding an AI fighter jet are very different things, the Air Force relies on a wingman’s mentality of trust and brotherhood. Could a commanding officer trust an AI pilot? Would an American pilot’s will to fight, fueled by the cause of freedom and the hope of returning home, be stronger than the ‘will to fight’ of an AI? Or would the AI’s lack of emotion allow it to fight better? Questions and concerns like these make the issue a complicated one.
What can be said or done with the AI algorithms and research that we currently have? While some may want to jump straight into a completely AI-piloted Air Force fleet or leave AI to science fiction novels, a new DARPA-funded effort called Air Combat Evolution (ACE) seeks a healthy middle ground for applying AI technology. ACE reports that currently “30 percent of the flying” is done by humans, “with automation taking over the rest”. The ACE mission is to automate more pilot tasks and increase “human-machine symbiosis”. Timothy Grayson, director of the Strategic Technology Office at DARPA, expands on this idea, stating, “Let’s think about the human sitting in the cockpit, being flown by one of these AI algorithms as truly being one weapon system, where the human is focusing on what the human does best [like higher order strategic thinking] and the AI is doing what the AI does best”. The goal is for a human pilot to become more lethal by “effectively orchestrating multiple autonomous unmanned platforms from within a manned aircraft”. Furthermore, ACE proposes a hierarchy of AI and human piloting in which humans perform the “higher-level cognitive functions” while the AI performs the “lower-level functions” to create this human-machine symbiosis.

So instead of asking when AI algorithms will be good enough to take over, a more effective question would be: how can we exploit the positive attributes of AI and combine them with human pilots to augment overall performance? More specifically, when will we see the Luke Skywalker and R2-D2 equivalent of effective machine and human teamwork in the US Air Force? Combining AI and humans effectively could lead to more successful missions, fewer miscalculations and accidents, and a stronger Air Force, ultimately saving American lives and keeping the United States more secure.
A human-AI integrated team may be the logical next step for the Air Force. Whether or not this leads the branch down a path toward a fully AI-piloted fleet, with the loss of the pilot career field and its recruiting foundation, the bottom line is that the Air Force will employ whatever mixture of human and AI piloting proves most effective and lethal, in order to best “fly, fight, and win in air, space, and cyberspace”.
Kaysan Frueh is an undergraduate student at Virginia Tech. She is pursuing a double major in National Security and Foreign Affairs and Spanish and is a scholar with the Hume Center for National Security and Technology. She is a member of the Corps of Cadets and is pursuing a commission through Air Force ROTC. She is a triplet and a native of Washington State.