Doctor Lewis Zimmerman looks real: he treats patients, moves through physical space, answers questions, and would arguably pass both the Turing and Lovelace tests. Yet he is an artificial intelligence, a holographic doctor on the science fiction show Star Trek: Deep Space Nine. As the show opens, Benjamin Sisko, captain of the space station, expresses doubts about having an artificial intelligence for a doctor. Doctor Zimmerman, in the spirit of the show, jokes with the reluctant Captain Sisko; eventually they work out their differences, and the show moves on, introducing occasional plot twists that place the fate of the station's living occupants in the hands of an often emotional and moody AI.
Ethics in Artificial Intelligence
Who decides what is right and wrong? As humans, we are taught by our parents or guardians the difference between right and wrong, along with the morals and values that not only guide our lives but also make us functioning members of society. That said, morals and values vary slightly from person to person and can differ greatly across cultures. For example, in some countries it is considered rude to leave a tip for a meal, because excellent service is naturally expected and employees are paid a living wage, whereas in other countries tipping is a societal norm and it is considered rude not to leave one. As Artificial Intelligence advances, a question arises: what are the repercussions of the decisions made by Artificial Intelligence? Or, to rephrase, who will face the consequences of the decisions Artificial Intelligence makes?
Space Technology and the Human Condition
Often when people think of space exploration, the first thing that comes to mind is Neil Armstrong setting foot on the Moon. Although this was a remarkable achievement, space exploration also yields scientific advances that regularly make a positive impact right here on Earth. Many of the discoveries we learned about in grade school, the products we could not live without, and the new careers we hope to work in all have their roots in space exploration. Whether people realize it or not, space exploration is an integral part of bettering human lives.
A Diseased Cyberspace and How to Treat It
Grateful Dead lyricist John Perry Barlow once wrote, in "A Declaration of the Independence of Cyberspace," of a potential place free from state intervention where a new social contract might arise in the absence of privilege and prejudice, economic power, military force, and station of birth. Regrettably, Barlow's vision has not come to pass. Instead, cyberspace has been invaded by all the forces he sought independence from, and more. Writing after a contentious election in the United States (US) and amidst a pandemic that has forced billions to work and socially interact through mediating technologies, it is difficult not to ask whether cyberspace has itself become, in so many ways, a diseased space where the worst qualities of humanity are cultivated.
Patient Centric Cybersecurity - Excerpt
Over the last several decades there has been a shift in standard models of healthcare, both in the United States and globally. Patient-centric approaches in health care reorient the power relationships of physicians and patients. This shift elevates the needs and challenges of patients and builds a more robust and communicative relationship to foster improved health outcomes. Recently, Nataliya Brantly, VT STS PhD student, and Dr. Aaron Brantly, Department of Political Science and Tech4Humanity Lab Director, took up the issue of patient-centric care and focused on expanding it to encompass cybersecurity concerns.
Which Countries Visit the Most Hidden Services Sites?
This week, my new article with Eric Jardine and Gareth Owenson, "The potential harms of the Tor anonymity network cluster disproportionately in free countries," came out in PNAS. Using a global sample of Tor users, we show that a higher percentage of Tor clients in politically free countries go to hidden services sites than in less free countries. Past studies have shown that much of the traffic to hidden services sites goes to cryptomarkets and child abuse imagery sites. The corollary is that a higher percentage of clients in repressive regimes use Tor to access the surface web.
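As a rough illustration of the comparison at the heart of the paper, consider the following sketch. The country labels, session counts, and freedom categories below are invented for illustration only; the actual study draws on a global sample of real Tor client traffic and standard political-freedom ratings, not on this toy calculation.

```python
# A minimal, hypothetical sketch of the comparison described above.
# All numbers are invented; the logic is simply: group client sessions
# by a country's freedom category and compute the share of sessions
# going to hidden (onion) services versus the surface web.

from collections import defaultdict

# (country, freedom_category, total_client_sessions, hidden_service_sessions)
observations = [
    ("Country A", "free",     10_000, 950),
    ("Country B", "free",      8_000, 640),
    ("Country C", "partly",    6_000, 300),
    ("Country D", "not free",  9_000, 270),
]

totals = defaultdict(lambda: [0, 0])
for _, category, total, hidden in observations:
    totals[category][0] += total
    totals[category][1] += hidden

for category, (total, hidden) in totals.items():
    share = hidden / total
    print(f"{category:>8}: {share:.1%} of sessions go to hidden services")
```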
Current and Future Applications of Geospatial Artificial Intelligence
By Peter Muskett
A crucial component of analyses within the field of big data analytics is the inherent connection data has with geography and location. A commonly cited adage proposes "that at least 80% of all data are geographic in nature," either through the provision of imagery or the ability for data to be quickly georeferenced. Aggregating and interpreting this geospatial data as a means to "empower understanding, insight, decision-making, and prediction" is a core feature of GIS and has for years served as a driving force behind both governmental and corporate operations. Utilizing GIS to build and implement analyses of Location Intelligence is not a new concept; however, the manner and rate at which these existing practices have evolved and paired with new systems of AI and machine learning to build Geospatial Artificial Intelligence (geoAI) have been particularly notable. The deployment of these technologies has the potential to profoundly impact the human condition; there is therefore growing concern as to how geospatial data can be responsibly collected while still protecting the privacy and overall well-being of the populations being analyzed.
To best address this concern, it is first necessary to examine the benefits of expanding geoAI capabilities and to understand why the field has seen increased investment in recent years. Due to its interdisciplinarity, the field of geoAI has attracted a flexible and diverse customer base that uses its capabilities for a variety of purposes. In business, many companies view investment in geoAI as a matter of survival in a constantly evolving marketplace. Bruce Wong, Manager of General Motors Advanced Data Analytics, for example, states that a pressing concern in the market is that a considerable portion of consumers today find it both financially and psychologically easier to search for and purchase goods through the internet or local online marketplaces than through more traditional means such as visiting a physical dealership. As a result, the explosion of internet commerce has played a substantial role in pushing companies such as GM to bolster their geospatial artificial intelligence capabilities as a means to strategically place dealerships in areas likely to see substantial payoff.
From the standpoint of public safety, law enforcement departments have begun heavily integrating geoAI to improve policing and respond more effectively to crime. Methods for accomplishing this task range from the creation of comprehensive crime maps to the real-time analysis of video camera feeds that can recognize "aggressive behavior" and "biometric features." As police departments in the United States face increasing societal pressure to alter their operations to better serve their communities, it will be worth tracking whether this substantial investment in geoAI capabilities will not only decrease crime rates but also improve cultural perception and legitimacy through better policing. Ultimately, this illustrates the rising prevalence of geoAI as a crucial measure both for private industry seeking to remain competitive and relevant to consumers who want the most convenient means to purchase, and for governing bodies using new technologies to ensure order and safety.
What makes this pairing of remote imaging, georeferencing, and artificial intelligence as a driving force behind profit and public safety particularly concerning, however, is its potential for deepening surveillance practices and the risk of data breaches. In the case of remote viewing through drones or satellites, there is almost no possibility for individuals to provide "informed consent" for the monitoring of their private property, if they are even aware of the surveillance in the first place. For this reason, many question the ethics of analyses derived from information taken without a citizen's explicit awareness. This does not begin to take into consideration the detrimental psychological effects and negative public perception that the use of UAVs can have on a population. Studies have demonstrated that people, especially those who have experienced armed conflict or life under an authoritarian regime, closely link unmanned aircraft with "evidence of spying" and "collaboration" with oppressive governing bodies. The presence of such technology, even when used for simple purposes such as detecting land change, can therefore cause distress and mistrust within a population. Additionally, a growing worry regarding the collection of data of this nature is the increasing ease of de-anonymization of publicly available records by linking "data back to individuals using their geographical location." This concern is compounded by the fact that "comprehensive privacy, data protection and storage standards may be largely non-existent" in states across the globe, highlighting how in certain regions the technology has far outpaced political efforts to responsibly collect, de-identify, and share citizens' personal data.
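To make the de-anonymization concern concrete, consider a deliberately simplified sketch. The coordinates and names below are invented; mobility research has repeatedly shown that even a handful of coarse location points, such as a home/work pair, can uniquely identify an individual in an ostensibly anonymized dataset.

```python
# A simplified, hypothetical illustration of location-based re-identification.
# All coordinates and names are invented for this example.

# "Anonymized" trace: rounded (lat, lon) points from a pseudonymous device
anonymous_trace = {(38.89, -77.03), (38.92, -77.07)}

# Public or leaked records linking people to known locations (home, work)
known_locations = {
    "Alice": {(38.89, -77.03), (38.92, -77.07)},
    "Bob":   {(40.71, -74.00), (40.75, -73.99)},
}

# Re-identify by matching the anonymous trace against known location sets
for person, places in known_locations.items():
    if anonymous_trace <= places:  # every point in the trace matches
        print(f"Trace re-identified as {person}")
```

The point of the sketch is that no name ever appears in the "anonymized" data; the geography alone does the identifying, which is precisely the linkage risk described above.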
What has become increasingly clear is that the technologies utilized in geoAI are evolving rapidly, and the concerns regarding privacy and general well-being will only worsen if proper action is not taken in the present. Developing technologies such as hyperspectral imaging, which captures hundreds to thousands of wavelengths per pixel compared with the three to ten bands employed by traditional multispectral imaging, point to a future characterized by an enhanced ability to quickly and remotely tell a far deeper story about the landscape, or the lifeforms, captured than ever before. Already, studies of hyperspectral imaging in a human context have demonstrated the ability to remotely detect and classify human emotion, such as happiness, based on facial expression and tissue oxygen saturation, and to diagnose disease by remotely differentiating between healthy and malignant tissue in a patient. While developing geoAI techniques such as these promise a new and exciting future that could greatly improve the lives of many, others maintain a justifiable fear of the possibilities should this data be deployed by actors with malicious intent. Even when used by actors of public trust, there remains the question of whether the data is being used responsibly and properly, so that its uses are not inaccurate or misleading.
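The difference in spectral depth can be pictured as the shape of the data itself. The following toy sketch, with dimensions that are illustrative rather than drawn from any particular sensor, contrasts a multispectral image cube with a hyperspectral one:

```python
# A toy sketch of the spectral-depth difference described above.
# Dimensions are invented: multispectral sensors capture a handful of
# broad bands per pixel; hyperspectral sensors capture hundreds of
# narrow, contiguous bands per pixel.

import numpy as np

height, width = 512, 512

multispectral = np.zeros((height, width, 8))     # e.g., 8 broad bands
hyperspectral = np.zeros((height, width, 400))   # e.g., 400 narrow bands

# Each pixel in the hyperspectral cube is a near-continuous spectrum,
# which is what enables per-pixel classification of materials or tissue.
pixel_spectrum = hyperspectral[0, 0, :]
print(multispectral.shape, hyperspectral.shape, pixel_spectrum.shape)
```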
At present, the massive and instantaneous collection and analysis of human sentiment and physical health through remote sensing and AI remains a distant prospect. There are a multitude of barriers, ranging from the real-time analysis of big data to privacy concerns over widespread data collection, that might deter implementation. At the very least, however, it serves as a relevant example of how geoAI might contribute both to understanding the human condition and to uncovering previously hidden patterns, while also raising legitimate concerns of scope and availability. Of particular concern is the use of geoAI by governments and corporations absent sufficient legal or regulatory oversight.
Virginia Students Are Game Changineers
Computational Thinking (CT) forms the backbone of cybersecurity, autonomous systems, data science, and many tech jobs. Understanding CT instills in learners the way computer scientists and cyber professionals think about the problems at hand. Needless to say, the construction of modern-day digital artifacts is highly complex. The skill set needed to create interesting programs (such as Pac-Man and other apps) using today's programming languages often requires multiple years of computer science training. As we enter the world of smart homes and smart cities, the ability to think about, reason about, interact with, and program these digital devices is of paramount importance.
AI in Agriculture: Symptom or Remedy?
Every day new technologies, in particular Artificial Intelligence (AI) and Machine Learning (ML), are being developed and implemented on farms in the United States and around the world. Yet, rarely is the changing nature of technology on farms considered from a human perspective. Specifically, is AI in agriculture, in fact, benefiting or harming humanity?
How could AI Pilots affect the Air Force?
A quick Google search of 'US Air Force' instantly pulls up images of fighter pilots and stunning photos of fighter aircraft like Lockheed Martin's F-22 Raptor flying menacingly across a vast blue sky. After loading the Air Force website, the user is greeted by the recruiting slogan "Aim High" and a montage of aviation-related content, including videos and images of proud pilots. When one thinks of the Air Force, a fighter pilot is the most common first thought. But with advances in AI algorithms and the possibility of self-piloted planes, the Air Force's proud pilot reputation may be shattered, drastically changing the branch and eliminating this coveted career field.
To Prevent Pandemics, Automate Meat Production by Divorcing it from Animals
If ever there were a need for a safer, more automated food production process, it’s in our meat industry. Right now, we breed into existence billions upon billions of farm animals each year, living creatures who require extensive resources (food, water, land, etc.) for months or years before we slaughter them. The inefficiency of this system is well-publicized, as is the fact that it’s a leading driver of deforestation, antibiotic resistance, climate change, biodiversity loss, animal welfare concerns, and more.
Misperceptions about Misinformation and Disinformation
As the election cycle in the U.S. enters its final stretch, many personae non gratae have emerged, aiming to influence the election at the last minute by spreading political misinformation and disinformation. They include the Russian government and the individuals and groups it sponsors, such as the Internet Research Agency (IRA). These actors are allegedly waging political disinformation campaigns targeting the upcoming U.S. election, intended to spread disinformation, sow divisions among the U.S. electorate, affect the electoral outcome, and ultimately degrade the democratic system.
Electronic Voting and Election Security
With the onset of the pandemic, several countries and local governments, including state governments in the U.S., are considering or have adopted online voting, also known as remote voting or Internet voting (I-voting). Some countries, such as Estonia, have fully embraced I-voting as a regular mode of voting. Moreover, many governments have already adopted tools of electronic voting, such as electronic (usually paperless) voting machines, in their election administration. Experts estimate that up to 12 percent of voters will vote on paperless equipment in the 2020 U.S. elections.
Innovation in Election Administration and Voter Turnout
Silencing Freedom: Belarus’ Internet Shutdown
When authoritarian regimes are faced with increasing protests and unrest among their domestic populations, the blocking and throttling of a nation's Internet is an all too common occurrence. Recent days have seen a bevy of reports that the Belarusian government of Alexander Lukashenko, having engaged in suspected election manipulation on a massive scale, is now using its power over local Internet Service Providers and domestic networked infrastructure to shut down and slow the means by which citizens mobilize en masse against their government.
The TikTok Ban: A Closer Look
TikTok, a Chinese video-sharing social networking service, has risen to global prominence in recent years, with a user base of more than 800 million, including more than 40 million users in the U.S. The Trump administration has declared TikTok a national security threat. Most security analysts have found this not to be the case.
Book Review: The Decision to Attack: Military and Intelligence Cyber Decision-Making
Aaron Franklin Brantly
University of Georgia Press, 2016, 226 pp.
Reviewed by Dr. John G. Breen, Distinguished Chair for National Intelligence Studies, U.S. Army Command and General Staff College, and CIA Representative to the U.S. Army Combined Arms Center
“The Russian government hacked into the e-mail accounts of top Democratic Party officials in order to influence the outcome of the 2016 U.S. Presidential election.” This is a clear statement of guilt, definitive and direct, with little room for doubt. An attack like this demands a response. Doesn’t the manipulation of an American election warrant some sort of retaliation? Could this be an act of war? So why isn’t the U.S. (at least overtly) doing more in response?
Well, read closely the official statement about the Russian hacking from the Department of Homeland Security and the Director of National Intelligence.1 In colloquial intelligence-speak, it doesn't really say the Russian government is definitively responsible for the compromise. The statement notes merely "confidence" that the Russian government "directed" the compromise and offers as evidence only that these attacks were "consistent with the methods and motivations of Russian-directed efforts." The careful use of indefinite phrases such as "consistent with," "we believe," or "we judge" leaves inconvenient room for reasonable doubt and plausible deniability about who actually conducted the attacks and who is ultimately accountable.
These types of assessments, as dissembling assurances go, sound eerily familiar, à la the 2002 Iraq WMD National Intelligence Estimate. Was there WMD in Iraq or not? Before the invasion, the community certainly said "we judge" that there was. Think of it this way: no mafia don could be convicted in a court of law by a prosecutor asserting only that the state was "confident" the individual was guilty. Offering as proof that the murder was "consistent with the methods and motivations of Mafia-directed efforts" is not sufficient. Did the don order the hit, conduct the act himself, or is he being blamed as a convenient scapegoat? These intelligence assessments simply do not provide the unambiguous attribution necessary to reasonably contemplate retaliation.
This lingering ambiguity is a key issue addressed in Aaron Brantly's 2016 book, The Decision to Attack: Military and Intelligence Cyber Decision-Making. An Assistant Professor at the U.S. Military Academy, Brantly provides a detailed academic exploration of cyber warfare, seeking to better understand how states interact within cyberspace. He posits that states should generally be considered rational actors and will therefore rank-order their likely actions in cyberspace based on positive expected utility, i.e., how successful these actions are likely to be compared with the risks they engender (expected utility theory). Decision to Attack is an excellent treatment of this crucial domain, densely packed with insight into a deceptively pithy 167 pages.
The research encapsulated in Decision to Attack suggests the key determinant in a state's choosing to undertake an offensive cyber-attack is anonymity: a state's ability to keep its attack secret as it is being undertaken, as well as its capacity to hide or at least obscure the origin of the attack afterward. There is no barrier to action if there is no risk of retribution. As Brantly notes, the hurdle for choosing offensive cyber-attacks is extremely low when a state can assume some level of anonymity:
“Anonymity opens Pandora’s box for cyber conflict on the state level. Without constraints imposed by potential losses, anonymity makes rational actors of almost all states in cyberspace. Anonymity makes it possible for states at a traditional power disadvantage to engage in hostile acts against more powerful states.... Because anonymity reduces the ability to deter by means of power or skill in most instances, the proverbial dogs of war are unleashed. If the only constraints on offensive actions are moral and ethical, why not engage in bad behavior? Bad behavior in cyberspace is rational because there are few consequences for actions conducted in the domain.”2
Brantly does offer some hope that states will not rationally engage in "massively damaging" cyber-attacks, given that with greater complexity and scale such attacks become harder to keep truly unattributable. His assertion seems to be that states, particularly smaller states with less traditional or conventional power (military and otherwise), will focus on small- to mid-range attacks. That said, even seemingly minor attacks can lead to unintended significant impacts, certainly if they pile up over time -- a cyber domino effect. For example, a relatively small-scale compromise of an individual's email account, followed by propagation of the resulting inflammatory "revelations" seeded into the press and online social media, might lead to the upending of an otherwise democratic election.
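Brantly's expected-utility framing can be illustrated with a deliberately simple sketch. The action names, probabilities, and payoffs below are invented for demonstration and are not drawn from the book; the point is only the logic, in which the expected cost of an action shrinks as the probability of attribution falls, so small, deniable attacks rank highly while massive, more attributable ones do not:

```python
# A minimal sketch of expected-utility ranking for cyber actions.
# All numbers are hypothetical; the structure follows the general idea
# that EU = probability-weighted benefit minus probability-weighted cost,
# where the cost term depends on the chance of attribution.

actions = {
    # name: (p_success, benefit, p_attribution, cost_if_attributed)
    "espionage":        (0.8, 50, 0.1, 30),
    "small disruption": (0.6, 30, 0.2, 40),
    "massive attack":   (0.5, 90, 0.7, 200),
}

def expected_utility(p_success, benefit, p_attribution, cost):
    return p_success * benefit - p_attribution * cost

ranked = sorted(actions.items(),
                key=lambda kv: expected_utility(*kv[1]),
                reverse=True)
for name, params in ranked:
    print(f"{name:>16}: EU = {expected_utility(*params):+.1f}")
```

Run with these invented numbers, espionage and small disruptions yield positive expected utility while the massive attack is strongly negative, mirroring the argument that anonymity, not capability, gates offensive action.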
Given the demonstrated importance of secrecy and obfuscation in the cyber domain, Brantly appears to argue in Decision to Attack that cyber-attacks should be considered a type of covert action. He points out that the U.S. government's approach to cyberspace has so far relied on the military, with Admiral Mike Rogers currently commanding both the National Security Agency and Cyber Command (CYBERCOM). To Brantly, this indicates the president has given the military, and not the Central Intelligence Agency (CIA), the lead as the main covert operator in the cyber domain. He offers the criticism that this arrangement may run counter to Executive Order 12333, which provides lanes in the ethical/moral superhighway for the intelligence community. Brantly indicates, though, that the Department of Defense's capacity to address the scale of the problems identified in cyber makes this designation "appropriate."3
While perhaps not the major focus of Brantly's research, the implications of relying on the military to conduct these types of offensive operations are worth further exploration. There are reasons the CIA was designated and utilized during the Cold War as the primary organization responsible for covert action. In sum, it seems to have everything to do with plausible deniability. If you are caught by an opposing state in the conduct of covert action while in uniform, this might be considered an act of war. Is it any less so in cyber? Perhaps.
In March 2015 the CIA embarked on an unprecedented "modernization" effort designed to "ensure that CIA is fully optimized to meet current and future challenges," largely by pooling analytical, operational, and technical expertise into ten new Mission Centers.4 A new operational component -- the Directorate of Digital Innovation (DDI) -- was also added to the existing four Directorates: Operations, Analysis, Science and Technology, and Support. The DDI is said to be focused on "cutting-edge digital and cyber tradecraft and IT infrastructure."5 Public statements from the Agency have highlighted the importance of culture, tradecraft, and knowledge management in this new Directorate, stressing the DDI's role in support of the CIA's clandestine and open source intelligence collection missions.6,7
In a July 2016 speech to the Brookings Institution, CIA Director John Brennan discussed the mission of the newly created DDI and the risks posed by cyber exploitation. For example, Brennan suggested that the Arab Spring revolts were influenced by online social media's ability to swiftly facilitate social interaction and cause destabilization, that cyber could be used to sabotage vital infrastructure, or that it might be used by terrorist organizations to indoctrinate potential lone wolf actors.8 The CIA of course looks to exploit this same cyber domain to its own ends. In CIA's vernacular, destabilization then is "covert cyber-based political influence"; sabotage is a "cyber-facilitated counter proliferation covert action"; and indoctrination becomes an "on-line virtual recruitment." What distinguishes between these actions -- anarchic destabilization versus covert cyber-based political influence -- is the intent, noble or ignoble, of the perpetrator.
Cyber espionage, then, might at least be thought to follow many of the same tradecraft norms and to be constrained by many of the same rules and self-imposed restrictions as real-world, "Great Game" espionage, and especially some types of non-lethal covert action. For example, if caught in the midst of a recruitment operation against a foreign diplomat in some capital city, that country's government typically would simply kick the offending CIA officer out of the country, declaring the individual persona non grata. One could say there are systems in place, most informal, that allow for a bit of espionage to be conducted without causing conflagration. This is especially true when dealing with near-peer competitors, as evidenced by decades of Cold War intrigue. The CIA was chosen, for example, to conduct covert action in Afghanistan during the Cold War in order to avoid an act-of-war incident. The CIA's actions against the Soviet occupation were no less deadly, but the use of foreign cutouts and misattributable materiel, i.e., tradecraft, allowed for plausible deniability and lack of attribution. The Soviets could "judge" all they wanted that the U.S. was behind their mounting losses, but without proof, this was meaningless.
CYBERCOM, the closest military counterpart to the DDI, was created as a joint headquarters in 2009. Unlike the CIA’s DDI clandestine collection posture, CYBERCOM’s stated mission appears much more broadly focused on traditional (though still cyber) offensive operations and network defense, i.e. “ensure U.S./Allied freedom of action in cyberspace and deny the same to our adversaries.”9 U.S. military joint doctrine on cyberspace operations is filled with otherwise conventional military terms such as “fires” and “battle damage.” A cyber weapon deployed against an adversary on the cyber battlefield can be called a “payload.”
The military’s cyber effort also appears to be somewhat encumbered by familiar bureaucratic challenges, not the least of which involves nominal joint efforts to operate in a domain not easily divvied up amongst services used to “owning” a particular geographic space, i.e. ocean, air, land. As noted by an Army strategist working on the Joint Staff Directorate for Joint Force Development: “The opportunity for one service to infringe on, or inadvertently sabotage, another’s cyberspace operation is much greater than in the separate physical domains. The command-and-control burden and the risk of cyberspace fratricide increase with the number of cyberwarriors from four different services operating independently in the domain.”10
How much greater then is the challenge in deconflicting operations between these disparate DoD cyber operators and those in the intelligence community and CIA’s DDI, all engaged on the same cyber field of play? If CIA has worked for years to gain cyber access to a particular source of protected information, and another actor wants to “deliver a payload” against that target, who decides which mission is most important? Do we choose the intelligence collection activity we need to better understand the enemy or is it the cyber-attack that cripples an adversary’s critical capability? And is there an advantage in having the military or a civilian organization conduct either of these covert action operations? These and many other important questions that spin off from reading Decision to Attack await further exploration.
Ultimately, this decision-making should perhaps extend beyond the inside view taken by the operators from CYBERCOM or the CIA's DDI. The process will hopefully include policy-makers struggling with how and why to use cyber as either an offensive tool or a tool of espionage. Brantly provides the reader with these delineations, offering definitions of, for example, cyberattack versus cyber exploitation. He also provides a solid starting point for a discussion about which of these approaches is most appropriate and a framework in which to understand our own and our adversary's potential decision-making processes. There's more to be said, and in this evolving domain there is much more to understand, but Decision to Attack should be in the library of those hoping to make the right call when it comes time to act. IAJ
Notes
1 https://www.dni.gov/index.php/newsroom/press-releases/215-press-releases-2016/1423-joint-dhs-odni-election-security-statement
2 Brantly, Aaron Franklin, The Decision to Attack: Military and Intelligence Cyber Decision-Making, University of Georgia Press, 2016, pp. 158-159.
3 Ibid., pp. 123-124.
4 https://www.cia.gov/news-information/press-releases-statements/2015-press-releases-statements/cia-achieves-key-milestone-in-agency-wide-modernization-initiative.html
5 https://www.cia.gov/news-information/speeches-testimony/2016-speeches-testimony/director-brennan-speaks-at-the-brookings-institution.html
6 Ibid.; https://www.cia.gov/news-information/press-releases-statements/2015-press-releases-statements/cia-achieves-key-milestone-in-agency-wide-modernization-initiative.html
7 https://www.cia.gov/offices-of-cia/digital-innovation
8 Ibid.; https://www.cia.gov/news-information/speeches-testimony/2016-speeches-testimony/director-brennan-speaks-at-the-brookings-institution.html
9 https://www.stratcom.mil/factsheets/2/Cyber_Command/
10 Graham, Matt, "U.S. Cyber Force: One War Away," Military Review, May-June 2016, Vol. 96, No. 3, p. 114.
Sticks and Stones – Training for Tomorrow’s War Today
Written with Col. Thomas Cook:
‘I know not with what weapons World War III will be fought, but World War IV will be fought with sticks and stones.’ – Albert Einstein
Technology is great, when it works the way we want it to. Over the last couple of years, the ever-mounting stream of hacks could leave even the most stoic of technologists cringing. As researchers at the Army Cyber Institute at West Point, our task is to be forward-thinking and anticipate the hill after next. We are one part of the Army's robust effort to address the cyberspace issues of today and tomorrow. Along with our cross-service and cross-agency partners we are making progress: we are working our way through a highly disruptive era in technology and politics to find solutions ensuring the security of the United States. At the same time, as we step forward into the complexity of a fully integrated future, we as a military must not lose sight of the fundamentals of fighting and defending the security and interests of the nation. The more the tools and gadgets of modern warfare are challenged by state and non-state actors, the more critical it becomes that our men and women in uniform maintain the fundamental skills of warriors from previous generations.
Networked warfare and cyber warfare are but two of the many catchphrases that have risen to prominence over the last couple of decades. These are concepts that we must continue to build on to improve our precision, coordination, and efficiency as defenders of the nation's security and interests. Yet despite these advances, the US military must also be prepared to operate in a world where the lights do not turn on, engines do not start, and all our efficiencies leave us with only the rifle in our hands. Our ships, armor, aircraft, satellites, and almost all other military systems are highly dependent on digital systems vulnerable to attack.
As a nation, the US must expect the unexpected by training our military to perform in the absence of technologies they have never lived without. Our incoming officer and enlisted corps are digital natives: they leverage GPS, laser-guided munitions, and other modern tools expertly. But as recent hacking incidents on cars, ships, supply systems, GPS, and even aircraft indicate, the diversity of threats posed to our systems is immense. Calls to fix the code and secure the systems are being heard loud and clear, and the Army and other organizations like ours are working day and night to solve a persistent stream of cyber challenges. It is important to remember, though, that we are solving problems as our cyber surroundings change under our feet.
As we write better code, build more robust hardware, and develop better cyber warriors for both offensive and defensive operations, our ability to observe, orient, decide, and act across the services and within them will become more robust. At the same time, we must recognize that we are creating puzzles that others will try to solve and that eventually, given enough time, energy, and luck, most puzzles are solvable. Technology has enhanced the capabilities of the Army and her sister services. Under the continued direction of President Obama and Secretary Carter, and the foresight of Admiral Rogers and Generals Alexander, Cardon, Hernandez, and many others, we as a nation have established the foundations of a robust national approach to cybersecurity. This is an evolving process, and the Department of Defense (DoD), Federal, state, local, and private entities will necessarily continue to build capabilities improving our aggregate resilience. The problem of cybersecurity is not isolated to the DoD alone, and as a nation we must work together to strengthen our mutual security and resilience.
Across the armed services there is yet another need, specific to our profession. Just as medieval castles layered their fortifications, so too must we train and develop redundancies in our men and women and the systems they use. These redundancies should be well adapted to a world in which the technology we have grown so dependent on fails us. The services must recognize that our need to train in, and for, cyberspace-related conflict does not obviate the necessary skills found in the historical foundations of the military arts. Skills such as celestial navigation, non-computer-aided mathematics, and many more are critical to maintaining operational effectiveness in the absence of the tools upon which we now so often depend. Robots, drones, and all the science fiction that has become science fact are nothing compared to the determined will of a well-trained, well-educated, highly motivated, and creative Soldier.
Enter the Policy and Legal Void
Soldiers are down range with suites of tools available to them that they cannot use to their full capability. They are not technically limited, but rather constrained by authorities and prerequisite policies established in a pre-digital age. We tell them to go and defeat ISIS, al-Qaeda, or pick another future adversary, but they must do so with their hands tied behind their backs. Make no mistake: as a nation we are currently involved in a global conflict. The conflict is not defined by traditional weapons, but by bits and bytes traversing fiber lines and airwaves. This global information war collides with many of the values of Western democracies and with the societal constraints of authoritarian regimes. The robust constraints on governmental instruments serve a valuable purpose, yet at the same time our Soldiers in the field are struggling to navigate complex legal and policy waters while corporations are drowning in data that might inform or provide context for a variety of mission sets. The volume and velocity of this data is only set to grow as the number of Internet-enabled devices globally increases from approximately 17 billion to 50 billion and beyond. At the beginning of the digital age, it is imperative that we, as a society, begin discussing the future we are rapidly entering.
Constraints are pivotal for maintaining the fundamental civil rights Americans cherish. Civil rights, including liberties such as privacy, free speech, and freedom of religion, are challenged by data repositories that eliminate anonymity and the ability to be forgotten and to forget. Yet we as a society are fooling ourselves if we believe that when we order Google to delete us from search results, or Facebook to remove our profile, that data ever really disappears. The vast majority of US Internet users are simultaneously consumers and products in a complex digital ecosystem that will only become more complicated with the expansion of the Internet of Things into our homes, offices, and even our bodies. We and our political elite can pretend to be neo-Luddites, but we are not. We are voracious consumers of innovation. We innovate without significant thought to consequence, and in so doing often fail to assess the risks of the world we are designing.
As we demand and consume innovation, we ignore the fact that we are retaining the policies and laws of yesterday, and in the process shackling those in our society to whom we have assigned the responsibility for protecting us. As we innovate and adapt, so too do our enemies, with terrorists, state adversaries, and criminal networks preying upon our innovation and learning to innovate and adapt as we do. All the while, we tell ourselves that if we provide the military and law enforcement with the policies and legal structure to defend us, we will be entering into some Orwellian nightmare. Yet in many respects the nightmare is of our own making. We bleed trillions of dollars a year to cyber criminals and state espionage campaigns, and willingly allow those who engage in political violence, child pornography, and other nefarious behaviors to run rampant through the systems that we once thought would usher in a bright new era for humanity.
General Michael Hayden asserted during a talk after his time at the NSA that he would go right up to the line in using every legal authority granted him and the agencies under his control, but that he would go no further. He said the agencies of the federal government were designed to operate within a rule of law system beholden to the will of the people. Edward Snowden, the EFF, the ACLU, and others have challenged the extent to which federal authorities extend control over systems used by the US and allies. They have challenged the concept of secret courts and classified policy directives. Some have even indicated that individuals from the intelligence community (IC) engaged in illegal activities beyond the scope of even secret courts and classified policies. Around the margins there will always be those who violate the intent of law and policy. However, the vast majority of members of the IC are well intentioned individuals who seek to protect their fellow citizens.
The relevant national security and law enforcement authorities within the United States Code are divided among Title 10 (Military), Title 18 (Law Enforcement), and Title 50 (Intelligence). The U.S. Code has been evolving in various forms since World War II and was designed primarily in a pre-digital era in which it was logical to draw clear lines of demarcation between domestic and foreign, law enforcement, military, and intelligence. These lines are blurred in a world in which terrorists recruit from abroad and plan operations against the Homeland from both conflict and non-conflict zones. These lines are strained by states engaging in cyberattacks against critical infrastructure and in espionage that spans the military, civilian, and intelligence spheres.
I have met with police agencies asking for intelligence capabilities, and with military organizations requesting the ability to view online media accounts with known terrorist connections. In the present environment, the tools available to track and engage terrorists are robust, but authorities require the military, IC, and law enforcement to engage in a dance along a legal and policy tightrope that slows the process down and increases risk. Moreover, because each entity is so ingrained within its authorized framework, each is limited in its ability to think effectively across the lines and anticipate what other agencies and entities need. Often they are further constrained by not knowing what they are truly allowed to share, when they are allowed to share it, and under what conditions. To some extent fusion centers provide valuable bridges between stovepiped institutions, and entities often embed personnel within one another's structures, but even these attempts to provide avenues for communication fail to fully mitigate the problems faced.
The constraints imposed by the various titles within the cyber environment are particularly frustrating when one realizes that the tools available to the corporate sector for marketing and sales often exceed the capabilities of both intelligence and law enforcement. Critics are correct in challenging the government's assertion that these tools can prevent all attacks. But as the volume of data increases, and as the skill and efficiency of the community grow in tandem with advances in technology and in the volume and types of data, it is likely that these challenges will be met head on and solutions found.
We can and must educate the citizenry about the world we are rapidly entering: a world in which we carry mobile supercomputers that far exceed the capabilities of the devices used to land astronauts on the moon. We excrete data from our phones, our watches, our credit card transactions, our communications, our homes, and soon our cars. We produce zettabytes of data, and we are only at the beginning of the digital age. We can fool ourselves into saying we can remain private, anonymous, hidden from the future, but the reality is far different. The US is operating in a policy and legal void premised on the static technological environment of yesterday. Yet the environment is not static; its rate of change is nearly exponential.
Credit needs to be given to the EFF, CDT, the ACLU, and others for challenging the conversation, but this challenge needs to go further and extend to our schools and to our local, state, and federal legislative and legal bodies. If we want to maintain the current constraints on law enforcement, intelligence, and military institutions, we must do so knowing these constraints are self-imposed and carry certain risks, just as there are risks associated with the removal of constraints. We must acknowledge that the constraints we impose are primarily limited to those to whom we have delegated responsibility for our protection at home and abroad, and not to the companies to which we so willingly give our data on a daily basis. We must recognize that we will continue to generate and consume enormous amounts of data, both as consumers and as products, in a complex socio-technical-economic ecosystem that is still in its infancy. It is only by confronting the reality of both the present and the future that we can begin to address the current status of laws and policies and determine where they need to be.
The Value of Intelligence and Secrets
In 1929, Secretary of State Henry Stimson famously remarked that "Gentlemen do not read each other's mail." Just a few years later, during the 1930-31 London Naval Conference and the 1932 Geneva Disarmament Conference, Secretary Stimson would come to understand and appreciate the value of national security intelligence and would reverse himself. The value of intelligence to both the United States and our allies became paramount during World War II and in the Cold War that followed. Whether through the breaking of the Enigma codes, the Japanese Purple codes, or the use of double agents in the United Kingdom, intelligence saved lives and provided strategic and tactical advantages.
Intelligence is not a new state activity; it is thousands of years old, rooted in classical antiquity. For as long as humans have walked the earth, they have sought advantages over one another and their environment. In our modern hyper-partisan environment, an era of liberal democracy and utopian goals of radical transparency, many are quick to condemn our intelligence community (IC). We decry their sources and methods, and even more so we decry their failures when they infrequently occur. As a nation and a people, we present our IC with a paradox: we ask them to provide perfect protection, yet we work hard to limit the techniques and tools by which they might achieve that objective.
As a liberal democracy, we have every right to constrain the state we establish, and members of our intelligence community recognize and respect this right. As General (Ret.) Michael Hayden has said numerous times, and as he discusses in his book Playing to the Edge, the intelligence community plays to the edge of acceptable behavior: right up to the legal, ethical, and moral lines that we establish, but no further. They serve at the pleasure of those whom we elect to represent our interests. Their mission depends upon collecting information and developing intelligence products to keep us safe. This requires the handling of human assets (spies), the manipulation of computers and similar devices, the breaking of signals, and the collection of images and signatures from a variety of sources. These activities are accomplished within the constraints of US law and under the supervision of the Executive Branch, the House Permanent Select Committee on Intelligence, the Senate Select Committee on Intelligence, the Senate and House Armed Services Committees, and a bevy of other oversight organizations dispersed throughout the US government, and they can and should be considered reasonable and unsurprising functions of intelligence services.
The most recent WikiLeaks document dump does not serve the national good; rather, it harms the efforts of well-intentioned professionals working to provide intelligence on adversaries who would seek to do us harm. For the better part of the last three years, I have been researching the online behaviors of the Islamic State and al-Qaeda. These organizations are well attuned to technology and its vulnerabilities. They actively seek to evade intelligence and law enforcement agencies by using encrypted communications and a variety of platforms. They actively crowdsource and train one another on best practices. The release of these documents, whether verified or not, harms the efforts of professionals working tirelessly to assemble a complex mosaic of bits of intelligence in order to prevent terrorist attacks and strategic and tactical surprise. While the release of this information may temporarily benefit those patching systems and protecting individuals outside the gaze of the US IC, it likely harms efforts to understand and track terrorists who desire to attack the homeland and our allies. Time and again I have seen intelligence leaks spread through jihadist communities like wildfire, and within days the tactics, techniques, and procedures for avoiding intelligence and law enforcement agencies have changed. Leaks such as the recent WikiLeaks disclosures do not make us safer; they provide those who wish to harm us with an information edge and degrade our national security.
We as a nation, like Secretary Stimson, detest others reading our mail, but we should not forsake the value of the intelligence community and the work it does to keep the nation safe. We should work through our elected leaders to convey the lines within which we wish our intelligence professionals to operate, and we should consistently pressure our elected officials to keep watch over those we empower to protect us. Intelligence does and will continue to provide value to the nation, and achieving this value requires secrecy and the development of sources and methods that often reside beyond the public spotlight.