I am not a Luddite.

By Aaron F. Brantly - Director, Tech4Humanity Lab

I am not a Luddite. Unlike the English textile workers who fought the industrialization of manufacturing in the early 19th century, I generally embrace technology. In many ways, I consider myself to be a futurist and a leading-edge adopter of different forms of technology. I am a technically competent professor of political science; I have worked in cybersecurity for decades, taught programming and military information systems design courses, worked on DARPA projects, run large globally distributed network infrastructures, and currently manage and run my own servers, simulation platforms, and programs for courses and research. Yet this year, I have run headfirst into Artificial Intelligence (AI) both in the classroom and beyond it.

Like many, I have grown up with varying visions of what AI would be, what it would do, and how it would impact humanity. I have lived off a steady diet of science fiction, ranging from the dystopian world of Terminator’s Skynet to more modern takes such as The Creator. My hope has been that AI would augment human capabilities in ways that would free us by taking away the drudgery of many everyday activities: driving long distances, cleaning the house, doing taxes, and other tasks that consume time but are not widely considered sources of innate self-actualization. I have directly benefited from incorporating low-level AIs into my own life, including the Artificial Pancreas System, which helps manage my child’s Type 1 Diabetes. AIs help with directions, route planning, flight stabilization, and so much more. Yet my notions of the limits or extent of AI were woefully naïve.

The rise of large language models (LLMs) has opened my eyes to what I consider to be the true potential harm that can arise from AI, a harm that could reverse human advancement in a way that runs counter to the optimism promoted by proponents of LLMs and AI more broadly. We are rapidly moving away from an era in which AI and big data help us analyze complex phenomena and seek novel insights that we as humans subsequently interpret, and toward one in which AIs increasingly interpret and feed us information about the world around us. This latter vision might seem optimistic, yet it is akin to viewing the world through a screen rather than in person. The resultant image is there, but it lacks depth, it lacks a full sensory experience, and, if used without contextual reference, it slowly dulls the critical senses that make us human.

I have heard the arguments that AI is like the shift from horses to cars, or like the introduction of the calculator into mathematics, but these comparisons are not accurate. In the former, the car is a rapid efficiency gain over a mode of transportation with limited range, speed, and usability, and it has no profound cognitive impact on its human operator. The latter example is more closely related to the impact of AI. Calculators reduced the need to perform manual calculations, yet their users still needed to understand which equations to use and how to judge whether the resulting number was accurate. By contrast, LLMs do not require the user to understand what equation to use, nor do they require the user to interpret whether a resultant answer is accurate. The result is akin to a child asking a parent for an answer, hearing the answer, but not understanding why the answer is what it is. The response from the LLM is similar to a parent who says, “because I said so.”

Selim Tlili, a science teacher in New York, provides a robust example of how mathematics should be understood by students as an act of creativity. The calculator doesn’t remove the creativity of mathematics; it facilitates it. By contrast, LLMs remove the creativity from the process. New tools from Apple and others allow users to type or handwrite an equation, and the software spits out the solution. The act of solving a math equation becomes reductionist, with the solution as the singular goal. The act of thinking about how to arrive at a solution no longer takes place. Moreover, the user has no incentive, and no basis, for judging whether the resulting answer is accurate. Rather, they are entirely dependent on the AI’s interpretation of the written problem.

The same issue arises in the use of AI in writing. Language has been the principal medium of communication between humans for thousands of years. Early stories passed from generation to generation, later through handwritten works and eventually in printed media, constitute acts of communication, mechanisms of teaching, and mediums for learning imbued with human cognitive processing. The telling of stories is more than simple entertainment; it is the transfer of knowledge. The process of deciding what to transfer, how to transfer it, and the value of that information is critical to communication. Anyone who has listened to a young child try to tell a story recognizes that a lack of structure, insufficient detail, a misunderstanding of the content, or any number of related issues can undermine and derail it. Moreover, the pauses, intonations, and other features of speech or text can have a further pronounced impact on understanding. Our ability to communicate is critical to our ability to learn. It is a core feature of our human experience. Learning to communicate is more than the simple replication of communication practices; it requires the communicator to understand how information is being communicated and why. It also requires the recipient to be able to receive and comprehend the information being communicated.

LLMs are disrupting the process of communication in ways that are likely to be misunderstood by those who are already good communicators. LLMs can and do simplify the mundane work of grammar-checking; they can even help clean up poorly written work. But in doing so they shortchange the process of learning to communicate effectively without assistance. Someone who uploads a document to an LLM like ChatGPT, Gemini, Copilot, or even Grammarly skips the work of organizing thoughts, and thus knowledge and information, independent of machine intervention. The result is that while the document, email, assignment, or other output is coherent, the individual who asked for the LLM’s assistance doesn’t know how their words got from relative incoherence to a coherent, if formulaic, form. As they continue to use LLMs to organize and rewrite, they don’t magically become better communicators; rather, they become increasingly dependent on AI for all forms of communication.

The process described above is further compounded when even the simple act of writing a first draft is skipped. When an individual asks an LLM to provide information or facts on a topic and then to synthesize those facts into a document, the requestor does not put in the time and effort needed to review and process that information, does not cognitively comprehend it, and will not be able to communicate it effectively without LLM intervention. Such individuals are more likely to fail to spot errors, hallucinations, or other issues that arise within LLMs or AIs more broadly. As they continue to use an LLM both to seek out information and to generate output without engaging in independent cognitive effort, they become increasingly dependent on the LLM.

LLMs might be capable of passing the bar exam for lawyers or even the medical exams required for state certification, yet how many people want to be defended in court by a lawyer who must rely on a computer because they do not know the law, or operated on by a surgeon who does not know anatomy or medicine? We are equating ease of knowledge and information access, creation, and replication with understanding and effective application. I suspect most would object to a commercial airline pilot who used AI to pass their certification, or to an engineer who designs a bridge or skyscraper but doesn’t know the math or physics behind the structural integrity of the things they design and build. These issues may sound futuristic or perhaps unrealistic, but they are unfortunately becoming a reality.

This past semester, a graduate student turned in a research product intended for publication that an LLM had researched and written for them. The student did not think they had done anything wrong. What they didn’t know was that the AI didn’t know the topic they were investigating and gave them a paper based on a hallucinated theory. Because they hadn’t done the reading behind the research, they didn’t know that the theory applied was wrong, or in this case, non-existent. Similarly, we have seen a number of students turn in AI-written assignments in class. They haven’t done the readings, don’t understand what they are learning, and are unable to communicate with me as a faculty member or with their fellow students about the topics we are studying. Yet because they are using advanced LLMs, they are able to mask their ignorance in a way that makes them seem “smart” and might even earn them the “right” grade.

I have also seen students use services like Grammarly to rewrite their entire papers. This in and of itself is different from having an LLM write the paper or do the research for a student, but again, the student does not learn how to communicate complex ideas effectively and instead relies on a computer to do it for them. Minor grammar checking can be very useful. However, having an AI rewrite an entire paper removes the skill of communicating and dampens users’ ability to think and to present ideas in a manner accessible to their audience.

Teachers and professors are also being marketed a number of AI grading tools. But again, teaching is an interpersonal experience designed to foster both knowledge of a specific topic and the ability to communicate that knowledge. Reading and grading papers, coding projects, or any number of other assignments is time-consuming and generally not a lot of fun. But it is the process of going back and forth with students that helps them become better communicators. If I, as a faculty member, automate my grading of students’ critical thinking work, I am telling them that their thoughts are not worthy of my time or that I am not concerned with how they communicate their ideas.

Learning and teaching are not solely about efficiency. They are also not about perfection. Learning and teaching are iterative. In sports, we teach basic motor skills before advancing to more complicated tasks. The processes of learning math, reading, writing, science, music, and art follow the same iterative pattern. These iterative processes of learning and teaching foster and develop neural pathways that are not formed when an LLM does the work for us or, even worse, for a student.

There is no doubt that LLMs can be useful. They can help us engage with writing and with complex topics, but only if we use them properly. Recently, in the lab, we introduced AI into our research article management system, Papers. The service costs extra, but it does provide efficiency. I ask my lab team to have the AI summarize papers or articles before they read them. This saves them time and prevents them from reading articles that are not directly relevant to their research. If the summary suggests an article isn’t relevant, I tell them to save the article for another day. Once they have determined that an article is relevant, they should read it, add notes and highlights, and rank and rate the article within our lab’s schema. They must also be ready to discuss what they have read during our weekly research meetings. AI augments and creates efficiency in our process but does not supplant critical thinking or learning.

We also use AI to help us pull data from multiple sources and put it into spreadsheets. Sometimes we even use AIs to check our math after we have determined which algorithm or model we want to use and have run it in a statistical program like R or STATA. Other times, we might upload our finished research to an LLM and have it provide a summary back to us before we send the work to reviewers, to make sure that our article conveys what we think it conveys. Note that we are not asking it to rewrite or otherwise modify our own writing or research; rather, we are using it to help us communicate better.

The Luddites of early 19th-century England were rightly concerned that mechanized looms would supplant their jobs. In fact, many did end up losing or being forced to change jobs. I am not overly concerned that LLMs will take my job any time soon. What I am concerned about is a generation dependent on LLMs, a generation that uses LLMs not to augment its own creativity and critical thinking but to supplant them. AI can generate symphonies, works of art, videos, photo-realistic images, poems, essays, and more. But a human who composes a symphony understands how all the notes and instruments meld together to create harmony in a way that a user who asks an AI to generate a symphony in the style of Bach or Mozart does not. A student who asks ChatGPT to generate a poem in iambic pentameter will not understand or appreciate the beauty and complexity of one of Shakespeare’s sonnets without struggling to do the same themselves. A user who asks Gemini to create a painting will not understand style, structure, depth, color, light, or any of the many nuances of different mediums that make up a painting in the way a painter will.

I am not a Luddite. But LLMs are in the process of undermining the humanity of humans and moving them toward dependence on machines. They are creating individuals who struggle to communicate, to think creatively, and to understand, or at least grapple with, complexity. We must tread carefully as we proceed with LLMs, ensuring that they augment rather than supplant our humanity.