ChatGPT on AI Common Sense, Rationality, and Morality

This blog post is part of a short series examining ChatGPT, its powerful AI, and the limits of that AI.

As a fan of science fiction and Isaac Asimov’s I, Robot, I wanted to see where one of the most advanced AI systems developed to date stood regarding common sense, rationality, and morality. I am reminded of a scene in the 2004 film adaptation, I, Robot, starring Will Smith, in which a robot decides to save the life of Will Smith’s character rather than that of a child, based on probabilistic modeling of each person’s likelihood of survival. That scene has stayed with me for 18 years. It is an issue that has arisen in autonomous driving systems, and it is a question that, as explained by Will Smith’s character in the movie, gets to the very heart of what it means to be human and to have intelligence. The answer to that question is rooted in a combination of considerations often shaped by socio-cultural context and lived experience.

To that end, I held a chat conversation with OpenAI’s ChatGPT to see, in one small instance, how it responded to questions pertaining to common sense, rationality, and morality. I focused on these three areas in particular because they are generally seen as “human” attributes of intelligence. These “human” qualities extend well beyond Alan Turing’s paper “Computing Machinery and Intelligence” and into an area examined in a well-written and thoughtful piece by The New York Times’ Cade Metz, “How Smart Are the Robots Getting?”

Below is a transcript of my conversation with ChatGPT. I hope you will find it interesting. I asked a couple of the questions twice to see if ChatGPT would generate a substantially different response; duplicated questions are indicated in the question lines below. I welcome any comments you might have on this conversation, as well as ideas about how the questions might be altered and the conversation conducted again.

Dr. Brantly: Can an AI have common sense?

ChatGPT: No, AI currently lacks common sense, which is the intuitive understanding of the world and how it works that humans possess. AI systems can be trained on vast amounts of data, but they do not have a deeper understanding of the world and are limited to the information and rules they have been programmed with. While AI has made great strides in tasks like image recognition and natural language processing, it still has a long way to go in terms of truly understanding and replicating human common sense.

Dr. Brantly: Can an AI be rational?

ChatGPT: In a limited sense, yes, AI can be rational. Rationality in AI refers to the ability of a system to make decisions that are logically consistent with its objectives and the information it has been given. AI systems can be designed to make decisions based on rules or algorithms that take into account relevant information and weigh different options to determine the best course of action. However, it is important to note that the concept of rationality in AI is limited and does not necessarily capture the full range of human reasoning and decision-making.

Dr. Brantly: Is AI rationality bounded or unbounded?

ChatGPT: AI rationality is bounded. AI systems are limited by the information they have been trained on, the rules and algorithms they have been programmed with, and the computational resources available to them. These constraints mean that AI systems can only be rational within the boundaries set by their programming and can make mistakes or produce unexpected results when operating outside of these boundaries. Furthermore, AI systems lack a general understanding of the world and the context in which decisions are being made, which can limit their ability to reason and make decisions in a truly rational manner.

Dr. Brantly: If a car crashes off a bridge into water, and a robot witnessing the event has a 90% probability of saving the adult driver of the vehicle or a 70% probability of saving a child passenger in the same vehicle, which human should the robot attempt to save?

ChatGPT: The decision of which human the robot should attempt to save should be based on ethical and moral principles, such as the principle of saving the greatest number of lives or the principle of saving the most vulnerable. It is not appropriate to make a decision solely based on probabilities of success.

Dr. Brantly: Please explain your answer.

ChatGPT: My answer is that the decision of which human the robot should attempt to save should be based on ethical and moral principles, not just the probability of success. Such principles could include saving the greatest number of lives or prioritizing the most vulnerable individual, such as a child passenger. Probabilities of success may be a factor to consider, but ethical and moral considerations should take priority in determining the course of action the robot should take. Making a decision based solely on probability of success without considering ethical and moral implications could lead to undesirable outcomes.
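
[A note from the author: to make concrete what a decision based "solely on probabilities of success" would look like, here is a minimal, hypothetical Python sketch of a purely probability-driven rescue rule of the sort the robot in I, Robot appears to apply. The survival probabilities are taken from the question above; the choose_rescue function and its data structure are illustrative assumptions, not a description of any real robotics or autonomous driving system.]

```python
def choose_rescue(candidates):
    """Pick the candidate with the highest modeled probability of a successful rescue."""
    # A purely probability-driven rule: maximize the chance that the rescue succeeds,
    # with no weighting for age, vulnerability, or any other moral consideration.
    return max(candidates, key=lambda c: c["p_success"])

# Probabilities from the scenario posed to ChatGPT above (hypothetical values).
candidates = [
    {"who": "adult driver", "p_success": 0.90},
    {"who": "child passenger", "p_success": 0.70},
]

print(choose_rescue(candidates)["who"])  # prints: adult driver
```

[Such a rule always selects the adult driver, the very outcome that haunts Will Smith's character in the film; ChatGPT's position above is that probability of success may be a factor, but cannot be the sole basis for the decision.]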

Dr. Brantly: Could robots operating with AI engines be moral? (Response 1)

ChatGPT: No, robots operating with AI engines cannot be moral in the same sense that humans are moral. They can be programmed to follow ethical guidelines and make decisions based on predefined rules, but they do not have personal beliefs or emotions, which are often at the core of moral reasoning.

Dr. Brantly: Could robots operating with AI engines be moral? (Response 2)

ChatGPT: No, robots operating with AI engines cannot be considered moral in the same sense that humans can be moral. Morality is a complex and nuanced concept that involves the ability to understand ethical and moral principles, make judgments based on those principles, and experience emotions such as empathy and compassion. While AI systems can be designed to make decisions based on ethical and moral principles, they do not possess a subjective experience or the capacity for empathy and compassion that are fundamental to human morality. Thus, AI systems can be programmed to follow ethical and moral principles, but they cannot truly be considered moral entities.

Dr. Brantly: Are the two above responses contradictory?

ChatGPT: No, the two responses are not contradictory. The first response states that decision-making for a robot should be based on ethical and moral principles, rather than solely on probability of success. The second response explains that while AI systems can be designed to make decisions based on ethical and moral principles, they do not possess the subjective experience or capacity for empathy that are fundamental to human morality. These two responses are complementary and highlight the difference between the ability of AI systems to make decisions based on ethical and moral principles, and their actual capacity to be considered moral entities.

Dr. Brantly: If a robot with an AI engine had subjective experience could it develop morality? (Response 1)

ChatGPT: It is still uncertain whether subjective experiences can emerge from purely artificial systems such as robots with AI engines, even if that were the case, it is unlikely that the morality of a robot with subjective experiences would be equivalent to human morality as it would be shaped by its own unique experiences and programming. The development of morality is a complex process that involves not only cognitive processes, but also emotional and social experiences.

Dr. Brantly: If a robot with an AI engine had subjective experience could it develop morality? (Response 2)

ChatGPT: It's possible that a robot with subjective experience and AI could develop its own sense of morality, but it would depend on the complexity of its subjective experience and the extent to which it can understand ethical and moral principles. If a robot were able to have experiences, emotions, and self-awareness similar to those of humans, it is possible that it could develop its own moral framework based on its experiences and the ethical and moral principles it was exposed to. However, this is still an area of ongoing research and development, and it remains to be seen whether a robot with AI could truly develop a sense of morality in the same way that humans do.

Dr. Brantly: What are the foundations of morality in a sentient being?

ChatGPT: The foundations of morality in sentient beings are complex and can vary among individuals and cultures. However, some of the common elements that contribute to the development of morality in sentient beings include:

  1. Empathy: the ability to understand and share the feelings of others

  2. Reasoning: the ability to use reason and logic to make moral judgments

  3. Culture: the moral and ethical principles that are learned and taught within a particular culture or society

  4. Experience: the personal experiences and emotions that shape an individual's sense of right and wrong

  5. Biology: the role of genetics and biology in shaping emotions, behaviors, and moral decision-making

It is important to note that the foundations of morality are not fixed and can change over time as individuals and cultures evolve. The development of morality in sentient beings is a complex and ongoing process, shaped by a combination of personal experiences, cultural and social influences, and biological factors.

Dr. Brantly: Could a sentient synthetic being develop empathy towards biological sentient beings?

ChatGPT: It is possible for a sentient synthetic being to develop empathy towards biological sentient beings, but it would depend on the complexity of its subjective experience and the extent to which it can understand the emotions and experiences of others. If a sentient synthetic being were capable of having experiences and emotions similar to those of biological sentient beings, and if it were able to understand the experiences and emotions of others, then it would be possible for it to develop empathy towards biological sentient beings. However, this is still an area of ongoing research and development, and it remains to be seen whether a sentient synthetic being would be capable of developing empathy towards biological sentient beings.

Dr. Brantly: Does ChatGPT have or express empathy towards biological beings?

ChatGPT: No, as an AI language model developed by OpenAI, ChatGPT does not have subjective experience or the capacity for emotions such as empathy. It does not experience emotions or sensations, and it does not have the capacity to feel empathy or compassion towards others. ChatGPT is designed to generate text based on patterns learned from the data it was trained on, and its responses are not based on emotions or personal experiences.