Ethics in Artificial Intelligence

By Cameron Simmons, Lab Graduate Research Assistant

            Who decides what is right and wrong? As humans, we are taught by our parents or guardians the difference between right and wrong, along with the morals and values that not only guide our lives but also make us functioning members of society. Those morals and values may vary slightly from person to person, but they can differ greatly across cultures. For example, in some countries it is considered rude to leave a tip for a meal, because excellent service is naturally expected and employees are paid a living wage, whereas in other countries tipping is a societal norm and it is considered rude not to leave one. As advancements are made in Artificial Intelligence, a question arises: what are the repercussions of the decisions made by Artificial Intelligence? Or, to rephrase, who is going to face the consequences that result from Artificial Intelligence making decisions? Should it be the AI itself? Should it be the ones who made and deployed the AI? Would the responsibility differ if the AI had the ability to learn, grow, and advance on its own after its initial creation? Where does the responsibility lie?

            For me personally, I would need to look at something we already have in place and extrapolate outward from there. Take a car, for example. If a car is built with something fundamentally wrong, such as faulty brakes that cause an accident or faulty airbags that fail to protect the passengers when an accident does occur, it is usually the car company that is at fault. It wouldn’t be the dealership that sold the car, as they were only an intermediary. Also, the driver of the other vehicle involved in the accident may be at fault as well. So, in terms of AI, I believe that if something wrong in the creation of the AI resulted in an unethical decision or some other consequence, responsibility should fall on the creators of the AI. As with cars, they test the product against various measurements and standards, and if they cut corners or skipped a phase of testing, they should have to deal with those consequences. If, on the other hand, a car was built and everything was working perfectly, and then through some action of the user there was an accident, whether from how it was driven or from a lack of maintenance, responsibility should fall on the owner of the car. Similarly with AI, if the company that purchased or has been using the AI failed to perform some routine maintenance and, as a result, there are consequences that need to be faced, then those consequences should be faced by that company. The problem I have conceptually is what to do when the AI system itself begins to learn, develop, and grow on its own. I don’t have a real-world comparison to look at that gives me a base understanding of how to handle it. Do we consider the AI its own being once it has the ability to learn on its own? If that’s the case, how do we enforce consequences on AI systems?

            No, rather than try to conceptualize a version of AI jail to restrict the actions and functions of an AI as a fitting punishment, I think the answer might actually be a lot simpler than I originally thought. If an action brought about by the decision making of an AI calls for consequences, I believe there must fundamentally be a problem with the AI. Somewhere in the code that governs how the AI retrieves information, processes it, and makes decisions, or in the AI’s ability to learn, grow, and develop, there is an issue that has resulted in the AI needing to face consequences. In this instance, I feel it would again be the creators of the AI who would need to be held responsible, because at the end of the day AI is a tool: a tool made to perform a specific task and to help make everyday tasks easier. The AI does not have the ability to make its own moral decisions; it is simply mindlessly running through code that it was programmed to run.

            An article by Virginia Dignum differentiates ethics in AI into three categories: ethics by design, ethics in design, and ethics for design. Ethics by design focuses on the “technical/algorithmic integration of ethical reasoning capabilities as part of the behavior of artificial autonomous system.” Ethics in design focuses on “the regulatory and engineering methods that support the analysis and evaluation of the ethical implications of AI systems as these integrate or replace traditional social structures.” Finally, ethics for design focuses on “the codes of conduct, standards and certification processes that ensure the integrity of developers and users as they research, design, construct, employ and manage artificial intelligent systems.” I think if we can focus on refining ethics through these three windows, we can ultimately regulate how ethics is handled in AI systems and interfaces.