Artificial intelligence (AI) is changing people’s lives in numerous ways. Countless tasks that people have traditionally carried out are now being done by robots or AI. Machines capable of learning raise a host of ethical questions, which stem from two lines of thought: ensuring that machines do not harm humans and other sentient beings (roboethics), and the moral status of the machines themselves (machine ethics).
As AI algorithms play a progressively larger role in modern society, it will become increasingly crucial to develop AI algorithms that are not just powerful and scalable, but also transparent, predictable and resistant to manipulation. Today, AI algorithms are taking on cognitive work with social dimensions, which means they inherit social requirements previously assigned to humans.
There are important legal questions that need to be answered. What moral obligations do a system’s programmers have? Who is responsible if the machine makes an error and causes harm, or even death, to an innocent person? Is it the designers of the AI system, the manufacturer, the owner of the machine, or even the person who happens to be in the driver’s seat?
Here are some of the top ethical concerns in artificial intelligence today.
1. AI mistakes
AI, like humans, learns to detect patterns and act on them. This requires a training phase and a test phase before the system is fully developed, and neither phase can cover every example a system may need to deal with. As a result, these systems can make mistakes in ways humans usually don’t. For example, random dot patterns can “trick” a machine into reading things that aren’t there. If we rely on AI to usher in a new world, we need to ensure the machine performs as planned, and that people can’t subvert it for their own benefit.
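The “random dot patterns” failure mode above can be illustrated with a toy model. This is a minimal sketch, not any real system: a linear classifier with made-up weights, showing how a small, deliberately chosen perturbation (which would look like faint noise to a person) can flip the model’s prediction.

```python
import numpy as np

# Hypothetical "learned" weights for a linear classifier (illustration only).
w = np.linspace(-1.0, 1.0, 100)
b = 0.0

def predict(x):
    """Return class 1 if the linear score is positive, else class 0."""
    return int(x @ w + b > 0)

# An input the model confidently places in class 0,
# sitting just on the negative side of the decision boundary.
x = -0.05 * w / np.linalg.norm(w)

# Fast-gradient-style trick: nudge every feature a tiny amount
# in the direction that most increases the score. Each change is
# at most 0.02, yet the combined effect crosses the boundary.
epsilon = 0.02
x_adv = x + epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # the tiny perturbation flips the label
```

The point of the sketch is that the mistake is systematic, not random: an attacker who knows (or can probe) the model’s weights can construct inputs that look unremarkable to humans but reliably fool the machine.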
2. Job automation
As AI progressively learns how to automate, people increasingly fear technology will overtake the jobs that humans currently perform. According to a report released by McKinsey, 800 million jobs could be lost worldwide to automation by 2030. For the first time ever, humans are competing with machines on a cognitive level. And, if the experts are correct, machines will ultimately have the capacity to be much, much smarter than us.
Since currently most of us sell our time in exchange for the income we need to sustain ourselves and our families, what happens if there aren’t enough jobs for all of us because machines can outperform us? What will the impact of automation be on our ability to work and provide for ourselves?
3. AI bias
One of the societal issues that AI is meant to solve is bias. After all, aren’t computers neutral, incapable of bias when it comes to race, gender and sexual orientation? But since computers are programmed by people, bias is a real possibility, and there have already been high-profile instances of it. One example is an AI algorithm used by parole authorities to predict the likelihood that an offender would reoffend. The algorithm was shown to be biased against African Americans. Exactly how this bias arose is unknown, because the details of the algorithm were kept confidential. Biased AI systems are likely to become a growing problem as AI moves out of the data science labs and into the real world.
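Because the parole algorithm’s details were confidential, we can’t reproduce it, but the kind of audit that exposed its bias can be sketched. Below is a minimal example with entirely made-up data: it compares false positive rates (people flagged high-risk who did not reoffend) across two hypothetical groups, the core check behind the reported finding.

```python
import numpy as np

# Made-up audit data (illustration only; not the real algorithm's output).
y_true = np.array([0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 0, 0])  # 1 = actually reoffended
y_pred = np.array([1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1])  # 1 = flagged high-risk
group  = np.array(["a", "a", "a", "a", "a", "a",
                   "b", "b", "b", "b", "b", "b"])         # hypothetical groups

def false_positive_rate(y_true, y_pred):
    """Share of non-reoffenders who were wrongly flagged high-risk."""
    negatives = y_true == 0
    return (y_pred[negatives] == 1).mean()

for g in ("a", "b"):
    mask = group == g
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
```

With these toy numbers, group “a” has a false positive rate three times that of group “b” even though the overall accuracy looks similar, which is exactly the kind of disparity that stays hidden unless someone checks the error rates group by group.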
Whatever type of service AI is meant to perform, the essence of ethics as it relates to machine learning is really the question of how technologies embed values and assumptions into their algorithms. Everything we design will raise ethical (and non-ethical) questions at every stage of development and operation. Understanding how those questions affect existing methods of development is critical to ensuring that the brave new world of ever-present artificial intelligence we are building is a world we want to live in.
The technology industry is being driven by rapid change and innovation. Marsh & McLennan Agency (MMA) can help your company stay ahead of the risk curve. We offer a team of experienced professionals with expertise in the unique risks facing the technology industry. Contact us here to learn more.