Ethics and Human Values

Artificial Intelligence (AI) systems will have a dramatic influence on our lives, so the decisions we make today about how much effort to spend imbuing AI systems with our values as human beings are important. I feel that understanding ethical questions is an important part of being an AI practitioner.

Part of writing ethical AI systems, especially deep learning models, is ensuring that the data we use for training is free of bias. A well-known class of problems involves training data that does not represent all races and ethnicities fairly.
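To make this concrete, here is a minimal sketch of one simple sanity check: comparing each group's share of a training set against an even split and flagging groups that fall far below it. The record format, the `ethnicity` field, and the `tolerance` threshold are all hypothetical choices for illustration. A check like this cannot prove a dataset is unbiased, but it can catch obvious under-representation before training begins.

```python
from collections import Counter

def check_representation(records, attribute, tolerance=0.25):
    """Flag groups that are badly under-represented in a training set.

    A group is flagged if its share of the data is less than
    `tolerance` times an even split across all observed groups.
    """
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    even_share = 1.0 / len(counts)
    flagged = [group for group, count in counts.items()
               if count / total < tolerance * even_share]
    return counts, flagged

# Hypothetical training records with a self-reported ethnicity field:
data = ([{"ethnicity": "A"}] * 80 +
        [{"ethnicity": "B"}] * 15 +
        [{"ethnicity": "C"}] * 5)
counts, flagged = check_representation(data, "ethnicity")
print(flagged)  # group "C" falls well below an even share
```

In practice the attribute to audit, and what counts as "fair" representation, are themselves value judgments that the developer must make explicit.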

By human values I mean the personal values that each of us holds: how much importance we place on family, helping people, wealth, personal safety, the safety of others, personal vs. community goals, education, world peace, and so on.

Assuming the field of AI continues to make rapid progress, I believe that sentient AIs will eventually be built, and safeguards need to be in place to offer some guarantee that they will share our values. Decisions need to be made about what types of AIs we will build, the military use of AIs, and so on.

I don’t think that leaving these decisions to large corporations and our governments is a real solution. Rather, as AI practitioners we should question the data that we use and the fairness of our algorithms. Think for a moment about how AI systems that utilize the personal data we leave on the web affect us. To use Amazon as an example, how often are you tempted to buy something that Amazon “thinks you might be interested in” even though you had no intention of buying that item when you logged on to their web site? Do you know people who married their spouses based on the recommendation of an online dating system’s AI? I think it is obvious that many people will lose their jobs to AI systems in the future, and considering the effects on society is important, but isn’t it also important to consider our own personal feelings and control of our lives?

Let’s look at an example. Consider your personal ideas of how much you value your own safety vs. the safety of others. Sometime in the future you will use a self-driving car. Would you want your car to save your life as the only passenger in your vehicle if it meant running a car full of passengers off the road and killing them? Setting aside the legal implications, it is not too much of a stretch to envision personalizing the values of cognitive systems that act on our behalf. Another example where we would prefer that our personal values be taken into account might be education: would you want your young children’s or grandchildren’s social groups in preschool and kindergarten to be assigned by AI systems that group kids into play groups based on big data and machine learning? I think not!

I believe it is well within the topic of this book to at least discuss and think about ways in which the cognitive systems we build can be made consistent with our personal values as developers and/or the values of the people who use these systems.

What can we do right now to reflect our human values in the actions of systems that we create? I offer one suggestion: when we use machine learning, consider parameterizing relevant values and including them in the training data. This could be as simple as accounting for care for the environment and the effect on poor regions of the world when building an AI system that chooses companies to invest in. Whether or not a user cares about these issues should affect the results produced by an investment model.
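The suggestion above can be sketched as a toy scoring function where a user's value weights enter the computation alongside financial data. Every name here is hypothetical: the value names, the company metrics, and the numbers themselves are made up for illustration. In a learned model these weights would be input features presented alongside the financial data during training, so that the trained model learns how values should shape its output.

```python
# Hypothetical per-user value weights, each in roughly [0, 1]:
USER_VALUES = {
    "environment": 0.8,     # how much this user cares about environmental impact
    "poverty_impact": 0.5,  # ...and about effects on poor regions of the world
}

def investment_score(company, values):
    """Combine expected return with value-weighted adjustments.

    Ratings are assumed to be normalized to roughly [-1, 1], where
    negative environmental ratings indicate harmful practices.
    """
    score = company["expected_return"]
    score += values["environment"] * company["environmental_rating"]
    score += values["poverty_impact"] * company["poverty_impact_rating"]
    return score

companies = [
    {"name": "FossilCo", "expected_return": 0.9,
     "environmental_rating": -0.7, "poverty_impact_rating": 0.0},
    {"name": "SolarCo", "expected_return": 0.6,
     "environmental_rating": 0.8, "poverty_impact_rating": 0.4},
]
best = max(companies, key=lambda c: investment_score(c, USER_VALUES))
print(best["name"])  # SolarCo wins for this user despite a lower raw return
```

For a user whose value weights are all zero, the model reduces to ranking by expected return alone, which is exactly the point: the same system produces different recommendations for users with different values.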

TBD: provide references to my 2 favorite privacy books