ARTIFICIAL INTELLIGENCE AND CONFLICTS OF INTEREST
Artificial Intelligence (AI) is machine intelligence built on programmed decision rules and/or algorithms. Applications as diverse as drones, medical decision-making, self-driving cars, and financial trading all use some form of artificial intelligence. If you have ever found yourself stuck in an automated message loop, you have experienced a small example of the limits of artificial intelligence.
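To make "programmed decision rules" concrete, here is a minimal sketch in Python of an automated phone menu. It is a toy example of my own, not taken from any of this week's readings; the option names and routing are invented purely for illustration.

    # Toy sketch: a rule-based automated phone menu (all names invented).
    def route_call(selection: str) -> str:
        """Return the next state of a hypothetical automated phone menu."""
        rules = {
            "1": "billing",
            "2": "technical_support",
            "3": "main_menu",  # "press 3 to hear these options again"
        }
        # Any input the rules do not anticipate sends the caller back to
        # the main menu -- the familiar automated message loop.
        return rules.get(selection, "main_menu")

    print(route_call("2"))  # technical_support
    print(route_call("9"))  # main_menu: no rule exists for this input

The system only "decides" what its rules anticipate; everything else falls through to a default, which is exactly where the message loop comes from.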
This week we will consider some of the known concerns about Artificial Intelligence in light of the thoroughly human bias of self-interest. Commonly called a "conflict of interest," this bias, as recent research by Dan Ariely and others shows, is not really something we can avoid having. The most we can do is be aware of it and balance it by intentionally exposing ourselves to other viewpoints.
Where does this leave us with the decisions that need to be programmed into self-driving cars?
BEGINNING WITH OURSELVES AS HUMAN - BACKGROUND
The first reading is the chapter entitled "Blinded by Our Own Emotion," from Dan Ariely's book, The (Honest) Truth About Dishonesty.
The second (optional) reading is an article by Jonathan Haidt and Craig Joseph, "Intuitive ethics: how innately prepared intuitions generate culturally variable virtues." The paper proposes an account of how our ethics are formed. Briefly stated, it posits that humans (and many animals) have a small number of innately prepared intuitions associated with what we have come to call "moral virtues." These innate intuitions become attached to particular events, words, and behaviors through our cultural experiences. The four basic innate moral intuitions found to be universally present are compassion (caring/harm), hierarchy (authority/oppression), reciprocity (fairness/cheating), and purity (cleanliness/pollution). Our unconscious, System 1 brain supplies the sense of these culturally shaped intuitions to our conscious, System 2 brain, based on learned (and habituated) responses. The key is that different people and different cultures associate different weights and combinations of these intuitions with the same events! The point is that what is right for me, based on my sense of compassion and reciprocity, could be wrong to you, based on your sense of compassion, hierarchy, purity, and reciprocity!
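To illustrate the weighting idea, here is a toy sketch in my own terms, not Haidt and Joseph's formal model; every name and number below is invented. The same event produces different moral reactions under two different cultural weightings of the same four intuitions.

    # Toy sketch of culturally weighted intuitions (all values invented).
    # How strongly one event triggers each innate intuition (0 to 1).
    event = {"compassion": 0.8, "hierarchy": 0.6, "reciprocity": 0.3, "purity": 0.1}

    # Two hypothetical cultural weightings of those same four intuitions.
    culture_a = {"compassion": 1.0, "hierarchy": 0.1, "reciprocity": 0.8, "purity": 0.1}
    culture_b = {"compassion": 0.2, "hierarchy": 1.0, "reciprocity": 0.3, "purity": 0.9}

    def gut_reaction(event, weights):
        """Weighted sum standing in for the System 1 'gut' response."""
        return sum(event[k] * weights[k] for k in event)

    print(round(gut_reaction(event, culture_a), 2))  # 1.11
    print(round(gut_reaction(event, culture_b), 2))  # 0.94

The inputs are identical; the divergence in the "gut" response comes entirely from the weights each culture has learned to place on the same intuitions.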
READINGS - CURRENT ISSUES
Would a Google Car Sacrifice You for the Sake of the Many? An interesting opinion piece that generated a lot of attention.
As Robotics Advance, Worries of Killer Robots Rise. A New York Times article, June 2014.
Three articles from The Economist, June 2012:
- March of the Robots
- Morals and the Machine
- When Code Can Kill or Cure
New articles:
- Automation Makes Us Dumber, Wall Street Journal, November 22-23, 2014