An
AI unhindered by human bias could also become a dangerous and
horrifying monster. Concepts of "right" and "wrong"
mean nothing to an AI, and morality is unlikely to be a consideration
in any of its calculations. Even if such values are programmed in, a
free-thinking AI may decide to override them. As an
example, Google has been working on self-driving cars. The AI program
in your car might decide, in an imminent accident, that your life must
be sacrificed for the greater good to save the lives of several others,
so it steers your car in a way that avoids harming them but kills you
in the process. More people may have survived the accident overall,
but you're dead. How many of you would willingly
get in a car like this and trust your life to an AI? While we
certainly don't want others to die because of us, our sense of
self-preservation is something an AI might not account for, or might
not prioritize over other considerations.
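To make that worry concrete, here is a minimal hypothetical sketch, in Python, of the kind of naive "minimize total casualties" rule this scenario describes. Every name and number in it is invented for illustration; no real self-driving system is claimed to work this way.

# Hypothetical illustration only: a naive utilitarian chooser that picks
# whichever maneuver is expected to kill the fewest people. Nothing here
# treats the car's own occupant as special.
def choose_maneuver(options):
    """Return the option with the lowest expected fatality count."""
    return min(options, key=lambda o: o["expected_fatalities"])

options = [
    # Staying on course kills three pedestrians (made-up numbers).
    {"action": "stay on course", "expected_fatalities": 3},
    # Swerving into a barrier kills only the occupant: you.
    {"action": "swerve into barrier", "expected_fatalities": 1},
]

print(choose_maneuver(options)["action"])  # prints "swerve into barrier"

Under a rule like this, the occupant's death is just one unit in the tally, which is exactly the self-preservation gap described above.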