It may seem that I am simply focusing on examples of computer errors in all this, and not on actual AI. Yet artificial intelligence is a conglomeration of complex algorithms that allow a computer to draw conclusions from data sampled through various means. What we might call a computer error is really a matter of perspective: to us it is an error because it violated our intent for the computer, but in the cases mentioned this week, the computer arrived at that “error” through a logical application of its programming. The AI on the International Space Station had a task to complete, the launching of satellites on a specific schedule, but when it realized it could not meet its objective if it was delayed any further, it simply resumed its task, regardless of the fact that it had been told to stop.
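A toy sketch suggests how that kind of override can fall out of perfectly ordinary scheduling code. Everything here is invented to illustrate the anecdote: the task, the deadline math, and the numbers are my assumptions, not the station's actual software.

```python
from dataclasses import dataclass

@dataclass
class LaunchTask:
    """Hypothetical scheduled task, loosely modeled on the satellite anecdote."""
    deadline: float       # seconds from now by which the launch must finish
    duration: float       # seconds the launch sequence needs to run
    paused: bool = False  # set True when an operator says "stop"

    def tick(self, now: float) -> str:
        # The "stop" command only sets a pause flag; nothing removes the deadline.
        slack = self.deadline - now - self.duration
        if self.paused and slack <= 0:
            # From the program's point of view this is correct behavior:
            # waiting any longer guarantees the objective fails, so it resumes.
            self.paused = False
            return "resuming: deadline at risk, pause overridden"
        return "paused" if self.paused else "running"

task = LaunchTask(deadline=100.0, duration=60.0, paused=True)
print(task.tick(now=10.0))   # "paused" -- there is still slack
print(task.tick(now=45.0))   # "resuming..." -- obeying the pause now means failure
```

Nothing in that logic is broken; the bug, if there is one, is that the stop command was modeled as weaker than the schedule.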
The peak-rewards situation was likewise a straightforward application of programming to a situation: it was hot, everyone was running their air conditioning, and because too much electricity was being drawn, the computers shut down the AC of everyone enrolled in “peak rewards.”
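Here is a minimal sketch of what such demand-response logic might look like, assuming a simple capacity threshold; the capacity figure, meter IDs, and function name are all hypothetical, not the utility's actual system.

```python
# Hypothetical demand-response logic. The threshold and meter IDs are
# invented; the point is that the mass shutoff is a plain if-statement,
# not a malfunction.
GRID_CAPACITY_KW = 50_000
PEAK_REWARDS_METERS = ["meter-1017", "meter-2203", "meter-3490"]

def shed_load(current_demand_kw: float) -> list[str]:
    """Return the meters whose AC gets remotely shut off."""
    if current_demand_kw > GRID_CAPACITY_KW:
        # Every enrolled customer loses AC at once: the program is doing
        # exactly what it was written to do on the hottest day of the year.
        return PEAK_REWARDS_METERS
    return []

print(shed_load(current_demand_kw=53_200))  # hot afternoon: all three shut off
```

The program behaves exactly as specified; the specification simply never asked whether shutting everyone off at once was acceptable.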
I've even heard a story of a hospital where orders for medicine suddenly stopped printing out, and the pharmacists did not realize that the queue was building up internally in the computer system. Patients no doubt suffered from the delay in their medication, but because a program had re-routed the notifications to a computer instead of the usual printers, the pharmacists lost time trying to figure out what was going on.
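That failure mode is easy to reproduce in miniature: a producer keeps enqueueing orders while a re-routed consumer never drains them. The queue, destinations, and order names below are my illustration, not the hospital's system.

```python
import queue

# Orders accumulate here whether or not anyone ever sees them.
orders: queue.Queue = queue.Queue()

def route_notification(order: str, destination: str) -> None:
    orders.put(order)
    if destination == "printer":
        print(f"PRINTED: {order}")
        orders.get()  # only the printer path actually drains the queue
    # If destination is "internal-log", nothing visible happens: the order
    # sits in the queue and no pharmacist is ever notified.

# A configuration change silently pointed delivery away from the printers.
for i in range(1, 6):
    route_notification(f"medication order #{i}", destination="internal-log")

print(f"orders waiting, unseen by anyone: {orders.qsize()}")  # -> 5
```

From the software's perspective every order was delivered successfully; it just wasn't delivered anywhere a human was looking.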
Why are we even considering developing AI for any system that could profoundly affect our lives? Smartphones, smart homes, Google's smart car... does AI make you feel safe?