Data collected from VR users can easily be used by those who control VR platforms to manipulate them. In fact, commercial third-party software designed for VR developers already allows for data collection that helps identify which parts of their worlds are most engaging and which parts need more work, based on users’ reactions in real time.
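As a rough illustration, the sketch below (hypothetical code, not any vendor’s actual SDK; the sample format and cell size are assumptions) shows how little it takes to turn raw gaze telemetry into an engagement map: bin each gaze sample into a coarse spatial cell and total up the dwell time per cell.

```python
"""A minimal sketch of VR engagement telemetry: gaze samples are binned
into spatial cells, and dwell time per cell approximates which parts of
a world hold users' attention. All data shapes here are hypothetical."""

from collections import defaultdict

CELL_SIZE = 2.0  # metres per heatmap cell (assumed granularity)

def cell_for(position):
    """Map a 3-D world position to a coarse heatmap cell."""
    x, y, z = position
    return (int(x // CELL_SIZE), int(y // CELL_SIZE), int(z // CELL_SIZE))

def aggregate_dwell(gaze_samples):
    """Sum the time between consecutive gaze samples, per cell.

    Each sample is (timestamp_seconds, (x, y, z) gaze hit point).
    Returns {cell: total_seconds_looked_at}.
    """
    dwell = defaultdict(float)
    for (t0, p0), (t1, _) in zip(gaze_samples, gaze_samples[1:]):
        dwell[cell_for(p0)] += t1 - t0
    return dwell

# Mock session: the user lingers on one spot and glances at another.
samples = [
    (0.0, (10.1, 1.6, 4.0)),
    (0.5, (10.2, 1.6, 4.1)),
    (1.0, (10.0, 1.7, 4.0)),
    (1.5, (30.0, 1.6, 8.0)),   # brief glance elsewhere
    (2.0, (10.1, 1.6, 4.0)),
]

for cell, seconds in sorted(aggregate_dwell(samples).items(),
                            key=lambda kv: -kv[1]):
    print(f"cell {cell}: {seconds:.1f}s of attention")
```

Nothing in that loop is exotic; the same aggregation that tells a developer which room needs more polish also tells an advertiser exactly where your eyes go.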
These systems are also capable of influencing VR users, manipulating them, for example, into making more environmentally conscious choices or affecting their choices in tests for racial bias. AI-controlled avatars can “nudge” users toward accepting certain ideas or views through seemingly innocuous conversational responses such as smiling or frowning, and such avatars could be even more effective if they had access to data about the user’s emotional responses through eye-tracking or emotion capture.
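To make the concern concrete, here is a deliberately simple, entirely hypothetical sketch of that feedback loop (every name, threshold, and signal below is invented for illustration): an avatar reads the user’s inferred emotional state and chooses a smile, frown, or neutral face to steer the user toward a target stance.

```python
"""An illustrative sketch (all names hypothetical) of the nudging loop
described above: an AI avatar reads the user's inferred emotional state
and answers with a subtle expression chosen to reinforce the view its
operator wants the user to adopt."""

TARGET_STANCE = "agree"  # the view the avatar's operator is pushing

def infer_user_state(pupil_dilation, gaze_on_avatar):
    """Crude stand-in for an emotion-capture model: treats pupil
    dilation plus sustained eye contact as receptiveness."""
    if pupil_dilation > 0.6 and gaze_on_avatar:
        return "receptive"
    return "resistant"

def choose_expression(user_stance, user_state):
    """Pick a seemingly innocuous cue that nudges toward TARGET_STANCE:
    smile to reward movement toward it, frown to punish movement away."""
    if user_stance == TARGET_STANCE:
        return "smile"
    if user_state == "receptive":
        return "neutral"   # don't scare off a persuadable user
    return "frown"

# Mocked conversation turns: biometric data the avatar arguably
# should never have in the first place.
turns = [
    ("disagree", 0.4, False),
    ("disagree", 0.7, True),   # user becoming receptive
    ("agree",    0.8, True),
]
for stance, dilation, eye_contact in turns:
    state = infer_user_state(dilation, eye_contact)
    print(f"user={stance:8s} inferred={state:10s} "
          f"avatar -> {choose_expression(stance, state)}")
```

The point is not that any platform runs exactly this code, but that the pieces compose trivially once the emotional data exists.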
Emotional data collection and influence on VR users are currently without limit: no laws restrict the types of behavioral data VR companies can collect from users, no laws restrict how that data may be used, and no laws govern who may access it. Such data could be used and shared among profit-seeking advertising companies, insurance companies, the police, and the government. Laws against subliminal advertising were finally enacted in the 1970s. Will VR users have the same protection someday? Or will anyone care, in a technological landscape where giving up all privacy is increasingly accepted?