Why our AI future will be creepy and annoying
Pairing predictive analytics with AI is a recipe for trouble
Machine learning and artificial intelligence are possibly the hottest topics in tech right now. Opinion is divided between those who think it's going to revolutionise our lives and those who think it's going to kill us all, but there's a third option: what if it's just going to be really, really annoying?
VMware CEO Pat Gelsinger took the stage at Dell EMC World today to evangelise about the possibilities that arise from pairing connected devices, predictive analytics and artificial intelligence. Gelsinger asked the audience to imagine that they were woken up in the morning by their smart device's digital assistant.
"'Last night, you had an irregular heartbeat,' it said, 'and as a result of that, I've gotten you up an hour early today. I have loaded all of your bio-data onto the cloud, I've analysed all similar DNA and all of your family type, and that will be at your doctor's by the time you arrive this morning. I've rescheduled all of your morning's appointments, and I've readjusted your Starbucks order to a Starbucks on the way - and since you're going to a heart doctor, I've made it decaf'," Gelsinger told the crowd.
Yeah - thanks, but no thanks.
Is it just me, or does that sound like the most creepily invasive thing in the world? I know it's just an example, but the idea of a piece of software taking it upon itself to reschedule my entire day is incredibly presumptuous. What if I have an important meeting that can't be rescheduled? What if it's a client that's flying in specially? Congratulations, Alexa, you just blew a major deal.
It's a nightmare of permissions, personal data and sensitive information, too. How, for example, did NannyBot get my entire family's medical data? Oh, and presumably anyone that doesn't want their health information uploaded to the cloud any time this electronic busybody decides they're looking a bit peaky is just going to have to suck it up.
That's without even mentioning the potential for technology like this to screw up your entire day based on a false positive. Let's not forget that Fitbit's heart monitors have been accused of being off by as much as 20 BPM.
Also, don't change my coffee order to a decaf, you virtual nag. I'm an adult, and I shall ingest as much caffeine as I bloody well choose.
This may come across as overly harsh or critical, and it might seem like I'm just picking holes in an illustrative example. The point, though, is that this is indicative of one of Silicon Valley's biggest problems: just because you're excited about a technology's potential doesn't mean it hasn't got unexpected downsides or unforeseen consequences.
If your sleep tracking tool decides you've not got your eight hours, should it be able to cancel the alarm for your 8am meeting? To go to the extreme end of the spectrum, if the Amazon Echo overhears you plotting a murder, should Amazon be obliged to report it?
Social networks have exponentially amplified people's ability to speak and share freely, but what do you do when those people are hideous racists? These are the questions that technology throws up, and they deserve consideration before your product is out in the wild - because once it is, it's too late.