The Hyperbole Of The Facebook AI Scare


A couple of weeks ago I read an article on Forbes.com about how “Facebook shut down an artificial intelligence engine after developers discovered that the AI had created its own unique language that humans can’t understand”.  I instantly thought about The Terminator movie franchise and got slightly concerned.  You know, the story where an artificial intelligence defense network, named Skynet, becomes self-aware and initiates a nuclear holocaust?  Then insert Arnold…yada, yada, yada.  After reading the rest of the article, I told a friend of mine about it, and he responded with a “What, what!?”.  After that little exchange, we mentioned it to three other folks.  Exactly how many people that story trickled down to, I can’t really say, but apparently I wasn’t the only one who read the article or heard about it on various news outlets.  It’s fairly safe to say the story got a lot of folks worked up.

Twelve hours after the initial reports, Dhruv Batra of Facebook’s Artificial Intelligence Research group (FAIR) weighed in on the situation in a Facebook post describing the coverage as “clickbaity and irresponsible”.  He continues:

“While the idea of AI agents inventing their own language may sound alarming/unexpected to people outside the field, it is a well-established sub-field of AI, with publications dating back decades.

Simply put, agents in environments attempting to solve a task will often find unintuitive ways to maximize reward. Analyzing the reward function and changing the parameters of an experiment is NOT the same as “unplugging” or “shutting down AI”. If that were the case, every AI researcher has been “shutting down AI” every time they kill a job on a machine.

I encourage everyone to read the actual research paper or the overview blog post from FAIR:  https://code.facebook.com/…/deal-or-no-deal-training-ai-bo…/.”
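To make Batra’s point a little more concrete, here is a toy sketch in Python.  It is entirely hypothetical and has nothing to do with FAIR’s actual code; it just shows an “agent” drifting toward whatever utterance scores highest against a made-up reward, and that “shutting it down” amounts to nothing scarier than a loop ending.

import random

# Toy vocabulary of "utterances" the agent can send -- purely made up for this sketch.
VOCAB = ["i", "want", "the", "hat", "ball", "ball ball ball"]

def reward(utterance):
    # Hypothetical reward: count the items the utterance claims.
    # Nothing here rewards sounding like English, so repetition wins.
    return utterance.count("ball") + utterance.count("hat")

def train(steps=10000):
    totals = {u: 0.0 for u in VOCAB}
    tries = {u: 1 for u in VOCAB}
    for _ in range(steps):          # "shutting down the AI" = this loop ending, nothing more
        u = random.choice(VOCAB)    # explore a random utterance
        totals[u] += reward(u)      # score it against the reward function
        tries[u] += 1
    # The agent's preferred utterance is whatever maximized average reward,
    # even if it reads like gibberish to a human.
    return max(VOCAB, key=lambda u: totals[u] / tries[u])

print(train())   # tends to print "ball ball ball"

If the reward had penalized drifting away from plain English, the “language” would have stayed human-readable, which, as I understand it, is roughly the kind of adjustment the FAIR researchers made.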

So I read the actual research paper.  Newsflash.  Apparently Skynet has not become self-aware.  And although several scientists and luminaries including Elon Musk and Stephen Hawking have warned that AI could have unforeseen and dangerous consequences, I think it’s safe to say the initial reports were hyperbole.

So what’s the moral of the story?  Don’t believe everything you read?  Yes, of course.  Have you ever heard of the term “fake news”?  Recently, we wrote a couple of blog posts about changing IT providers and about the traits of an outstanding IT provider.  In both of those posts, we emphasized doing thorough research and evaluation.  The bottom line is that instead of taking some Forbes.com writer/contributor or other news outlet at their word, I should have done my own research and gone straight to the source before running my mouth.  Now I’m not going to sit here and rant about how the mainstream media can’t be trusted.  I’ll save that for when I do a political blog, which will be NEVER.  And I’m not going to rant about how people are generating ridiculous content just to get higher Google rankings.  I could (because it’s happening).  But I won’t.  I do think I’ll call my friend tomorrow and tell him it was hyperbole.  That’s the least I could do, right?

OK, final thought.  Which will happen first?  AI nuclear holocaust or zombie apocalypse?  I’m voting for neither.  Stop the insanity.

 
