In some ways I feel that we have barely tapped into human intelligence (an experience that can go way beyond the narrow field of intellectual comprehension), yet it seems that the progress we have made, as expressed in AI, may undermine the possibility for many people to continue to evolve their human intelligence.
“… artificial intelligence opens a new world of image, audio, and video fakery …
… Another obvious beneficiary would be hoaxes. Consider the demonstration of a program called Face2Face, which essentially turns people into puppets, letting you map their facial expressions to your own. The researchers demonstrate it using footage of Trump and Obama. Now combine that with prototype software recently unveiled by Adobe that lets you edit human speech (the company says it could be used for fixing voiceovers and dialog in films). Then you can create video footage of politicians or celebrities saying, well, whatever you want them to. Post your clip on any moderately popular Facebook page, and watch it spread around the internet.
… we can’t deny that digital tools will allow more people to create these sorts of fakes … AI-powered fakes and manipulations aren’t hard to spot now … but researchers say they’re just going to get better and better.
… The proliferation of realistic fakes would be a boon to conspiracy theorists, and would contribute to the current climate of deteriorating confidence in journalism.”
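To make “mapping facial expressions” a little more concrete, here is a crude 2D sketch of the idea in Python. This is my own illustration, not Face2Face itself (which fits a dense 3D face model); the dlib landmark model file is a standard download I'm assuming is present locally.

```python
# A crude 2D analogy of expression transfer using dlib's 68-point
# facial landmark model. Face2Face proper fits a 3D face model; this
# only illustrates the core idea of driving one face with another.
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Assumed local copy of the standard model file from dlib.net:
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmarks(img):
    """Return the 68 facial landmarks of the first detected face."""
    faces = detector(img, 1)
    if not faces:
        return None
    shape = predictor(img, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=np.float32)

def transfer_expression(driver_neutral, driver_now, target_neutral):
    """Move the target's landmarks by the driver's deviation from neutral.

    Real systems then warp the target image (or a fitted 3D model) to
    match these displaced landmarks; this sketch stops at the landmarks.
    """
    delta = landmarks(driver_now) - landmarks(driver_neutral)
    return landmarks(target_neutral) + delta

# Usage (frames are numpy image arrays, e.g. from cv2.VideoCapture):
# new_pts = transfer_expression(me_neutral, me_smiling, politician_frame)
```

Even this toy version hints at how little raw material is needed: a neutral frame of each face and a live feed of your own.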
This article, which I read a couple of weeks ago, makes this even more poignant: the software engineers who create and train deep neural networks do not fully know how the networks actually work or how they reach their decisions. The networks, it seems, can only be tweaked indirectly, by intuition, trial, and error. Are scientists pushing themselves off the edge of reason? Even at this early stage we do not have “control” over these creations:
“Yesterday, the 46-year-old Google veteran who oversees the company’s search engine, Amit Singhal, announced his retirement. And in short order, Google revealed that Singhal’s rather enormous shoes would be filled by a man named John Giannandrea.
Giannandrea, you see, oversees Google’s work in artificial intelligence … Early in 2015 … Google began rolling out a deep learning system called RankBrain that helps generate responses to search queries.
… Singhal carried a philosophical bias against machine learning. With machine learning, he wrote, the trouble was that “it’s hard to explain and ascertain why a particular search result ranks more highly than another result for a given query.” And, he added: “It’s difficult to directly tweak a machine learning-based system to boost the importance of certain signals over others.”
… in order to tweak the behavior of these neural nets, you must adjust the math through intuition, trial, and error. You must retrain them on new data, with still more trial and error.”
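Singhal’s complaint becomes clearer with a toy contrast, sketched in Python. The signal names, weights, and data below are all invented for illustration and have nothing to do with Google’s actual systems.

```python
# Hand-tuned ranking vs. a learned ranker -- a toy illustration.
# All signal names, weights, and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hand-coded scorer: if results rank badly, an engineer can directly
# "boost the importance of certain signals over others" and rerun.
WEIGHTS = {"text_match": 0.6, "page_rank": 0.3, "freshness": 0.1}

def hand_tuned_score(signals):
    return sum(WEIGHTS[name] * value for name, value in signals.items())

# Learned scorer: the importance of each signal is baked into fitted
# coefficients. There is no knob to turn; to change behaviour you alter
# the training data and retrain, then test what actually changed.
rng = np.random.default_rng(0)
X = rng.random((1000, 3))  # columns: text_match, page_rank, freshness
y = X @ [0.6, 0.3, 0.1] + 0.05 * rng.standard_normal(1000) > 0.5

model = LogisticRegression().fit(X, y)
print(model.coef_)  # inspectable here, but a deep net like RankBrain has
                    # millions of such numbers with no per-signal meaning
```

With three coefficients you can still squint and see the signals; scale that up to a deep network and Singhal’s “hard to explain” becomes literal.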
And if you got this far and would like to go a bit deeper, have a look at this up-to-date primer on artificial intelligence.