Google and Elon Musk to Decide What Is Good for Humanity

Originally published at www.wired.com

The recently published Future of Life Institute (FLI) letter, “Research Priorities for Robust and Beneficial Artificial Intelligence”, signed by hundreds of AI researchers, many representing government regulators, some sitting on committees with names like “Presidential Panel on Long Term AI future”, in addition to the likes of Elon Musk and Stephen Hawking, offers a program professing to protect mankind from the threat of “super-intelligent AIs.” I take the contrarian view that, should they succeed, we will see not salvation but a 21st-century version of the 17th-century Salem witch trials, in which technologies competing with AI will be tried and burned at the stake, with much fanfare and applause from the mainstream press.

Before I proceed to my concerns, some background on AI. For the last 50 years, AI researchers have promised to deliver intelligent computers, which always seem to be five years in the future. For example, Dharmendra Modha, in charge of IBM’s SyNAPSE “neuromorphic” chips, claimed two or three years ago that IBM “will deliver computer equivalent of human brain” by 2018. I have heard echoes of this claim in the statements of virtually all recently funded AI and Deep Learning companies. The press accepts these claims with the same gullibility it displayed during the launch of Apple’s Siri and hails the arrival of “brain-like” computing as a fait accompli. I believe this is very far from the truth.

The investments, on the other hand, are real, with old AI technologies dressed up in the new clothes of “Deep Learning”. In addition to acquiring DeepMind, Google hired Geoffrey Hinton’s University of Toronto team as well as Ray Kurzweil, whose primary motivation for joining Google Brain seems to be the opportunity to upload his brain into a vast Google supercomputer. Baidu invested $300M in Andrew Ng’s Stanford deep learning lab; Facebook and Zuckerberg personally invested $55M in Vicarious and hired Yann LeCun, the “other” deep learning guru; Samsung and Intel invested in Expect Labs and Reactor; and Qualcomm made a sizable investment in BrainCorp. While some progress in speech processing and image recognition will be made, it will not be sufficient to justify the lofty valuations of recent funding events.


While my background is in fact in AI, for the last few years I have worked closely with the preeminent neural scientist Walter Freeman at Berkeley on a new kind of wearable personal assistant, one based not on AI but on neural science. During this time, I came to the conclusion that symbol-based computing technologies, including point-to-point “deep” neural networks (not neural science), cannot possibly deliver on the claims made by many of these well-funded AI labs and startups. Here are just three of the reasons:

  1. Every single innovation in the evolution of vertebrate brains was driven by advances in organism locomotion, and none of the new formations indicates the emergence of symbol processing in the cortex.
  2. Human intelligence is a product of resonating, coupled electric fields produced by the massive populations of neurons, synapses and ion channels of the cortex, resulting in dynamic, AM-modulated waves in the gamma and beta ranges, not static point-to-point neural networks.
  3. Human memories are formed in the hippocampus via the “phase precession” of theta waves, which transforms temporal events into the spatial domain without the use of symbols like time stamps.

Each of the above three empirical findings invalidates AI’s symbolic computation approach. I could provide more, but it is hard to fight the prevalent cultural myths perpetuated by mass media. Movies are a good example. At the beginning of the movie Transcendence, Johnny Depp’s character, an AI researcher (from Berkeley), makes the bold claim that “just one AI will be smarter than the entire population of humans that ever lived on earth”. By my calculation this estimate is off today by almost 20 orders of magnitude; it will take more than a few years to bridge that gap.
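The article does not give the inputs behind this calculation, but a back-of-envelope sketch shows how such an estimate can be framed. All the numbers below are hypothetical round figures of my choosing (humans ever lived, synapse counts, firing rates, supercomputer throughput), not the author’s; even this crude synaptic-event tally leaves a gap of many orders of magnitude, and counting the far richer ion-channel and field dynamics described above would widen it further.

```python
import math

# Hypothetical, illustrative inputs -- NOT the figures behind the
# article's "20 orders of magnitude" claim, which are not given.
HUMANS_EVER_LIVED = 1e11       # rough demographic order of magnitude
SYNAPSES_PER_BRAIN = 1e14      # commonly cited order of magnitude
AVG_EVENT_RATE_HZ = 10         # assumed mean synaptic event rate

SUPERCOMPUTER_FLOPS = 3.4e16   # roughly a mid-2010s top supercomputer

# Synaptic events per second: one brain, then every human who ever lived.
ops_per_brain = SYNAPSES_PER_BRAIN * AVG_EVENT_RATE_HZ
ops_all_humans = ops_per_brain * HUMANS_EVER_LIVED

# How many orders of magnitude separate "all humans ever" from one machine.
gap_orders = math.log10(ops_all_humans / SUPERCOMPUTER_FLOPS)
print(f"gap: ~{gap_orders:.1f} orders of magnitude")
```

Under these toy assumptions the gap is already around nine to ten orders of magnitude on raw event counts alone, before accounting for any of the electrochemical complexity the author argues symbol-based AI ignores.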

Which brings me back to the FLI letter. While individual investors have every right to lose their assets, the problem gets much more complicated when government regulators are involved. Here are the main claims of the letter I have a problem with (quotes from the letter in italics):

  1. Statements like “There is a broad consensus that AI research is progressing steadily”, even “progressing dramatically” (Google Brain signatories on the FLI web site), are simply not true. In the last 50 years there has been very little AI progress (more stasis-like than “steady”) and not a single major AI-based breakthrough commercial product, unless you count the iPhone’s infamous Siri. In short, despite the overwhelming media push, AI simply does not work.
  2. “AI systems must do what we want them to do” raises the question of who “we” is. There are 92 references included in this letter, all of them from computer scientists, AI researchers and political scientists; there are many references to an approaching, civilization-threatening “singularity” and several references to possibilities for “mind uploading”, but not a single reference from a biologist or a neural scientist. To call such an approach to the study of intellect “interdisciplinary” is just not credible.
  3. “Identify research directions that can maximize societal benefits” is outright chilling. Again, who decides whether research is “socially desirable”?
  4. “AI super-intelligence will not act with human wishes and will threaten the humanity” is just a cover justifying an attempted power grab by the AI group over competing approaches to the study of intellect.

Why should government regulators support a technology which has repeatedly failed to deliver on its promises for 50 years? The newly emerging branches of neural science which have made major breakthroughs in recent years hold much greater promise, in many cases exposing glaring weaknesses of the AI approach, so it is precisely these groups which will suffer if AI is allowed to “regulate” the direction of future research on intellect, whether human or “artificial”. Neural scientists study actual brains with imaging techniques such as fMRI, EEG and ECoG, and then derive predictions about brain structure and function from the empirical data they gather. The more neural research progresses, the clearer it becomes that the brain is vastly more complex than we thought just a few decades ago.

AI researchers, on the other hand, start with the a priori assumption that the brain is quite simple, really just a carbon version of a Von Neumann CPU. As Google Brain AI researcher and FLI letter signatory Ilya Sutskever recently told me, “brain absolutely is just a CPU and further study of brain would be a waste of my time”. This is an almost word-for-word repetition of a famous statement Noam Chomsky made decades ago, “predicting” the existence of a language “generator” in the brain.

The FLI letter signatories say: do not worry, “we” will allow “good” AI and “identify research directions” in order to “maximize societal benefits” and “eradicate diseases and poverty”. I believe it is precisely the newly emerging neural science groups that would suffer if AI is allowed to regulate the research direction in this field. Why should “evidence” like this allow AI scientists to control what biologists and neural scientists can and cannot do?

It is quite possible that the signatories’ motives are pure. But at the moment the AI lobby has a near monopoly on forming public opinion and attracting government dollars through the influence of a compliant media. Indeed, government regulators in this space are all AI researchers, often funding AI startups with taxpayer dollars and later taking jobs with the very same companies they funded and were supposed to “regulate”. And often, when government regulators lead, private VC funds follow in a sheep-like, “Don’t fight the Fed” movement.

There is yet another dimension to this story: in addition to the threat of an upcoming “singularity”, the FLI letter’s reference section contains many references to “mind uploading”. After a lifetime of immersion in Von Neumann architectures, many Silicon Valley prodigies, from Ray Kurzweil and Peter Thiel on, are obsessed with the idea of becoming immortal via a mind upload into silicon. The threat of death is a powerful emotion indeed, but it belongs in the realm of religious thinking rather than “dispassionate and objective science”.

Let me conclude with another movie quote. At the end of the movie A Beautiful Mind, the mathematician John Nash, played by Russell Crowe and recovering from mental illness, lectures a group of students in a cafeteria: “Trust mathematics, trust your teachers”, pauses, then adds with a wink: “just stay away from biologists, do not trust those guys.” Indeed, today’s AI researchers are all children of René Descartes, trusting in the absolute power of logic and mathematics as they push their religion of Cartesian dualism on the rest of us. Inadvertently, they tell us all to drink the AI Kool-Aid in their “SkyNet is coming” sermon.

I believe that neural science and biology, utilizing wearable sensors, are already much more fruitful than AI in delivering personal assistants that guide us through daily life and keep us healthier and less stressed, based on a better understanding of the brain rather than the logic of CPU programming and the algorithms of an AI focused on weapons and robotics. I hope the US press will rise to the defense of scientists’ rights to continue such free research, rather than limiting their work to “desirable” results. As Benjamin Franklin is often paraphrased: “Those who sacrifice liberty for security deserve neither.”

Roman Ormandy, founder of Embody Corp, is an entrepreneur working in the mobile personal assistant space, specifically using wearable sensors. His background is in computer science, 3D graphics, linguistics and neural science.
