“The human brain isn’t really empty, of course. But it does not contain most of the things people think it does – not even simple things such as ‘memories’.
… Our shoddy thinking about the brain has deep historical roots, but the invention of computers in the 1940s got us especially confused. For more than half a century now, psychologists, linguists, neuroscientists and other experts on human behaviour have been asserting that the human brain works like a computer.
… computers really do operate on symbolic representations of the world. They really store and retrieve. They really process. They really have physical memories. They really are guided in everything they do, without exception, by algorithms.
Humans, on the other hand, do not – never did, never will. Given this reality, why do so many scientists talk about our mental life as if we were computers?
… in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit. That spirit ‘explained’ our intelligence – grammatically, at least.
The invention of hydraulic engineering in the 3rd century BCE led to the popularity of a hydraulic model of human intelligence, the idea that the flow of different fluids in the body – the ‘humours’ – accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.
By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.
… Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software.
… The information processing (IP) metaphor of human intelligence now dominates human thinking, both on the street and in the sciences.
… But the IP metaphor is, after all, just another metaphor – a story we tell to make sense of something we don’t actually understand. And like all the metaphors that preceded it, it will certainly be cast aside at some point – either replaced by another metaphor or, in the end, replaced by actual knowledge.
… Misleading headlines notwithstanding, no one really has the slightest idea how the brain changes after we have learned to sing a song or recite a poem. But neither the song nor the poem has been ‘stored’ in it. The brain has simply changed in an orderly way that now allows us to sing the song or recite the poem under certain conditions.
… Fortunately, because the IP metaphor is not even slightly valid, we will … never achieve immortality through downloading. This is not only because of the absence of consciousness software in the brain; there is a deeper problem here – let’s call it the uniqueness problem – which is both inspirational and depressing.
… there is no reason to believe that any two of us are changed the same way by the same experience … Those changes, whatever they are, are built on the unique neural structure that already exists, each structure having developed over a lifetime of unique experiences.
… This is inspirational, I suppose, because it means that each of us is truly unique, not just in our genetic makeup, but even in the way our brains change over time. It is also depressing, because it makes the task of the neuroscientist daunting almost beyond imagination.
… This is perhaps the most egregious way in which the IP metaphor has distorted our thinking about human functioning. Whereas computers do store exact copies of data – copies that can persist unchanged for long periods of time, even if the power has been turned off – the brain maintains our intellect only as long as it remains alive.
… Meanwhile, vast sums of money are being raised for brain research, based in some cases on faulty ideas and promises that cannot be kept … the $1.3 billion Human Brain Project launched by the European Union in 2013. Convinced by the charismatic Henry Markram that he could create a simulation of the entire human brain on a supercomputer by the year 2023, and that such a model would revolutionise the treatment of Alzheimer’s disease and other disorders, EU officials funded his project with virtually no restrictions. Less than two years into it, the project turned into a ‘brain wreck’, and Markram was asked to step down.
… We are organisms, not computers. Get over it.”
source via James Wallbank
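A minimal sketch, using only the Python standard library, of what the essay means when it says computers “really store and retrieve”: a stored copy is bit-for-bit identical to the original and can be reproduced exactly on demand, which is precisely what the essay argues brains do not do. The poem text and file name here are invented for illustration.

```python
import hashlib

poem = "The rain in Spain stays mainly in the plain."

# "Store": write the bytes to a file on disk.
with open("poem.txt", "w") as f:
    f.write(poem)

# "Retrieve": read the bytes back.
with open("poem.txt") as f:
    retrieved = f.read()

# The retrieved copy is exactly identical to the original:
# the two digests match byte for byte.
original_digest = hashlib.sha256(poem.encode()).hexdigest()
retrieved_digest = hashlib.sha256(retrieved.encode()).hexdigest()
assert original_digest == retrieved_digest
print(retrieved == poem)  # True: retrieval reproduces the stored data exactly
```

The point of the contrast: no analogous operation exists for a person who has “stored” a poem by memorising it; recitation reconstructs it from a changed brain rather than copying it out of a location.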
2 Comments
Fascinating read and the summary statement is perfect. But here’s another perspective – with each new “theory” throughout time, the process of researching it and later disproving it has taught us more and more about the structure of the brain and how it works.
Much of this work has resulted in medical breakthroughs in treating and curing many neurological diseases. Just a few examples:
– neurostimulators that significantly reduce epileptic seizures in drug refractory epilepsy;
– tumor treating fields recently developed by Israeli company Novocure to treat recurring glioblastoma;
– transcranial magnetic stimulation, also developed by an Israeli company – Brainsway, to treat major depressive disorder.
Putting forth theories and then researching them has value regardless of the outcome. We learn through both success and failure (I would even posit that we learn more from failure than we do from success).
From this perspective, the current IP theory will lead us to learn more and, hopefully, come one step closer to the ever-elusive “actual knowledge” that the article refers to.
There are many examples in science in which we have been able to make something work without necessarily understanding the underlying mechanisms.
I am convinced there are examples of interventions which looked useful at the time but later turned out to have undesirable side-effects … one example that comes to mind right now is mercury fillings in dentistry.
I think it’s OK to pursue an existing theory … however, I believe that better research into underlying mechanisms (which often requires revisiting core ideas that have become taken for granted) can open up avenues of exploration that we can’t even imagine within an outdated paradigm. What’s more, I believe that once there is evidence that questions an established theory, it becomes a moral imperative to do this kind of research, and a moral failure to uphold a story just because we are comfortable and used to it … such as pumping $1.3 billion into a dead-end research program.
If you follow this thread, I believe, you will come to major faults in common scientific methodology … and if you follow that long enough, you will come to faults in our underlying metaphysics of what the world is.