By Grigori Guitchounts
October 27, 2021
image above: A rendering of dendrites (red)—a neuron’s branching processes—and protruding spines that receive synaptic information, along with a saturated reconstruction (multicolored cylinder) from a mouse cortex. Courtesy of Lichtman Lab at Harvard University.
WN: The article highlighted below got me thinking . . .
In my post Easter Song: Reflections on the Resurrection, you will find the following:
In modernity, it was claimed that “Science” had displaced Faith as the authoritative portal to understanding. But this is modern mythology. Noted former CBC Ideas broadcaster David Cayley highlights this in How To Think About Science, 24 hours of broadcast interviews with top historians, sociologists and philosophers of science. (Yes, I listened to them all.) One may also read his subsequent book, Ideas on the Nature of Science, though it does not include everything from the series. The full transcript may be found on Cayley’s website, and here.
In that transcript, p. 148, in an interview, Steven Shapin (an historian and sociologist of science at Harvard University, co-author of Leviathan and the Air-Pump, and author of A Social History of Truth and The Scientific Life) contends that “. . . trust is imperative for constituting every kind of knowledge. Knowledge-making is always a collective enterprise: people have to know whom to trust in order to know something about the natural world.” We read:
We might mean that the idea of science has some authority, that people think that science has got a method that guarantees the production of reliable knowledge, and that’s an intriguing idea, except the evidence is that not just lay people, but scientists themselves, have tremendous disagreements about what it is that method–the scientific method–might be. So we’re left with rather a puzzle about what we might mean by saying science is the characteristic culture of modernity, and I’d like to leave it at that because I’d like to encourage a lot more interest in the conditions under which we can talk about science and the modern world.
Religion, we should understand now, has enormous authority. Whether or not it’s increasing its authority in our public life, especially in this country [the USA], is another matter. But religion did not go away. Religion was not killed by science. For all that the commentators at the end of the 19th century or early 20th century said so, they were wrong. Religion is alive and well.
Sociologist of science Bruno Latour, in the transcript mentioned, explains further (p. 43), as summarized by Cayley:
The divide that Bruno Latour crossed separated science from the facts that science establishes. Under the modern constitution, an established fact was a free-standing reality. Once discovered, scientific laws were assigned to nature, or to God, and all marks of their human origins were erased. Latour argued that facts remain attached to the practices and assumptions that make them facts in the first place. A fact about climate, let’s say, depends on the apparatus that made the observation. It depends on the network of definitions that stabilize the meanings of basic terms in meteorology. It depends on the good faith of the researcher who attests to its veracity. It depends, in short, on what Bruno Latour calls science practices. These practices do not invalidate facts, rather they make the facts what they are. But this was not, at first, appreciated, Bruno Latour says.
Cayley likewise summarizes philosopher of science Nicholas Maxwell (the “NM” below):

String theory answers the need for a unified explanation of the phenomena of physics. It is pursued out of a deep conviction that there must be such an explanation, but, according to Maxwell, this must finally be a matter of faith rather than knowledge. It’s what he calls a problematic aim, and science generally has not wanted to face up to the fact that it rests on such metaphysical foundations. The consequence is that science has become neurotic. It is, as we say today, in denial.
NM: For scientists, I think, the big problem is that to acknowledge this does violence to the official view about the nature of science, especially when one comes to defend science against other things, like religion and politics. What is it that is so special about science? In science, nothing is taken as an article of faith. Nothing is taken on trust. Everything is open to being assessed empirically, and that’s an extremely simple line to take. In politics, there is dogma of various kinds. In religion, there is a book, there is an oracle of some kind or other. There is faith. In science, there isn’t. But if I’m right, and if you acknowledge that, for science to be possible at all, you have to make these highly problematic assumptions, then there is in a sense an article of faith, and it’s an article of faith that you can’t do without. That simple way of distinguishing science from religion and science from politics doesn’t work anymore.
Of course, there’s another way of doing it, but then you have to say that the difference is that in science these basic assumptions are subjected to constant scrutiny and that we are always seeking to modify and improve the assumptions we make in the direction of that which seems to be the most fruitful from the point of view of helping us to improve our empirical knowledge. That’s really what marks out science from these other things. That doesn’t go on at present because of the neurosis of science, but that’s what one should be able to say about science.
Bluntly then: science and religion (and politics!) are equally built on articles of faith . . .
The above discussion with Nicholas Maxwell is in a chapter entitled “From Knowledge to Wisdom.” This is also the title of his book, with the subtitle: A Revolution for Science and the Humanities.
Saint Paul writes in I Corinthians 1:
Not many of you were wise by human standards; not many were powerful; not many were of noble birth. 27But God chose the foolish things of the world to shame the wise; God chose the weak things of the world to shame the strong. 28He chose the lowly and despised things of the world, and the things that are not, to nullify the things that are, 29so that no one may boast in His presence.
30It is because of Him that you are in Christ Jesus, who has become for us wisdom from God: our righteousness, holiness, and redemption. 31Therefore, as it is written: “Let him who boasts boast in the Lord.”
Historian Sir Larry Siedentop writes of this inverse positing of true wisdom in Inventing the Individual: The Origins of Western Liberalism:
In effect, Paul’s vision of a mystical union with Christ introduces a revised notion of rationality–what he sometimes describes as the ‘foolishness’ of God. It is a foundation for a rationality reshaped through faith. It constitutes a depth of motivation unknown to ancient philosophy. ‘No one can lay any foundation other than the one that has been laid; that foundation is Jesus the Christ.’ For the sacrificial nature of love is open to everyone. And it counts everyone as a child of God. (p. 59)
So Saint Paul writes in the same chapter above:
20Where is the wise man? Where is the scribe? Where is the philosopher of this age? Has not God made foolish the wisdom of the world? 21For since in the wisdom of God the world through its wisdom did not know Him, God was pleased through the foolishness of what was preached to save those who believe.
And in that same letter, chapter 13, we read:
2If I have the gift of prophecy and can fathom all mysteries and all knowledge, and if I have absolute faith so as to move mountains, but have not love, I am nothing.
Knowledge in science, say Nicholas Maxwell and Saint Paul before him, must be accompanied by wisdom: a wisdom that embraces humility, eschews boastfulness, and demonstrates the sacrificial love on which to build a beneficent foundation of well-being in society for all.1
On a chilly evening last fall, I stared into nothingness out of the floor-to-ceiling windows in my office on the outskirts of Harvard’s campus. As a purplish-red sun set, I sat brooding over my dataset on rat brains. I thought of the cold windowless rooms in downtown Boston, home to Harvard’s high-performance computing center, where computer servers were holding on to a precious 48 terabytes of my data. I had recorded the 13 trillion numbers in this dataset as part of my Ph.D. experiments, asking how the visual parts of the rat brain respond to movement.
Printed on paper, the dataset would fill 116 billion pages, double-spaced. When I recently finished writing the story of my data, the magnum opus fit on fewer than two dozen printed pages. Performing the experiments turned out to be the easy part. I had spent the last year agonizing over the data, observing and asking questions. The answers left out large chunks that did not pertain to the questions, like a map leaves out irrelevant details of a territory.
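As a rough sanity check on the scale described above, here is a back-of-envelope sketch. The byte count, number count, and page count are from the text; the derived per-number and per-page figures are my own inference, not stated in the article.

```python
# Back-of-envelope check of the dataset's scale.
# Totals are taken from the article; derived figures are inferred.

TOTAL_BYTES = 48e12     # 48 terabytes of recordings
TOTAL_NUMBERS = 13e12   # 13 trillion data points
TOTAL_PAGES = 116e9     # 116 billion double-spaced pages, if printed

bytes_per_number = TOTAL_BYTES / TOTAL_NUMBERS   # roughly 3.7 bytes each
numbers_per_page = TOTAL_NUMBERS / TOTAL_PAGES   # roughly 112 per page

print(f"{bytes_per_number:.1f} bytes per number")
print(f"{numbers_per_page:.0f} numbers per page")
```

The derived figures hang together: about 112 numbers on a double-spaced page is plausible, and under 4 bytes per number suggests compact numeric storage.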
But, as massive as my dataset sounds, it represents just a tiny chunk of a dataset taken from the whole brain. And the questions it asks—Do neurons in the visual cortex do anything when an animal can’t see? What happens when inputs to the visual cortex from other brain regions are shut off?—are small compared to the ultimate question in neuroscience: How does the brain work?
The nature of the scientific process is such that researchers have to pick small, pointed questions. Scientists are like diners at a restaurant: We’d love to try everything on the menu, but choices have to be made. And so we pick our field, and subfield, read up on the hundreds of previous experiments done on the subject, design and perform our own experiments, and hope the answers advance our understanding. But if we have to ask small questions, then how do we begin to understand the whole?
Neuroscientists have made considerable progress toward understanding brain architecture and aspects of brain function. We can identify brain regions that respond to the environment, activate our senses, generate movements and emotions. But we don’t know how different parts of the brain interact with and depend on each other. We don’t understand how their interactions contribute to behavior, perception, or memory. Technology has made it easy for us to gather behemoth datasets, but I’m not sure understanding the brain has kept pace with the size of the datasets.
Some serious efforts, however, are now underway to map brains in full. One approach, called connectomics, strives to chart the entirety of the connections among neurons in a brain. In principle, a complete connectome would contain all the information necessary to provide a solid base on which to build a holistic understanding of the brain. We could see what each brain part is, how it supports the whole, and how it ought to interact with the other parts and the environment. We’d be able to place our brain in any hypothetical situation and have a good sense of how it would react.
The question of how we might begin to grasp the entirety of the organ that generates our minds has been pressing me for a while. Like most neuroscientists, I’ve had to cultivate two clashing ideas: striving to understand the brain and knowing that’s likely an impossible task. I was curious how others tolerate this doublethink, so I sought out Jeff Lichtman, a leader in the field of connectomics and a professor of molecular and cellular biology at Harvard.
…Late one night, after a long day of trying to make sense of my data, I came across a short story by Jorge Luis Borges that seemed to capture the essence of the brain mapping problem. In the story, “On Exactitude in Science,” a man named Suarez Miranda wrote of an ancient empire that, through the use of science, had perfected the art of map-making. While early maps were nothing but crude caricatures of the territories they aimed to represent, new maps grew larger and larger, filling in ever more details with each edition. Over time, Borges wrote, “the Art of Cartography attained such Perfection that the map of a single Province occupied the entirety of a City, and the map of the Empire, the entirety of a Province.” Still, the people craved more detail. “In time, those Unconscionable Maps no longer satisfied, and the Cartographers Guilds struck a Map of the Empire whose size was that of the Empire, and which coincided point for point with it.”
The Borges story reminded me of Lichtman’s view that the brain may be too complex to be understood by humans in the colloquial sense, and that describing it may be a better goal. Still, the idea made me uncomfortable. Much like storytelling, or even information processing in the brain, descriptions must leave some details out. For a description to convey relevant information, the describer has to know which details are important and which are not. Knowing which details are irrelevant requires having some understanding about the thing you’re describing. Will my brain, as intricate as it may be, ever be able to make sense of the two exabytes in a mouse brain?
To get a better sense for this, I called Andrew Saxe, a computational neuroscientist at Oxford University. Saxe agreed that it might be informative to make our models truer to reality. “This is always the challenge in the brain sciences: We just don’t know what the important level of detail is,” he told me over Skype.
How do we make these decisions? “These judgments are often based on intuition, and our intuitions can vary wildly,” Saxe said. “A strong intuition among many neuroscientists is that individual neurons are exquisitely complicated: They have all of these back-propagating action potentials, they have dendritic compartments that are independent, they have all these different channels there. And so a single neuron might even itself be a network. To caricature that as a rectified linear unit”—the simple mathematical model of a neuron in DNNs—“is clearly missing out on so much.”
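Saxe’s caricature can be made concrete. A minimal sketch of a rectified linear unit follows; the function name and values are illustrative only, not drawn from any particular library. The point is how little survives of the biology: a weighted sum passed through max(0, x), with all dendritic compartments and channel dynamics abstracted away.

```python
def relu_neuron(inputs, weights, bias):
    """One 'neuron' as modeled in a deep network: a weighted sum
    of inputs, rectified at zero (a rectified linear unit)."""
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(0.0, activation)

# Sub-threshold input: the unit is silent.
print(relu_neuron([1.0, 2.0], [0.5, -1.0], 0.0))  # -> 0.0

# Supra-threshold input: the unit passes the sum through linearly.
print(relu_neuron([1.0, 2.0], [0.5, 1.0], 0.5))   # -> 3.0
```

Everything a real neuron does beyond this two-line rule is what Saxe means by “clearly missing out on so much.”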
I have found myself revisiting my undergrad days, when I held science up as the only method of knowing that was truly objective (I also used to think scientists would be hyper-rational, fair beings paramountly interested in the truth—so perhaps this just shows how naive I was).
It’s clear to me now that while science deals with facts, a crucial part of this noble endeavor is making sense of the facts. The truth is screened through an interpretive lens even before experiments start. Humans, with all our quirks and biases, choose what experiment to conduct in the first place, and how to do it. And the interpretation continues after data are collected, when scientists have to figure out what the data mean. So, yes, science gathers facts about the world, but it is humans who describe it and try to understand it. All these processes require filtering the raw data through a personal sieve, sculpted by the language and culture of our times.
Please click on: Neuroscience’s Existential Crisis
- Further, for those interested, from my post Easter Song: Reflections on the Resurrection, I add:
There is a rough parallel between earlier “Historical Jesus Quest” historians in their dismissal of New Testament historical reliability, and earlier scientific research that to this day (scientific) materialists claim explains everything–without reference to God.
A classic text on this is by physicist Stephen M. Barr who writes:
The question before us, then, is whether the actual discoveries of science have undercut the central claims of religion, specifically the great monotheistic religions of the Bible, Judaism and Christianity, or whether those discoveries have actually, in certain important respects, damaged the credibility of materialism (Modern Physics and Ancient Faith (2003), pp. 2 & 3.)
The author, in an irenic, often understated manner, concludes the latter, saying at the end of the book:
It is certainly conceivable, if to many of us not credible, that materialism is true, but surely it is not irrational to ask for somewhat stronger arguments on its behalf (p. 256).
So it is with issues of the historical reliability of the New Testament: it is conceivable that the documents are overall not very historically reliable. But the evidence does not compel one towards that conclusion. It is not irrational to assert that.
Barr repeatedly disclaims offering “rigorous scientific proofs” for traditional Judeo-Christianity. (There are none!) Rather, he systematically carves out room for its possible embrace based on what is known from modern physics. Not a few surprises await (at least for the previously uninformed reader–like me).
You may read more in the post mentioned.