God’s helicopter and computer brains


There’s a modern parable about a man whose rowing boat loses its oars and is swept out to sea. Onlookers at the shore’s edge call out to the man. He shouts back that they are not to worry because God will save him. The onlookers alert the coastguard, who sends a helicopter that lowers a rescuer down to the man. The man refuses to take the rescuer’s hand, reassuring him that God will save him. Eventually, as the boat drifts further out into open water, a large wave capsizes it and the man drowns. On entering heaven, the man asks God why he didn’t save him. In frustration, God replies, “What more could I have done? I sent you a helicopter!”

In the summer of 1994, I was sent a reading list to prepare me for university life. All of the books were interesting, but two altered my view of the world more than anything I had read before or have read since. The first of these was “The Blind Watchmaker” by Richard Dawkins. It is a book that explains evolution, and it explains it with the common objections of a sceptic in mind. As such, it is an act of great teaching because it anticipates its audience.

Dawkins takes apart the case made by creationists under the guise of ‘intelligent design’. He shows that organs such as the eye are not irreducibly complex, as creationists suggest, but can evolve, tiny step by tiny step, with each new move conferring an advantage on the organism that possesses it. As a rebuttal of silly, religiously motivated ideas, The Blind Watchmaker is a masterpiece. However, I also think Dawkins overreaches in this book and in his other works, when he argues that evolution demonstrates the nonexistence of God. You cannot really demonstrate such a thing. To many believers, evolution is God’s helicopter; the physical, rational manifestation of the higher truth that they believe in. God is, was and always will be unfalsifiable.

The second book that changed me was “The Emperor’s New Mind” by Roger Penrose. This is an argument against the idea that the brain works like a computer. Central to the case is a theorem published in 1931 by the mathematician Kurt Gödel. Gödel’s theorem shows that any consistent formal system of logic rich enough to express arithmetic will contain true mathematical statements that it cannot prove. Penrose demonstrates that a computer is equivalent to a formal system and therefore no computer can apprehend all mathematical truths. Assuming that human mathematicians can apprehend these truths, the mind of a human mathematician cannot be a computer. Indeed, human mathematicians tend to switch between different formal systems for different purposes.
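For readers who want the formal core of that argument, here is a rough sketch of Gödel’s first incompleteness theorem. This is my informal paraphrase, not Penrose’s exact formulation:

```latex
% Gödel's first incompleteness theorem, stated informally.
% (\nvdash below requires the amssymb package.)
% Let F be any consistent, effectively axiomatized formal system
% strong enough to express elementary arithmetic. Then there is a
% sentence G_F (a ``Gödel sentence'') such that:
\[
  F \nvdash G_F
  \qquad\text{and}\qquad
  F \nvdash \neg G_F ,
\]
% even though G_F is true of the natural numbers. Penrose's further
% step: a computer running a fixed program is equivalent to some
% such system F, so it can never prove its own Gödel sentence,
% whereas, he argues, a human mathematician can see that it is true.
```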

Penrose doesn’t explain exactly what he thinks the human mind is, although he does put forward a few ideas. Critics may wonder whether Penrose is indulging in a spot of mysticism: if we deny that human brains are like computers, then what are they? Are we suggesting that humans have a spirit or a soul? I don’t think Penrose suggests any such thing; he merely claims that we need a better mechanism to explain human cognition than that of a computer. The fact that he could not describe this mechanism does not invalidate his criticism of the human-as-computer concept.

So Penrose popped into my mind when I read a piece by Robert Epstein, a research psychologist, for the digital magazine Aeon. I was reminded of it again when I saw Dylan Wiliam recently return to the article on Twitter.

Epstein believes that cognitive scientists are wrong to view the human mind as an information processing system and to talk about it using terms borrowed from computer science. He suggests that every epoch has developed a model of the mind based upon its own preoccupations, from spirit to hydraulics to clockwork mechanisms to chemistry and now to computers. And the computer analogy is sticky: cognitive scientists seem unable to talk about the mind without resorting to it. He rightly cautions us against the errors we will make if we start to think that brains are literally like computers.

For instance, there is no reason to believe that we will soon be able to model human minds or download our brains to the internet.

Like Penrose, Epstein doesn’t really develop an alternative model beyond a few hints. Instead, his view strikes me as a mix of behaviourism and some of the cognitive science that he criticises. Humans, he believes, respond to external stimuli such as rewards and punishments. Specifically, their brains are changed in an orderly way by experience. I can’t see how that differs much from suggesting that they store information. I can even point to the cognitive science concept of a limited working memory as a possible mechanism for ensuring the orderliness of these changes.

I share Epstein’s scepticism about the idea that human brains literally are computers, but I’m much more relaxed about using computers as a model, especially if this model makes testable predictions. That’s probably the key difference between the computer model and those of previous eras. The hydraulic model, as far as I know, did not have much predictive power. Yet the computer model does, and some of its predictions stand up, which is the best you can hope for in science.

I am also keen to hear more about Epstein’s take on behaviourism. If this has greater predictive power in some areas then it might help us edge closer to the truth of what is actually going on in our heads.

However, I would caution against ever thinking we’ve nailed it, whatever we come up with. Some arguments are intrinsically philosophical and cannot be answered by understanding the underlying mechanisms. And these arguments are not esoteric details; they affect how we live our everyday lives and even the way we might frame education policy.

For instance, the concept of ‘free will’ is critical to how we view human behaviour. I am in favour of emphasising human choice and agency because I believe it leads to a healthier society than one where everything is pathologised. And, ironically, this is clearly a choice. Am I free to choose free will? Yes, I am free to choose free will.

Why? Well, there are those who would argue that scans of brains show us that parts of them light up when we make a decision, even before we are consciously aware of the decision we have made. But you can’t disprove my belief in free will in this way, just like you can’t disprove someone’s belief in God by pointing to the process of evolution. Just as God’s helicopter is the mechanism by which he attempts to save the man, these electrical pulses in the brain, whatever they are, represent the mechanism by which free will works. They are a description.

Instead, to challenge the existence of God or free will requires moral reasoning, something we are all capable of engaging in and something that is notably absent in the logic of computers.


17 thoughts on “God’s helicopter and computer brains”

  1. Much confusion results from writers using the terms ‘brain’ and ‘mind’ without explaining the difference.

    Brains (and bodies) do process information. That’s the function of nervous systems.

    Referring to information processing doesn’t imply the use of the computer analogy. Plants process information physiologically. Animals with nervous systems process information. Computers process information. They all do it in different ways.

    Many cognitive scientists abandoned the computer model decades ago, because it didn’t hold for higher-level cognition.

    Some are still using it, quite reasonably, as an analogy to help explain particular functions. Computers and brains are similar in some respects but not in others. Some computer-brain analogies hold true. Others don’t.

    Analogies and models can yield testable hypotheses. The hypotheses can be tested; models usually can’t. Models (hydraulic model, brain-as-computer, behaviourism) are usually too complex and non-specific to have ‘predictive power’.

    The key to the mysteries is the biology, which might or might not take decades to explore. That doesn’t mean it’s unfathomable.

    No need to introduce concepts like free will or to apply moral reasoning. Even if human cognition consists of nothing but the functions of brain and body, that doesn’t mean it can be reduced to the functions of brain and body; systems produce emergent properties, and some behaviours result in more desirable long-term outcomes for everybody than others.

    1. A thorough and convincing reply to a very meaty and stimulating post! The helicopter analogy is very good (and the way you construct your posts is brilliant, Greg!), but I don’t think Dawkins argues that natural selection proves the case against God. I think he just says “I have no need of that hypothesis”, or words to that effect.

      I agree with logicalinstrumentalism that we don’t have to switch to moral reasoning to deal with unsolved questions about the brain/mind. I also agree that saying something can be explained by the functions of physical entities does not mean that it can be reduced to those functions; conflating the two is a very common error, particularly among people who are antagonistic to science.

      1. After being engaged in edutwitter debates for two years, I would ask: is it antagonism, or is it that these people ask questions that those who believe in x would rather not answer? Sceptics are often the people who cut the deepest, but they are also the most insightful about the errors in the thinking and theories that we all hold.

  2. “Instead, to challenge the existence of God or free will requires moral reasoning, something we are all capable of engaging in and something that is notably absent in the logic of computers.”

    People at MIT are hard at work on creating moral machines (http://moralmachine.mit.edu/). Their basic approach seems to be crowdsourcing our collective sense of morality. Once they have a large enough data set, they should be able to train a neural network to make morally acceptable decisions, at least in the narrow context of driving a vehicle (a rough sketch of what such training might look like follows below).

    Machine learning techniques are still in their infancy (relative to the timescale of human existence) and yet they have been able to do some incredible things. I would hesitate to make broad claims about what computers can’t do.

    See also: https://www.weforum.org/agenda/2017/11/3-ways-to-build-more-moral-robots/
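    As a concrete illustration of that crowdsourcing idea, here is a minimal sketch in Python using scikit-learn. Every feature, verdict and threshold in it is invented for illustration; this is not MIT’s actual data or pipeline.

    ```python
    # Hypothetical sketch: train a small neural network on crowdsourced
    # verdicts about driving dilemmas. All numbers below are invented.
    from sklearn.neural_network import MLPClassifier

    # Invented features per dilemma: [pedestrians at risk,
    # passengers at risk, pedestrian crossing legally (1/0),
    # risk created by swerving (0-1)].
    X = [
        [3, 1, 1, 0.2],
        [1, 4, 0, 0.9],
        [2, 2, 1, 0.5],
        [0, 2, 0, 0.1],
    ]
    # Majority crowd verdict per dilemma: 1 = swerve, 0 = stay course.
    y = [1, 0, 1, 0]

    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(X, y)

    # The trained network now imitates the crowd's aggregate judgement.
    print(clf.predict([[2, 1, 1, 0.3]]))
    ```

    Whether the crowd’s majority verdict amounts to morality is, of course, exactly the question raised in the replies below.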

    1. Perhaps they will be able to do so in the future. I am not aware of them being able to do this yet. You have to bear in mind that AI advocates have promised much in the past. Maybe we are on the verge of a revolution. Maybe not.

      1. I think AI advocates keep forgetting that they are still in control of setting the parameters within which these machines can operate. It’s hard not to get excited by technology and science, but we also need to keep a firm grip on reality here: where are we, what does it actually do or mean, who is in control of or contributing to the project, and, most of all, what impact does it have on reality?

  3. Worth checking out Dennett’s video on the perils of telling people they have no free will, and the counter-argument that consequences don’t determine reality.

    http://breakingthefreewillillusion.com/dennett-stop-telling-tales-about-free-will/

    I find I agree with both sides. As with telling people how to make explosives, not all true facts should be explained to everyone.

    But if you are thinking about free will, it is really worth thinking about what it is you think has free will. Many think it is something other than their physical body, but then they don’t have a good explanation as to why damage to our brains so curtails our choices. If it is our physical body, then there are two possibilities for a duplicate of that body (if such a thing were possible): it would either make exactly the same decisions as us, or it would randomly make different choices with some probability. There are only two options: determinism or some randomness. Neither changes how we enjoy things such as the music we love.

  4. “Once they have a large enough data set they should be able to train a neural network to make morally acceptable decisions at least in the narrow context of driving a vehicle.”

    1. A morally acceptable decision may be that the computer-driven car doesn’t run over someone. But to say that it avoided them ON PURPOSE doesn’t make sense. Acting on purpose only makes sense for a human being who refrains from doing something when they have a choice to do it. A computer can’t truly and freely choose otherwise.

    2. Expanding the car situation to the broader moral arena, what would be the meta-ethical basis for “morally acceptable decisions”? A survey of humans, with the winning decision being the one that at least 51% ticked?

    1. People encode moral decisions into rules all the time. A recent case in Canada involved rules about performing liver transplants and patient alcohol consumption. What difference is there between writing rules on paper and telling people they must follow them, and writing them into a computer? (A sketch of such an encoding follows.)
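      A paper rule translates into code almost word for word. Here is a hypothetical sketch in Python; the function name, the six-month abstinence threshold and the MELD cutoff are invented for illustration and are not the actual Canadian policy.

      ```python
      # Hypothetical encoding of a written transplant rule, as discussed
      # above. Both thresholds are invented illustrations, not the
      # actual Canadian policy.
      def eligible_for_transplant(months_abstinent: int, meld_score: int) -> bool:
          """Apply the written rule mechanically, just as a committee
          following the paper version would be asked to do."""
          ABSTINENCE_REQUIRED_MONTHS = 6   # assumed threshold
          MIN_MELD_SCORE = 15              # assumed severity cutoff
          return (months_abstinent >= ABSTINENCE_REQUIRED_MONTHS
                  and meld_score >= MIN_MELD_SCORE)

      print(eligible_for_transplant(months_abstinent=7, meld_score=20))   # True
      print(eligible_for_transplant(months_abstinent=2, meld_score=28))   # False
      ```

      Whether such a rule should ever be applied blindly is exactly the point taken up in the replies below.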

      1. A lot of school rules are effectively codified moral judgements.

        The difference is that a good school upholds its rules, unless it is not sensible to do so, and there is always scope for reinterpretation if situations change. A computer applies rules blindly.

      2. Chester,
        Yes, but there is a reason justice is depicted with a blindfold and a book of rules. While a good school is one that modifies and reinterprets rules for the best result, that is a circular definition of a good school. In some cases, blind justice is what is needed to stop judges tainting the outcome with their prejudice. In other cases, it would prevent a good judge from seeing that the rules are inadequate for the circumstances. The problem for legal justice is not that we don’t understand how to make a system that allows for appeals to more expert judges, changes in the law and pardons, but that such a system is expensive.

      3. Anywhere someone plans to replace human decision makers with an AI, they would need to establish, by testing with prior cases, that the AI does well enough for the cost of appeals to be outweighed by the savings of using it. If that is the case, then using an AI provides a better moral outcome, because there is a cost to making good decisions, and that cost is lower with the AI doing some of the work.

      4. Moral “rules” are not algorithms, although that is the assumption a materialist makes. To say that John shouldn’t kill Jane, his wife, may seem like a “rule”, something reduced to an equation so that a computer can “understand” what the moral “options” are, but it can immediately be seen not to be one when we ask WHY (in a moral sense!) John should not kill his wife. If the answer is that it’s wrong because there exists a government law that says it is wrong for John to kill Jane in this particular situation, then consider this.

        “You”, a robot, have Anne Frank hidden in your cellar. Joseph Goebbels knocks at “your” door and asks if “you” have any Jews in “your” house. Joe reinforces the government’s election promise (and it’s a moral “rule” to keep your promises!) to protect its majority citizens from the scourge of Jewish culture, blah, blah, blah…but “your” program says to protect life or handing over someone to certain death is against the “rules” or Jews are human beings and not untermensch or….

        Encoding a moral decision into a rule is not what ethics is really about. That’s just knowing that such-and-such a rule exists. What morality is ultimately about is meta-ethics, i.e. knowing Why something is moral and understanding that it has to be based on something that is itself moral and not drawn from pragmatic considerations or an evolutionary idea about maximising fitness or…take your choice.

        This is why AI will never be able to capture what morality is about: morality is not a material entity and cannot be reduced to algorithms or mathematical rules, as much as materialists are forced to argue otherwise by the necessity of their evolutionary worldview.

        In any case, your liver story misses my original point. The distribution of livers according to someone’s alcohol consumption is not a moral “rule” but, in all probability, one of expediency: there are not enough livers to go around, so we give them to the people we think will live the longest. To say that people who consumed too much alcohol don’t deserve a new liver because it is morally wrong to give them one completely begs the question of what the absolute “correct” morality is.

  5. If evolution is God’s helicopter, why is it so wasteful and directionless? I think it unlikely that a deity would use a mechanism so cruel and wasteful to advance his/her creation. If there is a higher being behind it, he/she cannot be good. Heartless, vindictive and inefficient, yes, but not good.
