Mentalizing the Machine

February 17, 2017

Brett J. Palmero
Department of Biology 
Lake Forest College 
Lake Forest, IL 60045

When humans interact with one another, they try to understand what the other is thinking. This process is called mentalizing: the inference of the mental states of fellow human beings. We mentalize to figure out each other’s intent (Chaminade et al., 2012, p. 1). It is the source of human empathy and interaction. Intent is thought to be what separates humans from cold, calculating machines. Alan M. Turing (1951), one of the fathers of artificial intelligence (AI), investigated whether a machine could fool a person into believing that it, too, was a person. His studies showed that the machine rarely fooled the human because the human did not sense intent in the machine’s actions. This distinct difference between human and machine has sparked a new interest: instead of trying to fool a human, perhaps researchers can substitute a machine for a human and elicit the same mentalizing response. Researchers now want to know whether a human who knows they are interacting with a machine will still try to understand what the machine is thinking. To measure this, they examined whether the brain areas activated during human-to-human mentalizing were also activated during human-to-machine mentalizing. If so, what does that say about the differences between humans and machines? If a human interacts with a machine in much the same way as with a fellow human, AI could one day be considered an equivalent to a human.
In an effort to understand human-to-machine interaction, an experiment was conducted. The researchers wondered how they could create a scenario in which a human would care to think about what a machine is thinking. It dawned on them that a game would stimulate mentalizing, because to win a game a human has to understand their opponent. The chosen game was rock-paper-scissors, and the opponents were a human, an AI, and a random number generator. The random number generator served as a negative control, since there is no point in thinking about what a random process is “thinking,” and the human opponent served as a positive control to establish which areas of the brain were activated. To eliminate confounding variables, such as winning or losing influencing mentalizing, the outcomes were all preordained; the only variable that could account for differences in brain activity was therefore the subject’s perceived opponent. The actual opponent was a system that had the human subject always win five games and lose five. A total of eighteen subjects were tested; one was excluded because of excessive movements that would have created outliers in the results. Each subject was told to devise strategies to beat the human and the AI. Believing their opponents could be beaten, they would have to mentalize to predict what each opponent would do. To fully understand the level of mentalizing the participants were undergoing, the researchers measured both brain activity and self-reports of how effective participants felt their strategies were. The researchers hoped strategy effectiveness would be similar between the human and AI opponents, which would mean the subject was mentalizing to the same degree while playing the game. Both variables were measured to fully characterize each interaction during the game.
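The rigged design described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors’ actual code: the function names and the exact way wins and losses were shuffled across games are assumptions. The key idea is that, whatever move the subject makes, the system chooses the opponent’s move after the fact so that the preordained outcome occurs.

```python
import random

# Which move each move beats (rock beats scissors, etc.)
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
# Inverse mapping: which move each move loses to
LOSES_TO = {v: k for k, v in BEATS.items()}

def rigged_opponent_move(subject_move, scheduled_outcome):
    """Pick the opponent's move so the preordained result occurs."""
    if scheduled_outcome == "win":          # subject must win:
        return BEATS[subject_move]          # opponent plays the move the subject beats
    if scheduled_outcome == "lose":         # subject must lose:
        return LOSES_TO[subject_move]       # opponent plays the move that beats the subject
    return subject_move                     # otherwise, force a tie

def make_schedule(n_wins=5, n_losses=5, seed=None):
    """Shuffle a fixed quota of wins and losses across the games."""
    schedule = ["win"] * n_wins + ["lose"] * n_losses
    random.Random(seed).shuffle(schedule)
    return schedule
```

Because the outcome depends only on the schedule and not on the subject’s choices, winning and losing are balanced across all three perceived opponents, leaving the perceived opponent as the only systematic difference between conditions.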

Figure 1. Experimental paradigm. Each game of rock-paper-scissors consisted of a countdown, a response screen, and a result screen, and lasted 3 s. Five games were played per round against a given opponent; each round was preceded by a 2 s video of the opponent (video framing) and followed by a 2.5 s video of the opponent providing feedback on the results of the game (video feedback).

The researchers hoped to find significant similarities between the human and AI fMRI scans. The hypothesis was that the parts of the brain active during human-to-human games should also be activated in a human-to-machine game. Working (short-term) memory, associated with the precuneus, should be activated to keep track of events as they happen. Other parts of the brain that should be equally active against the AI are the posterior intraparietal sulcus, for attention, and the dorsolateral prefrontal cortex, for executive decisions. All of these regions are involved in playing the game, but the main regions of interest were the medial prefrontal cortex and the right temporoparietal junction, which control mentalizing. If these were activated at all, the human would be thinking about what the machine was thinking; even slight activation would show the brain treating the AI as another human that needed to be understood. Another variable the researchers decided to measure was motor resonance, controlled by the left premotor cortex and the anterior intraparietal sulcus. Motor resonance occurs when mirror neurons make the brain mimic the actions of another person in an effort to understand what they are thinking. Since the subjects were actually interacting with their opponents rather than merely observing them, the researchers expected no activity in these areas. All of these areas were measured with fMRI to see which were active against each opponent.
After all the experiments were conducted, the results were tested for significance. Despite the researchers’ confidence, several of their hypotheses were proven wrong. For the self-reported effectiveness of strategy, subjects reported that strategies against the human were more effective than strategies against the AI or the random number generator (Chaminade et al., 2012, p. 5). There were also significant differences in brain activity across all of the areas of interest mentioned above: all showed significantly lower activity against the AI opponent than against the human opponent (Chaminade et al., 2012, p. 5). This meant that humans did not mentalize about machine thinking the way the researchers thought they would. Despite this, activity against the AI opponent was still elevated relative to the random number generator, showing some increased level of mentalizing. Another result that contradicted the hypothesis was the activation of the motor resonance areas (Chaminade et al., 2012, p. 6), which may have been due to subjects using their opponents’ past moves to predict their next ones. Overall, the brain scans showed that humans do not perceive machine cognition at the same level as fellow humans. Nevertheless, the significant difference between the AI opponent and the random control shows that some mentalizing is occurring.
A human needs hope to think. If there were no way of beating the AI, the human would never feel the need to strategize and understand what the AI was thinking. Since the subjects were told they could win if they understood their opponent, their mentalizing and game-playing areas were activated. Even though the activity was minimal, it exceeded that against the random number generator. The simple fact that the subjects tried to understand the machine shows that human-machine interaction is not the same as human-object interaction: humans believe there is something worth understanding in the machine. The reward of winning drove the process of mentalizing, but human interaction is not always about winning. Most people simply want to understand the fellow human they are with, and once an understanding is established, there is a connection between the two. This is considered a social reward, which may be why the mentalizing areas are much more active during human-to-human interaction. When the social reward is lost, human-to-AI interaction becomes mere problem solving: enough mentalizing is used to win, but not enough to discover the AI’s intent. This idea of intent is still what separates humans from machines.
This study showed a very important difference between humans and computers. Intent behind one’s actions is still the driving force of humans and what differentiates them from computers. Yet humans still react to an AI as if it were a pseudo-human: there is a need to understand what the AI is thinking so the human can adjust their own actions accordingly. Since some mentalizing is going on, AI may be able to adequately replace humans in certain jobs. For example, an AI could serve as a teacher, because students could learn from and interact with the AI at a level close to that of a human. The teacher-student relationship is usually a formal one and does not require the empathetic understanding of intent that a human provides.

Figure 2. Lateral and medial brain renders (rendered with FreeSurfer) showing activated clusters (p < 0.05, FWE corrected, k > 25 voxels) for the contrast between the intentional and random opponents.

For future studies, researchers need to discern whether a human mentalizes out of a need to understand intent or out of a desire for a social reward. If a human solely wants to establish a connection with a person, they do not necessarily need to understand that person’s intent; understanding what someone might do next does not always reveal their overall intent. If the need for a social reward is the sole driving force of human interaction, then mentalizing is just another reward-driven process that could be emulated with an AI. When intent is not a major driving force in decision making, the human is no longer different from the AI.

Note: Eukaryon is published by undergraduates at Lake Forest College, who are solely responsible for its content. The views expressed in Eukaryon do not necessarily reflect those of the College.

References

Chaminade, T., Rosset, D., Da Fonseca, D., Nazarian, B., Lutscher, E., Cheng, G., & Deruelle, C. (2012). How do we think machines think? An fMRI study of alleged competition with an artificial intelligence. Frontiers in Human Neuroscience, 6, 103.
Turing, A. M. (2004). Can digital computers think? (1951). In B. J. Copeland (Ed.), The Essential Turing (p. 476). Oxford University Press.
