Human intelligence, artificial intelligence and behavior

In the morning, I wake up. I head first to the bathroom. My next stop is the kitchen, where I make myself and my wife an espresso. Over the course of a year I probably deviate from this morning ritual once or twice, and never for reasons I can help. This is a pattern of behavior that my wife associates with me. We already have the technology in place to replace this pattern of behavior with a robot that would do exactly the same thing, even mimicking the trip to the bathroom. But that robot wouldn’t be me. 

Extrapolate my behavior to a hundred other contexts, maybe a thousand, if you can observe me undetected for long enough (being observed would alter my behavior), and you will capture a lot of the things associated with being me without ever cloning a David Amerland. 

The example perfectly encapsulates the current question of intelligence in AI. The different robots that could emulate my behavior in different contexts would not mirror my intentions or recreate my thoughts. Their behavior would be no more than the rendition of a specific pattern they have come to recognize, and pattern recognition is what we observe when we see AI at work. 

The Chinese Room Experiment

But, I can almost hear you say, if a bot can do what you do in certain contexts, David, does it really matter that it isn’t you? And the answer to that is no, within that context. This is, essentially, the argument at the heart of John Searle’s Chinese Room thought experiment.

Within certain contexts it might make more sense to use a machine than an organic, namely me, to do something. But the intent and meaning would not be the same. To use my morning ritual example: sure, my wife would get her morning coffee in bed, but you could argue that the sense of care and affection the action denotes would be absent; only its utility would remain. 

That, too, is the point behind the use of AI systems in everyday life and business, and, perversely, it is also the argument against their use in certain contexts. If a customer service department is all about utility, solving certain problems in certain contexts on a 24-hour basis, then a chatbot will do a brilliant job and will even add some affective garnish by learning to say “Thank you” and “I am sorry” when warranted. 

If, however, we are seeking to connect through an empathetic aspect of human contact and have someone understand the nuance of a predicament that exists beyond the mere presence of a problem, then an AI is not the “droid we’re looking for,” to paraphrase a Star Wars moment. 

We needn’t look deep for nuance, either. A customer issue that begins with “this is the second time X has happened” provides more than factual information about a problem. There is already a human level of frustration there, a tinderbox that an affectively deaf response will make explode. If brand loyalty and customer satisfaction are what you’re trying to gain through the use of a chatbot, then the risk of the exact opposite happening grows exponentially. This is why people don’t like interactive voice response (IVR) systems and would rather wait on hold to talk to a human operator. 

All of which brings us to the question of intelligence. Researchers define intelligence as “a general mental ability for reasoning, problem solving, and learning,” but in organic life forms intelligence acquires a few more dimensions that are harder to define: love, affection, care and empathy, to mention just a few, all play a role. 

Behavior And Meaning

When I share a piece of gum with a friend I sometimes go running with, I care nothing about the health of his teeth. I use the moment, instead, to create a common memory that cements our relationship. There are social aspects to the action that stem from the fact that both he and I live in a physical world and experience its friction on us. This is what embodied cognition ultimately generates, and this is a form of intelligence that reveals a deep understanding of the world and a meaning that resides behind every action. 

There are some things here we intuitively know but cannot yet truly understand. There is, for instance, no settled explanation for consciousness. We’re beginning to formulate a framework to better understand it, but right now we cannot see it, measure it or hope to improve upon it. 

Meaning is something only human brains actively look for when they examine the factuality of the external world. I have been explaining this for some time now, and it still needs further unpacking. Communication without meaning (our Chinese Room thought experiment) quickly devolves into something without effect, which is why semantics plays such a key role in symbol representation and search. 

In other words, unlike any AI on the planet, we know the “why” of our “what,” and that knowledge changes the parameters of our operation and substantially affects our performance. That might also be the measure of true intelligence. 

Vision is an interpretation of sensory input that goes beyond the input itself. Perception is reality and words express more than what we say they do.

This is exactly why words alone have the power to raise us beyond our limits, as in the case of the Virgin Queen’s speech at Tilbury, or to depress us and make us feel less than we are. 

“The reductionist approach holds that we can extract what is essential to understanding the concept of information and its dynamics from the wide variety of models, theories and explanations proposed. The non-reductionist argues that we are probably facing a network of logically interdependent but mutually irreducible concepts.”

We cannot divorce intent and meaning from adult behavior and adult behavior is intelligent by default. That is an intelligence that incorporates those elements of cognition that are abstract and immaterial but which nonetheless lead to correlative changes in neural substrates in the brain and cellular clusters in the body. 

Business Practices And Personal Goals

These are the practicalities, then: if you’re running a business that implements a chatbot, you need to narrowly define the context of its operation in order to guarantee the quality of the experience, or risk achieving the exact opposite of what you hoped for. 

When it comes to you (the neurobiomechanical unit that identifies itself every time you say “I”), your personal goals are always driven by a complex structure created by social aspirations, ambitions, beliefs, values, self-awareness and purpose, filtered through memories, knowledge and experiences to create what we call “character.” Ultimately, that too is only behavior. 

A company whose people’s character aligns with its reason for being is unstoppable. A person whose internal world maps to their external reality is incredible. 

In both cases we strive to reduce the disconnect, minimize the sense of cognitive dissonance that is experienced every time we fail to live up to our own established benchmark of behavior.

And since we are talking about AI, we should understand that it is a tool just like any other, but that doesn’t mean it cannot have unintended consequences. That’s nothing new though, right? 


Go Deeper: 

Intentional by David Amerland: Take Control Of Your Actions.
The Sniper Mind by David Amerland: Make Better Decisions.

You can read a sample chapter of The Sniper Mind here.

Get Intentional on Audible.