Humans and Mechanical Morons: Human Advancement in the Machine Age

(Video Transcript Below)

PART 1: DYSTOPIAN PROPHECIES

Many fear that intelligent machines will disrupt our social systems and bring about a dark, dystopian future. If that threat is real, then people will have to develop social intelligence – human intelligence and creativity applied to social skills and systems. Only then can we ensure that the Machine Age benefits rather than harms humanity.

In the current Machine Age, machine intelligence is slithering its way into every domain of life. Smarter algorithms now match people to better options for dining, dating, and binge-watching videos. Machines perform more sophisticated tasks like image and speech recognition. Companies are pouring resources into machine learning and automation such as self-driving cars. And recently, an AI called AlphaGo defeated a world-class human player at the board game Go, widely considered one of the most complex and intuitive games there is.

So far, AI is a far cry from the badass intelligent agents of sci-fi movies. Yet some say it’s only a matter of time before superintelligent machines make humans obsolete. And that leads to all sorts of questions, like:

  • What if machines take our jobs?
  • What if an aging population can’t adapt to rapid change and develop new skills at any age?
  • What if machines programmed with poor algorithms don’t take into account unintended consequences? Or what if algorithms have errors that cause them to accidentally wipe out humanity?
  • What if a controlling minority monopolizes AI technology? Will everyone else be reduced to the status of pets or even slaves?
  • What about autonomous drones and weaponized machines?

So there are all sorts of pessimistic fears about intelligent machines. And some public intellectuals warn that this age of technological change will be different from all the others. In the Industrial Revolution, machines replaced physical labor; this time, machines will compete with us directly for jobs that depend on cognitive work. What will people do then? How will they find work? Will we have a whole class of “useless” people?

And that brings up questions about how we’ll deal with vast unemployment when machine labor can do everything better than we can. Some propose a universal basic income – a program in which everyone receives money regardless of whether they work, basically putting everyone on welfare. Perhaps then we can avoid a dystopian world of egregious poverty, and maybe we’ll end up with a luxurious form of Communism (unlike the genocidal versions of the past).

But perhaps an even greater problem than unemployment is how people without social intelligence will find meaning in their lives. Finding meaning requires that we face uncomfortable circumstances. We must shift our perspectives, manage our emotions, ask questions, and find ways to deal with suffering when we don’t get what we want. Reality doesn’t stay still for long, so there will always be surprises that challenge us to adapt our old ways of thinking and perceiving.

So if we believe that people will become hopelessly dependent on machines, I have to wonder how providing everyone with a material income solves the greater problem of widespread depression. Will machines swoop in and cure us of our existential angst too? Or maybe we won’t have to worry, because universal welfare might allow kept men and women to finally spend time doing things they truly enjoy.

But will that create a society of narcissists and hedonists focused entirely on themselves? The research on unemployed men does not look good – it shows they drift toward inactivity and low performance. They tend to fall into depression, sit on the couch, develop chronic pain problems, and get addicted to opiates and video games. So will a guaranteed income actually unleash people toward more freedom, or will it shackle us to a culture of dependency and depression?

We can’t find any real solutions until we recognize the importance of developing social intelligence, and how humans and machines differ.

PART 2: INTELLIGENT HUMANS AND MECHANICAL MORONS

Machines might be programmed to mimic our human traits in specific cases, but they cannot actually understand them. Therefore, I call machines “mechanical morons” because they can’t form basic social understandings or make the simple decisions that even a young child can. They also have no sense of ethics, and they can only do a narrow range of tasks well.

Because humans are conscious complex systems, we develop a uniquely human intelligence. The brain is not a computer. Evolved systems like the brain have fundamentally different properties from designed systems like computers. A designed system has convenient boundaries between its parts, so you can analyze a computer’s transistors as separate components – without tracking every electron – because we designed it that way.

But an evolved system perceives and reacts to its environment in ways that evolved naturally, not through human design. It relies on the local feedback mechanisms of its parts all interacting at once. And that local information exchange remains a black box. We can’t just take evolved systems apart and analyze each part separately to generalize the behavior of the whole. Doing so requires cutting the parts from their natural feedback connections to the greater environment.

And unlike computers, human brains don’t store words – or the rules for manipulating them – on anything like a hard drive. You won’t find a copy of Star Wars anywhere in the brain, even if you remember the film. We don’t capture images in short-term memory buffers and then transfer them into a long-term memory device.

Instead, humans are born with reflexes, senses, and learning mechanisms. If babies lacked any of these capabilities, they’d have trouble surviving. Submerge a baby underwater and its natural reflex is to hold its breath. Put something in its hand and its reflex is to grasp it. And over time, babies adapt to the environment – they adjust their past actions so they can interact more effectively with the outside world. That’s called learning.

Babies also rely on senses. Sensory experiences feed our imagination, helping us form expectations and predict likely outcomes. As AI researcher Ben Medlock notes, to teach a machine to recognize cats, you have to train it on millions or even billions of cat photos. But humans need only a small sample to grasp the idea of a cat. When we think of cats, we picture how they move, we hear the sound of their purring, we might feel an impending scratch. This rich store of sensory information lets us understand how to actually interact with a cat.
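To make “training” concrete, here is a toy sketch of my own (not Medlock’s example, and not a real vision system): random feature vectors stand in for labeled photos, and a simple nearest-centroid rule stands in for a real learning algorithm. The point is only that the machine generalizes from piles of labeled examples it has been fed – it never “understands” cats.

    import numpy as np

    rng = np.random.default_rng(0)

    # Stand-ins for labeled photos: 50,000 feature vectors per class (hypothetical data).
    cats = rng.normal(loc=1.0, size=(50_000, 64))
    not_cats = rng.normal(loc=-1.0, size=(50_000, 64))

    # "Training" here is just averaging each pile of examples into a centroid.
    cat_center = cats.mean(axis=0)
    other_center = not_cats.mean(axis=0)

    def classify(x):
        # Label a new example by whichever centroid it sits closer to.
        if np.linalg.norm(x - cat_center) < np.linalg.norm(x - other_center):
            return "cat"
        return "not cat"

    print(classify(rng.normal(loc=1.0, size=64)))  # most likely prints "cat"

Feed it fewer examples and the centroids get noisier; feed it more and they get sharper. There is no moment at which it forms the idea of a cat.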

Mechanical morons excel at handling detail complexity – give them a bunch of numbers and complicated equations and they’ll crunch away. But they flounder when it comes to the dynamic complexity of any social system, where many interconnected variables with different goals all interact at once. Just imagine the social dynamics of a classroom or workplace – put messy humans together and you’ll get all sorts of unpredictable, nonlinear behaviors.

Machines can efficiently recognize patterns only because they follow predefined classification rules. They must categorize things. But categorizing reduces complexity into simpler abstractions. For example, I can categorize a person as healthy, intelligent, neurotic… but that person is more than those categories. And a dynamic person changes so often that I’d have to keep updating my categories over time, even create new ones. Computers don’t create new categories.
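Here is a hypothetical sketch of what “predefined classification rules” means in practice (the field names and thresholds are my own inventions, not from any real system). The program can only ever emit the labels its author wrote down; no input will ever make it invent a new category.

    def categorize(person):
        # Only the labels the author anticipated can ever come out of this function.
        labels = []
        if person.get("resting_heart_rate", 999) < 70:
            labels.append("healthy")
        if person.get("test_score", 0) > 120:
            labels.append("intelligent")
        if person.get("worry_level", 0) > 7:
            labels.append("neurotic")
        return labels or ["uncategorized"]

    print(categorize({"resting_heart_rate": 62, "test_score": 130, "worry_level": 9}))
    # -> ['healthy', 'intelligent', 'neurotic'] – and nothing the rules didn't anticipate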

Humans have creative power and imagination. We can create something new that didn’t exist before – we can write complex works of fiction or start businesses. We can perceive needs that aren’t being satisfied. And we do that by picking out, from the infinite data around us, something that was never previously defined or deemed relevant. So we don’t just look for trends within familiar categories; we look for changes in the trends – the breaking of an old pattern and the creation of a new one. That’s how we stay aligned with a dynamic reality that shouldn’t be reduced to a simple set of categories. The moment we do that, we stop seeing anything new. We stop learning.

Machines can’t create because they don’t have desires – they don’t ruminate over finding their soul mates or yearn to find their true calling. But humans have volition – we develop vision and set goals. We can have a burning desire to create a particular future and drop everything else in pursuit of it. We also must constantly choose between competing desires, deciding which one to pursue at any moment. So our desires always direct our intellect. It’s the desire that motivates us toward an end; the intellect just helps us find the best route.

We also never see our surroundings objectively, because there are infinite ways to frame the objects in our environment. Instead, we create meaning. We constantly filter what we see according to our desires. We evaluate things based on how we perceive their function – does this person or object help me get what I want, or is it an obstacle? Or is it irrelevant, in which case I might not even notice it right in front of my eyes.

Machines can’t have desires because they don’t have emotions. You won’t see a computer getting bored or showing up to work with a bad attitude. It’s also not going to feel like suddenly dropping all its routines to go discover its true calling. Without being directed, a machine has no clue what’s meaningful to work on. The neuroscientist Antonio Damasio studied people with damage in the part of the brain that generates emotions. He found that they could still think logically, but because they couldn’t feel emotions, they also couldn’t make simple decisions. They could describe what they should be doing in logical terms, but couldn’t even decide on what to eat. So without emotions, we can’t make decisions. And neither can machines.

Humans also have higher levels of abstract thinking and feeling. So we can feel embarrassment or regret over the things (or people) we’ve chased after, or we can feel proud of our values. We have a “conscience” (well, some of us do). Machines definitely don’t feel embarrassed when their algorithms make errors.

Humans introspect. We reflect on our experiences. We consider ethical questions, we form values, we ponder what it means to live a meaningful life. We understand the feeling of suffering all too well. Machines don’t have ethics. They just act toward programmed goals within programmed boundaries, following programmed algorithms that reflect the intelligence of their creators. Tools amplify the efficiency of their masters, so if the programmer has biases, faulty ethics, or makes algorithm errors – those will all sneak into the program.

Humans require social intelligence because we are social beings. A baby’s vision is blurry, but it pays special attention to faces, especially its mother’s. It also prefers the sound of human voices over non-speech sounds. So we’re wired to make social connections. People who suffer from loneliness or social isolation face an increased mortality risk comparable to that of smoking. Our social interactions also affect others in seen and unseen ways. That’s why people value being in the presence of someone who can make them feel better, and why we seek out spiritual teachers, motivational speakers, or even good managers in the workplace.

Humans have empathy – we understand what anger, love, or grief feels like in another person. You can program a machine to recognize that certain facial cues moving in a particular pattern, together with particular brain activity, correlate with the emotion variable labeled “anger”. But that machine does not feel anger. You’ve just programmed it to follow predefined rules and spit out a label. For humans, though, empathy has a strong healing power. We don’t heal our suffering only through a calculated medical diagnosis, but also by building empathic relationships with others. Some people transform under the guidance of a therapist who can listen and help them feel understood. A machine won’t be that kind of therapist for you.

Humans develop social intelligence by learning how to engage in play. Through play, we learn to negotiate rules and navigate unpredictable social environments. When dealing with people, we need the flexibility to change course and handle unforeseen circumstances and hidden emotional needs. Machines haven’t evolved through self-directed play, and they don’t do well in unknown social environments. The mechanical moron tends to do only a narrow range of tasks very well, such as mathematical calculations. I’d like to see a machine start and manage a company. Or direct a film. Or take care of a child for even one day. All of those tasks require social intelligence.

Humans have a powerful unconscious pattern-recognition tool called intuition, which allows us to know things without consciously knowing the logic behind them or exactly what we’ve observed. The majority of our daily activity occurs unconsciously – even basic movement patterns like walking or typing, or the complex skills we’ve developed in our professions.

Even experts in their fields struggle to describe all the rules they follow. If they tried to write step-by-step instructions for someone else to follow, the very act of writing them down would oversimplify the patterns they observe and leave out their unconscious rules. Our unconscious understands exponentially more than our conscious analytical mind, while expending a mere fraction of the effort. Our intuitions tell us where to focus our attention and what to ignore among an unlimited number of options. Of course, our intuition can lead us to make biased (and stupid) decisions. But it’s also responsible for many of the good decisions necessary for success.

That’s why some call intuition the highest form of intelligence: without it, we’d exhaust ourselves overthinking even the most basic activities. Machines can be programmed to mimic intuition, but they always follow algorithmic logic.

Human intuition perceives new patterns by first asking questions that direct our focus. The mechanical moron can only ask what its creators programmed it to look for, but humans have infinite questions they can ask in any moment, sometimes randomly changing their train of thought. Humans ask new questions.

So humans may be sloppy and slow at times, but only we can see the big picture and sort the important matters from the trivial. We have creativity, empathy, flexibility, and intuition. And we ask the questions that determine what to pursue.

PART 3: GLASS HALFWAY

Machines can help us solve problems. But only humans determine the problems worth solving. And the people who develop their social intelligence will do a better job identifying social needs.

We should view machines as tools rather than rivals. Technologies can help lift the poorest out of poverty by granting access to things once enjoyed only by the elite of a prior era, or by introducing things even those elites did not have. A standard television set cost over $6,000 in 1964; today you can get one for less than $150. And workplace fatalities were roughly 30 times more likely in the late 19th century than they are today, now that machines do much of the dangerous work.

However, if you rely on machine intelligence but lack social intelligence, you risk becoming very efficient at doing the wrong things. Good problem solvers must know what questions to ask, synthesize large amounts of information, make decisions, and adapt flexibly to changing circumstances. An over-reliance on machine algorithms can lead to linear tunnel vision, which might blind us to the solutions to complex social problems. And unfortunately, everywhere you look you’ll find organizations that lack social intelligence – in tech companies, schools, hospitals, and especially bureaucratic government agencies.

Nowadays, more people treat packaged data and fancy graphs as oracles, and that erodes their ability to perceive nuance and build more complex understanding. So perhaps more concerning than the rise of machine intelligence is the rise of human stupidity. We can’t afford human stupidity, because the machine age will eliminate many jobs. It will also create new jobs, but only if we adapt rapidly to change.

Existing professions will have to shift their approaches. Doctors can use machines for statistical work like suggesting diagnoses while they focus on patient relationships. Teachers can use machines to deliver routine lessons while they focus more on social and emotional development and on creating more personalized learning experiences. People who are open to experimenting and reinventing themselves will have plenty of new opportunities. Today, artists can manage their careers far more independently. It’s so much cheaper to create art, market yourself, and publish your work on an online platform. You can maneuver around the film studios, music labels, book publishers, or the education establishment.

I can come up with a list of things machines will not be able to do:

Machines don’t create art. They don’t create new businesses. They can’t create meaning for us. They can’t come up with new things to invent. They can’t teach the humanities very well. They make poor spiritual teachers and awful comedians. They certainly can’t run a community. Or a business. Or do anything that requires managing people.

A machine with just logic and facts will have a hard time persuading. Persuasion requires appealing to emotions. Effective lawyers must persuade judges and juries. Entrepreneurs must persuade customers that a product will satisfy their emotional needs. Even finding friends or romantic partners involves persuasion. There will always be jobs requiring persuasion and other social skills.

I could go on and on, and I’m somewhat baffled by our lack of imagination about human potential. When we set low standards for people, we end up encouraging a self-fulfilling mindset of helplessness. We might end up designing social systems that prevent adaptation and entrench a habit of inadequacy.

A world where we don’t require human labor or intelligence implies a world without human desires. But you don’t have to look hard to see that humans have no shortage of desires. People always desire to put an end to their suffering. Machines can help us get what we want, but they can’t do the work of finding meaning in life for us.

Nobody knows exactly what the job market will look like in the coming decades. But we know the future will put a premium on social intelligence. Creative people will need to manage their own careers and seek new skills at any age. Non-creative people may have a big problem ahead. They can’t depend on slow, traditional schooling institutions. We’ll need more diverse models of learning catered to different environments, different skills, and all ages. Workplaces will need to train employees in social intelligence as machines take over routine tasks. Governments will need to remove obstacles that prevent people from transitioning careers and starting new enterprises.

This is video 1 in a series about Social Intelligence in the Machine Age. To see more videos, visit here.
