
The Rise of the Machines

July 21, 2020
Can robots become sentient and take over all jobs?

Automation is not a new phenomenon in manufacturing. American manufacturers started replacing people on production lines with automatic palletizers, filling machines, and case packers back in the 1950s. Robots did not come into the picture until the 1990s. Most of the large manufacturing plants in the U.S. are now highly automated.

But there is a new threat that is striking fear into the hearts of working people: the possibility that artificial intelligence will progress to the point that machines become sentient and replace people in all working environments. The idea has been popularized in movies like The Terminator, in which scientists create a computer chip that makes machines conscious and self-aware. Tesla CEO Elon Musk and physicist Stephen Hawking have both warned that machines could eventually start programming themselves and trigger the collapse of civilization.

This idea of artificial intelligence advancing to the point of sentient machines has become a popular theme in the media. An article from the Brookings Institution states that “a quarter of U.S. jobs will be severely disrupted as artificial intelligence accelerates the automation of existing work.” A study from Oxford Economics suggests that “robots could take over 20 million manufacturing jobs around the world by 2030.” An article in Smithsonian magazine, “When Robots Take All of Our Jobs,” said “fully 47% of all U.S. jobs will be automated in a decade or two.”

Many computer scientists believe that sophisticated artificial intelligence systems using deep learning, with layered networks of algorithms that talk to each other, will ultimately lead to consciousness. In his book The Singularity Is Near, futurist Ray Kurzweil predicts that computers will be as smart as humans by 2029.

If you evaluated all of the speculative articles on artificial intelligence from the last decade, you could conclude that we are on the verge of building a robot that is self-aware and can think just like a human. But creating a sentient computer would require simulating the capabilities of the human brain, and, contrary to popular reports, no computer has yet made the simplest self-initiated decision or manifested any hint of intelligence.

How do computers and artificial intelligence compare to the human brain?

A digital computer is a non-living, dry system that processes information serially rather than in parallel. It can operate at very high speeds, and its design includes transistors (on/off switches), a central processing unit (CPU), and some kind of operating system (like Windows) based on binary logic (instructions coded as 0s and 1s). All information must pass through a CPU whose throughput depends on clock speed. Digital computers do not create any original thought; they must be programmed by humans.
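As a rough illustration of that serial, programmed style of operation, here is a minimal sketch in Python. The three “instructions” and their opcodes are invented for this example and do not correspond to any real instruction set.

```python
# A toy illustration of serial execution: one instruction at a time, every
# value reduced to binary, and nothing happening that was not programmed in.
program = [
    ("LOAD", 0b0101),    # put the value 5 into the accumulator
    ("ADD", 0b0011),     # add 3
    ("STORE", None),     # write the result back to memory
]

memory = []
accumulator = 0

for opcode, operand in program:      # strictly serial: one step per "clock tick"
    if opcode == "LOAD":
        accumulator = operand
    elif opcode == "ADD":
        accumulator += operand
    elif opcode == "STORE":
        memory.append(accumulator)

print(memory)  # [8] -- the machine only ever does what it was told to do
```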

The human brain is a living, wet, analog network that performs massively parallel processes simultaneously and operates according to biological laws. There is no programming, and the brain can change from one moment to the next, constantly forming new synapses. The human brain also includes what we call the subconscious and conscious mind, which are essential to reaching consciousness or sentience.

The best book explaining the differences between a computer and the brain is The Future of the Mind by Michio Kaku. He says, “The brain does not work like a computer. Unlike a digital computer, which has a fixed architecture (input, output, and processor), neural networks are collections of neurons that constantly rewire and reinforce themselves after learning a new task. The brain has no programming, no operating system, no Windows, no central processor. Instead, its neural networks are massively parallel, with billions of neurons firing at the same time in order to accomplish a single goal: to learn. It is far more advanced than any digital computer in existence.”

Digital supercomputers have billions of transistors. But to simulate the typical 3.5-pound human brain would require matching the brain’s billions of interactions among cell types, neurotransmitters, neuromodulators, axonal branches, and dendritic spines. Because the brain is nonlinear, and because it has so much more capacity than any computer, it functions completely differently from a digital computer.

Neurons are the real key to how the brain learns, thinks, perceives, stores memory, and performs a host of other functions. The average brain has at least 100 billion neurons. Neurons connect to one another through axons and dendrites, supported by glial cells, and each has thousands of synapses that transmit signals via electrochemical connections. It is the synapses that are most comparable to transistors, because they turn on or off. But it is important to point out that each neuron is a living cell and a computer in its own right, with the “signal processing power of thousands of transistors.” Neurons are slower, but they are more complex because they can modify their synapses and modulate the frequency of their signals.

Each neuron can communicate with 10,000 other neurons. Unlike digital computers with fixed architecture, the brain can constantly rewire its neurons to learn and adapt. Instead of running programs, neural networks learn by doing and remembering, and this vast network of connected neurons gives the brain excellent pattern recognition.
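As a loose illustration of “learning by doing,” here is a minimal sketch of a single artificial neuron that adjusts its connection weights from examples rather than following explicit rules. The training data and learning rate are invented for the example, and this toy is nowhere near the complexity of a biological neuron.

```python
# A minimal artificial neuron that "learns by doing": its connection weights
# are adjusted after every example instead of being programmed in advance.
def train(examples, epochs=20, rate=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            output = 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
            error = target - output
            # "Rewiring": strengthen or weaken each connection based on the error.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
            bias += rate * error
    return weights, bias

# Learn a simple pattern (logical AND) from examples rather than explicit rules.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train(examples))
```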

Neuroscientists know that having feelings and emotions is necessary to emulate human thinking, and emotions may also be a key to establishing consciousness. In fact, it appears that to even have a chance of being self-aware or conscious, a computer would have to be equipped with emotions. Michio Kaku says, “Hence, emotions are not a luxury; they are absolutely essential, and without them a robot will have difficulty determining what is and is not important. So, emotions, instead of being peripheral to the progress of artificial intelligence, are of central importance.”

The brain uses emotions as a value system to help determine what is most important. For a robot to attain human thinking, it would need to be designed with a value system and emotions, even though many emotions can be irrational.

In computers, information in memory is accessed by polling its precise memory address. This is known as byte-addressable memory. In contrast, the brain uses content-addressable memory, in which information can be accessed through “spreading activation” from closely related concepts. For example, in a digital computer the word “girl” is located in memory by its byte address. When the brain looks for “girl,” it automatically uses spreading activation to reach related memories, such as wife, daughter, and female.
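A minimal sketch of the difference might look like this; the memory address, the word list, and the association strengths are all invented for illustration.

```python
# Byte-addressable lookup vs. a toy "spreading activation" retrieval.

# Digital computer: a value is fetched only by its exact address.
ram = {0x3A2F: "girl"}
print(ram[0x3A2F])  # must know the precise address to retrieve "girl"

# Brain-like retrieval: a cue activates related concepts by association strength.
associations = {
    "girl": {"daughter": 0.9, "female": 0.8, "wife": 0.5, "truck": 0.05},
}

def spread(cue, threshold=0.4):
    # Return every concept whose activation from the cue exceeds the threshold.
    return [c for c, strength in associations.get(cue, {}).items() if strength > threshold]

print(spread("girl"))  # ['daughter', 'female', 'wife']
```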

Another big difference is that the computer lacks sensory organs like eyes, ears, a tongue, and the sense of touch. Although computers can be programmed to see or smell, they cannot truly “feel” or experience the essence of the senses. For example, writes Kaku, the computer might have a vision sensor, but the human eye “can recognize color, movement, shapes, light intensity, and shadows in an instant. The computer can neither hear nor smell like the brain, much less decide whether the sense pleases it. The five senses give the brain an enormous understanding of the environment.”

He adds: “To catalog the common sense of a four-year-old child would require hundreds of millions of lines of computer code.” Without a temporal lobe, the robot could not talk. Without a limbic system, the robot would not have any emotions.

The unconscious mind is a great reservoir of our experiences. It is not like a computer hard drive, because it records everything we have smelled, touched, tasted, or heard since birth, including perceptions, memories, feelings, reflections, thoughts, and hopes. It is also the seat of our emotions and repressed memories. There is no single place that stores this information; it is stored all over the brain, from the prefrontal cortex to the thalamus and many other regions. The unconscious mind does not reason or think; it simply stores all of the information needed by the conscious mind for the thinking process.

All conscious thinking processes begin in the subconscious mind, outside human awareness. Consciousness is a holistic phenomenon occurring simultaneously across the entire brain. The brain calls up information that is content-addressable: feelings, experiences, memories, or facts that the brain views as related to the problem. Just how the brain accesses the right neurons to gather the relevant information for the conscious mind to think with is still unknown.

To solve a problem or find an answer, the digital computer processes information from memory using its CPU, and then writes the results of that processing back to memory.

The most important point in comparing the brain to a computer is that in a computer, the answers are all programmed in. In the living brain the answers are created.

As neurons process information, they are also modifying their synapses. As a result, retrieval from memory always slightly alters those memories. Unlike in a digital computer, processing and memory in the brain are performed by the same components.
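As a toy sketch of that idea, assuming an invented associative store: the same weights that hold a memory are the ones used to recall it, and the act of recalling changes them.

```python
# A toy associative memory where recall itself modifies the stored weights,
# so "processing" and "memory" live in the same components.
weights = {("smoke", "fire"): 0.6, ("smoke", "rain"): 0.2}

def recall(cue):
    # Pick the strongest association for the cue ...
    best = max((pair for pair in weights if pair[0] == cue), key=lambda p: weights[p])
    # ... and the act of recalling it slightly strengthens that connection.
    weights[best] = min(1.0, weights[best] + 0.05)
    return best[1]

print(recall("smoke"))  # 'fire'
print(weights)          # the memory has changed simply by being used
```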

Self-Awareness

The only model we know of that has evolved to self-awareness and consciousness is the human brain. Over millions of years, the human brain grew in size and complexity until it developed conscious thought and self-awareness. The author assumes that truly achieving artificial intelligence with self-awareness will require designing a computer that has most of the features and capabilities of the human brain.

Artificial intelligence theorists seem to be counting on the idea that at some point in the next 20 years a microprocessor will be invented that reaches a “singularity point,” where it becomes conscious and self-aware. This article has shown that evolving to self-aware status requires an unconscious mind, emotions, modulated neurons, content-addressable memory, and processing combined with memory.

The articles projecting self-aware robots with intelligence that can match the brain offer little proof. The reality is that the progress of artificial intelligence toward consciousness has been dismal. Everything that computers do is still programmed by humans. Developing a self-aware computer is not going to happen in this century, and probably not at all based on digital architecture.

Mike Collins, president of MPC Management, is the author of The Rise of Inequality and the Decline of the Middle Class. He has more than forty years of experience in manufacturing.
