Author(s): Ömer Özgür
The world of artificial intelligence has two climates: spring and winter. AI has repeatedly slipped into winter, but each time it has emerged more robust and revived.
Like the natural seasons, AI cycles through winters and early springs. Each descent into an ice age reflects humans' failure at long-term prediction and overconfidence about certain issues.
In this article, we discuss the problems and human misconceptions that push the world of artificial intelligence into winter.
The climate changes of artificial intelligence can be explained by the hype cycle, which describes the life cycle of emerging technologies.
This cycle has five phases: the Technology Trigger, the Peak of Inflated Expectations, the Trough of Disillusionment, the Slope of Enlightenment, and the Plateau of Productivity.
The part of the hype cycle that interests us most is the stage where an innovative technology raises enormous expectations and then fails to meet them. For example, Level 5 autonomous vehicles were expected to be on the road by 2020, and machines were expected to take over all human work. Yet many technological and ethical problems remain to be solved.
An AI winter is a period of dwindling funding and interest in AI research. The term is derived by analogy with the idea of nuclear winter. AI fails to live up to expectations, research funding is cut, a vicious circle ensues, and the AI world goes into winter.
Challenging Evolution Processes
With the invention of the perceptron, the first AI spring arrived in the 1960s, but it was not long before that first spring gave way to winter.
In the 1980s, AI re-emerged in the form of expert systems, which rely on humans to create rules. Because these hand-written rules lack generalization, another winter was inevitable.
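The brittleness of hand-written rules can be seen in a minimal sketch. The rules and animal attributes below are hypothetical, invented purely for illustration: a toy "expert system" answers correctly inside its rule set but cannot handle anything its authors did not anticipate.

```python
# Toy "expert system": hand-crafted rules, no learning.
# The rules and feature names are hypothetical, for illustration only.

RULES = [
    (lambda animal: animal.get("lays_eggs") and animal.get("flies"), "bird"),
    (lambda animal: animal.get("fur") and animal.get("barks"), "dog"),
]

def classify(animal):
    # Fire the first rule whose condition matches.
    for condition, label in RULES:
        if condition(animal):
            return label
    return "unknown"  # anything outside the rules cannot be handled

print(classify({"fur": True, "barks": True}))         # dog
print(classify({"lays_eggs": True, "flies": False}))  # a penguin: unknown
```

A flightless bird falls straight through the rule set: the system does not generalize, it only replays what its authors wrote down.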
In the early 2000s, machine learning methods emerged. These were models powered by statistics, not by neuroscience or psychology.
Closer to the present day, deep learning is on the rise. The main drivers are high-speed, high-capacity parallel processing and the ability to produce, collect, and transmit enough data.
Machines, like humans, are prone to laziness. So why does this happen?
The sole objective of a learning algorithm is to minimize a cost function mathematically. Even the most successful algorithm does not know what that cost means, so during training it can learn shortcuts that happen to reduce it.
The algorithm then produces correct results for the wrong reasons. For example, suppose you develop an algorithm that classifies wolves and dogs, and it performs well even on test images.
When we examine which features the model relies on using heat maps, we find that it has learned the background, not the features of dogs or wolves: most wolf pictures were taken in snow, while the white dogs appeared against darker backgrounds.
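A toy reconstruction of this shortcut effect (synthetic data, not the original study) makes it concrete: one feature stands in for the snowy background and correlates perfectly with the wolf label, another stands in for the animal's appearance and is only weakly informative. A small logistic regression trained by gradient descent latches onto the spurious background feature.

```python
import math
import random

random.seed(0)

# Synthetic dataset: x[0] = "snowy background" (perfectly correlated
# with the wolf label), x[1] = "animal appearance" (only 60% informative).
# Labels: 1 = wolf, 0 = dog.
data = []
for _ in range(200):
    y = random.randint(0, 1)
    background = float(y)                            # snow iff wolf
    animal = y if random.random() < 0.6 else 1 - y   # noisy animal cue
    data.append(([background, float(animal)], y))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Logistic regression trained with plain batch gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.5
for _ in range(500):
    gw = [0.0, 0.0]
    gb = 0.0
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        gw[0] += err * x[0]
        gw[1] += err * x[1]
        gb += err
    w[0] -= lr * gw[0] / len(data)
    w[1] -= lr * gw[1] / len(data)
    b -= lr * gb / len(data)

# The background weight dominates the animal weight:
print(f"background weight={w[0]:.2f}, animal weight={w[1]:.2f}")
```

The cost goes down either way; the model simply takes the cheapest path, which here is the background. On real photographs, heat maps reveal the same failure.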
Seeing Development as a Continuous Process
The first-step fallacy is the claim that, ever since our first work on computer intelligence, we have been inching along a continuum at the end of which lies full AI, so that any improvement in our programs, no matter how trivial, counts as progress.
How much can developments in a particular field improve general intelligence?
We cannot reach space by climbing a mountain.
Know Your Subconscious First
The unconscious mind consists of processes that occur automatically and are not reflected in consciousness. No matter how free we think we are, most of our physical and psychological behavior is carried out automatically and fluidly by the subconscious.
Seeing, hearing, breathing, and digesting food all happen beyond our conscious control. The human brain can recognize the person in front of it in milliseconds, estimating their age and gender and making many more connections.
For example, seeing, or understanding the sentence we are reading, is effortless for us. But when we try to make a machine do the same, we realize how difficult a process seeing really is.
We forget about the neural networks of our brains, finely shaped by natural selection over millions of years. The brain is not born a blank slate.
Compared with a machine, our brain carries billions of years of evolutionary prior experience. If we do not know ourselves well, we will judge such problems to be simple, engineer accordingly, and fail. Difficulty and simplicity must be understood relative to the type of intelligence involved.
The idea that intelligence can be separated from the body, whether as a non-physical substance or as wholly encapsulated in the brain, has a long history in philosophy and cognitive science.
Embodied cognition theory suggests that our thoughts are grounded in, and inextricably associated with, perception, action, and emotion, and that brain and body work together to produce cognition.
Embodied cognition is a theory that is gaining ground scientifically. Do emotions start in the brain and get transmitted to the body, or does the brain create feelings by reading the body's reactions? For example, you can trick your brain into happiness by forcing yourself to smile.
On the embodied view, human intelligence is a strongly integrated system with closely interconnected attributes, including emotions, desires, a strong sense of selfhood and autonomy, and a commonsense understanding of the world. It is not at all clear that these attributes can be separated.
Chemist or Alchemist?
Today, AI research is more like alchemy than chemistry. We are at the stage of pouring together different combinations of substances and seeing what happens, not yet having developed satisfactory theories.
Still, the practical experience and curiosity of the alchemists provided the wealth of data from which a scientific theory of chemistry could eventually be developed.
Why AI is Harder Than We Think
Originally published in Towards AI on Medium.