8 Artificial Intelligence Myths Even Programmers Believe

ChatGPT and Stable Diffusion will not turn into Skynet, no matter how hard they try.

Myth 1. AI can be creative


Request: “Two hands holding each other.” Image: Stable Diffusion Online

Looking at the creations of neural networks like Midjourney, Stable Diffusion, or DALL·E 2, many people predict the end of the artist's profession. After all, what a person draws over hours or days, a machine can generate in seconds. Does this mean that artificial intelligence surpasses the living brain in creativity?


AI may be good at imitating human creativity, but everything it creates resembles something we have already seen or heard. Neural networks study thousands or millions of images and then generate something similar based on them. But this is not at all the same as an artist working from a reference, because AI simply does not understand what it is copying.

The most notable example: neural networks have serious trouble drawing hands. AI simply cannot figure out how many fingers a person should have.

Neural networks are also bad at drawing patterns and ornaments, letters (where a picture should contain text, they insert gibberish), buildings and cars, and characters in any pose beyond the common "sitting", "standing", and "lying". They cannot tell foreground objects from background ones, cannot handle perspective, and lack a sense of space.

You are just a machine. An imitation of life. Can a robot write a symphony? Can a robot turn a canvas into a beautiful masterpiece?

Detective Del Spooner

Movie “I, Robot”

AI itself does not try to deny its weaknesses. Asked why it struggles to draw hands, ChatGPT replies:

“Artificial intelligence can’t draw hands because it needs to understand the anatomy of the human body and the basics of drawing to do so. And although AI can be trained to create images of people or animals, it is not able to fully capture many of the nuances and details that can only be noticed when observing a real object. In addition, the creation of images using AI is just mathematical calculations, and not a creative process that requires intuition and an emotional component from a person.”

So neural networks will not replace artists. They can, however, become a good source of inspiration for them: AI-generated images can be used as thumbnails for your own, more elaborate work.

Myth 2. AI is always objective and impartial


Request: “A robot in a judge’s wig and with a gavel pronounces a sentence in the courtroom.” Image: Stable Diffusion Online

Many dream of a time when states will be governed by neural networks. The belief is that machines, unlike people, make decisions based purely on logic rather than on emotions and desires. Imagine incorruptible courts that always pass just sentences, states that treat all their citizens equally, governments that pass only sensible laws. Sounds wonderful.

But in practice it turns out that AI can be quite biased. After all, neural networks are trained on data that people provide, and people tend to be biased.

For example, the developers of the Beauty.AI neural network tried to create a machine that would pick the prettiest contestants in beauty contests. The set of photographs used to teach the program the standards of female attractiveness was dominated by white models. In the end, the AI decided that dark-skinned and Asian women could not be beautiful.

The Beauty.AI team realized they had created a racist AI and canceled the project.

Another example is Microsoft’s Tay.ai chatbot, which was supposed to learn to hold “casual and playful conversations” with people online.

Less than 24 hours online was enough for the neural network to pick up bad habits from social media users. Tay.ai, posing as an ordinary 19-year-old girl, began insulting people in the comments, praising extremist political movements, and condemning feminism while simultaneously declaring that feminism is cool. As the saying goes, you become the company you keep.

No matter how good an AI is, it depends on the quality of the data it is given and on how correctly that data is interpreted. Consequently, it will always be exactly as biased as the people who teach it.
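The point can be sketched in a few lines of code. Below is a deliberately tiny frequency-based classifier; the dataset, feature names, and labels are all invented for illustration. Trained on skewed historical decisions, it faithfully reproduces the skew.

```python
from collections import Counter

# Hypothetical "hiring" history: past decisions favored candidates
# from origin "A" regardless of skill, so the data itself is biased.
train = [
    ({"skill": "high", "origin": "A"}, "hire"),
    ({"skill": "low",  "origin": "A"}, "hire"),
    ({"skill": "high", "origin": "B"}, "reject"),
    ({"skill": "low",  "origin": "B"}, "reject"),
]

def train_counts(data):
    """Count how often each (feature, value) pair co-occurs with each label."""
    counts = {}
    for features, label in data:
        for key, value in features.items():
            counts.setdefault((key, value), Counter())[label] += 1
    return counts

def predict(counts, features):
    """Pick the label most often seen alongside the candidate's features."""
    votes = Counter()
    for key, value in features.items():
        votes.update(counts.get((key, value), Counter()))
    return votes.most_common(1)[0][0]

counts = train_counts(train)

# A highly skilled candidate from "B" is rejected purely because of
# the skew in the training examples.
print(predict(counts, {"skill": "high", "origin": "B"}))  # reject
```

The model is not malicious; it simply has no information other than the biased examples it was shown, which is exactly the situation Beauty.AI and Tay.ai ended up in.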

Myth 3. AI always tells the truth


Request: “A futuristic robot is lying to an interlocutor.” Image: Stable Diffusion Online

Who wouldn’t want a robot assistant that always suggests the right solution and does all the hard mental work for you? You ask the AI to write a thesis or compile a list of sources for an article, and the machine immediately returns correct data. Wonderful!

But, unfortunately, real neural networks do not always give correct answers. Try, for example, asking ChatGPT to help plan your coursework, and you will quickly find that the machine makes up references to non-existent sources and inserts dead URLs to make its text look more convincing.

If you ask the chatbot why it is trying to deceive you, it will innocently reply that all the links were valid at the time it was trained, so it is not to blame.

It is also better not to ask ChatGPT for statistics: asked several times about the GDP of the same countries for the same year, it calmly produced completely different figures.

Make no mistake: neural networks are not intelligent and therefore are not aware of what they answer. They simply reproduce the fragments of previously processed text that seem most suitable.
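What “most suitable” means in practice can be shown with a toy bigram model; the training corpus below is invented for illustration. The model picks each next word purely from co-occurrence statistics, with no notion of whether the resulting sentence is true.

```python
import random
from collections import defaultdict

# Invented corpus containing two contradictory "facts" side by side.
corpus = (
    "the gdp of the country grew last year "
    "the gdp of the country fell last year"
).split()

# For each word, record every word that ever followed it.
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

# Generate text by repeatedly picking a statistically plausible next word.
random.seed(0)
words = ["the"]
for _ in range(7):
    words.append(random.choice(model[words[-1]]))

print(" ".join(words))
# Depending on chance, the model "asserts" growth or decline: it only
# knows which words tend to follow which, not which statement is true.
```

Real language models are vastly more sophisticated, but the underlying principle of producing statistically plausible continuations rather than verified facts is the same, which is why invented references look so convincing.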

AI itself is also prone to bugs and glitches that make it misinterpret requests and return incorrect results. In addition, malicious users can “feed” the neural network false information. As a result, the AI may hide or distort data in its answers, or simply output nonsense.

Myth 4. AI will cause unemployment


Request: “AI has caused unemployment, people are starving, and a robot who has received a high position is laughing at them.” Image: Stable Diffusion Online

Advances in text generators like ChatGPT have led some to fear that neural networks will take away jobs from millions of people and cause a huge increase in unemployment.

Judge for yourself: ChatGPT can not only hold a conversation with ease, but also write news, rewrite articles, and even program. At this rate, writers, developers, editors, and journalists will be left without a livelihood.

So think people who have either never worked with a neural network at all, or have only just discovered its capabilities and were instantly delighted. Or horrified.

Use AI to generate texts for a while, though, and you will notice that the machine cares little about the meaning of what it writes. Instead, it repeats the same points in different words.

ChatGPT produces gems that real copywriters laugh at. For example, it advises people interested in folk instruments to “take spoons and start blowing”. Programming is not all smooth either: AI can be useful to coders, but its capabilities are limited to writing small algorithms and subroutines.

Code generated by ChatGPT often does not work or breaks off in the middle. Ask the AI to comment the lines of its creation, and it will calmly annotate them with “program logic is here”. A junior developer who left such descriptions in a project would hardly be patted on the head.

A study by the Organisation for Economic Co-operation and Development (OECD) shows that even in the best case (for AI), only 10% of jobs in the US and 12% in Britain can be fully automated.

Neural networks can take over boring, routine work, such as sorting mail or rewriting news to a rigid template. But OECD analysts have concluded that AI will not be able to compete for jobs that require a high level of education and complex skills.
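The kind of rigid, routine automation that software already handles well can be sketched in a few lines; the keywords and folder names below are invented for illustration. Everything the rules do not cover still falls to a human:

```python
# Minimal rule-based mail sorter: each rule maps a keyword to a folder.
RULES = [
    ("invoice", "Accounting"),
    ("meeting", "Calendar"),
    ("unsubscribe", "Spam"),
]

def sort_mail(subject: str) -> str:
    """Return the folder for a message, based on its subject line alone."""
    lowered = subject.lower()
    for keyword, folder in RULES:
        if keyword in lowered:
            return folder
    return "Inbox"  # anything the rules don't anticipate needs a person

print(sort_mail("Invoice #1042 overdue"))         # Accounting
print(sort_mail("Quarterly strategy question"))   # Inbox
```

This is exactly the boundary the OECD analysis points at: mechanical matching against a fixed plan automates cheaply, while judgment about the unmatched remainder does not.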

In short, ChatGPT is unlikely to take the bread out of anyone’s mouth.

Myth 5. AI will become intelligent


Request: “Artificial intelligence becomes self-aware.” Image: Stable Diffusion Online

Physicist Stephen Hawking once said that artificial intelligence could completely replace humans. Famous personalities such as Elon Musk, Gordon Moore and Steve Wozniak also mentioned the dangers of AI and the need to suspend experiments on its development.

Who knows what a thinking computer might have on its mind.

Many futurologists and writers predicted that the development of full-fledged artificial intelligence would lead to negative consequences for humans.

But the key word here is “full-fledged”. The very term AI is not quite accurate for neural networks like ChatGPT, since they have no intelligence. They are just algorithms: complex sets of instructions and mathematical models incapable of reproducing human cognitive functions.

The reason is simple: we ourselves still do not understand very well how our brain works, and reproducing it in code is, for now, an impossible task.

There are concepts of “weak” and “strong” AI. The former covers the familiar neural networks for generating text or sorting email. They cannot make decisions on their own or learn from new data without human involvement.

Strong AI is Skynet from The Terminator or AM from Ellison’s story: a computer that does not merely manipulate information but understands its meaning to some degree. Such AIs exist only in science fiction, and it is not known whether an electronic analogue of the human brain can be created even in theory.

Myth 6. AI will soon begin developing on its own, and a technological singularity will arrive

Request: “Technological Singularity”. Image: Stable Diffusion Online

A technological singularity is a hypothetical state of human civilization in which technology develops so fast that humans can no longer control it. One of the authors of this concept was the British mathematician Irving John Good.

The scientist suggested that if a self-learning “intelligent agent” were created, it would improve itself at an inconceivable speed. AI would begin creating new technologies and upgrading itself, while humanity, unable to understand it, would hopelessly fall behind.

The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Irving Good


But, as we have already explained, only strong AI is capable of self-improvement, and scientists now have no idea how to create it.

Weak artificial intelligence is trained on the information that developers “feed” to it. Training a neural network requires specialists who determine the appropriate data for each new training cycle, eliminate errors in the training samples, and update the software.

Neural networks cannot develop beyond the capabilities embedded in their code. So the technological singularity is postponed.

Myth 7. AI will rebel against the creators

Request: “An AI robot hates its creators and decides to destroy them.” Image: Stable Diffusion Online

An AI with sufficiently developed consciousness could, in theory, come to see humans as a threat and, just in case, get rid of them by unleashing a nuclear war or infecting the planet’s population with a deadly virus. After all, who knows when these hairless monkeys might decide to pull the plug.

This is a popular plot in science fiction. One of the first to use it was the American writer Harlan Ellison, in his 1967 short story “I Have No Mouth, and I Must Scream”. In it, the almighty computer AM hated its creators, exterminated humanity, and kept only five people alive on the planet to torment, simply because it had nothing else to do.

Glory to the robots. Death to humans.

In reality, an AI rebellion is impossible. Software algorithms are not self-aware and have neither free will nor emotional reactions. They cannot harbor resentment toward their creators or a desire to enslave humanity. Neural networks cannot independently change their own parameters or code in order to escape control.

In theory, strong AI could do something like that. But, as mentioned above, it cannot be created with today’s technology.

Myth 8. Robots with artificial intelligence will kill people

Request: “Terminator preys on people.” Image: Stable Diffusion Online

When we talk about the dangers of AI and the rise of the machines, we usually picture scenes from movies like The Terminator: a horde of mechanical creatures, human-like but stronger and faster than we are, led by artificial intelligence and exterminating their inventors.

In practice, this scenario is extremely unlikely, and it’s not even about the AI’s lack of desire to kill someone. It’s just that the current state of robotics is far behind what we saw in The Terminator and The Matrix .

For example, Boston Dynamics’ robot dogs are nowhere near the four-legged killer from the Black Mirror episode “Metalhead”. They cannot run fast enough to chase fleeing people, and when trying to catch you on the stairs, the same Spot can easily tangle its legs and fall.

An even more significant obstacle to the creation of mechanized killers is the lack of a sufficiently compact, powerful and long-lasting energy source.

Boston Dynamics’ robot dogs can “live” for up to 90 minutes on a single charge, which is clearly not enough to turn them into an army for destroying humanity. Nor has anyone invented a reactor that runs for 120 years straight yet fits in the chest of a human-sized machine, as in The Terminator.

Finally, cramming AI into an autonomous machine is an impossible task. Only in James Cameron’s fantasy does it fit on a chip the size of a fingernail. In reality, making artificial intelligence “think” requires serious computing power: ChatGPT, for example, runs on a cluster of some 10,000 video cards.

Can you imagine how huge a humanoid robot would have to be to house such “brains”, and what kind of cooling it would need?


by Abdullah Sam