Tallinn University of Technology

Is artificial intelligence (AI) currently just a smart algorithm in a home appliance, or can it already think for itself and discuss the world's problems? Two researchers at Tallinn University of Technology, Eduard Petlenkov and Juri Belikov, discuss this question. Their research base is difficult to picture: it nestles somewhere in servers whose location even the researchers themselves do not always know.

Professor Juri Belikov and Professor Eduard Petlenkov
Photo: Konstantin Sednev/PM

Author: Kaido Einama / Postimees

Both scientists agree, however, that artificial intelligence has not gone out of control. Although people no longer know exactly how a machine reaches a particular result (though this has begun to be studied more thoroughly in recent years), the scientists are of the opinion that an enormous amount of time will pass before we can start talking about an artificial intelligence close to a human.

"Whenever we have photo shoots here," says Belikov as he shows the artificial intelligence scientists' so-called laboratory, "there is always the problem of where to take pictures of us. Others have exciting devices and test tubes; we usually end up being photographed in the corridor."

We too start with some pictures in the relaxation corner, then a few clicks "behind the desk". Even server cabinets with flashing lights are nowhere to be seen; they are somewhere far away. The researchers admit that sometimes they do not even know where the server they work on from their computers is located. And there is no need to. Artificial intelligence likewise nests somewhere in the cloud: a machine that frightens many and, supposedly, will soon take over the work and thinking of humanity.

"Artificial intelligence really is a little feared," admits Eduard Petlenkov, a professor at TalTech, "but there is no reason to be. Artificial intelligence is a natural continuation in the line of human inventions and just another tool to help us."


"This is a philosophical question," says Eduard Petlenkov, recalling how a few weeks ago one of his master's students asked the same question while finishing his master's thesis. It raises questions among experts too.

"Artificial intelligence is a much broader concept," explains Petlenkov. "Behind it are smart algorithms. Machine learning, however, is just one subfield, one approach to realizing artificial intelligence, one of its teaching methods."

Eduard Petlenkov has been researching artificial neural networks, a method of applying artificial intelligence (AI) that loosely imitates thinking in the human brain, since 2000. AI was already being talked about back then, but the concept has of course developed a great deal in 20 years. It still has not reached the level of science fiction movies, from which most ordinary people get their idea of what AI can be.

AI is the most capable and most powerful method we have for analyzing data and making decisions, the researcher sums up. It is a recent invention that helps automate decision-making.


The arrival of AI is one logical step forward in human development, both scientists believe. In the beginning, people did everything by hand; then they invented the wheel, then they took up farming. In other words, they kept inventing tools that make life easier.

A hundred years ago automation arrived: suddenly it was possible to do many more things at once. Machines took over many simple jobs, and people were left with higher-level work. Only the work of the brain, thinking, was not yet replaceable. Calculations, too, were automated, but decisions and analyses were left to humans, though not for long. Artificial intelligence is now the next logical step: the automation of analysis and decision-making.

"With a computer, we can analyze much more data, draw much more interesting conclusions, which a person alone would never be able to do," says Petlenkov.


Therefore, a person must now find a new place in the world and move to the next, higher level. One no longer has to deal with analyzing the simpler things.

The next step in automation, according to researchers, is now decision automation.

Juri Belikov clarifies: "It is easiest to define machine learning and AI through tasks. The task given to AI resembles a task given to a human: the machine has to solve human tasks and finally make a decision. Machine learning differs from that. It is a subfield in which you build a model so that the model is as accurate as possible, and that model is designed for some specific task. Machine learning itself does not make decisions: it is given an input, and it produces an output. Take even a Google search, where you enter a keyword and the output is a list of websites, or a list of videos on YouTube."
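Belikov's "input in, output out" view of machine learning can be sketched in a few lines. The following toy example (all numbers invented) fits a straight-line model y = a·x + b to example pairs by ordinary least squares and then produces an output for a new input; note that the model itself decides nothing.

```python
# Minimal machine-learning sketch: fit a one-variable linear model to
# input-output pairs, then use it to map a new input to an output.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# Training examples: the model only ever sees input-output pairs.
xs = [1, 2, 3, 4, 5]
ys = [2, 4, 6, 8, 10]       # underlying rule: y = 2x

a, b = fit_line(xs, ys)
prediction = a * 6 + b       # output for a new input: 12.0
```

Real models have millions of parameters instead of two, but the principle, fitting a model to be as accurate as possible on given data, is the same.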


So, what exactly are artificial intelligence researchers studying?

"We are not engaged in developing artificial intelligence as such," says Petlenkov. "Instead, I study control systems and Juri studies energy solutions. Every field has tasks that can be solved with the help of AI. Building automation is a good example: physically it is no longer possible to make many devices in a house much more efficient, but artificial intelligence helps save resources even further through control. It learns, takes weather forecasts into account, predicts the building's indoor climate from the outside weather, and so on."

In this way, nearly every field is already engaged in the development of artificial intelligence to some extent.


Juri Belikov has been working on questions of trust in artificial intelligence, a new field only a few years old.

"Reliability and interpretability form an almost completely new field, perhaps five years in the making; only in the last couple of years has it been studied more seriously," says Belikov. It is an even more recent topic than artificial intelligence itself.

"We all agree that it is a convenient tool for solving all kinds of problems. If a person cannot do it himself, artificial intelligence comes to the rescue," Belikov explains. "At the same time, the models within artificial intelligence are becoming more and more complex, and we no longer know how the machine reached the result it is giving out."

Models of human language, for example NLP (natural language processing) models, are already very complex, using billions or even trillions of parameters. Behind them stands a huge amount of computing power that only giants like Google, Facebook and Microsoft can afford.

"However, if we look at simpler problems, simpler models are of course used, but even these are already too complicated for humans," Belikov explains of the new problem facing the researchers.


If you keep feeding the machine new data, for example animal pictures, then a machine familiar with cat pictures will eventually be able to identify a cat in a picture in 98 percent of cases. But how did the model arrive at that decision?

"It turned out that there is no answer anymore, because no one knows how the artificial intelligence reaches this result," says Belikov, explaining why it became necessary to study the reliability of artificial intelligence. Explainable AI, a new subfield of AI, examines how a model reaches its decisions.

According to Belikov, a whole series of algorithms has already been created that try to explain the functioning of artificial intelligence, but they still give slightly different results; there is no unified methodology. The biggest problem in the field right now is that it is not possible to explain exactly what happens inside the machine's "black box".
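One common family of such explanation algorithms probes the black box from outside: nudge each input a little and watch how much the output moves. The sketch below is a toy sensitivity analysis, not any specific published method, and the stand-in model and numbers are invented for illustration.

```python
# Toy explainable-AI sketch: attribute importance to each input of a
# "black box" by perturbing one input at a time.

def black_box(features):
    # Stand-in model: in reality we would not know this formula.
    x1, x2, x3 = features
    return 3 * x1 + 0.1 * x2 + 0 * x3

def sensitivity(model, features, delta=1.0):
    """Score each feature by how much nudging it changes the output."""
    base = model(features)
    scores = []
    for i in range(len(features)):
        nudged = list(features)
        nudged[i] += delta
        scores.append(abs(model(nudged) - base))
    return scores

scores = sensitivity(black_box, [1.0, 1.0, 1.0])
# The first feature dominates the explanation; the third is irrelevant.
```

Different explanation algorithms choose different perturbations and baselines, which is one reason, as Belikov notes, that they give slightly different results.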


If we don't know how artificial intelligence works, maybe the machine has outsmarted us?

Scientists do not believe so.

"We have to ask artificial intelligence the question 'Why?'," explains Petlenkov. "If you think of AI as an ordinary person, the machine is just as lazy and takes the easiest route. We teach artificial intelligence like children in school. Imagine giving children a test with multiple-choice answers A, B, C and D. If the correct answer is always A, then the child answers A to every new task in the test. The same question now arises with artificial intelligence: yes, the answers were all correct, but did the child (or the machine) learn why A is the correct answer, or did it simply learn that the answer A is always correct?"

Petlenkov says that if, for example, we teach from images in which men always appear in an urban environment and women in a forest, the artificial intelligence will later guess wrong when shown a man in the forest. The machine has learned to distinguish male from female based on the environment, and it acquires biases that cause it to make wrong decisions in other situations later.

It is like a teacher asking "Why?", Petlenkov adds: if the child answers the multiple-choice question correctly with A simply because A has been correct all along, he cannot answer the question "Why?".

So, he doesn't really know the right answer. The same must now be asked of artificial intelligence.


If artificial intelligence wrongly guesses the gender of the "man from the forest" in the picture, then, according to the researchers, there were simply too few pictures of men in forests.
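The shortcut Petlenkov describes is easy to reproduce in miniature. In this invented example, a nearest-neighbour classifier is trained only on urban men and forest women, so the environment signal dominates, and a man photographed in the forest is misclassified; every feature, scale and label here is made up purely for demonstration.

```python
# Toy demonstration of a spurious correlation: the classifier learns
# the environment, not the person.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_label(example, training_set):
    """1-nearest-neighbour: copy the label of the closest training point."""
    return min(training_set, key=lambda item: distance(example, item[0]))[1]

# Feature vector: [environment signal (large scale), person-specific cue]
training_set = [
    ([0.0, 1.0], "man"),     # every man photographed in the city
    ([10.0, 0.0], "woman"),  # every woman photographed in the forest
]

# A man photographed in the forest: the environment signal wins.
prediction = nearest_label([10.0, 1.0], training_set)  # wrongly "woman"
```

Asking the model "Why?", for instance with the kind of sensitivity probing mentioned earlier, would reveal that the environment feature drives the decision.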

"At the same time, AI notices things that we ourselves don't always notice," adds Eduard Petlenkov. "For example, AI can instead pick up on small details: that men prefer slightly different colors than women, behave a little differently, and so on."

But is it possible to extract people's own biases from the data fed to artificial intelligence?

According to Juri Belikov, that depends on the developer of the model: "There are different algorithms. Some learn from predetermined characteristics. There are also self-learning models that discover the differences completely on their own."

Eduard Petlenkov gives an example: "Teaching the intelligence is one method; the other is self-learning, where there is no teacher by your side. For example, a machine observes many animals and decides that these belong to one kind of animal and those to another. It does not know which is a dog and which is a cat, but it can distinguish one species from the other. This approach can be applied to any field where similarities and connections need to be found. Only afterwards can you say that these here are cats and those are dogs."
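Petlenkov's self-learning example corresponds to what is usually called clustering. A bare-bones sketch, roughly a one-dimensional k-means with two clusters, can separate the animals from a single invented measurement without ever being told their names:

```python
# Unsupervised clustering sketch: split unlabeled numbers into two
# groups around iteratively updated cluster centres.

def kmeans_two(points, iterations=10):
    """Very small k-means for k=2 on one-dimensional data."""
    c1, c2 = points[0], points[-1]            # crude initial centres
    for _ in range(iterations):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)                # move centres to group means
        c2 = sum(g2) / len(g2)
    return sorted(g1), sorted(g2)

# One invented number per animal (say, body mass in kg), unlabeled.
measurements = [3.8, 4.1, 4.4, 22.0, 25.5, 30.1]
cats, dogs = kmeans_two(measurements)
# The algorithm separates the two species without knowing their names.
```

Only after the split does a human attach the labels "cat" and "dog", exactly as in Petlenkov's description.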

Juri asks: "But does it make sense for us to develop an artificial intelligence that works equally well in every field?"

He gives the example that a person becomes a specialist only after putting in his ten thousand hours, say as a medical specialist. If he now wants to retrain, it will take another ten thousand hours to become, for example, an actor. It is the same with artificial intelligence.


Neither AI scientist wants to predict whether and when quantum computers will arrive, but they believe that using this type of computer would give a huge boost to the development of artificial intelligence. Quantum computing research is also being done in Estonia.

So, what happens when they come anyway?

"Many things that are impossible now will become very easy," says Petlenkov first. "For example, encryption considered unbreakable, 128-bit keys or difficult passwords, would be no problem for a quantum computer to crack."

According to him, computing power is currently the limit on AI's development. Google and a few other giants can run big computations; others lack such capabilities. But AI needs still more power, and the kind needed for even more complex calculations can only come from quantum computers.

Juri Belikov adds: "Of course, we are only speculating about quantum computers here, because when you talk to quantum scientists, even they have no common understanding of what will happen when these computers arrive. Prototypes exist, but the number of qubits in them is very limited. It is still an open question whether the laws of physics even allow larger quantum computers to be built. But when one finally arrives, it will open huge new opportunities in artificial intelligence as well."


A washing machine and a vacuum cleaner are very simple examples of the use of "artificial intelligence". The machine looks at how much people use it and what kind of laundry they put in, but both researchers state that behind this are actually very primitive calculations and "if it's so, then do it" algorithms.

A vacuum cleaner, or any other such home appliance, looks at how people behave based on a set of inputs, gets used to it, and then acts on very simplified rules.
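The "if it's so, then do it" logic the researchers describe can be as plain as the following sketch; the program names and thresholds are invented for illustration and do not come from any real appliance:

```python
# Rule-based appliance logic: fixed conditions, no learning involved.

def choose_wash_program(load_kg, heavily_soiled):
    """Pick a washing program from hard-coded rules."""
    if load_kg <= 2 and not heavily_soiled:
        return "quick"        # small, lightly soiled load
    if heavily_soiled:
        return "intensive"    # soil level overrides load size
    return "normal"

program = choose_wash_program(load_kg=1.5, heavily_soiled=False)  # "quick"
```

Everything here is decided by the programmer in advance; nothing is learned from data, which is why the researchers call such "artificial intelligence" a standard algorithm with conditions.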

For more complex artificial intelligence, the processor of a washing machine or vacuum cleaner is not enough, so the calculations are instead done in the cloud, on servers with more capacity. Eduard Petlenkov adds that a smartphone with powerful hardware is already sufficient for image processing, although compared to Amazon's cloud service, the volume of image processing a phone can handle is of course very small.

In marketing, of course, machine learning, which is what home appliances actually involve, does not sound as impressive as artificial intelligence. So the label "with artificial intelligence" goes on the washing machine; there, artificial intelligence is a marketing term behind which stands machine learning, or an even simpler solution: a standard algorithm with conditions.


"It won't come tomorrow," Juri Belikov quickly confirms. "The machine will not become independent any time soon. Of course, artificial intelligence will become smarter, more complex and more powerful, but nothing like in the science fiction movies will happen any time soon," he reassures those who worry a little about a self-thinking machine.

Eduard Petlenkov adds: "Of course, it is easiest for people to judge how good an artificial intelligence is through conversation, and that is why a text conversation with a machine seems the most believable, as if it were thinking and speaking on its own. Chatbots use very simple models: they choose from among predetermined answers, but there can be very many of those answers."
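A chatbot that "chooses from among predetermined answers" can be sketched as a toy retrieval system: score each canned question against the user's words and return the answer of the best match. All questions and answers below are invented.

```python
# Toy retrieval chatbot: pick the canned answer whose stored question
# shares the most words with the user's message. No understanding involved.

CANNED_ANSWERS = {
    "what are your opening hours": "We are open 9-17 on weekdays.",
    "how can i reset my password": "Use the 'Forgot password' link.",
    "where is your office located": "Our office is in Tallinn.",
}

def reply(user_text):
    """Return the answer of the stored question with most word overlap."""
    words = set(user_text.lower().replace("?", "").split())
    best = max(CANNED_ANSWERS, key=lambda q: len(words & set(q.split())))
    return CANNED_ANSWERS[best]

answer = reply("When can I reset the password?")
```

With thousands of stored answers instead of three, such a system can feel surprisingly conversational while remaining, as Petlenkov says, a very simple model.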

Commenting on the recent claim by a Google employee that the technology giant has already created a machine that thinks for itself, Juri Belikov says: "Of course, Google does not share the background of its language models or how they are built, which is why it is difficult to comment." The Estonian scientist does not believe Google has come that far.

"We have a little experience with Facebook's art robots," he says. "We tested those models, and they work very well; some images look very creative. Facebook, now Meta, has described how it works: a complex neural network is fed a vast number of images and learns them, and later, when you say you want a certain kind of image, an original image is assembled not from hundreds but from millions of images. It is still basically like a collage."


When a neural network starts making decisions, who is actually responsible? Artificial intelligence cannot be imprisoned, Petlenkov points out; a person must still bear the responsibility. If a field is life-critical and a dangerous situation could arise because of artificial intelligence, then artificial intelligence is simply not yet allowed into that field.

"If a car is driven by AI, it does not get tired and does not make mistakes," the scientist gives another example. "But we still trust people more, even though they do get tired and make mistakes. If AI drives the car, paradoxes arise: for example, should the AI save the driver's life, or instead minimize the number of victims when there is a risk of hitting several people and killing the driver? A human always saves himself first; he does not usually sacrifice himself for others. The machine faces a paradoxical choice between saving the driver alone and saving several pedestrians while letting the driver perish."

However, hardly anyone would ever buy a car that minimizes the number of victims but may sacrifice the driver sitting in his own car.

At the same time, at the other extreme, the AI could start saving the driver in the car at all costs, sacrificing everyone else. According to the researchers, this is another big problem that needs to be solved.


Artificial intelligence can become a personality, but not soon, researchers believe.

"At the moment, industry is a bit afraid of artificial intelligence, because they don't understand what exactly is going on inside," says Eduard Petlenkov. "This is a problem for every large complex system, not just artificial intelligence. No one has a complete picture of how the whole system works anymore."

"All the more conservative fields, not only industry but also lawyers and public services, are still a bit afraid of artificial intelligence. They use it, but they don't rely on it," Belikov adds.

Both researchers think the regulation of AI definitely needs a more serious open discussion, because many people fear it even though they need not. In the scientists' view, artificial intelligence does far more good than harm.

The European Union is currently discussing what to do with AI. The field needs to be regulated, because otherwise big companies will use it for their own benefit and data will sometimes be misused.

"Artificial intelligence is a tool that can be very useful and make life easier," concludes Eduard Petlenkov. "At the same time, it can be abused, like any other tool. It is like an ax, with which you can build houses or kill."

The article was published in the newspaper Postimees on 8 August.