Tallinn University of Technology

Birgy Lorenz and Gert Jervan share their thoughts on artificial intelligence from ethical and technical standpoints and from the perspective of higher education. Ethically, it is important to do our homework ourselves; technically, the chatbot does not really understand what it says to you – it just knows how to say it very well.

Birgy Lorenz, Senior Researcher, Chair of the Ethics Committee of TalTech, IT-Didactic Centre | Photo: Karl-Kristjan Nigesen

If we start from the beginning, the advent of artificial intelligence (AI) has been a hot topic since the 1980s. I do not think that AI will replace people, but we will certainly interact more and more in our daily lives with machines and scripts that do the more boring and routine jobs for us.

In addition to tedious and routine tasks, the ‘Ai, caramba!’ (AI) can give students a helping hand, for example when they need quick advice because they do not understand a homework assignment and cannot ask the teacher. AI can also help with translation or correcting typos, or when you are looking for ideas for writing a story or drawing a picture. A computer-based intelligence can even help you learn to write better. However, the rule of ‘Trust, but verify!’ applies – a robot can create a lot of nonsense, and you as a human make the final decision on whether or not to use what is offered.

Teachers will be able to make better use of the possibilities offered by artificial intelligence, starting with tasks that allow for automatic assessment. There are solutions that present tasks and exercises at the appropriate level of difficulty. Teachers can also ask a script to help identify weaknesses in a student’s work and suggest ideas for improvement.

In addition, AI enables the creation of interactive learning environments, where the experience for students can be superior to real-world learning. For example, you may want to travel to distant lands or thousands of years into the past, but you do not have the money, or it is not safe or simply not possible. The possibilities in virtual reality are endless, and computer games, for example, already take advantage of this.

We need to talk about academic integrity!

Ideally, enrolling at a university should mean that you have a thirst for knowledge and a desire for self-development. A diploma is important, but the process and the skills and knowledge you gain are more meaningful, because a power cut, for example, cannot take them away from you.

The aim of the defence of student papers is to assess the competencies acquired by the students. The students are responsible for the content and quality of the thesis, regardless of the sources used, including generative AI.

The school aims to support the acquisition of knowledge and skills. Using AI with your classwork or course paper might help, but you will not always train your own brain as effectively. It is important that if you have used external help to a significant extent, you report it, for example by crediting the person who helped you. This is also a guideline in the TalTech Code of Academic Ethics.

Academic integrity is an agreed and honest way of behaving in the academic realm when studying, researching, and testing knowledge. It is based on the understanding that academic institutions – universities, research institutions, research units, and higher education establishments – respect the intellectual creation and intellectual property of others. Academic fraud, such as plagiarism and cheating, is a violation of the rules established at the university, including its values. When students carry out research or gather information for a presentation or a seminar paper, their activities must also comply with the principles of good research practice: freedom, responsibility, honesty and objectivity, respect and caring, justice, openness and cooperation.

From a student’s point of view, honesty and objectivity are important principles, which means that a student has the courage to admit mistakes and, if necessary, reassess their previous work in the light of new research; interprets both data and research results objectively and not arbitrarily; and does not falsify or fabricate data or plagiarise.

So what should we do?

Clearly, competing with a computer is not the best use of the human brain’s time. If we leave the dull work to AI, people can focus on developing their creativity. In fact, beauty lies in the human mistakes that a machine cannot make.

We are emotionally intelligent – we are able to perceive and express emotions and make others understand them. A human being is able to think about ethical issues and make decisions that are responsible and fair. In other words, only we can make decisions that are good for humans. We can never let AI have the final say in political decisions affecting humanity – while this may seem convenient, it is certainly not ethical.

Human participation is essential for understanding cultural and social differences and communicating with people from different cultures. This is particularly important in international communication and inter-cultural cooperation. It would be a shame if World War III or IV were started by a programme that neither likes nor dislikes the ensuing fighting.

Finally, I would recommend looking into the different types of ‘kratt’ AIs that can be used in education (‘kratt’ is Estonia’s term, borrowed from folklore, for practical AI applications) – but do your own homework anyway. It would be sad if, in 2028, the biggest concern in Estonia were students finishing basic school who are incapable of solving maths problems independently or writing a short story in Estonian. What would we do with such students at university, let alone the issues employers would face once they enter the workforce?

Birgy Lorenz and Gert Jervan

AI – what’s all the fuss about?*

Professor Gert Jervan, Dean of the School of Information Technologies | Photo: Heiki Laan

Artificial intelligence (AI) as a scientific discipline has its roots in the 1950s. In the following decades, AI-oriented research received a lot of attention all over the world, including in Estonia. Examples include the work of Enn Tõugu and his colleagues at TalTech and the Department of Cybernetics, who successfully worked on creating expert systems – one of the applications of artificial intelligence.

For various reasons, the development of AI has essentially stalled on several occasions. These periods are known as AI winters, and they were caused by underestimating the complexity of AI and by the limited capacity of computers. Nevertheless, AI algorithms have entered a wide range of fields and applications in the last decade. So why has AI suddenly received so much attention recently?

At this point, we need to talk about terminology, because there is a lot of confusion in everyday media. The classical definition of AI describes a system that can solve arbitrary tasks requiring human intelligence and can do so more successfully than humans. Today, the notion of artificial general intelligence (AGI) is also used to denote a system corresponding to this description. Many groups made repeated attempts to create an AGI in the 1970s and 1980s, but these attempts failed consistently, leading to the AI winters. However, researchers realised at the beginning of this century that using AI algorithms to solve narrower and more specific problems was not only possible but also produced very good results. Examples include artificial neural networks and statistical machine learning. Such algorithms are used for optimisation, image recognition, route planning in self-driving vehicles, and many other fields. These applications are known as ‘narrow AI’ and have very little in common with AGI (or AI in the classical sense).
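To give a concrete sense of just how narrow such methods are, consider a minimal, hypothetical sketch of one of the simplest statistical machine learning techniques, a nearest-neighbour classifier. It solves exactly one task – assigning a label to a point based on the labelled examples closest to it – and nothing else:

```python
import math

# A minimal nearest-neighbour classifier: a narrow, purely statistical method
# that solves one specific task (labelling points) and nothing else.
def nearest_neighbour(labelled_points, query):
    # Return the label of the training example closest to the query point.
    closest = min(labelled_points, key=lambda item: math.dist(item[0], query))
    return closest[1]

# Hypothetical training data: (measurements, label) pairs for two classes.
examples = [((1.0, 1.0), "cat"), ((1.2, 0.8), "cat"),
            ((5.0, 5.0), "dog"), ((4.8, 5.3), "dog")]

print(nearest_neighbour(examples, (1.1, 0.9)))  # -> cat
print(nearest_neighbour(examples, (5.1, 4.9)))  # -> dog
```

Everything beyond that single labelling task – let alone anything resembling general intelligence – is simply outside the scope of the method.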

A decade ago, as more powerful computers and big data became available and AI algorithms were developed further, deep learning methods began to dominate. However, despite the capabilities of these algorithms, they are still statistical methods that provide answers on the basis of learned past knowledge, without any (self-)awareness, emotions, or capacity for thought. This category also includes natural language processing (NLP) and large language models, such as ChatGPT, which has recently received a lot of attention.

What all these models have in common is that they can process natural language but do not understand the content. ChatGPT is able to process a colossal amount of online information and create extremely coherent and fluent text for us without understanding any of it. With each new big language model, we ask ourselves ever more insistently: how is this possible? How can statistical methods achieve such results? Is something more intelligent really not hiding in there?
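To make the ‘statistics without understanding’ point tangible, here is a toy sketch (a deliberate simplification, not how ChatGPT actually works) of a purely statistical text generator: it merely counts which word followed which in its training text and continues accordingly. Large language models are vastly more sophisticated, but the underlying principle – predicting the next word from learned statistics rather than from comprehension – is the same:

```python
import random
from collections import Counter, defaultdict

# Toy "language model": it only counts which word followed which in the
# training text, then generates text by sampling from those counts.
def train_bigrams(text):
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, length=10):
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break  # dead end: this word never had a successor in training
        # Sample the next word in proportion to how often it followed this one.
        word = random.choices(list(candidates), weights=list(candidates.values()))[0]
        output.append(word)
    return " ".join(output)

corpus = "the robot writes text and the human checks the text again"
model = train_bigrams(corpus)
print(generate(model, "the"))  # fluent-looking output, zero understanding
```

The output can read quite naturally, yet nowhere in the program is there any notion of meaning.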

This is one of the major weaknesses of large language models: despite the exponential increase in their complexity, they do not become more reliable. The GPT-3 model contained 175 billion parameters, GPT-4 is reported to contain about one trillion, and future models are predicted to reach 100 trillion. It has been estimated that the complexity of models currently doubles every 3.5 months, whereas Moore’s Law, which describes the growth of computing technology, observes that the capability of computers doubles only every 18 months. However, these models are still based on the information available to them, without understanding its content. They do not learn to distinguish truth from falsehood. They simply become more assertive (and more natural-sounding), giving an increasingly confident impression that the generated text is true in every case. As there are so many potential applications for large language models (which have been springing up like mushrooms lately), it is crucial that we take into account issues such as hallucination, bias, alignment, and many other challenges, both ethical and technological, when building AI applications (as well as in our studies). Even if we correct ChatGPT by giving it accurate information, someone else can just as easily feed it wrong or biased information at the same time. And the more you disclose your private information to it, the better it gets to know you – yet it is not possible to ask for this information back.
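A quick back-of-the-envelope comparison of those two doubling rates (taking the quoted figures at face value): over one 18-month Moore’s Law cycle, something that doubles every 3.5 months grows by a factor of

\[
2^{18/3.5} = 2^{5.14\ldots} \approx 35,
\]

so per cycle, model complexity would outgrow hardware capability roughly 35-fold against 2-fold – more than an order of magnitude.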

Is GPT-4 another step towards AGI? Yes, definitely. GPT-4 is able to perform many tasks at or above the level of an average human. Nevertheless, it is still very limited, and the road to a fully-fledged AGI remains very long. At the same time, the various AI algorithms give us a huge leap forward in solving everyday problems because, as has been aptly said, the advent of AI will have the same impact on society as the invention of the printing press once did. Let’s use the tools at our disposal. But let’s also keep our heads and not rush blindly into the future.

* ChatGPT was not used even once for creating this text.
