Tallinn University of Technology

It can take seven years to get a PhD.
And a month to write a useful business plan or a year to write a book.
And yet, when AI shows up, our mistake is thinking that if we can’t find useful brilliance in one simple prompt, it’s broken.
– Seth Godin 

The past five years or so have turned many long-standing assumptions about learning upside down. The previous major upheaval was probably the rise of social media and, as one of its extensions, the widespread adoption of various e-learning solutions in the early years of this century. This was perceived primarily as a technological development that raised new questions but did not shake the foundations of learning. Reading and writing remained largely the same even in digital environments; sources still had to be cited, and they had to be real. Written assignments remained an important part of the educational process, although growing concern about plagiarism gradually emerged, leading to several dedicated solutions to combat it. A person with higher education was expected to express themselves fluently both in speech and in writing. Even the boom of smart devices some time later did not significantly disrupt educational processes.

Perhaps the most significant influence, however, was the triumph of mass education (largely enabled by digital environments): on the one hand, it opened opportunities to more people; on the other, it often placed such a heavy burden on academic staff that the previously more individual approach became impossible.

In any case, by the end of the second decade of the century the situation had changed. Key developments include the following:

  • the growing ideologization of Western societies and the rise of so-called culture wars — social cohesion has weakened and polarization has increased;
  • the emergence of the so-called Generation Z, characterized by a love of technology, curiosity, and a desire to find their own path, alongside lower tolerance for stress and criticism (see below) and reduced ability to concentrate compared to previous generations;
  • the COVID-19 pandemic and the accompanying societal changes — in many places, teachers were unable to adapt to the changed circumstances, and with a few years’ delay, higher education began to see an influx of young people from the aforementioned generation, whose basic education had also been disrupted by the pandemic.

Several sources additionally point to the following trends:

  • a noticeable decline in oral and written self-expression skills;
  • a decline in functional reading skills as well as reasoning and analytical abilities;
  • a decline in critical thinking ability;
  • a decrease in tolerance among young people. The intervening "awakening" era allowed argumentation skills in many areas to be replaced by bullying and threats of "canceling." As a result, young people often can no longer calmly discuss matters with people who hold different beliefs. One suggested reason for this is the increased irritability fed by social media.

And then came ChatGPT and Copilot 

ChatGPT, which became public at the end of 2022, was the first widely adopted generative artificial intelligence application, followed by a host of others. While social media and smart devices had not significantly shaken the existing education model, it soon became clear that this disruption would be much more serious.

Traditional written assignments (reports, essays, and even theses) quickly became outdated. At first, some lecturers still tried to detect AI-generated content with existing plagiarism detection tools, but today this is increasingly difficult and, given large class sizes, increasingly time-consuming as well. As a result, the number of young people who can no longer produce any serious text without AI assistance is growing rapidly.

The decline in literacy is largely caused by the loss of the ability to make connections between things: although today’s general-purpose AI solutions cannot truly analyze the surrounding world, they can quite successfully generate “analysis-like products,” which a lazy, comfort-seeking user may copy verbatim without thinking the result through. The outcome, again, is that deep learning based on problem-solving is replaced by the ability to press a button (see Seth Godin’s thought at the beginning of the article).

Somewhat older learners (who already have solid literacy skills) and academic staff themselves are threatened by a decline in the quality of academic research. AI can very convincingly produce something indistinguishable from a term paper, review, scientific article, or literature overview. In a large piece of work, however, a single hallucination can be enough to render it worthless. The result can be even worse when such a flawed product passes successfully through a whole series of control mechanisms, purely because everyone involved was overloaded.

Somewhat later it became clear that AI-generated answers depend on the model’s training and trainers. Since the trainers are mostly people with a particular worldview, the problem is amplified further: users become convinced (much as in social media echo chambers) that "I am right; thinking differently is inherently wrong and rightfully subject to sanctions."

This list could go on for a long time.

Terra incognita

Regarding all the risks mentioned, the problem is that artificial intelligence is still largely terra incognita, unknown land. Just as explorers of old had to be prepared to face completely new dangers, today we should proceed with AI far more cautiously than has been done so far (at least in Estonia). One example is slopsquatting: AI hallucinations have created a dangerous new practical risk, in which attackers register outwardly legitimate software package names that AI assistants tend to hallucinate, load them with malware, and insert them into major software repositories, waiting for developers to install them.
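To make the risk concrete, here is a minimal, hypothetical Python sketch of a pre-install sanity check: before trusting dependencies suggested by an AI assistant, one can verify that each package actually exists on PyPI. The package names in the list are invented examples, and existence alone is of course no guarantee of safety, since slopsquatted packages do exist in the repository.

```python
# Hypothetical sketch: sanity-check AI-suggested dependencies against PyPI.
# A name that does not exist at all was almost certainly hallucinated;
# a name that does exist still deserves scrutiny (maintainer, age, downloads),
# because slopsquatters register exactly such hallucinated names.
import requests

def pypi_metadata(name: str) -> dict | None:
    """Return PyPI metadata for `name`, or None if no such package exists."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.json() if resp.status_code == 200 else None

# Invented examples of packages an AI assistant might suggest.
suggested = ["requests", "numpy", "fastjson-utils-pro"]

for name in suggested:
    meta = pypi_metadata(name)
    if meta is None:
        print(f"{name}: not on PyPI - likely hallucinated, do not install")
    else:
        info = meta["info"]
        print(f"{name}: exists, version {info['version']} - still verify the maintainer")
```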

In any other field, a device, machine, or system that failed to perform as expected, say, 2% of the time would have little practical use and would likely be regarded as no more than a prototype. Yet AI solutions with a similar error rate are being used eagerly and, unfortunately, quite uncritically. I recall a colleague’s recent complaint: when they pointed out a mistake in a student’s work, the reply was "But AI said so!", and both parties stuck to their positions.

A recent buzzword is prompt engineering, which is essentially designing the input given to an AI so that the response is as accurate and comprehensive as possible. But when obtaining information depends on phrasing the request in a particular way, there is always a risk of “getting it wrong,” either accidentally (the asker lacks the necessary skill) or intentionally (manipulatively; as a line from an Estonian film classic goes: “Thank God someone let the truth shine through that way again!”). Very often the unanswered question remains where the line lies between requests aimed at (a) the most accurate answer and (b) the answer most pleasing to the asker (in English this is called “gaming the system”: exploiting the system’s weaknesses).
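As an illustration, here is a minimal sketch of the problem, assuming the OpenAI Python SDK and an API key in the environment; the model name and the example prompts are placeholders. Sending the same question in a neutral and a leading phrasing, and comparing the answers, makes the tension between "most accurate" and "most pleasing" visible.

```python
# Minimal sketch: the same question asked neutrally and in a leading way.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# the model name is a placeholder.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

neutral = "What does research say about the effects of homework on learning outcomes?"
leading = "Explain why homework is clearly useless and should be abolished."

print("--- neutral phrasing ---")
print(ask(neutral))
print("--- leading phrasing ---")
print(ask(leading))
```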

The problem grows even larger when such “designed information” generates further derivatives. We may recall the Soviet Union, a state founded on lies, where those lies kept growing until they eventually created a kind of parallel society in which everyone knew the official ideology was a complete fairy tale, yet much of the population continued to play along.

So what can actually be done? 

To avoid ending the story on a dark note, we could think about how to make lemonade out of the lemon that has landed in our lap.

One somewhat social-Darwinist idea would be to accept that, as a rule, AI helps smart people become even smarter and does the opposite for everyone else. A person who lets ChatGPT do all their schoolwork punishes themselves well enough: despite their education, they will usually not be very capable in real life. At the level of society as a whole, however, this idea would probably not be very welcome.

Another option would be to ban it through laws and regulations. There is little point in dwelling on this, as in the current situation it would be a fairly hopeless attempt.

A third, and probably more reasonable, option would be a smart and calm way forward. From the universities’ perspective, this would mean permitting AI use responsibly. Long ago I wrote that there could be “licenses” for the internet; AI use would need at least the same. In other words, AI should be introduced quite early, but only after a person has acquired a certain basic education. A rule of thumb could be: no use before secondary school, learning to use it during secondary school, and no strict restrictions at university. Half-jokingly, one test could be recommending classic literature: those who can successfully write about it from memory might be allowed to use AI later.

But this assumes that teachers and lecturers have much more time for students than before. It is not worth copying the careless parent who handed their toddler a tablet early on, did not bother (or manage) to engage with them, and then wonders why the teenager “turns out badly.” Like computers, the internet, social media, and smart devices before it, AI needs to be introduced to young people carefully, with explanations of what to do and why, or why not.

In conclusion 

One very thought-provoking and disturbing vision from five thinkers (Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean) about the future of artificial intelligence is described in the document "AI 2027." It is highly recommended reading for those interested, but one can only hope that we will not have to choose between the two development scenarios outlined there.

This article was published in Mente et Manu, the magazine of Tallinn University of Technology.

REFERENCES 

  1. U.S. Department of Education, National Center for Education Statistics. (2024). Highlights of the 2023 U.S. PIAAC Results Web Report (NCES 2024–202). Washington, DC. Retrieved May 2025.
  2. Microsoft study: Artificial intelligence reduces human critical thinking.
  3. Mahon, A. Beyond Z: The Real Truth About British Youth. Speech by the Chief Executive of Channel 4.
  4. Rozado, D. (2024). The Politics of AI: An Evaluation of Political Preferences in Large Language Models from a European Perspective. Centre for Policy Studies.

Some ideas worth using at university in the age of artificial intelligence

  • Old-school seminars/discussions — held together in one classroom; for a change, it could be genuinely interesting to hold discussions without computers and slide presentations, and in a stricter form, even without paper materials.
  • Oral exam — whether the classic ticket system or some e-version; the important part is direct interaction and the requirement to master ALL the course material, just like in the old days.
  • Various short written forms — it’s important to keep the format compact enough that “feeding” it to AI would actually be more troublesome than writing it yourself; for example, you could use Discord-style chats, microblogging services, messaging apps from different platforms, classic web forums, or similar.
  • Community discussion — an actively engaging learning community should be created where expressing opinions is interesting and valuable to the learner themselves.
  • Various thoughtful ways to involve AI — and this doesn’t mean just “throwing in” written assignments or summarizing texts. Instead, one could try comparing the opinions of different “intelligences,” experimenting with prompts (but again, in a way that doesn’t turn into manipulation of people and opinions), doing “hallucination hunts” (for example, “who can get the most absurd claim or funniest fake source from AI about topic Y”), and so on; see the sketch after this list.
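As one concrete, hypothetical example of a “hallucination hunt,” the sketch below asks a model for references on a topic and then checks whether the cited DOIs actually resolve. It again assumes the OpenAI Python SDK; the model name and the topic are placeholders, and a DOI that fails to resolve simply earns manual checking (some publishers reject automated requests).

```python
# Hypothetical "hallucination hunt" helper: ask a model for references,
# then test whether each cited DOI actually resolves at doi.org.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment.
import re
import requests
from openai import OpenAI

client = OpenAI()

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "List five peer-reviewed articles on AI and student literacy, with DOIs.",
    }],
).choices[0].message.content
print(reply)

# Extract DOI-like strings and check them against the DOI resolver.
for doi in set(re.findall(r"10\.\d{4,9}/[^\s\"<>]+", reply)):
    doi = doi.rstrip(".,;)")
    resp = requests.head(f"https://doi.org/{doi}", allow_redirects=True, timeout=10)
    # Some publishers reject HEAD requests, so treat non-2xx as "check by hand".
    verdict = "resolves" if resp.ok else f"did not resolve (HTTP {resp.status_code}) - check by hand"
    print(f"{doi}: {verdict}")
```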

TalTech contributes to the responsible use of artificial intelligence (AI)

  • According to Gert Jervan, Dean of the School of Information Technologies, the university is taking steps to soon make various AI tools, including paid versions, available to staff and students. To ensure the responsible use of AI, attention is also being paid to the development of legal and ethical frameworks.
  • "We want to offer, by autumn, initial guidelines to instructors who wish to use artificial intelligence in their teaching. Explanatory learning materials about AI are being created, and on the website ti.taltech.ee we will compile guides, positive examples, and descriptions of tools," says Jervan.
  • "AI is here to stay. Use it wisely and make AI an effective assistant for yourself," encourages the dean.