Sven Nõmm: artificial intelligence as a societal challenge and opportunity

10.12.2024

Sven Nõmm | Photo: TalTech

This is an opinion article
The views expressed in this article are those of the author and may not coincide with those of Trialoog.

Artificial intelligence represents both a challenge and an opportunity, as it is transforming science, education, and everyday life. TalTech's AI lead, Sven Nõmm, emphasizes the need to find the best possible balance between promoting innovation and adhering to ethical principles.

The term “artificial intelligence” or “AI” was coined 68 years ago. This marked the beginning of a discipline that, over the decades, has transformed both science and everyday life. Many consider the 1956 Dartmouth Conference to be the starting point of the field.

Since then, methods for data analysis have evolved: information retrieval became data mining, which then developed into data science – a field that tackles complex problems such as classification, clustering, anomaly detection, and pattern recognition. Machine learning has provided a framework for data science by combining statistics, mathematics, and computer science. All of this is enhanced by the logic and reasoning capabilities that form the foundation of artificial intelligence.

For decades, AI has found applications in many scientific and technological fields. Despite its potential, it long remained a niche topic, regarded primarily as a subfield of computer science. Over time, however, AI-based technologies gradually made their way into various tools and eventually into everyday life. They enabled functions such as unlocking phones with facial recognition or receiving travel recommendations – developments that until recently seemed both natural and harmless. The emergence of large language models, capable of generating code or essays indistinguishable from those written by humans, has significantly raised the profile of AI’s impact, making it both more relevant and more controversial.

Retro-futuristic illustration of how artificial intelligence might have been imagined in 1956 | Image: Trialog/ChatGPT

Education and AI – partners or opponents?

Technological progress has always influenced education. Pencils and paper gave way to the slide rule, which was later replaced by the calculator – and today, all of these are integrated into smartphones. Similar examples can be found in many other fields.

Today, AI is capable of automating exercises that once helped students develop skills – such as writing programming code or essays. This presents one of the greatest challenges in modern education: will practice, a fundamental component of learning, disappear altogether? If a student can complete an assignment without practicing, the acquisition of knowledge and the development of critical thinking are both at risk.

AI also raises ethical and legal questions in education. For instance, is it acceptable for a student to use a chatbot to write essays or solve complex problems? Or how can we assess a student’s knowledge if their achievements rely heavily on technology?

In the context of universities, our primary responsibility is to ensure a high standard of education for our graduates. This means that students must acquire skills – learning to code, writing essays, and applying various types of knowledge in practice. This is the foundation of academic education, even in highly practical fields.

At the same time, we must create conditions that enable the design of the next generation of AI systems and tools. This must be done in alignment with societal expectations – adhering to ethical, legal, and safety principles, along with transparency and trustworthiness. Yet we must not forget that many AI-powered tools – such as chatbots or coding assistants – depend on internet access. What happens if that connection is lost? A strong solar storm or a cyberattack could disrupt internet access for hours.

In everyday situations, AI-based tools are becoming an integral part of our work. They can solve programming tasks with remarkable speed. Although writing essays remains a challenge for AI, it can help structure texts more easily and handle many repetitive tasks more efficiently. If we limit the use of this powerful tool, we risk making our graduates less competitive in the job market. It’s also crucial for people to learn how to interact with AI-based systems and collaborate with them effectively.

Undoubtedly, it is difficult to strike a balance between the opportunities and challenges posed by artificial intelligence. A comprehensive solution – one that meets the needs of all parties involved – has yet to be found.

Artificial intelligence is breaking out of the lab walls and amplifying innovation across all areas of life | Photo: Getty Images/Unsplash

The potential and challenges of AI

AI offers significant advantages. In research and development, it has led to impressive results, and it can also effectively support routine administrative tasks. However, AI-based technologies have demonstrated their power for both good and bad. Extreme positions – such as banning AI entirely – stand in contrast to liberal approaches that advocate for avoiding any regulation. A more moderate view suggests that research and, in particular, applications should be subject to regulation.

Artificial intelligence is here to stay. In a democratic society, banning it is not a feasible option – and even if it were, the consequences would be devastating.

Beyond chatbots and image generators, there is a broad and diverse ecosystem of AI applications spanning manufacturing, transport, and everyday use. Although these lesser-known solutions receive less attention, they are indispensable in many fields and influence our daily lives. Banning such systems would cause serious disruptions – not only in the industries that produce AI components but also in the logistics sectors that support their distribution. In addition, a ban would require replacing existing technologies – a process that would lead to confusion and interruptions in many areas of life.

At the same time, in a democratic society, protecting individual rights is not only necessary – it is an uncompromising obligation. At the heart of every decision must be the task of maintaining a balance – between fostering innovation and safeguarding human rights.

The future of AI in Estonia

Estonia has created favorable conditions for the development of AI. For example, TalTech’s AI Center of Excellence brings together research and development, supports international collaborations, and contributes to increasing awareness around artificial intelligence.

The center’s collaboration with TalTech’s IT Didactics Center helps organize training for both university members and external participants. TalTech is also a trusted partner of the Estonian government – providing academic support and expert knowledge in shaping a nationwide AI ecosystem. We also offer the necessary academic backing for the AIRE AI and Robotics Center, which has become a bridge connecting businesses and the university. Yet for this whole ecosystem to work, it is crucial that people know how to use AI-based technologies in everyday life.

AI is not just a technology – it is an opportunity. To fully harness the potential of artificial intelligence, we must learn to work with it. This means raising overall awareness and creating opportunities for all members of society to understand, implement, and guide AI fairly and ethically. Artificial intelligence is not our adversary – if we understand it and learn to use it wisely, it becomes our partner.