
TI-Hüpe CEO Ivo Visak: Education Needs an Intervention

Beginning on September 1st, 2025, Estonia is integrating AI into its national high-school curriculum. In its first year, the program, known as TI-Hüpe (AI Leap), will see about 20,000 students in grades ten and eleven and 4,700 teachers gain access to AI-based learning platforms, including ChatGPT and Gemini. Eesti Elu spoke to the program’s CEO Ivo Visak to learn more about it. Find the conversation below.

Ivo Visak in "Koolijuhi võimalus kasutada TI-d," screenshotted from TI-Hüpe's YouTube channel.

Given AI’s flaws, what is behind the urgency to implement it now?  

An important way of framing AI use in schools is the problematic practices students are already heavily involved in, including reduced critical thinking and academic fraud, which means an intervention is needed. The "AI leap" has already happened among students, but it hasn't been guided or pedagogically grounded. That's why we need to act now. As a former high school principal, I don't see the value in a passive approach of waiting, watching, and hoping someone else will solve the issue.

I recently spoke with a high school teacher in Tallinn who felt the program's rollout has been somewhat rushed. I'm curious to hear your thoughts on this.

I understand where that criticism is coming from. The program really started gaining momentum only in February. To put it into perspective, the whole process has moved very quickly, taking months, not years. I was brought into the program in June, and since then I've been working to lay the foundation and begin discussions with partners like OpenAI and Google. At the same time, many teachers feel they need support right now. For us, that means first raising all teachers to a certain baseline level before we can offer more specialized, subject-specific training. So it's about building a solid foundation first.

“… why would a student choose a tool that helps them learn, when there’s another tool that just gives them the answer instantly? That’s a core pedagogical dilemma.”

(Ivo Visak)

The teacher also expressed concern that AI could widen learning gaps: motivated students might use it effectively, while others might rely on it for shortcuts.

I think one of the main points is that we’re not trying to compete with the tools students are already using at home. That would be a pointless effort. The tools we’re providing in schools are fundamentally different. They don’t simply give answers. As I mentioned earlier, we’re using a Socratic model. These tools are designed to guide students toward an answer, not hand it to them. But that brings us to the elephant in the room: why would a student choose a tool that helps them learn, when there’s another tool that just gives them the answer instantly? That’s a core pedagogical dilemma. Of course, the education system wants students to engage in deeper learning. But if the system is structured primarily around grades and test scores, then students are motivated to seek shortcuts, because that’s what the system is really rewarding.

This is exactly why we’re involving education and psychology experts in the process. The question of motivation is more critical than ever in the age of AI. We now live in a world where students have all the answers at their fingertips. But without genuine motivation to learn, meaningful learning simply won’t happen. In those cases, students are either driven by external pressures, or they're part of the small percentage with a strong intrinsic desire to learn.

Even with the Socratic model you mentioned, these tools can still produce inaccuracies. Will there be a focus on teaching students how to use AI critically?

Absolutely. While hallucinations have become less frequent, it's still important to understand the bigger picture of how large language models (LLMs) work. At their core, LLMs are prediction models. They generate the most likely next word or sequence based on patterns in the data they've been trained on. So if they don't have enough relevant data, or if they're prompted in a limited way, their responses can be inaccurate. One of the key issues is that many students (and even some adults) use generative AI in the same way they use Google. This kind of AI literacy needs to become widespread. Without it, there's a persistent misunderstanding of what these tools are and what they're not.

It’s interesting to think about how AI also reshapes assessment. How do you see this playing out?

Vox published a really good article on AI and cheating, written by a professor from Stanford, Victor Lee, I believe. He’s been researching cheating throughout history, and the piece does a great job of exploring what cheating means in the context of AI and education. One idea I found especially compelling was his description of AI cheating as having “democratized cheating.” It stems largely from the U.S. university context, where, historically, wealthier students could afford ghostwriters for major assignments. But now, with generative AI, everyone has access to a ghostwriter.

“What are we really trying to achieve through education? If the system is still driven primarily by scoring and performance metrics, then it creates false motivations. That’s something we need to reckon with.”

(Ivo Visak)

This becomes particularly problematic in education systems, especially in the humanities and social sciences, where assessment is still heavily based on how well students synthesize a large body of reading. If the metric remains focused solely on producing well-written essays or summaries of complex ideas, then it's no surprise students would turn to AI. The system unintentionally encourages it. And that leads to bigger questions: Why are we in school in the first place? What are we really trying to achieve through education? If the system is still driven primarily by scoring and performance metrics, then it creates false motivations. That’s something we need to reckon with.

If we want to use these AI tools well in education, we can’t think in short cycles, like the typical four or five years when parliaments change and policies come and go. Changing education isn’t like that. It takes years, not months or days. Education systems only show real results over a long time. What really matters is what kind of norms and values AI brings into education, how those get established and embedded, and that norming takes time.

Speaking of norms, how do you envision teachers using AI?

One of the main things I see is that AI, as it currently stands, is in a kind of middle phase. There's an economic theory that with any hype, a bubble eventually bursts, followed by a plateau, and then real, steady growth in usefulness. I'm pretty sure that's where we are now. The bubble will have to burst before these tools reach their true potential. Another important aspect is that AI will take many different forms. For example, there might be lots of AI agents working quietly behind the scenes: tools that help a teacher notice important details, allowing the teacher to focus on what humans naturally handle well, while the AI keeps track of other vital information in the background.

So, the future of AI depends a lot on how these tools develop, what kinds of regulations societies decide to accept or reject, and what possibilities we are willing to embrace. This is deeply connected to societal values. We should be open to the possibilities AI offers, letting machines handle tasks that aren’t very useful so humans can focus on what really matters.

Responses have been edited for clarity. 

This article was written by Natalie Jenkins as part of the Local Journalism Initiative.
