The EdTech sector is facing a period of reckoning. As AI tools reshape the landscape and user expectations evolve, many platforms are being forced to rethink how they attract, retain, and scale their user base.
Take Chegg, a prominent online education company, which has recently announced layoffs of approximately 22% of its workforce, citing declining demand as students increasingly turn to AI-powered tools like ChatGPT. This shift underscores a broader challenge in EdTech: how to build platforms that not only innovate but also retain and grow their user base in sustainable ways.
Gaiane Simonian understands this deeply. As the former Head of Olympiads Development at Uchi.ru, she not only led one of the largest EdTech engagement engines in Eastern Europe; she pioneered a new model for scaling educational platforms. By turning academic competitions (Olympiads) into a recurring, data-driven growth engine, she proved that business expansion in EdTech could be driven not by ads or discounts, but by meaningful learning events that users love.
Today, she holds an MBA from MIT Sloan and is developing her own EdTech ideas, bringing a critical insider perspective to this turning point in the industry. At Uchi.ru, she reengineered infrastructure and engagement models to sustain user trust at scale. In this interview, she shares what most startups miss when chasing scale, and how insights from her 12-million-user Olympiad engine reveal deeper truths about what makes EdTech platforms actually work.

Gaiane, what does it take to build trust in EdTech infrastructure at scale, and why do so many platforms still miss the mark?
Many platforms fail not because of poor technology, but because they scale too early, before they've tested thoroughly or understood how schools actually work. Reliability is something you earn before going mainstream.
It's not just about servers and uptime. It's about trust: a student shouldn't lose their progress, a teacher shouldn't have to apologize to a classroom, and parents shouldn't doubt the product. In EdTech, even an MVP must be stable, because the stakes are emotional. A bug isn't just a glitch—it can mean tears or lost confidence.
We designed our infrastructure from day one to support resilience. We tested different browsers, slow connections, and even duplicate accounts for the same student. All those edge cases matter. That's infrastructure, too.
You enabled more than 200,000 students to use the Uchi.ru platform concurrently. How did you build an infrastructure that could handle that kind of load?
We architected the system from the start to manage simultaneous demand: using microservices, horizontal scaling, caching, and fault-tolerant queues. Each Olympiad module was separated into its own service to prevent overload across the main platform. Ahead of every nationwide launch, we conducted stress tests simulating millions of sessions. These were complemented by smart session management, real-time logging, and intelligent traffic segmentation to prevent cascading failures. Once in production, we relied on a robust monitoring environment, autoscaling protocols, and a 24/7 support rotation to keep everything running smoothly.
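The interview doesn't name the load-testing tooling Uchi.ru used, but the kind of pre-launch stress test Simonian describes can be sketched with the open-source Locust framework. The host, endpoints, and payloads below are hypothetical:

```python
# Hypothetical pre-launch stress test with the open-source Locust
# framework (not necessarily Uchi.ru's actual tooling or endpoints).
# Run with e.g.: locust -f stress_test.py --users 200000 --spawn-rate 500
# (reaching that scale in practice requires Locust's distributed mode).
from locust import HttpUser, task, between

class OlympiadStudent(HttpUser):
    host = "https://example.org"  # placeholder target
    wait_time = between(1, 5)     # simulated students pause 1-5 s between actions

    @task(3)  # reading a task is weighted three times as frequent as submitting
    def view_task(self):
        self.client.get("/olympiad/tasks/next")

    @task(1)
    def submit_answer(self):
        self.client.post("/olympiad/answers", json={"task_id": 1, "answer": "42"})
```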
Running competitions across seven time zones sounds like chaos. How did you manage traffic peaks?
Actually, seven time zones helped. The launch began at midnight Moscow time—early morning in the Far East—and traffic ramped up gradually. Once the system had stabilized and launches were automated, it handled the load smoothly. In the early years, though, the whole company stayed up for launches. Every department tested different scenarios. We fixed bugs live based on early user data. It was intense but essential.
Olympiads were a core entry point to Uchi.ru. How did you connect them naturally to the rest of the platform?
From the start, I saw Olympiads not just as a product but as a strategy. I believed they could be the key to scaling an EdTech platform in a way that aligned with educational goals. And they were.
Our Olympiads were either highly relevant to the existing courses, so users would smoothly convert, or they were massive in reach, bringing volume. In both cases, they worked as strategic acquisition tools.
We nudged teachers to assign Olympiads as homework so that students encountered the paywall later at home—ideally with a parent nearby. But ultimately, a child needs to want it. That's the real trigger for conversion.

You turned Olympiads from a traditional classroom activity into a digital movement with over 12 million users. What do you think made your approach so successful and so widely adopted?
A few things came together. First, we didn't treat Olympiads as just assignments—we made them events. Every launch felt like a premiere: we had countdowns, mascots, even gala openings at schools or with government officials pressing a big 'Start' button. That kind of energy made kids excited to participate.
Second, we kept them simple and joyful. The design was intuitive, the content playful but meaningful. We gave every child a certificate and made sure success felt personal—even if they didn't win, they felt proud.
Third, we aligned with school rhythms. Timing mattered. A September Math Olympiad caught the back-to-school wave. A December programming challenge rode the global Hour of Code. That helped us build a reliable, recognizable cadence.
And finally, we respected every stakeholder. Teachers got classroom dashboards. Parents saw value. Districts got real data. The whole system worked because we built for trust. That's what helped it scale.
What metrics were most important when evaluating success, especially since Olympiads were free?
New users and reactivated users were key—people who hadn't used the product in a while but returned because of the Olympiad. We tracked conversion into online courses, session length (ours exceeded 20 minutes on average, even during the pandemic), and overlap between Olympiad participants month-to-month.
Loyalty metrics also mattered. They help predict retention better than any short-term spike. If users love the product, they'll stick around and explore more. If not, they'll drop off, no matter how good your numbers look.
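The interview doesn't define the month-to-month overlap metric precisely; one plausible reading, sketched in Python, is the share of this month's participants who also competed last month:

```python
def month_over_month_overlap(last_month: set[str], this_month: set[str]) -> float:
    """Share of this month's Olympiad participants who also took part
    last month: a simple repeat-engagement signal. This is one plausible
    definition, not necessarily the exact metric Uchi.ru tracked."""
    if not this_month:
        return 0.0
    return len(last_month & this_month) / len(this_month)

# Example: 3 of 5 March participants also competed in February -> 0.6
print(month_over_month_overlap({"a", "b", "c"}, {"a", "b", "c", "d", "e"}))
```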
Olympiads were free, but still boosted growth. How did you prove their value?
We tracked spikes in new registrations and user retention. The traffic graphs spoke for themselves: activity soared at launch, then spread into the platform's other areas. Even though Olympiads didn't generate direct revenue, the ROI was clear.
Your Olympiads felt more like events than assignments. How did you make them fun while keeping educational value?
It was all in the content mix: difficulty, storytelling, interactivity, and pacing. The trial round was low-pressure and playful. The main round had a timer and a single attempt.
We also designed a child-friendly UX: minimal text, drag-and-drop mechanics that worked for small hands, standardized interface elements, and clear visual cues. Tasks needed to feel intuitive, so kids could focus on solving, not navigating.
We kept difficulty balanced so that about half the tasks were accessible to most students, and only a few were truly hard. That created motivation, not frustration. Every child received a certificate, and teachers got thank-you letters to reinforce the cycle.
You once said Monday is the worst day for launches. How did you find that out?
From listening. Teachers told us Mondays were packed with planning meetings. When we moved launches to Tuesday, we saw a clear traffic bump. It's a small detail, but teacher loyalty is everything—it drives the rest.

You built dashboards for multiple stakeholders—teachers, principals, and districts. How did you tailor each one?
We didn't just give data—we solved problems. Teachers needed class diagnostics, principals needed performance by grade level, and districts needed tools to spot talent or support struggling schools. We made the data digestible and directly useful for each group.
Was there a crisis that reshaped how you build?
Yes. During our fastest growth phase, we sometimes underestimated peak loads. When the platform crashes, that becomes your only priority. We started running early stress tests, added circuit breakers, and learned to think not just about scale, but limits.
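Simonian doesn't detail the implementation, but the circuit-breaker pattern she names (stop calling a failing dependency and fail fast so the failure doesn't cascade) can be sketched in a few lines. The thresholds here are illustrative:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after repeated failures, stop calling the
    dependency and fail fast, giving it time to recover. Illustrative
    thresholds; not Uchi.ru's actual implementation."""

    def __init__(self, max_failures: int = 5, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # timeout elapsed: half-open, allow one trial call
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # success closes the circuit again
        return result
```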
AI is everywhere now in EdTech. Where do you see it helping, and where do people overhype it?
AI is powerful, but its value lies in support, not substitution. We used to think teachers were the only source of knowledge. Then EdTech made them guides. Now, AI helps them personalize.
But AI still can't replace the teacher. It can identify why a student is struggling—maybe fear, maybe confusion—but it's the teacher who turns insight into action. I like the Eedi model, where tutors get AI suggestions but make the final call. That's human-centered personalization at scale.
MagicSchool is another example—it doesn't win by having the best AI, but by solving teacher pain points like lesson planning and adaptation. That's the key: solve a real problem.
If you were building a new EdTech product today, what early decisions would you prioritize?
First, I would design the platform with a modular architecture, separating content, logic, and user data. That makes the product easier to scale, adapt, and localize. We learned this the hard way: splitting a monolith after launch is painful.
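A minimal sketch of what that separation might look like in Python (the names and interfaces are hypothetical; the point is the boundaries):

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Task:
    task_id: str
    prompt: str
    answer: str

class ContentStore(Protocol):
    """Content lives behind its own interface, so it can be swapped or localized."""
    def get_task(self, task_id: str) -> Task: ...

class ProgressStore(Protocol):
    """User data is isolated the same way, so storage can scale independently."""
    def record_result(self, user_id: str, task_id: str, correct: bool) -> None: ...

class GradingService:
    """Pure logic: no storage concerns, so it scales horizontally."""
    def __init__(self, content: ContentStore, progress: ProgressStore):
        self.content = content
        self.progress = progress

    def grade(self, user_id: str, task_id: str, submitted: str) -> bool:
        task = self.content.get_task(task_id)
        correct = submitted.strip() == task.answer
        self.progress.record_result(user_id, task_id, correct)
        return correct
```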
Another critical decision would be to build in behavioral tracking that goes beyond logging right or wrong answers: capturing micro-interactions like hesitation, retries, where users click, and how often they return to previous questions. Those micro-signals are the raw material for personalization later. If you skip them early, you can't reconstruct them afterward.
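One common way to capture such micro-signals is an append-only event log. This sketch assumes JSON-lines output and hypothetical event names, not any particular platform's schema:

```python
import json
import time

def log_event(sink, user_id: str, event_type: str, **fields) -> None:
    """Append one micro-interaction (e.g. 'hesitation', 'retry', 'revisit')
    as a timestamped JSON line; aggregation and analysis happen downstream.
    Event names and fields are illustrative."""
    record = {"ts": time.time(), "user_id": user_id, "event": event_type, **fields}
    sink.write(json.dumps(record) + "\n")

# Example: a student paused 12 seconds before first touching task 7.
with open("events.jsonl", "a") as sink:
    log_event(sink, "student-42", "hesitation", task_id=7, seconds=12.0)
```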
Finally, I would implement an internal A/B testing system directly within the admin dashboard. EdTech needs constant iteration. Easy testing allows product teams to refine without waiting on engineering.
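The interview doesn't describe the mechanics of such a system, but deterministic hash-based bucketing is a common core for in-house A/B testing, since the same user always lands in the same variant with no assignment storage. A sketch under that assumption:

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple[str, ...] = ("control", "treatment")) -> str:
    """Deterministically bucket a user into an experiment variant.
    Hashing (experiment, user) keeps assignments stable across sessions
    and independent across experiments. A common pattern, not necessarily
    how Uchi.ru built theirs."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("student-42", "new-task-ui"))  # stable across calls
```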