
In the fourth quarter of 2024, global spending on cloud infrastructure grew by $17 billion, or 22% year over year, reaching $91 billion. Despite its enormous scale, the cloud services market continues to grow rapidly, thanks in part to new services and AI tools.
Iaroslav Molochkov is a Senior Software Developer with experience at SberTech and EPAM and an expert in Java development, distributed systems design, performance optimization, and cloud technologies. In this interview, he outlines the main CloudTech trends, discusses the implementation of complex IT projects, and explains the principles that help him communicate effectively with colleagues and mentor other team members.
— How did you get started in software development, and why did you choose Java as your specialization?
— I was interested in math and physics back in school. Later, I enrolled in MEPhI, one of Russia's leading technical universities, and started working as an intern at a company that specialized in business process automation.
The choice of Java was somewhat accidental. In school and university, I studied both Pascal and C. But my Java instructor provided not only abstract theory but also practical knowledge, and instilled in me a culture of continuous learning that still helps me today.
Plus, Java is one of the most popular programming languages, with a vast number of frameworks and extensive documentation. It has a huge community. With Java, you can land a well-paying job and build a successful career.
— One of the key stages in your career was working as a Principal IT Engineer at SberTech. What major projects did you implement in that role?
— Most of them involved the company's internal processes. One example is organizing the migration from the GridGain computing platform to a custom build of Apache Ignite tailored to internal needs.
The most important initiative, in my view, was release management for Apache Ignite, an open-source in-memory data grid. I fully managed the release of a new version that demonstrated improved performance. Specifically, I coordinated all the processes: collecting commits, identifying the causes of test failures, prioritizing bug fixes, controlling quality, and so on. Throughout the release cycle, I also communicated with the developer community, discussed release details with them, and resolved disputes.
Within the release frequency improvement initiative, I became the first person on my team to go through the entire release manager path. So after the project ended, I began training my colleagues in these skills.
— What was the most challenging part of the process for you?
— Releasing a new version is a complex task in itself. First, you need to initiate the process and nominate yourself for the role of release manager. Then, you need to kick off discussions about which features will be included in the release and get approval from the decentralized community. You also have to manage developers' expectations throughout: discussing blockers, the need to fix certain bugs, and so on.
After discussions are completed, a release list is prepared, followed by a scope freeze period. At this stage, the codebase is "frozen": changes can only be made if new critical bugs are discovered, and those must be discussed and fixed quickly.
Finally, you must prepare and publish the release on time and conduct a vote among community members. At this stage, load testing is performed to ensure stable performance. If the release shows weak performance, it simply won't pass the vote and won't go live. So release management is a complex organizational and technical job that requires a broad set of both hard and soft skills.
— After SberTech, you moved to EPAM as a Senior Java Developer. How did your responsibilities change?
— At SberTech, I was primarily responsible for release management and for helping to integrate the released product into other divisions of Sber—the largest bank in Russia and Eastern Europe. At EPAM, the focus shifted to development and IT architecture design. For nearly two years, I worked with a major European DIY retailer and was involved in decomposing a PHP monolith into microservices, including designing and writing many of them.
I should note that I was promoted fairly quickly to Lead Java Developer—a shift from an operational to a tactical level.
— What projects are you working on now?
— I currently work at a leading international company that develops tools for programming in various languages. My responsibilities include designing new plugin features related to various cloud providers (AWS, GCP, Azure, and others). I'm also involved in performance optimization.
One of my current tasks, for instance, is adapting mission-critical projects to new interfaces for interacting with cloud providers, as well as ensuring the stability of this software. I also continue to mentor other team members and optimize the team's workflows.
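To give a sense of how such plugin work typically isolates provider-specific logic, here is a minimal sketch in Java. The `CloudProviderClient` interface, the `AwsClient` class, and the hard-coded data are hypothetical illustrations, not code from the projects discussed in this interview; a real plugin would delegate to each provider's official SDK behind an abstraction of this kind.

```java
import java.util.List;

public class CloudPluginSketch {

    // Hypothetical provider-neutral abstraction: plugin features are written
    // against this interface, and each provider gets its own implementation.
    interface CloudProviderClient {
        String providerName();
        List<String> listRegions();
        boolean isServiceAvailable(String service, String region);
    }

    // Illustrative AWS-flavored implementation with hard-coded data;
    // a real one would delegate to the provider's official SDK.
    static class AwsClient implements CloudProviderClient {
        @Override public String providerName() { return "AWS"; }

        @Override public List<String> listRegions() {
            return List.of("us-east-1", "eu-west-1");
        }

        @Override public boolean isServiceAvailable(String service, String region) {
            return true; // placeholder for a call to the provider's metadata API
        }
    }

    // Feature code depends only on the abstraction, so adapting to a provider's
    // new interface means updating one implementation, not every feature.
    static String firstRegionWith(CloudProviderClient client, String service) {
        return client.listRegions().stream()
                .filter(region -> client.isServiceAvailable(service, region))
                .findFirst()
                .orElseThrow(() -> new IllegalStateException(
                        service + " is not available on " + client.providerName()));
    }

    public static void main(String[] args) {
        System.out.println(firstRegionWith(new AwsClient(), "managed-kubernetes"));
    }
}
```

With this shape, supporting a provider's new API surface means rewriting one implementation class while the features built on top of the abstraction stay untouched.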
— How do different cloud providers' approaches vary, and how do you take these differences into account when developing new features?
— Major providers mostly offer similar services, since the competition in this area is intense. Nevertheless, there are differences. For example, AWS focuses on versatility and scalability and offers the widest range of services (IaaS, PaaS, SaaS) and deep customization. Their philosophy is to provide "building blocks" for any task—from data storage to complex computations. AWS actively develops services for large enterprises, including specialized solutions like GovCloud for government needs.
Accordingly, when working with AWS, developers must consider the high degree of customization, which requires a deep understanding of the services. New features must be tested for compatibility with a wide array of existing AWS tools. It's also important to optimize solutions for AWS's pricing model to minimize user costs.
Google Cloud Platform, on the other hand, focuses on innovation in data, analytics, and machine learning. They leverage Google's expertise in these fields and position themselves as an open, cost-effective platform supporting multi-cloud and hybrid scenarios. When developing for GCP, it's important to take their strengths in ML and analytics into account, integrating new features with BigQuery or AI tools.
Azure targets enterprise clients, especially those already using Microsoft's ecosystem (Windows Server, Active Directory, SQL Server). Its strength lies in hybrid cloud support and seamless integration with on-premise infrastructure. Development for Azure requires emphasis on compatibility with Microsoft products and hybrid scenarios. Security is a priority, so new features must meet strict data protection standards.
Finally, Nebius is a relatively new player built on the infrastructure of Yandex Cloud. Their approach is aimed at the European market, with a focus on flexibility, localization, and compliance with strict regulatory requirements. Nebius emphasizes AI tools and high-performance computing. And since Nebius's infrastructure is younger than its competitors', developers must consider regional availability and test new features for compatibility with their ecosystem.
— CloudTech today is a "foundation" of software development. Where is this industry heading, and what trends do you see?
— I would highlight several key trends:
- Multi-cloud and Hybrid Strategies: Companies are increasingly using combinations of public and private clouds to optimize costs, increase resilience, and meet regulatory requirements. Hybrid solutions allow integration of on-premise infrastructure with cloud systems, providing flexibility.
- AI and Machine Learning in the Cloud: Cloud platforms are becoming the foundation for scalable AI solutions. Services like AWS SageMaker, Azure AI, or Google Vertex AI simplify model development and deployment. In 2025, demand for AI-optimized cloud resources (e.g., GPU/TPU) continues to grow.
- Serverless Architectures: Technologies like AWS Lambda, Azure Functions, and Google Cloud Functions are gaining popularity. They allow developers to focus on code instead of infrastructure management, reducing costs and accelerating development (see the handler sketch below).
- Cloud Cybersecurity: As cloud services become more popular, attention to security intensifies. Trends include Zero Trust adoption and automated vulnerability management, while data encryption remains a de facto standard. Platforms like Palo Alto Prisma Cloud are gaining wide adoption.
- Edge Computing: Cloud technologies are integrating with edge computing to process data closer to its source (e.g., IoT devices). AWS Outposts and Azure Edge Zones support this trend, reducing costs and improving performance.
- Sustainability: Cloud providers are investing in green data centers using renewable energy. IT giants like Google and Microsoft have announced carbon neutrality goals.
- Low-Code/No-Code Platforms: Cloud services like Microsoft Power Apps or OutSystems simplify app development for non-technical users, accelerating digital transformation.
- Containerization and Kubernetes: Containers (Docker) and orchestration (Kubernetes) remain the standard for deploying applications. Managed cloud services like Amazon EKS or Google GKE make their use significantly easier.
- Data and Analytics Focus: Cloud data warehouses such as Snowflake and Google BigQuery, along with analytics platforms, are becoming foundational for working with big data. Integration with AI enables automation of insights.
Overall, the industry is moving toward greater automation and abstraction of infrastructure, allowing developers to focus on business logic. We also see the rise of AI integration and edge computing for real-time operations and personalization, along with tighter regulations (e.g., GDPR and CCPA).
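As a concrete illustration of the serverless item above, here is a minimal AWS Lambda handler in Java. It assumes the standard `aws-lambda-java-core` library; the class name and the event payload shape are made up for the example.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import java.util.Map;

// Minimal serverless function: the platform runs this handler on demand and
// scales it automatically, so there are no servers to provision or patch.
public class GreetingHandler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> event, Context context) {
        // The event payload comes from the trigger (API Gateway, queue, etc.).
        String name = event.getOrDefault("name", "world");
        context.getLogger().log("Handling request for " + name);
        return "Hello, " + name + "!";
    }
}
```

Azure Functions and Google Cloud Functions express the same idea through their own handler interfaces: the developer supplies the function body, while provisioning, scaling, and per-invocation billing are handled by the platform.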
— You've mentioned the importance of soft skills several times. What principles help you effectively nurture talent and build communication on projects?
— The first and most important principle is to always remember that every mentor or manager was once an intern, and act accordingly. That means no sarcasm, no belittling comments. The second is to encourage questions. I always say there's no such thing as a stupid question. It's better to ask and clarify than stay silent. And the third principle is to provide honest, constructive feedback that helps people grow.
Beyond that, there are other practical tips. For example, it's important to support colleagues' curiosity—like exploring different frameworks and libraries within Java. This kind of exploration can help improve project processes in the long run. I also try to foster independence in the team: junior staff and interns should come up with solutions and implement them first, and only then ask for feedback. This speeds up the work and makes it more effective.
Another related principle is encouraging initiative. Team members shouldn't be afraid to take on difficult—but doable—tasks, provided deadlines aren't too tight. That's how I got ahead myself, by working on challenging initiatives like release management.
Ultimately, you should develop people in a way that they can fully replace you—or even surpass you. To do that, they need to have their own well-reasoned opinions on key matters and take personal responsibility for the decisions they make.
— How do you personally stay sharp and keep your skills relevant in such a fast-changing field?
— My motto is to learn something new every day. I regularly read articles and books that are directly or indirectly related to my field, and I work on both my hard and soft skills. Right now, for example, I'm revisiting my university linear algebra course—because AI is the defining trend today, for better or worse. And at its core lie linear algebra and statistics.
So to stay relevant, you can't limit yourself to just the tasks you're given at work. You have to keep pushing your boundaries every single day.
— You mentioned AI as the leading trend, "for better or worse." In what situations can the use of neural networks lead to questionable outcomes?
— First, let me clarify: when I talk about AI, I mean Large Language Models (LLMs)—statistical models that don't understand reality, don't build internal representations of the world, and don't think like humans. LLMs can make mistakes and miss context. They perform very well on narrow tasks they've been trained on, but they're not universally applicable.
That's where the main risk lies. Problems arise when developers over-rely on AI-generated solutions and stop paying close attention to what the output actually is. Everything might seem to work—but there could be a bug or vulnerability underneath. There's even a joke in the English-speaking developer community about "vibe coding"—when someone asks an LLM to generate code, implements it, and ends up with functionality riddled with security holes.
This approach leads to technical debt, lower product quality, and, most importantly, atrophy of critical thinking. Instead of being architects of solutions, developers become mere relayers of neural network suggestions. But an LLM is not a replacement for thinking. Using AI is great. Relying on it blindly—isn't.
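A classic illustration of the kind of hole that blind reliance on generated code can leave behind is SQL built by string concatenation. The sketch below uses a made-up `users` table and contrasts the vulnerable pattern with the parameterized query a careful reviewer would insist on.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class UserLookup {

    // Vulnerable pattern: user input is concatenated into the SQL string,
    // so an input like  ' OR '1'='1  matches every row (SQL injection).
    static boolean userExistsUnsafe(Connection conn, String name) throws SQLException {
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                     "SELECT 1 FROM users WHERE name = '" + name + "'")) {
            return rs.next();
        }
    }

    // Safer pattern: a parameterized query keeps the input as data, not SQL.
    static boolean userExistsSafe(Connection conn, String name) throws SQLException {
        try (PreparedStatement stmt =
                     conn.prepareStatement("SELECT 1 FROM users WHERE name = ?")) {
            stmt.setString(1, name);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```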