
In the era of digital perfectionism, every second of a web page's loading time can cost a business customers and revenue, which makes web application performance a critical concern. But is it enough to rely solely on Lighthouse, a popular tool for measuring loading speed?
True performance depends on many factors that go beyond synthetic tests. Below, we will explore what exactly a solution architect should consider to ensure web applications run quickly, stably, and, most importantly, provide value to users.
Lighthouse Is Not a Panacea
Lighthouse is undoubtedly an excellent tool. It provides valuable Core Web Vitals (CWV) metrics, rendering metrics, and accessibility metrics, which are important for SEO and user experience. However, it is important to understand that Lighthouse is merely a synthetic test. It operates in controlled laboratory conditions, simulating the application's performance. In the real world, things are much more complex.
Lighthouse does not take into account all aspects of the application lifecycle, real user experience, or the diversity of devices, networks, and locations. It cannot predict application behavior under challenging conditions, such as a "cold start" on a slow connection or peak load. Therefore, relying solely on Lighthouse results carries risks.
Real Users Versus Laboratory Conditions
To truly assess how your product performs, you cannot rely solely on ideal laboratory conditions. The good news is that this gap between lab results and actual performance is fairly easy to close. All it takes is implementing a real user monitoring (RUM) system.
RUM records metrics in real-world conditions, taking into account the full range of factors each user encounters: from device type and internet connection speed to specific geolocation. If we only rely on tests like Lighthouse, which run under ideal local conditions and utilize caching, we risk getting an overly optimistic picture.
For example, we may overlook how a user's distance from our CDN node affects response speed. The farther the client is, the longer they wait for a response. RUM allows us to see how users experience these less-than-ideal conditions—with slow mobile internet or in remote regions.
With RUM, an architect can see how real users interact with the application, which allows identifying and addressing issues that are not visible in synthetic testing.
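For illustration, a minimal client-side RUM sketch might look like the following, assuming the open-source web-vitals library and a hypothetical /rum collection endpoint on your own backend:

```typescript
// rum.ts — minimal client-side RUM sketch.
// Assumes the `web-vitals` npm package and a hypothetical /rum endpoint.
import { onLCP, onCLS, onINP, onTTFB, type Metric } from 'web-vitals';

function report(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,     // e.g. "LCP", "TTFB"
    value: metric.value,   // milliseconds (unitless for CLS)
    rating: metric.rating, // "good" | "needs-improvement" | "poor"
    url: location.pathname,
    connection: (navigator as any).connection?.effectiveType, // e.g. "4g", non-standard API
  });
  // sendBeacon survives page unloads; fall back to fetch with keepalive.
  if (!navigator.sendBeacon('/rum', body)) {
    fetch('/rum', { method: 'POST', body, keepalive: true });
  }
}

onLCP(report);
onCLS(report);
onINP(report);
onTTFB(report);
```

Collected this way, the metrics reflect real devices, networks, and geographies rather than a single lab profile.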
Backend and TTFB
In the world of web development, three main metrics shape the user experience: CLS (Cumulative Layout Shift), TTFB (Time to First Byte), and LCP (Largest Contentful Paint). Among these, TTFB is often underestimated, even though optimizing it is critically important.
A high TTFB value means that the server needs a significant amount of time to process a request. This is often related to the actual load on the servers: when many users access the application simultaneously, request processing slows down. This is why load testing should be an integral part of development. It allows you to determine how the application behaves under high load and how this affects TTFB.
For example, if your service cannot handle simultaneous user traffic, each request will take longer to process, which will increase the waiting time for everyone.
How is this related to Lighthouse? Tools like Lighthouse, which analyze client-side performance, are great at showing how quickly the browser renders a page. However, they do not always provide a complete picture related to the server side. Even if your frontend is perfectly optimized, a high TTFB caused by backend issues will negate all efforts.
For example, Lighthouse may show excellent results in rendering speed, but if a real user has to wait too long for data from the server, it will still negatively affect the user experience. Therefore, a comprehensive evaluation of performance requires thorough testing, including both frontend analysis and backend load testing, to identify and eliminate "bottlenecks" at all levels.
Although Lighthouse may show excellent scores (90+ points), backend issues such as slow TTFB or poorly optimized APIs can significantly reduce overall performance.
An architect should keep in mind that server-side performance is often overlooked, even though it directly affects the user experience.
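One practical way to make the backend's contribution to TTFB visible is the standard Server-Timing response header, which browser DevTools and many RUM tools can surface. A rough sketch, assuming an Express backend and a hypothetical data-access function:

```typescript
// server-timing.ts — Express sketch that attributes TTFB to backend work.
// The route, data-access function, and timing breakdown are illustrative.
import express from 'express';

// Hypothetical data-access call standing in for a real database query.
async function loadProductsFromDb(): Promise<unknown[]> {
  return [];
}

const app = express();

app.get('/api/products', async (_req, res) => {
  const start = Date.now();
  const products = await loadProductsFromDb();
  const dbMs = Date.now() - start;

  // Server-Timing is a standard response header: DevTools and RUM tooling can show it,
  // making it clear whether a slow TTFB comes from the database, the app, or the network.
  res.setHeader('Server-Timing', `db;dur=${dbMs};desc="product query"`);
  res.json(products);
});

app.listen(3000);
```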
Hydration and JavaScript Cost
Modern web applications often use JavaScript to create an interactive user interface. However, a large amount of JavaScript code can significantly slow down the loading and performance of the application. Hydration—the process by which JavaScript "revives" static HTML, making it interactive—is a particular challenge.
Among the problems with hydration:
- Next.js, React, and other modern frameworks often suffer from long hydration times. The browser spends a lot of time loading, parsing, and executing JavaScript code before the user sees anything interactive.
- A large volume of JavaScript code can block the browser's main thread, causing delays in responding to user actions. Users experience this as "lag"—stutters and freezes.
To ensure high performance of web applications, the architect must carefully monitor the cost of JavaScript and strive to minimize it. The less JavaScript code the browser has to load and process, the faster the page loads and the more responsive it feels. Achieving this requires deliberately planning and implementing strategies that reduce the amount of code being loaded.
One of the main approaches is to split the code into small, logically related fragments and load them lazily (code splitting combined with lazy loading). This way, JavaScript is not loaded all at once when the page is opened, but only when it is actually needed, which speeds up the initial load and improves the page's interactivity.
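As a sketch of this idea in React (the component name and loading trigger are illustrative, not prescribed by any particular framework):

```tsx
// Code splitting with lazy loading: the chart bundle is fetched only when
// the user actually opens the analytics view, not on the initial page load.
import { lazy, Suspense, useState } from 'react';

// React.lazy works with any dynamic import; the path here is illustrative.
const HeavyChart = lazy(() => import('./HeavyChart'));

export function Dashboard() {
  const [showChart, setShowChart] = useState(false);
  return (
    <div>
      <button onClick={() => setShowChart(true)}>Show analytics</button>
      {showChart && (
        <Suspense fallback={<p>Loading chart…</p>}>
          <HeavyChart />
        </Suspense>
      )}
    </div>
  );
}
```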
Another important tool is removing unused code, or tree-shaking. This method allows you to get rid of code fragments that are not used on a specific page or in the application as a whole. With tree-shaking, it is possible to significantly reduce the amount of JavaScript code being loaded, which positively impacts performance.
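Tree-shaking works best when code uses ES modules and named imports, so the bundler can prove which exports are unused. A small illustration, with the library chosen purely as an example:

```typescript
// Named ESM imports let the bundler keep only what is actually referenced.
// Good: only `debounce` ends up in the bundle.
import { debounce } from 'lodash-es';

// Problematic: importing the whole namespace (or a CommonJS build) often
// defeats tree-shaking and pulls in the entire library.
// import _ from 'lodash';

export const onResize = debounce(() => {
  console.log('layout recalculated');
}, 200);
```

Marking a package as side-effect-free (for example, `"sideEffects": false` in package.json for webpack-based builds) can further help the bundler drop unused modules, provided the code really has no import-time side effects.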
Optimizing images and media files plays an important role in reducing the overall amount of data being loaded. Switching to modern formats such as WebP allows for a decrease in file size without loss of quality. This directly impacts page load speed, especially on devices with slow internet connections.
Finally, using server-side rendering (SSR) or static site generation (SSG), where pages are prepared on the server or at build time, significantly reduces the load on the client's browser, making web applications faster and more responsive.
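As a rough sketch of static generation, here is what a statically generated page might look like in Next.js with the Pages Router; the endpoint and revalidation interval are purely illustrative:

```tsx
// pages/articles.tsx — static site generation: the page is rendered at build time
// (and optionally re-generated in the background), so the client receives ready HTML.
import type { GetStaticProps } from 'next';

type Article = { id: string; title: string };

export const getStaticProps: GetStaticProps<{ articles: Article[] }> = async () => {
  const res = await fetch('https://example.com/api/articles'); // illustrative endpoint
  const articles: Article[] = await res.json();
  return { props: { articles }, revalidate: 60 }; // refresh at most once a minute
};

export default function Articles({ articles }: { articles: Article[] }) {
  return (
    <ul>
      {articles.map((a) => (
        <li key={a.id}>{a.title}</li>
      ))}
    </ul>
  );
}
```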
Edge and Caching
Modern CDNs (content delivery networks) and edge computing (Cloudflare Workers, CloudFront) provide powerful tools for performance optimization. However, Lighthouse largely ignores these network-level optimizations.
A proper caching strategy can speed up an application more than front-end optimization. To achieve maximum performance for web applications, an architect needs to approach optimization comprehensively, taking the following aspects into account:
- using a CDN (content delivery network) to host static content, ensuring its fast delivery to users;
- applying edge computing to perform logical operations at the network edge, for example, for dynamic rendering or A/B testing;
- developing the right caching strategy.
The latter includes HTTP caching with properly configured response headers, CDN caching with well-chosen TTLs and cache keys, as well as cache invalidation mechanisms for promptly purging stale entries when content is updated.
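The HTTP caching part of such a strategy might look roughly like this, sketched as Express-style middleware with illustrative paths and TTLs:

```typescript
// caching.ts — illustrative HTTP caching rules; exact TTLs depend on the product.
import express from 'express';

const app = express();

// Fingerprinted static assets (e.g. app.3f9a2c.js) can be cached "forever":
// a new deploy produces a new file name, so stale copies are never served.
app.use('/assets', express.static('dist/assets', {
  immutable: true,
  maxAge: '1y',
}));

// HTML should stay fresh: let the CDN cache it briefly and revalidate in the background.
app.get('*', (_req, res) => {
  res.setHeader(
    'Cache-Control',
    'public, max-age=0, s-maxage=60, stale-while-revalidate=300',
  );
  res.sendFile('dist/index.html', { root: process.cwd() });
});

app.listen(3000);
```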
Images and Media
Lighthouse is a valuable tool for performance analysis, but its capabilities are limited when it comes to images and media files. It is not always able to detect nuances that can significantly affect loading speed and user experience.
Lighthouse does well in checking for the presence of lazy-loading and the use of modern formats, such as WebP. However, it cannot always assess the effectiveness of responsive images.
The key point here is the use of the <picture> tag and the srcset attribute, which allow the browser to independently select the optimal format and file size based on the screen resolution and the user's internet connection speed. Lighthouse cannot always accurately assess how well this responsiveness is implemented.
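In JSX/TSX markup the same idea looks roughly like this; the file names, breakpoints, and formats are illustrative:

```tsx
// Responsive image: the browser picks the best format and size on its own.
export function HeroImage() {
  return (
    <picture>
      {/* Modern formats first; the browser falls back if it cannot decode them. */}
      <source type="image/avif" srcSet="/img/hero-800.avif 800w, /img/hero-1600.avif 1600w" />
      <source type="image/webp" srcSet="/img/hero-800.webp 800w, /img/hero-1600.webp 1600w" />
      <img
        src="/img/hero-1600.jpg"
        srcSet="/img/hero-800.jpg 800w, /img/hero-1600.jpg 1600w"
        sizes="(max-width: 800px) 100vw, 800px"
        alt="Product hero"
        loading="lazy"
        width={1600}
        height={900}
      />
    </picture>
  );
}
```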
Moreover, Lighthouse does not directly evaluate visual UX improvements related to the loading process. Techniques such as displaying placeholders (e.g., blurred images) or skeleton loaders make waiting for content more pleasant for the user, but these aspects remain outside the direct scope of Lighthouse assessments.
Although the tool checks the use of formats like WebP, real optimization requires fine-tuning the balance between compression level and image quality, which goes beyond a simple check. The same applies to video optimization: adaptive bitrates and preloading are important components that Lighthouse may not fully cover.
An architect should design a media pipeline that includes automatic image optimization, creation of adaptive versions, and proper caching. This approach not only speeds up loading but also reduces server load.
External Dependencies
Third-party scripts are scripts loaded from other domains. These include analytics (Google Analytics, Yandex.Metrica), social media pixels, A/B testing services, widgets, and advertising networks. Lighthouse can assess their impact only superficially, but in reality they often add significant extra code to the page and directly affect LCP (Largest Contentful Paint) and INP (Interaction to Next Paint).
To minimize the negative impact of third-party scripts on web application performance, architects should implement a number of strategies. First and foremost, this involves asynchronous or deferred loading of scripts (using the async and defer attributes) so that they do not block page rendering.
Delayed loading of scripts until the moment they are actually needed is also effective, for example, for a chat widget that loads only when the page is scrolled. It is important to reduce the number of scripts used, giving preference only to the most necessary services. In some cases, it is advisable to host third-party scripts on your own server (self-hosting), which allows for better control over the loading and caching process.
Finally, using browser hints, such as preload and preconnect, allows the browser to load resources or establish connections in advance.
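A sketch combining several of these techniques: preconnecting to the third-party origin early, but injecting the widget script only on the first sign of user intent (the origin, script URL, and trigger events are illustrative):

```typescript
// third-party.ts — load a chat widget only when the user first scrolls or clicks.
const WIDGET_ORIGIN = 'https://widget.example.com';

// Hint: establish DNS/TLS to the third-party origin early, without downloading anything yet.
const hint = document.createElement('link');
hint.rel = 'preconnect';
hint.href = WIDGET_ORIGIN;
document.head.appendChild(hint);

let loaded = false;
function loadWidget(): void {
  if (loaded) return;
  loaded = true;
  const script = document.createElement('script');
  script.src = `${WIDGET_ORIGIN}/chat.js`;
  script.async = true; // never block parsing or rendering
  document.body.appendChild(script);
}

// Defer the actual download until the user shows intent.
window.addEventListener('scroll', loadWidget, { once: true, passive: true });
window.addEventListener('pointerdown', loadWidget, { once: true });
```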
Load and Resilience
Lighthouse is a tool for assessing performance in laboratory conditions. It is not intended for load testing or evaluating an application's resilience to stress.
At the same time, load testing allows you to assess application performance under conditions close to real workloads, identify bottlenecks, scalability issues, and optimize the architecture to handle peak loads.
To ensure the reliability and high performance of web applications under load, an architect needs to apply an approach that includes testing and implementing special mechanisms.
For load testing, there are tools such as k6 and Gatling, which allow simulating the activity of multiple users. An important method for assessing resilience is chaos testing, where random failures are intentionally introduced into the system to evaluate its ability to handle unforeseen situations.
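For instance, a basic k6 scenario that ramps virtual users up against a hypothetical staging endpoint might look like this (the URL and thresholds are illustrative):

```typescript
// load-test.ts — k6 load scenario; k6 scripts are ES modules run by the k6 CLI.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '1m', target: 100 }, // ramp up to 100 virtual users
    { duration: '3m', target: 100 }, // hold the plateau
    { duration: '1m', target: 0 },   // ramp down
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests must finish under 500 ms
    http_req_failed: ['rate<0.01'],   // less than 1% errors
  },
};

export default function () {
  const res = http.get('https://staging.example.com/api/products');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
}
```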
Attention should also be paid to concurrency, checking how the application handles simultaneous requests. To increase reliability, retries can be applied: mechanisms for repeated attempts in case of transient failures. Proper invalidation of the CDN cache (purging) when content is updated is also extremely important so that users always see up-to-date information. Finally, it is necessary to assess performance degradation under load to understand how the application behaves when individual services, such as the database or network, are overloaded.
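As an illustration of the retry mechanism mentioned above, here is a small fetch wrapper with bounded attempts, exponential backoff, and jitter; the limits are illustrative, and only transient failures are retried:

```typescript
// retry.ts — fetch with bounded retries, exponential backoff, and jitter.
// Retrying 4xx client errors usually just adds load, so they are returned as-is.
async function fetchWithRetry(
  url: string,
  init: RequestInit = {},
  maxAttempts = 3,
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = await fetch(url, init);
      if (res.ok || (res.status >= 400 && res.status < 500)) return res;
      lastError = new Error(`HTTP ${res.status}`);
    } catch (err) {
      lastError = err; // network failure, timeout, etc.
    }
    if (attempt < maxAttempts) {
      const backoffMs = 2 ** attempt * 100 + Math.random() * 100; // exponential + jitter
      await new Promise((resolve) => setTimeout(resolve, backoffMs));
    }
  }
  throw lastError;
}

// Usage: fetchWithRetry('/api/profile').then((res) => res.json());
```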
Architectural Dimension
Performance is, first and foremost, an architectural decision. The high performance of a web application is determined by global architectural choices: the rendering strategy (SSR, SSG, or client-side rendering), optimized API interactions (whether or not a BFF, backend for frontend, is used), minimal external dependencies, multi-level caching, and strict control of performance budgets (LCP, TTFB, TTI).
All of this forms the foundation of responsiveness. Equally important is observability: implementing monitoring, logging, and tracing tools allows tracking application performance and promptly identifying issues.
Clearly defining SLA/SLO (Service Level Agreements/Service Level Objectives), which specify the expected performance and reliability of the system, ensures users experience stable and predictable operation.
The Role of a Solution Architect
The task of a solution architect is to set direction and oversee execution. It is important to understand that performance is an explicit responsibility, not an afterthought. A solution architect plays a crucial role in ensuring the high performance of web applications at all stages of their lifecycle.
At the design stage and at architectural gates, they must establish clear performance requirements and define service level agreements (SLAs) for performance that specify the concrete metrics to be achieved.
An important task for an architect is to establish performance metrics and budgets, such as LCP ≤2.5 s, TTFB ≤200 ms, payload ≤180 KB, and to monitor their adherence. Additionally, the architect should integrate monitoring and alerting systems into CI/CD (continuous integration/continuous deployment) and operational processes to respond promptly to issues. Special attention should be given to monitoring metric performance based on percentiles (e.g., 95th or 99th), analyzing data to identify and address problems affecting the least satisfied users.
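A percentile is simply a cut-off in the sorted distribution of collected samples, so the worst experiences are not hidden behind averages; a tiny helper for computing it from RUM data might look like this:

```typescript
// percentiles.ts — compute a percentile from collected RUM samples (e.g. LCP in ms).
function percentile(samples: number[], p: number): number {
  if (samples.length === 0) throw new Error('no samples');
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank method: the value below which p% of samples fall.
  const index = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[index];
}

const lcpSamples = [1800, 2100, 2300, 2600, 4200, 9800]; // illustrative values, ms
console.log('p95 LCP:', percentile(lcpSamples, 95)); // dominated by the slowest users
```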
Summary
Lighthouse is a useful tool, but it cannot cover all aspects of web application performance. A solution architect must look beyond Lighthouse, taking into account real user experience (RUM) and optimizing the backend, JavaScript, the network layer, images, and third-party scripts.
It is important to remember that performance is an architectural decision that requires a comprehensive approach and continuous monitoring. A well-designed architecture that considers all aspects of performance is the key to a fast and stable application.