One of the first things an analyst does when receiving a portfolio company's quarterly report is verify that the numbers are structured correctly.
They might also check whether the company's fiscal year aligns with those of the other companies in the fund, whether revenue is reported on a gross or net basis, and whether the currency needs converting.
Only after that do they start rebuilding the file so it looks like everything else.
At a firm managing 60 or 70 portfolio companies, this happens 60 or 70 different ways, every single cycle.
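The checks themselves are mechanical enough to sketch. The snippet below is a minimal illustration of that pre-analysis pass; every field name, exchange rate, and fund convention is an assumption made for the example rather than any real firm's schema.

```python
# Illustrative fund conventions; values and field names are assumptions.
FUND_FISCAL_YEAR_END = (12, 31)                           # month, day
FX_RATES_TO_USD = {"USD": 1.0, "EUR": 1.08, "GBP": 1.27}  # placeholder rates

def normalize_report(report: dict) -> dict:
    """Run the pre-analysis checks: fiscal-year alignment, gross vs. net
    revenue, and currency conversion. `report` uses a hypothetical schema."""
    issues = []

    # 1. Does the company's fiscal year line up with the rest of the fund?
    fy_end = (report["fy_end_month"], report["fy_end_day"])
    if fy_end != FUND_FISCAL_YEAR_END:
        issues.append(f"fiscal year ends {fy_end}, fund standard is {FUND_FISCAL_YEAR_END}")

    # 2. Is revenue reported gross or net? Normalize to net where possible.
    revenue = report["revenue"]
    if report.get("revenue_basis") == "gross":
        revenue -= report.get("pass_through_costs", 0.0)
        issues.append("revenue reported gross; converted to net")

    # 3. Convert into the fund's reporting currency.
    revenue_usd = revenue * FX_RATES_TO_USD[report.get("currency", "USD")]

    return {"company": report["company"], "revenue_usd": revenue_usd, "issues": issues}

print(normalize_report({
    "company": "ExampleCo", "revenue": 12_500_000, "revenue_basis": "gross",
    "pass_through_costs": 1_200_000, "currency": "EUR",
    "fy_end_month": 6, "fy_end_day": 30,
}))
```

One company is a few lines of logic. Sixty or seventy companies, each with its own schema and its own quirks, is an entire reporting cycle.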
That process, the one that happens before analysis, before AI, before any of the tools the firm spent money on get involved, is where most of the time gets spent. It is also the process that almost none of the AI tools sold into this market over the past several years were designed to touch.
The industry bought a lot of software to help with what comes after the hard part. The hard part stayed hard.
Spending Went Up. The Work Didn't Get Easier.
Finance can change quickly when it decides to. In a single year, from 2024 to 2025, the share of finance professionals using generative AI jumped from around 7% to 44%. And the spending isn't finished: almost two-thirds of leading financial institutions expect to invest even more heavily in the technology going forward.
Much of that investment went into tools built for specific functions, and those tools did what they promised: drafting notes faster, finding files sooner, answering internal questions through a chatbot and saving time during research.
But none of these tools helped with the data preparation work.
As a result, the monthly reporting cycle stayed as long as it always was.
The institutional knowledge that senior professionals carried in their heads kept walking out the door with them.
Someone who has spent ten years studying a specific industry builds a mental bank of valuable knowledge: how to identify which metrics matter, how to analyze specific data points, and which management teams have a track record of sandbagging projections.
The knowledge, judgment, and experience they accumulate cannot be easily duplicated by any available database or spreadsheet.
Most AI tools have no mechanism for capturing that kind of knowledge. No prompt surfaces a decade of pattern recognition from someone who no longer works at the firm.
Many companies have incorporated AI, but the overall structure of their work remains the same, and they still face a bottleneck at the same phase.
Building a Fix Internally Is a Longer Road Than It Looks
Building a system that handles the entire pipeline in-house is the obvious solution, but it's not a simple build. It requires the right engineering talent, a solid data infrastructure, and a significant investment of time, and then there are the compliance and security requirements on top.
Realistically, building something like this from scratch could take years, which means anyone who started three years ago is likely still building. It's a long and difficult road to a system that can handle the whole pipeline.
Using off-the-shelf enterprise platforms that could theoretically handle the full scope isn't a perfect solution either. The time it takes to set them up can stretch into quarters.
Costs run high enough that demonstrating a return becomes its own project before anyone has seen the software do anything, and even after go-live, most deployments need months of calibration before they perform reliably.
Most firms ended up somewhere in the middle: having spent real money and committed real time, they were still running the same manual reconciliation process they were running before any of it started.
Getting the Infrastructure Right Before the AI
The platforms that do move the needle tend to have one thing in common: they were built around the flow of information across an organization, not around making individual tasks faster. It's similar to what Notion did for general productivity years ago. Notion didn't necessarily make any one thing dramatically faster, but it changed the way things worked together.
It changed where information lived and who could reach it, and the operational gains followed from that structural shift rather than from any individual feature. Financial services AI is arriving at the same realization, several years later, and with considerably higher stakes attached.
DeepAuto.AI built its platform around that premise. Instead of one AI handling every request, it uses many specialized agents that work together: they operate over an organization's data and share information with each other, so each one is aware of what the others are seeing.
A shift in a portfolio company's revenue figures doesn't stay contained inside the reporting workflow. It moves through the system and gets mapped against whatever else that shift might be relevant to, such as risk exposure, covenant thresholds, or comparative performance across the portfolio. The agents share context in a way that most enterprise AI tools, built to operate independently, simply can't.
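DeepAuto.AI hasn't published its internals, so the sketch below is only a rough illustration of the pattern described here, not the company's implementation: specialized agents posting observations to a shared context so that what one agent sees becomes visible to the others. Every class, field, and threshold is an assumption made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class SharedContext:
    """A simple blackboard the agents read from and write to (illustrative)."""
    facts: list = field(default_factory=list)

    def post(self, source: str, fact: dict):
        self.facts.append({"source": source, **fact})

    def query(self, **filters):
        return [f for f in self.facts
                if all(f.get(k) == v for k, v in filters.items())]

class ReportingAgent:
    """Ingests a portfolio company's figures and publishes any revisions."""
    def observe(self, ctx: SharedContext, company: str, revenue: float, prior: float):
        change = (revenue - prior) / prior
        ctx.post("reporting", {"type": "revenue_revision",
                               "company": company, "change": change})

class CovenantAgent:
    """Re-checks covenant headroom whenever a relevant revision appears."""
    THRESHOLD = -0.10  # hypothetical: flag revenue drops of more than 10%

    def react(self, ctx: SharedContext):
        for fact in ctx.query(type="revenue_revision"):
            if fact["change"] < self.THRESHOLD:
                ctx.post("covenants", {"type": "covenant_review",
                                       "company": fact["company"],
                                       "reason": f"revenue down {fact['change']:.0%}"})

ctx = SharedContext()
ReportingAgent().observe(ctx, "ExampleCo", revenue=9_000_000, prior=12_000_000)
CovenantAgent().react(ctx)
print(ctx.query(type="covenant_review"))
```

The design choice that matters is the shared context. With a collection of independent point tools, the reporting step would flag the revision and nothing downstream would hear about it.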
This design also tackles the institutional-knowledge problem directly. As the platform processes data over time, it develops a deeper understanding of the organization's information environment, layer by layer.
Patterns that were previously stored only in a senior analyst's memory start to get captured by how the system routes and prioritizes information.
In this way, the system preserves knowledge and insight that would otherwise disappear, and the organization's information environment stays intact and keeps evolving even as people come and go.
Agents That Share Context, Documents That Actually Get Read
The platform's document processing handles complex file formats that other tools struggle with: lengthy regulatory documents where the key figure is hidden in a footnote, or portfolio reports that contain embedded tables.
The platform uses multiple specialized vision models to work through that content, pulls out what's usable, and maps it against the broader data environment the system has already built up.
It also learns as it goes. A report format it has seen before gets handled more accurately than it was the first time, and the edge cases that once needed a person to step in come up less often as the system learns from the firm's data. The reconciliation work that used to take analysts days is now largely done before anyone opens a spreadsheet.
Every output the platform generates traces back to its source document. For firms operating under regulatory oversight, auditability matters as much as the output itself.
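The exact data model isn't public either, but as a rough sketch of what "traces back to its source" can mean in practice, each extracted figure might carry a pointer to the document, page, and location it was read from, so an auditor can walk any output number back to the filing behind it. All names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Provenance:
    """Where an extracted value was read from (hypothetical structure)."""
    document: str      # e.g. the original filing or portfolio report
    page: int
    location: str      # table cell, footnote number, etc.

@dataclass(frozen=True)
class ExtractedValue:
    name: str
    value: float
    provenance: Provenance

def audit_trail(values: list[ExtractedValue]) -> list[str]:
    """Render a human-readable trail from output figures back to sources."""
    return [f"{v.name} = {v.value:,.0f}  <-  {v.provenance.document}, "
            f"p.{v.provenance.page}, {v.provenance.location}" for v in values]

net_revenue = ExtractedValue(
    name="net_revenue_q3",
    value=11_300_000,
    provenance=Provenance(document="ExampleCo_Q3_report.pdf", page=47,
                          location="footnote 12"),
)
print("\n".join(audit_trail([net_revenue])))
```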
The entire stack is built on open-source technology, so it can run either on a firm's own servers or in the cloud, and the client's data stays within their own environment either way.
Fixing the Data Problem
Data rarely arrives ready for work. Before any analysis can happen, someone has to work through conflicting formats, missing values, and numbers that don't line up, and that prep phase routinely drags on longer than anyone plans for.
For years, analysts reformatted the same spreadsheets every month because the alternative didn't exist yet; nobody thought it was a good use of their time. For a growing number of firms, the alternative exists now.