
A Second Look at AI for Finance

4 min read

In earlier writing, I described AI in finance as settling into three layers: research, diligence, and execution. I still find that framework helpful, but it is increasingly clear that it is more of a starting point than a lasting separation. Companies are moving fast across layers, and what matters more than where a product starts is how effectively it converges toward real workflows and real outputs. The pace of convergence has been faster than I expected, and it has forced me to think more carefully about where value actually accrues as models, products, and platforms all improve in parallel.

This past week, I invited Ramp Labs to come give a demo at my school, where I met Alex and Ben and saw Ramp Sheets in action. The product is impressive for how early it is, but it also sharpened a distinction I keep coming back to: Ramp Sheets is a standalone spreadsheet product rather than an Excel-native layer. Even if a new sheet can technically replicate much of Excel's functionality (which Ramp Sheets does), I think there is both a technical and a cultural bridge to cross. Excel is a shared language, with deeply ingrained habits around shortcuts, macros, templates, and collaboration. That culture matters. Products like Ramp Sheets or Nummo may continue to improve, but adoption in banking environments feels at least as cultural as it is technical.

At the same time, Ramp's distribution advantage is real. The ability to sell new products into an existing customer base is powerful, and Ramp is unusually well positioned to do that. What I am less certain about is how much overlap there is today between Ramp's core customer base and bulge-bracket or elite-boutique investment banks, as opposed to smaller companies, commercial banks, or non-IB finance teams. That uncertainty matters when evaluating how large Ramp Sheets can realistically become in execution-heavy workflows. Distribution helps, but only if it reaches users who live inside these environments every day. Nonetheless, competitors seem to be targeting non-IB teams just as much.

Rogo is also moving faster than I initially expected. In conversations with banking friends, it comes up not just as a research tool, but as something that handles execution well and produces actual outputs. I would not have anticipated this in the summer (at the time, I heard there were a lot of issues). That blurs the clean three-layer framework even further. If research-first tools can reliably support downstream work, they start competing for the same budget and mindshare as execution-focused tools. I want to spend more time understanding how far Rogo is pushing in this direction and how consistent the quality feels in practice. I do not yet have a strong opinion on Hebbia, and I think I need more exposure there before drawing conclusions.

What has most complicated my thinking recently is seeing how strong foundation-model integrations are becoming. Claude for Financial Services, especially its Excel integration, looks genuinely impressive. Cell-level references, scenario testing, formula debugging, and model generation inside Excel are exactly the kinds of tasks analysts care about. This raises the bar meaningfully. Foundation model providers are no longer staying abstract; they are moving directly into the workflow. The question, though, is not whether models like Claude can assist with execution, but whether they will fully own it. Helping an analyst reason inside a spreadsheet is different from learning firm-specific templates, enforcing formatting conventions, managing versioning across decks, or taking responsibility for end-to-end outputs. That gap feels less about intelligence and more about product scope, accountability, and focus.

Farsight still stands out to me for how tightly it integrates into Excel and PowerPoint and how directly it focuses on the actual deliverables analysts produce. Even so, I am less convinced than before that being execution-native alone guarantees long-term advantage. As research tools push down, execution tools push out, and foundation models move closer to the surface, the starting point matters less than the ability to ship quickly, improve quality, expand usage, and become embedded both technically and culturally inside teams.

Where I am landing now is more open-ended than my earlier take. Execution still feels closest to the moment of value, but the market is increasingly defined by convergence rather than clean layers. The most interesting question is which tools become hard to remove, not just impressive to demo. That answer likely depends on a mix of native integration, workflow ownership, distribution, and iteration speed rather than any single advantage. If anyone reading this has perspectives on Hebbia or knows people there or at Rogo, I would love to talk and continue refining this view.
