Apr 15, 2026

Duration

/

42:30

Morten Bruun

Member of Technical Staff - Product

Ethan Lee

Director of Product

AI in Finance: Solving the "last mile" problem

Ethan Lee (00:01.632)

All right. Hi everyone. Welcome back to Future Proof. Today I'm sitting down with Morten Bruun, who is a member of the product team at Hebbia, which is an AI platform for finance. Welcome to the podcast.

Morten Bruun (00:14.648)

Thanks so much. Glad to be here.

Ethan Lee (00:16.982)

Awesome. So I was hoping that you could start by telling us a little bit about your journey into Hebbia. You originally co-founded a company called Flash Docs, which ended up getting acquired by Hebbia. Tell us a little bit about your journey and what you're up to now.

Morten Bruun (00:29.966)

Yeah, 100%. So I've spent most of my career in tech: started at Google, then went to McKinsey after business school, joined a startup, and then ultimately started Flash Docs a little more than two years ago. And we were really trying to solve this problem of automating PowerPoints. At the time, LLMs were not good enough at writing XML, which is sort of the building block of PowerPoint from a coding perspective. So we built a technology that allowed them to do that, or allowed us to build PowerPoints using AI.

We started building a bit of an application-layer product and found some success with that. But then, especially as AI agents really started taking off, we started getting a lot of demand from other companies that wanted to build slide generation capabilities into their product. So if you're building AI for finance like Hebbia, or AI for consulting, or AI for sales, or, you know, anyone who's building PowerPoints as part of their work, we kind of became an interesting API. So we ended up with a number of exciting clients across the AI spectrum, and

some of the largest tech companies in the world were using our API as well. So we kind of went from being a bit of a smaller add-in in PowerPoint to being the number one tool for automating PowerPoints. Yeah, then Hebbia was one of our clients. We really liked working with them a lot. So when George, the CEO, reached out and asked if we would consider joining forces, it was a bit of a no-brainer.

Ethan Lee (01:49.625)

That's awesome. And have you folded sort of the functionality of the original slide generation into Hebbia? Or tell us a little bit about what you're working on now.

Morten Bruun (01:58.604)

Yeah, that was really the first thing that we did. So Hebbia was a client, so they had already started using the API, but of course there was much more that we could layer into that. So that was really the first part of the work, sort of layering all of our functionality into Hebbia. Then, as probably anyone who's working in AI is experiencing, the bar keeps pushing. So the work wasn't done just when we had integrated Flash Docs into Hebbia, so we're still continuing to push the boundaries on everything touching PowerPoint generation and so on.

Ethan Lee (02:31.82)

Yeah, that makes sense. And it's cool to start with that alongside a former customer who was already really familiar with everything that you did and understood immediately why it was so valuable. I want to kind of shift over to the Hebbia platform itself and talk a little bit more about where these finance teams and investment banks are really finding value in your platform. Of course, many companies in the space are looking at, how do we get LLMs to look at a bunch of documents and generate some insights from them?

Tell us a little bit about how Hebbia has found a niche within that space and what the core value is that users are getting out of the platform. What's the secret sauce, if you will, for those teams that are using it?

Morten Bruun (03:02.019)

Hmm.

Morten Bruun (03:17.484)

Yeah, for sure. I think a lot of it comes down to the clients that we work with. So when you work in finance, being average is really meaningless. You always want to have an edge in the market. So you don't just care about being able to roughly analyze documents and broadly understand, you know,

a folder of files or whatever it might be; you're really, really interested in getting just a little bit ahead of the market all the time. So our users care a lot about how we're able to extract information across very, very large amounts of documents. So when we started, or when Hebbia started, the context window was maybe a couple of thousand tokens, and that has obviously grown into being millions. But already back then, a lot of what made us special was our ability to, like...

you know, push the limits on the amount of data that you can analyze. So as an investor, you know, when AI first started really gaining traction, you couldn't even analyze, you know, the earnings report from Apple. And then of course, as the context windows started increasing, you could do more and more analysis, but fundamentally that is still part of the problem that investors are dealing with. So once you started to be able to analyze Apple's earnings calls, you wanted to be able to analyze, you know,

their last two years of earnings calls, and you want to analyze all tech companies and their earnings calls over the last couple of years. And so you keep kind of pushing that forward. I think information retrieval, and being able to actually understand context and understand a large amount of content, continues to be something that gives a competitive edge to our clients. Then increasingly, we also see that it's not just about being able to extract the content, but also what you do with it. And that's also where we do a lot of,

I think "forward deployed" is maybe becoming a little bit overused, but we do a lot around process engineering, trying to understand in detail how our clients work. I think when you zoom out and you look at the 30,000-foot view of what an investor does, it's easy to say, we analyze some documents and we maybe create an Excel model and an IC memo. And when you look at it from that 30,000-foot view, that's really the sort of...

Morten Bruun (05:34.024)

the extent of it. But when you actually go deeper, all of those items break into quite specific processes that are not just specific to each single firm, but can be specific to each single team. So the private credit team at a client might work in a different way than the equities team, and so on. So being able to not just extract and retrieve information, but also wrap it around

a team-specific process, and keep making it more and more specific to how they work, and generate outputs that match how they work, is a lot of what gives our users, or our clients, an edge in the market.

Ethan Lee (06:12.62)

Yeah, that makes a lot of sense. And it makes sense what you said about how the difficulty is not necessarily in summarizing a document or just getting the very basics out of it. Of course, anybody can throw that into a basic chat product that really everybody has access to now, right? And it's kind of interesting that, instead, you're talking about how you can incorporate AI layers into some of the existing workflows that may be very complicated, depending on the organization that you're working with. I'm curious if you can...

Morten Bruun (06:40.302)

Mm.

Ethan Lee (06:41.848)

Tell us about maybe some of the use cases you're seeing, or where knowing the process in more detail really helps you tune the product, or tune the attention of where the LLM is spending time: what documents it chooses to bring into its context window and what it chooses to ignore. Does the consolidation of some of those use cases help you make it that much better than an off-the-shelf, say, Claude for Google Sheets or Claude for Google Drive?

Morten Bruun (07:02.638)

Mm.

Ethan Lee (07:10.55)

Yeah, is that what you're finding? And can you tell us a little bit about those use cases?

Morten Bruun (07:14.51)

Yeah, 100%. I can maybe give two different examples. So one on the retrieval side, and then one more on sort of, you know, the output side, touching on the processes there. So if you think about retrieval, right, you know, Claude, I think, can probably upload what, 20 documents or so right now, if you're on like a Max plan. That really doesn't get you that far when it comes to being an investor. Our clients, and we're privileged to work with some of the leading institutions within finance,

they look at really the mountain of data that they sit on and they're like, hey, we have terabytes of unstructured documents, IC memos, due diligence reports that have painstakingly been put together by some of the hardest-working, most qualified professionals in the field over, like, decades. And those documents include key insights on how we analyze businesses, on our point of view on certain companies, on specific industries,

but they are also this repository of institutionalized knowledge about, say, how do we approach analyzing the risks within a supply-chain-heavy business. And being able to extract those insights, and not just think about how can we make this one individual work more efficiently, but how can we actually use a really institutionalized body of knowledge to make the entire firm much, much better? And I think that's also where you see

these ideas of retrieval and sort of expanding the context window really come into play: when you're not just thinking about how can I analyze 20 documents, but how can I really make a tool that works across terabytes of data, and do that in a way where it's not just picking up on, you know, specific elements that you might be able to get with things like RAG, but where you actually want a much more holistic understanding of the context. You know, an investor looking at...

whether it's a good idea to invest in a specific company, and trying to analyze a virtual data room, for instance, they're not just interested in extracting the revenue numbers for certain years. They might want to do that. But they also want to understand, what's the entire context around that? What's the customer concentration? What has been the market pressure on industries that are related to this client? What might be drivers of margin compression? And how does that change over time?

Morten Bruun (09:39.245)

The complexity of the questions is just different, and when we think about just the retrieval aspects, the volume is very different as well. So I think that's a little bit on the retrieval side. When you then look at generating the output, that falls very close to my area when it comes to generating PowerPoints. Again, when you look at it, and even when we talk to our clients, we're like, so how do you generate, say, an IC memo or a CIM, like typical finance documents, even our clients might say, hey, you know,

they follow a certain template and they work more or less the same way. At the same time, just yesterday I was talking to a client who wanted to automate their CIMs, which are essential documents for them. And they were like, yeah, it kind of more or less looks the same all the time. And they sent me three different examples, and there was literally not a single slide that was the same. And of course, when you look at it, they do follow the same structure. Like, each CIM is not, you know, fundamentally a brand-new document.

But when we want to understand what then makes a good CIM, it's not just understanding, oh, it has, I don't know, 35 different pages, and pages one to five are generally about this, and so on. You actually have to understand what made the investor, when they were putting together this PowerPoint, choose this specific layout to communicate this specific point, and how did they phrase it, and so on and so forth. So in order for you to create something that's...

actually the output that they're looking for, you need to do more than just have a high-level, somewhat pixelated understanding of it. And our clients are just dealing in a space where even if you're 90% right, and you do a really good first pass or something, it's fundamentally still 100% wrong. So getting to just an 80 or 90% correct understanding of the outputs that they're looking for, that's actually not what our clients want. They want something that gets them

all the way to the 100%. So in so many ways, the last mile for us is the entire game. A lot of the general foundation models out there will do a pretty good job at getting you to that 90%. Our clients think that's great, but they also care, you know, a lot about getting that last mile in.

Ethan Lee (11:50.391)

Yeah. Yeah. That makes a lot of sense, what you said about that last mile actually being so much of where Hebbia can provide that value over, you know, stock foundation model behavior. I think the non-domain-specific parallel to that is, you know, a common thing that people will almost benchmark LLMs on is the quality of the static landing pages that an LLM can develop. And I think people have generally seen that

Morten Bruun (12:14.423)

Hmm.

Ethan Lee (12:17.186)

there's kind of an LLM-like, vibe-coded feeling to some of the landing pages that it can generate, which really is the reason why you would never push that out to production as your marketing site, because it just feels like it was generated by an AI. Like, it doesn't have that storytelling or any of the qualities that would really make a highly performing page. It just feels very generic in that way. So I can imagine that that's even more amplified when you take it to a specific domain where, like,

Morten Bruun (12:27.022)

Mm.

Morten Bruun (12:38.648)

Mm.

Ethan Lee (12:44.556)

yeah, every slide is going to be custom to tell a very specific narrative within the context of finance. And yeah, it's going to matter so much more there than just generating a marketing site. So it's super interesting to hear how even the output itself needs to have that kind of specificity, and that knowing the domain, having a lot of examples from customers, and working directly with them to understand how they want the output is where you can provide a lot of value.

Morten Bruun (13:13.346)

Yeah, and then fundamentally it also comes down to, what are you trying to achieve, right? And there are lots of businesses who, you know, have certain processes or things that they do where maybe it does not need to be, you know, the best, and there are lots of tools out there. Like, you know, if you want to be the best designer, you're using Midjourney, even though you can get pretty good design out of using, I don't know, Nano Banana or something similar, right? But if you really care fundamentally,

deeply about your craft, as you do when you're a builder or founder and you care about what a landing page looks like, it doesn't really get the job done to have an average-looking, you know, landing page, if that's the landing page that fundamentally drives, you know, the majority of your inbound leads. And I think the interesting thing for finance is that it's like that, but 10x, because, like, you know, the decisions that our clients make are literally driving, you know,

millions of dollars in value, or billions of dollars, maybe even more, right? So for our clients, it's really a matter of quite large financial outcomes that come down to your ability to generate something that's not just the average Excel model or the average slide deck or the average analysis of something. You really, all the time, want to do something that's different from the average. The average will just give you market returns, and that's...

That's really, like, then you're fired, right? That's not what an investor is put in the world to do. They're put into the world to generate outsized outcomes. And you can only do that if you're a little bit better than the market, or think a little bit differently about these things. So again, I think for our clients, as soon as, you know, GPT can do it, or as soon as Claude can do it, it kind of almost becomes a little bit worthless, because then

everyone can do it. And then it doesn't have that edge anymore. And then you constantly need to, you know, go chase the next thing. And I think a lot of us are probably feeling it these days that, you know, the best-in-class website that was the going standard six months ago, that game has sort of upped itself. For our clients, that's partly a matter of professional pride, and obviously of wanting to drive outcomes, because it's fundamentally a matter of, you know,

Morten Bruun (15:37.666)

many, many millions that are on the line every single day.

Ethan Lee (15:40.823)

Yeah, absolutely. So that kind of makes me think then, like, for each Hebbia customer, is the way that they use Hebbia very different? I imagine that when you're actually using the product, you're adding a lot of your own sort of unique spin on the analysis, or, like you said, maybe it's process. But I'm curious, when a customer is actually implementing Hebbia, what is sort of their level of configuration? Like, are

the stock tools kind of what they're using within your product, and how do they differentiate, I guess, even their usage of it? You know, let's say all financial firms are using Hebbia now; it kind of becomes the Claude or ChatGPT, so to speak, for that category. I'm curious, is it more like a canvas where people design, sort of, analyses or processes, and they can add their unique insight or configuration on top of it?

Morten Bruun (16:40.898)

Yeah, I like the way of framing it as a bit of a canvas. So when we implement with a client, we go quite deep in making sure that the way that we configure the platform, and the way that we onboard them and so on, gets specific to, you know, again, their process and the type of problems that they want to solve for.

Ethan Lee (16:41.208)

Thank you.

Morten Bruun (17:00.46)

So we have a quite large team of AI strategists, and they're all former finance professionals or legal professionals. They come from banking or PE and so on. And so they get paired up with clients within their domain. So they are actually able to say, okay, I have the expertise from working in that field myself, and now my job is to go and understand this specific firm and this specific team in detail, and try to understand how I can then shape this canvas into being...

ultimately a useful tool for this team. Now, when you look at the platform, there is obviously a number of different use cases, and it differs a lot from client to client how they use them. And you can of course use Hebbia as, like, your general go-to AI tool for questions and research and whatnot. But I think some of the very powerful use cases that we see are ones where we're able to really leverage the data that they already have. So we integrate with their SharePoint folders, and they can set up

projects, almost like a deal room or a project space, where they can add their documents and they can add instructions on how they ideally would want to work. That's something that we also capture and learn over time. You should generally feel that the tool is getting better as you continue to use it. And then what we're able to do, which I think is still quite unique, is we're basically able to extract insights over a very vast amount of data.

Coming back to this idea of terabytes of data, you know, I was working with a client yesterday who wanted to set up a space where they could analyze basically all the IC memos that they've done within that specific practice, and almost use it as a bit of a brain for how they should think about, you know, new deals that come along. So what you have is essentially, you know, a folder or a space with, in their case, like 1,500 documents.

And each of these documents is maybe on average, like, I don't know, 30 pages long. So you really have like 50,000 pages. And they wanted to drill that down into, I think, 60 different analyses, each of these analyses being very specific about what they wanted the output to look like: you know, how it should be set up, whether it's tables or, you know, Harvey balls, scoring it in different ways. Some of the analyses might lean on one another, so there's cross-referencing across all these different things.

Morten Bruun (19:24.59)

Maybe you also want to enrich it with, say, web data, or data from FactSet or PitchBook or any sort of finance-specific tool. And then ultimately you would want to see all these different analyses across 1,500 documents, across 30 pages in each single document, and then be able to tie that back and say, okay, the reason why we scored this, I don't know, a three out of five was because on page 17 in this document and on page 19 in this document,

it said X, Y, and Z. And that's kind of the reasoning that we do over it. Those are use cases that, you know, I think are just so hard to actually unlock, and it drives so much value for these teams, because again, they are, you know, truly professional knowledge workers. So their entire edge sits in that knowledge base, and in our ability to take that amount of data and structure it in a way that allows them to customize their analyses and make sure that the output and the analysis come out the exact way that they want.

And then they can chat over it and query over it. You know, that is something that's quite powerful. And you really only get there if you really dive in, understand the specific processes and how these teams are working, and what matters for them, and then build the tool, or shape that canvas, around that.

Ethan Lee (20:40.438)

Yeah, I love that idea that you kind of alluded to, which is process engineering, or almost like forward deployed, where you're working very closely alongside a customer to deeply understand their needs and the unique data that they may have available to them, to really customize Hebbia and get the most out of the platform in general. And so it makes sense that different customers' setups of Hebbia could look quite different, or what they get out of it could be quite different. I'm curious, I know that Hebbia's

more of a tool for finance and legal teams, but I'm curious if your team, your product development team, has also used your product in any interesting ways. Because I would imagine that that same kind of analysis could be applied to other domains, potentially product development as well. But does the Hebbia team use Hebbia to help you build it?

Morten Bruun (21:25.315)

Mm.

Morten Bruun (21:28.876)

Yeah, I think fundamentally we're big believers in eating our own dog food here. I also think, while we do focus on finance and legal, the reason why we think that's fundamentally interesting is just because the marginal value, or the marginal utility, of getting things right in finance or legal is super high, because there's always, again, millions or tens of millions of dollars on the line, whether it's a lawsuit or you're evaluating a deal or whatnot.

So we think that if we can solve for that problem, a lot of other knowledge work problems will be something that we can also address. So to your question, yeah, we do use Hebbia a lot internally. Maybe two examples from recently: we just presented our product strategy, and we were crafting slides for that. We do that in Hebbia, where we have, you know,

all the best layouts, the slides that we've used before. And again, Hebbia sort of learns based off of that. So it's really good at thinking: you want to communicate this point; we know that previously you've used similar types of layouts or these types of slides to communicate those points; maybe we can leverage some of those designs and tweak them a little bit for the specific slide that we want to create for this product strategy. And that's kind of how we put together the deck.

Another thing that we use it for, coming back to this example of analyzing unstructured data at scale, is that a lot of what I find really interesting to analyze as a product manager is unstructured data: raw transcripts, reading notes that I have, and so on. And then testing different hypotheses against those, trying to extract what the key points are, and maybe doing that on a

per-transcript basis, because I want to be able to drill down and know that this customer said this specific thing. But I of course also want to zoom out and abstract away and think about, okay, if I'm not just analyzing one call, but trying to get insights from hundreds of different calls, you know, what are the most impactful features that we can build? So I have a Matrix, which is sort of our product for analyzing these things at scale, where I load in all the different transcripts, my notes over the course of...

Morten Bruun (23:49.877)

of a quarter, and I also, you know, forward emails, so those get indexed as well. And it basically becomes this repository, a database, where I can ask questions, and I do these drill-down analyses to understand, you know, within PowerPoint, what is the main feedback that we're getting, and so on and so forth. So we try to use it a lot internally. And I think especially for product, there are lots of really interesting use cases there.

Ethan Lee (24:16.748)

Yeah, yeah, absolutely. That does sound like a really cool use case, to be able to analyze all of that unstructured data and collect those themes. And certainly there are a lot of maybe even subtle types of requests that can pop in, and I imagine that the Hebbia platform is super helpful in bringing some of those out and citing those in answers to questions. One thing that I noticed is that your title is Member of Technical Staff. And certainly that mirrors

some of the AI labs; they've also chosen to use that label. I'm curious, does that reflect a specific org structure that you guys have decided on for developing the product? And has anything about the role of product and engineering internally shifted in response to using some of these tools, or perhaps coding agents? Or is that just the new term that we're using for some of these things? I'm curious what that's about.

Morten Bruun (25:13.646)

Yeah, it's a good question. I kind of came into product from, I guess, a bit of an odd background in some ways, in the sense that I've mainly been a founder and a builder, more so than a traditionally schooled PM. So for me, I had to learn a lot of things, but it's also been quite remarkable how much has shifted in the last six months. I think if I were to put a number on it, maybe...

10% of how I worked six months ago is how I still work. The other 90% has shifted dramatically. I find myself, again, mainly leveraging different AI tools. And I also think the output that I create now is less about, you know, writing and...

you know, helping shape design, commenting on Figmas, or whatever it might be. Now it's much more focused on building. And I think I see that across the board. You know, the way that we organize is that engineering, product, and design all roll up into our CTO. So from a reporting-line perspective, we're all part of a technical team, whether you're in design, engineering, or product, and across the board, we're all essentially writing code in one shape or another.

So I think that has changed drastically. And I think what it allows especially product managers to do, and designers as well for that matter, is to move from a world where things were mainly done on paper in one form or the other. Meaning that you spend time interviewing clients, getting a bit of a...

pixelated view of the features that they like, then you try to abstract away the sort of key findings there. Maybe you talk to internal teams, you do some market research, you look at some data, and at that abstracted layer you get the feedback and direction on what the product should be. And then you spend time writing PRDs, trying to get thoughts on paper, getting feedback on them, iterating a lot, so that you...

Morten Bruun (27:16.494)

So you kind of start shaping what the feature should look like. And you do all of that to ultimately de-risk engineering spending time building something, so that once it lands in the hands of customers, it addresses the problem that you set out to solve. And I think a lot of that has not gone away, but it has changed drastically. And then I think the second thing that has happened is that that loop,

between when you first start doing discovery work on a feature and when you're able to put it in the hands of clients or internal teams for dogfooding or whatever it might be, has been compressed so much that right now, you know, from having an idea or some type of thought around what you might want to build next, you can actually just build it. And I see that happening across engineering, of course; that loop has really shortened. We do that in product as well, but also our designers are more often, like,

not just designing static Figmas, but thinking through: what should the interaction look like? When I click this thing, what actually happens? What does our loading state look like, and what do different animations look like? And it really makes the product come alive in a different way than it did maybe six months ago, when it was tied to some static Figma frames and a PRD with a write-up on what it should feel like. But now you can actually put your hands on it and...

try it out and click around. And I think that has changed a lot in terms of, you know, again, de-risking that we're not building something that ultimately doesn't feel intuitive when clients get their hands on it. We used to do that more theoretically, and now I see us doing it much more practically. And reporting-wise, I think that kind of means that these roles are converging more and more. And I think if you're an engineer that's just focused on coding and just executing, you know, something that...

product throws over to you, I think that can be a quite tough role to be in. The engineers who are able to be much more, you know, owners together with product and design, thinking about what types of problems we're fundamentally trying to solve and what we actually want to build, also long-term in the product, are much more effective. So I don't know if it's PMs becoming more like engineers and vice versa, or if it's more that we're all becoming more like...

Morten Bruun (29:34.382)

GMs, trying to get closer to the actual clients and understand how they operate. I think they're all sort of riffs on the same situation, which is that we need to get much closer to understanding our clients, and we need to be much, much faster at putting things in front of real users who can test and help validate different hypotheses.

Ethan Lee (29:56.111)

Yeah, definitely. And what you said is so accurate, the compression of the feedback loop, that is a hundred percent happening. The way I've kind of thought about it is that the medium of thinking, like you said, has shifted. It's shifted from pen and paper, or, you know, interviews and maybe static prototypes. I mean, it used to take hours to really prototype a high-fidelity interaction, right? But now you can really think with a coding agent to prototype it live. And that can be, like,

Morten Bruun (30:06.478)

Mm.

Morten Bruun (30:16.206)

Mm.

Morten Bruun (30:22.626)

Mm.

Ethan Lee (30:23.32)

a semi-working feature. You can get to those conclusions much faster than if you had spent hours trying to build a high-fidelity prototype in one of those older prototyping tools. So yeah, it's super interesting and has definitely shaped product development, certainly for us at Paragon as well. I think the slightly more negative angle on that, and I'm curious to get your thoughts on this, is that as the cost of, like,

Morten Bruun (30:36.654)

Mm.

Ethan Lee (30:52.02)

engineering things and building software has just gone down, it's no longer necessarily a barrier for someone just entering a new space and building out a new product. That is now something that AI models are capable of, and a small team is capable of. And certainly the public markets are reacting to this; much has been made of the SaaSpocalypse. I'm curious to get your thoughts on that and hear

Morten Bruun (31:13.39)

Mm.

Ethan Lee (31:19.276)

where that reaction is justified or maybe not justified, and how SaaS companies can continue to differentiate in this new world.

Morten Bruun (31:27.426)

Yeah, I think it's a really interesting question. I mean, I think if you purely look at software, like the value of software being the code that's written, then for sure, you know, the ability for everyone to now write code, and do it much faster, has obviously changed the game. But, you know, I think the fact that code gets cheaper, you know,

doesn't mean the moats weaken as such, right? Like, I think a lot of what makes great software great is really how it becomes part of, you know, fundamentally the customer's workflow, or the institution that you work for. And I think that comes back a lot to, again, understanding those processes and how well you wrap around them. I think if you look at a space like, you know, finance, the reason why Bloomberg is very, very sticky

is not per se... you know, I'm not saying that it's going to be easy just to replicate Bloomberg, but it's just as much that you have, like, you know, generations and generations of finance professionals who have worked in Bloomberg, have their workflows in Bloomberg, have been trained on Bloomberg, and, you know, have a lot of knowledge in Bloomberg. They have, you know, messaging channels in Bloomberg and different analyses in Bloomberg. So all of a sudden you have this idea of a truly institutionalized tool, where, I'm sure, you know,

there is a world in which you go out and vibe code a new Bloomberg. But even if you could do that, and even if you could replicate Bloomberg, that doesn't mean that you can just switch it over. And so I think we sometimes actually underestimate the sort of stickiness of being truly embedded into some of these workflows. That doesn't mean that there's not a sort of...

threat, if you're a legacy platform, from new AI tools that are coming out. And I think you do need to fundamentally rethink a lot about how you've built your software, and, you know, some of the features that might be true in sort of a pre-AI world might need to be rethought completely. It opens up the door for a lot of new use cases that we couldn't really solve before, and I think your ability to adapt to that is going to be hugely important. But if you only think about, again, software as code,

Morten Bruun (33:45.635)

then yeah, sure, the SaaSpocalypse is very, very real. And if you're like a very thin wrapper around, you know, foundational models, and, you know, again, I think Sam Altman said that he wouldn't want to be one of the companies that are fearing the next model release from OpenAI, or Anthropic for that matter, I think that very much holds true. But I think if you're in a space where, again, the workflows that you're trying to build are not just vanilla, generally applicable workflows, but are quite specific, and mean that you

have to have a very, very specific understanding of the client, how they operate, and, again, how they do their analysis and what type of output they're looking for, and the ability to learn and build those things into the software as more of an organic thing over time, I think those products have really only been amplified by it being faster and easier to ship software, rather than having been threatened by it.

Ethan Lee (34:44.95)

Yeah, I appreciate that. And it's a good reminder, too, that people don't buy software for the code that it is. People buy software because it provides them some value as a user at the end of the day. And so certainly, yes, that does require a really deep understanding of what they want and what they're getting out of it, and not just the fact that it exists and it's, you know, coded a particular way. So yeah, thank you for sharing your perspective on that. Maybe as kind of a last question that I'm curious to get your thoughts on:

Morten Bruun (34:55.533)

Mm.

Ethan Lee (35:14.22)

Maybe some of our listeners are working with finance or legal teams, or, yeah, maybe thinking about building a product in that space. Do you have any lessons you've learned about winning trust within those spaces? Like, you know, they're very highly regulated industries, and these customers obviously have very high enterprise and compliance requirements. But has anything surprised you, working with these customers, about, you know, sort of what they're looking for and how you can provide value, or...

or maybe even how you sell to them at all in some cases. So yeah, curious to hear your thoughts.

Morten Bruun (35:50.795)

It's a good question. There are obviously things in terms of the product in and of itself, to your point. These are clients where a lot of what they fear is making the wrong decision based on wrong data, right? So being able to have high accuracy and whatnot. But it obviously goes without saying, when you're working in our segment, that those things are important. I think if you look a little bit more at how you get in the room to begin with,

I think a lot of these firms truly embrace it, and I've been really positively surprised by how they actually are leaning into adopting tools and want to invest in tools in this area as well. So I think what they're really looking for is not just a product that generally works well and solves the problem. They're also looking for

a company that they can partner with and sort of build alongside and have a quite close relationship with. And I think a lot of what they find trust in is not just that you're a great technologist and that you have a cracked engineering team or, like, really talented PMs or designers or whatever it might be, but that you also have a

team of people who go on site and, again, understand their world and understand the everyday life of being in the trenches as an investment banker or a PE associate and whatnot. I think that's one of the things that we see when we sort of...

work with clients that are also working with some of the big foundational models. There are times where we end up taking the entire client, and they just use Hebia as the tool. There are also times where they might want to use the foundational models more for the lightweight, sort of everyday questions and whatnot, and use Hebia for the heavier, more challenging, more complex analysis. And in both cases, I think what oftentimes...

Morten Bruun (37:56.995)

gets noted by our clients is that they find a lot of value in working with a team who understands, again, what a CIM is, or what an IC memo looks like, or why a certain output should look one way and not another. And I think that's also something that has, I wouldn't say surprised me, but definitely been a constant reminder, especially working in my specific area, which is about producing, again, PowerPoints and Excel and whatnot.

With my background from McKinsey, one of the things that I always found, I think, at the time frustrating, but where I also think we find a lot of edge, is this:

When these people are sending over a PowerPoint, it's not just to, sort of, update the company on the latest changes and how sales were last month and whatnot. They're ultimately pitching on deals that are millions and millions in value, and that are also hugely important, whether it's building out new data centers or whatever it might be. So you want to get that just right. You can't...

Again, you can't live with 90%. You can't live with something that's average, or that doesn't follow their standards or doesn't meet their expectations. So when you look at it as a pure technologist, you might be like, hey, we got a single prompt and we generated this entire PowerPoint, and it has your colors and logos and whatnot. How amazing is that? And as a PM or an engineer, you can appreciate how much work goes into that. But...

from our clients' perspective, they don't really care about how challenging that problem was to solve from a software engineering perspective. They care about: if this was a document that I was sending over to my clients, would it meet the standards of my firm? And if you can't get there, then it's not really that helpful, to be honest, right? And I think that's something that, if you're building in this space, I think there is a risk. Especially, you know, maybe you can make the argument that right now, where everyone

Morten Bruun (39:59.469)

can be building, maybe you see less of that sort of technologist-first approach. But I think there's always a challenge that, because you are the one who's been building it, and because you've gone through all these iterations of trying to just make the tool work, you lose a little bit of, you know, the perspective that that's not what these companies care about. They don't care about, like, you know, the hoops that you had to jump through in order to make this Excel plugin work really well. They're looking at it as: does this fundamentally solve

the job to be done? And that's not just going from a prompt to a spreadsheet or from a prompt to a PowerPoint; it has so many more layers. I think really having a lot of empathy for the clients and their specific use cases, trying to get as deep as you can in understanding them, and, again, sort of pairing up with people who have a background in that specific industry, has been a huge, huge unlock for us. And, you know, as...

As much as I think, you know, our engineering team deserves so much credit for everything that we've built, I also think that, with the people we have working alongside our clients, you know, I can't overstate how important they are for ultimately bringing together a product that actually solves problems out there, and isn't just sort of, you know, a gimmicky thing.

Ethan Lee (41:23.126)

Yeah, absolutely. Such a great reminder. And I can definitely appreciate how your background helps you specifically understand those teams and their needs better. And of course, having people on the ground who are also working with them to get them from that 90% to something closer to what they could actually deliver to a client. So that's super cool to hear your perspective on. It makes a lot of sense. And yeah, thank you so much for sharing that and for joining us on Future Proof today.

Morten Bruun (41:50.319)

100%. A pleasure, Ethan, and thanks so much for inviting me.

Ethan Lee (41:53.869)

Awesome. Thanks, Morten.


Interested in being a guest on Future Proof? Reach out to forrest.herlick@useparagon.com