We’re happy to share the latest in AltsTech’s series profiling how investment managers are using AI, tech, and analytics to generate alpha. We’re fortunate to interview Steve Chaparro, Founder & General Partner, Beyond Capital Ventures.

I was most interested in learning some of the unique insights they’ve obtained. Steve summarized:

  1. “Technical excellence without workflow clarity is the most common early failure mode.
  2. Systems-native founders are frequently misread as unfocused in early pitches.
  3. Commercial urgency combined with workflow mapping clarity correlates strongly with early enterprise traction.
  4. Constraint-native operators outperform pedigree-heavy founders in complex verticals.”

David Teten: Please give us an overview of your firm.

Beyond Capital Ventures is a $5M early-stage fund focused on one thing: AI Workflow Infrastructure.

That focus came from pattern recognition, not theory. I spent a few years working closely with founders and operators across healthcare, insurance, and revenue operations, and I kept seeing the same failure mode. The AI itself would work, sometimes incredibly well, but nothing actually changed in the organization. The workflow didn’t move.

That’s when it clicked for me. AI creates capability. But the value accrues at the workflow layer, where systems, humans, data, and now agents have to coordinate to produce something reliable and auditable.

So when we say “Workflow Infrastructure,” we mean the systems that make AI usable in the real world, not just impressive in a demo.

Our strategy is pretty disciplined around that insight.

First, we only invest in that category. We pass on a lot of otherwise strong companies because they’re building features or applications, not systems.

Second, we underwrite founders differently. We’re looking for a specific set of traits that show up when someone has actually lived inside complex systems. Can they map a workflow under pressure? Do they understand constraints, not just capabilities? Those signals matter more to us than pedigree or polish.

Third, we built what we call our Eight-Layer Evaluation Engine. It’s a structured way of forcing ourselves to answer the hard questions on every deal, from workflow clarity to architecture to mispricing. It’s less about being “right” and more about being consistent.

Where this all comes together is in how we think about edge.

We don’t believe we see more deals than other funds. Our edge is that we’re looking for different signals. In a lot of cases, the founders we’re most interested in are the ones who feel a bit hard to read in the first meeting, but once you understand the system they’re describing, it’s actually much more coherent than it first appears.

That gap between how something is perceived and what it actually is, that’s where we spend our time.

David Teten: Who are your peers/competitors, and how do you differ?

Our peers are mostly early-stage AI funds, especially pre-seed funds like Orange Collective, solo GPs investing in vertical AI, and emerging managers focused on infrastructure.

Where we differ shows up pretty quickly in our evaluation of a deal.

Most early-stage AI investing still leans heavily on pattern recognition. You’re reading the founder, the pedigree, how clean the narrative is, how well it fits what’s already working in the market. That works well when the category is well understood.

In Workflow Infrastructure, that breaks down.

I’ve been in too many early calls where a founder couldn’t explain their product cleanly in two minutes, and the instinct is to discount them. But if you stay with it and actually map what they’re describing, you realize they’re not confused. They’re describing a system that hasn’t been simplified yet.

That’s a very different signal.

So instead of relying on instinct alone, we built a structure around it. Every deal goes through our Eight-Layer Evaluation Engine, where we look at things like workflow clarity, architecture, and mispricing.

There have been cases where a founder didn’t “land” in the first meeting, but when we walked through the workflow step by step, it became clear they had a much deeper grasp of the system than founders who presented more cleanly.

We’ve also passed on companies that looked great on the surface but couldn’t hold up when we pushed into workflow or constraint questions.

So the difference is less about taste and more about structure.

Traditional venture investing tends to reward clarity of presentation. We care more about clarity of the underlying system. Those are not always the same thing, especially this early.

That gap is where we spend our time, and where we think our edge comes from.

David Teten: What’s your background? How and why are you in your role today?

My path into venture wasn’t linear.

I started in architecture, then moved into strategic design, and eventually into systems work at IDEO. Across all of that, the common thread was learning how complex systems actually function, not just how they’re supposed to function on paper.

At IDEO, I spent a lot of time inside large organizations that were trying to adopt new technologies, including early AI systems. What stood out wasn’t the technology itself. It was how hard it was to get anything to actually work inside a real workflow.

You’d see strong tools fail because they didn’t fit how decisions were made, or how data moved, or who actually owned the process.

That led me into more direct work with founders. I started spending a lot of time mapping workflows across industries like healthcare, insurance, logistics, and revenue operations. Sitting with founders, whiteboarding their systems, pressure-testing how things would actually run in production.

After a while, a pattern became really clear.

The companies that endured weren’t the ones with the most impressive models. They were the ones who understood how to orchestrate everything around the model.

That’s really where Beyond Capital Ventures comes from.

It’s not a thesis I picked because it was trending. It’s the result of years of seeing the same gap over and over again. The fund is really just a formalization of how I was already working with founders, turning that into a structured, repeatable way to evaluate and support companies building in that layer.

David Teten: What are the tools you’re using for your front office: sourcing, LP relations, investing analysis, etc.? What are the strengths and weaknesses of these providers?

We try to keep the stack relatively simple. Most of the leverage comes from how we use the tools, not the tools themselves.

At the core, we use Notion (https://www.notion.so) as our investment operating system. That’s where our Eight-Layer Evaluation Engine lives.

For example, after a founder call, we’ll take the transcript, run it through our AI workflows, and reconstruct the actual workflow they’re describing. We map current state, future state, decision points, and failure modes. That all gets documented directly in Notion alongside founder trait scoring, architecture notes, and our mispricing view.
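As an illustration only, the transcript-to-workflow step could be sketched roughly like this. The `WorkflowMap` record and `build_extraction_prompt` helper are hypothetical names, not Beyond Capital's actual implementation, and the model call itself (e.g. to the OpenAI API) is deliberately left out:

```python
from dataclasses import dataclass, field

# Hypothetical record mirroring the four things the fund says it maps
# from each founder call: current state, future state, decision points,
# and failure modes.
@dataclass
class WorkflowMap:
    current_state: str = ""
    future_state: str = ""
    decision_points: list[str] = field(default_factory=list)
    failure_modes: list[str] = field(default_factory=list)

def build_extraction_prompt(transcript: str) -> str:
    """Assemble a prompt asking an LLM to reconstruct the workflow a
    founder is describing. The actual model call is omitted; this only
    shows the structure of the request."""
    return (
        "From the founder call transcript below, reconstruct the workflow "
        "being described. Return JSON with keys: current_state, "
        "future_state, decision_points, failure_modes.\n\n"
        f"Transcript:\n{transcript}"
    )

# Invented transcript snippet, purely for illustration.
prompt = build_extraction_prompt("Founder: today, claims are triaged by hand...")
print("failure_modes" in prompt)  # the prompt names each required field
```

The point of the sketch is the shape of the record, not the prompt wording: every call produces the same four fields, which is what makes the notes comparable across companies later.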

The strength is total control. The weakness is that you only get value if you’re disciplined. There’s no guardrail. If you stop maintaining it, it breaks quickly.

On the fund side, we use Decile Hub (https://decilehub.com) and AngelList (https://www.angellist.com).

Decile is really the backbone of how we manage the fund and LP relationships. AngelList is where we run SPVs. Both are efficient and make it possible to operate as a solo GP without a large back office.

The tradeoff is customization. You’re operating inside someone else’s system, so you have to adapt your workflows to fit.

For AI, we primarily use OpenAI (https://openai.com).

A concrete example: we’ll take a founder conversation and use AI to extract the implied workflow, then pressure-test it. Where does it break? What happens when inputs are messy? Where does human intervention still need to exist?

It’s incredibly useful for accelerating pattern recognition, but it’s not a decision-maker. You still have to verify everything. If anything, it forces you to be more structured in how you think, because bad prompts produce bad conclusions very quickly.

Overall, the way we think about tools is pretty simple.

Notion is the system.
AI accelerates analysis.
Decile and AngelList handle execution.

Everything else is secondary to the structure sitting on top.

David Teten: Can you share details on your use of OpenAI? E.g., Custom GPTs you’ve built in it?

Yes. The primary way we use OpenAI is through a custom GPT I’ve built called BCV Investment Analyst, which is now on its third version.

I didn’t start by trying to “use AI for investing.” I started by trying to make my own thinking more consistent.

Early on, I noticed that even with a strong framework, your judgment can drift. You ask slightly different questions. You overweight something in one deal and underweight it in another. That’s where mistakes creep in.

So I began encoding my evaluation process into a structured system inside ChatGPT.

Version one was pretty basic. It was essentially a set of prompts to help me map a company to my thesis and generate an initial IC memo.

Version two got more serious. I integrated scorecards, founder trait evaluation, and more structured outputs across diligence.

Version three, which I’m using now, is much closer to an actual operating system.

It’s built directly around our Eight-Layer Evaluation Engine. For every company, the system walks through all eight layers in sequence.

Each layer has specific prompts, required outputs, and pass/fail thresholds.
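The all-layers-must-pass gating described here could look something like the following sketch. Only three layer names (workflow clarity, architecture, mispricing) are mentioned in the interview; the remaining layers and every threshold value are placeholders, not the fund's actual criteria:

```python
# Illustrative pass/fail gating: a deal advances only if every layer
# clears its threshold. Names and numbers are invented placeholders.
THRESHOLDS = {
    "workflow_clarity": 7,
    "architecture": 6,
    "mispricing": 5,
    # ...the real engine would have five further layers here
}

def deal_passes(scores: dict[str, int]) -> bool:
    """Return True only if every layer meets or exceeds its threshold.
    A missing score counts as zero, i.e. an automatic fail."""
    return all(
        scores.get(layer, 0) >= minimum
        for layer, minimum in THRESHOLDS.items()
    )

print(deal_passes({"workflow_clarity": 9, "architecture": 8, "mispricing": 6}))  # True
print(deal_passes({"workflow_clarity": 9, "architecture": 4, "mispricing": 6}))  # False
```

The design choice worth noting is the conjunctive gate: a standout score on one layer cannot compensate for a failing score on another, which is what removes the "but the founder is so impressive" back-and-forth.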

A concrete example: I’ll take a founder call transcript, feed it into the system, and have it reconstruct the workflow step by step. Then I’ll pressure-test that workflow. Where does it break? Where are the hidden human dependencies? What assumptions are unproven?

Separately, I’ll run a founder assessment through the system to evaluate how they think under pressure, not just what they say when they’re polished.

What’s important is that the GPT is not making decisions.

It’s forcing consistency.

It ensures that every deal is interrogated the same way, that I don’t skip layers, and that I have a clear, written rationale for why I’m leaning in or passing.

In practice, it acts less like an “AI analyst” and more like a structured thinking partner that doesn’t get tired, doesn’t forget steps, and doesn’t let me be lazy in my evaluation.

That’s where the real value has come from.

 

David Teten: What are the tools you’re using for supporting your portfolio companies? What are the strengths and weaknesses of these providers?

Our primary support is not vendor-first. It’s structural.

Most of the work we do with founders happens through a set of focused sprints that mirror how we evaluate companies in the first place. That usually includes:

For example, with one founder, we spent a session just reconstructing their workflow end-to-end. Once we mapped it cleanly, the GTM path and pricing strategy became much more obvious. The tools didn’t create that insight. The structure did.

From a tooling perspective, we keep things relatively lightweight:

Core System

AI & Analysis

Communication & Diligence

Capital & Fund Infrastructure

Financial Stack

Recommended to Founders

We often guide founders toward modern fintech and ops infrastructure such as:

Strengths

Weaknesses

David Teten: What technologies/databases have you found helpful in winning LPs?

Winning LPs is less about tools and more about consistency.

Most of our LP conversations don’t start with a deck. They start with a point of view. The tools just support that.

Core Infrastructure

Communication & Capture

How We Source LPs

We don’t rely on a single channel. It’s a combination of:

A lot of this is relationship-first. The database helps us stay organized, but the trust is built offline.

What Actually Converts

Every LP conversation is anchored in:

One thing I’ve noticed is that allocators don’t respond to enthusiasm. They respond to discipline.

When they can see that you make decisions the same way every time, and that you’re willing to pass as often as you invest, the conversation changes.

Strength

Weakness

David Teten: What tools do you find helpful for expediting due diligence?

Our diligence speed doesn’t come from moving faster. It comes from removing ambiguity.

Early on, I realized most delays in diligence aren’t about missing data. They’re about unclear thinking. So we built a system that forces clarity early.

What We Use

How It Works in Practice

A typical flow looks like this:

At that point, the question is usually no longer “Do we have enough information?”

It becomes “Does this pass or not?”

What Actually Speeds Things Up

Every deal must pass all eight layers of our evaluation engine.

That removes a lot of back-and-forth and second-guessing.

AI helps us synthesize quickly.
The engine forces us to decide clearly.

Strength

Weakness

David Teten: What are the tools you’re using for your middle office?

We keep the middle office relatively lean.

As a solo GP, the goal isn’t to build a large operational layer. It’s to have reliable infrastructure that handles execution cleanly so I can stay focused on investing.

Core Infrastructure

How We Actually Use It

These platforms handle the mechanics: capital calls, investor tracking, and execution.

But we don’t rely on them for decision-making or portfolio logic.

We maintain our own internal system in Notion (https://www.notion.so) to track:

This separation is important.

Execution happens in AngelList and Decile.
Thinking and discipline live in our internal system.

Strength

Weakness

David Teten: What are the tools you’re using for your back office?

We keep the back office simple and reliable.

At this stage, the goal isn’t sophistication. It’s clarity, control, and making sure nothing breaks as capital starts to move.

Core Stack

How It Works in Practice

Rho handles the movement of money.
QuickBooks tracks and organizes it.
Decile ensures everything ties back to the fund and LP layer.

It’s not complex, but it’s dependable, which matters more at this stage.

Strength

Weakness

David Teten: A huge amount of valuable data flows through your pipes. What are you doing to capture that data and mine it? Can you share any patterns you have identified?

We treat every interaction as data, not just notes.

Most of that data lives inside our system in Notion (https://www.notion.so), where each company is evaluated through the same structured lens. Over time, that creates a dataset that’s actually comparable across founders and companies.

What We Capture

Every meaningful founder interaction produces:

How We Use It

This isn’t passive data collection.

We revisit it constantly. After a call, after a follow-up, after a founder sends an update, we refine the model. Over time, patterns start to show up in a way that’s hard to see if you’re just relying on memory or scattered notes.

Patterns We’ve Identified

A few patterns have shown up consistently:

What This Means for Us

Over time, this dataset reinforces our predictive trait model.

It doesn’t replace judgment, but it sharpens it.

Instead of asking “Do we like this founder?”
We’re asking, “Have we seen this pattern work before, and under what conditions?”

That shift has probably been one of the biggest upgrades in how we make decisions.
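One way to picture the shift from “Do we like this founder?” to “Have we seen this pattern before?” is a simple lookup over tagged deal records. The records and outcomes below are fabricated for illustration; only the trait tags echo terms used in the interview (systems-native, constraint-native):

```python
# Invented example records: each prior deal carries a set of trait tags
# and an outcome label. The data is fabricated purely to show the
# query shape, not real portfolio results.
history = [
    {"tags": {"systems-native", "workflow-mapping"}, "outcome": "traction"},
    {"tags": {"pedigree-heavy"}, "outcome": "stalled"},
    {"tags": {"systems-native", "constraint-native"}, "outcome": "traction"},
]

def prior_matches(required_tags: set[str]) -> list[str]:
    """Return the outcomes of every prior deal whose tags include all
    of the required ones (subset test on Python sets)."""
    return [
        deal["outcome"]
        for deal in history
        if required_tags <= deal["tags"]
    ]

print(prior_matches({"systems-native"}))  # ['traction', 'traction']
```

Because every company is scored through the same structured lens, a query like this is answerable at all; with free-form notes, the tags would not be comparable across deals.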

David Teten: Do you see any room to use AI to exploit your dataset?

Yes, but probably not in the way most people think about it.

We’re not trying to automate decisions. We’re trying to see patterns earlier and more clearly.

Most of this work sits on top of our internal system in Notion (https://www.notion.so), with AI tools like ChatGPT (https://openai.com) and Claude (https://www.anthropic.com) helping us interrogate that data.

What We’re Building Toward

We’re starting to layer in structured tagging across every company and interaction so we can:

How We Think About AI

AI is useful here as a pattern amplifier.

It helps us:

But it’s not making decisions for us.

What We’re Actually After

The goal is not automation.

It’s sharper conviction.

If anything, the more we use AI, the more it reinforces how important human judgment is, especially in edge cases where the pattern isn’t obvious yet.

AI helps us see the pattern.
We still have to decide what it means.

David Teten: What are the most creative or unusual ways you’re using AI & analytics?

Most of our AI use isn’t about speed. It’s about pressure-testing.

We use tools like ChatGPT (https://openai.com) and Claude (https://www.anthropic.com) to take what a founder says and push it further than a normal conversation would.

A few ways this shows up in practice:

How We Think About It

We’re not using AI to summarize.

We’re using it to make things harder.

To push on the edges of a system, a founder, or our own thinking until something either holds or breaks.

That’s where most of the insight comes from.

David Teten: What are your unmet technology needs?

There are a few areas where we consistently feel the gap.

Not in a theoretical sense, but in the middle of real decisions where we wish something existed.

Where We See Gaps

Why This Matters

All three of these come back to the same idea.

We have strong tools for measuring outputs.
We don’t have good tools for measuring systems in motion.

That’s the layer we spend most of our time in, and where we think a lot of the next generation of venture infrastructure will get built.

If those tools existed, they wouldn’t replace judgment.

But they would make it much sharper.

David Teten: What processes are you focused on improving?

At this stage, most of the work is about getting sharper, not broader.

We’re constantly refining a few core processes where small improvements have a big impact:

Where We’re Focused

What This Comes Down To

Most of this is about timing and judgment.

Not just what we see, but when we see it and how confidently we act on it.

The system gives us structure.
Now we’re focused on making that structure sharper over time.

 

If this was helpful to you, please sign up for my newsletter.
