Why I replaced my own software before I could even properly launch it

Okay, I didn’t build a real SaaS application. Sorry for the clickbait, but given what happened to my Knowledge Hub, I couldn’t resist.

A bunch of people have asked me about the status of my Knowledge Hub, which I wrote about a few weeks ago. How well does it work? Did I get PMF inside Point Nine? Things are changing so fast that I’ve been hesitant to write this post … by the time I hit the “publish” button, things might look different again. 😉 But I owe y’all an update, so here goes.

TL;DR

I’ve moved to a much simpler and probably better solution. Instead of worrying about a vector database, embeddings, RAG, hybrid search, a custom MCP server, and a process supervisor, I’ve just connected our data sources (Attio, Slack, Zendesk, etc.) directly with Claude, and it seems to work just fine.

A Quick Recap

About a month ago, I vibe-coded a Knowledge Hub: ~43,800 lines of Python, six data source connectors, a vector database, MCP servers, and more. I wrote about it here and also shared this (AI-generated) tech doc.

It was promising. It sort of worked. But it kept breaking. Connections got lost, MCP servers went down, sync jobs failed in non-obvious ways. Some sources kept failing silently. The app would happily report that it had synced, but nothing had actually made it into the database. Other times, documents were in the database but for some reason weren’t being retrieved. Plus all kinds of little things that didn’t work.

All of this is fixable, and better AI coding models and tooling ship almost daily, making it easier. But while working on it, I tried a much simpler approach in parallel … and realized that the previous approach was heavily over-engineered (for our use case). It also helped that Anthropic released improvements at breakneck speed, including more plugins and better connections to Gmail/G-Drive/G-Cal.

Doing this, I realized that by simply plugging everything into C̶l̶a̶u̶d̶e̶ Claude Code directly, I could probably get 90–95% of the value for about 10% of the effort. Especially 10% of the maintenance effort … and that matters a lot. A simpler AI Command Center system (Claude suggested this title when I discussed the specs with it) means I can iterate faster, because there’s so much less complexity to manage.

Side-by-Side: Knowledge Hub vs. AI Command Center

Here’s a quick comparison of the two systems (mostly written by Claude):

The main advantage of the Knowledge Hub is speed, because it provides the AI with a pre-indexed search. With the AI Command Center, question answering is pretty slow because everything happens live. But as mentioned before, the latency advantage of the Knowledge Hub comes with quite a lot of operational complexity. If it turns out that the AI Command Center isn’t good enough from a speed and answer quality perspective, I might go back to the Knowledge Hub — or take a closer look at systems like Glean or Dust.

New Features!

Because the system was suddenly so much lighter, I could experiment more freely. So I started adding things:

Task management. I made it my task manager for capturing ideas and to-dos, replacing what I’d previously used ChatGPT for (which I wrote about before). The main advantage over the ChatGPT solution is that ideas and tasks are now saved in a Notion database, which is much more robust than relying on ChatGPT’s context window.
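The post doesn’t show how tasks land in Notion, so here’s a minimal sketch of what saving a captured to-do could look like via Notion’s pages API. The database schema (a “Name” title property plus a “Status” select) and the `build_task_payload` helper are my assumptions, not the actual setup:

```python
# Hypothetical sketch: build the request body that Notion's
# POST /v1/pages endpoint expects for creating a row in a database.
# Schema ("Name" title, "Status" select) is an assumption.

def build_task_payload(database_id: str, task: str, status: str = "Inbox") -> dict:
    """Return a Notion page-create payload for one captured task."""
    return {
        "parent": {"database_id": database_id},
        "properties": {
            "Name": {"title": [{"text": {"content": task}}]},
            "Status": {"select": {"name": status}},
        },
    }

payload = build_task_payload("my-tasks-db", "Follow up with founder intro")
print(payload["properties"]["Name"]["title"][0]["text"]["content"])
```

In practice the payload would be sent with an authorized HTTP client (or the official `notion-client` SDK); the point is just that each captured idea becomes a durable database row rather than a line in a chat transcript.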

Smart screenshot workflows. When I paste a screenshot of a LinkedIn profile, it triggers research on that founder and their company. When I paste a screenshot of a WhatsApp conversation about scheduling, it triggers a Blockit scheduling workflow.

Example of a “paste screenshot” workflow
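The routing idea behind these workflows can be sketched as a simple dispatch table: the model classifies the pasted screenshot, and the classification picks the instruction to run. The workflow names and the dispatcher are hypothetical, for illustration only:

```python
# Illustrative sketch of screenshot routing. In the real setup, Claude
# itself classifies the image; here the classification is an input.
# Workflow keys and instructions are assumptions, not the actual config.

WORKFLOWS = {
    "linkedin_profile": "research this founder and their company",
    "whatsapp_scheduling": "kick off the Blockit scheduling workflow",
}

def route(screenshot_kind: str) -> str:
    """Map a classified screenshot to the instruction to execute."""
    return WORKFLOWS.get(screenshot_kind, "ask the user what to do with this image")

print(route("linkedin_profile"))
```

A nice property of this shape is that adding a new screenshot trigger is just one more entry in the table (or, in the real system, one more instruction in the prompt).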

Morning digest. Every morning, it compiles a summary — emails, Slack messages, calendar updates — so I can start the day with a quick overview. I don’t know yet how useful it will actually be, but hey, why not? It took almost no effort to add.
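A minimal sketch of the digest step, assuming the items have already been pulled from each connected source (in the real setup Claude reads them live via its connectors; the `build_digest` helper is mine):

```python
# Hypothetical sketch: join recent items from each source into one
# morning-digest summary. Source names and items are placeholders.

from datetime import date

def build_digest(sources: dict) -> str:
    """Format per-source item lists into a single digest string."""
    lines = [f"Morning digest for {date.today().isoformat()}"]
    for name, items in sources.items():
        lines.append(f"{name} ({len(items)} new):")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

print(build_digest({
    "Email": ["Intro request from a founder", "Board deck v2 attached"],
    "Slack": ["#dealflow: new SaaS lead posted"],
}))
```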

But the most exciting addition is a super simple “scout”, which works surprisingly well so far: Claude is instructed to find interesting founders and companies I should take a look at. It scans various places for interesting themes, does a first round of research, saves the results as leads, and presents them to me.

With this feature, I ran into a pretty serious issue: researching companies involves a large number of tool calls, and this kept clogging up the context window to the point where Claude would just stop working. I tried a bunch of approaches to slim it down. Nothing worked well enough … until I switched from Claude Chat to Claude Code for this workflow. The key difference isn’t that Claude Code has a larger context window (it doesn’t). The big advantage (no news to you if you’re an engineer) is that Claude Code can spawn sub-agents — farming out parts of a complex task to separate agents, each with their own fresh context window. This way, the parent agent’s context doesn’t balloon.
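The sub-agent mechanism described above can be illustrated with a toy sketch: each research task runs in its own agent with a fresh context, and only a short summary flows back to the parent, so the parent’s context grows with the number of companies rather than the number of tool calls. `run_subagent` is a stand-in for Claude Code’s actual sub-agent machinery, not its API:

```python
# Toy illustration of context isolation via sub-agents.
# run_subagent is hypothetical; it stands in for a child agent that
# makes many tool calls inside its own context window.

def run_subagent(task: str) -> str:
    # Imagine ~50 tool calls happening here, all confined to the
    # sub-agent's own (fresh) context window.
    transcript = [f"tool call {i} for {task}" for i in range(50)]
    return f"summary of {task} ({len(transcript)} tool calls condensed)"

def parent_agent(companies: list) -> list:
    # The parent only ever sees one summary line per company, so its
    # context stays small no matter how deep each research run goes.
    return [run_subagent(f"research {name}") for name in companies]

for line in parent_agent(["Acme", "Globex"]):
    print(line)
```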

Since it works much better for the scouting/research use case and since I frequently run into context size issues with various types of questions, I’ve just migrated the entire thing to Claude Code (I told you there’s a chance that it might change before I’m done with this post).

Another “paste screenshot” example, this time using the CLI version of Claude Code … just for fun, I paste a LinkedIn screenshot of Mikkel

And what about the Crustacean Revolution?!

The obvious question some of you might have: Why am I not using OpenClaw for this? Good question. For now, I’m moving pretty fast with what I have, but I’m experimenting with OpenClaw in parallel, and it’s quite possible that I’ll soon switch to a more agentic, OpenClaw-powered setup … I’ll keep you posted!


AI Killed My SaaS was originally published in Point Nine Land on Medium, where people are continuing the conversation by highlighting and responding to this story.
