How we actually use AI
Most AI tools don't work. Here's what does.
I finally got on the vibecoding trend and built something.
No surprise: It took way longer than Twitter and the growth gurus would have you believe. For a while, I thought I was using AI correctly. I wasn’t. I was asking ChatGPT questions, pasting answers into docs, and calling it leverage. It felt useful. It wasn’t changing anything.
The thing that actually changed my day-to-day was building a system that’s specced directly into how I work.
We call it DVOS—Daring Ventures Operating System. It’s internal & messy.
It’s not something I could demo cleanly even if I wanted to. There’s no flow you’d want to screenshot. Unlike n8n, there’s no thumbnail-able workflow.
Also, unlike n8n, it actually just sits there and does things.
An overview of DVOS
┌─────────────────────────────────────────────────────┐
│ INPUTS │
│ Emails · Decks · Calendar · Meetings · CRM · Docs │
└──────────────────────┬──────────────────────────────┘
▼
┌────────────────┐
│ DVOS │
│ Index + Query │
└────────┬───────┘
▼
┌─────────────────────────────────────────────────────┐
│ OUTPUTS │
│ Morning Brief · Follow-ups · Answers · Alerts │
└─────────────────────────────────────────────────────┘

What It Actually Does
DVOS indexes everything we already live in: pitch decks, emails, calendars, meeting transcripts, CRM records, research documents. A year of deal flow, now queryable.
We had all this institutional knowledge scattered across six different systems. Decks in one place. Emails in another. Meeting notes somewhere else. Affinity for relationships. Notion for research. The information existed. We just couldn’t ask questions across it.
Now I can type “What do we know about Convoy?” and it pulls from everywhere—the deck they sent, the emails with their founders, the meeting notes from two calls, the research doc comparing them to competitors. One answer. Citations to sources.
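For flavor, here's roughly what that query path could look like if you built it on Supabase with pgvector. Everything below (the match_documents RPC, the column names, the embed and llm helpers) is an illustrative sketch, not our actual code:

// Rough sketch of a cross-source query with citations.
// match_documents would be a pgvector similarity function you define in Postgres.
import { createClient } from "@supabase/supabase-js";

declare function embed(text: string): Promise<number[]>; // any embedding API
declare function llm(prompt: string): Promise<string>;   // any chat model

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_KEY!);

async function ask(question: string): Promise<string> {
  const { data: chunks } = await supabase.rpc("match_documents", {
    query_embedding: await embed(question),
    match_count: 12,
  });

  // Each chunk keeps its origin, which is what makes citations possible
  const context = (chunks ?? [])
    .map((c: { source: string; content: string }) => `[${c.source}] ${c.content}`)
    .join("\n");

  return llm(`Answer only from the context and cite sources.\n${context}\n\nQ: ${question}`);
}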
Sounds small. Believe me, it’s not.
The Stack
Under the hood it’s Supabase, Redis, and a Cloudflare sandbox. There’s caching everywhere because if it’s slow, you won’t use it.
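To make "caching everywhere" concrete, here's a generic cache-aside helper. The KV interface mirrors ioredis's get/set signature; the key, TTL, and fetchDealsFromCRM in the usage line are made up:

// Cache-aside: check Redis, fall back to the slow call, store the result.
interface KV {
  get(key: string): Promise<string | null>;
  set(key: string, value: string, mode: "EX", ttlSec: number): Promise<unknown>;
}

async function cached<T>(kv: KV, key: string, ttlSec: number, fn: () => Promise<T>): Promise<T> {
  const hit = await kv.get(key);
  if (hit !== null) return JSON.parse(hit) as T; // fast path: cache hit

  const fresh = await fn();
  await kv.set(key, JSON.stringify(fresh), "EX", ttlSec);
  return fresh;
}

// e.g. const deals = await cached(redis, "deals:recent", 300, () => fetchDealsFromCRM());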
We built a pipeline engine that handles any data source the same way:
WEBHOOK ──┐
│
EMAIL ────┼──▶ PIPELINE ──▶ FUNDBRAIN ──▶ QUERY
│
FILE ─────┤
│
POLL ─────┘

Webhook from ReadAI with a meeting transcript? Pipeline. Email with a deck attached? Same pipeline. File dropped in a folder? Pipeline. Adding a new data source is a database row, not a deployment.
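Here's a sketch of what "a database row, not a deployment" can mean in practice: a sources table describes each feed, and one ingest function handles everything. The shapes below are my guesses at the idea, not the real schema:

// One ingest path for every origin; adding a source is inserting a row.
type SourceKind = "webhook" | "email" | "file" | "poll";

interface SourceRow {
  id: string;
  kind: SourceKind;
  config: Record<string, unknown>; // folder path, poll URL, inbox filter...
}

interface RawItem {
  sourceId: string;
  body: string; // transcript, email text, extracted deck text...
  receivedAt: Date;
}

// Placeholder stages; the real work (parsing, chunking, embedding) lives here.
async function normalize(kind: SourceKind, item: RawItem): Promise<{ text: string }> {
  return { text: item.body };
}
async function index(doc: { text: string }): Promise<void> {
  /* write to the knowledge layer */
}

async function ingest(source: SourceRow, item: RawItem): Promise<void> {
  const doc = await normalize(source.kind, item); // same shape regardless of origin
  await index(doc);
}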
There’s also this thing we call FundBrain—a knowledge layer that extracts facts and entities from everything that flows through:
┌──────────────────────────────┐
│ FundBrain (Knowledge Layer) │
│ ├─ Documents │
│ ├─ Facts │
│ ├─ Entities │
│ └─ Relationships │
└──────────────────────────────┘

Companies, people, deals, how they connect, where we learned it. The extraction is maybe 85% accurate. But 85% queryable beats 0% queryable.
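If you squint, FundBrain's core shapes look something like facts and entities that carry provenance, so every answer can point back to where it was learned. Field names here are illustrative:

// Hypothetical record shapes for a knowledge layer like FundBrain.
interface Entity {
  id: string;
  kind: "company" | "person" | "deal";
  name: string;
}

interface Fact {
  subject: string;     // entity id
  predicate: string;   // e.g. "raised", "competes_with", "founded_by"
  object: string;      // entity id or literal value
  sourceDocId: string; // provenance: where we learned it
  confidence: number;  // extraction is ~85% accurate, so keep the score
}

interface Relationship {
  from: string; // entity id
  to: string;   // entity id
  kind: string; // e.g. "investor_in", "former_colleague"
}

Keeping the confidence score around is part of what makes "85% queryable" workable: shaky facts can surface alongside their source instead of being stated flatly.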
What Actually Changed
Honestly? Meeting prep and follow-up tracking.
6:00 AM Cron job runs
│
▼
Pull calendar, emails, CRM
│
▼
7:00 AM Morning briefing lands in inbox
│
▼
Throughout day: decks auto-processed,
follow-ups flagged, knowledge indexed
│
▼
6:00 PM EOD summary: open items, tomorrow preview

I get a morning briefing now. Here are your meetings today. Here’s what you know about each company. Here’s your last interaction. Here are open follow-ups.
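Since the stack runs on Cloudflare, the daily loop could be a scheduled Worker with two cron triggers. The handler signature is real Workers API; the brief-building helpers are stand-ins:

// Sketch: one Worker, two cron triggers ("0 6 * * *" and "0 18 * * *" in wrangler.toml).
declare function buildMorningBrief(): Promise<string>; // calendar + emails + CRM
declare function buildEodSummary(): Promise<string>;   // open items + tomorrow preview
declare function sendEmail(subject: string, body: string): Promise<void>;

export default {
  async scheduled(event: { cron: string }, _env: unknown, _ctx: unknown) {
    if (event.cron === "0 6 * * *") {
      await sendEmail("Morning Brief", await buildMorningBrief());
    } else {
      await sendEmail("EOD Summary", await buildEodSummary());
    }
  },
};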
The follow-up detection reads emails and flags things that look like commitments: stuff I said I’d do, stuff they said they’d send. At the end of the day, I get a list. Nothing falls through the cracks. That alone probably saves five hours a week.
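My sketch of how you could build that flagging: one structured-output call per email, then collect the results into the end-of-day list. Prompt and shapes are assumptions:

// Illustrative commitment detector; prompt and JSON shape are made up.
declare function llm(prompt: string): Promise<string>; // any chat-completion call

interface Commitment {
  owner: "me" | "them"; // who made the promise
  text: string;         // e.g. "send the updated model by Friday"
  dueHint?: string;     // free-text deadline, if one was stated
}

async function flagCommitments(emailBody: string): Promise<Commitment[]> {
  const prompt =
    'Extract every commitment from this email as JSON: ' +
    '[{"owner":"me"|"them","text":"...","dueHint":"..."}]. ' +
    "Return [] if there are none.\n\n" + emailBody;
  return JSON.parse(await llm(prompt)) as Commitment[];
}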
The other thing: I ask questions I wouldn’t have bothered asking before. “Which decks this quarter mentioned logistics?” Used to be impossible without manually reviewing everything. Now it’s a ten-second query.
The Annoying Parts
Most AI workflows you see online look amazing. They also break in ways you don’t notice until you depend on them.
We spent more time on authentication and retry logic than on prompt engineering. The LLMs are really good at predicting what the next word should be. That’s useful. But they still can’t reliably add two numbers without writing code to do it.
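For what it’s worth, the retry half of that work is about ten lines. A generic exponential-backoff wrapper (attempt count and delays are arbitrary):

// Retry with exponential backoff and jitter around any flaky async call.
async function withRetry<T>(fn: () => Promise<T>, attempts = 4): Promise<T> {
  let lastErr: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      const delay = 2 ** i * 500 + Math.random() * 250; // 0.5s, 1s, 2s, 4s + jitter
      await new Promise((r) => setTimeout(r, delay));
    }
  }
  throw lastErr; // exhausted retries; surface the last failure
}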
The value ended up being in all the boring stuff around the model. Integrations, pipelines, caching, error handling. None of it makes good content.
I also thought the models would keep getting smarter until one of them could just do everything. That’s not really how it’s playing out. We use four different models depending on the task. Claude for code. Gemini for fast extraction. It’s looking more like a utility than a winner-take-all situation. I don’t know. It’s really hard to tell where this all goes.
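The routing itself can be as dumb as a lookup table. Only the code and extraction rows reflect what I said above; the model ID strings and the other tasks are placeholders:

// Task-to-model routing; ID strings are placeholders, not pinned versions.
type Task = "code" | "extraction" | "summarize" | "chat";

const MODEL_FOR: Record<Task, string> = {
  code: "claude-...",      // Claude for code
  extraction: "gemini-...",// Gemini for fast extraction
  summarize: "model-c",    // placeholder for the other two
  chat: "model-d",
};

function pickModel(task: Task): string {
  return MODEL_FOR[task];
}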
Where We’re At
At this point, ripping DVOS out would feel like losing an employee. It’s that embedded. Costs less than a hire. Doesn’t complain. Also took months of building stuff no one will ever see.
We’re figuring it out as we go. If anyone’s built similar internal tools, knowledge systems, or AI workflows, I’d love to hear what’s working for you. Any tips or advice? Still very much in tinkering mode over here.
Find me on LinkedIn or reply to this.



