How to Raise Money for an AI Startup in 2026 (800-Round Playbook)

Last reviewed by Olena Petrosyuk on April 28, 2026

In 2026, AI is the baseline — not a differentiator. Saying "we use AI" no longer wins meetings. 50% of VC funding went to AI in 2025, but 41% of that went to the top 5 companies and 99% of startups still don't get funded. The winning move is to stop working on fundraising — chasing warm intros, polishing decks — and start working on fundability: real traction, de-risking, proof points, distribution moat. When the story is undeniable, investors come to you.

I've spent eleven years in venture capital — 800+ rounds, $3B+ raised for clients, $640M of it in 2025 alone, most into AI. I've also been COO of an AI company that went pre-seed to Series B to a PE exit, and I'm back in the market now, pitching investors on something new. So I see fundraising from both sides of the table, every week. The picture from both sides is the same: AI used to be the cheat code. It isn't anymore. This is the 2026 playbook for raising an AI round without lying to yourself about what actually closes.


AI is the new baseline, not a differentiator

Two years ago, you could walk into a VC meeting, say "we're building on gen AI," and walk out with a term sheet. That era is over. By 2025 it was already getting harder. In 2026, AI is assumed — about 90% of software-side VCs only take AI meetings now. But the inverse is also true: just having AI is no longer enough to get a meeting, let alone a check.

The headline everyone quotes is that 50% of VC funding went to AI in 2025, growing 75–85% year-on-year. True — but it hides the real picture. 41% of that AI funding went to the top five companies (OpenAI, Anthropic, and the like). 80% went to US companies. 60% went specifically to the Bay Area. If you're early-stage and not in San Francisco, the capital you're reading about in the headlines is not being deployed anywhere near you.

The real 2026 AI funding picture
90% of software-side VCs only take AI meetings. 50% of VC funding went to AI in 2025. 41% of that went to the top 5 AI companies. 60% of AI funding went to the Bay Area. 99% of companies still don't get funded. It's not a gold rush — it's a flight to quality.

Look at Carta's State of Private Markets data. The largest surge in deal count was 2021 — the peak of the last bubble. The market crashed and never recovered. More money is being poured in, but the number of deals signed hasn't moved. A handful of mega-rounds at the top skew the averages. Down at seed and Series A, closing a deal is just as hard as it was three years ago. And more than 50% of investors are now openly discussing an AI bubble — the first time I've heard that said at this scale in my career.

The more a founder says AI in the pitch, the less AI the company uses.
Medha Agarwal, Defy (TechCrunch Disrupt 2025)

Fundraising vs fundability: the reframe that decides your round

Fundraising is chasing investors — warm intros, deck polish, financial-forecast tuning, blast emails. Fundability is building a business investors chase — real customer traction, de-risked execution, proof points for every risk, a deep ICP match with the fund. Founders who work on fundraising get five meetings and no checks. Founders who work on fundability get inbound.

The single most universal mistake — and the reason most AI rounds die — is that founders work on fundraising when they should be working on fundability. I hear it every day: "We have an amazing product. We just need warm intros." That's almost never the real problem. If you have a great product and real early traction, even cold outbound emails generate enormous numbers of meetings. The question is never "who will intro us" — the question is "have we built something investors want to chase?"

The core reframe — where founders burn their fundraise, and where winners spend the same weeks instead.

| Working on fundraising (what loses) | Working on fundability (what wins) |
| --- | --- |
| Chasing warm intros | Testing and validating the product with real users |
| Polishing the pitch deck endlessly | De-risking the business for investors across every axis |
| Optimizing the DD room and financial forecast | Building proof points for each specific risk |
| Blasting emails to hundreds of funds | Studying ICP funds deeply, with personalized outreach |
| Asking "who will intro us?" | Asking "have we built something investors chase?" |
| Spending months on a fancy deck | Spending months on customers and retention |
If you build a business with great, or even just interesting, traction, or one that fully de-risks the bet for investors, you'll probably get investors pretty easily. If you're focused on chasing investors instead, the fundraise is probably going to be tough.
Waveup sidenote — what ~60 winning 2025 AI decks had in common
I went through roughly 60 successful 2025 AI decks at Waveup and looked at slide order. The pattern is almost universal: traction is slide 2 — not problem, not solution. Eloquent AI's $7.4M seed and Lyra's $6M seed both lead with traction on slide 2, and so does the $4M seed deck I broke down from this dataset. The old "problem → solution → market → team" flow is dead for AI companies that actually raise.

The 70-minute walkthrough of the full 2026 playbook — fundability, the OpenAI Kill Test, the 5 AI moats, the gross-margin ladder, supernovas vs shooting stars, words that kill your pitch, and the sprint math — is in our February 2026 webinar. Worth watching before your next investor meeting.

2026 AI round sizes and valuations — what's actually closing

Typical 2026 raises for AI startups: pre-seed $500K–$700K at ~$3.6M pre-money; seed $3M–$4M at $16M–$18M pre-money; Series A $12M–$20M, with some AI Series A rounds now closing at $10M–$30M in revenue. US rounds skew higher than Europe, and the AI premium is clearly visible at every stage.
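These benchmark ranges translate into dilution with simple arithmetic: post-money equals pre-money plus the raise, and dilution is the raise divided by post-money. A minimal sketch with illustrative mid-range numbers (the specific figures below are examples, not targets):

```python
# Illustrative dilution math for the 2026 benchmark ranges.
# post-money = pre-money + raise; dilution = raise / post-money.

def dilution(raise_amt: float, pre_money: float) -> float:
    """Fraction of the company sold in the round (amounts in $M)."""
    post_money = pre_money + raise_amt
    return raise_amt / post_money

# A $600K pre-seed at $3.6M pre-money:
preseed = dilution(0.6, 3.6)   # 0.6 / 4.2, about 14.3%

# A $3.5M seed at $17M pre-money:
seed = dilution(3.5, 17.0)     # 3.5 / 20.5, about 17.1%

print(f"Pre-seed dilution: {preseed:.1%}")  # Pre-seed dilution: 14.3%
print(f"Seed dilution:     {seed:.1%}")     # Seed dilution:     17.1%
```

Both land in the 10–20% band investors generally expect per round; if your numbers imply selling far more than that, the raise or the valuation is off.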

If you're raising in 2026, the numbers you're benchmarking against have moved. AI deals are bigger than non-AI deals at the same stage — but the bar you need to clear is also higher. Here's what actually closes, drawn from our round data and cross-checked against Carta's State of Startups 2025 and Bessemer's State of AI 2025.

2026 AI round benchmarks — US-weighted; Europe typically 10–25% lower on both raise and pre-money. Sources: Waveup round data, Carta State of Startups 2025, Bessemer State of AI 2025.

| Stage | Typical raise | Pre-money valuation | AI premium vs non-AI |
| --- | --- | --- | --- |
| Pre-seed | $500K – $700K | ~$3.6M | Clearly visible |
| Seed | $3M – $4M | $16M – $18M | Clearly visible |
| Series A | $12M – $20M | Significantly higher than non-AI | Large |
| Series A (top AI) | Often raised at $10M – $30M revenue | Premium structure | Very large |
B2C vs B2B benchmark
Series A norm for B2C AI is ~$5M ARR vs ~$3M ARR for B2B. Pre-seed and seed bars are also higher for B2C — you need clear validation the product works before a check arrives. Moat logic is the same either way. If you're fundraising in that range, check business valuation services before you lock a pre-money number in the deck.

Traction benchmarks by stage in 2026

$2M–$4M ARR is the 2026 working band for AI Series A, with $3M as the baseline. Top performers are showing $30M ARR at Series A. For B2C, the bar is higher — $5M ARR is the norm. Below those numbers, you need an exceptional team signal (second-time founder with a prior exit, deep domain expert) or you won't clear the bar.

The quickest way to kill your round is to pitch a stage you haven't earned. Here's the ARR bar at each round in 2026, grounded in the decks that actually closed.

AI traction benchmarks by stage, 2026 — drawn from Waveup's round data on ~60 successful 2025 AI decks.

| Stage | Minimum (if team is exceptional) | Baseline | Top performers |
| --- | --- | --- | --- |
| Pre-seed | Pre-revenue acceptable | $100K – $400K ARR | n/a |
| Seed | $0 ARR (second-time founders or deep domain experts only) | $1M ARR | n/a |
| Series A | $2M ARR | $3M ARR ($5M for B2C) | $30M ARR |
| Series B | $10M+ ARR | $10M+ ARR | n/a |

Reference points from winning 2025 AI decks: Eloquent AI's $7.4M seed and Lyra's $6M seed both lead with traction on slide 2. A $4M seed in the same dataset does the same. The pattern is consistent across the sample — get traction in front of the investor immediately, not after five pages of problem-solution setup.

Pilots aren't production. Logos aren't revenue.
"Pilot purgatory" is where enterprises test your product but never buy. If your traction slide is a wall of enterprise logos with no revenue next to them, investors read it as zombie metrics — user counts without revenue, logos without retention. You need actual contracted ARR, real usage, and strong retention to clear a 2026 seed or Series A bar. Download curves and waitlist numbers don't qualify. See also: run rate.

The OpenAI Kill Test: the single most important moat question

The OpenAI Kill Test is a one-question moat framework: "If OpenAI ships this feature next quarter, do you die?" If you can't answer with a confident "no — and here's exactly why," you don't have a moat. Every VC runs some version of this test on AI startups in 2026. Failing it is the single fastest way to lose a deal.

This is the first question I'd ask myself about any AI startup before spending a week building the deck. It's also the question most AI VCs run silently in the first 30 seconds of a meeting. I've seen a lot of businesses killed by the wrong answer — and the founders never knew they'd been killed, they just never got a second meeting.

If OpenAI ships this feature next quarter, do you die?
The OpenAI Kill Test

If "yes" is the honest answer, rethink what you're building before taking any more investor meetings. If the answer is "no," the VC will pressure-test it. You need to be ready for these follow-ups.

  • If a competitor clones your UI and prompt tomorrow, what breaks? — The prompt-and-wrapper check. If the answer is "nothing," your moat lives in copied-in-a-week territory.
  • What gets better only because you have customers? — The data flywheel test. Look for a real, measurable improvement loop that your competitors can't reproduce without your customer base.
  • What makes margins improve at scale? — The economics test. If inference cost rises linearly with revenue forever, you have a cost problem, not a business.
  • What happens if OpenAI or Anthropic shuts off your access? — The platform-dependency test. Best answer: specific per-use-case model routing (GPT for this, Claude for that, open-source for the other). Worst answer: "We're built on [single model]."
Passing vs failing the Kill Test
Failing answer: "We fine-tune GPT-4 for HR onboarding." OpenAI ships a better HR agent — you die. Passing answer: "We sit inside HR workflow inside Workday, ingest three years of proprietary employee feedback data, and our model routes across GPT, Claude, and a fine-tuned open-source model depending on use case." OpenAI shipping anything doesn't change your relationship with Workday, your dataset, or your cost curve. That's a moat.
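The per-use-case routing in the passing answer can be sketched in a few lines. Everything here (model names, use cases, the fallback choice) is an illustrative placeholder, not a real production config:

```python
# Minimal sketch of per-use-case model routing with a tested fallback,
# the platform-dependency half of the Kill Test. All names are made up.

ROUTES = {
    "contract_drafting": "gpt-5",           # quality-sensitive: frontier model
    "ticket_classification": "llama-3-8b",  # high-volume: cheap self-hosted model
    "long_context_summary": "claude-4",     # context-heavy workloads
}
FALLBACK = "llama-3-8b"  # self-hosted, so no provider can shut it off

def pick_model(use_case: str, unavailable: frozenset = frozenset()) -> str:
    """Route a request to its preferred model; fall back if the
    provider is unavailable (API shutoff, outage, pricing change)."""
    model = ROUTES.get(use_case, FALLBACK)
    return model if model not in unavailable else FALLBACK

print(pick_model("contract_drafting"))                         # gpt-5
print(pick_model("contract_drafting", frozenset({"gpt-5"})))   # llama-3-8b
```

The point investors look for isn't the ten lines of code; it's that the fallback path has actually been exercised in production before a provider forces the question.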

The 5 AI moats VCs actually fund in 2026

Five moats are what VCs actually write checks for in 2026: deep enterprise workflow embeddedness, proprietary data (especially for vertical AI), evals and hard-won customer logic, context/reasoning/memory layers, and cornered resources (exclusive partnerships, contracts, data access). Model capability is not a moat. If your pitch leans on "we built a better model," VCs assume someone will ship a better one next quarter.

Investors are looking for defensibility in 2026, but not where founders think. Two years ago, being an "AI expert" could land a term sheet — that doesn't work anymore, because non-specialists can now build real AI products too. What VCs actually pay for now are the five moats below. (For a deeper treatment: how to build your competitive moat.)

  • Deep enterprise workflow embeddedness — you're integrated so deeply that switching is painful. The product is the workflow, not a tool sitting next to it. This is the moat Cursor, Intercom Fin, and vertical-AI leaders are compounding.
  • Proprietary data — especially for vertical AI in healthcare, mortgages, legal, or insurance. A unique dataset that makes your output measurably better than general models. I've worked with a company that raised $8M at seed to acquire a services firm purely for its data.
  • Evals as moat — hard-won, obsessive insights about customer business logic, acquired through design-partner work. This is the moat that doesn't show on an org chart but shows in customer conversations: you understand their workflow better than any competitor could without years of sitting with customers.
  • Context, reasoning, and memory layers — solutions that retain context across sessions and improve with use become deeply sticky. Users don't want to re-teach a new tool what the last one already knew.
  • Cornered resources — exclusive partnerships, unique contracts, trained models rivals can't use, special data access. PE firms as resellers, VC funds as resellers, proprietary distribution deals that competitors literally can't buy.
It's not about the model. A lot of the VCs are looking for companies that are deeply embedded in enterprise workflows.
Platform dependency kills deals faster than bad margins
"What happens if OpenAI or Anthropic shuts off your access?" is a real VC question in 2026, and "we'd be fine because we could switch" is not enough. The best AI startups can route across multiple LLMs per use case with the swap already tested in production. If your business is one API shutoff away from dying, investors won't fund it — regardless of revenue, team, or the rest of the deck.

The AI gross margin reality: why 95% margins kill your deal

AI gross margins are not SaaS gross margins. Nothing will sink an otherwise-good deck faster than a financial model that pretends they are. I've seen strong-traction founders blow the round because the model was wrong.

The 2026 AI gross-margin reality ladder — from traditional SaaS best-in-class at the top to Cursor-style supernovas at the bottom. Source: Waveup analysis cross-referenced with Bessemer State of AI 2025.

| Company type | Gross margin |
| --- | --- |
| Traditional SaaS best-in-class | 90%+ |
| Traditional SaaS baseline | 75 – 85% |
| OpenAI | ~70% |
| Anthropic | ~60% |
| AI startup average | 50 – 60% |
| Supernovas (Cursor-type, heavy LLM dependence) | -30% to +25% |
Waveup sidenote — the 95% margin trap
95% of founders I speak to have zero understanding of their gross margins and how they scale. The most common fatal mistake in an AI financial model: putting down 95% gross margins because that's what you've seen in SaaS benchmarks. Investors read that as one thing — "this is not actually an AI startup." At Series A and B, 55% margins trigger difficult conversations on their own; show the path from 55% to something better at scale, or the round stalls. Across 800 rounds, this one modeling error has killed more AI deals than any pitch-deck mistake I can name.
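The reason the ladder bottoms out where it does is simple arithmetic: if inference cost scales linearly with usage (and usage with revenue), gross margin is flat at every revenue level. A toy model with made-up numbers:

```python
# Toy gross-margin arithmetic for an AI product whose COGS is dominated
# by LLM inference. All numbers are illustrative, not benchmarks.

def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue."""
    return (revenue - cogs) / revenue

# Inference cost that scales linearly with revenue never improves margin:
for revenue in (1e6, 10e6, 100e6):
    cogs = 0.45 * revenue  # $0.45 of inference per $1 of revenue
    print(f"${revenue/1e6:>4.0f}M ARR -> {gross_margin(revenue, cogs):.0%} margin")
# Prints 55% on every line: scale alone doesn't fix the margin.
```

This is why "show the path from 55% to something better" means breaking that linearity: caching, routing cheap requests to cheap models, negotiated volume pricing, or falling unit costs per token.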

Supernovas vs shooting stars: which category does your growth fit?

A Bessemer framework for AI SaaS. Supernovas hit $100M ARR in 2 years, run 25% (or negative) gross margins, and need $1M+ ARR per FTE — Cursor, Lovable. Shooting stars hit $100M ARR in 4 years on a $3M → $12M → $40M trajectory with ~60% margins. If your growth fits neither, VC funding is unlikely — seed-strapping VCs, family offices, and European or MENA growth funds are the realistic alternatives.

Unicorns are dead as a concept. No VC I speak to cares about $1B outcomes anymore — they care about $10B, and some about $100B. That shift forces a cleaner test: can your growth curve actually get to a $10B outcome? Bessemer's answer is the supernova vs shooting star split — the clearest way to gut-check whether a VC can fund you at all.

The two AI SaaS growth categories VCs will fund in 2026. If neither applies, VC capital is unlikely.

| Category | Supernovas | Shooting stars |
| --- | --- | --- |
| Time to $100M ARR | 2 years | 4 years |
| Trajectory | Vertical growth curve | $3M → $12M → $40M → double/triple |
| Gross margin | 25% or negative (LLM-heavy OK) | ~60% |
| ARR per FTE | $1M+ | Smaller |
| Examples | Cursor, Lovable | Upgraded SaaS growth profile |

Which capital actually fits your growth curve?

VC capital works when:

  • You can credibly project $100M ARR in 2 years (supernova) — Cursor/Lovable shape
  • Or $100M ARR in 4 years on $3M → $12M → $40M (shooting star) with ~60% margins
  • You have $1M+ ARR per FTE (or a clear path to it)
  • You can articulate a $10B+ outcome — unicorns no longer move power-law funds
  • Your moat passes the OpenAI Kill Test
  • You operate in a non-crowded category (no AI SDRs, no restaurant tech, no gym tech)

Skip VC — look at alternative capital when:

  • You're profitable but growing slower than 3x/year → seed-strapping VCs
  • You need capital but want to keep control → growth funds or family offices
  • You're in Europe or MENA with a healthy growth profile that doesn't fit the 4x-4x curve
  • Your category is inherently high-churn (restaurants, gyms)
  • You have revenue and retention but no outcome big enough for a power-law fund
  • You need capital in under 30 days and can't run a full VC sprint
The new SaaS growth law: 4x, 4x, 3x, 3x, 3x
The old SaaS rule was 3x, 3x, 2x, 2x, 2x — triple twice, double three times. The 2026 version for AI: 4x, 4x, 3x, 3x, 3x. Once you hit your first $1M ARR, the clock starts. Some VCs are forgiving about how long it takes to get to $1M. No VC is forgiving about slow growth after $1M. If you cross $1M and grow slower than 4x the first year after, the next round gets very difficult.
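The growth law is easiest to sanity-check as plain arithmetic, starting the clock at the first $1M ARR:

```python
# The 4x, 4x, 3x, 3x, 3x rule as arithmetic, clock starting at $1M ARR.

MULTIPLES = [4, 4, 3, 3, 3]

def trajectory(start_arr: float = 1.0) -> list:
    """ARR (in $M) at the end of each year after crossing $1M."""
    arr, path = start_arr, []
    for m in MULTIPLES:
        arr *= m
        path.append(arr)
    return path

print(trajectory())  # [4.0, 16.0, 48.0, 144.0, 432.0]
```

Five years after the first $1M, the rule implies roughly $432M ARR: the kind of curve that makes a $10B outcome arguable to a power-law fund. Run the old 3x, 3x, 2x, 2x, 2x rule through the same loop and you get $72M, which is why the bar moved.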

Pain first, math second, tech third: how to structure an AI pitch deck in 2026

Pain first, math second, tech third. Slide 1 states the pain in customer-specific language — no "AI-powered platform" abstractions. Slide 2 is traction — real numbers, named customers, the uptick curve. Tech comes last because in 2026 it's the least important slide: AI is fast to build and easy to copy, so the moat lives in execution, not model capability.

An investor spends about 2.5 minutes on a successful deck and around 30 seconds on an unsuccessful one. That's the math you're optimizing against. If slide 1 doesn't state what you do for whom, and slide 2 doesn't show real traction, you've used half your attention budget on setup and the VC already tuned out. Slide order matters more than most "pitch deck template" content acknowledges. See also: pitch deck consulting services and pitch deck mistakes and how to avoid them.

  1. Pain (slide 1). Name the customer and the outcome in one concrete sentence. "Law firms use [us] to draft contracts 10x faster without junior associate review bottlenecks." Not "AI-powered legal systems."
  2. Math (slide 2). Traction. Revenue, growth curve, named logos, retention. This is what sells — this is the slide order I saw on ~60 winning 2025 AI decks, and it flipped the traditional problem → solution flow.
  3. Tech (later, not earlier). The AI/model/infrastructure slide comes last, because it's the least important. Tech is easy to copy: two students cloned all of Lovable in two days. Why didn't they become Lovable? Because it's not about the tech — it's execution, distribution, pricing, strategy.
Tech in this equation, surprisingly maybe or not, is the least important thing. It's so easy and cheap and fast to build right now, and it's so easy to copy.

The other half of this is language. Specific words in 2026 flag your pitch as generic AI-wash, and VCs will disengage. Below are six phrases to cut from your pitch and the outcome-specific rewrites that close meetings.

Words that kill your pitch, and the outcome-first rewrites that don't. Drawn from Olena's analysis of ~60 successful 2025 AI decks vs matched unsuccessful comparison set.

| Don't say | Say instead |
| --- | --- |
| AI-powered legal systems | Law firms use Harvey to draft contracts 10x faster without junior associate review bottlenecks. |
| Revolutionary medical documentation AI | Rehab clinics use Ficus to generate patient reports in real time without doctors typing a single word. |
| NextGen code completion tool | Engineering teams use Cursor to ship features 2x faster without context switching between docs and IDE. |
| AI-powered platform | We reduce customer response times by 50% using automated classification. |
| Revolutionary technology | We process claims 8x faster. |
| Disruptive AI solution | Enterprise customers save $2M per year. |
Words that kill your pitch
These phrases will make investors disengage in 2026 if they're the core of your pitch: AI platform, transformative, AI-powered, game changer, revolutionary, groundbreaking. You can still use them on a tech or features slide — just don't lead with them. The core of your pitch is the customer outcome, proven with numbers. AI is assumed; your job is to explain what you're doing for whom, 10x better than anyone else.

GTM moves that create non-zombie traction

Zombie traction is the set of user-facing metrics that look impressive but don't convert to revenue or retention — logos without revenue, downloads without usage, user counts without stickiness. Companies stuck with zombie metrics become zombie companies: they can't sell, they can't raise, they can't pivot. Fix retention and revenue conversion before you pitch.

Every successful AI company I've seen in 2025 had the same first GTM motion: founder-led sales. No exceptions. The product sells best from the founder's mouth, the founder has to build the playbook before handing it to anyone else, and investors won't fund a "we'll hire a sales leader with this round" promise in 2026. Once founder-led sales is working, here are the eleven motions that actually produce non-zombie traction — each with a real company attached.

  1. Founder-led sales first, always. Build the playbook yourself before hiring anyone. No successful 2025 AI company skipped this step.
  2. Free version or freemium. Outside very large B2B enterprise, you need a free tier. If you ask users to pay instantly, investors assume the product doesn't deliver enough value.
  3. Correct virality with branded sharing loops. Tally reached 500K users and $2M ARR fully bootstrapped using sharing loops that put their logo in every shared asset.
  4. Build in public on LinkedIn. Base44 — solo founder, $80M acquisition by Wix, every channel tried, only LinkedIn build-in-public worked. RB2B hit $5M ARR in 13 months from the same motion.
  5. Reposition as an "AI initiative" for B2B enterprise. One supply-chain company stopped being "AI-powered tools" and became an "AI OS for supply chain managers." Pilots unlocked immediately.
  6. Outcome-based pricing. Intercom Fin 4x'd revenue year-on-year after switching from SaaS fees to outcome pricing. Signals product confidence and collapses enterprise approval timelines.
  7. Waitlist with referral skip-the-line. Long waitlist, friends-jump-the-line. Works very well for high-resonance consumer AI.
  8. Don't build a community, find one. Midjourney rode Discord. Other AI companies tap specialized healthcare or investment-banking communities for near-zero-CAC first customers.
  9. Creative partner channels. PE firms as resellers for accounting/insurance automation. VC funds as resellers for board-reporting tools. One pitch gets you 10–100 downstream deployments.
  10. Gamification. Hiring-as-a-game and dating-as-a-game formats create stickiness users and investors both love.
  11. Bottom-up to enterprise. Ship a single-use-case version free to individual employees inside large B2B firms. Adoption anchors into the org; you close the enterprise deal from inside.
Waveup sidenote — the non-negotiable first GTM motion
In our experience across 800 rounds and $3B+ raised, I haven't seen a single successfully-funded AI company in 2025 that skipped founder-led sales in the first motion. None. If the founder isn't selling, the product isn't tight enough to sell, the playbook doesn't exist yet, and there's nothing to hand to a sales hire. VCs know this — and they won't fund a "we'll hire sales with this round" promise from a first-time founder without a working playbook. Sell it yourself first.

The fundraising sprint: how many meetings, how fast, how to run it

Closing one AI seed round typically takes 35–70+ first meetings. To get two term sheets (the minimum for leverage) you usually need 30–45 first meetings with 75–150 warm-ish investors in your CRM. Successful founders average ~40 meetings; unsuccessful founders average ~15. Plan a 10–16 week sprint. If you have no investor excitement after one month, prepare for a 6–9 month haul.
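The volume math above can be sanity-checked with back-of-envelope conversion assumptions. The rates here are illustrative, not Waveup data: roughly one term sheet per 20 first meetings, and roughly one first meeting booked per 3 investors in the pipeline.

```python
# Back-of-envelope sprint funnel, worked forwards from two term sheets
# (the minimum for leverage). Conversion rates are assumptions.

def sprint_plan(target_term_sheets: int,
                meetings_per_term_sheet: int = 20,
                investors_per_meeting: int = 3):
    """Return (first meetings needed, investors needed in the CRM)."""
    meetings = target_term_sheets * meetings_per_term_sheet
    crm = meetings * investors_per_meeting
    return meetings, crm

meetings, crm = sprint_plan(2)
print(meetings, crm)  # 40 120
```

40 first meetings from a 120-investor CRM sits inside the 30–45 meeting / 75–150 investor bands above, and it also explains the failure mode: founders who stop at ~15 meetings never generated enough shots for the conversion math to work.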

Most founders dramatically underestimate the volume math. VCs are a low-conversion, high-prep channel where 90% of investors are actually outbound — they reach out to founders, not the reverse — and the meetings themselves only compound if you engineer them right. Here's the sprint timeline that actually closes rounds.

The 2026 AI fundraising sprint — 10–16 weeks of execution, with 6–12 months of relationship-building in front of it.

| Phase | Timing | What happens |
| --- | --- | --- |
| Relationship building | 6–12 months before sprint | "Lines not dots." Open conversations, ask what VCs want to see at your stage, keep in touch, show up again with proof you hit their bar. |
| Warm-up | 6–8 weeks before sprint | Heads-up notes to top-priority investors; line up mutual-contact intros; prep deck and DD room. |
| Sprint window | 3–5 weeks, intensive | Batch first meetings simultaneously to engineer momentum FOMO. 30–45 first meetings packed into this window. |
| Close or cut | Week 10 | Close or stop and go back to building. No dragging past 10 weeks — it signals weakness and kills whatever FOMO you built. |

Inside the sprint, don't chase. Every re-engagement with an investor needs to bring a new validation point — I call this traction FOMO engineering. The investor who was lukewarm last week reconsiders this week because the story compounds. The rule is simple: no "any update?" messages, ever.

  • New named customer signed
  • Press coverage (Forbes, TechCrunch, etc.)
  • A metric jump — 2x growth, crossed $X ARR, hit a retention milestone
  • Key hire announced
  • Pilot converted to production
Olena's 4-sentence cold email (verbatim)
The best cold emails I've seen in 2025 look exactly like this: "We help banks cut their costs 2x. JPMorgan, Goldman Sachs on board. Just booked $500K in revenue. Raising seed." That's it. No "I hope this finds you well." No attached deck on the first touch. No paragraph about yourself. Outcome + logos + quantified traction + the ask. Principles: lead with outcome and named logos, quantify the traction, stay one paragraph, and research the fund first (reference a past investment or a partner quote so the email can't be confused for mass outreach).
We invest in lines, not dots.
Investor wisdom (cited throughout the talk)

A single pitch meeting is a dot. A VC can't evaluate a dot. They need lines — momentum over time, metric changes across multiple touchpoints, watching a founder execute for 6–12 months. Which is why relationship-building starts before the sprint. If you're a year out from needing money, you're not too early to open the conversation.

FAQ

Do cold emails work for AI startup fundraising, or do you need warm intros?
Warm intros are always more effective, but plenty of companies are closing AI rounds in 2026 without them. What wins replies is traction + a 2-sentence outcome-plus-logos pitch, plus real research on the fund. Long emails die; pleasantries die. One paragraph, outcome + logos + quantified traction + the ask.
How much should an AI startup raise at pre-seed in 2026?
$500K–$700K is the typical 2026 AI pre-seed raise at roughly $3.6M pre-money. US runs higher than Europe. Don't over-raise — the goal is to hit seed-stage traction on 12–18 months of runway, not to max out valuation.
What ARR do I need to raise a Series A as an AI company?
$2M–$4M ARR is the 2026 working band for AI Series A, with $3M as baseline. Top performers show $30M ARR at Series A. B2C bar is $5M. Below those numbers, you need an exceptional team signal (second-time founder with prior exit, deep domain expert) — otherwise the round won't clear.
What gross margin do VCs expect from an AI startup?
50–60% is the 2026 AI average. 95% margins in an AI model flag you as "not actually an AI startup" and kill the deal. At Series A/B, 55% margins trigger difficult conversations unless you show a path to improvement at scale. OpenAI sits around 70%, Anthropic around 60%, supernovas like Cursor can run negative.
How is pitching B2C AI different from B2B AI?
Same moat logic — data, distribution, UX love. Traction bar is higher: Series A norm is $5M ARR for B2C vs $3M for B2B. At pre-seed, virality and distribution proof matter more than revenue. You still need fundability, moats, and retention — the math is just tougher on the top line.
Will VCs invest in two competing AI startups at once?
Generally no. Most VCs flag portfolio conflicts and pass. Very large multi-stage firms (a16z-class) sometimes invest in related companies, but even then they look for complementary, not competing. Research portfolios before pitching. Know who's invested where before you share sensitive metrics.
How do I prove traction if I'm pre-revenue and haven't launched?
Founder credentials + design-partner logos + a specific de-risk for each key risk (tech, market, team, distribution). Second-time founders with prior exits can sometimes raise pre-revenue at seed. First-timers almost always need a signed design partner or a paid pilot. Pre-revenue with nothing is pre-seed at best, and the team has to be exceptional.
What AI trend is coming after agents and infrastructure?
Honest answer: I don't know. AI isn't losing momentum — agents, AI infrastructure, and AI-OS are still the 2026 wedges. But the broader point is more important: AI can't be the value prop. Lead with the outcome and the pain you solve; AI is the baseline, not the pitch. The next trend matters less than whether your customer gets 10x better than the market today.
Not sure where your AI round actually stands? Our fundraising team has closed 800+ rounds and $3B+ in capital — get a free fundraising readiness assessment before your next meeting.
Get your readiness assessment


Olena Petrosyuk

Partner, Waveup

Olena Petrosyuk is a Partner at Waveup. She has spent the last decade in the VC space, advising on 800+ funding rounds and helping founders raise more than $3B — most of it into AI companies. She was previously COO of an AI startup taken from pre-seed to Series B exit.