32 Real B2B Prospecting Prompts (Copy-Paste for 2026)
Thirty-two copy-paste prompts for ICP discovery, account targeting, signals, enrichment, competitive research, and CRM hygiene - grouped by job, with the three properties every good prompt shares.
In March 2026, Jason Lemkin replaced the SaaStr sales team with 20 AI agents and went on Lenny Rachitsky's podcast to argue the SDR role is about to stop existing. A month earlier, Clay crossed a $3.1B valuation selling spreadsheets that talk to LLMs. The through-line between the two is the shape of the work, not the seat count: the unit of outbound sales work is no longer a sequence step. It is a prompt.
Cadence-based SDRs sending emails and qualifying inbound leads will be 90% displaced within 12 months. The role simply won't exist.
- Jason Lemkin, founder and CEO, SaaStr
I think Lemkin is half right. The person typing a 7-step sequence into 500 contacts by hand is going away; the person deciding which 500 contacts and why is not. That person writes prompts now. Which is why I've spent the last few weeks collecting the ones that actually work - the specific English sentences that a half-decent research agent can execute end-to-end, and that a bad one face-plants on. This post is 32 of them, grouped by job. Copy-paste, adapt the bracketed placeholders, run them in whatever agent you've got (Leadex, Claygent, a notebook full of GPT calls). If a prompt takes more than two sentences to describe, it's not a prompt, it's a product spec - rewrite it.
Two things to notice before the list. First, the good prompts all name their source. "From Crunchbase's last 30 days of funding announcements" beats "recent fundings" every time, because an agent without a source gets to hallucinate one. Second, the good prompts all have a stopping condition. "The 50 most recent" or "posted in the last 14 days" or "with a headcount between 200 and 800." An unbounded prompt returns an unbounded list, and an unbounded list is an unvalidated list.
I am the founder of Leadex, a chat-native B2B research agent. The commentary under each prompt describes what a well-behaved agent should do with it; where Leadex does that specific thing, I say so plainly. Parts 1 and 2 of the twelve-tool field guide are here and here for context on where the category is.
ICP shape-finding
These are the prompts you run before you have a list. They turn fuzzy intuitions ("we win in mid-market fintech") into an account filter specific enough to execute.
Look at our last 50 closed-won deals in HubSpot. Find the three strongest common attributes - industry, headcount, tech stack, funding stage. Give me each attribute with a confidence score and the count of deals that share it.
An agent that does this well reads the CRM directly, does not rely on company names alone, and scores each attribute by how much it discriminates closed-won from closed-lost. An agent that does this badly returns "SaaS, mid-market, growing" - three phrases that describe half the internet.
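For the curious, the discrimination test is simple enough to sketch. A toy version in Python - not Leadex's implementation, just the shape of the test - assuming the deals are already exported from the CRM as flat dicts:

```python
from collections import Counter

def attribute_lift(won: list[dict], lost: list[dict], attr: str) -> list[tuple]:
    """Score each value of `attr` by how much more often it shows up
    in closed-won deals than in closed-lost ones."""
    won_counts = Counter(d.get(attr) for d in won)
    lost_counts = Counter(d.get(attr) for d in lost)
    scores = []
    for value, n_won in won_counts.items():
        if value is None:
            continue
        p_won = n_won / len(won)
        p_lost = lost_counts.get(value, 0) / max(len(lost), 1)
        scores.append((value, round(p_won / (p_lost + 0.01), 1), n_won))  # smoothed lift
    return sorted(scores, key=lambda s: -s[1])

won = [{"industry": "fintech"}, {"industry": "fintech"}, {"industry": "logistics"}]
lost = [{"industry": "logistics"}, {"industry": "retail"}]
print(attribute_lift(won, lost, "industry"))
# "fintech" ranks first: common in wins, absent in losses. "logistics"
# scores near zero because it appears on both sides - which is the point.
```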
Using our top 20 best-fit customers as the seed, return 100 companies in North America and Western Europe that share at least three of: industry, headcount range, funding stage, and presence of a named tech stack element. Dedupe against our existing CRM.
The dedupe step is the one most agents skip. Leadex pushes the seed list and the output through the connected CRM and suppresses anything with an existing owner.
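Suppression itself is not exotic - the hard part is remembering to do it. A minimal sketch, assuming both lists carry a website field; real CRM matching is fuzzier than domain equality:

```python
from urllib.parse import urlparse

def root_domain(url: str) -> str:
    """Normalize 'https://www.acme.com/about' and 'acme.com' to 'acme.com'."""
    host = urlparse(url if "//" in url else f"https://{url}").netloc.lower()
    return host.removeprefix("www.")

def suppress_owned(candidates: list[dict], crm_accounts: list[dict]) -> list[dict]:
    owned = {root_domain(a["website"]) for a in crm_accounts if a.get("website")}
    return [c for c in candidates if root_domain(c["website"]) not in owned]

fresh = suppress_owned(
    [{"name": "Acme", "website": "acme.com"}, {"name": "Globex", "website": "globex.io"}],
    [{"name": "Acme Inc", "website": "https://www.acme.com"}],
)
print(fresh)  # only Globex survives - Acme already has an owner
```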
Reverse-engineer our ICP from our five most visited pricing-page URLs in the last 30 days. Cluster the visiting companies by industry and headcount and name the two tightest clusters.
Requires website-visitor identification (RB2B, Clearbit Reveal, Koala) connected to the agent. No visitor data, no answer - a good agent says so rather than inventing clusters.
Find every Series A and Series B company funded in the last 90 days in the US whose headcount is between 50 and 250, and whose pitch deck or website mentions "RevOps" or "go-to-market engineering." Return the company, funding date, amount, and the passage that matched.
Notice the "return the passage that matched" clause. It turns the output into something you can QA in 30 seconds instead of 30 minutes.
Given our product is [one sentence], draft three distinct ICP hypotheses we have not tested yet. For each, name the segment, the wedge use case, and one recent signal we could detect at scale.
The most under-used prompt in the list. Use it quarterly.
Account discovery and lookalikes
Return the 100 fastest-growing Shopify stores in the US (by Similarweb traffic-rank delta over 180 days) selling in the home-goods category, with a GMV estimate over $5M. Include the store URL, founder name, and LinkedIn.
Specifying the sort axis (Similarweb rank delta, not "growing") is the difference between a reproducible list and a demo. Leadex accepts BuiltWith, Similarweb, or SimilarTech as the source; the agent picks one and says so.
Find every law firm in the UK with 20-200 lawyers whose website mentions "AI" or "generative" in the last 6 months but does not yet list an AI product or service. Give me the firm, the page, and the quote.
Return every company funded by Index Ventures, Accel, or Point Nine in 2025 or 2026 that is hiring a Head of Growth or VP of Marketing right now. Include the job posting URL and posting date.
Two data sources: portfolio pages (or Crunchbase) for the funding list; LinkedIn Jobs or the company's careers page for the posting. A good agent runs them as two steps and intersects; a bad one guesses.
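In code, the intersect step is a join on domain. The input shapes below are hypothetical, not any real API - the point is that every surviving row carries evidence from both sources:

```python
def intersect_on_domain(funded: list[dict], hiring: list[dict]) -> list[dict]:
    """Join two independently-sourced lists so each row keeps both
    the funding record and the live job posting."""
    jobs = {h["domain"]: h for h in hiring}
    return [
        {**f, "job_url": jobs[f["domain"]]["job_url"], "posted": jobs[f["domain"]]["posted"]}
        for f in funded if f["domain"] in jobs
    ]

funded = [{"company": "Globex", "domain": "globex.io", "round": "Series A"}]
hiring = [{"domain": "globex.io", "posted": "2026-02-11",
           "job_url": "https://globex.io/careers/head-of-growth"}]
print(intersect_on_domain(funded, hiring))
```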
Look at our closed-lost accounts from the last 12 months. For each, find the company that beat us (from press releases, LinkedIn posts, or customer logos on their site) and build a competitor-win frequency table.
Find 50 B2B SaaS companies currently running Google ads for the keyword "[competitor product]". Return the company, their landing page, the ad copy, and their own closest competing product.
Contact and persona targeting
At each of these 200 companies, find the most senior person whose title matches "Head of RevOps", "VP RevOps", "Director of Revenue Operations", or "Sales Operations Lead". Return name, exact title, LinkedIn URL, and tenure in role.
The tenure field is load-bearing. A VP in their first 90 days behaves nothing like a VP in year three; a good agent returns it so you can route accordingly.
For the 50 companies that raised a Series B in the last 45 days, find the new Head of Marketing if one has been hired since the round. Include the hire date and the source.
Find every person with "AI" or "ML" in their current title at companies with headcount 500-5000 in the financial services industry in Germany, France, or the Netherlands, who has also posted on LinkedIn in the last 30 days.
For each account on this list, find the economic buyer (VP+) and two internal champions (Manager or Director whose current job is closest to our product's core use case). Return all three with their reporting line if visible.
The "reporting line if visible" qualifier is the one people forget. Sometimes the org chart isn't public; a good agent says "not visible" rather than inventing a manager.
Return every Head of Engineering at a US fintech who has posted publicly about hiring in the last 60 days. Include the post URL, the date, and the substantive quote.
For this list of 500 accounts, find the person most likely to have signed last year's contract renewal. Rank by a composite of title seniority, tenure, and "procurement" or "vendor management" in the current or prior role.
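One plausible composite, sketched below - the weights are illustrative, not a standard, and the field names are assumptions about your enrichment output:

```python
SENIORITY = {"chief": 5, "cfo": 5, "vp": 4, "head": 3, "director": 3, "manager": 2}
KEYWORDS = ("procurement", "vendor management")

def renewal_owner_score(contact: dict) -> float:
    title = contact.get("title", "").lower()
    history = contact.get("title_history", "").lower()
    seniority = max((v for k, v in SENIORITY.items() if k in title), default=1)
    tenure = min(contact.get("tenure_months", 0) / 12, 5)   # cap at five years
    keyword = 2 if any(k in title or k in history for k in KEYWORDS) else 0
    return seniority + tenure + keyword

contacts = [
    {"title": "VP Finance", "tenure_months": 40},
    {"title": "Procurement Manager", "tenure_months": 18},
]
print(sorted(contacts, key=renewal_owner_score, reverse=True)[0]["title"])
```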
Signals and triggers
Signals are the reason a prompt-based agent beats a seat-based database. You can ask for "companies whose CFO changed in the last 45 days" and get an answer; you cannot filter for that in most database UIs.
List every Series B+ company in the US that has had a new CFO or VP Finance announced in the last 45 days. Include announcement source, date, and the new hire's prior role.
Find every B2B SaaS company with headcount 100-1000 that has grown headcount by more than 20% in the last 6 months. Return the company, the before/after count, and the team that grew fastest.
Scan the last 30 days of 10-K and 10-Q filings from US public companies with annual revenue over $500M. Return every filing that mentions "AI strategy", "generative AI investment", or "automation initiative" in the risk factors or MD&A. Include the ticker, the filing date, and the quoted passage.
Requires the agent to read SEC filings, not a recap. Most agents will settle for a news article about the filing; a good one fetches the source.
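Fetching the source is less work than it sounds: EDGAR exposes a full-text search backend as JSON. The endpoint and response fields below are what the public search UI uses as of this writing - verify them before depending on them, and note the SEC asks for a descriptive User-Agent with contact info:

```python
import requests

def edgar_fulltext(phrase: str, forms: str = "10-K,10-Q") -> list[dict]:
    resp = requests.get(
        "https://efts.sec.gov/LATEST/search-index",
        params={"q": f'"{phrase}"', "forms": forms},
        headers={"User-Agent": "research-demo you@example.com"},  # SEC requires contact info
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("hits", {}).get("hits", [])

for hit in edgar_fulltext("generative AI investment")[:5]:
    src = hit.get("_source", {})
    print(src.get("display_names"), src.get("file_date"))
```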
Find every account on this list whose G2 or Trustpilot rating has dropped by more than 0.3 stars in the last 90 days. Return the company, the before/after rating, and the single most-upvoted recent negative review.
For each of our top 200 target accounts, look for any mention of a security incident, data breach, SOC 2 audit, or compliance deadline in their news, blog, or LinkedIn posts in the last 90 days.
Enrichment and filling gaps
For this list of 1,000 LinkedIn URLs, return work email, direct dial, current title, current company, and tenure. Flag any row where any field was last verified more than 60 days ago.
Leadex runs this against your connected Apollo, Clearbit, or Lusha keys - whichever returns first and cleanest - and tags the provider per row. Compare the output to one vendor alone; the waterfall usually resolves 15-25% more contacts.
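The waterfall itself is a ten-line loop. The provider lookups below are stubs standing in for real API clients - the point is the control flow: first verified hit wins, the row gets tagged with its source, and a miss stays blank (which also covers the "do not guess" prompt two down):

```python
def waterfall(contact: dict, providers: dict) -> dict:
    """Try providers in order; tag the row with whichever one resolved it."""
    for name, lookup in providers.items():
        result = lookup(contact)
        if result and result.get("email"):
            return {**contact, **result, "provider": name}
    return {**contact, "provider": None}   # a blank beats a guessed pattern

providers = {                              # stubs, not real clients
    "apollo":   lambda c: None,
    "clearbit": lambda c: {"email": "jane@acme.com", "title": "VP RevOps"},
    "lusha":    lambda c: None,
}
print(waterfall({"linkedin": "linkedin.com/in/jane"}, providers))
# -> provider tagged "clearbit"; if all three miss, provider is None
```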
For each of these 500 companies, enrich with: exact headcount (LinkedIn), confirmed tech stack (BuiltWith), funding history (Crunchbase), and the three most recent news mentions (Google News). Return a single row per company with the source URL for every field.
For every contact missing a mobile phone number, try to find one from ZoomInfo, Apollo, and Lusha in sequence. Stop at the first verified hit. Do not guess.
"Do not guess" is the most important instruction in any enrichment prompt. Most agents will happily pattern-match a format; a good one returns blank when the source is silent.
For this list of 200 accounts, find their current ATS (Greenhouse, Lever, Ashby, Workday) by checking their careers page. Return the ATS and the careers-page URL.
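ATS detection is mostly domain fingerprinting - each vendor hosts or embeds jobs under a recognizable domain. A minimal check, assuming you already hold the careers-page URL:

```python
import requests

ATS_DOMAINS = {
    "greenhouse": "greenhouse.io",
    "lever": "lever.co",
    "ashby": "ashbyhq.com",
    "workday": "myworkdayjobs.com",
}

def detect_ats(careers_url: str) -> str | None:
    for ats, domain in ATS_DOMAINS.items():
        if domain in careers_url:
            return ats                     # job board hosted on the ATS itself
    html = requests.get(careers_url, timeout=15).text.lower()
    for ats, domain in ATS_DOMAINS.items():
        if domain in html:
            return ats                     # ATS embedded or linked from the page
    return None  # custom-built page, or an ATS this seed list misses
```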
Competitive and alternative research
Find every G2 review of [competitor] posted in the last 12 months that mentions pricing, cancellation, or auto-renewal. Return the reviewer's role, the star rating, and the specific passage.
Turns a competitor's weakest complaint category into a ready-to-use sales talk track. The passage-level quote is the part you paste into a cold email.
Find every public customer of [competitor] by scraping their website logos, case studies page, and the "trusted by" section of any press release in the last 24 months. Dedupe and return the list with the source for each.
For each of [competitor]'s public customers, find the person most likely to have owned the buying decision - title matches "Head of Marketing", "VP Demand Gen", "Marketing Ops Director". Return the account, the contact, and any visible date of adoption (from a case study quote, press release, or podcast mention).
Find every recent Reddit thread (r/sales, r/SaaS, r/startups) where [competitor] is discussed negatively in the last 90 days. Summarize the top three complaint themes with links to the specific threads and comment permalinks.
CRM hygiene and list QA
Scan our HubSpot contacts for duplicates by email root, LinkedIn URL, and name-plus-company. Return a table of suspected duplicates with a confidence score and the fields that conflict.
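"Email root" is doing real work in that prompt. Here is one reading of it - local part lowercased, dots and plus-tags stripped - alongside the other two match keys. A sketch, not a dedupe engine:

```python
import re
from collections import defaultdict

def email_root(email: str) -> str:
    local, _, domain = email.lower().partition("@")
    return local.split("+")[0].replace(".", "") + "@" + domain

def norm_linkedin(url: str) -> str:
    return re.sub(r"^https?://(www\.)?", "", url.lower()).rstrip("/")

def duplicate_groups(contacts: list[dict]) -> dict:
    groups = defaultdict(list)
    for c in contacts:
        if c.get("email"):
            groups["email:" + email_root(c["email"])].append(c)
        if c.get("linkedin"):
            groups["li:" + norm_linkedin(c["linkedin"])].append(c)
        if c.get("name") and c.get("company"):
            groups[f"nc:{c['name'].lower()}|{c['company'].lower()}"].append(c)
    # a pair matching on two keys shows up in two groups - fine for review
    return {k: v for k, v in groups.items() if len(v) > 1}

pair = [{"email": "j.doe+crm@acme.com", "name": "Jane Doe", "company": "Acme"},
        {"email": "jdoe@acme.com", "name": "Jane Doe", "company": "Acme"}]
print(list(duplicate_groups(pair)))  # matches on email root and name+company
```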
Find every contact in our CRM whose current title or company does not match LinkedIn as of today. Return contact, CRM state, LinkedIn state, and the last-updated date for both.
The single most valuable prompt on this list for any team over 18 months old. CRM rot is real; this is the only way to surface it without paying ZoomInfo for a refresh.
For every company in our CRM marked "Closed-Lost" in the last 18 months, check if the primary contact has since changed jobs. If yes, return the new company and whether we have an existing account there.
Run this list of 10,000 email addresses through a syntax-and-domain-validity check. Return a per-row status: valid, catch-all, role-based, disposable, or invalid.
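A sketch of the classification tiers. Syntax and MX records are checkable without talking to a mail server; true catch-all detection needs an SMTP conversation, which this deliberately skips. dnspython does the MX lookup, and the disposable list is a seed, not a database:

```python
import re
import dns.resolver  # pip install dnspython

ROLE_LOCALS = {"info", "sales", "support", "admin", "hello", "contact", "team"}
DISPOSABLE_DOMAINS = {"mailinator.com", "guerrillamail.com"}  # seed list only

def classify(email: str) -> str:
    email = email.strip().lower()
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[a-z]{2,}", email):
        return "invalid"
    local, domain = email.rsplit("@", 1)
    if domain in DISPOSABLE_DOMAINS:
        return "disposable"
    if local in ROLE_LOCALS:
        return "role-based"
    try:
        dns.resolver.resolve(domain, "MX")
    except Exception:
        return "invalid"          # no mail server answers for this domain
    return "valid"                # catch-all needs SMTP probing to distinguish

print(classify("hello@acme.com"), classify("jane@acme.com"))
```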
The through-line
Every prompt on this list shares three properties: it names a source, it has a stopping condition, and it asks the agent to return evidence (a URL, a quote, a count) alongside the answer. If a prompt doesn't do all three, an agent - mine or anyone else's - has no way to know whether it's done. That is the grammar of the job.
The prompt library is not a replacement for judgement. None of these instructions say "then write the email" or "then send the sequence." They produce the list; a person decides whether to work it. Leadex specifically stops at the list because the output is the part that matters and the sending is the easy part - the comparison pages get into the why. If you want to see the full grammar running on your own list, app.leadex.cc is the place; pick a prompt, paste it in, approve the plan, watch the log.