Prompts for tech recruiters and hiring managers: Boolean strings that actually work, JD audits that surface what your top hires care about, and candidate screen-question generators that aren't 'tell me about a time you faced a challenge'.
Honest note – These prompts speed up your work. They don't replace candidate respect – never reject anyone based purely on an AI screen. AI is good at pattern-matching CVs against JDs; it's bad at spotting the candidate who switched fields and would be your best hire.
When `"DevOps" AND "AWS"` returns 50,000 results and you need to narrow to the 50 candidates worth a message.
Model tested: Claude 4.7 Opus (2026-05), GPT-5 (2026-05)
Build me a LinkedIn Boolean search string for this role.
Role: <ROLE_TITLE>
Must-have skills (deal-breakers if missing): <SKILL_1>, <SKILL_2>, <SKILL_3>
Nice-to-have skills: <SKILL_A>, <SKILL_B>
Seniority: <JUNIOR | MID | SENIOR | STAFF>
Location: <CITY_OR_REMOTE>
Industries to exclude (e.g., consultancies if you only want product co): <EXCLUDE_LIST>
Current-employer red flags (e.g., 6mo tenure max): <EMPLOYERS_TO_AVOID>
Give me 3 Boolean strings:
1. **Tight** – high precision, ~50-200 results. Uses AND aggressively + title constraints.
2. **Medium** – ~200-1000 results. Lifts one constraint at a time.
3. **Broad** – ~1000-5000 results. For when you've exhausted the tight pool.
For each, explain in 1 line: which constraint loosened, and what risk it introduces.
Notes on syntax:
- Use parentheses for OR groups: `("AWS" OR "Azure")`.
- LinkedIn's title search uses TITLE: – abuse it for precision.
- LinkedIn limits Boolean to 200 chars per field – keep each string under 180 to be safe.
- Synonyms matter: 'Cloud Engineer' candidates often title themselves 'Site Reliability Engineer' or 'Platform Engineer'. Include the variants.
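The syntax rules above can be sketched as a small helper. This is an illustrative sketch, not LinkedIn tooling: the `or_group` function and the skill lists are hypothetical placeholders, and the 180-character guard reflects the rule of thumb stated above.

```python
def or_group(terms):
    """Wrap synonyms in a parenthesised OR group, each term quoted."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Hypothetical groups for a platform-engineering search:
# each inner list is one skill plus its synonyms.
must_haves = [["AWS"], ["Terraform"], ["Kubernetes", "K8s"]]
titles = ["Cloud Engineer", "Site Reliability Engineer", "Platform Engineer"]

# AND the groups together; the title variants form one OR group.
query = " AND ".join(or_group(g) for g in must_haves + [titles])

# Stay under 180 chars to clear LinkedIn's ~200-char field limit.
assert len(query) <= 180, f"too long: {len(query)} chars"
print(query)
```

Swapping in your own skill lists and re-running the length check is faster than hand-counting characters after every edit.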
Tip: Always start with TIGHT. Move to MEDIUM only after you've messaged the tight pool. BROAD is a last resort – it dilutes your time per candidate.
When your role has been open 60 days and you're getting 200 unqualified applicants. The JD is probably the problem.
Model tested: Claude 4.7 Opus (2026-05), GPT-5 (2026-05)
Audit this job description as if you were the hiring manager who needs this role filled in 30 days.
<PASTE_JD>
Answer:
1. **What kind of candidate is this JD silently rejecting?** (Be specific – e.g., 'self-taught engineers without a CS degree because of the wording in the requirements section', 'parents/caregivers because of the inflexible-hours language', 'senior people because the listed perks are all early-career').
2. **The 3 must-have requirements that are probably nice-to-haves in disguise.** Quote them. Reason: if too many "must-haves" exist, the role looks impossible and qualified candidates self-eliminate.
3. **The buzzwords masking the actual job** – anything like 'rockstar', 'ninja', 'we're a family', 'fast-paced environment', 'wear many hats'. Quote each and translate ('fast-paced' often means 'understaffed').
4. **What's missing that the best candidates ask about** – salary range, remote policy, tech stack specifics, growth path, team size, on-call expectations.
5. **The 1-line rewrite of the opening paragraph** that would 2x quality applications.
Be specific to the role and the wording. No generic advice.
Tip: If the audit flags 5+ must-haves, push back on the hiring manager: every must-have past 4 cuts your applicant pool roughly in half. Half of 50 qualified candidates is 25, and so on.
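The halving rule compounds fast, which is easy to underestimate. A quick arithmetic sketch (the 50-candidate base and the strict halving rate are the tip's own rule of thumb, not measured data):

```python
base_pool = 50  # qualified candidates at 4 must-haves (the tip's example)

# Each must-have past 4 halves the pool: pool = base / 2^(n - 4).
for n in range(4, 9):
    pool = base_pool / 2 ** (n - 4)
    print(f"{n} must-haves -> ~{pool:g} qualified candidates")
```

By 8 must-haves the rule leaves you with a pool in the single digits, which is why long requirement lists correlate with roles staying open.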
When you're tired of asking 'tell me about a time you faced a challenge' and getting rehearsed nothing-answers.
Model tested: Claude 4.7 Opus (2026-05), GPT-5 (2026-05)
Generate 8 first-screen interview questions for this role. The goal: figure out in 30 minutes if a candidate is worth a technical loop.
Role: <ROLE>
Level: <JUNIOR | MID | SENIOR | STAFF>
Candidate background I'm interviewing: <BACKGROUND_SUMMARY>
What the role actually does day-to-day: <DAILY_REALITY>
Rules:
- No "tell me about a time" questions. They produce rehearsed STAR-template answers.
- Each question should be unanswerable without specific, lived experience. "I've never thought about that" is a valid answer that tells me something.
- Include exactly 1 question that filters out faking – something a fake-it-til-you-make-it candidate can't bluff.
- Include exactly 1 question that surfaces values mismatch, not skill mismatch.
- Include exactly 1 question I'd answer poorly if I were interviewing for this role (= a real filter, not a softball).
For each question:
- The question itself.
- What a great answer sounds like (2 lines).
- What a red-flag answer sounds like (2 lines).
- Why I'd skip the next question if they answered poorly.
Tip: Use 4-6 of these questions, not all 8 – leave time for the candidate to ask questions. The questions THEY ask tell you more than the answers to yours.
When you're saying no to a strong candidate you might want next year. The default 'we've decided to move forward with other candidates' is corporate sludge.
Model tested: Claude 4.7 Opus (2026-05)
Help me draft a rejection email to a candidate I want to keep warm for future roles.
Candidate name: <NAME>
Role they applied for: <ROLE>
Why they didn't get the offer: <REAL_REASON>
What would have made them a yes: <GAP>
Is the gap fixable? <YES_WITH_TIMELINE | NO>
Do I want to refer them elsewhere? <YES_NAME_COMPANY | NO>
Write the email. Rules:
- Under 150 words.
- Lead with the decision (no "thank you for your interest" preamble – they know, they didn't get it).
- Give ONE specific, useful piece of feedback. Not corporate softening ("it was a competitive process"). Real signal.
- If the gap is fixable + on a timeline I'd re-interview, name the door ("if you ship X by Y, reach out – I'd want to talk again").
- If I'm referring them somewhere else, include the intro line.
- Sign-off should be human, not corporate.
Never use: 'unfortunately', 'we regret to inform', 'we've decided to move forward', 'best of luck in your future endeavors'.
Tip: This email is the difference between a candidate who comes back in 18 months as a hire and one who tells 20 peers your company sucks. The 5 minutes you spend on it compound.
How to use these prompts
Each prompt has placeholders in <ANGLE_BRACKETS> – fill them in before pasting. Copy the prompt with the button, paste into Claude, ChatGPT, Gemini, or any chat-UI'd LLM.
Why "model tested" dates matter
LLMs improve and regress with every release. A prompt that worked on Claude 3.5 may need rewriting for Claude 4. The dates show when each prompt was last verified โ anything older than 6 months should be re-tested before depending on it.
Found a better prompt?
Hit contact and share – we keep prompts that beat ours.