This is Part 2 of the seed skills series. Part 1 covered the concept — why distributable AI instruction sets are the next content format. Now we build one.
Seed Skills Series
→ Part 2: Anatomy of a Seed Skill (you are here)
A seed skill isn’t a clever prompt. It’s not “try asking ChatGPT this.” It’s a four-block architecture that, when pasted into any LLM, produces a reliable, specific, useful output. Every time. Regardless of which model runs it.
That reliability is the whole point. A prompt tip is entertainment. A seed skill is infrastructure.
Block 1: The context block
This is where most people’s “prompt tips” begin and end — tell the AI to act like something. But a seed skill context block does more than assign a persona. It loads domain knowledge.
Bad context: “You are a helpful assistant that knows about water damage.”
Good context: “You are a restoration project manager with 12 years of experience in IICRC-certified water damage mitigation. You understand the difference between Category 1, 2, and 3 water losses. You know that insurance adjusters use Xactimate line-item pricing. You follow ANSI/IICRC S500 standards for structural drying.”
The difference isn’t length — it’s specificity. The good version constrains the AI’s response space to the exact domain the user operates in. It names the standards, the tools, the classification systems. When the AI encounters ambiguity in the user’s input, it resolves it using the domain frame you gave it, not generic internet knowledge.
The context block is where your industry expertise gets encoded. This is the part a generalist cannot write. If you don’t know that adjusters use Xactimate, you can’t tell the AI to price against it. That domain lock-in is your moat.
Block 2: The input specification
This is the part most prompt engineers skip entirely, and it’s why their outputs are inconsistent.
The input spec tells the user exactly what to provide and where to put it. Not “paste your document below.” That’s too vague. The AI doesn’t know where your instructions end and the user’s content begins.
Better: “Below the line marked INPUT, paste the adjuster’s scope of work. Include the full line-item breakdown. If the scope references a floor plan, describe the affected rooms and square footage in plain text.”
The input spec does three things: it tells the user what to gather (the scope of work), what format to provide it in (full line items, plain text room descriptions), and creates a clear delimiter between your instructions and their data. That delimiter matters more than people realize — without it, the AI mixes instruction-following with content-processing and the output degrades.
For seed skills that fit in a social post, I use a simple marker: ---INPUT BELOW THIS LINE---. Clean, obvious, works in every model.
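The value of a clean marker is that anything downstream can split the message unambiguously. A minimal Python sketch (the skill text and line item below are illustrative, not a real seed skill) of the separation the delimiter makes possible:

```python
# Sketch of delimiter-based separation. The marker matches the one used
# in the article; the message content is hypothetical.
MARKER = "---INPUT BELOW THIS LINE---"

def split_message(message: str, marker: str = MARKER):
    """Split a pasted message into (instructions, user_data)."""
    instructions, found, user_data = message.partition(marker)
    if not found:
        raise ValueError("delimiter not found in message")
    return instructions.strip(), user_data.strip()

message = (
    "You are a restoration project manager...\n"
    + MARKER + "\n"
    + "Line 1: Drywall removal, 120 SF @ $0.80\n"
)
instructions, user_data = split_message(message)
```

The model does the same partition implicitly: everything above the marker is treated as instructions, everything below as data to process.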
Block 3: Processing logic
This is the engine. The step-by-step instructions the AI follows to transform the user’s input into something useful.
The key insight: write it like pseudocode, not prose. Numbered steps. Conditional branches. Explicit sequencing.
Here’s a processing block from a seed skill I built for insurance supplement requests:
Step 1: Read the adjuster’s scope line by line. Identify every line item where the unit price is below the Xactimate regional average. Flag these as “underpriced items.”
Step 2: For each underpriced item, calculate the delta between the adjuster’s price and the standard price. Sum the total underpayment.
Step 3: Check for missing line items — structural drying equipment, antimicrobial treatment, containment setup, air quality testing. List anything standard for this category of loss that doesn’t appear in the scope.
Step 4: Draft the supplement request letter. Lead with the total underpayment figure. Then list each underpriced item with the corrected price and the source standard. Then list missing items with justification for why they’re required under S500.
Notice what’s happening: I’m not asking the AI to “analyze” or “review” or “help with” the scope. Those verbs are too loose. I’m giving it a four-step assembly line. Read, calculate, check, draft. Each step has a defined input and output. The AI can’t skip ahead or improvise because each step depends on the previous one.
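The assembly-line framing maps naturally onto code. A minimal Python sketch of the same four steps, with hypothetical line items and regional averages (the prices are made up for illustration, not real Xactimate data):

```python
# Hypothetical scope data: (item, adjuster_price, regional_average).
scope = [
    ("Drywall removal per SF", 0.60, 0.85),
    ("Carpet pad removal per SF", 0.40, 0.38),
    ("Dehumidifier per day", 55.00, 70.00),
]
required_items = {"Structural drying equipment", "Antimicrobial treatment",
                  "Containment setup", "Air quality testing"}

# Step 1: flag line items priced below the regional average.
underpriced = [(name, adj, avg) for name, adj, avg in scope if adj < avg]

# Step 2: sum the delta between adjuster price and average.
total_underpayment = sum(avg - adj for _, adj, avg in underpriced)

# Step 3: list standard items missing from the scope.
present = {name for name, _, _ in scope}
missing = sorted(required_items - present)

# Step 4: draft the letter, leading with the total underpayment.
letter = [f"Total underpayment identified: ${total_underpayment:.2f}"]
for name, adj, avg in underpriced:
    letter.append(f"- {name}: billed ${adj:.2f}, standard ${avg:.2f}")
for name in missing:
    letter.append(f"- Missing item required under S500: {name}")
print("\n".join(letter))
```

Each step consumes the previous step's output, which is exactly the property you want the AI's processing logic to have.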
This is where the 200-word constraint of a social post actually helps. You can’t over-engineer the logic. You have to distill it to the essential sequence. If your processing logic needs 500 words to explain, your seed skill is trying to do too much — split it into two skills.
Block 4: Output format
The most underrated block. Without explicit output formatting, the AI will give you a decent response buried in preamble, caveats, and unsolicited advice. The output format block eliminates all of that.
“Format your response as a professional letter. Use the contractor’s company name as the letterhead. Structure it in three sections: Summary of Findings, Itemized Adjustments (table format: Item | Adjuster Price | Correct Price | Delta), and Required Next Steps. Do not include disclaimers, caveats, or suggestions to consult a professional. This letter will be sent directly to the insurance adjuster.”
That last sentence is doing heavy lifting. “This letter will be sent directly” tells the AI the output is a final deliverable, not a draft for human review. The AI adjusts its tone, formality, and completeness accordingly. It stops hedging.
Output format is also where you control length. “Keep the summary under 100 words” or “limit the table to the 10 highest-delta items” prevents the AI from producing a 3,000-word response when the user needed a one-page letter.
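A length constraint like "the 10 highest-delta items" is just a sort-and-truncate. A sketch with hypothetical items:

```python
# Hypothetical (name, delta) pairs; keep only the 10 largest deltas,
# mirroring the "limit the table to the 10 highest-delta items" rule.
items = [(f"Item {i}", float(i)) for i in range(1, 26)]  # 25 fake items
top_10 = sorted(items, key=lambda item: item[1], reverse=True)[:10]
```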
Putting it together: the full seed skill
Context block + input spec + processing logic + output format. Four blocks. Each one is 40-60 words in a tight seed skill. Total: 160-240 words. Fits in a LinkedIn post with room for a one-line intro and a call to action.
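Assembled, a seed skill is just the four blocks concatenated in order. A sketch (with placeholder block text, not a real skill) of checking the word budget:

```python
# Placeholder block text; a real skill carries 40-60 words per block.
blocks = {
    "context": "You are a restoration project manager ...",
    "input_spec": "Below the line marked INPUT, paste the scope ...",
    "processing": "Step 1: read the scope line by line ...",
    "output": "Format your response as a professional letter ...",
}
skill = "\n\n".join(blocks.values())
word_count = len(skill.split())
assert word_count <= 240, "over budget: split it into two skills"
```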
The intro line is marketing: “Tired of fighting adjusters on line-item pricing? Paste this into any AI.”
The four blocks are the product.
The CTA is distribution: “Tag a contractor who needs this.”
When someone copies that post and pastes it into ChatGPT with their adjuster’s scope, they get a professional supplement request letter with specific dollar amounts and code references. In 30 seconds. For free. And they remember exactly who gave it to them.
Part 3 covers the distribution architecture — how the seed skill lives on social while the full article lives on your site, and why that split creates a value loop traditional content can’t match.
Part 2 of 4 in the seed skills series. Start with Part 1.
Frequently asked questions
How long should a seed skill be?
A well-engineered seed skill fits in 160-240 words. Each of the four blocks (context, input spec, processing logic, output format) runs 40-60 words. This constraint forces precision — if you need more space, your skill is trying to do too much and should be split into two separate skills.
What makes the context block different from a simple persona prompt?
A persona prompt assigns a role (“act like a lawyer”). A seed skill context block loads domain-specific knowledge — naming industry standards, tools, classification systems, and regulatory frameworks. This constrains the AI’s response space to the exact domain the end user operates in, producing outputs that reflect real practitioner knowledge rather than generic internet information.
Why does the input specification matter so much?
Without a clear input spec and delimiter, the AI mixes instruction-following with content-processing. The model can’t distinguish between your engineering instructions and the user’s data, which degrades output quality. A simple marker like “---INPUT BELOW THIS LINE---” solves this by creating an explicit boundary between the seed skill’s logic and the user’s content.
Can I write seed skills for industries I don’t work in?
You can write the structure, but the context block requires genuine domain expertise. Knowing that restoration adjusters use Xactimate pricing, that S500 standards govern structural drying, or that Category 3 water requires different protocols than Category 1 — that knowledge is what makes a seed skill valuable. Without it, you’re writing a prompt template, not a tool.