30 lens templates and counting: how we design AI-powered document generators
Product


8 min read | May 2, 2026 | Neural Summary

When Neural Summary launched, it had seven analysis types: summary, communication styles, action items, emotional intelligence, influence and persuasion, personal development, and custom. They were generic. The same prompt structure, with minor variations, applied to every conversation type.

Eight months later, we have 30 lens templates organized into five intent-based categories. Each one is designed by positioning the AI as a specific domain expert. The output quality is in a different league from where we started.

This post covers how we think about template design, the value ladder that organizes them, and what we learned about turning generic AI output into professional-grade deliverables.

The value ladder

Templates are organized into five categories that progress from passive to active. We call this the value ladder because each step produces more immediately actionable output.

Capture — Structure the raw conversation. Meeting minutes, 1:1 notes, coaching session notes. The output tells you what happened. This is where most meeting tools stop.

Analyze — Extract patterns and intelligence. Communication analysis, competitive intelligence, deal qualification, retrospective, process map. The output tells you what it means.

Communicate — Turn insights into outreach. Follow-up emails, sales emails, client proposals, LinkedIn posts, newsletters, internal updates. The output is something you send to someone.

Deliver — Generate work-ready deliverables. Blog posts, product requirement documents, strategy briefs, agile backlogs, technical design documents, case studies, presentation outlines. The output is a finished artifact your team can act on.

Act — Drive decisions and next steps. Action items, decision documents, executive summaries. The output translates conversation into motion.

The progression is deliberate. Capture requires the least interpretation. Act requires the most. Each step up the ladder produces output that saves more time and requires more sophisticated prompt design.

Anatomy of a lens template

Every template has five components:

1. Domain expert positioning. The opening instruction that sets the AI's perspective. Not "summarize this transcript" but "as a certified Scrum Product Owner with deep expertise in agile methodology, produce a product backlog from this conversation."

2. Output schema. A JSON schema defining every field, its type, and whether it is required. The agile backlog schema defines epics (each with stories, each with acceptance criteria), personas, assumptions grouped by category, and a priority distribution.

3. Quality examples. Good and bad examples for the fields that matter most. For user stories: a good example has a specific persona, a concrete action, and a measurable outcome. A bad example has "as a user, I want to do things."

4. Constraints. Word limits, format requirements, and structural rules. Action items must begin with imperative verbs. Descriptions are capped at 15 words. Headings are 3-6 words. These constraints are what prevent the LLM from producing verbose, meandering output.

5. Edge case instructions. What to do when the transcript does not contain enough information. "If no clear decisions were made, state that explicitly rather than fabricating decisions." This prevents hallucination in sparse transcripts.
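As a minimal sketch of how these five components fit together (this is illustrative only, not Neural Summary's actual template format; all field names and the helper function are hypothetical), a template can be a single object that a prompt builder assembles:

```python
# Hypothetical sketch of a lens template's five components.
# Field names and values are illustrative, not the product's real format.

EXEC_SUMMARY_TEMPLATE = {
    # 1. Domain expert positioning: sets the AI's perspective.
    "persona": "As a chief of staff preparing a board-ready executive summary...",
    # 2. Output schema: every field, its type, whether it is required.
    "schema": {
        "type": "object",
        "required": ["recommendation", "findings"],
        "properties": {
            "recommendation": {"type": "string"},
            "findings": {"type": "array", "items": {"type": "object"}},
        },
    },
    # 3. Quality examples: good and bad, for the fields that matter most.
    "examples": {
        "good": "Churn risk is concentrated in Q3 cohorts; start renewals now.",
        "bad": "The meeting covered several topics.",
    },
    # 4. Constraints: word limits, format rules, structural requirements.
    "constraints": ["Headings are 3-6 words", "Descriptions capped at 15 words"],
    # 5. Edge case instructions: what to do with sparse transcripts.
    "edge_cases": "If no clear decisions were made, state that explicitly.",
}

def build_prompt(template: dict, transcript: str) -> str:
    """Assemble the final prompt from the template's components, in order."""
    parts = [
        template["persona"],
        "Output JSON matching this schema: " + str(template["schema"]),
        "Constraints: " + "; ".join(template["constraints"]),
        template["edge_cases"],
        "Transcript:\n" + transcript,
    ]
    return "\n\n".join(parts)
```

Keeping the components as separate fields, rather than one prompt string, lets each one be iterated on and tested independently.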

The evolution: V1 to V2

The difference between our early templates and our current ones is instructive.

V1 Executive Summary (October 2025):

Prompt: "Summarize the key points of this meeting.
         Include main topics, decisions, and action items."
Output: Markdown blob with bullet points.

V2 Executive Summary (March 2026):

Prompt: "As a chief of staff preparing a board-ready executive 
         summary following the Pyramid Principle, produce a 
         decision-ready brief. Lead with the governing 
         recommendation. Every finding must have a 'so what?' 
         implication. Decisions must include owner, rationale, 
         and status. Risks must include severity and mitigation."
Output: Typed JSON with recommendation, findings (each with 
        implication and evidence), decisions (each with owner 
        and rationale), risks (each with severity and 
        mitigation), and action items (each with owner, 
        deadline, and priority).
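Because the V2 output is typed JSON rather than a markdown blob, it can be checked mechanically before rendering. A minimal validator, assuming hypothetical snake_case field names that mirror the structure described above, might look like:

```python
# Minimal validator for a V2-style typed executive summary.
# Field names are assumptions for illustration, not the real schema.
REQUIRED = {
    "recommendation": str,
    "findings": list,      # each with implication and evidence
    "decisions": list,     # each with owner and rationale
    "risks": list,         # each with severity and mitigation
    "action_items": list,  # each with owner, deadline, priority
}

def validate_summary(payload: dict) -> list[str]:
    """Return a list of problems; an empty list means well-formed."""
    problems = [f"missing field: {k}" for k in REQUIRED if k not in payload]
    problems += [
        f"wrong type for {k}: expected {t.__name__}"
        for k, t in REQUIRED.items()
        if k in payload and not isinstance(payload[k], t)
    ]
    return problems
```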

The V1 output read like a student's meeting notes. The V2 output reads like something a management consultant would produce for a client.

The difference is not the model. We tested V1 prompts on newer models and V2 prompts on older models. The prompt is the dominant factor.

Case study: CRM Notes

The CRM notes template had the most dramatic transformation.

V1: A narrative summary with some bullet points about what was discussed.

V2: Sales intelligence in a structured format:

  • Deal verdict. One-line assessment with call sentiment indicator.
  • BANT qualification. Budget, Authority, Need, Timeline, each with a status indicator (qualified, partially qualified, unknown, unqualified).
  • Stakeholder mapping. Each person in the call identified with their role, influence level (decision-maker, champion, influencer, end-user), and stance.
  • Pain points. Each pain point with its business impact and urgency rating, sorted high to low.
  • Buying signals. Categorized as positive or negative, with the specific statement cited.
  • Objections. Each objection with the prospect's exact words and a suggested response.
  • Competitive intelligence. Any competitors mentioned, with context about how they came up.
  • Next steps. Grouped by timeline (this week, next week, etc.) with owner and deadline.
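One benefit of structured BANT statuses is that they can be collapsed into the one-line deal verdict programmatically. A hedged sketch, using invented weights and thresholds purely for illustration:

```python
# Hypothetical sketch: collapsing BANT statuses into a deal verdict.
# Weights and thresholds are invented for illustration.
BANT_WEIGHT = {
    "qualified": 1.0,
    "partially qualified": 0.5,
    "unknown": 0.0,
    "unqualified": -1.0,
}

def deal_verdict(bant: dict[str, str]) -> str:
    """bant maps Budget/Authority/Need/Timeline to a status string."""
    score = sum(BANT_WEIGHT[status] for status in bant.values())
    if score >= 3:
        return "strong: advance to proposal"
    if score >= 1:
        return "promising: fill qualification gaps"
    return "weak: requalify before investing time"
```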

A sales professional using V1 got meeting notes they could paste into a CRM. A sales professional using V2 gets a complete call intelligence brief that structures their follow-up strategy.

Case study: Agile Backlog

The agile backlog template shows how domain expertise in the prompt produces radically different output.

The V1 prompt generated a flat list of user stories. They were grammatically correct but generic: "As a user, I want to create an account so that I can access the platform."

The V2 prompt positions the AI as a certified Scrum Product Owner. It requires:

  • Personas extracted from the conversation, not generic "user" placeholders
  • User stories with specific acceptance criteria in Given/When/Then format
  • Story sizing using relative complexity (XS through XL)
  • Epic grouping with priority ordering
  • Assumptions categorized by topic (technical, business, user behavior)
  • Out of scope items, also categorized

The prompt includes specific examples of good and bad acceptance criteria. Good: "Given the user has uploaded an audio file, when processing completes, then an email notification is sent within 30 seconds." Bad: "The system should notify users."
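The Given/When/Then requirement also makes output checkable: a simple structural test can flag criteria that drift back into vague "the system should..." phrasing. A minimal sketch of such a check (one possible approach, not what the product necessarily does):

```python
import re

# Structural check that an acceptance criterion follows Given/When/Then.
# A pattern match is a cheap first filter, not a judgment of quality.
GWT = re.compile(r"^given\b.+\bwhen\b.+\bthen\b.+", re.IGNORECASE | re.DOTALL)

def is_well_formed(criterion: str) -> bool:
    """True if the criterion starts with Given and contains When and Then."""
    return bool(GWT.match(criterion.strip()))
```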

The output is a backlog that a development team can import and start working from, not a list that requires a product manager to rewrite every story.

The template design process

New templates follow a consistent process:

1. Interview the expert. What would a senior professional in this role actually produce? We study real examples: actual consulting briefs, actual sales intelligence reports, actual coaching session documentation. The template should produce output that matches professional standards, not a summarized version of them.

2. Define the schema. What fields does the output need? What types are they? Which are required versus optional? The schema is the contract between the AI and the rendering layer.

3. Write the prompt. Position the expert, define the schema, provide examples, set constraints. The first draft is never good enough.

4. Test against sample transcripts. We maintain a set of representative conversations across different types (sales calls, coaching sessions, team meetings, strategy discussions). Every template is tested against this set.

5. Review output quality. Would the intended user be proud to send this to a client? If not, iterate on the prompt. Most templates go through 3-5 iterations before reaching production quality.

6. Add backward compatibility. If we are redesigning an existing template, the new renderer must handle both old-format and new-format data. Users have existing lenses generated with the old schema. They must continue to render.
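The backward-compatibility rule in step 6 tends to push renderers toward a defensive shape: read the fields every schema version has, then decorate with whatever newer optional fields are present. A hypothetical sketch (field names are invented for illustration):

```python
def render_action_item(item: dict) -> str:
    """Render an action item, tolerating old and new schema versions.

    Old-format lenses only carry 'text'; newer ones may add optional
    'owner' and 'deadline' fields. Names are illustrative.
    """
    line = item["text"]  # present in every schema version
    if owner := item.get("owner"):
        line += f" (owner: {owner})"
    if deadline := item.get("deadline"):
        line += f" (due: {deadline})"
    return line
```

Because every new field is read with `.get()`, documents generated against the old schema keep rendering unchanged.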

What we learned

The prompt is the product. Model selection matters, but prompt design matters more. A well-structured prompt with domain expertise, examples, and constraints produces consistently better output than a generic prompt on a more powerful model.

Backward compatibility is non-negotiable. Users have existing documents. When we redesign a template's schema, old data must still render. Every new field is optional. Every renderer handles the absence of new fields gracefully.

Domain expertise beats general intelligence. The coaching notes template improved dramatically when we positioned the AI as a developmental psychologist rather than a general assistant. The competitive intelligence template improved when we gave it the perspective of a VP of competitive intelligence. Specificity in the prompt produces specificity in the output.

The value ladder is a product strategy. Most meeting tools compete in the Capture category. We invest our template development time in Deliver and Act, where the time savings per lens are highest and the competition is thinnest.

Thirty templates. Five categories. Each one designed to produce output that a professional would actually use, not just read.

That is the standard. And we are not done.
