The meeting notes market has a consensus problem. Almost every tool in the space does the same thing: record, transcribe, summarize. The output is a document that tells you what was said. Some add highlights. Some identify speakers. Some detect sentiment. But the fundamental promise is the same: we will give you a better record of what happened.
That promise is insufficient.
The summary is not the work
After a product strategy session, a product manager does not need a summary of the strategy session. They need a product spec. After a client discovery call, a consultant does not need meeting notes. They need a proposal. After a sprint retrospective, an engineering lead does not need a record of what was discussed. They need categorized action items with owners, deadlines, and priority levels.
The summary is an intermediate artifact. It is not the deliverable. And the deliverable is what takes time.
Consider the actual workflow of a sales professional after a discovery call. They have 60 minutes of conversation. They know what the prospect said, what the pain points are, which stakeholders have influence. They do not need AI to remind them. What they need is the follow-up email drafted, the CRM notes populated with BANT qualification data, the objections documented with counter-strategies, and the next steps assigned with timelines.
That is two hours of work. The conversation itself was the easy part.
The lens model
Neural Summary generates documents, not summaries. We call each document type a "lens," because it represents a different analytical perspective applied to the same conversation.
The same 45-minute coaching session can produce:
- A coaching intelligence report with classified key moments (breakthrough, resistance, insight, commitment), accountability tracking, and developmental growth observations
- An executive summary following the Pyramid Principle, with a governing recommendation, key findings with implications, and risk-rated action items
- A follow-up email with prioritized action items, compelling event reference, and a micro-commitment CTA
- A process map with a flowchart diagram, typed steps, bottleneck analysis, and improvement recommendations
Each lens is a different job-to-be-done. Each one produces output that would have taken 30 to 90 minutes to create manually.
Five categories of intent
We organize lenses into five categories that progress from passive to active. This is deliberate. It mirrors the natural arc of how professionals process information.
Capture is the first step. Structure the raw conversation. Meeting minutes, 1:1 notes, coaching session notes. This is the territory most meeting tools occupy. It is necessary, but it is only the beginning.
Analyze extracts patterns and intelligence. Communication analysis, competitive intelligence, deal qualification, retrospectives. These lenses do not summarize what was said. They interpret it through a specific professional framework.
Communicate turns insights into outreach. Follow-up emails, sales emails, client proposals, LinkedIn posts, newsletters, internal updates. These are outward-facing deliverables that go to other people.
Deliver generates work-ready artifacts. Blog posts, product requirement documents, strategy briefs, agile backlogs, technical design documents, case studies. These are the heavyweight deliverables that typically consume hours of focused writing.
Act drives decisions and next steps. Action items, decision documents, process maps, presentation outlines. These are the outputs that translate conversation into motion.
The progression matters. Most tools stop at Capture. Some attempt Analyze. Almost none reach Communicate, Deliver, or Act. But the further along this spectrum you move, the more time you save, and the more directly the output contributes to actual work.
Structured output is the enabler
This philosophy only works because of a technical decision we made early: all lens output is structured JSON, not markdown.
When a coaching notes lens generates output, it does not produce a wall of formatted text. It produces a typed data structure: an array of key moments, each classified by type, with a quote, a coach observation, and a development implication. A set of accountability items, each with a status. A growth observation with supporting evidence.
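To make that concrete, here is a minimal sketch of what such a typed structure might look like. The field names and example content are illustrative, not the product's actual schema:

```python
# Hypothetical shape of a coaching-notes lens output.
# Field names are illustrative, not the product's actual schema.
coaching_output = {
    "key_moments": [
        {
            "type": "breakthrough",  # breakthrough | resistance | insight | commitment
            "quote": "I keep stepping in instead of delegating.",
            "coach_observation": "First unprompted ownership of the pattern.",
            "development_implication": "Ready for a concrete delegation goal.",
        },
    ],
    "accountability_items": [
        {"item": "Hand off the weekly status report", "status": "open"},
    ],
    "growth_observation": {
        "observation": "Moving from defensiveness toward curiosity when challenged.",
        "supporting_evidence": ["Asked clarifying questions rather than justifying."],
    },
}
```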
This structure enables several things that markdown cannot:
Multi-language rendering. The same structured output can be displayed in English, Dutch, German, French, or Spanish without re-generating it. The content carries the language. The rendering layer handles labels and formatting.
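A sketch of that content/label split, assuming per-language label tables (the label strings and language keys here are my own, not the product's):

```python
# The structured payload stays in the language it was generated in;
# only section labels come from a per-language table at render time.
LABELS = {
    "en": {"key_moments": "Key moments", "actions": "Action items"},
    "nl": {"key_moments": "Belangrijke momenten", "actions": "Actiepunten"},
    "de": {"key_moments": "Schlüsselmomente", "actions": "Aufgaben"},
}

def heading(section: str, lang: str) -> str:
    # Fall back to English when a language or label is missing.
    table = LABELS.get(lang, LABELS["en"])
    return table.get(section, LABELS["en"][section])
```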
Semantic search. When a user asks a question about their conversations, we search across structured fields, not raw text. A question about "decisions" searches the decisions array, not a full-text index of everything that was said.
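Field-scoped querying can be sketched as follows; plain substring matching stands in here for the real semantic matching, and the field names are hypothetical:

```python
def search_decisions(documents: list[dict], query: str) -> list[dict]:
    """Search only each document's decisions array, not the full transcript.
    Substring matching is a crude stand-in for semantic search."""
    q = query.lower()
    return [
        d
        for doc in documents
        for d in doc.get("decisions", [])
        if q in d.get("text", "").lower()
    ]
```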
Programmatic transformation. Structured output can be exported to markdown, copied as formatted text, or piped into other systems. The data is not trapped in a rendering format.
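A minimal export sketch, again with hypothetical field names:

```python
def to_markdown(doc: dict) -> str:
    """Render a structured lens document as markdown for export or copy."""
    lines = [f"# {doc['title']}", ""]
    for moment in doc.get("key_moments", []):
        lines.append(f"- **{moment['type']}**: {moment['quote']}")
    return "\n".join(lines)
```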
Quality enforcement. Each lens has a schema. If the AI generates a coaching notes document without key moments, or an executive summary without a recommendation, we can detect and reject it. Markdown offers no such validation.
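The rejection check can be sketched as a required-fields gate; the per-lens field lists below are illustrative, and the real schemas would be richer:

```python
REQUIRED_FIELDS = {
    # Illustrative per-lens requirements, not the product's actual schemas.
    "coaching_notes": ("key_moments", "accountability_items"),
    "executive_summary": ("recommendation", "findings"),
}

def missing_fields(lens: str, output: dict) -> list[str]:
    """Return the required fields the AI output failed to populate.
    A non-empty result means the document is rejected and regenerated."""
    return [f for f in REQUIRED_FIELDS[lens] if not output.get(f)]
```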
The prompt is the product
If you accept that the lens output is the product, then the prompt that generates it is the most important engineering artifact in the system.
Our early prompts were generic: "Summarize this transcript. Identify key themes." The output was what you would expect. Competent but unremarkable. The kind of content that reads like it was generated by AI.
The redesigned prompts are different. Each one positions the AI as a specific domain expert: a certified Scrum product owner for the agile backlog, a VP of competitive intelligence for the battlecard lens, an organizational psychologist for communication analysis. Each prompt includes examples of good and bad output. Each one enforces structure, word limits, and action-verb-first patterns for tasks.
The difference in output quality is dramatic. An executive summary generated by a prompt that says "summarize the key points" reads like a book report. An executive summary generated by a prompt that follows the Pyramid Principle, requires a governing recommendation as the most prominent element, and demands "so what?" implications for every finding reads like something a management consultant would produce.
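The contrast above can be made concrete with a prompt skeleton. The wording here is my own illustration; the actual prompts are far more detailed:

```python
# Illustrative skeleton for an executive-summary lens prompt.
# The wording is a sketch, not the product's actual prompt text.
EXEC_SUMMARY_PROMPT = """\
You are a management consultant trained in the Pyramid Principle.

Return JSON with:
- "recommendation": one governing recommendation, stated first, max 25 words.
- "findings": 3 to 5 findings, each with a "so_what" implication.
- "actions": tasks that start with an action verb, each with a risk rating.

Bad task: "Pricing was discussed."
Good task: "Validate the proposed price change with three pilot customers."
"""
```

Persona, structural contract, and good/bad examples each do distinct work: the persona sets register, the contract makes the output validatable, and the examples anchor the action-verb-first pattern.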
We have 30 lens templates. Each one has been through multiple iterations of prompt design, quality review, and output testing. The prompts average several hundred words each. They are the most carefully written text in the entire codebase.
Why this matters for the market
The meeting notes space is large and getting larger. Dozens of tools transcribe, summarize, and organize meeting content. The competitive landscape is intense.
But almost all of that competition is happening in the Capture category. Better transcription. Better summarization. Better speaker identification. Better integration with calendar and video tools.
The Deliver and Act categories are nearly empty. Very few tools attempt to generate the actual work product that follows a meeting. Very few tools produce a consulting brief, an agile backlog, a sales follow-up email, or a process flowchart directly from a conversation.
That gap is not a feature gap. It is a category gap. And it is the reason we built Neural Summary the way we did.
Summaries tell you what happened. Documents move work forward.
We build the documents.