The meeting notes market has a consensus problem. Almost every tool in the space does the same thing: record, transcribe, summarize. The output is a document that tells you what was said. Some add highlights. Some identify speakers. Some detect sentiment. But the fundamental promise is the same: we'll give you a better record of what happened.
That promise is insufficient.
The summary is not the work
After a product strategy session, a product manager doesn't need a summary of the strategy session. They need a product spec. After a client discovery call, a consultant doesn't need meeting notes. They need a proposal.
The summary is an intermediate artifact. It's not the deliverable. And the deliverable is what takes time.
Think about the actual workflow of a sales professional after a discovery call. They have 60 minutes of conversation. They know what the prospect said, what the pain points are, which stakeholders have influence. They don't need AI to remind them of that. What they need is the follow-up email drafted, the CRM notes populated, the objections documented with counter-strategies, and the next steps assigned with deadlines.
That's two hours of work. The conversation itself was the easy part.
Why we call them lenses
The concept didn't start with that name. Early on, "template" and "output" were the working terms. But neither captured what was actually happening.
A template implies a static format you fill in. An output implies a single result. What Neural Summary does is closer to putting a lens on a conversation: a specific analytical perspective that lets you see only what you need at that moment. The same 45-minute coaching session looks completely different through a coaching intelligence lens (key moments, resistance patterns, growth observations) than through an executive summary lens (governing recommendation, key findings, risk-rated action items) or a process map lens (flowchart, bottleneck analysis, improvement steps).
Same conversation. Different lenses. Different deliverables. Each one would've taken 30 to 90 minutes to create manually.
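The idea of one conversation producing different deliverables can be sketched as a small abstraction. This is an illustrative model, not Neural Summary's actual API; the lens names and field lists are assumptions drawn from the examples above.

```python
from dataclasses import dataclass

@dataclass
class Lens:
    """A lens pairs an analytical perspective with the fields
    its deliverable must contain. Purely illustrative."""
    name: str
    required_fields: list[str]

COACHING = Lens("coaching_intelligence",
                ["key_moments", "resistance_patterns", "growth_observations"])
EXEC_SUMMARY = Lens("executive_summary",
                    ["governing_recommendation", "key_findings", "action_items"])
PROCESS_MAP = Lens("process_map",
                   ["steps", "decision_points", "bottlenecks"])

def apply_lens(transcript: str, lens: Lens) -> dict:
    """One transcript in, one deliverable out, shaped by the lens.
    A real system would call an LLM with a lens-specific prompt;
    here we only return the expected shape."""
    return {field: [] for field in lens.required_fields}
```

The point of the abstraction: the transcript is fixed, and everything that varies lives in the lens.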
Five categories, from passive to active
We organize lenses into five categories that progress from recording to doing: Capture, Analyze, Communicate, Deliver, and Act. This taxonomy was designed upfront based on how professionals actually process information after a meeting: first you structure what happened, then you interpret it, then you share it, then you produce something from it, then you drive the next step.
Most meeting tools stop at Capture. Some attempt Analyze. Almost none reach Communicate, Deliver, or Act. But those last three categories are where the real time goes. Capturing a meeting takes minutes. Producing the strategy brief, the backlog, the follow-up email? That takes hours.
The further a lens sits toward the active end of this spectrum, the more time it saves, and the more directly its output contributes to actual work.
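The passive-to-active ordering is the load-bearing property of the taxonomy, so it can be sketched as an ordered enum. The category names come from the text; everything else is illustrative.

```python
from enum import IntEnum

class Category(IntEnum):
    """The five lens categories, ordered from passive to active."""
    CAPTURE = 1
    ANALYZE = 2
    COMMUNICATE = 3
    DELIVER = 4
    ACT = 5

# The ordering lets code reason about "how far right" a lens sits:
def is_work_product(category: Category) -> bool:
    """Deliver and Act produce the actual post-meeting work product."""
    return category >= Category.DELIVER
```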
From markdown to structured JSON
Early versions of Neural Summary generated markdown. It seemed like the obvious choice: human-readable, easy to render, flexible. We hit a wall within weeks.
Markdown is free-form text. You can't reliably validate it. You can't search across specific fields. You can't render the same content in five languages without regenerating it. And you can't enforce quality. If the AI skips a section or produces a weak recommendation, there's no schema to catch it.
Switching to structured JSON changed everything. Each lens now produces a typed data structure with specific fields, and every output is validated against a schema. If an executive summary comes back without a governing recommendation, it gets rejected. If an agile backlog is missing acceptance criteria, it gets flagged.
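A minimal sketch of what that enforcement looks like, assuming hypothetical field names; the real system presumably uses a full schema language, but the principle is the same: a missing or empty governing recommendation is caught before the output reaches anyone.

```python
# Illustrative schema for an executive summary output.
EXEC_SUMMARY_SCHEMA = {
    "governing_recommendation": str,
    "key_findings": list,
    "action_items": list,
}

def validate_output(output: dict, schema: dict) -> list[str]:
    """Return a list of problems; an empty list means the output passes."""
    problems = []
    for field, expected_type in schema.items():
        if field not in output:
            problems.append(f"missing field: {field}")
        elif not isinstance(output[field], expected_type):
            problems.append(f"wrong type for {field}")
        elif output[field] in ("", []):
            problems.append(f"empty field: {field}")
    return problems

# An executive summary without a governing recommendation is rejected:
bad = {"key_findings": ["Churn is up"], "action_items": ["Call top accounts"]}
validate_output(bad, EXEC_SUMMARY_SCHEMA)
# → contains "missing field: governing_recommendation"
```

Free-form markdown offers no equivalent hook: there is nothing structural to check against.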
That structure also unlocked multi-language rendering. The same output displays in English, Dutch, German, French, or Spanish without re-processing the transcript. And it enabled semantic search: when a user asks about "decisions," the system searches the decisions array, not a full-text dump of everything that was said.
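The search benefit can be shown in a few lines. With structured output, a query about "decisions" is scoped to the decisions array rather than run over the whole transcript; the data below is invented for illustration.

```python
# Hypothetical structured output for one meeting.
meeting_output = {
    "decisions": ["Ship the beta in March", "Drop the legacy exporter"],
    "discussion": ["...hours of full-text transcript..."],
}

def search_field(output: dict, field: str, term: str) -> list[str]:
    """Search only the named field, not the full-text dump."""
    return [entry for entry in output.get(field, [])
            if term.lower() in entry.lower()]

search_field(meeting_output, "decisions", "beta")
# → ["Ship the beta in March"]
```

The same scoping is what makes re-rendering cheap: the renderer walks typed fields, so translating labels into Dutch or German never requires touching the transcript again.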
The trade-off was development complexity. But for output quality and reliability, structured data beats free-form text every time.
The prompt is the product
If the lens output is the product, then the prompt that generates it is the most important engineering artifact in the system.
Early prompts were generic: "Summarize this transcript. Identify key themes." The output was competent but unremarkable. It read like AI: technically correct, structurally flat, missing the clarity that makes a document worth reading.
The rewrite took a principle from management consulting: lead with what matters most. The best consultants don't bury insights in chronological recaps. They put the recommendation first, then the supporting evidence, then the implications. Every finding answers "so what?" Every action item starts with a verb.
That's what the redesigned prompts enforce. Each one positions the AI as a specific domain expert and includes examples of both good and bad output. The executive summary prompt follows the Pyramid Principle. The agile backlog prompt requires user stories with acceptance criteria. The coaching notes prompt classifies moments by type (breakthrough, resistance, commitment) and demands supporting evidence for each observation.
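The pattern those prompts follow can be sketched as a skeleton. This is not Neural Summary's actual prompt text; it is an illustration of the three ingredients named above: domain-expert framing, Pyramid Principle ordering, and explicit good/bad examples.

```python
# Illustrative prompt skeleton for an executive summary lens.
EXEC_SUMMARY_PROMPT = """\
You are a senior management consultant writing for an executive audience.

Structure (Pyramid Principle):
1. Governing recommendation first.
2. Key findings that support it, each answering "so what?".
3. Risk-rated action items, each starting with a verb.

Bad finding:  "The team discussed the Q3 roadmap."
Good finding: "Q3 delivery is at risk: two of five workstreams have no owner."

Transcript:
{transcript}
"""

prompt = EXEC_SUMMARY_PROMPT.format(transcript="[45 minutes of conversation]")
```

The contrastive examples do most of the work: they show the model the difference between reporting what happened and stating why it matters.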
The difference is dramatic. A prompt that says "summarize the key points" produces a book report. A prompt that enforces consulting-grade structure produces something you'd actually send to a client.
Thirty lens templates, each refined through multiple iterations. The prompts average several hundred words each. They're the most carefully written text in the entire codebase.
Where this gets interesting
The lenses that get used most aren't always the ones you'd expect.
The Process Map turns a conversation into a flowchart: typed steps, decision points, bottleneck analysis. It sounds niche, but when you're discussing business processes or improvement initiatives, seeing the conversation visualized as a diagram makes it concrete in a way that notes never do. The meeting comes alive.
The Agile Backlog might be the most surprisingly powerful lens in the system. Record a product feedback session, apply the backlog lens, and you get structured user stories with acceptance criteria. Copy a user story, paste it into an AI coding tool, and the feature gets built or improved automatically. The workflow becomes: record → transcribe → backlog → code → review. A conversation turns into working software with almost no manual steps in between. It sounds like a shortcut. In practice, it's just the logical conclusion of treating conversations as the raw material for execution.
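What makes the backlog lens pasteable into a coding tool is the shape of each item. A hypothetical example of one backlog entry, with invented content:

```python
# Illustrative shape of one item from an Agile Backlog lens output.
user_story = {
    "title": "Export meeting decisions to CSV",
    "story": "As a team lead, I want to export decisions to CSV "
             "so that I can share them outside the tool.",
    "acceptance_criteria": [
        "An Export button appears on the decisions view",
        "The CSV contains one row per decision, with date and owner",
    ],
}

# Pasting the story plus its acceptance criteria into an AI coding
# assistant is the "backlog → code" step of the workflow: the criteria
# double as a checklist for the "code → review" step.
```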
The category gap
The meeting notes space is large and intensely competitive. Dozens of tools are fighting over who has the best transcription, the best summarization, the best calendar integration.
Almost all of that competition is happening in the Capture category. The Deliver and Act categories, the ones that produce the actual work product that follows a meeting, are nearly empty. Very few tools attempt to generate a consulting brief, an agile backlog, a follow-up email, or a process flowchart directly from a conversation.
That's not a feature gap. It's a category gap. And it's where Neural Summary operates.
Summaries tell you what happened. Documents move work forward.