What is an LLM — and why an SMB owner should actually care
If you run a business and you have been told you 'should be doing more with AI', this is the explainer your advisor never gave you. What an LLM actually is, what it does well, what it shouldn't touch, and the three places it reliably creates operating leverage in an SMB.
What an LLM actually is
A large language model is, mechanically, a very large statistical predictor of the next chunk of text given the chunks before it. It is trained on enormous bodies of text and then fine-tuned to follow instructions. OpenAI's own developer documentation describes the unit it works in as a "token" — roughly three to four characters of English — and the amount of text it can hold in mind at once as its "context window" [1].
Two practical consequences fall out of that. First, the model has no actual knowledge — it has patterns. When asked something outside its training or context, it will produce a fluent answer anyway. That is what people mean by a "hallucination". Second, everything you put into the context window costs money and competes for attention. Long, sloppy prompts cost more and perform worse than tight ones.
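The cost point can be made concrete with a back-of-envelope calculation. The four-characters-per-token ratio follows OpenAI's rough guidance cited above; the per-1,000-token price in the sketch is a placeholder, since real prices vary by model and vendor:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate using the ~4-characters-per-token rule of thumb."""
    return max(1, len(text) // 4)

def estimate_prompt_cost(text: str, price_per_1k_tokens: float) -> float:
    """Illustrative cost: tokens / 1000 * price. Real prices vary by model."""
    return estimate_tokens(text) / 1000 * price_per_1k_tokens

# A padded, rambling prompt versus a tight one for the same task.
sloppy = "Hi there! So, um, basically what I want you to do is... " * 50
tight = "Classify this email as SALES, SUPPORT or BILLING."

# The sloppy prompt costs proportionally more every single time it runs.
print(estimate_tokens(sloppy), estimate_tokens(tight))
```

The same arithmetic is why trimming a prompt template once pays back on every call thereafter.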
What LLMs do well
Anthropic's own usage guidance is unusually candid about where its models are reliable: drafting, summarising, classifying, extracting structured data from unstructured text, and acting as a flexible interface over your existing systems [2]. These are exactly the tasks that dominate back-office knowledge work in a typical SMB.
McKinsey's economic-potential research arrives at the same conclusion from the top down: 60–70% of the addressable value from generative AI sits in a small number of repeatable knowledge-work functions — customer operations, sales support, software, and marketing [3].
What LLMs shouldn't touch (yet)
The same vendor guidance is equally clear about where LLMs are unreliable: numerical reasoning at scale, anything regulated where the output is the deliverable (legal opinions, medical diagnoses, tax filings), and any decision where being confidently wrong is more expensive than being slow and right [2]. In those domains, the right pattern is LLM-as-assistant with a human signing off — not LLM-as-decider.
The three places LLMs create real leverage in an SMB
- Intake and triage. Inbound email, forms, tickets, voicemails. An LLM can classify, extract the key fields, route to the right queue, and draft a first response. This is the single highest-ROI use we see — partly because it is also the most owner-dependent function in most SMBs.
- Document and data work. Pulling structured data out of PDFs, contracts, statements, supplier emails. Drudgery that used to consume an admin's week now runs in minutes, with logs.
- Knowledge access. A search interface over your own SOPs, policies and history that any team member can query in plain English. Reduces escalations to the founder and to senior staff.
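The intake-and-triage pattern is simple enough to sketch end to end. The LLM call is stubbed here with a keyword matcher so the shape of the workflow is visible; in production you would replace that stub with a real model API call. The queue names and extracted fields are illustrative assumptions, not a prescription:

```python
import json
import re
from dataclasses import dataclass

@dataclass
class TriageResult:
    category: str      # which queue the message routes to
    fields: dict       # key fields extracted for the ticket/CRM system
    draft_reply: str   # first-pass response for a human to review

# Stand-in for the LLM call. In a live system this would be a prompt such as
# "Classify this message and return JSON with category, fields and a draft
# reply", sent to whichever model API you use, with the JSON parsed back out.
def classify_and_extract(message: str) -> TriageResult:
    lowered = message.lower()
    if any(w in lowered for w in ("invoice", "payment", "refund")):
        category = "billing"
    elif any(w in lowered for w in ("quote", "price", "buy")):
        category = "sales"
    else:
        category = "support"
    # Pull out an order reference if one is present (an illustrative field).
    match = re.search(r"order\s+#?(\d+)", lowered)
    fields = {"order_ref": match.group(1) if match else None}
    draft = f"Thanks for getting in touch. We've passed this to our {category} team."
    return TriageResult(category, fields, draft)

result = classify_and_extract("Hi, I need a refund on order #4821, it arrived damaged.")
print(json.dumps({"queue": result.category, **result.fields}))
```

Note what the human still does in this sketch: the draft reply is reviewed before anything is sent, which is the assistant-not-decider pattern described above.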
Why an exit-minded owner should care
Each of those three use cases attacks a specific discount risk a buyer will price. Intake and triage replaces founder-only decision-making. Document and data work removes manual re-keying that breaks under volume. Knowledge access reduces the operational dependency on senior heads. We cover the mechanism in more depth in our piece on owner dependency.
The point is not that LLMs are magic. The point is that they are the cheapest way to industrialise three categories of work that buyers reliably discount for as long as that work is done by people in the owner's orbit.
Frequently asked questions
- What is a large language model (LLM) in simple terms?
- An LLM is a probabilistic system that predicts the most likely next word given a prompt, trained on internet-scale text. ChatGPT, Claude and Gemini are all built on LLMs. They generate fluent text but do not 'know' facts the way a database does.
- What can an LLM reliably do for an SMB?
- Three things: drafting and rewriting (proposals, emails, SOPs), summarising and classifying (inbox triage, call notes, support tickets), and structured extraction from unstructured documents (invoices, contracts, CVs).
- What should an LLM never do without human review?
- Anything where 'mostly right' is unsafe: arithmetic on financials, regulated outputs (medical, legal, tax), customer-facing decisions with no human escalation, and any irreversible action (sending money, deleting records).
- Why do LLMs hallucinate?
- Because they predict plausible text, not verified facts. When a prompt asks for information the model didn't see in training (or saw inconsistently), it generates the most statistically likely-sounding answer — which is sometimes wrong but always confident.
Want this for your business?
Start with a Diagnose. Two weeks. Written report. Honest fit assessment.
Send an enquiry