Artificial intelligence is a deep and convoluted world. The scientists who work in this field often rely on jargon to explain what they're working on — and that technical language frequently filters into mainstream media coverage. For anyone trying to keep up with the AI revolution, not knowing what these terms mean can feel like reading a foreign language.
TechCrunch's latest glossary breaks down the most important AI concepts in plain English — and this article walks you through the key ones you actually need to know.
AGI: The Goalpost Everyone Is Chasing
Artificial General Intelligence, or AGI, is a nebulous term that generally refers to AI more capable than the average human at many, if not most, tasks. OpenAI CEO Sam Altman has described AGI as the equivalent of a median human you could hire as a co-worker, while OpenAI's charter defines it as highly autonomous systems that outperform humans at most economically valuable work.
Even Google DeepMind has its own slightly different take. The bottom line? Even experts at the frontier of AI research can't fully agree on what AGI means — which makes it one of the most debated terms in tech today.
AI Agents: Beyond the Chatbot
An AI agent refers to a tool that uses AI technologies to perform a series of tasks on your behalf — beyond what a basic chatbot could do — such as filing expenses, booking restaurant tables, or writing and maintaining code.
Think of it as the difference between asking someone a question and hiring someone to get things done. Agents can draw on multiple AI systems to carry out multi-step tasks autonomously — and they're rapidly becoming the hottest category in enterprise AI.
LLMs: The Engine Behind Every AI Assistant
Large language models, or LLMs, are the AI models powering popular assistants such as ChatGPT, Anthropic's Claude, and Google's Gemini, as well as open models like Meta's Llama. When you chat with an AI assistant, you're interacting with an LLM that processes your request directly or with the help of tools like web browsing or code interpreters.
These models are built by encoding patterns found in billions of books, articles, and transcripts. When you send a prompt, the model generates the most likely pattern that fits — evaluating the most probable next word after the last one, over and over again.
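That "most probable next word, over and over" loop can be sketched in a few lines of Python. The vocabulary and probabilities below are toy values invented for illustration — a real LLM computes a probability for every token in a vocabulary of tens of thousands, using billions of learned parameters:

```python
import random

# Toy "model": for each current word, the probability of each next word.
# These numbers are made up; a real LLM learns them during training.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "sky": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_words: int = 4, seed: int = 0) -> str:
    """Repeatedly sample a likely next word after the last one."""
    random.seed(seed)
    words = [start]
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:  # no known continuation: stop generating
            break
        # Pick the next word in proportion to its probability.
        choices, weights = zip(*probs.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Each pass through the loop conditions only on what has been generated so far — which is also why a wrong early word can snowball into a confident-sounding wrong answer.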
Hallucinations: When AI Makes Things Up
Hallucination is the AI industry's preferred term for AI models generating incorrect information — literally making stuff up. It's a massive problem for AI quality, producing outputs that can be misleading and potentially dangerous, especially for sensitive queries like medical advice.
The problem is thought to arise from gaps in training data. For general-purpose AI models, this is difficult to resolve — there simply isn't enough data in existence to train models to answer every possible question accurately. This is why nearly every AI tool now carries a disclaimer urging users to verify outputs.
Training vs. Inference: Two Sides of the Same Coin
These two terms are often confused but describe completely different stages of an AI model's life.
Training refers to the process of feeding data into a model so it can learn patterns and generate useful outputs. Before training, the mathematical structure is just layers and random numbers — it's through training that the AI truly takes shape.
Inference, on the other hand, is the process of actually running the model — setting it loose to make predictions or draw conclusions from what it has already learned. Inference can't happen without training first.
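The two stages can be seen in miniature with a one-parameter model fit by gradient descent. The data, learning rate, and loop count here are made up for illustration; the point is that training adjusts the numbers, and inference just runs them:

```python
# Training: the model starts as an arbitrary number and is nudged,
# example by example, toward the pattern in the data (here: y = 2 * x).
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
weight = 0.1  # arbitrary starting value

for _ in range(200):                # training loop
    for x, y in data:
        prediction = weight * x
        error = prediction - y
        weight -= 0.01 * error * x  # gradient descent step

# Inference: run the trained model on an input it has never seen.
print(round(weight * 5.0, 2))  # → 10.0
```

Real training works the same way in spirit, just with billions of weights instead of one — which is why training is vastly more expensive than inference, even though inference is what runs every time you send a prompt.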
Tokens: How AI Counts and Charges
Tokens represent the basic building blocks of human-AI communication — discrete segments of data processed or produced by an LLM. Tokenization breaks down raw data into units digestible to the model, similar to how a compiler translates source code into machine instructions.
For enterprise users, token usage also determines cost. Most AI companies charge for LLM usage on a per-token basis, so the more data a business processes through an AI tool, the more it pays.
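A back-of-the-envelope version of that billing math looks like this. The per-token prices below are hypothetical (real prices vary by provider and model, and most providers charge more for output tokens than input tokens):

```python
# Hypothetical prices, expressed in dollars per one million tokens.
PRICE_PER_M_INPUT = 3.00    # assumed input-token price
PRICE_PER_M_OUTPUT = 15.00  # assumed output-token price

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Rough bill for a given token volume at the assumed prices."""
    return (input_tokens / 1_000_000 * PRICE_PER_M_INPUT
            + output_tokens / 1_000_000 * PRICE_PER_M_OUTPUT)

# A business sending 50M input tokens and receiving 10M output tokens:
print(f"${estimate_cost(50_000_000, 10_000_000):.2f}")  # → $300.00
```

Because longer prompts and longer answers both mean more tokens, trimming context and capping response length are the two most direct levers on an AI bill.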
RAMageddon: The Hardware Crisis You Haven't Heard Of
One of the more colorful new entries in AI vocabulary is RAMageddon. It describes a worsening shortage of RAM chips driven by AI data centers consuming enormous quantities of memory — leaving less available for consumer electronics like smartphones and gaming consoles, and driving prices sharply higher. There's currently little sign the shortage will ease anytime soon.
The Bottom Line
The AI industry moves fast — and its vocabulary moves with it. Whether you're a business evaluating AI tools, a marketer writing about tech, or simply a curious reader, understanding these terms puts you miles ahead. As models grow more capable and agents take on more complex tasks, fluency in AI language is no longer optional — it's essential.
