AI Glossary

Decode the buzzwords. A simple dictionary for modern AI terminology.

Artificial General Intelligence (AGI)

Concepts

A hypothetical type of AI that can understand, learn, and apply intelligence across a wide range of tasks at or above human level.

Context Window

Architecture

The limit on the amount of text (tokens) a model can consider at one time (input + output).
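A common practical consequence: chat history must be trimmed to fit the window. Below is a minimal sketch of that pattern, using the rough 0.75 words-per-token estimate; the budget value and the drop-oldest-first policy are illustrative assumptions, not how any particular model manages its window.

```python
def trim_to_budget(messages, max_tokens):
    """Drop the oldest messages until the estimated token total fits the window."""
    def est(msg):
        # Rough estimate: ~0.75 words per token, i.e. ~4/3 tokens per word.
        return round(len(msg.split()) / 0.75)

    kept = list(messages)
    while kept and sum(est(m) for m in kept) > max_tokens:
        kept.pop(0)  # evict the oldest message first
    return kept

history = ["a b c d", "e f g h", "i j k l"]  # ~5 estimated tokens each
trimmed = trim_to_budget(history, max_tokens=10)  # oldest message is dropped
```

Real applications use the model's actual tokenizer for counting, but the eviction logic is the same idea.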

Deep Learning

Concepts

A subset of machine learning based on artificial neural networks with multiple layers, capable of learning complex patterns from large amounts of data.

Fine-tuning

Training

The process of taking a pre-trained model and training it further on a specific dataset to specialize it for a certain task.

Generative AI

Concepts

A broad category of AI systems designed to generate new content, such as text, images, audio, or code, based on patterns learned from existing data.

Hallucination

Issues

When an AI model confidently generates false or nonsensical information.

Large Language Model (LLM)

Architecture

A type of AI model trained on massive amounts of text data, capable of understanding and generating human-like language.

Machine Learning

Concepts

A branch of AI focused on building systems that learn from data and improve their performance over time without being explicitly programmed.

Multimodal

Capabilities

The ability of a model to process and generate multiple types of media (text, images, audio) simultaneously.

Natural Language Processing (NLP)

Concepts

A field of AI focused on the interaction between computers and human language, enabling machines to read, understand, and derive meaning from text.

Neural Network

Architecture

A computing system inspired by the human brain, consisting of interconnected nodes (neurons) that process information and learn patterns.
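To make the "interconnected nodes" idea concrete, here is a toy two-layer forward pass in plain Python. The weights and biases are arbitrary illustrative values; real networks learn them from data.

```python
import math

def sigmoid(x):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each neuron sums its weighted inputs, adds a bias, then applies sigmoid."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Two inputs -> two hidden neurons -> one output neuron.
hidden = layer([0.5, -1.0], weights=[[0.1, 0.4], [-0.3, 0.8]], biases=[0.0, 0.1])
output = layer(hidden, weights=[[0.7, -0.2]], biases=[0.05])
```

Stacking many such layers, with learned weights, is what makes a network "deep".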

Prompt Engineering

Techniques

The practice of designing and refining the input text (prompts) given to an AI model to elicit the most accurate, relevant, or creative response.
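One common prompt-engineering pattern is a reusable template that pins down role, output format, and constraints rather than asking an open-ended question. The template wording below is an illustrative assumption, not a canonical recipe.

```python
# A reusable prompt template: fixes the model's role and output format up front.
TEMPLATE = (
    "You are a concise technical writer.\n"
    "Summarize the text below in exactly {n} bullet points.\n\n"
    "Text: {text}"
)

def build_prompt(text, n=3):
    """Fill the template with the user's text and the desired bullet count."""
    return TEMPLATE.format(text=text, n=n)

prompt = build_prompt("Transformers use attention to weigh input tokens.")
```

Iterating on such templates, and comparing the responses they elicit, is the day-to-day work of prompt engineering.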

RAG

Architecture

Retrieval-Augmented Generation. A technique where an LLM is provided with external data (context) to generate more accurate answers.
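The retrieve-then-generate flow can be sketched in a few lines. The word-overlap scorer and prompt template here are toy assumptions; production RAG systems use vector embeddings for retrieval and send the assembled prompt to an actual LLM.

```python
documents = [
    "The Eiffel Tower is 330 metres tall.",
    "Python was created by Guido van Rossum.",
    "The Great Wall of China is thousands of kilometres long.",
]

def retrieve(question, docs):
    """Pick the document sharing the most words with the question (toy scorer)."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def build_prompt(question, docs):
    # Prepend the retrieved snippet so the model can ground its answer in it.
    context = retrieve(question, docs)
    return f"Context: {context}\n\nQuestion: {question}\nAnswer using only the context."

prompt = build_prompt("How tall is the Eiffel Tower?", documents)
```

Because the answer is grounded in retrieved text, RAG reduces (though does not eliminate) hallucination.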

Temperature

Parameters

A parameter that controls the randomness of the model's output. Higher values (e.g., 0.8 and above) make output more varied and creative; lower values (e.g., 0.2) make it more focused and deterministic.
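Under the hood, temperature divides the model's logits before the softmax that turns them into token probabilities. A minimal sketch, assuming three candidate tokens with made-up logits:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then softmax; lower T sharpens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
low = softmax_with_temperature(logits, 0.2)   # near-deterministic: mass piles on the top logit
high = softmax_with_temperature(logits, 2.0)  # flatter: other tokens get a real chance
```

At low temperature the top token's probability approaches 1, which is why low-temperature sampling feels deterministic.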

Token

Basics

The basic unit of text processing for an LLM. One token is roughly 0.75 words of English text, and a single word can span several tokens: "Hamburger" might be 2-3 tokens.
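The 0.75 words-per-token rule of thumb gives a quick back-of-envelope estimator for budgeting prompts. Real tokenizers (e.g., BPE) split on subwords, so actual counts vary by model; this is only a sketch.

```python
def estimate_tokens(text):
    """Approximate token count as word count / 0.75 (~4/3 tokens per word)."""
    words = len(text.split())
    return round(words / 0.75)

n = estimate_tokens("The quick brown fox jumps over the lazy dog")  # 9 words -> ~12 tokens
```

For exact counts, use the tokenizer that ships with the model you are targeting.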

Zero-shot Learning

Training

The ability of a model to perform a task without having seen any examples of that specific task during training.