Enterprise AI Glossary

Cut through the jargon. Clear definitions for executives.

Agent Orchestration

Architecture

Coordinating multiple AI agents to work together on complex workflows, each specializing in specific tasks.

Why it matters: Enables complex automation beyond single-agent capabilities.

Agentic AI

Architecture

AI systems that can autonomously plan, execute, and adapt multi-step tasks with minimal human intervention. Unlike chatbots that respond to individual prompts, agentic AI breaks down goals into subtasks, uses tools (APIs, databases, code execution), and iterates based on results.

Why it matters: The agentic AI market is projected at $7-11 billion in 2025, growing to approximately $199 billion by 2034 (Precedence Research).

Agentic Workflow

Architecture

Autonomous processes where AI agents perceive, plan, and execute multi-step tasks without human intervention, escalating only exceptions.

Why it matters: Marks the shift from chatting with AI to AI doing the work.

AI Bias Audit

Governance

Systematic testing of AI systems for discriminatory outcomes across protected characteristics such as race, gender, age, and disability. Increasingly mandated by regulation: NYC Local Law 144 requires annual independent audits for automated employment decision tools.

Why it matters: Both employers and technology vendors can be held liable for discriminatory AI outcomes. Proactive auditing reduces legal and reputational risk.

AI Governance Framework

Governance

A structured set of policies, controls, and processes for overseeing AI systems: enforcing human-in-the-loop gates, logging every decision with immutable audit trails, scoring risks in real time, and blocking unauthorized operations.

Why it matters: Turns compliance from a checkbox to a systematic process. Ensures AI decisions can be traced, audited, and defended.

Chunking

AI/ML

The process of splitting documents into smaller segments for indexing in a RAG system. Chunking strategy (fixed-size, semantic, recursive) directly impacts retrieval quality because it determines what context the LLM receives when generating answers.

Why it matters: Poor chunking is the most common source of RAG quality issues. The right strategy depends on document structure and query patterns.
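The simplest strategy mentioned above, fixed-size chunking with overlap, can be sketched in a few lines. This is an illustrative minimal version (the function name and defaults are hypothetical, not a standard API); semantic and recursive strategies build on the same idea with smarter split points.

```python
def chunk_fixed(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with overlap, so a
    sentence cut at a chunk boundary still appears whole in the next chunk."""
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
    return chunks

# A 1,200-character document with size=500 and overlap=50 yields 3 chunks.
doc = "x" * 1200
print(len(chunk_fixed(doc)))
```

The overlap is the key tuning knob: too small and context is severed at boundaries; too large and the index fills with near-duplicates.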

Circuit Breaker (AI)

Operations

A safety mechanism in production AI systems that automatically stops agent operations when anomalies are detected, such as error rate spikes, unusual output patterns, or confidence score drops below threshold. Prevents cascading failures in autonomous systems.

Why it matters: Critical for production AI agents operating autonomously. Prevents a malfunctioning agent from causing widespread damage before humans intervene.
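The mechanism reduces to tracking recent outcomes and tripping when an error-rate threshold is crossed. A minimal sketch, assuming illustrative thresholds (real systems also watch output anomalies and confidence drops, and often add an automatic half-open retry state):

```python
class CircuitBreaker:
    """Trips open when the recent error rate crosses a threshold,
    blocking further agent calls until a human resets it."""

    def __init__(self, max_error_rate: float = 0.2, window: int = 20):
        self.max_error_rate = max_error_rate
        self.window = window
        self.results = []          # last `window` outcomes: True = error
        self.open = False          # open circuit = calls blocked

    def record(self, error: bool) -> None:
        self.results = (self.results + [error])[-self.window:]
        rate = sum(self.results) / len(self.results)
        if len(self.results) >= self.window and rate >= self.max_error_rate:
            self.open = True       # trip: halt autonomous operation

    def allow(self) -> bool:
        return not self.open

breaker = CircuitBreaker(max_error_rate=0.2, window=10)
for ok in [True] * 8 + [False] * 2:    # 20% errors in the last 10 calls
    breaker.record(not ok)
print(breaker.allow())                  # False: the breaker has tripped
```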

Confidence Scoring

Governance

A numeric score (typically 0-100) assigned to an AI system's output indicating the system's certainty in its response. Used to route decisions: high-confidence outputs are auto-delivered, medium-confidence triggers review, low-confidence escalates to a human.

Why it matters: Enables intelligent human-in-the-loop workflows where humans focus on uncertain decisions rather than reviewing everything.
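The routing logic described above is a simple threshold ladder. A sketch with illustrative cutoffs (85/60 are placeholders; real thresholds are tuned per use case and risk appetite):

```python
def route(confidence: int) -> str:
    """Route an AI output based on its confidence score (0-100)."""
    if confidence >= 85:
        return "auto-deliver"        # high confidence: ship it
    if confidence >= 60:
        return "queue-for-review"    # medium: a human spot-checks
    return "escalate-to-human"       # low: a human decides

print(route(92))  # auto-deliver
print(route(70))  # queue-for-review
print(route(40))  # escalate-to-human
```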

Context Window

AI/ML

Maximum amount of text (in tokens) an LLM can process in a single request.

Why it matters: Determines how much data can inform a single AI response.

Data Sovereignty

Compliance

The principle that data is subject to the laws and governance structures of the nation or jurisdiction where it is collected or processed. For AI, this means AI models processing data must comply with local data protection laws and the data must remain within approved jurisdictions.

Why it matters: 62% of European organizations are seeking sovereign solutions due to geopolitical uncertainty (Accenture, 2025). GDPR, PIPEDA, and sector-specific laws all impose data sovereignty requirements.

Embedding

AI/ML

Numerical representation of text that captures semantic meaning, used for similarity search and RAG.

Why it matters: Enables semantic search beyond keyword matching.
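"Similarity search" over embeddings usually means cosine similarity: vectors pointing the same direction score near 1.0. A toy sketch with hypothetical 3-dimensional vectors (real embedding models produce hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Similarity of two embedding vectors: 1.0 = identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" chosen for illustration only.
invoice = [0.9, 0.1, 0.0]
bill    = [0.8, 0.2, 0.1]
weather = [0.0, 0.1, 0.9]
print(cosine_similarity(invoice, bill))     # high: related meaning
print(cosine_similarity(invoice, weather))  # low: unrelated
```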

ETL (Extract, Transform, Load)

Infrastructure

Data pipeline process for extracting data from sources, transforming it, and loading it into target systems.

Why it matters: Foundation for preparing data for AI training and RAG pipelines.

EU AI Act

Compliance

Comprehensive AI regulation framework from the European Union categorizing AI systems by risk level.

Why it matters: First major AI-specific regulation, sets precedent for global standards.

Explainability

Governance

The ability to understand and articulate why an AI system made a specific decision.

Why it matters: Required for regulated industries and building trust in AI systems.

Fine-Tuning

AI/ML

The process of further training a pre-trained model, often an open-weights model such as Llama 3, on proprietary enterprise data to increase accuracy and reduce hallucinations while keeping that data inside your own infrastructure.

Why it matters: Enables domain-specific AI without sending data to external APIs.

Friction Mapping

Strategy

Process of identifying organizational bottlenecks and time-consuming activities that could be automated with AI.

Why it matters: First step in AI implementation: finding highest-impact opportunities.

GDPR (General Data Protection Regulation)

Compliance

EU data privacy law requiring explicit consent, right to deletion, and data protection for personal information.

Why it matters: Applies to AI systems processing EU resident data. Violations cost millions.

Hallucination

AI/ML

When an AI model generates false or fabricated information that sounds plausible but is factually incorrect.

Why it matters: Major risk in production AI, mitigated through RAG, fine-tuning, and validation.

Human-in-the-Loop (HITL)

Governance

Workflow design where humans review and approve high-stakes AI decisions before execution.

Why it matters: Balances automation efficiency with oversight for critical decisions.

ISO 42001

Compliance

An international standard for AI Management Systems published by the International Organization for Standardization. Provides a structured approach to ethical AI, transparency, and trust through a management system framework. Complements NIST AI RMF by adding operational governance structure.

Why it matters: Recognized globally for demonstrating AI governance maturity to regulators, auditors, and enterprise customers.

LLM (Large Language Model)

AI/ML

AI models trained on vast text data to understand and generate human language (GPT-4, Claude, Llama).

Why it matters: Foundation of modern AI applications and autonomous agents.

LLM Gateway

Infrastructure

A centralized proxy layer that sits between applications and language models, providing policy enforcement, usage monitoring, cost management, model routing, and security controls. Often implements an OpenAI-compatible API so applications can switch between cloud and on-premise models without code changes.

Why it matters: Enables enterprises to manage multiple LLM providers from a single control point while enforcing governance policies.
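At its core, a gateway is a routing table plus policy checks in front of every model call. A minimal sketch, where the model names, backend URLs, and data classes are all hypothetical placeholders:

```python
# Hypothetical routing table: model names and backends are illustrative.
ROUTES = {
    "gpt-4o":      {"backend": "https://api.openai.com/v1",   "data_class": "public"},
    "llama-3-70b": {"backend": "http://llm.internal:8000/v1", "data_class": "confidential"},
}

def route_request(model: str, data_class: str) -> str:
    """Return the backend URL for a model, blocking requests whose data
    classification exceeds what that backend is approved to handle."""
    route = ROUTES.get(model)
    if route is None:
        raise ValueError(f"Unknown model: {model}")
    if data_class == "confidential" and route["data_class"] != "confidential":
        raise PermissionError("Confidential data may not leave the firewall")
    return route["backend"]

print(route_request("llama-3-70b", "confidential"))  # allowed: stays on-prem
```

Because both backends speak the same OpenAI-compatible API shape, swapping the route changes where data goes without changing application code.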

LoRA (Low-Rank Adaptation)

AI/ML

A parameter-efficient fine-tuning technique that adapts pre-trained language models by training only a small set of additional parameters rather than modifying all model weights. Achieves approximately 95% of full fine-tuning performance at roughly 10% of the compute cost.

Why it matters: Makes fine-tuning accessible for enterprises without massive GPU budgets, enabling domain adaptation on standard hardware.
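The parameter savings come from the math: instead of updating a full d×d weight matrix W, LoRA trains two thin factors B (d×r) and A (r×d) and uses W + BA. A numeric sketch with illustrative dimensions (assumes NumPy; real transformer layers are larger and the update is usually scaled by alpha/r):

```python
import numpy as np

d, r = 1024, 8                      # hidden size, LoRA rank (illustrative)
W = np.zeros((d, d))                # frozen pre-trained weight (stand-in)
A = np.random.randn(r, d) * 0.01    # trainable low-rank factor
B = np.zeros((d, r))                # B starts at zero so W + BA == W initially

W_adapted = W + B @ A               # effective weight after adaptation

full_params = W.size                # parameters touched by full fine-tuning
lora_params = A.size + B.size       # parameters LoRA actually trains
print(f"LoRA trains {lora_params / full_params:.1%} of the layer's parameters")
```

At rank 8 on a 1024-wide layer, that is about 1.6% of the weights, which is where the dramatic compute savings come from.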

MLOps

Operations

The practice of applying DevOps principles to machine learning systems: automating model training, testing, deployment, monitoring, and retraining. Covers the full lifecycle from data preparation through production operations.

Why it matters: Enterprises adopting mature MLOps experience up to 8x cost reduction and deployment cycles reduced from months to weeks (Mirantis, 2025).

Model Drift

Operations

Degradation of AI model performance over time as real-world data patterns change from training data.

Why it matters: Requires ongoing monitoring and retraining to maintain accuracy.

Multi-Agent System

Architecture

An architecture where multiple specialized AI agents collaborate on complex tasks, each responsible for a specific function. An orchestrator coordinates their interactions, resolves conflicts, and enforces governance rules across the system.

Why it matters: Enables complex automation that no single agent can handle alone, with clear accountability and governance per agent.

NIST AI RMF (AI Risk Management Framework)

Compliance

A voluntary, rights-preserving framework published by the National Institute of Standards and Technology for managing AI risk. Operates through four functions: Govern (establish policies), Map (document systems), Measure (assess performance and risk), and Manage (implement controls). Released January 2023 with a Generative AI Profile (AI 600-1) added in July 2024.

Why it matters: The primary US framework for AI risk management, increasingly referenced in procurement requirements and regulatory guidance.

PII (Personally Identifiable Information)

Compliance

Personal data, such as Social Security numbers, email addresses, phone numbers, and medical records, that must be protected under privacy regulations.

Why it matters: Critical for GDPR, HIPAA, and data privacy compliance.

Prompt Engineering

AI/ML

Crafting precise instructions and examples to guide AI models toward desired outputs.

Why it matters: Improves AI accuracy and consistency without model retraining.

RAG (Retrieval-Augmented Generation)

AI/ML

Architecture pattern where LLMs query a knowledge base before generating responses, combining retrieval and generation for more accurate, grounded answers.

Why it matters: Reduces hallucinations by grounding AI responses in verified data.
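The retrieve-then-generate pattern can be sketched end to end. Here retrieval is deliberately naive keyword overlap and the LLM call is stubbed out as prompt construction; production systems use embeddings, a vector database, and a real model. The knowledge-base contents are invented for illustration:

```python
# Minimal retrieve-then-generate sketch (toy data, toy retriever).
KNOWLEDGE_BASE = [
    "Refunds are processed within 14 days of the return request.",
    "Premium support is available Monday through Friday, 9am to 5pm CET.",
]

def retrieve(query: str, top_k: int = 1) -> list[str]:
    """Rank documents by shared words with the query (stand-in for
    embedding similarity search against a vector database)."""
    words = set(query.lower().split())
    scored = sorted(KNOWLEDGE_BASE,
                    key=lambda doc: len(words & set(doc.lower().split())),
                    reverse=True)
    return scored[:top_k]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # In a real pipeline this prompt is sent to an LLM; here we just build it.
    return f"Context:\n{context}\n\nQuestion: {query}"

print(answer("How fast are refunds processed?"))
```

The grounding effect comes from the prompt: the model is asked to answer from the retrieved context rather than from its training data alone.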

Re-Ranking

AI/ML

A secondary scoring step in RAG retrieval where a specialized model re-evaluates initial search results to improve precision before passing context to the LLM. Uses cross-encoder models that compare the query and each retrieved chunk more carefully than the initial vector search.

Why it matters: Significantly improves the relevance of retrieved context, reducing hallucinations caused by retrieving tangentially related documents.

Risk Scoring

Governance

Automatic evaluation of an AI decision on a 0-100 scale. High scores trigger human review.

Why it matters: Prevents AI from taking high-risk actions without oversight.

Shadow AI

Governance

Unauthorized use of AI tools by employees through personal accounts on free services like ChatGPT, Gemini, or Claude. Creates uncontrolled data exposure because employees input company data into systems the organization does not monitor or govern.

Why it matters: 78-90% of employees use unapproved AI tools (WalkMe, 2025). Shadow AI breaches cost $670,000 more on average than standard data breaches (IBM, 2025).

Shadow Mode

Deployment

Testing approach where AI systems run in parallel with human operations, making recommendations but not decisions, to validate accuracy.

Why it matters: De-risks AI deployment by validating accuracy before autonomous operation.

Sovereign AI

Infrastructure

AI systems trained, deployed, and operated on infrastructure you control, within jurisdictions you choose, independent of third-party API providers. Unlike SaaS AI where every request sends data to external servers, sovereign AI keeps data, models, and processing on your servers or private cloud.

Why it matters: Required for industries where data residency, regulatory compliance, and vendor independence are non-negotiable.

Sovereign Infrastructure

Infrastructure

Architecture where all LLMs, embeddings, and decision logs remain within your corporate firewall or VPC. Zero API calls to external cloud providers (no OpenAI, Azure, or public inference). All model weights stay under your control, whether deployed on-premise or within a private cloud.

Why it matters: The only architecture that guarantees 100% data sovereignty for regulated industries.

Token

AI/ML

Unit of text processed by LLMs (roughly 4 characters). Models have token limits and pricing is per-token.

Why it matters: Key to understanding API costs and context window constraints.
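The 4-characters-per-token rule of thumb makes back-of-envelope cost estimates easy. A sketch (the heuristic and the per-token price are illustrative; real counts depend on the model's tokenizer):

```python
def estimate_tokens(text: str) -> int:
    """Rough token count using the ~4 characters-per-token heuristic.
    Real counts vary by tokenizer; use this only for ballpark figures."""
    return max(1, len(text) // 4)

def estimate_cost(text: str, usd_per_million_tokens: float) -> float:
    """Ballpark input cost; the price argument is a placeholder, not a quote."""
    return estimate_tokens(text) / 1_000_000 * usd_per_million_tokens

report = "word " * 20_000              # ~100,000 characters
print(estimate_tokens(report))          # ~25,000 tokens by the heuristic
```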

Tool Calling (Function Calling)

AI/ML

LLM capability to invoke external APIs, databases, or functions as part of task execution.

Why it matters: Enables AI agents to take actions beyond text generation.
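The pattern is the same across providers: the model emits a structured request naming a tool and its arguments, and your code dispatches it. A sketch with a hypothetical tool and a simplified call shape (real provider schemas differ in details):

```python
# Hypothetical tool: stand-in for a real database or API lookup.
def get_order_status(order_id: str) -> str:
    return f"Order {order_id}: shipped"

TOOLS = {"get_order_status": get_order_status}

def execute_tool_call(call: dict) -> str:
    """Dispatch a model-requested tool call to the matching function."""
    fn = TOOLS[call["name"]]
    return fn(**call["arguments"])

# Simplified shape of what an LLM emits when it decides to call a tool:
model_output = {"name": "get_order_status", "arguments": {"order_id": "A-1042"}}
print(execute_tool_call(model_output))   # Order A-1042: shipped
```

The model never executes anything itself; your dispatch layer is where governance controls (allowlists, risk scoring, HITL gates) attach.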

Use Case Prioritization

Strategy

Ranking AI opportunities from highest-impact to lowest, considering both business value and technical complexity.

Why it matters: Ensures resources are focused on the most valuable AI initiatives.

Vector Database

Infrastructure

Specialized database (like Pinecone, Weaviate) that stores embeddings and enables semantic search for RAG pipelines.

Why it matters: Enables fast, semantic retrieval of relevant information for AI systems.

Zero-Retention Architecture

Infrastructure

A deployment pattern where data flows through AI systems for processing but is never persisted outside the organization's infrastructure. No training data, input queries, or generated outputs are stored by external providers.

Why it matters: Eliminates data residency and retention risks for regulated industries handling sensitive information.