OpenAI

April 10, 2026

OpenAI Academy

AI fundamentals

Understand the basics of AI, including what it is, how it works, and how it’s used.


Welcome! If you’re new to AI, you don’t need a technical background to get started. What helps most is a simple map of the landscape—so you can understand what AI systems can do, how they’re packaged, and how to choose the right tool for your needs.

What is AI?

Artificial intelligence (AI) is a broad category of software that can recognize patterns, learn from data, and produce useful outputs. 

You’ve probably seen AI show up in everyday moments, like when:

  • Your map app reroutes you around traffic
  • Your bank flags a purchase as “unusual”
  • A customer support chatbot answers common questions

AI is a category—not one single tool. Within that category are models: trained systems that learn from data and then apply what they’ve learned to new situations. Some models specialize in speech, vision, or forecasting. 

You’re likely starting your AI journey by using conversational AI tools, like ChatGPT. The models behind ChatGPT specialize in language—these are called large language models.

Understanding how large language models work

A large language model (LLM) is a model designed to work with language. It learns patterns from large amounts of text drawn from many sources, so it can generate and transform text in helpful ways. An LLM doesn’t “know” things the way a person does; instead, it predicts the most likely next piece of language based on context. Over time, advances in computing power, training methods, and access to large datasets have made it possible to build larger and more capable models.
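To make “predicts the most likely next piece of language” concrete, here is a deliberately tiny toy in Python. It only counts which word most often follows another in a short text, which is far simpler than what a real LLM does internally, but it illustrates the same core idea: given context, pick the most likely continuation. The sample text and function names are invented for this sketch.

```python
from collections import Counter, defaultdict

# Toy illustration (not how real LLMs work internally): predict the next
# word by counting which word most often follows a given word in a text.
text = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the dog sat on the rug"
)

# Count word -> next-word occurrences.
follows = defaultdict(Counter)
words = text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" more often than any other word
print(predict_next("sat"))  # "sat" is always followed by "on"
```

A real LLM replaces these simple counts with a neural network trained on vastly more text, so it can use long stretches of context rather than a single preceding word.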

OpenAI and other frontier research labs build these models as a core part of their offerings, then make them available through user-facing products (like ChatGPT or Codex) and through APIs, which let developers use those models to build their own AI tools and integrate AI into existing software.
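As a rough sketch of what “using a model through an API” means in practice: a developer’s program sends a structured request (usually JSON over HTTPS, with an API key for authentication) describing which model to use and what the user said, and receives the model’s reply back. The endpoint shape, field names, and model name below are illustrative assumptions, not any provider’s exact API; developers should consult the provider’s API reference for the real details.

```python
import json

# Hypothetical request a developer's app might send to a chat-style
# model API. Field names and the model name are illustrative only.
request_payload = {
    "model": "example-model-name",  # hypothetical model identifier
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize this paragraph in one sentence."},
    ],
}

# The payload is serialized to JSON and sent over HTTPS with an API key.
body = json.dumps(request_payload)
print(len(body) > 0)
```

This is how the same underlying models power both consumer products and custom tools built by other companies.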

How models evolve over time

Research labs release new models once they have been trained and have passed internal evaluation and safety testing. When you hear that an AI model was “trained,” it usually refers to two stages—think of it like someone learning and getting better at their job.

The first stage is pre-training, when the model learns general patterns from a huge amount of text, which gives it broad skills like summarizing, drafting, translating, and explaining. 

Think of it like a new employee who spends weeks reading everything they can—manuals, examples of great work, past projects, FAQs—until they understand the “shape” of the job.

Now the “employee” starts doing the work, and a “manager” coaches them: be clearer, ask good follow-ups, match the right tone, and follow company policies. That’s post-training. This stage helps the model follow instructions more reliably, communicate in a useful style, and handle tricky situations better.

Post-training is also where safety work is emphasized—training designed to reduce harmful outputs, decline inappropriate requests, and respond more carefully when a topic is sensitive or uncertain.

As models are updated and trained, you might notice shifts in tone or responses. If you want consistent results, be explicit about your goal, audience, format, and constraints—and expect the model to be more careful when safety or uncertainty is involved.

Reasoning and non-reasoning models

Different models are tuned for different tradeoffs—like speed, depth, and how carefully they follow multi-step instructions. Some are designed to respond quickly and smoothly for everyday tasks (drafting, summarizing, rewriting, brainstorming). Others are designed to spend more compute thinking through a problem before they answer, which can improve reliability on harder, multi-step work. 

Non-reasoning models (sometimes labeled as “Instant”) are optimized for fast, fluent output. They’re a good default when the task is straightforward and you mainly want momentum: turn notes into a message, polish wording, generate options, or extract key points. 

Reasoning models (sometimes labeled as “Thinking”) are trained for deliberate, step-by-step problem solving—things like planning, complex analysis, tricky debugging, or decisions with constraints and edge cases. They may take longer, but they’re often better at tracking multiple moving parts and avoiding shallow mistakes.

If you’re just getting started, you don’t need to worry about model choice—the default ChatGPT experience is designed to auto-switch so you can focus on your question, not the settings.

Over time, as you learn what you like (speed vs depth, quick drafts vs careful analysis), you can start experimenting with the optional controls: for example, choosing Auto most of the time, and switching to Thinking when a task is complex or high-stakes.

Summary

Here’s the simple hierarchy:

  • AI = the overall field
  • Models = trained systems that perform particular tasks
  • Large language models (LLMs) = models focused on understanding and generating language, trained over time by AI research labs
  • ChatGPT = a product that helps you use an LLM effectively

Once you have this picture in mind, you’ll be set up to learn how to get great results with tools like ChatGPT—starting with how to talk to it to get the results you want.

Continue learning with OpenAI Academy

Discover additional guides and resources to help you build practical AI skills.