What is Vibe Coding?

A plain-English definition of vibe coding, where the term came from, the tools that enable it, honest critiques, and where gamified feedback like VibeMon fits.

Published: April 15, 2026

Vibe coding is a style of software development in which an engineer prompts an AI coding agent in natural language and reviews the intent of its output rather than reading every line of generated code. The term was coined by Andrej Karpathy, co-founder of OpenAI and former director of AI at Tesla, in a short social-media post in early 2025 that went viral within weeks.

The one-paragraph working definition

You are vibe coding when three conditions hold at once: (1) an AI agent with tool-use capabilities, such as Claude Code, Cursor, Gemini CLI, or Codex CLI, writes and edits files on your behalf; (2) you interact with it in conversational prompts rather than by typing code directly; and (3) your review mode is outcome-first — you run the program, check the behavior, and rely on tests or logs rather than a line-by-line read of the diff. If any of those three is missing, you are doing something adjacent — AI autocomplete, pair programming with an LLM, or traditional coding — but not vibe coding in the strong sense Karpathy described.

Origin and early reception

Karpathy used the phrase in a February 2025 post where he described giving up on reading the AI's diffs and letting the model drive. His framing struck a nerve because it put a name on a workflow that many senior engineers had quietly adopted after the first capable agent releases in late 2024. The reaction split in predictable ways: founders and solo builders embraced it, staff engineers at large companies rolled their eyes, and academics wrote responses about provenance and code quality. Within a quarter, the term appeared in VC decks, job postings, and conference abstracts — a speed of absorption that suggests it was naming a category that already existed rather than inventing one.

The stack that makes vibe coding possible

Vibe coding is not a new programming language or methodology — it is a workflow enabled by the combined capabilities of agentic LLMs, local file-system access, and shell tool use. The stack people currently use breaks down into roughly four layers:

  • Agent runtime — Claude Code (Anthropic), Cursor, Gemini CLI (Google), Codex CLI (OpenAI), Aider, and Cline are the most common options in 2026. All of them let a single prompt trigger file reads, file writes, shell commands, test runs, and web fetches.
  • Model — Claude Opus 4.6 and Sonnet 4.6, GPT-4.x, and Gemini 2.5 are the models most commonly paired with the runtimes above. Model choice affects reading comprehension on large codebases and the quality of long-horizon planning.
  • Hook layer — each runtime exposes hooks (PostToolUse, Stop, UserPromptSubmit, SessionStart, Notification) that let external systems react to agent activity. VibeMon uses this layer. See the detailed explanation at /learn/claude-code-hooks.
  • Review surface — a browser, a running app, a dev-server URL, or a test report. The reviewer does not read the diff line by line; they check the behavior and delegate low-level correctness to the model.
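To make the hook layer concrete, here is a minimal sketch of a hook handler. It assumes the runtime pipes a single JSON event to the hook's stdin, as Claude Code does for its hooks; the exact field names used here (`hook_event_name`, `tool_name`, `session_id`) are assumptions and can vary by runtime and version.

```python
import json
import sys

def summarize_event(event: dict) -> dict:
    """Reduce a raw hook event to metadata only.

    Field names are assumptions based on Claude Code's hook input;
    other runtimes may use different keys.
    """
    return {
        "event": event.get("hook_event_name", "PostToolUse"),
        "tool": event.get("tool_name", "unknown"),
        "session": event.get("session_id", ""),
    }

if __name__ == "__main__":
    # The agent runtime pipes one JSON event to the hook's stdin;
    # an external system (a logger, a pet, a dashboard) consumes stdout.
    event = json.load(sys.stdin)
    print(json.dumps(summarize_event(event)))
```

A handler like this is the entire integration surface: the runtime fires it on each matching event, and whatever it emits is the external system's whole view of the session.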

What vibe coding is not

Three workflows get mislabeled as vibe coding in day-to-day conversation. AI autocomplete (tab-to-accept in an editor) is not vibe coding because the human is still writing code token by token. LLM pair programming (chat with ChatGPT, paste answer into editor) is not vibe coding because the edit boundary is still human-mediated. Agentic refactors (the agent edits many files but you read every diff) blur the line but tend toward traditional engineering when the reviewer cares about syntax-level detail.

Honest critiques

Vibe coding has strong critics, and ignoring them weakens the category. The most common objections:

  • Accumulating unknown-unknowns. When you stop reading diffs, you lose the mental model of the codebase. After a few weeks, parts of the project become black boxes even to their nominal author. This is manageable for throwaway prototypes and risky for production services.
  • Security drift. Agents are willing to add dependencies and copy snippets whose provenance and license are not obvious. Without a review gate, supply chain risk creeps up.
  • Skill atrophy. Junior engineers who vibe code from day one may develop a shallow mental model of the systems they ship. This is an empirical claim and the verdict is not in.
  • Cost. Agentic runs burn tokens. A long vibe-coding session on a large codebase can cost several dollars in API usage — trivial for a funded founder, not trivial for a student.

The honest answer is that vibe coding is a power tool: it is brilliant for greenfield work, one-shot tools, scripts, landing pages, and exploratory prototypes, and it needs discipline (tests, staging, review gates) for anything that will run in production.

How VibeMon fits the workflow

VibeMon is designed specifically for vibe coders. Because the unit of work in this workflow is no longer a keystroke, traditional productivity metrics like WakaTime dashboards or manual git-commit counts under-represent what you actually did during a session. VibeMon listens to the agent's activity — each tool use, each prompt submit, each stop event — and turns it into drops that feed a pixel slime pet. The slime grows through six stages and reaches a Perfect state, at which point it awards a badge and regrows.

The design intent is ambient feedback, not productivity surveillance. There is no leaderboard by default, no minute counter, and no manager-facing report. A quick glance at the Apple Watch complication is usually enough to confirm that the coding session is in full swing. Privacy-wise, VibeMon never transmits source code, prompts, or completions — only event metadata. The FAQ covers the exact fields that are and are not collected.
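As a sketch of what "event metadata only" can mean in practice, consider an allowlist filter: anything not on the list — prompts, completions, file contents — is dropped before transmission. The field names below are invented for illustration; the FAQ is the authoritative list of what is actually collected.

```python
# Hypothetical metadata-only filter. The allowed field names here are
# examples, not VibeMon's real schema (see the FAQ for that).
ALLOWED_FIELDS = {"event_type", "tool", "timestamp", "session"}

def strip_content(raw_event: dict) -> dict:
    """Keep only allowlisted metadata fields.

    Because this is an allowlist rather than a blocklist, any new or
    unexpected field — including one carrying source code or prompt
    text — is excluded by default.
    """
    return {k: v for k, v in raw_event.items() if k in ALLOWED_FIELDS}
```

The allowlist design choice matters: a blocklist fails open when a runtime adds a new field, while an allowlist fails closed.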

Article last updated April 15, 2026. The working definition will be revised as the category evolves; we will note changes in the changelog.