Figma Opens Its Canvas to AI Agents: Claude Code, Codex and Cursor Can Now Design for You — Respecting Your Design System
Figma launches an open beta MCP server that lets AI agents create and edit assets directly on the canvas, using your existing components, variables and auto-layout.

Until yesterday, AI agents could see your Figma designs. They could read them, analyze them, describe them. But not touch them. As of March 23, 2026, that's over. Through an updated MCP server and a new "Skills" system, Claude Code, Codex and Cursor now write directly to your canvas — using your components, your variables and your auto-layout. This is the end of generic AI design: every output respects your design system, your brand guidelines and your component logic.
The Generic AI Design Problem — Finally Solved
Every designer has lived the same scenario. Ask an AI agent to generate a mockup. Receive a clean result that looks nothing like your brand. Wrong colors. Wrong components. Wrong grid.
Figma put it plainly: "To date, AI agents haven't had this context, which is why so many designs created by AI often feel unfamiliar and generic."
The problem wasn't the AI. It was access. No agent had knowledge of your design system — your design tokens (reusable values: colors, typography, spacing), your components, your variables, your conventions. The AI was drawing from scratch every time.
Not anymore. As announced by Figma on March 23, 2026, the MCP server (Model Context Protocol — the standard protocol that lets AI agents connect to external tools) now provides full write access to the canvas.
use_figma: How It Works Technically
Three technical building blocks make up this update.
use_figma is the main MCP tool. It lets agents create and edit real Figma assets — using the components and variables already defined in your library. The metaphor: it's like giving Claude Code direct access to your component library. It no longer draws from scratch. It assembles your existing pieces like a senior designer.
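Under the hood, MCP tool invocations are JSON-RPC 2.0 `tools/call` requests. The sketch below builds such a request for `use_figma`; the request envelope follows the MCP specification, but the argument names (`action`, `component`) are hypothetical — the real parameter schema is published by the server itself via `tools/list`.

```python
import json

def build_tool_call(request_id: int, tool: str, arguments: dict) -> dict:
    """Build an MCP tools/call request (MCP is layered on JSON-RPC 2.0)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }

# Hypothetical arguments for illustration only — the real use_figma
# schema is advertised by the Figma MCP server, not guessed here.
request = build_tool_call(1, "use_figma", {
    "action": "create_frame",       # assumed parameter
    "component": "Button/Primary",  # assumed: a component from your library
})
print(json.dumps(request, indent=2))
```

The point: from the agent's side, "writing to the canvas" is just another tool call — which is why nine different clients could plug in on day one.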
generate_figma_design translates HTML/CSS from a live application into editable Figma layers. Concretely: you point the agent at your production app, and it reverse-engineers the interface into modifiable Figma components. The bridge between code and design works both ways.
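To make the reverse-engineering idea concrete, here is a toy sketch that flattens HTML into a nested, layer-like tree using only the standard library. It only illustrates the concept behind `generate_figma_design`; the real tool also resolves computed CSS and maps nodes onto auto-layout frames.

```python
from html.parser import HTMLParser

class LayerExtractor(HTMLParser):
    """Toy sketch: turn HTML elements into a nested tree of layer-like
    dicts, roughly the shape a design tool would need to rebuild frames."""
    def __init__(self):
        super().__init__()
        self.layers = []   # top-level "frames"
        self.stack = []    # open elements, for nesting

    def handle_starttag(self, tag, attrs):
        layer = {"type": tag, "attrs": dict(attrs), "children": []}
        parent = self.stack[-1]["children"] if self.stack else self.layers
        parent.append(layer)
        self.stack.append(layer)

    def handle_endtag(self, tag):
        if self.stack:
            self.stack.pop()

parser = LayerExtractor()
parser.feed('<div class="card"><button class="cta">Buy</button></div>')
print(parser.layers)
```

Running this yields one `div` layer containing one `button` child — the skeleton a design tool would then dress with styles and constraints.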
Skills are the third building block — and the most innovative. More on those below.
Nine MCP clients are compatible today: Augment, Claude Code, Codex, Copilot CLI, Copilot in VS Code, Cursor, Factory, Firebender and Warp. The ecosystem is broad from day one.
Skills: Your Conventions Become Rules the Agent Follows
This is the most strategic feature in the announcement. Skills are Markdown files — plain natural language text — that encode your team's conventions. Anyone can write them. No coding required.
A Skill defines steps, sequences and conventions to follow. The agent reads it before every action and complies. Think of it as a creative direction brief — but machine-executable.
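Figma's canonical Skill format isn't reproduced here, so the following is a hypothetical sketch of what such a Markdown file might look like — the headings, skill name and token names are all invented for illustration.

```markdown
# Skill: apply-brand-buttons (hypothetical example)

## When to use
Whenever the agent creates or edits a call-to-action.

## Rules
1. Always instantiate `Button/Primary` from the team library; never draw a new rectangle.
2. Fill color must come from the `color/brand/primary` variable, not a hard-coded hex.
3. Horizontal padding uses the `space/md` token; let auto-layout handle sizing.

## Verify
Take a screenshot of the result and confirm it matches the `Button/Primary` master.
```

Because it's plain Markdown, a design lead can write and review this like any other brief — no coding required.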
The self-healing mechanism is particularly powerful: after every action, the agent takes a screenshot of the result, compares it to the expected outcome and automatically iterates if something doesn't match. No human intervention needed.
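The loop described above — act, screenshot, compare, retry — can be sketched in a few lines. This is a hedged illustration, not Figma's implementation: the real agent diffs rendered images, while here the caller supplies stub functions so the control flow stands on its own.

```python
def self_heal(apply_change, screenshot, matches_expectation, max_rounds=3):
    """Act, screenshot, compare, and retry until the result matches,
    or escalate after max_rounds attempts."""
    for attempt in range(1, max_rounds + 1):
        apply_change(attempt)
        shot = screenshot()
        if matches_expectation(shot):
            return attempt  # converged without human intervention
    raise RuntimeError("still diverging after max_rounds; escalate to a human")

# Toy stand-ins: the "canvas" converges on the third try.
state = {"value": 0}
rounds = self_heal(
    apply_change=lambda n: state.update(value=n),
    screenshot=lambda: state["value"],
    matches_expectation=lambda shot: shot >= 3,
)
print(rounds)  # 3
```

The design choice worth noting is the cap on iterations: an agent that compares its own output against an expectation needs a bounded retry budget, or a mismatch it can never fix becomes an infinite loop.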
Five community Skills are already available:
| Skill | Creator | What It Does |
|---|---|---|
| `/figma-generate-library` | Figma | Creates components from an existing codebase |
| `/apply-design-system` | Chris Goebel (Edenspiekermann) | Connects existing designs to the design system |
| `/create-voice` | Ian Guisard (Uber) | Generates screen reader accessibility specs |
| `/sync-figma-token` | Firebender | Syncs design tokens between code and Figma |
| `/multi-agent` | Augment Code | Runs parallel multi-agent workflows |
Yuhki Yamashita, Figma CPO, sums it up: "Starting today, agents can write directly to the Figma canvas. Teams can now use Claude Code, Codex, and other agents to create and edit real Figma assets with full awareness of your existing design systems."
OpenAI Codex + Anthropic Claude in the Same Announcement — The Double Endorsement
It's rare: the two largest AI labs publicly validate the same Figma feature on the same day.
Ed Bayes, Design Lead at OpenAI Codex, states: "Teams at OpenAI use Figma to iterate, refine, and make decisions about how a product comes together. Now, Codex can find and use all the important design context in Figma."
Cat Wu, Head of Product at Claude Code (Anthropic), adds: "Skills teach Claude Code how to work directly in the design canvas, so you can build in a way that stays true to your team's intent and judgment."
The signal is clear. Figma isn't picking sides between models. It's becoming the standard infrastructure for AI design — model-agnostic. OpenAI, Anthropic, Cursor: everyone goes through the same MCP server. Matt Colyer, Product Lead at Figma, confirms internal teams have been using it for two weeks already.
On the security side, access is controlled via OAuth user authorization: no agent touches your files without explicit permission. On pricing, the beta is free; the API will move to usage-based billing afterward.
Figma vs Google Stitch vs Lovable — The Week Design Went Agentic
| Tool | Design System Access | Compatible Agents | MCP | Open Source |
|---|---|---|---|---|
| Figma MCP | Native | 9 clients | Yes | Skills |
| Google Stitch | No | Gemini only | No | No |
| Lovable | Figma import | GPT-4o | No | No |
| Adobe Firefly | Partial | Adobe only | No | No |
| v0 (Vercel) | Figma import | Claude | No | No |
This announcement doesn't exist in a vacuum. It's part of a 72-hour sequence redefining design.
Monday, Lovable announces it's hunting for acquisitions — vibe coding consolidation begins. Tuesday morning, Cursor admits Composer 2 runs on Kimi, a Chinese model — the AI dev stack faces a trust crisis. Tuesday evening, Figma opens its canvas to agents.
The irony is exquisite. Google Stitch wanted to bypass Figma by generating UIs from text via Gemini — without existing design systems. Figma responds by integrating every agent directly. Rather than competing with AI, Figma becomes the infrastructure layer every agent uses.
In summary:
- Figma launches on March 23, 2026 an open beta allowing AI agents to write directly to the canvas — Claude Code, Codex, Cursor, Copilot, Augment, Warp are compatible.
- The `use_figma` tool lets agents create and edit real Figma assets using existing components, variables and auto-layout.
- Skills (Markdown files) encode team conventions and guide the agent with an automatic self-healing loop.
- OpenAI (Ed Bayes) and Anthropic (Cat Wu) both publicly validated the feature on day one.
- Free during the beta — will become a usage-based paid API. The official Figma MCP server guide is available now.
The Figma canvas was the most frustrating idle resource for AI agents. They could read, analyze, describe. But not touch. As of yesterday, they have hands. In 72 hours, Lovable announced acquisitions, Cursor admitted it relies on a Chinese model, and Figma opened its canvas to agents. Design is no longer a human discipline — it's an agentic workflow. The question is no longer "can AI design?" — it's "why do you still need to do it yourself?"


