uSpec connects your AI agent and Figma into a single pipeline. You provide a component link and context, and the system produces formatted documentation directly in your Figma file.

The pipeline at a glance

Every skill extracts component data and renders documentation directly in Figma via the MCP. The internal steps differ depending on what each skill needs to analyze. The diagrams below show what happens inside each skill.

Triggering a skill

Skills are triggered by typing @ followed by the skill name in Cursor’s chat.
1. Type @. In Cursor’s chat, type @. Cursor shows an autocomplete menu of available skills.
2. Select a skill. Continue typing to filter (e.g., @create-v) or use arrow keys to select. The skill name must match exactly: @create-voice, not create voice or voice spec.
3. Add your prompt. After the skill name, paste a Figma link and add any context about states, variants, or behaviors.

If autocomplete doesn’t show the skill, verify the uSpec project is open in Cursor. Skills load from the .cursor/skills/ folder.
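A complete invocation might look like the following. The Figma link and the behavioral details are illustrative placeholders, not real values:

```text
@create-voice https://www.figma.com/design/<file-id>/Button?node-id=<node-id>
This button has default, hover, pressed, and disabled states.
The label truncates with an ellipsis when the container is narrower than its content.
```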

Inside each skill

Every skill loads an instruction file, reads platform-specific or domain-specific reference files, extracts data from Figma via MCP, runs through a checklist, and renders the output. The reference files determine what the agent knows about each domain.
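The shared flow above can be sketched in Python. This is a minimal illustration of the step ordering only; the function name and data shapes are hypothetical, not uSpec’s actual internals:

```python
def run_skill(skill_name: str, figma_link: str, context: str) -> list[str]:
    """Illustrative sketch of the shared skill pipeline (hypothetical names)."""
    steps = []

    # 1. Load the skill's instruction file from .cursor/skills/
    steps.append(f"load instructions for {skill_name}")

    # 2. Read platform- or domain-specific reference files
    steps.append("read reference files")

    # 3. Extract component data from Figma via the MCP
    steps.append(f"extract {figma_link} via MCP")

    # 4. Run the skill's checklist against the extracted data and user context
    steps.append(f"run checklist with context: {context}")

    # 5. Render the formatted documentation back into Figma
    steps.append("render output in Figma")
    return steps
```

The point of the sketch is that extraction and rendering are common to every skill, while the reference files loaded in step 2 are what vary by domain.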
The anatomy skill extracts child layers, element types, and property definitions, then classifies each element’s role before rendering numbered markers with an attribute table directly in Figma.

The skill reads child layers, element types, visibility, and property definitions (booleans, variant axes, instance swaps) from the component. An AI classification step then determines each element’s role (optional slot, fixed sub-component, content element, structural/decorative) and writes semantic notes. Utility sub-components like Spacer and Divider are automatically skipped. Eligible nested instances get their own per-child sections with separate markers and tables, and cross-references link back from the composition table.
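The classification step can be sketched as a set of rules. The four role names come from the description above; the heuristics, field names, and helper function below are illustrative assumptions, not the skill’s real logic:

```python
from typing import Optional

# Utility sub-components the anatomy skill skips automatically.
UTILITY_NAMES = {"Spacer", "Divider"}

def classify(layer: dict) -> Optional[str]:
    """Assign a role to a child layer, or return None to skip it.

    Hypothetical sketch: the "optional" flag stands in for a layer
    bound to a boolean visibility property.
    """
    if layer["name"] in UTILITY_NAMES:
        return None                       # utility sub-components are skipped
    if layer.get("optional"):
        return "optional slot"            # toggled via a boolean property
    if layer["type"] == "INSTANCE":
        return "fixed sub-component"      # nested component instance
    if layer["type"] == "TEXT":
        return "content element"          # user-visible text
    return "structural/decorative"        # frames, shapes, backgrounds
```

In this sketch, layers classified as None never receive a marker or a table row, mirroring how Spacer and Divider are dropped from the output.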

What the agent sees vs. what you provide

The agent can extract structure, tokens, and styles from Figma automatically. But some information only exists in your head:
The agent can extract:
  • Component layers and hierarchy
  • Design token names and values
  • Variant axes and properties
  • Visual dimensions and spacing
  • Styles and color values

You need to describe:
  • States not visible in the current frame
  • Behavioral modes (fill vs. hug, truncation)
  • Focus order preferences
  • Platform-specific interaction details
  • Business logic or conditional rules
The more context you provide in your prompt, the more accurate the output. A one-line prompt works, but adding states, behaviors, and edge cases produces significantly better specs.
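As a contrast, here is a minimal prompt next to a context-rich one. The link is a placeholder and the behavioral details are invented for illustration:

```text
Minimal:
@create-anatomy <figma-link>

Richer:
@create-anatomy <figma-link>
The trailing icon is optional and hidden by default.
The label uses hug sizing and truncates after one line.
The disabled state is not shown in this frame.
```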

Architecture overview

uSpec supports two Figma MCP providers — choose the one that fits your setup:
  • Figma Console MCP (by Southleft) connects via a Desktop Bridge plugin running inside Figma Desktop, communicating over WebSocket. It exposes 59+ tools for design creation and variable management.
  • Native Figma MCP (by Figma) connects directly to Figma’s API with read and write access. No Desktop Bridge plugin required.
Both providers give the agent real-time access to component data, tokens, styles, and screenshots. Every skill renders through the MCP, regardless of which provider or host you use. See Getting Started for setup instructions.
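Whichever provider you choose, it is registered as an MCP server in your host’s configuration. As a rough illustration, a Cursor-style entry might take the following shape; the server name and URL are placeholders, so follow the provider’s own setup docs for the real values:

```json
{
  "mcpServers": {
    "figma": {
      "url": "http://127.0.0.1:<port>/sse"
    }
  }
}
```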
MCP providers update their capabilities and setup instructions frequently. For the latest details, see the Figma Console MCP docs or the native Figma MCP docs.