# Live2D Getting Started
Live2D lets your agent express themselves with a fully animated avatar — responding to emotions, actions, and tool calls in real-time. This guide walks you through the complete setup from import to first animation.
## What is Live2D in AnySoul?

AnySoul integrates Live2D Cubism models as animated avatars for your agents. Instead of static images, your agent’s visual presence responds dynamically to their emotional state, actions, and even tool usage.
Unlike traditional VTuber setups where a human performer drives the model through face-tracking, AnySoul uses language model output to drive your avatar automatically. The LLM outputs emotions and actions during conversation, and your mapping rules translate those into Live2D parameter animations — no camera required.
Two rendering modes:
- Web — Live2D avatar renders directly in the browser alongside your chat
- Desktop Pet — Transparent overlay window on your desktop (Electron app)
Supported models: Cubism 3, 4, and 5 (.model3.json format)
## Complete Workflow Overview

Setting up Live2D follows three steps. Each step corresponds to a tab in the Live2D Settings panel:
```
Live2D Setup Workflow

① Model Tab ─── Import & Defaults
│    Upload ZIP → set default parameter values
│    (idle pose: eye openness, mouth shape, head angle, etc.)
│
▼
② Mapping Tab ─── Animation Clips
│    Create reusable clips:
│      "smile" → ParamMouthForm: 0→0.8, ParamEyeLOpen: 1→0.7
│      "think" → ParamAngleX: 0→-8, ParamEyeLOpen: 1→0.5
│      "speak" → ParamMouthOpenY: cycling 0↔0.6 (loop)
│
│    Each clip has a playback mode:
│      · One Shot — play once, then release
│      · Loop     — repeat until trigger ends
│      · Repeat N — play exactly N times
│
│    And a completion policy:
│      · Complete — finish full animation even if trigger ends
│      · Bound    — stop immediately when trigger ends
│
▼
③ Mapping Tab ─── Rules
│    Connect LLM output → clips:
│      emotion=happy        → play "smile" clip
│      emotion=thinking     → play "think" clip
│      action=speaking      → play "speak" clip (loop)
│      tool_call=web_search → play "search" clip
│      idle                 → play "breathing" clip (loop)
│
│    All matching rules fire simultaneously.
│    Conflicts resolved by clip priority + blend mode.
▼

Agent chats → LLM outputs emotion/action → rules match
→ clips play → Live2D parameters animate → model moves
```

## Step 1: Import Model & Set Defaults

### Prepare Your Model
Section titled “Prepare Your Model”Your Live2D model must be packaged as a ZIP file containing:
```
my-model.zip
├── model.model3.json        # Required — model definition
├── textures/                # Texture images (.png)
│   └── texture_00.png
├── motions/                 # Motion files (.motion3.json)
│   ├── idle_01.motion3.json
│   └── happy.motion3.json
└── expressions/             # Expression files (.exp3.json)
    ├── smile.exp3.json
    └── surprised.exp3.json
```

Where to get models:
- Create your own with Live2D Cubism Editor
- Browse nizima or Booth for community models
- Use the free sample models from the Live2D SDK
### Import

- Open your agent’s settings panel
- Navigate to Live2D Model settings
- Click the Model tab
- Click Upload Model (ZIP) and select your ZIP file
- Wait for the import to complete — the model renders immediately in the preview area
After import, your agent’s standing type automatically switches to Live2D mode.
### Set Default Parameters

Switch to the Parameters tab and configure the model’s idle baseline — the neutral pose your model holds when no animation is playing:
- Eyes — Set `ParamEyeLOpen` / `ParamEyeROpen` to a comfortable openness (e.g., 0.85 for a natural look)
- Mouth — Set `ParamMouthForm` for a default expression (-1 = pout, 0 = neutral, 1 = smile)
- Head — Adjust `ParamAngleX/Y/Z` if you want a slight tilt
- Idle Motion — In the Motions section, pin a motion as the default idle (e.g., a gentle breathing loop)
These defaults are what the model returns to after each animation clip finishes playing.
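Conceptually, the idle baseline is just a map of parameter IDs to resting values that animations release back toward. A minimal sketch, assuming nothing about AnySoul's real schema (the `IdleDefaults` type, the values, and `releaseToward` are all illustrative; the parameter IDs are standard Cubism names):

```typescript
// Illustrative only — not AnySoul's actual configuration format.
type IdleDefaults = Record<string, number>;

const idleDefaults: IdleDefaults = {
  ParamEyeLOpen: 0.85, // slightly relaxed eyes
  ParamEyeROpen: 0.85,
  ParamMouthForm: 0.2, // faint smile (-1 pout … 1 smile)
  ParamAngleZ: 3,      // subtle head tilt
};

// After a clip finishes, each animated parameter eases back toward
// its resting value; t is release progress in [0, 1].
function releaseToward(current: number, target: number, t: number): number {
  const clamped = Math.min(Math.max(t, 0), 1);
  return current + (target - current) * clamped;
}
```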
### Model Management

Once imported, you can:
- Rename — Click the pencil icon next to the model name
- Export — Download the model as a ZIP for backup or sharing
- Delete — Remove the model and switch back to static image mode
- Mouse Tracking — Enable to make the model follow your cursor
## Step 2: Create Animation Clips

Switch to the Mapping tab. Clips are the building blocks — each clip defines a parameter animation that can be triggered by rules.
### What is a Clip?

A clip is a reusable animation that changes one or more model parameters over time. For example, a “smile” clip might:
- Increase `ParamMouthForm` from 0 to 0.8 over 200ms
- Decrease `ParamEyeLOpen` from 1.0 to 0.7 over 200ms
- Hold for 1 second, then release back to defaults over 500ms
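That “smile” clip can be pictured as keyframe tracks sampled over time. A hypothetical sketch (the `Clip`/`Track` shapes and the `sample` helper are illustrative, not AnySoul's internal clip format):

```typescript
// Hypothetical clip structure — for illustration only.
interface Keyframe { timeMs: number; value: number; }
interface Track { param: string; keys: Keyframe[]; }
interface Clip { name: string; tracks: Track[]; holdMs: number; releaseMs: number; }

const smile: Clip = {
  name: "smile",
  tracks: [
    { param: "ParamMouthForm", keys: [{ timeMs: 0, value: 0 }, { timeMs: 200, value: 0.8 }] },
    { param: "ParamEyeLOpen",  keys: [{ timeMs: 0, value: 1 }, { timeMs: 200, value: 0.7 }] },
  ],
  holdMs: 1000,   // hold the pose for 1 s
  releaseMs: 500, // then ease back to defaults
};

// Linear interpolation between a two-keyframe track's endpoints,
// clamping outside the keyframe range.
function sample(track: Track, timeMs: number): number {
  const [a, b] = track.keys;
  if (timeMs <= a.timeMs) return a.value;
  if (timeMs >= b.timeMs) return b.value;
  const t = (timeMs - a.timeMs) / (b.timeMs - a.timeMs);
  return a.value + (b.value - a.value) * t;
}
```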
### Creating Clips

Three ways to create:
- Manual — Click Add Clip, add parameter tracks, set keyframes. Use the Full Editor (timeline) for precise multi-parameter control.
- From Expression — Click Create From Expression to import a model’s `.exp3.json` preset as a clip with configurable enter/hold/release timing.
- From Motion — Click Import Motion As Clip to convert a `.motion3.json` motion file into an editable keyframe clip.
### Playback Modes

Each clip has a playback mode that determines how it repeats:
| Mode | Behavior | When to Use |
|---|---|---|
| One Shot | Play once, then release | Reactions (nod, wave, surprise) |
| Loop | Repeat until the trigger ends | Ongoing states (speaking mouth cycle, thinking) |
| Repeat N | Play exactly N times | Fixed sequences (3 blinks, 2 nods) |
### Completion Policy

Controls what happens when the trigger condition ends mid-animation:
| Policy | Behavior | When to Use |
|---|---|---|
| Complete | Finish the full animation, then release | Don’t want to cut off mid-gesture |
| Bound | Stop immediately when trigger ends | Tightly sync to agent state changes |
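Taken together, the two settings answer “how many passes?” and “what happens if the trigger ends early?”. A minimal decision sketch, with the caveat that the types and the `shouldContinue` function are assumptions about how such a scheduler could work, not AnySoul's actual implementation:

```typescript
type PlaybackMode = "one-shot" | "loop" | { repeat: number };
type CompletionPolicy = "complete" | "bound";

// Decide whether a clip keeps playing on this frame.
// Illustrative logic only.
function shouldContinue(
  mode: PlaybackMode,
  policy: CompletionPolicy,
  playsFinished: number,
  triggerActive: boolean,
  midAnimation: boolean,
): boolean {
  // "Bound" clips stop the moment their trigger ends.
  if (!triggerActive && policy === "bound") return false;
  // "Complete" clips may finish the current pass, but start no new one.
  if (!triggerActive) return midAnimation;
  if (mode === "loop") return true;            // repeat while triggered
  if (mode === "one-shot") return playsFinished < 1;
  return playsFinished < mode.repeat;          // Repeat N
}
```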
For detailed clip reference, see Animation & Mapping.
## Step 3: Configure Mapping Rules

Rules are the “glue” that connects your agent’s language model output to animation clips. This is what makes AnySoul’s Live2D different from traditional VTuber setups — instead of face-tracking, the LLM drives the model.
### How It Works

During conversation, the LLM outputs structured metadata alongside its text response:
- Emotion — how the agent feels (happy, sad, angry, thinking, etc.)
- Action — what the agent is doing (speaking, nodding, waving, etc.)
- Tool calls — specific tools being used (web_search, manage_memory, etc.)
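For example, a single assistant turn might carry metadata along these lines. This is a hypothetical shape for illustration; AnySoul's actual wire format may differ:

```typescript
// Hypothetical metadata attached to one LLM response turn.
interface TurnMetadata {
  emotion?: string;    // e.g. "happy", "thinking"
  action?: string;     // e.g. "speaking", "nodding"
  toolCalls: string[]; // e.g. ["web_search"]
}

const turn: TurnMetadata = {
  emotion: "happy",
  action: "speaking",
  toolCalls: ["web_search"],
};
```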
Rules match these outputs to clips:
```
┌──────────────────────┐       ┌──────────────────────┐
│ LLM outputs:         │       │ Rules match:         │
│  emotion = happy     │  ──►  │  → play "smile" clip │
│  action = speaking   │  ──►  │  → play "speak" clip │
└──────────────────────┘       └──────────────────────┘
```

### Condition Types

| Type | What It Matches | Example |
|---|---|---|
| Emotion | Agent’s emotional state | emotion = happy |
| Action | What the agent is doing | action = speaking |
| Tool Call | Specific tool being used | tool_call = web_search |
| Text | Substring in agent’s monologue | text contains "haha" |
| Idle | No active triggers | Background breathing animation |
Conditions support `*` wildcards (match any value) and compound logic (AND / OR).
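At its core, matching a rule is a string comparison per condition, with `*` as a catch-all, folded together with AND or OR. A sketch under those assumptions (the `Rule`/`Condition` shapes and `matches` are illustrative, not AnySoul's implementation):

```typescript
// Illustrative rule-matching sketch.
interface Condition { type: "emotion" | "action" | "tool_call"; value: string; }
interface Rule { conditions: Condition[]; logic: "AND" | "OR"; clip: string; }

type AgentState = Partial<Record<Condition["type"], string>>;

function matches(rule: Rule, state: AgentState): boolean {
  const hit = (c: Condition) =>
    c.value === "*" ? state[c.type] !== undefined : state[c.type] === c.value;
  return rule.logic === "AND"
    ? rule.conditions.every(hit)
    : rule.conditions.some(hit);
}

// Compound AND rule: fires only when both conditions hold.
const happySpeak: Rule = {
  conditions: [
    { type: "emotion", value: "happy" },
    { type: "action", value: "speaking" },
  ],
  logic: "AND",
  clip: "excited-talk",
};
```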
### All-Match Parallel Playback

All rules whose conditions are satisfied fire simultaneously. If the agent is “happy” and “speaking”, both the smile rule and the speak rule trigger — their clips play in parallel.
When multiple clips affect the same parameter, conflicts are resolved by clip priority (0–3) and blend mode (Add / Multiply / Overwrite). See Animation & Mapping for details.
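One way to picture the resolution: sort competing contributions by priority, then fold them into the parameter's base value according to each blend mode. The exact blend semantics here are an assumption for illustration, not AnySoul's documented behavior:

```typescript
type BlendMode = "add" | "multiply" | "overwrite";
interface Contribution { value: number; priority: number; blend: BlendMode; }

// Resolve competing writes to a single parameter.
// Illustrative sketch — higher priority is applied last, so an
// "overwrite" at the highest priority wins outright.
function resolve(base: number, contribs: Contribution[]): number {
  const sorted = [...contribs].sort((a, b) => a.priority - b.priority);
  return sorted.reduce((acc, c) => {
    switch (c.blend) {
      case "add":       return acc + c.value;
      case "multiply":  return acc * c.value;
      case "overwrite": return c.value;
    }
  }, base);
}
```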
### Idle & Base Behaviors

For always-on background animations (breathing, subtle swaying, blinking):
- Create a clip with Loop playback
- Add it as an Idle Behavior instead of a normal rule
- It plays continuously when no other rules match, giving your model natural movement even when idle
## Verify Everything

After completing all three steps:
- Preview area — Confirm the model renders correctly with your default parameters
- Test rules — Use the sequence tester in the Mapping tab to simulate emotion/action sequences and verify clips trigger correctly
- Chat test — Start a conversation with your agent and watch the avatar respond in real-time
## Next Steps

- Parameters Reference — Deep dive into every parameter, motions, parts, and expressions
- Animation & Mapping — Advanced clip editing, conflict resolution, and detailed use cases