Seedance 2 ComfyUI Guide

Build node-based AI video generation workflows with Seedance 2 inside ComfyUI. From installation to advanced ControlNet pipelines, batch processing, and GPU optimization — the complete technical guide for power users who want full control over every parameter.

What Is ComfyUI?

ComfyUI is an open-source, node-based graphical interface for running AI models. Instead of typing commands or clicking buttons in a web app, you connect visual nodes in a graph — each node performs one operation, and data flows between them through connections.

Node-Based Architecture

Every operation is a discrete node: loading a model, encoding a prompt, running inference, saving output. You drag connections between nodes to build a pipeline. This makes complex workflows visual and reproducible. You can save an entire workflow as a JSON file and share it with anyone.
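Reloading a saved workflow does not require the GUI at all. A minimal sketch, assuming ComfyUI is running locally on its default port 8188 and that `workflow.json` was exported in ComfyUI's API format (the "Save (API Format)" option); the filename is illustrative:

```python
import json
import uuid
import urllib.request

def build_queue_payload(workflow: dict, client_id: str) -> bytes:
    """Wrap an API-format workflow graph in the JSON body /prompt expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(path: str, server: str = "http://127.0.0.1:8188") -> dict:
    """Load a saved workflow file and submit it to ComfyUI's queue."""
    with open(path) as f:
        workflow = json.load(f)
    req = urllib.request.Request(
        f"{server}/prompt",
        data=build_queue_payload(workflow, uuid.uuid4().hex),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # The response includes the prompt_id ComfyUI assigned to this job
        return json.load(resp)
```

Calling `queue_workflow("workflow.json")` queues the graph exactly as it was saved, which is what makes shared JSON workflows reproducible on any machine.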

Originally built for Stable Diffusion image generation, ComfyUI has expanded to support video models, audio generation, 3D rendering, and more. The community has created over 5,000 custom node packs covering everything from face swapping to real-time previews.

Why Creators Choose ComfyUI

  • Reproducibility: Save and reload exact workflow configurations
  • Composability: Chain multiple models in a single pipeline
  • Automation: Queue hundreds of generations with different parameters
  • Efficiency: Smart memory management keeps only active models in VRAM
  • Community: Thousands of custom nodes extend functionality
  • Free & Open Source: No subscription, no cloud dependency

Why Use Seedance with ComfyUI

Dreamina is the easiest way to generate Seedance videos. But ComfyUI unlocks capabilities that no web interface can match. Here are the three biggest reasons to run Seedance inside ComfyUI.

Custom Workflows

Build multi-step pipelines that go far beyond generate-and-download. Chain Seedance with upscalers, frame interpolation, color grading, watermark removal, and format conversion — all in a single automated flow. Generate a video, upscale every frame with Real-ESRGAN, interpolate to 60fps with RIFE, and export as ProRes — hands-free.

Node-Based Control

Every parameter is exposed as a node input. Adjust CFG scale, seed, resolution, duration, and sampling steps individually. Connect parameter nodes to sliders for real-time tweaking. Branch your workflow to test multiple settings simultaneously and compare outputs side by side before committing to a full render.
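In API form, this kind of tweaking is just editing node inputs in the workflow dictionary. A hedged sketch of a parameter sweep: the node id `"3"` and the input names `seed` and `cfg` are placeholders, since an exported workflow uses its own node ids and input names.

```python
import copy

def sweep_parameter(workflow: dict, node_id: str, input_name: str, values):
    """Yield one workflow copy per value, with a single node input changed."""
    for value in values:
        variant = copy.deepcopy(workflow)  # leave the original graph untouched
        variant[node_id]["inputs"][input_name] = value
        yield variant

# Hypothetical API-format fragment: node "3" stands in for a sampler node.
base = {"3": {"class_type": "KSampler", "inputs": {"seed": 42, "cfg": 7.0}}}

variants = list(sweep_parameter(base, "3", "cfg", [5.0, 7.0, 9.0]))
# Each variant can then be queued in turn for a side-by-side comparison.
```

Queuing the variants back-to-back is how you compare settings before committing to a full render.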

Integration with Other Models

Use ControlNet for precise pose and depth control. Apply IP-Adapter for style transfer from reference images. Chain Seedance output into face restoration models or style-transfer networks. ComfyUI is the only interface where you can combine Seedance with the entire ecosystem of open-source AI models in a single graph.

What You Need Before Starting

Gather these requirements before installing Seedance ComfyUI nodes. API-based workflows have minimal hardware requirements since inference runs on remote servers.

Software Requirements

  • Python 3.10 or 3.11 — Required by ComfyUI. Python 3.12+ may have compatibility issues with some nodes
  • Git — For cloning ComfyUI and custom node repositories
  • ComfyUI (latest) — The base application. We cover installation in the next section
  • Seedance API Key — From BytePlus, fal.ai, or Replicate
  • pip / conda — For managing Python dependencies
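The software checklist above can be verified with a short script. A minimal sketch; `SEEDANCE_API_KEY` is an assumed environment variable name, since each provider documents its own:

```python
import os
import shutil
import sys

def preflight() -> list[str]:
    """Return a list of problems; an empty list means the basics are in place."""
    problems = []
    if sys.version_info[:2] not in ((3, 10), (3, 11)):
        problems.append(
            f"Python {sys.version_info.major}.{sys.version_info.minor} found; "
            "3.10 or 3.11 recommended"
        )
    if shutil.which("git") is None:
        problems.append("git not found on PATH")
    if not os.environ.get("SEEDANCE_API_KEY"):  # assumed variable name
        problems.append("SEEDANCE_API_KEY is not set")
    return problems
```

Run it before installing custom nodes; fixing these three items first avoids the most common setup failures.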

Hardware Requirements

  • GPU (API mode): Any GPU that runs ComfyUI — even integrated graphics work
  • GPU (local post-processing): 8GB+ VRAM recommended for upscaling and interpolation
  • RAM: 16GB minimum, 32GB recommended for complex workflows
  • Storage: 5GB for ComfyUI + nodes + cached outputs
  • Internet: Required for API calls. Stable connection recommended for large video downloads

No GPU at all? You can run ComfyUI in CPU-only mode for pure API workflows. Seedance inference happens on remote servers, so your local hardware only needs to handle the ComfyUI interface and any local post-processing nodes. For cloud GPU options, see our local setup guide.