Summary of "Wan 2.2 Animate - забираем всю мощь полной модели чтобы создавать хай-рес контент. Пошаговый гайд"
Short summary
This is a step-by-step practical guide to running the full Wan 2.2 Animate model in the cloud (via a prebuilt ComfyUI / Stable Diffusion server image) to produce high-resolution animated content from a single static image plus a reference video. The creator stresses running the full (non-quantized) model on powerful cloud GPUs to avoid OOM/attention errors and to maximize quality.
Goal
- Produce 1920×1080 animated output quickly (avoiding low-res 480p output and very long local render times).
- Use a cloud GPU to run the full model (no FP8/quantization compromises) and avoid local RAM/VRAM limits.
Recommended hardware
- Preferred GPU: Tesla H200 (tested in the demo, ~140 GB VRAM). It was much faster than the H100 for the example workload (≈2 min vs ≈5 min for a 77-frame job).
- Other options shown: H100 and consumer cards such as the 3090/4090/5090 (depending on availability and cost).
Why use the cloud
- Many users lack the local RAM/VRAM needed for the full Wan 2.2 Animate model (a 30+ GB diffusion model plus large text encoders).
- Cloud GPUs (H100/H200, etc.) allow running full models without quantization and reduce memory/attention errors.
- Additional cloud conveniences:
  - A prebuilt server image with ComfyUI and the required drivers (Triton, CUDA, attention libraries) preinstalled.
  - Per-second billing and persistent volumes (models live on volumes separate from instances).
  - Built-in file manager for large model uploads (drag & drop).
  - Keypair creation for SSH/FileZilla access (a scripted SFTP upload sketch follows this list).
  - Promo link / bonus for the first deposit mentioned in the video description.
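For large model uploads, the built-in file manager or FileZilla is enough, but the same transfer can be scripted over SSH. A minimal sketch, assuming Python with `paramiko` installed and a keypair created in the cloud panel; the host, key name, filenames, and the `/workspace/ComfyUI` path are placeholders, not values from the video:

```python
# Hedged SFTP upload sketch (pip install paramiko); all names below are placeholders.
import os
import paramiko

HOST = "your-instance-ip"                      # from the cloud provider's panel
KEY = os.path.expanduser("~/.ssh/id_cloud")    # private key created in the panel
LOCAL = "wan2.2_animate_full.safetensors"      # hypothetical local filename
REMOTE = "/workspace/ComfyUI/models/diffusion_models/wan2.2_animate_full.safetensors"  # assumed path

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username="root", key_filename=KEY)

sftp = client.open_sftp()
# The callback prints rough progress, which helps for a 30+ GB upload.
sftp.put(LOCAL, REMOTE, callback=lambda done, total: print(f"{done / total:.1%}", end="\r"))
sftp.close()
client.close()
```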
Cloud setup & ComfyUI steps
- Create a cloud server, select GPU & configuration, and choose the prebuilt image with ComfyUI.
- Wait for ComfyUI to boot, then open its web UI (a readiness-check sketch follows this list).
- Import the provided workflow JSON (author supplies the file in the video description).
- Click “Install Missing Custom Nodes” → install all custom nodes, restart the UI and refresh the browser.
- Upload models (see next section) to the appropriate folders on the persistent volume.
- Refresh ComfyUI so the newly uploaded models appear in workflow nodes.
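The web UI can take a while to come up after the instance starts. A small readiness-check sketch, assuming the standard ComfyUI HTTP server on port 8188 (the host is a placeholder); it polls the root page and then reads `/object_info` to confirm that custom node classes are registered after the restart:

```python
# Poll ComfyUI until it responds, then verify that custom nodes are registered.
import time
import requests

BASE = "http://your-instance-ip:8188"   # assumed default ComfyUI port

# Wait until the web UI answers.
while True:
    try:
        if requests.get(BASE, timeout=5).status_code == 200:
            break
    except requests.RequestException:
        pass
    time.sleep(10)

# /object_info lists every registered node class, including installed custom nodes.
nodes = requests.get(f"{BASE}/object_info", timeout=30).json()
print("registered node classes:", len(nodes))
# The substring below is a hypothetical check for Wan-related custom node packs.
print("WanVideo nodes present:", any("WanVideo" in name for name in nodes))
```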
Model installation — where to upload files
- Upload the full Wan 2.2 Animate diffusion model (30+ GB) into the diffusion models folder (a hedged download/placement sketch follows this list).
- Upload the large text encoder into text_encoders.
- Upload required LoRA files into the loras folder.
- Some models (e.g., the CLIP Vision model from Wan 2.1) may already be preloaded in the cloud image.
- After uploads, refresh ComfyUI so nodes can detect the new models.
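The actual download links are in the video description; purely as an illustration, here is a hedged sketch that pulls files from Hugging Face straight into ComfyUI's model folders on the persistent volume. The repo IDs, filenames, and the `/workspace/ComfyUI` base path are assumptions, not the author's links:

```python
# Hedged sketch: fetch models directly into ComfyUI's model folders on the volume.
# Repo IDs, filenames, and the base path are placeholders; use the links from the
# video description for the actual full Wan 2.2 Animate files.
from huggingface_hub import hf_hub_download

BASE = "/workspace/ComfyUI/models"   # assumed ComfyUI install path on the volume

downloads = [
    # (repo_id, filename, target subfolder): all hypothetical examples
    ("some-org/wan2.2-animate-full", "wan2.2_animate_full.safetensors", "diffusion_models"),
    ("some-org/wan-text-encoder", "umt5_xxl_fp16.safetensors", "text_encoders"),
    ("some-org/wan-loras", "wan2.2_animate_lora.safetensors", "loras"),
]

for repo_id, filename, subfolder in downloads:
    path = hf_hub_download(repo_id=repo_id, filename=filename,
                           local_dir=f"{BASE}/{subfolder}")
    print("saved:", path)
```

Downloading on the cloud instance itself is usually much faster than pushing 30+ GB up from a home connection.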
Running the workflow
- Upload your reference video and reference image into the workflow.
- Configure the sampler, number of frames (the example used 77), resolution, sampling steps, and the final upscaler (the example pipeline upscales the result to 1920×1080). The same run can also be queued programmatically; see the sketch after this section.
- Example timings from the demo:
  - H200 (140 GB): ≈2:18 for the first full 6-step run of 77 frames; ≈1:50 for subsequent runs.
  - H100: ≈5:00 for the same 77-frame job (about 2× slower in the demo).
- The upscaler node produced the final 1920×1080 output without memory errors when using the full model on the H200.
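The demo drives everything from the web UI, but the same run can be queued over ComfyUI's standard HTTP API. A sketch under the assumption that the workflow has been exported in API format ("Save (API Format)" in ComfyUI) and that the server listens on port 8188; the host and the JSON filename are placeholders:

```python
# Queue an API-format workflow over ComfyUI's HTTP API and wait for it to finish.
import json
import time
import uuid
import requests

BASE = "http://your-instance-ip:8188"   # placeholder host, assumed default port

with open("wan22_animate_workflow_api.json") as f:   # exported via "Save (API Format)"
    prompt = json.load(f)

resp = requests.post(f"{BASE}/prompt",
                     json={"prompt": prompt, "client_id": str(uuid.uuid4())})
prompt_id = resp.json()["prompt_id"]

# Poll the history endpoint until the job (e.g. the 77-frame run) completes.
while True:
    history = requests.get(f"{BASE}/history/{prompt_id}").json()
    if prompt_id in history:
        print("outputs:", history[prompt_id]["outputs"])
        break
    time.sleep(10)
```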
Changes vs prior workflow/video
- Fixed vertical video loading behavior.
- Mask UI element note: the mask may appear only after the node finishes loading — allow the workflow to finish and the canvas will update.
- Resizer behavior changed to center-based resizing, which prevents blank bottom areas when the source is compressed (a minimal center-resize sketch follows this list).
- Added an upscaler node at the end of the pipeline.
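To illustrate what center-based resizing does (this is not the workflow's actual node), a minimal sketch with Pillow: the frame is scaled to cover the target box and cropped around its center, so compressed or vertical sources do not leave blank bands at the bottom. The target size and filenames are placeholders:

```python
# Minimal illustration of center-based resize/crop (not the actual ComfyUI node).
from PIL import Image, ImageOps

def center_resize(path: str, size=(1920, 1080)) -> Image.Image:
    """Scale and center-crop an image to `size`, keeping the middle of the frame."""
    img = Image.open(path)
    # ImageOps.fit scales to cover the target box and crops around the given center.
    return ImageOps.fit(img, size, method=Image.LANCZOS, centering=(0.5, 0.5))

frame = center_resize("reference_frame.png")   # placeholder filename
frame.save("reference_frame_centered.png")
```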
Clean-up and cost control
- Stop the cloud server when finished (wait until the instance shows offline/gray).
- Delete the instance if you don’t need it; persistent volumes are billed separately and can be kept or deleted.
- Stop or delete instances carefully so you don't lose work that has not yet been written to the persistent volume.
Practical tips
- Start downloading/uploading big models early (the full model is 30+ GB). Upload to the cloud volume before generation to save time.
- Use the provided cloud image with preinstalled drivers and ComfyUI to avoid driver and configuration problems.
- If a node or mask doesn’t show immediately, allow the workflow to finish and refresh the canvas.
- Use the center-based resizer to avoid blank or cropped areas with compressed or vertical inputs.
- Shut down instances to avoid unnecessary charges; volumes persist for later reuse.
Note: The presenter provides a promo link and mentions a 20% bonus for first deposit in the video description. Workflow JSON and model download links are also provided free in the video description.
Performance / price notes (demo numbers)
- H200 (140 GB): ~2:18 first run for 77 frames; ~1:50 average subsequent runs.
- H100: ~5:00 for the same 77-frame job (roughly 2× slower in the demo).
- The price difference between the two GPUs in the demo was small (~200 rubles), while the H200 delivered much faster runs.
Where to get files and config
- The presenter provides direct links to download the workflow JSON and the required models in the video description (free, no paywall).
Main speakers / sources
- Video author / presenter: the channel host who identifies themself as “Ia, Igeneratn”.
- Cloud provider: the presenter’s recommended preconfigured GPU cloud (referred to as “my friends’ cloud” in the video) offering Tesla H100/H200 GPUs, per-second billing, and persistent volumes.
- The presenter also references a previous, more detailed video on the Wan 2.2 Animate workflow for node-by-node explanations.
Additional note
Some model and driver names in autogenerated subtitles may be slightly misrecognized; the summary focuses on the demonstrated workflow and setup steps.
Category
Technology