This node pack integrates the Tencent HunyuanPortrait framework into ComfyUI. It allows users to generate portrait animations driven by a video, using a source image for appearance.
- Simplified Model Loading: Uses a single folder input for main models and a dropdown for VAE selection.
- Local Model Loading: All models are loaded from local paths; the nodes never download from the Hugging Face Hub automatically.
- Portrait Animation: Generates videos based on a source portrait image and a driving video.
- Customizable Preprocessing & Generation: Offers various parameters to control the preprocessing and video generation stages.
You need to download the model weights from the official HunyuanPortrait sources.
1. HunyuanPortrait Models:
   - Go to the HunyuanPortrait Hugging Face repository.
   - Download all of the main models into `ComfyUI/models/HunyuanPortrait/hyportrait`: `unet.pth`, `dino.pth`, `expression.pth`, `headpose.pth`, `image_proj.pth`, `motion_proj.pth`, `pose_guider.pth`
   - `arcface.onnx`, downloaded to `ComfyUI/models/HunyuanPortrait` from https://huggingface.co/FoivosPar/Arc2Face/resolve/da2f1e9aa3954dad093213acfc9ae75a68da6ffd/arcface.onnx
   - `yoloface_v5m.pt`, downloaded to `ComfyUI/models/HunyuanPortrait` from https://huggingface.co/LeonJoe13/Sonic/resolve/main/yoloface_v5m.pt
2. VAE Model:
   - The HunyuanPortrait project uses a Stable Video Diffusion VAE; a common choice is the VAE from `stabilityai/stable-video-diffusion-img2vid-xt`.
   - You can download it from Hugging Face: `stabilityai/stable-video-diffusion-img2vid-xt`, file `vae/diffusion_pytorch_model.fp16.safetensors`.
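If you prefer scripting the downloads, here is a minimal sketch using `huggingface_hub`. The HunyuanPortrait repo id (`tencent/HunyuanPortrait`) and its `hyportrait/` file layout are assumptions; verify them against the official repository. The SVD repository may also require accepting its license with a logged-in Hugging Face account.

```python
# Sketch of a scripted download with huggingface_hub.
# ASSUMPTION: the repo id "tencent/HunyuanPortrait" and the "hyportrait/"
# layout inside it are guesses -- check the official repository first.
from huggingface_hub import hf_hub_download

MODELS_DIR = "ComfyUI/models/HunyuanPortrait"

# Main .pth weights -> ComfyUI/models/HunyuanPortrait/hyportrait/
for name in ("unet", "dino", "expression", "headpose",
             "image_proj", "motion_proj", "pose_guider"):
    hf_hub_download(repo_id="tencent/HunyuanPortrait",  # assumed repo id
                    filename=f"hyportrait/{name}.pth",
                    local_dir=MODELS_DIR)

# Face models, using the repos/revisions from the URLs listed above
hf_hub_download(repo_id="FoivosPar/Arc2Face",
                filename="arcface.onnx",
                revision="da2f1e9aa3954dad093213acfc9ae75a68da6ffd",
                local_dir=MODELS_DIR)
hf_hub_download(repo_id="LeonJoe13/Sonic",
                filename="yoloface_v5m.pt",
                local_dir=MODELS_DIR)

# SVD VAE: the "vae/" prefix in the repo filename lands the file in
# ComfyUI/models/vae/ when local_dir is ComfyUI/models
hf_hub_download(repo_id="stabilityai/stable-video-diffusion-img2vid-xt",
                filename="vae/diffusion_pytorch_model.fp16.safetensors",
                local_dir="ComfyUI/models")
```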
Organize the downloaded files as follows:
1. Create a main directory for HunyuanPortrait models within your ComfyUI models folder: `ComfyUI/models/HunyuanPortrait/`
2. Inside `ComfyUI/models/HunyuanPortrait/`, place:
   - The entire `hyportrait` folder (which you downloaded, containing the various `.pth` files)
   - `arcface.onnx`
   - `yoloface_v5m.pt`
3. Place the VAE model (e.g., `diffusion_pytorch_model.fp16.safetensors`) in your standard ComfyUI VAE directory: `ComfyUI/models/vae/` (e.g., `ComfyUI/models/vae/diffusion_pytorch_model.fp16.safetensors`)
The final structure should look like this:
```
ComfyUI/
└── models/
    ├── HunyuanPortrait/   <-- This is your 'model_folder' for the node
    │   ├── arcface.onnx
    │   ├── yoloface_v5m.pt
    │   └── hyportrait/    <-- folder containing dino.pth, unet.pth, etc.
    │       ├── dino.pth
    │       ├── expression.pth
    │       ├── headpose.pth
    │       ├── image_proj.pth
    │       ├── motion_proj.pth
    │       ├── pose_guider.pth
    │       └── unet.pth
    └── vae/
        └── diffusion_pytorch_model.fp16.safetensors   <-- Or your chosen VAE file
```
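To confirm everything landed in the right place, a quick check like the following (run from the directory containing `ComfyUI/`, paths mirroring the tree above) can help:

```python
# Sanity check that all expected model files are in place.
from pathlib import Path

root = Path("ComfyUI/models")
expected = [
    root / "HunyuanPortrait" / "arcface.onnx",
    root / "HunyuanPortrait" / "yoloface_v5m.pt",
    root / "vae" / "diffusion_pytorch_model.fp16.safetensors",
] + [
    root / "HunyuanPortrait" / "hyportrait" / f"{name}.pth"
    for name in ("dino", "expression", "headpose",
                 "image_proj", "motion_proj", "pose_guider", "unet")
]

missing = [p for p in expected if not p.exists()]
print("All files found." if not missing else
      "Missing:\n" + "\n".join(str(p) for p in missing))
```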
1. Load HunyuanPortrait Models:
   - Add the `Load HunyuanPortrait Models` node (found under the `HunyuanPortrait` category).
   - Model Folder: Set this to your main HunyuanPortrait weights directory (e.g., `ComfyUI/models/HunyuanPortrait`).
   - VAE Name: Select the correct VAE file (e.g., `diffusion_pytorch_model.fp16.safetensors`) from the dropdown.
2. Load Inputs:
   - Use a `LoadImage` node for the source portrait image.
   - Use a `LoadVideo (VHS)` node or similar to load your driving video frames.
3. Preprocess Data:
   - Add the `HunyuanPortrait Preprocessor` node.
   - Connect `hunyuan_models` from the `Load HunyuanPortrait Models` node.
   - Connect the `source_image` from your `LoadImage` node.
   - Connect the `driving_video_frames` from your video loading node.
   - Set `driving_video_fps` to match the actual FPS of your input driving video (a quick way to check this is shown after these steps).
   - Adjust other preprocessing parameters like `limit_frames` and `output_crop_size` as needed.
4. Generate Video:
   - Add the `HunyuanPortrait Generator` node.
   - Connect `hunyuan_models` from the `Load HunyuanPortrait Models` node.
   - Connect `preprocessed_data` from the `HunyuanPortrait Preprocessor` node.
   - Connect the `driving_data` output from the `HunyuanPortrait Preprocessor` to the `driving_data` input of the `HunyuanPortrait Generator`; this carries information about the temporary video file for cleanup.
   - Adjust generation parameters such as `height`, `width`, `num_inference_steps`, `fps_generation` (FPS of the output video), guidance scales, etc.
5. Preview/Save Output:
   - Connect the `IMAGE` output of the `HunyuanPortrait Generator` to a `PreviewImage` node to see individual generated frames.
   - Alternatively, connect it to a `VideoCombine (VHS)` node to assemble the frames into a video file (e.g., MP4, GIF). Ensure the `frame_rate` in `VideoCombine` matches your `fps_generation` setting in the Generator for correct playback speed.
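For the `driving_video_fps` setting in step 3, here is one way to read the true FPS of a driving video outside ComfyUI, assuming `opencv-python` is installed (the filename is a placeholder):

```python
# Probe the driving video's real FPS and frame count with OpenCV.
import cv2

cap = cv2.VideoCapture("driving_video.mp4")  # placeholder path
fps = cap.get(cv2.CAP_PROP_FPS)
frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
cap.release()
print(f"FPS: {fps:.2f}, frames: {frames}")
```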
- Color Correction: Generated frames can show color shifts relative to the source image; use a color match node to restore color levels (a sketch of the idea follows).
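As a rough illustration of what a color match node does, the sketch below shifts each generated frame's per-channel statistics toward the source portrait (Reinhard-style mean/std transfer). This is the idea only, not what any particular node implements; dedicated color-match nodes are more robust.

```python
# Minimal per-channel mean/std color transfer (Reinhard-style).
import numpy as np

def match_color(frame: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """frame, reference: float32 RGB arrays in [0, 1]."""
    out = frame.copy()
    for c in range(3):
        f_mean, f_std = frame[..., c].mean(), frame[..., c].std()
        r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
        # Rescale this channel's distribution to match the reference
        out[..., c] = (frame[..., c] - f_mean) * (r_std / (f_std + 1e-6)) + r_mean
    return np.clip(out, 0.0, 1.0)
```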
- Dependency Version: It is crucial to use `diffusers==0.29.0` as specified in `requirements.txt`. Newer versions of `diffusers` may have breaking API changes that are incompatible with the vendored code in this node pack.
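To verify that the pinned version is active in the Python environment ComfyUI runs in:

```python
# Confirm the diffusers pin required by this node pack.
import diffusers

assert diffusers.__version__ == "0.29.0", (
    f"Expected diffusers==0.29.0, found {diffusers.__version__}"
)
```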
