AnimateDiff Legacy Animation v5.0 [ComfyUI]
We will learn how to make this animation using ComfyUI and AnimateDiff.
All workflow links are in the description below.
Drag and drop the first workflow to get started.
First we have the input settings, then AnimateDiff, then prompts, then ControlNet (which has a batch or single option), then the KSampler settings, then the video export settings.
First, copy and paste the output folder path where frames will be rendered.
Choose the dimensions of the output.
Select batch size.
Let's keep it at 72 for this tutorial.
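For reference, these three settings live on the latent node if you inspect the workflow JSON. A minimal sketch of that fragment, assuming the stock EmptyLatentImage node carries the dimensions and batch size (the node ID and dimensions here are placeholders):

```python
# Fragment of a ComfyUI workflow graph in API format (a Python dict that
# serializes to the workflow JSON). Node ID "3" is arbitrary.
latent_settings = {
    "3": {
        "class_type": "EmptyLatentImage",
        "inputs": {
            "width": 512,     # placeholder: your chosen output width
            "height": 768,    # placeholder: your chosen output height
            "batch_size": 72, # number of frames queued, per the tutorial
        },
    }
}
```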
I am using an anime-style model for this.
I am choosing the Concept Pyromancer LoRA to add some cool fire effects, setting its weight to around 0.5.
Here you choose the AnimateDiff motion model.
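In the workflow JSON, the checkpoint and LoRA choices map onto loader nodes. A hedged sketch using ComfyUI's stock CheckpointLoaderSimple and LoraLoader nodes; the file names below are placeholders for whatever model and LoRA you picked:

```python
# Loader fragment in ComfyUI API format. File names are placeholders;
# the 0.5 strength matches the LoRA weight used in the tutorial.
loaders = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "my_anime_model.safetensors"},  # placeholder
    },
    "2": {
        "class_type": "LoraLoader",
        "inputs": {
            "lora_name": "concept_pyromancer.safetensors",  # placeholder name
            "strength_model": 0.5,  # LoRA weight ~0.5 as in the tutorial
            "strength_clip": 0.5,
            "model": ["1", 0],  # model output of the checkpoint loader
            "clip": ["1", 1],   # CLIP output of the checkpoint loader
        },
    },
}
```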
For prompts, you can use any words you like.
I am using this prompt.
ControlNet is turned off by default.
I want to use the directory option for OpenPose reference images, so I will unmute the directory group.
I have OpenPose images extracted from my old renders, so I will use those.
You can extract them yourself using the CN Passes Extractor workflow.
Now I will unmute the ControlNet node and enable OpenPose.
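If you would rather extract OpenPose passes outside ComfyUI, the controlnet_aux package exposes an equivalent annotator. A minimal sketch, assuming your frames sit in a ./frames folder and controlnet-aux is installed:

```python
# pip install controlnet-aux pillow
from pathlib import Path
from PIL import Image
from controlnet_aux import OpenposeDetector

# Download/load the OpenPose annotator weights.
detector = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

src, dst = Path("frames"), Path("openpose_passes")  # assumed folder names
dst.mkdir(exist_ok=True)
for frame in sorted(src.glob("*.png")):
    pose = detector(Image.open(frame))  # returns the rendered pose image
    pose.save(dst / frame.name)
```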
I will also change the FPS in the video export settings to 12 so the motion does not look too fast; at 72 frames, 12 fps gives a 6-second clip.
Time to queue the render and wait for it to finish. It came out well; now we will move on to the upscaling workflow.
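Queueing here just means pressing Queue Prompt in the UI. If you drive ComfyUI from scripts instead, the same render can be queued through its HTTP API. A minimal sketch, assuming a default local server on port 8188 and a workflow already exported in API format:

```python
# pip install requests
import json
import requests

# Workflow exported from ComfyUI via "Save (API Format)"; name is a placeholder.
with open("animatediff_workflow_api.json") as f:
    graph = json.load(f)

# POST the node graph to the local ComfyUI server's /prompt endpoint.
resp = requests.post("http://127.0.0.1:8188/prompt", json={"prompt": graph})
resp.raise_for_status()
print(resp.json())  # includes the prompt_id of the queued job
```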
Now drag and drop the video-to-video upscaler workflow. Here we have the input video, the output path and video settings, model settings, AnimateDiff, prompts, IP-Adapter, the KSampler, the upscale value, and the video output.
Right-click on the video, select Copy As Path, and paste it into the input video node.
Leave the output settings at their defaults.
I am putting 72 in the load cap since I rendered only 72 frames; you can also put 0 if you want to process the full length of the video.
Select the same model you used before.
Set the target resolution to 1200.
Make sure to set this FPS to match your video, or adjust it for a faster or slower speed.
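For reference, the load cap and input path correspond to inputs on the Video Helper Suite loader node. A sketch of that fragment, assuming the commonly used VHS_LoadVideo node (the node ID, path, and exact input names are my assumptions, not taken from the workflow):

```python
# Video Helper Suite loader fragment in ComfyUI API format.
# Input names follow VHS_LoadVideo as I understand it; treat as a sketch.
load_video = {
    "10": {
        "class_type": "VHS_LoadVideo",
        "inputs": {
            "video": r"C:\renders\shot01.mp4",  # placeholder: your Copy-As-Path value
            "frame_load_cap": 72,   # 72 = only the frames we rendered; 0 = full video
            "skip_first_frames": 0,
        },
    }
}
```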
Everything is good to go.
Now we will hit render.
Just keep in mind that the IP-Adapter will take a while to process.
After rendering, it will output to the same folder.
Lastly, we will use the video-to-video face fixer workflow.
Like the previous workflow, it has similar settings.
Same as before, copy and paste the video path into the input video node.
Set the load cap and video settings.
Select the same model as before.
I will enter prompts to bring out more detail.
I will also add some upscaling to get better faces.
Again, make sure to set this FPS to match your video for a faster or slower speed.
Then I will start the render.
After the face fix, this is how it looks.
Likewise, I did the other two shots and added some frame interpolation for smoothness with FlowFrames.
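FlowFrames is a GUI tool; if you want a scriptable stand-in, ffmpeg's minterpolate filter does motion-compensated frame interpolation. A rough sketch, assuming ffmpeg is on your PATH; note this is a classical interpolator, not the neural one FlowFrames uses, so expect different quality:

```python
# Motion-interpolate the 12 fps output up to 24 fps with ffmpeg's
# minterpolate filter. File names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg", "-i", "face_fixed.mp4",          # placeholder input name
        "-vf", "minterpolate=fps=24:mi_mode=mci",  # motion-compensated interpolation
        "interpolated.mp4",
    ],
    check=True,
)
```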
I post workflows, tutorials, and more on my Patreon for free, so everyone can learn and improve their AI artwork.
Thanks for watching, and thanks to all my Patrons for the support.
It means a lot to me.
You all keep me going and help me keep my tutorials free for everyone.