How to install and use ComfyUI - Stable Diffusion

Hello friends, in this video I'm going to show you how to install ComfyUI and get it running.
ComfyUI is an extremely powerful user interface because you have total freedom, total control to create your own workflows.
I'm going to give you some advanced workflows you can work with.
I'm going to show you some basics, and I'm also going to set you up with an advanced extension that you're going to thank me for.
By the way, do you know why chickens are so funny? Because this is ComfyUI, and this is what we will be installing today.
This is also ComfyUI.
Let's delve into this and go over the user interface together.
All the links are going to be in the description below.
And first of all, we're going to go to GitHub and find ComfyUI.
So if you scroll down here, you have the installing section, or you can just scroll down even further. Nowadays you don't need to use git clone;
you can just use the direct download link. After you download that, you're going to have a 7z file, and you can use WinRAR or you can use 7-Zip.
I personally prefer 7-Zip, and then you can either extract to wherever you want, or just extract in the current location.
It's a rather big file, so it will take some time. You're going to go into the folder, and there's a README here that says "very important",
but it basically says that if you have an NVIDIA GPU, you're going to use this .bat file.
But if you don't, you're going to run on CPU, which will be noticeably slower.
And there's also some troubleshooting here.
Like if you get a red error in the UI, make sure you have a model.
So we're going to download that as well.
If you need to update your ComfyUI, it says so in the README here, but you can also go into the update folder here
and just run the update ComfyUI script.
Now, if you are using Automatic1111, or have another installation of Stable Diffusion where you already have models,
you can go into the ComfyUI directory here, rename this file, and just remove the .example part here.
Then you should be able to edit this with Notepad, and then you can set your base path to where your Automatic1111 is.
Now, if you don't have that installed, you can just skip this step.
I do have it installed, so I can use it like this.
That will make sure, after I save this, that all the models I'm using in Automatic1111 will be available in ComfyUI as well.
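For reference, the renamed file is a short YAML config. A minimal sketch might look like the following; the a111 section and sub-folder keys mirror the example file that ships with ComfyUI, but the base path is a placeholder you would replace with your own install location:

```yaml
a111:
    base_path: C:/stable-diffusion-webui/   # point this at your Automatic1111 folder

    checkpoints: models/Stable-diffusion    # sub-paths are relative to base_path
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
```

After saving, restart ComfyUI so it rescans the model folders.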
Now, if you don't have that, or any models at all, I recommend that you download a model from Civitai.
So let's do that.
Let's take Deliberate here, for example.
And we're just going to download that. Then we go inside your models folder here.
So these are all the folders where we're going to be placing your various models.
Now, if you have no idea what this is, just go into checkpoints and put your regular .safetensors or .ckpt files there.
I just dropped my Deliberate model in here.
Now we're set to start ComfyUI.
Like we talked about previously, you're just going to run the NVIDIA GPU .bat file.
And after a few seconds, you will have ComfyUI launched.
And this will be the default interface that you will see.
So these are the nodes.
Compared to other user interfaces like Automatic1111, the features are the same, but everything is connected by nodes.
And really, the power of nodes is that you can take all the features and connect them together, basically almost however you want, and you can add nodes and create your own workflows.
So here we have our Load Checkpoint, or load model, node, and here you can select the model that you want.
Now I have a lot of models,
but if you just follow this tutorial and don't have anything installed,
you should have Deliberate here, for example, which is the model that we downloaded from Civitai.
Now we have a default prompt set here, which is "beautiful scenery nature glass bottle landscape, purple galaxy bottle", and we have a negative prompt here.
Now, you can't actually tell this is the negative prompt, because both boxes just say "CLIP Text Encode (Prompt)".
These can be renamed, so let's just type "positive prompt" here, and so we understand what each one is doing, let's rename this one to "negative prompt".
Now, you don't have to do this; it's just for clarity.
And as you can see here, on the KSampler we have connections coming from the positive prompt here.
So you can see that there's a line here going to the positive input, and a line from the negative here going to the negative input.
So that is how the sampler understands which text is the positive prompt and which is the negative prompt.
Now you can switch these around.
You can detach these.
Let's say you want to change this one to be the negative instead.
So I just drag this to negative, and you can connect this one to the positive if you want that.
So you can switch things up on the fly.
Let's put them back into positive and negative.
And as you see, the model output here is going straight to the sampler here.
So from the Load Checkpoint, we're sending the model line to the sampler.
And the VAE here, you can't quite see it because it goes behind, but this red output goes to this red input down here, the VAE Decode.
So if I were to render this, or "Queue Prompt" as it says here, it will start rendering, similar to how it does in Automatic1111, and we'd have an image coming into Save Image here.
We can open this image.
This is just 512 by 512, so it's a small image, but you get the point.
Now, in the latent image node down here, you have the width and the height and the batch size, which is how many images you will create.
And in the KSampler here, you have a seed, which basically determines the base noise before the image is generated.
This can be changed, and it can be set to fixed, or to increment or decrement, increasing or decreasing one at a time, for example.
Then there's the number of steps that your image will go through while rendering, and the CFG scale, which is a mathematical weighting between the positive prompt, the negative prompt, and the output.
But simply put, a lower CFG means the AI is listening less to your prompt.
I'd say a value between 3 and 7-ish is kind of good.
I usually do 4 or 5 nowadays.
The sampler here is basically what renders your image.
Now, for Stable Diffusion 1.5 and 2.0 models, I recommend DPM++ 2M and then changing the scheduler to Karras.
But this is basically the super simple version of ComfyUI and the nodes.
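To make these pieces concrete (the checkpoint, the two prompts, the latent size, and the seed, steps, CFG, sampler, and scheduler settings), here is a minimal sketch of the default text-to-image graph in the JSON format that ComfyUI's API accepts. The node class names are ComfyUI's built-in ones, but the checkpoint filename and the exact values are illustrative assumptions:

```python
def default_txt2img_graph(ckpt="deliberate_v2.safetensors",
                          positive="beautiful scenery nature glass bottle landscape, purple galaxy bottle",
                          negative="text, watermark",
                          seed=42, steps=20, cfg=5.0):
    """Build the default text-to-image workflow as an API-format graph.

    Each key is a node id; "class_type" names the node, and "inputs" holds
    either literal values or [node_id, output_index] links, which is what
    the lines between nodes in the UI represent.
    """
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": ckpt}},
        "2": {"class_type": "CLIPTextEncode",           # positive prompt
              "inputs": {"text": positive, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",           # negative prompt
              "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0],
                         "seed": seed, "steps": steps, "cfg": cfg,
                         "sampler_name": "dpmpp_2m", "scheduler": "karras",
                         "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "ComfyUI"}},
    }

graph = default_txt2img_graph()
print(graph["5"]["inputs"]["sampler_name"])  # → dpmpp_2m
```

With a local server running, a graph like this can be queued by POSTing `{"prompt": graph}` to `http://127.0.0.1:8188/prompt`, which is the same queueing the Queue Prompt button triggers.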
But you can add more nodes.
There are a lot available.
So just right-click somewhere, you can add a node, and there are a lot of nodes to choose from.
So let's look at the KSampler here, for example.
These are some of the settings that are used to render an image.
If we add a KSampler (Advanced), you can see here that we're getting some more settings.
We can have a start-at-step and an end-at-step, which means we could run multiple KSamplers, like one running from 0 to 15 and another from 15 to 30.
This is very popular in SDXL, where you run the SDXL base model for the first 25 steps and then the SDXL refiner for the last five steps.
But again, that's a little more advanced.
So for another video.
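The base-plus-refiner pattern mentioned above can be sketched as two KSampler (Advanced) settings blocks. The field names follow ComfyUI's KSamplerAdvanced node, while the step counts are just the example numbers from the video:

```python
# Sketch of chaining two KSampler (Advanced) nodes, as in the SDXL
# base + refiner pattern. Settings only; in the real graph each block
# also links to a model, prompts, and the previous stage's latent.

TOTAL_STEPS = 30

base_sampler = {
    "add_noise": "enable",                   # base stage adds the initial noise
    "steps": TOTAL_STEPS,
    "start_at_step": 0,
    "end_at_step": 25,
    "return_with_leftover_noise": "enable",  # hand off a still-noisy latent
}

refiner_sampler = {
    "add_noise": "disable",                  # refiner continues from leftover noise
    "steps": TOTAL_STEPS,
    "start_at_step": 25,
    "end_at_step": TOTAL_STEPS,
    "return_with_leftover_noise": "disable", # fully denoise at the end
}

# The two stages must tile the step range with no gap or overlap.
assert base_sampler["end_at_step"] == refiner_sampler["start_at_step"]
```

The key design point is that the first stage returns its latent with leftover noise so the second stage can pick up exactly where it stopped.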
Now, another fantastic feature in ComfyUI is that you can actually take an image.
So this is an image I got from a user on my discord.
And if you take this and drag it straight into ComfyUI, well, first of all, it's going to give us some errors, but that's not the point here.
What you actually get is the workflow and all the nodes that were used to create this image, all the settings, everything.
And that is really powerful.
So you can check out other users' images, find their workflows, learn from there, and continue iterating.
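What makes this drag-and-drop work is that ComfyUI writes the entire workflow as JSON into a tEXt metadata chunk of the saved PNG, under the keyword "workflow". As a stdlib-only sketch of that mechanism, the snippet below builds a stub PNG chunk stream with an embedded workflow and reads it back; a real ComfyUI PNG would of course also contain IHDR and IDAT image chunks:

```python
import json
import struct
import zlib

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, data, CRC over type+data."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

def read_text_chunks(png: bytes) -> dict:
    """Return {keyword: text} for every tEXt chunk in a PNG byte string."""
    assert png[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    out, pos = {}, 8
    while pos + 8 <= len(png):
        length, ctype = struct.unpack(">I4s", png[pos:pos + 8])
        if ctype == b"tEXt":
            key, _, val = png[pos + 8:pos + 8 + length].partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # 4 length + 4 type + data + 4 CRC
    return out

# Build a stub PNG carrying a tiny workflow (illustrative content only).
workflow = {"nodes": ["CheckpointLoaderSimple", "KSampler"]}
png = (b"\x89PNG\r\n\x1a\n"
       + make_chunk(b"tEXt", b"workflow\x00" + json.dumps(workflow).encode())
       + make_chunk(b"IEND", b""))

recovered = json.loads(read_text_chunks(png)["workflow"])
print(recovered)  # → {'nodes': ['CheckpointLoaderSimple', 'KSampler']}
```

This is also why a screenshot of a result won't carry its workflow: only the original PNG saved by ComfyUI still has the metadata chunk.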
You might be getting errors like I am, and that's because you are missing nodes.
In this example, we're missing custom nodes, some CRG SDXL samplers here, and there's actually something called the ComfyUI Manager.
And that's going to help us install custom nodes.
So we have the zip here, and this is our ComfyUI folder.
We're going to go into ComfyUI and then into custom_nodes.
And then we're just going to, well, basically extract everything; I'm just going to drag and drop it into here.
And after that, you need to restart your ComfyUI.
So we're running this again.
And now you should have, well, we're still getting the errors, but don't mind that, because you have the Manager down here.
And here you can, like it says, install custom nodes or install missing custom nodes.
So, let's press the little "Install Missing Custom Nodes" button here.
So, this is the CRG SDXL; we're installing that, and now it says that to apply the installed custom nodes, please restart ComfyUI, so we will do that again.
And as you can see now, we have no errors anymore.
We'll just load another user's image, and let's queue the prompt here and see if we get a similar result.
You can see here that the green border here shows what node is active.
It's actually rendering here now.
And there we have it.
So this is exactly the same image that I got from the user on my Discord.
So that's pretty cool, eh?
That's an example of how powerful this node-based system is.
So you get a lot of control, a lot of freedom, and a lot of help if you can just load another user's workflow.
Just remember to download that Manager so you can install the required custom nodes.
Now, there are also a lot of other nodes you can check out for yourself.
There's a whole list here of custom nodes available, similar to the Extensions tab in Automatic1111.
So I'd recommend looking at the base UI, trying to see what each node does by watching the green border when you're generating an image, and then finding another image, checking that particular workflow, and seeing how other users are creating their fantastic images.
Now, if you want to learn more in depth, ComfyUI has a lot of examples on their GitHub as well.
So, for example, you can see an inpainting node setup right here.
Here's an example of how you would set up nodes to inpaint a picture.
So, in the example here, you can see that the image is a cat, and we have first loaded a checkpoint, or model.
We have the positive and the negative prompts, we have the sampler, and we have also added a VAE Encode (for Inpainting) node.
That takes the VAE from the checkpoint here, and also takes a Load Image node, with pixels going to the image input and a mask going to the mask input.
You're painting the mask on top of the image here, and then that VAE Encode (for Inpainting) runs back into the latent image input of the KSampler right here.
So you're basically adding on top of the current workflow.
This is very similar to what we saw as the default here in the video when we started.
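The inpainting additions described above can be sketched as a fragment in the JSON graph format ComfyUI uses for its API, where links are [node_id, output_index] pairs. The class names (LoadImage, VAEEncodeForInpaint) are ComfyUI built-ins, but the node ids, the filename, the grow_mask_by value, and the assumption that the checkpoint loader is node "1" are all illustrative:

```python
# Sketch of the inpainting graph fragment. LoadImage outputs both the
# picture (output 0) and the painted mask (output 1); VAEEncodeForInpaint
# turns them into a latent that replaces the empty latent.
inpaint_nodes = {
    "8": {"class_type": "LoadImage",        # outputs: 0 = IMAGE, 1 = MASK
          "inputs": {"image": "cat.png"}},  # filename is illustrative
    "9": {"class_type": "VAEEncodeForInpaint",
          "inputs": {"pixels": ["8", 0],    # the picture itself
                     "mask": ["8", 1],      # the painted-over region
                     "vae": ["1", 2],       # VAE from the checkpoint loader
                     "grow_mask_by": 6}},
}

# The encoder's latent output then feeds the KSampler's latent_image input:
ksampler_latent_input = ["9", 0]
```

Everything else in the graph, the prompts and the sampler, stays as in the default text-to-image setup, which is the "adding on top" the video describes.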
And now their next example here is LoRA; again, we have the usual workflow.
We have the checkpoint, or the model, we have the positive prompt and the negative prompt and the sampler and the VAE Decode, and then they've added a separate LoRA loader here.
So instead of going from the model straight to the KSampler, it goes from the model to the LoRA loader and then continues on to the KSampler, and the same goes for the CLIP.
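That rerouting can be sketched in the same JSON graph format: ComfyUI's LoraLoader node takes the checkpoint loader's MODEL and CLIP outputs and emits patched versions of both for everything downstream. The node ids, strengths, and the LoRA filename below are illustrative assumptions:

```python
# Sketch of splicing a LoRA into the default graph. Links are
# [node_id, output_index] pairs; "1" is assumed to be Load Checkpoint.
lora_node = {
    "10": {"class_type": "LoraLoader",
           "inputs": {"model": ["1", 0],   # MODEL from Load Checkpoint
                      "clip": ["1", 1],    # CLIP from Load Checkpoint
                      "lora_name": "my_style.safetensors",  # illustrative
                      "strength_model": 0.8,
                      "strength_clip": 0.8}},
}

# Rerouted links: the KSampler's model input and the CLIP Text Encode
# nodes' clip inputs now come from the LoRA loader instead of node "1":
ksampler_model_input = ["10", 0]   # patched MODEL
clip_encode_input = ["10", 1]      # patched CLIP
```

So the LoRA loader sits in the middle of the model and CLIP lines, exactly the "goes from the model to the LoRA loader and then continues on" routing described above.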
So there are a lot of good examples you can find on the ComfyUI GitHub.
So this was a quick guide to get you started with ComfyUI.
If you want to learn more about ComfyUI, check out this SDXL workflow video.
