1000% FASTER Stable Diffusion in ONE STEP!
I'm going to show you how to speed up your Stable Diffusion by up to 10 times, and all you need to do is download one file and put it on your computer.
Oh, and just between you and me, I have a fear of speed bumps, but I'm slowly getting over them.
So these are live renders.
These are not sped up at all.
And these are 1024 by 1024 SDXL images.
We're going to be downloading something called an LCM LoRA.
All the links are in the description below.
So you're going to go to this page.
You're going to download one depending on what you use, the SDXL one or the 1.5 one.
So here's the SDXL one: go to Files and versions, you have the safetensors file here, just download that.
Then you're going to go into your Stable Diffusion folder, then models, and then Lora.
Here you're going to save them.
You're going to go back into that folder, find the file that you saved, and we're going to rename it so you know which one it is.
So this is the LCM SDXL one.
We're gonna do the same for the 1.5 and the SSD-1B if you are using that.
So we download the SD 1.5 one and rename that SD 1.5. The SSD-1B LoRA is only needed if you are using that specific model.
So if you're not sure, you can skip that if you want to.
If you're using ComfyUI, they just go in the same place: models, then loras.
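If you'd rather do this in code than through a UI, the same LoRA file can also be loaded with the diffusers library. This is just a minimal sketch, assuming a local save path of models/Lora/lcm-lora-sd15.safetensors (the file name, folder, and base checkpoint are examples, not something from the video):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load any Stable Diffusion 1.5-based checkpoint
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Point load_lora_weights at the folder and file where you saved the LCM LoRA.
# Loading straight from the Hugging Face repo also works:
# pipe.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipe.load_lora_weights("models/Lora", weight_name="lcm-lora-sd15.safetensors")
```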
Hey guys, remember to like and subscribe down below.
I do the research so you don't have to.
So I loaded a 1.5 model up here, and I have my LCM LoRA selected up here.
If you don't have this, you can go into Settings here, go down to User interface and scroll down, and here is the Quicksettings list. By default it just says sd_model_checkpoint here; you can go in here and add sd_lora, and then you're gonna have it the same as me.
You're gonna need to apply the settings and probably reload the UI. Then you can load the LoRAs from up here.
Now, my prompt is a medieval Viking warrior, and you're gonna set this to eight steps.
Eight steps is all you're gonna need for this. I loaded some of my styles, I applied a fantasy art style, and I've also made an XYZ plot here, so you're gonna see the difference between the samplers, a lot of samplers here.
Then I set CFG values of 1, 1.5, and 2.
I would say the max you're gonna use here is 2, so between 1 and 2.
If you're using Comfy, you can use the LCM sampler.
As of recording this video, it's not available in Automatic1111, but it probably will be very soon.
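If you want to run a sampler and CFG comparison like this XYZ plot outside the WebUI, here is a rough diffusers sketch of the same idea, reusing the pipeline from the earlier snippet; the scheduler list, prompt, and file names are just examples, not the exact grid from the video:

```python
from diffusers import (EulerAncestralDiscreteScheduler, DDIMScheduler,
                       DPMSolverMultistepScheduler)

prompt = "medieval viking warrior, fantasy art"  # example prompt

# Rough stand-in for the WebUI XYZ plot: a few schedulers times a few CFG values
schedulers = {
    "euler_a": EulerAncestralDiscreteScheduler,
    "ddim": DDIMScheduler,
    "dpmpp_2m": DPMSolverMultistepScheduler,
}
for name, sched_cls in schedulers.items():
    pipe.scheduler = sched_cls.from_config(pipe.scheduler.config)
    for cfg in (1.0, 1.5, 2.0):
        image = pipe(prompt, num_inference_steps=8, guidance_scale=cfg).images[0]
        image.save(f"{name}_cfg{cfg}.png")
```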
So I'm generating this now in real time.
So here's a lot of images, and if you can see the speed, we're getting about 20, sometimes 27, iterations per second here.
So these are a lot of images; I would say, counting, it's about thirty-ish times three, so these are about 100 images just popping out here now, live in front of your eyes.
Now bear in mind I have a very fast GPU.
I have the 4090, but even on a low-end GPU this is going to be much, much faster, and even on a Mac this is going to help tremendously.
I think Mac has some of the best gains.
So the grid is complete now.
I'm going to get it up here for you.
I'm just going to zoom in, and a lot of these are messy, and you don't want that.
So a favorite sampler, like the 2M Karras for example, isn't working great with the LCM LoRA.
As you can see, at CFG scale 1, 1.5, and 2, it's not fantastic.
However, some of the samplers, like Euler A here, for example, are looking beautiful at all the CFGs.
Let me show you some samplers that do look good because most of these here you can see will not work.
If you're in Comfy, the LCM sampler is going to look amazing, but like I said, it's not available in Automatic1111 yet.
So just keep moving here.
A lot of these are terrible, but we have DDIM here, and UniPC here at CFG 2 is looking okay, but not fantastic.
I can see that DPM fast here and some of the 3M SDE ones are really terrible, but we have DPM2 here, that's pretty good.
And the Euler A, that's pretty good.
We're going to do a second one, just with a photorealistic prompt here.
I'm going to take another model, Epic Realism, instead of the DreamShaper one; we're still using Stable Diffusion 1.5.
And you're probably going to see that for this model, the results are going to vary a little bit.
We are again generating this in real time.
So about 100 images popping out here at fast speeds.
The total time for around 100 images was very short, and let's check here again for this model.
With 2M Karras, not fantastic.
Euler here looks pretty good, Euler A pretty good.
DPM2 a, this one here at CFG 2, turned out to be pretty good. DPM++ SDE, okay.
At the higher CFG, most of these are not great, similar to before; however, more are working now with this short cinematic prompt.
As you can see, some of these are actually looking pretty good.
DDIM at around 1.5 to 2 CFG, it's okay.
UniPC starts showing some promise but isn't really getting there.
So what's my recommendation now?
Well, if you are in Automatic1111, try Euler A for now, but make some tests of your own, and make sure you test with a model that you like to use.
So let's hop over to ComfyUI.
This is where it's really going to shine.
First of all,
we're going to go into the manager,
make sure that you update your ComfyUI to the latest version, because we're going to need the KSampler with the LCM sampler here.
So we can change this to LCM, and we're going to add a LoRA loader and pick the one we want, which was the LCM SD 1.5 in this case.
I'm just gonna take this, load the LoRA here in between, then I drop the model here and keep going into the KSampler.
I'm gonna change the steps to 8, and I'm gonna change the CFG to, let's do 1.5 for now, and we are generating.
So this is 1.5, and we did a quick generation here that looks much, much better than the results that we got from almost all the samplers in Automatic1111.
So whatever you generate now, you're going to get some pretty good looking images.
As you can see, the LCM sampler here is doing a much better job.
So if you want these insane speeds without having to mess about with settings and other samplers in Automatic1111, you're gonna need to use ComfyUI for now.
We can just keep queuing these up, and you can see the speeds that we're getting here live.
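The node setup above (LoRA loader feeding the KSampler, lcm sampler, 8 steps, CFG 1.5) maps roughly onto diffusers' LCMScheduler if you are working in code. A sketch under that assumption, continuing from the 1.5 pipeline with the LCM LoRA already loaded:

```python
from diffusers import LCMScheduler

# LCMScheduler plays roughly the same role as ComfyUI's "lcm" sampler choice
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

image = pipe(
    "medieval viking warrior, fantasy art",  # example prompt
    num_inference_steps=8,
    guidance_scale=1.5,
).images[0]
image.save("lcm_result.png")
```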
I'm going to show you this with an SDXL model here.
I'm going to change the LoRA to the SDXL one.
I'm going to change the size to 1024 by 1024 here.
I'm just going to queue these up.
So it takes a second to load the new model.
And now we should see live renders coming in here.
So these are live renders.
These are not sped up at all.
And these are 1024 by 1024 SDXL images.
Pretty good, if you ask me.
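For reference, a diffusers sketch of the SDXL setup looks roughly like this; the checkpoint and LoRA repo IDs are my assumptions, so double-check them against the links in the description:

```python
import torch
from diffusers import StableDiffusionXLPipeline, LCMScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16, variant="fp16",
).to("cuda")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl")

image = pipe(
    "medieval viking warrior, cinematic photo",  # example prompt
    width=1024, height=1024,
    num_inference_steps=8,
    guidance_scale=1.5,
).images[0]
```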
If you want to learn more about this, check out the blog post.
I'm going to put the link in the description below.
So this is basically explaining how all this works.
It's magic for most of us.
But for a lot of the AI researchers, it's just what they do every day.
It says here, here's an example of the speed difference we're talking about: generating a single 1024 by 1024 image on an M1 Mac with SDXL base takes about a minute; using the LCM LoRA, you get great results in just six seconds, and they used four steps.
Using a 4090, we get an almost instant response, less than one second.
This unlocks the use of SDXL in applications where real-time events are a requirement.
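If you want to reproduce that kind of timing on your own hardware, a quick and very rough benchmark of the SDXL pipeline sketched above would look something like this (the numbers will obviously depend on your machine):

```python
import time

start = time.perf_counter()
image = pipe(
    "medieval viking warrior, cinematic photo",  # example prompt
    width=1024, height=1024,
    num_inference_steps=4,   # the blog post's example uses four steps
    guidance_scale=1.0,
).images[0]
print(f"One 1024x1024 image in {time.perf_counter() - start:.1f}s")
```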
So that's kind of cool.
I know people in my Discord have been playing with this,
I know Kixu played with it; he used his face in a webcam and got real-time renders from the webcam.
So that was kind of cool, to be honest.
You can see a comparison here of the number of steps, so between 1, 2, 3, 4, 5, 6, 7, and 8 steps here.
You can see that starting at step four or five here, the image really starts taking shape.
So you can play with even lower values than eight.
You can play with like four or five, especially if you have a low end GPU or an older computer, try four steps, five steps.
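To find your own sweet spot, you can sweep the step count the same way the comparison grid does. A small sketch, again reusing the pipeline from the earlier snippets with an example prompt:

```python
prompt = "medieval viking warrior, cinematic photo"  # example prompt

# Sweep 1 to 8 steps to see where the image starts taking shape on your hardware
for steps in range(1, 9):
    image = pipe(prompt, num_inference_steps=steps, guidance_scale=1.5).images[0]
    image.save(f"lcm_{steps}_steps.png")
```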
And you can also use this with animations.
So, if you want to get some speed into those animation renders, be sure to try out the LCM LoRAs.
And talking about the guidance scale here, that's the CFG like we talked about.
So between one and two, but they say if you have a CFG of 1, it effectively disables negative prompts.
If you have a guidance scale between 1 and 2, you can use the negative prompts.
And they said they found that larger values don't work.
So remember that: set your CFG between 1 and 2.
But if you use 1, negative prompts are going to be out of the question.
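In diffusers terms the same rule applies to the guidance_scale argument; a small sketch with an example negative prompt:

```python
# A guidance_scale of exactly 1.0 effectively turns CFG off, so the negative
# prompt below would be ignored; values between 1 and 2 keep it active.
image = pipe(
    "portrait photo of a viking warrior",   # example prompt
    negative_prompt="blurry, low quality",  # only has an effect when CFG > 1
    num_inference_steps=8,
    guidance_scale=1.5,
).images[0]
```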
They have some speed comparisons here.
SDXL with the LCM LoRA at 4 steps on the left here.
We have SDXL standard, 25 steps, on the right.
Like we said, on a Mac it's six seconds, six and a half seconds, versus a minute.
A 2080 Ti does 4.7 seconds versus about 10 seconds.
On a 3090 you start getting to some real speeds: you can get to almost a second here, versus the seven seconds you had before.
And if you are on a 4090 like me, you're getting sub-second speeds, which, I mean, it was fast before, but this is blazingly fast.
It's going to help especially with my animated renders.
And you can even use this on a CPU.
Now, this Intel here is a quite beefy i9.
It says they're using one out of the 36 cores, and it can still get down to 29 seconds, compared to the 219 seconds it used previously.
This is good even for you potato PC owners out there.
Hey, thanks for watching.
I hope you learned something today.
Check out this video here.
And as always...