1000% FASTER Stable Diffusion in ONE STEP!

I'm going to show you how to speed up your Stable Diffusion up to 10 times, and all you need to do is just download one file and put it on your computer.
Oh, and just between you and me, I have a fear of speed bumps, but I'm slowly getting over them.
So these are live renders.
These are not sped up at all.
And these are 1024 by 1024 SDXL images.
We're going to be downloading something called an LCM LoRA.
All the links are in the description below.
So you're going to go to this page.
You're going to download depending on whether you use SDXL or 1.5.
So here's the SDXL one: press Files and versions,
you have the safetensors file here, just download that. Then you're going to go into your Stable Diffusion folder, then models, and then Lora.
Here you're going to save them.
You're going to go back into that folder,
find the file that you saved, and we're going to rename it so you know which one it is.
So this is the LCM SDXL one.
We're gonna do the same for the 1.5 and the SSD-1B if you are using that.
So we download the SD 1.5 one and rename it LCM SD 1.5. The SSD-1B LoRA is only needed if you are using that specific model.
So if you're not sure, you can skip that if you want to.
If you're using ComfyUI, they just go in the same models/Lora folder.
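The manual steps above (make the Lora folder if it's missing, drop the downloaded file in, rename it so the variants are easy to tell apart) can be sketched in a few lines of Python. This is just an illustration of the file layout: the webui root, the download path, and the filenames here are all example assumptions, not something from the video.

```python
# Illustrative sketch: put a downloaded LCM LoRA .safetensors file into
# the webui's models/Lora folder under a clearer name. All paths and
# filenames below are examples, not the real download locations.
from pathlib import Path
import shutil
import tempfile

def install_lora(downloaded: Path, webui_root: Path, new_name: str) -> Path:
    """Copy a downloaded LoRA into <webui>/models/Lora with a clearer name."""
    lora_dir = webui_root / "models" / "Lora"
    lora_dir.mkdir(parents=True, exist_ok=True)  # create the folder if missing
    target = lora_dir / new_name
    shutil.copy2(downloaded, target)
    return target

# Demo with temporary stand-in files (no real downloads happen here).
tmp = Path(tempfile.mkdtemp())
download = tmp / "pytorch_lora_weights.safetensors"
download.write_bytes(b"fake weights")
installed = install_lora(download, tmp / "stable-diffusion-webui",
                         "lcm-lora-sdxl.safetensors")
print(installed)
```

The rename matters because every LCM LoRA ships under the same generic filename, so without it you can't tell the SDXL, SD 1.5, and SSD-1B versions apart in the LoRA picker.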
Hey guys, remember to like and subscribe down below.
I do the research so you don't have to.
So I loaded a 1.5 model up here,
and I loaded my LoRA, LCM SD 1.5, here.
If you don't have this dropdown, you can go into Settings here, go down to User interface,
scroll down to the Quick settings list; it will
just say sd_model_checkpoint here, and you can go in and add
sd_lora, and then you're gonna have it same as me.
You're gonna need to apply the settings and probably reload the UI. Then you can load the LoRAs from up here.
Now, my prompt is a medieval Viking warrior, and you're gonna set this to eight steps.
Eight steps is all you're gonna need for this. I loaded some of my styles;
I did a large fantasy art style, and I've also made an XYZ plot here, so you're gonna see the difference between the samplers, a lot of samplers here.
Then I set CFG values between one, one and a half, and two.
I would say the max you're gonna use here is two, so between one and two.
If you're using Comfy, you can use the LCM sampler.
As of recording this video, it's not available in Automatic1111, but it probably will be very soon.
So I'm generating this in real time now.
So here are a lot of images, and if you look at the speed, we're getting about 20,
sometimes 27, iterations per second here.
So these are a lot of images; I would say thirty-ish times three.
So these are about 100 images just popping out here now, live in front of your eyes.
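The throughput claim above works out with some back-of-the-envelope arithmetic: at roughly 20 sampler iterations per second and 8 steps per image, you get a couple of finished images every second. The numbers below are the approximate ones from the demo, not a benchmark.

```python
# Rough throughput math for the live demo: sampler iterations per second
# divided by steps per image gives images per second. Numbers are the
# approximate figures quoted in the video.
its_per_second = 20     # observed speed, sometimes peaking around 27
steps_per_image = 8     # the LCM LoRA setting used here
images = 100            # roughly what the XYZ grid produces

images_per_second = its_per_second / steps_per_image
total_seconds = images * steps_per_image / its_per_second
print(f"{images_per_second:.1f} images/s, ~{total_seconds:.0f}s for {images} images")
# -> 2.5 images/s, ~40s for 100 images
```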
Now bear in mind I have a very fast GPU.
I have the 4090, but even on a low-end GPU this is going to be much, much faster, and even on a Mac this is going to help tremendously.
I think Mac has some of the best gains.
So the grid is complete now.
I'm going to bring it up here for you.
I'm just going to zoom in; a lot of these are messy, and you don't want that.
So a favorite sampler, like DPM++ 2M Karras for example, isn't working great with the LCM LoRA,
as you can see across CFG scales one, one and a half, and two.
It's not fantastic.
However, some of the samplers, like Euler a here, for example, are all looking beautiful at all the CFGs.
Let me show you some samplers that do look good because most of these here you can see will not work.
If you're in ComfyUI, the LCM sampler is going to look amazing, but like I said, it's not available in Automatic1111 yet.
So just keep moving here.
A lot of these are terrible, but we have DDIM here.
UniPC here at CFG 2 is looking okay, but not fantastic.
I can see DPM fast here and some of the 3M SDE ones are really terrible, but we have DPM2 here that's pretty good.
And Euler a, that's pretty good.
We're going to do a second one,
just with a photorealistic prompt here.
I'm going to take another model, epiCRealism, instead of the DreamShaper one; we're still using Stable Diffusion 1.5.
And you're probably going to see that for this model, the results are going to vary a little bit.
We are again generating this in real time.
So that's about 100 images popping out here at fast speeds.
Total time here was about a minute for around 100 images, and let's check here again for this model.
With 2M Karras, not fantastic.
Euler here looks pretty good.
Euler a, pretty good.
DPM2 a, this one here at CFG 2,
turned out to be pretty good. DPM++
SDE is okay at the higher CFG. Most of these are not great, similar to before; however, more are working now with this short cinematic prompt.
As you can see, some of these are actually looking pretty good.
DDIM at around 1.5 to 2 CFG,
it's okay.
UniPC starts showing some promise but isn't really getting there.
So what's my recommendation now?
Well, if you are in Automatic1111, try Euler a for now, but make some tests of your own, and make sure you test with a model that you like to use.
So let's hop over to ComfyUI.
This is where it's really going to shine.
First of all,
we're going to go into the Manager
and make sure that you update your ComfyUI to the latest version, because we're going to need the KSampler with the LCM sampler here.
So we can change this to LCM. We're going to add a LoRA loader, and we're going to pick the one we want,
which was the LCM SD 1.5 in this case.
I'm just gonna take this, load the LoRA here in between, then I drop the model here and keep going into the KSampler.
I'm gonna change the steps to 8, and I'm gonna change the CFG to, let's do 1.5 for now, and we are generating.
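The wiring described above (checkpoint into LoRA loader, LoRA-patched model into the KSampler with the lcm sampler, 8 steps, CFG 1.5) can be sketched as a ComfyUI API-format graph, written here as a Python dict. The node IDs, checkpoint and LoRA filenames, and seed are illustrative assumptions, not taken verbatim from the video.

```python
# Hedged sketch of the ComfyUI graph described above:
#   CheckpointLoaderSimple -> LoraLoader (LCM LoRA) -> KSampler ("lcm")
# Node IDs and filenames are placeholders. Connections use ComfyUI's
# API-format convention of [source_node_id, output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaper_8.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "lcm-lora-sd15.safetensors",
                     "strength_model": 1.0, "strength_clip": 1.0}},
    "3": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0],       # LoRA-patched model goes in
                     "sampler_name": "lcm",   # needs an up-to-date ComfyUI
                     "steps": 8, "cfg": 1.5,
                     "scheduler": "normal", "denoise": 1.0, "seed": 0}},
}
print(workflow["3"]["inputs"]["sampler_name"])
```

The key point the video makes is exactly the three KSampler settings in node "3": sampler lcm, 8 steps, and a CFG between 1 and 2.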
So this is SD 1.5, and we did a quick generation here that looks much, much better than the results that we got from
almost all the samplers in Automatic1111.
So whatever we generate now, we're going to get some pretty good-looking images.
As you can see, the LCM sampler here is doing a much better job.
So if you want these insane speeds without the
need to mess about with settings and samplers in Automatic1111, you're gonna need to use ComfyUI for now.
We can just keep queuing these up you can see the speeds that we're getting here live.
I'm going to show you this with an SDXL model here.
I'm going to change the LoRA to the SDXL one.
I'm going to change the size to 1024 by 1024 here.
I'm just going to queue these up.
So it takes a second to load the new model.
And now we should see live renders coming in here.
So these are live renders.
These are not sped up at all.
And this is 1024 by 1024 SD XL images.
Pretty good, if you ask me.
If you want to learn more about this, check out the blog post.
I'm going to put the link in the description below.
So this is basically explaining how all this works.
It's magic for most of us.
But for a lot of the AI researchers, it's just what they do every day.
It says here,
here's an example of
the speed difference we're talking about: generating a single 1024 by 1024 image on an M1 Mac with SDXL base takes about a minute; using the LCM LoRA
you get great results in just six seconds, and they used four steps.
Using a 4090, we get an almost instant response, less than one second.
This unlocks use of SDXL in applications where real-time response is a requirement.
So that's kind of cool.
I know people in my Discord have been playing with this.
I know Kixu played with it: he used his face in a webcam and got real-time renders from the webcam.
So that was kind of cool, to be honest.
You can see a comparison here of the number of steps,
so between 1, 2, 3, 4, 5, 6, 7 and 8 steps here.
You can see that starting at step four or five here, the image really starts taking shape.
So you can play with even lower values than eight.
You can play with like four or five; especially if you have a low-end GPU or an older computer, try four steps or five steps.
And you can also use this with animations.
So if you want to get some speed into those animations, be sure to try out the LCM LoRAs.
And talking about the guidance scale here, like we talked about, the CFG:
so between one and two. They say that if you have a CFG of 1, it effectively disables negative prompts.
If you have a guidance scale between 1 and 2, you can use negative prompts.
And they said they found that larger values don't work.
So remember that: set your CFG to 1 or 2.
But if you use 1, negative prompts are going to be out of the question.
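The CFG rule from the blog post can be captured in a tiny helper: keep the guidance scale in [1, 2] with the LCM LoRA, and at exactly 1 the negative prompt does nothing, so drop it. The function name and return format are my own illustration, not an API from any of the tools in the video.

```python
# Illustrative helper encoding the LCM LoRA guidance-scale rule:
# CFG must stay between 1 and 2, and at CFG == 1 negative prompts
# are effectively disabled, so we clear them.
def lcm_sampler_settings(cfg: float, negative_prompt: str = "") -> dict:
    if not 1.0 <= cfg <= 2.0:
        raise ValueError("With the LCM LoRA, keep the CFG between 1 and 2")
    # a guidance scale of exactly 1 makes the negative prompt a no-op
    effective_negative = negative_prompt if cfg > 1.0 else ""
    return {"cfg": cfg, "negative_prompt": effective_negative}

print(lcm_sampler_settings(1.5, "blurry, low quality"))
# -> {'cfg': 1.5, 'negative_prompt': 'blurry, low quality'}
print(lcm_sampler_settings(1.0, "blurry, low quality"))
# -> {'cfg': 1.0, 'negative_prompt': ''}
```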
They have some speed comparisons here.
SDXL with the LCM LoRA at 4 steps on the left here,
and standard SDXL at 25 steps on the right.
Like we said,
on a Mac,
you get six,
six and a half seconds versus a minute.
A 2080 Ti does 4.7 seconds versus 10 seconds.
On a 3090 you start getting to some real speeds; you can get almost down to a second here, versus the seven seconds you had before.
And if you're on a 4090 like me,
you're getting sub-second speeds, which, I mean, it was fast before, but this is blazingly fast.
It's going to help especially with my animated renders.
And you can even use this on a CPU.
Now, this Intel here is quite a beefy i9.
It says they're using one out of the 36 cores, and it can still get to 29 seconds,
compared to the 219 seconds it took previously.
This is good even for you potato-PC owners out there.
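The speedup factors implied by the comparison above are easy to work out from the quoted times. The figures below are the approximate ones read out in the video (LCM LoRA at 4 steps versus standard SDXL at 25 steps), not exact benchmarks.

```python
# Rough speedups implied by the timing comparison quoted above.
# Values are approximate figures from the video, not measured benchmarks.
timings = {                   # (with LCM LoRA, without), in seconds
    "M1 Mac":  (6.5, 60.0),   # "six and a half seconds versus a minute"
    "2080 Ti": (4.7, 10.0),
}
for device, (fast, slow) in timings.items():
    print(f"{device}: ~{slow / fast:.1f}x faster")
```

The Mac sees the biggest relative gain, around 9x, which matches the video's claim that Macs benefit the most from this LoRA.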
Hey, thanks for watching.
I hope you learned something today.
Check out this video here.
And as always,