Inpainting Tutorial - Stable Diffusion

Inpainting: the secret sauce of Stable Diffusion, and how the pros get the best-looking images.
You turn a generation from this into this.
People keep asking me: do you need the inpainting model?
No, you don't, but it will help you with larger fixes. To be completely honest with you, I usually use the regular models for inpainting as well.
So, we had a painter over. When he was finished, he wouldn't accept my money, so I asked him why, and he replied: don't worry, the paint is on the house. All right,
so when you're inside Stable Diffusion, you're going to enter img2img here, and then this tab, Inpaint. I've prepared an image here, and if we zoom in on this a little bit, you can see the image is fairly good, but the face here isn't great.
The nose is all messed up, and the ear is not great.
You might have an image that looks like this: you're fairly happy with the composition, but you need to fix something, most of the time the face, though it could be other parts.
It doesn't really matter.
So this is what we're going to do today.
And this feature here, the scroll zoom, is not default.
If you want it, go into Extensions and find this one: Canvas Zoom.
You can either copy-paste this URL and install from URL,
or you can check Available, Load from, find it in the list, and install it.
When you're in here, we're going to inpaint the face first.
Now, you have a lot of options to choose from.
First of all, the mask mode is going to be 'Inpaint masked', because we have painted what we want to change.
If you did it the other way around, you'd choose 'Inpaint not masked', which would change the rest of the image instead.
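The two mask modes are just inverses of each other. A minimal sketch with NumPy (a toy array, not the webui's actual code):

```python
import numpy as np

# Toy 4x4 inpaint mask: 255 where we painted (e.g. the face), 0 elsewhere.
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 255

# "Inpaint masked" regenerates the painted pixels.
# "Inpaint not masked" works on the inverse, regenerating everything else:
inverted = 255 - mask
```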
Now, this is where most people get it wrong: the masked content.
If, like in this image, you want to keep what's under the mask, which is her face, then choose 'original', because it will remember what's below and use that to create the next iteration of the image.
If there's nothing underneath, you need to switch to 'latent noise'.
These two are the ones you're going to use 99% of the time.
Then we have the inpaint area here. If you pick 'Whole picture', the entire frame will be rendered, so the part we've painted will keep the same resolution as the rest of the image.
However, for this image, we don't want that; we want more detail, better resolution here.
So if we change the inpaint area to 'Only masked', it will take this area, render it at full resolution, and then splice it back together with the big image.
And that resolution can be changed.
This image is 768 by 768, so that's what we'll use here as well, but it doesn't have to coincide with the full image if you're inpainting only the masked area.
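Under the hood, 'Only masked' roughly amounts to cropping a padded bounding box around the mask, rendering that crop at the full target resolution, and pasting it back. A rough sketch of the crop math (the function name and padding value are illustrative, not the webui's actual code):

```python
import numpy as np

def masked_crop_box(mask, padding, width, height):
    """Bounding box of the painted mask, grown by `padding` pixels
    and clamped to the image borders."""
    ys, xs = np.nonzero(mask)
    x0 = max(xs.min() - padding, 0)
    y0 = max(ys.min() - padding, 0)
    x1 = min(xs.max() + padding, width - 1)
    y1 = min(ys.max() + padding, height - 1)
    return x0, y0, x1, y1

mask = np.zeros((768, 768), dtype=np.uint8)
mask[300:400, 250:350] = 255              # the painted face region
box = masked_crop_box(mask, 32, 768, 768)
# This crop is rendered at the chosen resolution (e.g. 768x768)
# and then composited back into the original image.
```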
Euler a is a great sampling method; I run that at 25 steps.
I also like DPM++ 2M Karras and DPM++ SDE Karras; I usually run those at 30-35 steps, but they're a little slower.
Now, for denoising strength: if you just want to upscale the image and keep the details, I run this at 0.4.
Here, however, we want to change the details, because the details weren't good; we want better details, better quality.
So then I usually go with 0.6.
Let me show you an example here.
We're going to do one image and type in 'woman face', plus the negative prompt that I use, the n-fix, which can be downloaded with the Armada merge model or the Illuminati model, but it's not required.
So now you can see that we're only rendering this part of the image.
If you go back here, you can see it's a major change compared to the left one; now we actually have a proper face.
Now, let's say you didn't get this result. Then you probably need to change the masked content here, you have a different resolution, or your denoising strength is off.
Let me show you what happens if I set this to, for example, 1.
The denoising strength is how much the image will be changed: 1 will change it completely, and 0 will change nothing at all.
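A useful mental model: in many img2img implementations, the denoising strength decides what fraction of the sampling schedule actually runs. Roughly (this is a simplification, not the exact webui formula):

```python
def effective_steps(steps: int, strength: float) -> int:
    """Approximate number of steps that actually modify the image:
    img2img skips the first (1 - strength) fraction of the schedule."""
    return round(steps * strength)

# At 25 steps: strength 1.0 runs all 25 (complete change),
# 0.6 runs about 15, and 0.0 runs none (image returned untouched).
```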
So let's paint the mask back in here and generate.
And now it changes completely.
It's still a woman, but it doesn't retain much of the original.
As you can see, it's a completely different woman, and the hair doesn't match at all; this woman's hair is kind of blended into the other one.
So that's not a great result.
And if we have the denoising at 0, you can see that we won't get any change to the image at all; it's just the same image.
That's why, especially for an image like this where you have the original underneath, I recommend setting this at 0.6.
Then you can keep going: if you drag this image to the left here, you can keep working with what you have.
Now, if you want to add something to the image, let's say a coffee cup, we change the prompt to 'coffee cup' or 'coffee mug' or whatever.
If I keep this at original and 0.6, it will try to build from what's below the mask, which is basically nothing.
So as you can see here, we're not getting any coffee cup; we have the same image and nothing else, because it couldn't find anything to work with that looks like a coffee cup.
In this instance, there are two ways to solve this.
If you change this to latent noise, it will fill the masked area with noise; as you can see here, the noise is coming in.
But it didn't work.
Then you might say: well, that didn't work, Seb, you're a terrible teacher.
Well, let me show you: when you're using latent noise, you need to raise the denoising strength, because now you need to change more of this noise that gets added.
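If you drive the webui through its API instead of the UI, these same settings map onto fields of the img2img payload. A hedged sketch (field names as found in recent AUTOMATIC1111 versions; check your local /docs endpoint, since they can change; here `inpainting_fill` is assumed to be 0 = fill, 1 = original, 2 = latent noise, 3 = latent nothing):

```json
{
  "prompt": "coffee cup",
  "init_images": ["<base64 image>"],
  "mask": "<base64 mask>",
  "denoising_strength": 0.8,
  "inpainting_fill": 2,
  "inpaint_full_res": true,
  "inpaint_full_res_padding": 32,
  "mask_blur": 4,
  "sampler_name": "Euler a",
  "steps": 25,
  "width": 768,
  "height": 768
}
```

POSTing that to `/sdapi/v1/img2img` should reproduce the latent-noise coffee-cup attempt above.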
Let me show you.
And there we go: now we got a little coffee cup here.
It's not perfect in scale or anything, and that's one of the issues with latent noise: it doesn't really understand how this is supposed to interact with the rest of the image.
What I would do is go into Inpaint sketch here and drag our image in.
Here we can draw, so let's try to roughly paint the cup.
The light is hitting from the left here, so maybe add some highlights; it doesn't have to be perfect, we're going to iterate on this.
When we render, first of all we can change the masked content back to 'original', and then we can lower the denoising a little bit.
Not all the way down to 0.6, because this still needs to change a lot, but maybe 0.8. There's no perfect value; this needs to be tested and fine-tuned.
Let's do two images.
And now we have something that's more in line with the scene.
It's still not perfect, but we can work with that.
We take what we have, drag it into here, and then we can either paint more, or use inpainting on this one: just mask it out and change it into, well, another coffee cup.
But now we have more to work with, so we can lower the denoising even more.
I'm changing to four images and rendering, because now we want more examples; we want the AI to give us something to choose from.
And here we have some more coffee cups.
It doesn't look like the one we had to the left, it's another coffee cup, but I think it's a fairly cool addition to the scene.
Now, if you want this to look better: you can clearly see this is a blurred cup, so you're going to need to work with it even more.
You can add 'blurred' to the prompt; let's actually try that.
I added 'blurred coffee cup, out of focus', and while it may not be perfect, it's better than what we had before.
Now it has a blur similar to the rest of the scene.
And this can be adapted further: something I do is take this into Photoshop or Photopea and paint a little rough sketch, which can be more detailed than Inpaint sketch.
But let's pretend we're happy with this.
You can drag this into here and keep working with the face, since we had that part.
Let's say we want to work on the eyes here: I'm setting the denoising to 0.6 again, and let's do two images.
Now we render even closer, and that will give us more detail specifically in the inpainted area here, and you can keep going as long as you want with this.
Look at this one, for example: you can see we've got some more detail in the eyes.
And again, if you want to keep iterating, just drag that to the left.
Let's say we want to change the earring here: 'beautiful golden earring'. You can see we're getting much more intricate detail compared to the first one.
You can see in some of the images there's a little line here, like a blurred line, and that can be changed with the mask blur here and the 'Only masked padding, pixels'.
The padding changes how far out from the object the rendered area extends, and the mask blur changes how much the mask edge is blurred.
Think of it as a Gaussian blur.
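What the mask blur does can be illustrated in one dimension: a hard 0-to-1 mask edge becomes a smooth ramp, so the inpainted patch fades into its surroundings instead of leaving a visible seam. A small NumPy sketch (illustrative, not the webui's implementation):

```python
import numpy as np

def gaussian_blur_1d(x, sigma):
    """Blur a 1-D signal with a normalized Gaussian kernel."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(x, kernel, mode="same")

# A hard mask edge: ten 0s, then ten 1s.
edge = np.array([0.0] * 10 + [1.0] * 10)
soft = gaussian_blur_1d(edge, sigma=2.0)
# The step is now a smooth ramp across several pixels,
# which is what hides the blurred line at the mask border.
```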
So let's say we set this to 20 instead, and set the padding a little closer.
You can see the line is not as distinct anymore, especially in this one.
So there you have it: inpainting in Stable Diffusion.
Once you get to know the values and settings, it's fairly easy, and you can keep iterating with very advanced scenes, with multiple people, faces, characters, whatever.
It doesn't matter.
If you learned something today, feel free to like and subscribe.
If you don't, that's fine too, I'm not your boss.
I'll see you in the next video.
As always, have a good one.