ComfyUI - Hands are finally FIXED! This solution works with all models!
Hello all, we're finally going to fix hands. On my live stream last Saturday I showed how to do this, but I ran into a couple of issues, and I've since resolved those. So this is going to show you how to fix hands in about 96 percent of your images. 1.5, SDXL, whatever model, it all works very well and it's pretty simple.
Quick word from our sponsor today. Gigabyte has sponsored the channel. They sent us a 17X laptop, which we are using during the live streams and in a lot of our videos here, so you can check it out. It has a 4080 card in it, so we can take it on the road and do amazing artwork away from home. It is a fantastic laptop and I've had a lot of fun with it. Again, during live streams we play with it quite a bit, so you get a chance to see the features a bit more in action. It is quite fast; it's been able to knock these things out faster than the older 3090 card that we've been using for the longest time now. So again, huge thanks to Gigabyte.
So I have here a very basic graph, and I'm going to be using the Juggernaut model. Again, you can use whatever model you'd like. We have a very simple prompt here: portrait of a beautiful woman in a summer dress waving her hands. So we're really setting ourselves up for success with the hand waving. We have no negative prompt here. You can throw things in there; the word "hands" is a pretty popular one recently that helps correct hands, or whatever else you like. But again, we're going to do this a more methodical way and not just hope that the prompt works. We have a custom node here for the empty latent, but do whatever you'd like to do. And then a standard KSampler. I've just picked a seed here, and we're going to use a fixed seed for this so we don't change a lot of variables as we explore.
Just 20 steps. You do want to do enough steps to make sure the model has a chance to resolve the image. Don't be too quick on this, because once we've done this, we really don't want to have to run another KSampler later to clean up any residual noise. We want to try and get it right the first time, because we only want to fix the hands; we don't want to have to fix the whole image again. That's not to say we can't upscale when we're done to do something else, but for this, we really don't have to deal with that. And if you don't have this preview, remember that in your Manager here you can change the preview to the slow version, and it will show you every sample.
Yeah, she has lots of extra fingers, it's quite exciting.
All right, so let's fix this up.
So the node that does all the work here is this MeshGraphormer. If you type in the word "mesh", you should see it in here. There are a couple of them; there's a face mesh, but we're looking for the depth map preprocessor. This comes with the ControlNet auxiliary preprocessors, so if you're doing any ControlNet work, you probably already have it. If not, make sure you're up to date. Remember, always do a "Fetch All" or "Update All" before you complain that you can't get it to work, because this stuff changes on a daily basis.
What we want to do is find the hands, which is what this does, but it also figures out what the hand shape should be, and it's a very small model. So we just take this image and push it in here. Then we can pull the image right back out again and see what we get. Because we're using a fixed seed, we don't have to worry about the first sampler taking time to process again. We just hit queue, it goes right to this Graphormer here, and we'll see what we get.
What we're looking for out of here is just the hands, and it should show them in a depth map. And there we go. On the hands, notice the lightest part is toward the camera and the darkest part is away from the camera. I think it does this somewhat randomly based on what I can see, but it does help the model understand how the hand is laid out, so I think that's a good idea. You can also work with this resolution here if you want more out of it, but I think for this it's good enough, because we only need this depth as a guide. All right,
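To make the lighter-equals-closer convention concrete, here is a tiny sketch of how a depth map encodes hand layout as grayscale. The normalization below is my own illustration; the preprocessor's actual scaling may differ.

```python
# Sketch: encode per-pixel depth as grayscale, nearest point lightest.
# This mirrors the convention seen in the preview; the exact scaling
# inside the MeshGraphormer preprocessor is an assumption here.

def depth_to_grayscale(depth_rows):
    """Map raw depth values (smaller = closer to camera) to 0-255,
    rendering the closest point white (255) and the farthest black (0)."""
    flat = [d for row in depth_rows for d in row]
    near, far = min(flat), max(flat)
    span = (far - near) or 1  # avoid division by zero on a flat patch
    return [[round(255 * (far - d) / span) for d in row]
            for row in depth_rows]

# A tiny 2x2 "hand" patch: 0.5 is the closest point, 2.0 the farthest.
print(depth_to_grayscale([[0.5, 1.0], [1.5, 2.0]]))
```

The gradient this produces is what lets the sampler tell a fingertip pointing at the camera from one curling away.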
so from here, what do we do to fix this? Well, let's use a ControlNet. I like to use the Advanced one here, but that's just my personal preference. What we want to do is take this image, which is the depth map, and put that in here. You're not going to use the image down here, because this ControlNet is expecting a depth map. So we pull out a ControlNet loader, and we're going to load in a depth model. Let's just go find it. I have a lot of ControlNets on here, so I'm typing the word "depth", and I'm going to look for this one. Again, you can use whichever one you want; I'm just going to use the official release from Stability, the one I grabbed. Let me double check that. It should be the rank-128 one. Yes, that's the one I want, although any of these would work.
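For reference, the wiring we just built looks roughly like this as a ComfyUI API-format graph fragment. The node class names and input keys reflect my understanding of ComfyUI and the controlnet_aux pack; the node ids, the upstream stubs ("1", "2", "3" for the source image and conditioning), and the model filename are placeholders you would adjust to your own graph.

```python
# Sketch of the ControlNet wiring as a ComfyUI /prompt API fragment.
# Class names and input keys are assumptions to verify against your
# installed nodes; links are [source_node_id, output_index] pairs.

workflow = {
    "7": {  # hand depth map from the MeshGraphormer preprocessor
        "class_type": "MeshGraphormer-DepthMapPreprocessor",
        "inputs": {"image": ["1", 0], "resolution": 512},
    },
    "8": {  # a depth ControlNet; swap in your own filename here
        "class_type": "ControlNetLoader",
        "inputs": {"control_net_name": "control-lora-depth-rank128.safetensors"},
    },
    "9": {  # full strength, no start/end restriction, as in the video
        "class_type": "ControlNetApplyAdvanced",
        "inputs": {
            "positive": ["2", 0], "negative": ["3", 0],
            "control_net": ["8", 0], "image": ["7", 0],
            "strength": 1.0, "start_percent": 0.0, "end_percent": 1.0,
        },
    },
}

# Sanity check: every link is a two-element [node_id, output_index] pair.
links = [v for node in workflow.values() for v in node["inputs"].values()
         if isinstance(v, list)]
print(all(len(link) == 2 for link in links))
```

The point of the fragment is just the routing: the preprocessor's depth image feeds the ControlNet apply node, which sits between your prompts and the second sampler.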
All right, so from here we should be able to figure out what we need to do. Now we need to hook up these positives and negatives over here, so I'm just going to drag these across the screen. You can make your graph a lot nicer when you're done with this, but for now this is what we're working with.
And then we're going to take this into a KSampler. Now, here's where I screwed up on Saturday. Let's just grab this KSampler here. Again, I hold down my Alt key, and if I click and drag, it will duplicate the node. So I'm just going to hook up the positive and negative, like I did before. And again, I'm going to show you this because I ran into it and got really frustrated; I couldn't figure it out. Let me save you some pain here. I'm going to grab this and drag it up and create a reroute for it. This is the model. I'm probably going to do the same with the VAE eventually, because we're going to need it. Now, this latent: I'm actually going to screw up twice here and show you what I did. We take this latent and drag it over here. Now we can take this and decode it, and again, we need our VAE, like I said. Let's go ahead and drag this up. And I'm going to explain this again, so don't freak out up here like, oh my god, I lost this guy. Actually, I'm going to save this one.
Okay, so let's walk through what I did real quick here, because it's kind of a spaghetti mess, but that's okay. Up here, this is going to be our group, right? This is our ControlNet group. All we're doing in here is taking this Graphormer to create the depth map. It also creates a mask, so let's take a look at that mask as well. I have this really nice image-to-mask node here from the MTB set, and we use that to preview it, so you can see what the mask looks like. Then for our ControlNet, we're using just a full-strength ControlNet with no specific start or end, and we're going to use a depth model here. Okay, so if we hit Queue Prompt on this, we can see what this looks like. This is what the mask looks like versus the actual depth map. You see, the depth map has a lot more detail, and the mask is just the region we're going to be replacing.
Now, what I did at first was run this thing directly into this KSampler, and you see, first of all, that the picture is very different. And why is that? Well, because we only want to redraw the hands; we don't want to redraw the whole picture. So we need to make sure we mask this. In order to mask this... actually, this latent here, this line here, this is the problem. There are a couple of ways to do this. One is, if we type in "inpaint", there's a VAE Encode (for Inpainting) node; that's one way to do it. Now, I don't have as much luck with that one as I do with the standard masking one. That one tends to create rectangles around the hands, as you'll find when you experiment with it. I'm not going to show you that mistake here, because there are other mistakes we're going to make. I don't really think that node works as well, but again, try what you want to try. I'm going to type in the word "mask" and use Set Latent Noise Mask here.
So I'm going to take the latent that came from our first KSampler, and we're going to take the mask that came from, again, this refiner here, the one that looks like this. This is the mask we're grabbing. We're going to grab the green line, not the blue one; that's just our preview. And we're going to drag that in. What we've done is basically say: the only part I want you to correct is the part that is in white here; leave everything else in the image alone.
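What the noise mask is doing can be sketched as a simple per-element blend: keep the original latent wherever the mask is black, and let the second sampler's new values through only where the mask is white. This is just a conceptual sketch of the idea, not ComfyUI's actual sampler code.

```python
def apply_latent_mask(original, denoised, mask):
    """Keep the original latent wherever mask == 0; take the newly
    denoised values only where mask == 1 (the white hand region)."""
    return [[m * d + (1 - m) * o
             for o, d, m in zip(orow, drow, mrow)]
            for orow, drow, mrow in zip(original, denoised, mask)]

original = [[0.1, 0.2], [0.3, 0.4]]   # latent from the first KSampler
denoised = [[9.0, 9.0], [9.0, 9.0]]   # hypothetical second-pass values
mask     = [[1, 0], [0, 0]]           # only the top-left entry is "hand"

print(apply_latent_mask(original, denoised, mask))
# Only the masked entry changes; everything else passes through untouched.
```

That is why the rest of the picture comes back identical: outside the hand mask, the first sampler's latent passes straight through to the decoder.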
So if we run this again, you'll see that it is only refining the hands, and it ends with something that looks a lot better.
Here we go. Do you see that her hands are now proper? Although, they look a little wonky, and why is that? Well, this is the mistake that I had made, and it took me a little bit to find: notice that the seed is the same on both of them. That was the issue. So make sure that the seeds are different. What I had done in my live stream was use this global seed here, which I love; it's a little remote control that you can drag around the screen to keep track of what your seeds are. But what it does is keep the seed fixed for all controls and keep all of them the same, and I kept running into that problem. So now if we run this again, we should end up with a lot less of that kind of crunchiness around the hand.
So, some things to be aware of with this. It does not know what the right size of a hand is, right? So it may make the hand too large or too small, because again, it doesn't know what size it's supposed to be. It's just going to take whatever size hand you have and fix it. The other issue is that it's not going to fix fingers that are too long or that fall outside the mask. If we look at this mask, say we have a hand with six fingers and there's an extra finger sticking up through here; that finger is not going to be removed, because the mask is only covering the hand itself.
So I have a suggestion, and it's right here: the mask is based on depth. We're going to change it to tight bounding boxes. This will create a bounding box around just the hand, so these masks will turn into rectangles. Then you can use these little mask-expand nodes here to help with the padding and see if you can get a better result out of it. Now, I don't think these are the default values; I've messed with this a bit. But again, use whatever you can get to work. You'll see here that these turn into rectangles now, so when it goes through, it will use a rectangle. This will take care of any extra fingers we might have, because now they're covered by the mask too. So this should work out better.
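The tight-bounding-box mode plus mask expansion can be sketched like this: a filled rectangle around the detected hand pixels, then grown by a padding amount. This is a simplified illustration of the idea, not the node's actual source.

```python
def tight_bbox(mask):
    """Replace a free-form hand mask with its filled bounding rectangle,
    so stray extra fingers inside the box get covered too."""
    ys = [y for y, row in enumerate(mask) for v in row if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    y0, y1, x0, x1 = min(ys), max(ys), min(xs), max(xs)
    return [[1 if y0 <= y <= y1 and x0 <= x <= x1 else 0
             for x in range(len(mask[0]))] for y in range(len(mask))]

def expand(mask, pad):
    """Grow the mask by `pad` pixels in every direction, like the
    mask-expand nodes used for extra padding."""
    h, w = len(mask), len(mask[0])
    return [[1 if any(mask[yy][xx]
                      for yy in range(max(0, y - pad), min(h, y + pad + 1))
                      for xx in range(max(0, x - pad), min(w, x + pad + 1)))
             else 0 for x in range(w)] for y in range(h)]

m = [[0, 0, 0, 0],
     [0, 1, 0, 0],   # two detected "hand" pixels on a diagonal
     [0, 0, 1, 0],
     [0, 0, 0, 0]]
print(tight_bbox(m))             # a filled 2x2 rectangle covering both
print(expand(tight_bbox(m), 1))  # padded out by one pixel on each side
```

The rectangle is the whole trick: anything inside it, including an extra finger the depth mask missed, gets redrawn.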
However, the issue I have with this one is that sometimes the fingers can be extra long and stick out past the box. So there's actually another option here, and that is "original". The original mode will create a much larger rectangle, again, paying attention to your mask expansion here, and that should help quite a bit. Let's run one of those. You see the bounding box is now much larger. Obviously, my settings are a bit aggressive on the size of this, but you get the idea. It'll go through, mask out just the hands, and replace them with the proper meshed hands.
So there you go; we end up with something good. From here, I would recommend that you take this into an upscaler, which will help with the face and any other issues. I use the Ultimate Upscaler here, but that should help resolve any lines that might show up. You shouldn't have too many, but it can happen. I'll post this graph in the community area for all the people who help support the channel, so you guys can have it. It's actually a much prettier graph than the one from the live stream on Saturday, so I'll make sure you guys get that one.
Again, thanks to Gigabyte, our sponsor today. I really appreciate them taking the time to support the channel, and again, all the people who've supported us as well. We have over 300 members now, so we are rocking it. I'm putting all the files and all these other things into the community area here on YouTube for all the people who are sponsors of the channel. So again, thank you so much, and make sure you go in there and look around. You get access to months and months of all the live streams and all these different graphs that I've got, as well as a bunch of other things, like some embeddings I'm giving away.