Nvidia DLSS 3 Analysis: Image Quality, Latency, V-Sync + Testing Methodology

With the launch of the RTX 4090 and Rich's GPU review, I've spent some time with a number of pre-release DLSS 3 titles.
So in today's video, I will answer a lot of the questions I saw from viewers under our last DLSS3 video.
I will also present some foundational criteria and tests that we at DF will be using to critique DLSS 3 quality.
But before I get into all that, let's go to our first most common question I saw under our last video.
And that is, how are you even recording DLSS3?
This is a very important question to answer.
as it explains why you are seeing me on the screen right now in the flesh and not a bunch of b-roll of game footage.
Currently, there are no readily available HDMI 2.1 consumer capture cards on the market.
So that means there is no way to externally capture 4K at 120 Hertz.
As you can imagine, this is a pretty big issue as DLSS3 was designed to accelerate 4K into and beyond 120 Hertz.
So to get that, we've been leveraging OBS on a local computer with the RTX 4090 and setting it up to capture the screen at 4K 120 FPS.
Here we are using OBS because Nvidia's Share capture (ShadowPlay) still does not offer a 4K 120 FPS option,
although it does offer an 8K 60 FPS option, interestingly enough.
The media encoders in the RTX 4090 are better than those in the previous generation of cards,
so 4K 120 local capture seems like it should work.
This should be awesome, but it has two big flaws.
The first flaw is that local capture will affect performance.
So when you do use it, you're not actually showing the real performance of the GPU in the moment when the recording is happening.
Secondly, local OBS recording is highly flawed because it currently does not actually work well.
In fact, it works very poorly at the moment.
You can record footage like you're seeing here and it will seem fine enough.
The recording, though, will actually be dropping a ton of frames.
You can see RTSS in the corner, for example, showing 120 FPS with DLSS 3 on, and it looks fluid in person,
while the recording is actually dropping frames all the time: there are missing frames, lurches, and even more.
Nine times out of ten, a recording that you make using this method will be deeply flawed.
This means recordings you may see elsewhere across the web at 4K could well be wrong and full of artifacting that is not there locally.
The 120 FPS footage you will be seeing in this video, slowed down, has been manually scanned
using an FCAT border to make sure that the footage I'm showing on screen is not dropping any frames in the recording at all,
so that the footage has a flawless real frame rate.
To even get 10 seconds of flawless footage can sometimes take hours on end of redoing recordings.
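The FCAT-style scan works by reading a colored border that cycles every frame: if the same border color shows up in two consecutive captured frames, the recording duplicated (dropped) a frame. Here is a minimal Python sketch of that idea; the helper name and the palette are my own illustration, not any actual FCAT tooling:

```python
def find_drops(border_colors):
    """Return the indices where a captured frame repeats the previous
    frame's border color -- i.e., where the recording duplicated a frame.

    `border_colors` is the sequence of border colors read from each
    captured frame; a flawless recording of a cycling palette never
    shows the same color twice in a row.
    """
    return [i for i in range(1, len(border_colors))
            if border_colors[i] == border_colors[i - 1]]

# Palette cycles R, G, B; the repeated "G" means frame 2 is a duplicate.
print(find_drops(["R", "G", "G", "B"]))  # [2]
print(find_drops(["R", "G", "B", "R"]))  # []
```

Scanning an entire clip this way is what lets you certify that a stretch of footage has a flawless real frame rate.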
Since recording footage of DLSS 3 at full resolution is so flawed,
I do not have nearly enough footage at the resolution and frame rate that I would actually like for this video.
So I cannot offer a 100%
definitive analysis of DLSS 3, but I'm getting there and I have some good preliminary critiques and a lot of info.
I just need an HDMI 2.1 capture card really, really badly.
Another question I saw under our video was asking, does DLSS3 work with VSync?
And to answer that, let's first look at what DLSS3 is doing in theory.
As I described in our last video, DLSS3 is a technology that generates a frame in between two traditionally rendered frames.
This frame is generated with the help of motion vectors from the engine, which describe the movement of opaque geometry,
as well as an optical flow image generated on a separate fixed-function block on the RTX 4000 GPUs called the optical flow accelerator.
This generated frame can essentially be called an AI-generated frame,
as a machine learning program on the GPU is deciding how to combine the information from these two images,
or which parts to use, to make that final in-between frame.
The AI-generated frame fills the time gap between the two traditional frames.
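To illustrate where the generated frame sits in the sequence, here is a deliberately naive Python sketch that just blends the two surrounding frames. This is not what DLSS 3 does — it warps pixels along motion vectors and optical flow and lets a machine learning model decide what to trust per pixel — but it shows the time-gap-filling idea:

```python
def midpoint_frame(frame_a, frame_b):
    """Naive in-between frame: a straight 50/50 blend of the two
    surrounding traditionally rendered frames (2D lists of 0-255
    pixel values).

    A plain blend like this produces ghosting on anything that moves,
    which is exactly why real frame generation reprojects pixels along
    motion data instead of averaging them in place.
    """
    return [[(a + b) // 2 for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(frame_a, frame_b)]

frame_a = [[100, 100], [100, 100]]  # traditional frame N
frame_b = [[200, 200], [200, 200]]  # traditional frame N+1
print(midpoint_frame(frame_a, frame_b))  # [[150, 150], [150, 150]]
```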
In practice, it looks like this.
Traditional 60 FPS on the left, DLSS frame generation at 120 FPS on the right.
Both are running at half speed on YouTube, so you can actually see the DLSS 3 frames.
As you can see, DLSS 3 on the right is smoother than the traditional 60 due to the AI frames filling the gaps.
You should also notice in this footage here that there's absolutely no tearing on the right-hand side.
DLSS 3 frames are being synced here, yet according to Nvidia, V-Sync is currently not supported while G-Sync is supported.
And when you turn on DLSS3 in most games, V-Sync options in-game become grayed out and non-functional.
So, how am I showing clearly synced footage of Spider-Man at 120 FPS with DLSS 3?
The answer is simple: I'm using V-Sync, forced externally in the Nvidia control panel,
and it worked perfectly fine in the pre-release version of Spider-Man that I had access to for the last video.
This obviously contradicts Nvidia's commentary to us, so let's investigate.
For one, Nvidia's comments have confused me a little bit.
See, since around 2015, the panel option to turn on G-Sync is typically only half of what turning on G-Sync actually does and requires.
G-Sync is best known for its ability to allow variable refresh rates without tearing below the monitor's max refresh rate.
Turning on the first option here allows it to do that, but if you go above the refresh rate, the game will tear.
To prevent that, you have to turn on the second part of G-Sync's total function.
To turn on that function of G-Sync,
you have to turn on G-Sync and then usually go into the global settings of NVCP and force on vsync there globally.
With both of these options on, you will not see tearing either below or above your monitor's max refresh rate.
Either way, syncing is apparently supported in some capacity,
as you're seeing synced footage of the Lyra Unreal Engine 5 demo here with DLSS 3 on: zero tears, sticking to 120 FPS at 4K.
So obviously syncing can work in some capacity.
Syncing works in multiple games that I've tested out with DLSS3 with their pre-release patches.
That is with vsync being forced with or without gsync set to on.
That means half of the frame being presented here in the footage you're seeing are AI-generated frames, and the other half are traditional frames.
Basically, my experience seems to contradict NVIDIA's official stance on v-sync support.
So when I asked further about v-sync,
NVIDIA told us that v-sync is not supported now,
but there are plans to support it in the future, as currently V-Sync can at times introduce back pressure to the game, which makes pacing uneven.
Based on Nvidia's response, it sounds like syncing, or V-Syncing, can work, but it will not always work.
And I think I found out what that looks like when it doesn't work in Microsoft's Flight Simulator.
Here I'm going to slow down some footage I made of the game running V-Synced at 120 FPS with DLSS 3,
slowed down, of course, on YouTube to half speed so you can actually see those DLSS 3 frames.
Here, I'm going to ping it back and forth in the footage so you can see what I'm talking about.
And I think we're seeing an uneven pacing issue.
I've scanned this footage, of course, with our frame tools and, it's actually a perfect 120 FPS.
So frame rate is not the issue here.
Rather, the rendered frames each represent a non-linear amount of time per frame.
So the camera moves an uneven distance between frames while panning.
This leads to stuttery, jerky-looking movement even at a perfect 120 FPS when DLSS 3 is enabled.
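The distinction between frame rate and frame pacing can be made concrete with a tiny sketch: even if every frame arrives on time, the camera can advance by uneven amounts per frame. This is a hypothetical metric of my own, assuming we could log the camera position at every presented frame:

```python
def pacing_jitter(camera_positions):
    """Standard deviation of per-frame camera displacement.

    A locked frame rate only guarantees that frames arrive on time;
    if the content of each frame advances by uneven amounts, motion
    still looks jerky. Zero means perfectly even animation pacing.
    """
    steps = [b - a for a, b in zip(camera_positions, camera_positions[1:])]
    mean = sum(steps) / len(steps)
    return (sum((s - mean) ** 2 for s in steps) / len(steps)) ** 0.5

print(pacing_jitter([0.0, 1.0, 2.0, 3.0]))   # 0.0   -> a smooth pan
print(pacing_jitter([0.0, 1.0, 1.25, 3.0]))  # ~0.61 -> a jerky pan at the same FPS
```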
I saw this behavior when forcing syncing with or without G-Sync being active.
In the end, this means the following.
DLSS3 does not officially support VSync.
In our opinion at DF, VSync is kind of needed for G-Sync to work in the way that I think it should be working.
If you do not force V-Sync globally, I think you will usually see tearing there with G-Sync on.
If you do force it, it can work wonderfully, but it can also not work wonderfully, like we see in Flight Simulator here.
I think NVIDIA needs to clear up a little bit of that confusion there around G-Sync,
as well as work towards a 100% working V-Sync with DLSS3.
Another question I saw in the comments was, does DLSS3 AI frame generation require DLSS image reconstruction?
The answer there is a simple, no.
In nearly all the games I've tested so far,
the DLSS 3 frame generation option is separate from the other DLSS options, and it can be toggled separately.
So if you want to use frame generation without DLSS image reconstruction, you can do that.
That means you can run it with TAA at native resolution or any other type of image quality treatment that you want.
Frame generation is a separate option.
One of the last questions I commonly saw was: does DLSS 3 look good and work well at low frame rates?
Given everything about DLSS 3 and the power of an RTX 4090, I think DLSS 3 was designed around an HFR experience.
That is, a high frame rate, usually greater than 100 FPS.
And at above 100 FPS, it looks excellent.
I think the last video I made showed that off.
But when does it start becoming less convincing?
And I think the answer to that depends upon the content and the game.
Take Flight Simulator, for example, where there's such low motion in the game that it is not really a good judge for motion artifacts due to DLSS 3.
If I were to exclude the V-Sync-induced stutter that this title has, you wouldn't actually see that much of a difference, due to the low amount of motion here.
Same with Cyberpunk 2077, for example, at 60 FPS.
DLSS3, at 60 FPS here, looks impressively similar to a traditional 60 FPS rendered in the normal way.
Even when doing reloads of your weapon right next to the camera.
That is the thing about DLSS 3: for games in first person, or those without a lot of motion chaos, it is going to look surprisingly good, even at a low frame rate.
Those kind of games and experiences, though, are not the interesting challenging tests.
Cyberpunk isn't that interesting a test for DLSS 3 in terms of how it holds up in motion.
Something like Spider-Man, or Lyra, however you pronounce it,
is a much better representation of challenging movement scenarios that allow us to judge how low we can actually go with DLSS 3.
I mean, in these types of games, there's a lot of motion in the center of the screen due to that third-person character model.
It's zipping all around, animating in a more chaotic way.
Something you really don't get, actually, in first-person games, whether they be racers, simulation games, or something like Cyberpunk's role-playing.
And when we look at games like this, for example here in Lyra with the character running,
I think we can see some issues with a 60 FPS conversion with DLSS 3 on the right-hand side.
It looks fine, but I would say that if you look at the frames very specifically when the character is moving,
you can see some extra aliasing in there that you definitely do not see on the traditionally rendered 60 FPS side of the screen.
Same with Spider-Man: when the character is moving in a really hectic manner, I think you see that exact same issue.
Essentially, DLSS 3 at 60 FPS can convey the motion, I would say, but it does so in a way that seems flawed and
imperfect, unlike at the higher FPS I've been showing consistently in this video and in the videos before.
Based upon my testing that I did specifically in the Lyra demo,
I found around 80 FPS to be the point at which DLSS 3 starts looking a lot more like the real-deal traditional 80 FPS that we see here on the left-hand side.
The persistence per frame is short enough so that you don't really tend to notice the individual artifacts that can be found in a frame.
Here I did these 80 FPS recordings by hooking up the RTX 4090 to a capture card and recording at 1080p 240 Hz,
with 80 FPS fitting evenly into that at one-third rate.
This topic, though, of how low you can go with DLSS 3 brings me to the core of how we at DF are actually going to judge DLSS 3 image quality.
Our analysis of DLSS 3, and of other similar techniques that will most likely come out in the next couple of years,
is going to be different from how I analyzed DLSS 2, FSR and XeSS, or anything similar.
The reason for this is because of image persistence.
Let me start with an example that is not actually about DLSS3 but it's about DLSS2.
When I reviewed Hitman 3 getting its DLSS patch,
I pointed out how its version of DLSS at release had this visual artifact in the distance where you can see visible trailing occurring
behind this guard here walking in the distance.
The reason I point this out in the video is because this artifact was readily visible
at a normal frame rate speed at normal viewing distance without any zooming at all.
It was plainly visible just by playing the game.
That is actually how I do all of my image quality analysis.
By playing the games,
I generate a list of image quality scenarios where I tend to see issues at normal viewing distance at normal game speed.
I want to do that exact same thing with frame generation, looking for those issues that are visible at normal viewing distance, game speed.
But for the purpose of this video,
it's hard to do it the same way I've done in the past because frame generation presents artifacts very differently than say DLSS 2 did.
With this guard example in Hitman 3 and DLSS 2, the error we are seeing here is occurring in every single frame of the game.
It shows a trail of issues in every single frame, continuously over the arc of movement.
This allows the error to enter our visual perception as an obvious artifact,
even though it can take up a small percentage of the overall screen.
Basically, its continuous nature makes it obvious, even though it's small.
With frame generation artifacts, you do not have continuous error.
You have a perfect frame, followed by an AI-generated frame with potential artifacts, followed by another perfect frame.
And this happens over and over again, ad infinitum, so you're not seeing the error continuously.
Rather, you're seeing error strobe at you in between perfection.
And at rapid speeds like 120 FPS,
the strobing makes artifact detection at normal viewing distance and normal game speed surprisingly difficult,
and different from the way I would do it with something like DLSS 2.
Artifacts with frame generation are strangely harder to perceive than those with image reconstruction.
And I cannot even show this rapid strobing effect on YouTube: as you know, YouTube is limited to 60 FPS, so I'm slowing down all the footage you see.
Basically, the video you're seeing here on YouTube is not representative.
Let me give you an example of what I mean with Spiderman that I used in my last video.
In that video I showed an error in this area around Spider-Man's legs in this frame in the middle.
I thought it was caused by disocclusion, and it probably is.
The thing is, in actual motion at 120 FPS, I did not notice this error when I was playing the game.
I only noticed it after the fact when I was combing through my footage frame by frame.
And only then, looking at a still frame, did I see the error;
the strobing nature of perfect, imperfect, perfect, et cetera, made my brain not perceive this specific artifact while playing.
This is interesting as the size of the error in screen space, as you can see here, is actually rather large in a stopped frame.
So the strobing made me not see this artifact,
but at the same time, the strobing made me notice another artifact in this scene.
With Spider-Man climbing the building at full speed at 120 FPS, I saw a kind of aliasing or flickering around the lower half of Spider-Man's body.
I noticed this at full speed 120 fps and normal viewing distance.
If I pause a frame, I think you can see why.
There are these dark lines that pop up near Spider-Man's legs in AI-generated frames here that do not appear necessarily in the traditionally
rendered ones.
When Spider-Man's animation cycles over and over again on repeat here,
this cyclical appearance and disappearance of the artifact allowed my brain to perceive it, I think.
In essence, I was seeing a flicker at full speed around Spider-Man's legs.
I think this error is interesting because it's comparatively small in comparison to the other error I showed on Spider-Man's feet in the cutscene.
Yet due to how image persistence works,
I can see the smaller error as it's constantly repeating,
but I did not notice the larger error in the cutscene as it was a one-off error.
This is what I'm going to call scenario one for frame generation image quality: cycling animations in front of the camera.
That's something I'm going to look at in future DLSS 3 videos.
Basically, any animation that repeats itself, this can lead to visible error as we see in Spider-Man due to that cycling nature.
Unfortunately, of all the games I have access to for this video, not really any of them are very good for examining this.
They're all really first-person,
or do not have this kind of cycling animation close to the camera in such a rapid way like Spider-Man has it.
Even Lyra isn't that fast.
Another type of error that I noticed with DLSS 3 at full-speed playback at normal viewing distance was in scene transitions:
basically, one camera cutting to another shot that looks very different.
Check out Flight Simulator here at 120 FPS, slowed down to half speed on YouTube.
I think it will be a bit exaggerated at half speed but at full speed it almost looks like a frame drop occurs when the camera cuts.
If we check what is happening with the frames in sequence around that camera cut,
we can see that the intermediate AI-generated frame is just one big artifact in its own right.
At full speed, this full-screen artifact, even for one frame, can look like a frame drop due to its size, depending upon its brightness level.
This is an artifact I've seen with DLSS3, so it's scenario 2 on our list here.
But I'm honestly a little surprised this is happening as I would imagine DLSS3 could maybe see if the scene is changing due to motion vectors,
but maybe not in this game.
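One could imagine a frame generator gating itself on scene changes with a simple difference check between the two source frames, and skipping generation across a cut. This is purely my own illustrative sketch — the threshold is an arbitrary assumption, and it is not how DLSS 3 actually detects anything:

```python
def should_generate(prev_frame, next_frame, cut_threshold=60):
    """Decide whether to synthesize an in-between frame.

    Computes the mean absolute pixel difference between the two
    surrounding frames (2D lists of 0-255 values). Above the threshold
    we assume a hard camera cut and skip generation, since interpolating
    across a cut produces a full-screen artifact frame.
    """
    diffs = [abs(a - b)
             for row_a, row_b in zip(prev_frame, next_frame)
             for a, b in zip(row_a, row_b)]
    return sum(diffs) / len(diffs) <= cut_threshold

dark = [[10] * 4 for _ in range(4)]
light = [[200] * 4 for _ in range(4)]
print(should_generate(dark, dark))   # True  -> same shot, safe to interpolate
print(should_generate(dark, light))  # False -> hard cut, skip generation
```

Repeating a real frame on a cut would read as a momentary pacing hiccup rather than a full-screen artifact, which arguably is the lesser evil.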
So we have two scenarios of interest so far.
A third scenario I found is in HUD presentation or in thin transparencies.
I showed this off in the last video with Spider-Man,
but HUD elements are interpolated up to the frame rate you're targeting and are technically not correct in those AI-generated frames.
This is the least noticeable of the three scenarios that I've described so far.
One usually doesn't notice the HUD distorting,
and you will maybe only really notice it if you're looking at a particular HUD element that is complex and you're staring directly at it.
At full speed,
like 120 FPS, HUD distortion from AI frame generation typically presents as a slight darkening of the HUD element when the camera is moving.
And that's really about it.
Thin transparencies are similar to the HUD darkening.
If the transparency is very thin, it may alias and not show up as complete in an AI-generated frame.
Due to the strobing of frames, some thin transparencies can look a little bit darker to your mind's perception.
A fourth issue that I've noticed that possibly happens is with rapid flashing, like a muzzle flash, for example.
This one, like the HUD scenario, is not too big or easily noticeable,
but with rapid flashes like these gunshots here, you can see that the DLSS 3 gunshots seem a bit less bright in motion.
That is how it presents at 120 FPS: slightly less bright flashes.
If you slow down the footage, you can maybe see why.
The frames before and after a flash can have this artifact here.
In aggregate in motion, it makes the muzzle flash appear slightly darkened.
And with that,
I come to my last scenario regarding image quality and frame generation with DLSS 3,
and that is erratic mouse movement in a third-person game, which typically causes disocclusions,
like here where you see Spider-Man right next to this water tower.
If I go absolutely nuts with the mouse in front of this water tower, rapidly moving it back and forth,
you can start to see some of that frame generation break down if you look at the area around Spider-Man's body.
You can see kind of a reprojection error of sorts right around him.
Technically, there's even more artifacting going on throughout the entire frame, but the area where it is really noticeable is that center area around Spider-Man.
This one is really limited to just mouse users as a controller cannot move anywhere near as fast as this,
but it is something that I can imagine happening in third-person games with really really fast mouse movement.
With that being said,
those are the five scenarios or areas that I have noticed so far, which I think are going to become interesting testbed scenarios for frame generation quality.
And I honestly think I'm going to find more over time as I play more games, especially ones like Spider-Man.
Man, that game seems like it's going to be an awesome testbed for frame generation.
One thing I've done to discover these five scenarios is to compare DLSS 3 frame generation at the same frame rate as a traditionally rendered one.
For example,
here you can see Cyberpunk 2077's benchmark running at 4K 120 FPS in the traditional manner on the left, and on the right-hand side with DLSS 3 frame generation.
Now, rendering a traditional 120 FPS at a high resolution like this is not going to be achievable for all games,
so I may not always be able to do this comparison.
But when I can do it, like here, you can find some pretty interesting differences with DLSS 3.
Getting 4K 120 in the traditional way in Cyberpunk required me to turn off ray tracing and to make use of DLSS in ultra performance mode.
And when I lined up at the same settings with DLSS 3 here,
my first impression was, wow, DLSS 3 actually looks really, really similar to that traditional 120 FPS.
And I think even at YouTube at half speed you'll have a very similar impression there.
Like I mentioned earlier though, a game like Cyberpunk with its first-person perspective, its camera motion, etc.
Well, this type of game is very easy on DLSS3.
It's going to look high quality here due to the type of content we're looking at.
But still, I did notice an interesting difference when recording side by side at 120 FPS, played back at full speed.
Going back to the beginning of that bench there, check out the floor here.
Notice how in traditional rendering, at this moment, this part of the floor looks stable.
You see the detail on the floor and the normal map lines on its surface.
That same area with DLSS 3 in that moment instead has a moiré pattern, and I noticed this at full speed at normal viewing distance.
This is a classic case of moiré, too.
You know, those parallel lines often produce this kind of error in games.
Both of these videos, though, are running DLSS in ultra performance mode, which renders internally at 720p.
So why exactly does DLSS 3 have this happening even though they're running at the same resolution?
The answer comes if we look at DLSS3 frames in sequence here.
As you can see, traditional frame on the left, AI-generated frame in the middle, traditional frame on the right again.
We can see that even the traditionally rendered frames here have the moiré pattern on them.
This means the issue is not in the AI generation itself.
It's just that the AI generated frame is inheriting the moire issue from the traditionally rendered frames.
This is a great insight about DLSS3 actually,
as it means DLSS3 can inherit image quality faults from traditionally rendered frames,
but still you may be wondering why the traditionally rendered frame even has this error in the first place.
And the reason is that image reconstruction actually gets better the higher the frame rate is.
We can see this if we put traditional 60 FPS on the left here, traditional 120 FPS in the middle, and DLSS 3 at 120 FPS on the right.
Notice that the traditional 60 FPS has the same moiré issue as DLSS 3 at 120 FPS,
while the traditional 120 FPS in the middle does not have this issue.
The thing is with the traditional 120FPS, the frames are closer together.
So DLSS super resolution, even in ultra performance mode, has a better understanding of the surface.
120 FPS resolves better anti-aliasing than 60 FPS does.
And since DLSS 3 is only rendering a traditional 60 frames per second in the normal manner,
it inherits the image quality of a 60 FPS presentation, even though it animates just as smoothly as a 120 FPS one.
A geek like me finds this really interesting.
And that's kind of where I am right now with our methodology to analyze image generation.
I have a list of at least five scenarios right now where I can see issues cropping up,
and I have a nice consistent way to do ground truth comparisons, presuming the game allows me to do them.
So now let's talk about input latency with DLSS 3, which is a really interesting topic.
As you can imagine,
it inherently adds input latency, due to the fact that it holds back a frame, like you're seeing here, to compare two frames and generate that in-between frame.
This is generically measurable with V-Sync set to off.
I tested this in Cyberpunk 2077, in this area you see here, running around and shooting.
I compare DLSS 3 4K performance mode with V-Sync off to DLSS 2 performance mode with V-Sync off, both with Reflex on, of course.
Measurements were made with Nvidia frame view which captures PC latency right before the monitor latency so there's none of that monitor latency included here.
So just seeing what's going on essentially in the rendering pipeline.
The numbers look like this.
DLSS 2: 25 milliseconds of latency here, versus 35 milliseconds for DLSS 3.
That roughly 10 milliseconds is essentially the cost of generating that extra DLSS 3 frame in aggregate.
DLSS 3 is of course performing better here if we look at those average frame rate numbers at the cost of that latency.
But who wants to play with all that screen tearing and v-sync off?
I do not.
So here are those same numbers with V-Sync.
DLSS 2 in performance mode is 30 milliseconds now and DLSS 3 in performance mode is 110 milliseconds.
Whoa!
It's adding a ton more input latency here with V-Sync.
That's not very good, but it makes sense if we think about it.
With V-Sync engaged here, DLSS 2 is still below the V-Sync limit of 120 FPS if we look at its average frame rate.
So its input latency is not too different.
But DLSS 3, which runs far in excess of 120 FPS without V-Sync, is now pinned at that V-Sync number, and it is waiting.
The GPU is waiting and not performing to its fullest.
It's just queuing up frames, waiting and waiting, causing a dramatically slower response.
We can see this in the GPU power as well, where the GPU is consuming 50 fewer watts with DLSS 3 than it is with DLSS 2.
Now 110 milliseconds is not awful, but it's not great.
It's actually slower than DLSS 2 in 4K performance mode at 60 Hz with V-Sync, as you're seeing here in the graph.
So if you were to play the game like this, it would feel like slightly heavier input latency than playing the game stock at 60 FPS without frame generation on.
Okay, so let's show another test that is eye-opening and explains a lot about what you need to do with DLSS-3.
DLSS 2 in 4K quality mode in that same scene with V-Sync has 36 milliseconds of input latency.
DLSS 3 in 4K quality mode in that same scene with V-Sync has 49 milliseconds of latency.
Yes, you are seeing that correctly.
In quality mode, DLSS 3 here has lower latency than in performance mode.
And the reason is because it is not hitting the V-Sync limit.
V-Sync is not capping the GPU anymore, and it is going full throttle, leading to a better, unhindered response.
We see that in the power consumption.
Now DLSS 3 is only moderately below DLSS 2's power consumption.
Based upon what I can feel in the mouse movement with DLSS 3,
my recommendation for anyone using it with G-Sync or V-Sync is to make sure you are not hitting your monitor's refresh rate in terms of frame rate.
You need to have the GPU going full blast to avoid incurring a lot of extra latency.
To do this, do as I did here in the example:
tune the settings or the resolution to be heavy enough that the frame rate stays constantly below the maximum that your refresh rate imposes on you.
Just capping it with a normal frame rate limiter like RivaTuner Statistics Server is not enough.
That will only minimally decrease input latency.
The best way to decrease input latency with V-Sync is to be below the max V-Sync frame rate.
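That rule of thumb boils down to a one-line check. The helper name and the 5% headroom margin below are my own assumptions, not anything Nvidia specifies:

```python
def safe_latency_setup(avg_fps, refresh_hz, headroom=0.05):
    """True if the average frame rate sits comfortably under the V-Sync cap.

    When frame generation pushes output past the refresh rate, V-Sync
    back-pressure queues frames and latency balloons (110 ms vs 49 ms in
    the Cyberpunk tests above); staying below the cap keeps the GPU
    running unhindered. The 5% margin is an arbitrary safety buffer.
    """
    return avg_fps < refresh_hz * (1 - headroom)

print(safe_latency_setup(100, 120))  # True  -> under the cap, latency stays low
print(safe_latency_setup(140, 120))  # False -> hitting the cap, expect back-pressure
```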
So, to recap: I've laid out a testing methodology for frame generation that differs from the one I'd be using for analyzing something like DLSS 2, as image persistence changes which issues are relevant.
Of those issues, some are more significant than others:
the ones I'm labeling here in red are noticeable, yellow are less noticeable, and green are even less noticeable, slash placebo, actually.
I've also talked about input latency,
discovering, very crucially, that you want to avoid hitting your max V-Sync refresh rate by making your in-game settings heavy enough.
This is a bit counterintuitive but also fascinating.
Basically, if you're going to use V-Sync, which I'm definitely going to be doing when I use DLSS 3, you want to be below your max frame rate.
All together I would say DLSS3 in aggregate here over this entire video is rather incredible tech.
I think at 120 FPS the quality holds up incredibly well next to ground truth, and I think its best use case is to enable experiences that were not possible before.
Check this out, Cyberpunk on the left is 4K DLSS quality mode without frame generation on with no ray tracing.
On the right is the same, but with ray tracing and with frame generation at that same resolution.
DLSS 3 on the right-hand side here is enabling higher quality settings, and it has much higher motion fluidity due to having a higher frame rate.
If you were to compare PC latency between these two sides,
you would have 34 milliseconds of PC latency on the left and 49 milliseconds on the right.
This is the kind of trade-off that DLSS3 is all about in my eyes.
Much better settings,
higher frame rates, but then you have this added input latency that I think is rather minimal considering what you're gaining in return.
That is how I would use DLSS3 as it is a nice win.
Now, we just need to see how Nvidia evolves this in the future.
It definitely needs official v-sync support,
and it also needs to clear up some of those more visible errors, like the camera-cut ones I talked about earlier.
But that is really enough for me for now.
If you did like this video, hit that like button and subscribe to the channel.
If you're already a subscriber, hit that little bell in the corner to be informed as soon as Digital Foundry posts a video.
If you want to help out, support DF on Patreon to get years' worth of Digital Foundry content in high quality for download.
Comment below, follow on Twitter, and as always, this is Alex Battaglia, und auf Wiedersehen.
