[Translated subtitles] AMD CEO Dr. Lisa Su's full keynote: pushing the limits of AI and high-performance computing | SETN.com

Now, the moment you have been waiting for. As Chair and CEO of AMD, Dr.
Lisa Su has led the company's transformation into a powerhouse of high-performance computing.
Under her visionary leadership,
AMD has achieved remarkable success.
Dr.
Su was recently named the 2024 Chief Executive of the Year by Chief Executive magazine,
recognized for her role in one of the most spectacular achievements in the technology sector. Dr.
Su's influence extends beyond AMD.
She has been a key advocate for the integration of AI across industries,
emphasizing its transformative power.
Her commitment to innovation and collaboration is evident in her leadership style,
which focuses on the development of cutting-edge solutions while fostering an inclusive and forward-looking company culture.
Now, on behalf of all of Computex, I am very pleased to welcome our old friend, Lisa, but
we are going to share a video from AMD first.
AMD makes the limitless potential of AI possible,
from AI PCs to the edge to the cloud,
powered by some of the most advanced GPUs, CPUs, and NPUs on the planet, and enabled by an open software approach that's accessible to all.
Together with our partners, AMD makes more imagination possible,
innovation possible,
healing possible, peace of mind and thrills possible. The impossible is now possible. Thank you very much.
Ladies and gentlemen, please join me in welcoming Dr.
Lisa Su, Chair and CEO of AMD.
Thank you, Lisa.
Thank you so much.
Thank you.
Good morning.
Thank you so much.
Thank you,
James, for that very, very warm introduction and welcome to everyone joining us today in Taipei and from around the world as we open Computex 2024.
Every year,
Computex is such an important event for our industry as we bring together all members of the ecosystem to share new products,
to talk about new innovations, and really discuss the future of technology.
But this year is even more special.
With the rapid innovation around AI and all of the new technology everywhere, it is actually the biggest
Computex ever, and I'm so honored to be here to open the show.
Now we have a lot of new products and news to share today, so let's just go ahead and get started.
Now at AMD, we're all about pushing the envelope in high performance and adaptive computing to help solve the world's most important challenges.
From cloud and enterprise data centers to 5G networks to healthcare,
industrial, automotive, PCs, gaming and AI, AMD is everywhere, powering the lives of billions of people every day.
AI is our number one priority,
and we're at the beginning of an incredibly exciting time for the industry, as AI transforms virtually every business,
improves our quality of life, and reshapes every part of the computing market.
AMD is uniquely positioned to power the end-to-end infrastructure that will define the AI computing era,
from massive cloud servers and enterprise clusters to the next generation of AI-enabled intelligent embedded devices and PCs.
Now to deliver all of these leadership AI solutions, we're focused on three priorities.
First, it's delivering a broad portfolio of high-performance,
energy-efficient compute engines for AI training and inference,
including CPUs,
GPUs, and NPUs. Second, it's about enabling an open, proven, and developer-friendly ecosystem that really
ensures that all of the leading AI frameworks, libraries, and models are fully enabled on AMD hardware.
And third, it's about partnership.
It's really about co-innovating with our partners,
including the largest cloud OEM, software, and AI companies in the world, as we work together to bring the best AI solutions to the market.
Now today,
we're going to talk about a lot of new technologies and products,
including our brand new Zen 5 core,
which is the highest-performance and most energy-efficient core we've ever built, and our next-generation XDNA 2
NPU core that enables leadership performance and capabilities for AI PCs.
And we're also going to be joined by a number of our partners as we launch our new Ryzen
notebook and desktop processors and preview our data center CPU and GPU portfolio for this exciting AI world.
So let's go ahead and get started with gaming PCs.
Now at AMD, we love gaming.
Hundreds of millions of gamers everywhere use AMD, from the latest Sony and Microsoft consoles,
to the highest-end gaming PCs,
to new handheld devices like the Steam Deck,
Legion Go, and ROG Ally. Today I'm excited to show you what's next for PC gaming with Ryzen.
Our new Ryzen 9000 CPUs are the world's fastest consumer PC processors,
bringing our new Zen 5 core to the AM5 platform with support for the latest IO and memory technologies, including PCIe 5 and DDR5.
I'm happy to show you now our brand new Zen 5 core.
Zen 5 is actually the next big step in high-performance CPUs.
It's a new design that's extremely high-performance and also incredibly energy-efficient.
You're going to see Zen 5 everywhere, from data centers to PCs.
And when you look at the technology behind this, there is so much new technology.
We have a new parallel dual pipeline front end and what this does is it improves branch prediction accuracy and reduces latency.
It also enables us to deliver much more performance for every clock cycle.
We also designed Zen 5 with a wider CPU engine instruction window to run more instructions in parallel for leadership compute throughput and efficiency.
As a result,
compared to Zen4,
we get double the instruction bandwidth,
double the data bandwidth between the cache and floating point unit, and double the AI performance with full AVX-512 throughput.
All of this comes together in the Ryzen 9000 series,
and we're delivering an average of 16% higher IPC across a broad range of application benchmarks and games compared to Zen 4.
So now, let me show you the top of the line, Ryzen 9 9950X for the very first time.
Here you go.
We have 16 Zen 5 cores, 32 threads, up to 5.7 gigahertz boost, and a large 80-megabyte cache, at a 170-watt TDP.
This is the fastest consumer CPU in the world.
Okay, so let's take a look at some of the performance.
So when you compare it to the competition, the 9950X delivers significantly more compute performance across a broad suite of content creation software.
In some of them that take advantage of the AVX-512 instruction throughput, like Blender,
we're actually seeing up to 56% faster performance than the competition.
And in 1080p gaming, we know all of our fans love gaming.
The 9950X delivers best-in-class gaming performance across a wide range of popular titles.
Now, with desktops, we know that enthusiasts want a platform that lets them
upgrade across multiple product generations, and with Ryzen, we've done just that.
Our original Ryzen platform, socket AM4, launched in 2016.
And now we're approaching the ninth year, and we have 145 CPUs and APUs across 11 different product families in socket AM4.
And we're actually still launching new products.
We actually have a few Ryzen 5000 CPUs that are coming next.
And we're taking this exact same strategy with socket AM5, which we now plan on supporting through 2027 and beyond.
So you're going to see AM5 processors from us for many, many years to come.
Now, in addition to the top-of-stack Ryzen 9 9950X,
we're also announcing the 12-, 8-, and 6-core versions that will bring the leadership performance of Zen 5 to mainstream price points.
And all of these go on sale in July.
So, now let's shift gears from desktops to laptops, and there's going to be a lot of discussion about laptops at Computex this year.
AMD has been actually leading the transition to AI PCs since we introduced our first generation of Ryzen AI in January last year.
Now AI is actually revolutionizing the way we interact with PCs.
It enables more intelligent personalized experiences that will make the PC an even more essential part of our daily life.
AI PCs enable many new experiences that were simply not possible before.
These are things like real-time translation that will allow us to collaborate in new ways,
things like generative AI capabilities that accelerate content creation,
and the customized digital assistant we each want
that really will help us decide what we need to do and what we should do next.
So, to enable all of this, we need much, much better AI hardware.
And that's why we're so excited to announce today our third-gen Ryzen AI processors.
Our new Ryzen AI series actually delivers a significant increase in compute and AI performance
and sets the bar for what a Copilot+ PC should do.
Thank you Drew.
Here we go.
This is Strix. Strix is our next-generation processor for ultrathin and premium notebooks, and it combines our new Zen 5 CPU cores,
faster RDNA 3.5 graphics, and our new XDNA 2 NPU.
Thank you.
And when you look at what we have, it really is all of the best technology on one chip.
We have a new NPU that delivers an industry-leading 50 TOPS.
We're going to talk about TOPS a lot today; that's 50 TOPS of compute that can power new AI experiences.
We have our new Zen 5 core that enables all the compute performance for ultra thin notebooks.
And we have faster RDNA 3.5 graphics that really brings the best in class application acceleration, as well as console level gaming to notebooks.
Now we have a couple of SKUs.
The Ryzen AI 9 HX 370 has 12 Zen 5 cores, 24 threads, 36 megabytes of cache, the industry's most advanced integrated NPU, and our latest RDNA graphics.
Strix is simply the best mobile CPU.
So let me talk a little bit about what's special in this new NPU.
NPUs are really new, and they're really there for all of these AI applications and workloads.
Now, compared to our prior generation, XDNA 2 features a large array of 32 AI tiles with double the multitasking performance.
It's also an extremely efficient architecture that delivers up to two times better energy efficiency than our prior generation when running gen AI workloads.
And if you look at the performance of Strix
compared to other chips in the market,
and there are a lot of chips coming out with new NPUs,
XDNA 2 delivers the highest performance: a leadership 50 TOPS of INT8 AI performance.
What that means is that third-gen Ryzen AI will deliver the best NPU-powered experiences in a Copilot+ PC.
But let me just go a little bit deeper so you understand the technology.
Every NPU is actually not the same when it comes to generative AI capabilities.
Different NPUs actually support different data types,
and that says something about the accuracy and the performance of the devices.
So for generative AI, 16-bit floating point data types are great for accuracy, but they actually sacrifice performance.
And the current standard for NPUs is actually 8-bit integer data types.
They prioritize performance, but they sacrifice accuracy.
And what this means is that developers really have a tough choice to make between offering either a more accurate solution or a performance solution.
Now, XDNA 2 is the first NPU to support block floating point 16.
And what that means is Block FP16 actually combines the accuracy of 16-bit data with the performance of 8-bit data.
This represents a huge leap forward in AI capability and enables developers to run complex models natively without any quantization steps at full speed,
and what that means is with no compromise.
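To make the block floating point idea concrete, here is a conceptual NumPy sketch: each small block of values shares one exponent while keeping compact per-value mantissas. The block size and mantissa width are illustrative assumptions; this shows the general technique, not AMD's published XDNA 2 format:

```python
import numpy as np

def block_fp_quantize(x, block=8, mant_bits=8):
    """Quantize to a shared-exponent block format and decode back to float."""
    out = np.empty_like(x, dtype=np.float32)
    for i in range(0, len(x), block):
        chunk = x[i:i + block]
        # One exponent per block, sized so every mantissa fits in mant_bits.
        shared_exp = np.ceil(np.log2(np.max(np.abs(chunk)) + 1e-30))
        scale = 2.0 ** (shared_exp - (mant_bits - 1))
        out[i:i + block] = np.round(chunk / scale) * scale  # integer mantissas
    return out

x = np.random.randn(32).astype(np.float32)
xq = block_fp_quantize(x)
print("max abs error:", float(np.max(np.abs(x - xq))))
# Near-16-bit accuracy from mostly 8-bit storage: the Block FP16 trade-off.
```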
And so let me show you what this looks like.
When you look at this example, these are three images that are generated by the popular Stable Diffusion XL Turbo gen AI model.
We used the same prompt with no quantization or retraining for all three, and the only difference is actually the data type.
So INT8 is on the left, which is what most NPUs use today.
Block FP16 is in the middle, which is what XDNA 2 has, and FP16 is on the right, which is the more traditional format.
And as you can see,
the two 16-bit images look much better, with no real differences between them. And it is only because our NPU supports Block FP16 that Ryzen AI is capable of generating
the significantly better images in the same time that it takes to generate the lower-quality INT8 images.
And this is just an example of why we believe that NPUs with the right data types are the best for the next generation of PCs.
And this is why we believe XDNA 2 is the best NPU in the industry.
Now, Microsoft is a great partner and is really leading the AI era.
And we've been working very, very closely with them to bring Copilot+ PCs to market with Strix.
So, to hear more about the work we're doing together, I'd like to welcome Pavan
Davuluri, Corporate Vice President, Windows Devices at Microsoft, to the stage.
Pavan, so wonderful to see you, thank you so much for joining me here in Taipei.
You know, I know it's been a super busy time for you guys.
So much is going on.
Can you tell us a little bit about what's been happening?
Thank you, Lisa, for having us here today.
It is an honor to be here at Computex with all of you.
It has been a really busy couple of weeks for us here at Microsoft.
We announced a new category of PCs built for AI,
the Copilot+ PC. To realize the full power of AI, we re-engineered the entire system, from the chip through every layer of Windows.
These are the fastest, most performant, most intelligent PCs.
And we are thrilled to partner with AMD on Strix-based Copilot+ PCs.
Thank you.
We are too.
And I truly believe we're at an inflection point here where AI is making computing radically more intelligent and personal.
And we've collaborated with AMD since day one on this, and I'm very excited about that
whole AI PC era.
We've always talked about users.
Talk a little bit about, you know, what you were thinking with Copilot+ PCs and this integration between operating systems, hardware, and software.
Sure, absolutely.
The first thing I think customers will see with Copilot+ PCs is that they're just simply outstanding PCs.
These devices will have leading performance and best-in-class battery life, and every app will work great on these machines.
Now, for those next-gen AI experiences, how about we just take a look, Lisa?
Sounds good.
Thanks. Let's take a look at the video.
So, what you just saw there, those devices and those experiences,
they have on-device AI that is powerful enough to keep up with all of the experiences
we want and efficient enough to be always running.
For example,
you saw Recall, which helps users instantly find anything on their PC,
and that's only possible because we can semantically index content in the background,
which requires always-on AI.
Co-creator lets you generate high-quality images by drawing in Paint, and we can do that with fast on-device image generation.
We have Live Captions, which will translate your audio in real time on a PC, switching automatically
between languages. But I truly think what you just saw, Lisa, is just the beginning. We built this thing called the Windows Copilot Runtime,
which is effectively our library of APIs to let developers access new AI capabilities built into Windows.
And those capabilities are also backed by Microsoft's commitments around responsible AI.
And I truly think we are going to be blown away by what partners build beyond what Microsoft is bringing.
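As a hedged sketch of what background semantic indexing means in practice, the example below embeds text snippets once and answers a fuzzy query by vector similarity; the library and model are illustrative choices, not Microsoft's actual stack:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative model choice; any sentence-embedding model works similarly.
model = SentenceTransformer("all-MiniLM-L6-v2")

snippets = [
    "Budget spreadsheet for the Taipei office, Q3 numbers",
    "Photos from the Computex 2024 keynote",
    "Recipe for beef noodle soup",
]
# Index once in the background; queries later are just vector math.
index = model.encode(snippets, normalize_embeddings=True)

query = model.encode(["that slide deck event in Taipei"], normalize_embeddings=True)
scores = index @ query.T   # cosine similarity, since vectors are normalized
print(snippets[int(np.argmax(scores))])
```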
I completely agree.
I think this is really bringing the entire ecosystem together.
And, you know, one of the things, Pavan, that you and I have talked a lot about is the importance of the hardware.
And it's all about how we give you enough power so that you can run these Copilot+ PCs.
So can you talk a little bit about your vision?
With Copilot+ PCs,
we want to make it possible to deliver these new AI experiences by using on-device capabilities, and do that in concert with the cloud.
On-device AI means faster response times, better privacy, and lower cost.
But that means running models that have billions of parameters in them on PC hardware.
Compared to traditional PCs even from just a few years ago,
we're talking 20 times the performance and up to 100 times the efficiency for AI workloads.
And to make that possible,
every Copilot+ PC needs an NPU capable of at least 40 TOPS, and we're deeply grateful for the close partnership with AMD.
We're thrilled that Strix Point's NPU delivers an incredible 50 TOPS.
That is super, super powerful for us.
We wanted to give you more.
We're always ready.
The thing with that,
of course,
is it means we're efficiently delivering these Copilot+ experiences, but it also gives us headroom for the next generation of AI, and we're just at the start.
Of course,
Copilot+ PCs complement that power of the NPU with at least 16 gigabytes of RAM and 256 gigabytes of SSD storage.
So I truly think these devices are built for the era of AI that's coming.
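As a rough back-of-envelope, here is what a TOPS rating implies for always-on, on-device language models; the model size and the ops-per-token rule of thumb are illustrative assumptions, not measurements:

```python
# Illustrative arithmetic only: real throughput depends on memory
# bandwidth, utilization, and quantization, not just peak TOPS.

npu_tops = 40                  # the Copilot+ minimum NPU requirement
params = 3e9                   # hypothetical 3B-parameter on-device model
ops_per_token = 2 * params     # ~1 multiply + 1 add per weight per token

ceiling = npu_tops * 1e12 / ops_per_token
print(f"compute-bound ceiling: ~{ceiling:,.0f} tokens/s")
# Even at a few percent of peak, that leaves room for a responsive,
# always-running assistant alongside other workloads.
```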
You know,
one of the things I can share, Pavan, is, you know, as we talk, you guys are always pushing us to give you more.
You're always pushing us, saying more TOPS, more TOPS.
What are you doing with all those TOPS?
What's your vision for the future?
I do remember those conversations, Lisa.
And by the way, it takes a lot, just so you know.
I can only imagine.
We are deeply excited about those commitments, and quite frankly, the deep collaboration across our teams to go bring that to life.
For us, these breakthrough experiences require running billion-parameter models always-on, on the device.
And that requires high performance NPUs to power them.
And really, thanks to our deep partnership, we've been able to seamlessly cross-compile and execute over 40 on-device models
on AMD NPUs, which is very meaningful for us.
We took advantage of all of the low-level software and hardware capabilities of the AMD silicon here,
so we did not lose any performance nor efficiency.
Also, these high-performance NPUs are really the best way to drive overall PC performance.
Getting to 50 TOPS is a quantum leap for us, for sure.
And it's really much, much more impactful relative to what you could do with just a CPU or GPU alone.
And the other thing that excites me is that these
powerful NPUs then free up the CPUs and the GPUs for the workloads where they shine.
I'm excited to see what developers will do with this going forward.
We are super excited as well.
Thank you, Pavan, for being here, and thank you for leading the industry.
Thank you.
So, in addition to Microsoft, we're also working with all of the leading software developers, including Adobe, Epic Games, SolidWorks, Sony, Zoom,
and many others to accelerate the adoption of AI-enabled PC apps.
And by the end of 2024,
we're on track to have more than 150 ISVs developing for AMD AI platforms across content creation, consumer, gaming, and productivity applications.
Now, to give us a look at some of these upcoming Copilot+ PCs, let me welcome our
next guest, a close partner and good friend, Enrique Lores, HP President and CEO.
Enrique, thank you so much for being here.
Let's talk about what our teams have been working on.
Actually, thank you for having me here and congratulations for all the announcements you have made today.
There's a lot more to come.
Now, look, Enrique, you and I've talked a lot about the intersection of AI and hybrid work in recent months.
What are you seeing in the industry?
I think this is actually what makes
many of the announcements that we're making today very exciting: it's not only about the technology improvements that we're going to see,
which you have explained extremely well.
It's about how they are going to be helping employees and companies to meet their goals.
What we see today is a significant tension between us
as companies that want to continue to improve the productivity of our teams, and our teams that are looking for increased flexibility,
the ability to meet their personal and private goals.
And within that, technology and AI can really help to close the gap, because it can help to improve productivity and at the same time
give the flexibility that our teams are looking for.
And we look at AI PCs as the first instantiation of this change.
They will enable increased productivity.
We're going to talk about some of the new functionalities, and you're going to see how unbelievable they are.
But at the same time, they make sure that employees can deliver on their goals and meet their productivity goals.
Yeah.
And we've talked a lot about how,
you know, in this, you know, hybrid world, you know, people are really wanting all of these different features.
Can you talk a little bit about that?
I think what we have learned during the last few years is that what is really critical for all of us that develop the systems people
are going to be using is that we co-engineer the solution.
It is not anymore about someone developing the software, someone developing the operating system, someone developing the chip, someone the hardware.
We need to understand what experience we are building and deliver that experience together.
And this is something that we started a few years ago,
and the teams have been learning how to make that happen and how to co-engineer these solutions.
And all the products that we have introduced during the last year,
especially, for example, a product we introduced two weeks ago, the Pavilion Aero, show that.
And this is going to be even more important as we show the new AI PCs that we are going to be bringing to market.
Because we have made an effort to integrate the new processors,
the chips, into the solutions that we are going to be launching.
And we are incredibly excited about the new family of products we will be launching in a few weeks.
So, I think you have something to show us.
Is that right?
Okay.
I think so.
And this is actually the new generation.
Since we have done it together, we can show it together.
That's wonderful.
This is the next-generation OmniBook.
We integrate in it
the Ryzen AI 300 series that Lisa was describing before; it will be the first product that will have 50 TOPS integrated in the device.
And performance, as Pavan was saying, is critical, because it will enable us to continue to deliver incredible experiences to our customers.
If you ask me what I'm most excited about, it is something that is going to be very close to many people in the room.
We at HP have a very large team here in Taipei.
And we spend many hours in video conferences and Zoom calls.
And as you can notice, I have a strong Spanish accent.
And I know that for the team here, understanding my accent sometimes is difficult.
So just imagine that you can get real-time translation.
So I will speak my Spanish English.
That's pretty good.
Come on.
That's going to make a big difference in productivity.
By the way, I think your English is pretty good, Enrique, but I understand.
My Chinese also needs to be better, so I totally...
We help each other.
Yes.
Look, we love it.
I mean, we love the OmniBook and all the work that we've done together with the third-gen Ryzen AI processors.
But let's actually give everyone a preview of gen AI running on the OmniBook.
So let me show you again the popular Stable Diffusion XL Turbo model which is generating some high-quality images of locations around Taiwan
based on some simple text prompts.
So starting with the white cliffs of Taroko Gorge National Park, Sun Moon Lake with nice fall colors, Taipei 101.
And then finally, the peak of Jade Mountain. All of this is running on the OmniBook.
You're seeing it for the first time.
And the reason those pictures are so beautiful is because we have a very, very powerful NPU.
We've co-engineered the system together.
We have the Block FP16 data support that I talked about.
And you can see these beautiful photorealistic images almost instantaneously.
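For reference, the same publicly available model can be run with Hugging Face diffusers as sketched below; this is the generic GPU/CPU path, not the co-engineered NPU pipeline shown on stage:

```python
import torch
from diffusers import AutoPipelineForText2Image

# SDXL Turbo is distilled for one-step generation, which is why it feels
# nearly instantaneous even before any NPU-specific optimization.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")  # or .to("cpu"); NPU execution would go through a vendor runtime

image = pipe(
    prompt="the white cliffs of Taroko Gorge National Park, photorealistic",
    num_inference_steps=1,
    guidance_scale=0.0,  # Turbo is trained to run without classifier-free guidance
).images[0]
image.save("taroko.png")
```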
Yes, I just imagine the productivity this is going to provide to product managers, creatives that are going to start creating their designs.
And just with this solution, they will really be able to accelerate that work.
So unbelievable progress.
Thank you so much Enrique, thank you for your partnership.
I can't wait for everything that we're going to bring out together.
Thank you Lisa.
Great to be here.
Thank you.
So I showed you earlier that third-gen Ryzen AI has the most powerful NPU,
but you also need a high-performance CPU and GPU to deliver the best PC experience possible.
So let's take a look at some of that other performance.
When we compare Ryzen AI 300 series to all of the latest X86 and ARM CPUs from our competitors,
you can see why we say Strix is really the best notebook CPU in the market.
Whether you're looking at single threaded responsiveness,
productivity applications,
content creation,
or multitasking,
third-gen Ryzen AI processors deliver significantly more performance,
often beating the competition by a large double-digit percentage across a broad range of use cases.
Now, let's welcome another one of AMD's closest partners in the development of Copilot+ PCs.
Let's welcome Luca Rossi, president of Lenovo Intelligent Devices Group.
Hey, good morning, Lisa.
Wonderful to see you, Luca.
Thank you so much for joining us today.
We so appreciate the partnership, and Lenovo and AMD are doing so much together.
Yeah, so thanks for having me, Lisa. Strong partnerships are core to Lenovo's strategy.
Our long-term partnership with AMD, as you know, Lisa, spans over 25 years and is a testament to this.
Together, I think we have driven incredible innovations across PCs, mobile gaming, servers, tablets, edge computing.
And, for example, our ThinkStation P620 was the first Threadripper Pro workstation, delivering unprecedented performance and flexibility to power AI renderings and workflows.
The Lenovo AI Engine+ software in our Legion gaming laptops uses machine learning and
integrates with AMD Ryzen processors to dynamically adjust settings, tailored for epic gaming experiences.
Our AMD-powered devices,
ThinkPad, ThinkBook, Yoga, Legion, are all well equipped to run AI applications, accelerate video editing, enhance 3D rendering, and take gaming to new heights.
So, Lisa, we are very excited for all the innovation that AMD is introducing, and Lenovo
definitely will be a great channel to deliver it to the global markets.
We can do this because of our global scale,
top-notch engineering and design, and the operational excellence of the world number one.
Thank you.
Thank you so much, Luca.
And, you know, Luca, when we think about all the work we're doing together, you know,
today we're talking about third-gen Ryzen AI devices.
And I know your team has done a lot, our teams have done a lot together.
Can you talk a little bit about your lineup and some of the AI experiences that you have?
Yeah, yeah, with pleasure.
So, later this year,
we are going to launch Lenovo AI laptops with the third-gen Ryzen AI processors: for consumers through
our Yoga franchise, for commercial with our legendary ThinkPad brand, and for small and medium businesses
through our ThinkBook lineup. And no matter if you are a creator, an
enterprise professional, or a startup entrepreneur,
Lenovo will have the perfect
Copilot+ laptop with third-gen Ryzen AI, operating at the industry-leading 50-plus TOPS. Congratulations for that, Lisa.
Thank you.
We'll also have some exclusive Lenovo AI experiences coming to the market this year. One is Creator Zone.
We co-engineered this with AMD, fine-tuning the AI model.
It is exclusive Lenovo software tailor-made for creators, providing tools and features to boost creativity and productivity.
Maybe let's take a look at how this software works.
So first, let me introduce Lenovo AI Now.
That's our natural-language agent that runs locally on the device.
And one of the things that makes it so special is its personal knowledge base.
Lenovo AI Now can interpret user data to provide faster and more personalized output.
You have just seen AI Now going through a script and generating both a thumbnail and a description
to post on YouTube and share with the world.
So see, it was very easy, right?
Now, let's say the user wants to create images of a fish to post on social media.
All they have to do is use Lenovo Creator Zone.
That's our other IP.
With text-to-image,
Creator Zone can generate an image based off the prompt. And if the image needs further refining, the sketch-to-image function can assist the user.
And all of these images were created with the same prompt, locally on the device, with a built-in responsibility check feature.
So we have invested significantly in R&D,
as you know,
Lisa, and we built unique Lenovo IP for running on-device AI workloads, including LLM compression and performance-improvement algorithms.
And we are confident that our third-gen Ryzen AI offering will stand out from the competition.
And last but not least,
and then I'm done, we also have created Smart Connect, another Lenovo IP that unifies AI PCs, tablets, smartphones, and other devices
into the same Lenovo ecosystem.
That's fantastic.
It's just wonderful to see all of these pieces come together.
Now, Luca, you've been holding something, and I think you're gonna show us what it is.
Is that right?
Well, I wasn't supposed to even show this yet, since this is something we will not announce until later this year, but Lisa, you're right.
I felt it's just too exciting, and you all look very happy. So straight
from our R&D lab, this is the first-ever sneak peek at our Yoga laptop, powered by third-gen Ryzen AI.
That's beautiful.
Maybe we want to show it left and then right, yeah.
So, I can't share more for today, but I can tell you this device represents a significant
leap forward in next-generation AI computing.
It will include some of the Lenovo-exclusive AI IP features that I mentioned, and this is just the beginning.
We cannot wait to share more together and bring these transformative AI PCs to the world. Lisa, thank you for having me, and thanks everyone.
Thank you so much, Luca.
Thank you.
You see, we get very excited about our products.
Next, I'd like to welcome one of the most important visionaries and innovators in our industry, ASUS Chairman Jonney Shih. Thank you so much for being here.
Thank you, Lisa.
Yeah, I think it's really my great honor to join you on
stage, especially since you are now a legend of the computing industry and the pride of Taiwan.
Thank you.
Jonney, you are actually a true visionary.
I think we all have so much respect for you.
You've shaped this industry for so many years.
Can you just tell me a little bit about how the landscape of computing is changing?
The AI PC will be one of the most disruptive innovations of our lifetime.
The ubiquitous AI era is the mega paradigm shift
that we have long envisioned at ASUS, and I'm so overjoyed that it's finally becoming a reality.
The world will be full of AI brains that come in different forms and sizes, including super-sized, like the 1.8-trillion-parameter models with MoE,
big, medium, and even tiny, like less than 1 billion parameters.
From the cloud,
to the edge, to PCs, and devices like phones and robots, AI PCs will play an extremely critical role in this new hybrid AI ecosystem.
Imagine an AI PC with small-language-model types of AI brains capable of acting as a personal agent who can understand and help you with your personal needs,
preferences, and even work, while complementing the super brain,
with local advantages: low latency, high security, and personalization.
All the while offloading the cloud computing needs, especially for inferencing.
Isn't it incredible?
This will benefit user productivity like video editing, design work, and scientific computing, and a lot more.
This vision is amplified and made possible because of our partnership with AMD and the launch of the third-gen Ryzen AI.
We are definitely co-innovating at the forefront of the AI PC together.
It is so inspiring, Jonney, to hear you talk with such passion.
I think we all can feel your passion going forward.
So can you tell us a little bit about your new portfolio of AI PCs with third-gen Ryzen AI?
Of course, Lisa.
At 4 p.m.,
we'll be unveiling a range of cutting-edge AI PCs across our portfolio, with brand-new ZenBook,
ProArt, Vivobook, ASUS TUF, and ROG laptops powered by the third-gen AMD Ryzen AI.
These new lineups are equipped with the most powerful NPU
and the superior AMD Zen 5 architecture that leads the industry in compute and AI performance.
The third-gen Ryzen AI processor is the catalyst to bring personalized computing to everyone,
from content creators to gamers and business professionals, and to empower them like never before.
This advancement gives the new ZenBook higher AI performance than the MacBook, while making it thinner and lighter as well.
ASUS is so proud and honored to be the first OEM partner to make the third-gen Ryzen AI systems available to consumers.
It will be ready for purchase
in July.
Isn't it incredible?
Thank you.
These are super beautiful systems, Jonney.
And, you know, it's also about the experiences.
And I know that ASUS has also done a lot to create some new experiences for content creation and creativity.
Can you tell us a little bit about that?
Sure.
We have been working closely with AMD to integrate and
optimize the incredible power of Ryzen AI processors in our ASUS Copilot+ PC lineups.
This enables us to create exclusive and unprecedented apps to empower users to be more efficient and creative than ever before.
A great example is one of our recently launched AI apps called StoryCube.
Content creators who use multiple devices will love it.
StoryCube is an AI-powered digital asset manager
designed to provide you with the simplest and most efficient file-organizing experience.
It can act as a handy assistant by automatically identifying your loved ones'
faces, and even detecting and sorting your media into various scenes, such as road trips, skiing adventures, or puppy moments.
With the 50 TOPS capability of the Ryzen AI NPU,
StoryCube can drastically shorten AI categorization time from the tens of seconds
running only on a CPU to just the blink of an eye.
You know,
Jonney, look, it's really exciting what we're doing on AI PCs.
But AMD and ASUS have also had a very long history of partnering across motherboards, graphics cards, and embedded systems. And as we go forward,
can you comment about some of that?
It has been a great history together, creating incredible products like the original Crosshair motherboard.
ASUS was even the first to push Ryzen gaming systems, which received an incredible response from users.
Last year, ASUS introduced the first ROG Ally handheld devices, which also adopted the AMD Z1 Extreme chip.
We have solid leadership in the AMD Ryzen 9 segment with 60% market share.
And we are excited about expanding our partnership to offer AI solutions for specific industries like
healthcare, education, and smart cities, with the goal of revolutionizing these sectors with powerful on-device AI applications.
That's why we are so excited to work all together in the next-gen AI PC space, by combining cutting-edge AI silicon
like Ryzen processors and ASUS software expertise rooted in our design-thinking philosophy.
We are pushing the boundaries of AI PC innovation and delivering truly groundbreaking AI experiences for users.
Jonney, all I can say is you are an inspiration to us all.
Thank you so much for everything that you've done for the industry, and thank you for your partnership with AMD.
Thank you.
We had a great time.
Thank you very much.
So I hope you got a feel for all of the customer excitement around third-gen Ryzen AI PCs. I'm very happy to say that they
will be available in July,
and we have more than 100 consumer and commercial notebook design wins with Acer,
ASUS, HP, Lenovo, MSI, and others, so lots of things to come.
So, now let's transition from PCs to the edge, where our embedded and adaptive solutions
are bringing AI to a diverse set of markets and devices.
AMD platforms are already broadly deployed at the edge.
In healthcare, AMD chips are improving patient outcomes by enhancing medical imaging analysis, accelerating research, and assisting surgeons with precision robotics.
In automotive, AMD AI solutions are powering the most advanced safety systems.
And in industrial, customers are using AMD technology for AI-assisted robotics and machine vision applications.
We are number one in adaptive computing today,
and thousands of companies have adopted our XDNA AI adaptive and embedded technologies to power their products and services.
Let me just give you a few examples.
Illumina is a global leader in genomics,
and they use EPYC CPUs and AMD adaptive SoCs with their SpliceAI software to identify previously undetectable mutations in patients with rare genetic diseases.
In automotive, Subaru's industry-leading EyeSight ADAS system uses
Versal to analyze every frame captured by the front camera, and that allows them to identify and alert the driver of possible safety hazards.
Hitachi Energy uses AMD adaptive computing products in their widely-deployed,
high-voltage direct current solutions to detect potential electrical issues before they become large problems and cause power outages.
And Canon has adopted Versal to power the AI-based free viewpoint video system that captures high-resolution video from over 100 cameras.
and that allows viewers to experience live events from every angle.
Now, AI at the edge is actually a hard problem.
It requires the ability to do pre-processing, inferencing, and post-processing all on the same device.
And only AMD has all of these pieces needed to accelerate end-to-end AI at the edge.
So we combine adaptive computing engines for pre-processing sensor and other data with AI engines for inferencing,
and then high-performance embedded compute cores for post-processing and decision-making.
Now, today, when you do this, it requires three chips.
And with our new Versal AI Edge Gen 2 series,
we bring all of this leadership compute together to create the first adaptive solution that integrates pre-processing,
inferencing, and post-processing on a single chip.
So today, we're announcing early access for our next-gen Versal platform.
More than 30 of our strategic partners are already developing edge AI devices powered by our new single-chip Versal solution.
And we are incredibly excited about the opportunity
to drive AI at the edge and see significant opportunities to extend our embedded market leadership with these new technologies.
Okay, now let's turn to the data center.
We've built the industry's broadest portfolio of high-performance CPU, GPU, and networking products.
And when you look at modern data centers today, they actually run many different workloads.
They range from traditional IT applications to smaller enterprise LLMs to large-scale AI applications.
And you need different compute engines for each of these workloads.
And AMD has the full portfolio of high-performance CPUs and GPUs to address all of these workloads.
from our EPYC processors that deliver leadership performance on general-purpose and mixed AI inferencing
workloads, to our industry-leading Instinct GPUs that are built for accelerating AI applications at scale.
Today, I'm going to share details of our next-generation data center CPU and GPU offerings.
So let's start first with CPUs.
If you take a look at the CPU
market, EPYC is actually the processor of choice for cloud
computing, powering internal workloads for all of the largest hyperscalers, and more than 900 public instances from all the major cloud providers.
Every day, billions of people around the world use cloud services powered by EPYC.
That includes Facebook, Instagram, LinkedIn, Microsoft Teams, Zoom, Netflix, WeChat, WhatsApp, and many more.
All of that is on EPYC.
Now, we launched EPYC in 2017, and with every generation, more and more customers have adopted EPYC because of our leading performance, our leading energy efficiency, and our leading
total cost of ownership.
And I'm very proud to say that we're at 33% share.
Now, when you look at today's data centers, most data centers are actually powered by processors that are more than five years old.
And when you look at the virtualization performance of our latest-generation server CPUs,
the new technology is so much better that EPYC delivers five times more performance compared to those
legacy processors. And even comparing to the best processors today from the competition, our performance is one and a half times faster.
Many enterprises today are actually looking to modernize their general purpose computing infrastructure and add new AI capabilities, often within the same footprint.
And by refreshing their data centers with fourth-gen EPYC, they can really accomplish this.
You can actually replace five legacy servers with a single server, which reduces rack space by 80% and consumes 65% less power.
Now, many enterprise customers are also wanting to run a combination of general purpose and AI workloads without adding GPUs.
And EPYC is, again, the best option for that.
Looking at EPYC,
we are 1.7 times faster when running the industry-standard TPCx-AI benchmark that measures the end-to-end AI pipeline across different use cases and algorithms.
Fourth-gen EPYC is clearly the industry's best server CPU, but we're always pushing the envelope to deliver more performance.
So I have something to show you today.
It's actually a preview of our upcoming fifth-gen EPYC processor, codenamed Turin.
Let's take a look at Turin for the very first time.
Turin features 192 cores and 384 threads.
And it has 13 different chiplets built in 3-nanometer and 6-nanometer process technology.
There's a lot of technology on Turin.
It supports all the latest memory and IO standards and is a drop-in replacement for existing fourth-gen EPYC platforms.
Thank you, Drew.
Turin will extend EPYC's leadership in general-purpose and high-performance computing workloads.
So let's take a look at some of that performance.
NAMD is a very compute-intensive scientific software package that simulates complex molecular systems and structures.
When simulating a 20-million-atom model, a 128-core version of Turin is more than three times faster than the competition's best,
enabling researchers to more quickly complete models that can lead to breakthroughs in drug research, materials science, and other fields.
Now, Turin also excels at AI inferencing performance when running smaller large language models.
So I want to show you a demo here.
Now, what this demo compares is the performance of Turin when running a typical enterprise
deployment of Llama 2 virtual assistants with a minimum guaranteed latency to ensure a high-quality user experience.
Both servers begin by loading multiple Llama 2 instances, with each assistant being asked to summarize an uploaded document.
Right away,
you can see that the Turin server on the right is adding double the number of sessions in the same amount of time while responding to user requests significantly faster
than the competition.
And while the other server reaches its maximum number of sessions, you'll see it stop soon, because it basically can't support the latency requirement.
Turin continues scaling, at a sustained throughput of nearly four times more tokens per second.
That means when you use Turin, you need less hardware to do the same
work.
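As a hedged sketch of the kind of measurement behind a demo like this, the snippet below times tokens per second for a Llama 2 chat model on a CPU server; the checkpoint (which is gated on the Hub) and the single-stream setup are assumptions, since the demo's own multi-session harness isn't public:

```python
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-chat-hf"   # assumed model; requires Hub access
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)

prompt = "Summarize: AMD previewed its fifth-gen EPYC server CPU, codenamed Turin."
inputs = tok(prompt, return_tensors="pt")

start = time.time()
out = model.generate(**inputs, max_new_tokens=128)
elapsed = time.time() - start

new_tokens = out.shape[1] - inputs.input_ids.shape[1]
print(f"{new_tokens / elapsed:.1f} tokens/s")   # compare across server CPUs
```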
And in addition to leadership summarization performance,
Turin also delivers leadership performance across a number of other enterprise AI use cases,
including two and a half times more performance when translating large documents and more than five times better performance when running a support chatbot.
Our customers are super excited about Turin, and I know many of our partners are actually
here in the audience today, and I want to say we're on track to launch in the second half of this year.
So let's turn to data center GPUs and some big updates on our Instinct accelerators.
We launched MI300 last December, and it's quickly become the fastest-ramping product in AMD history.
Microsoft, Meta, and Oracle have all adopted MI300.
Every major server OEM is offering MI300 platforms.
And we have built deep partnerships with a broad ecosystem of CSP and ODM partners;
again, many, many thanks to our ODM partners who are here today offering Instinct solutions.
Now if you look at today's enterprise generative AI workloads,
MI300X provides out-of-the-box support for all of the most common models, including GPT, Llama, Mistral, Phi, and many more.
We've made so much progress in the last year on our ROCm software stack,
working very closely with the open source community at every layer of the stack while adding new
features and functionality that make it incredibly easy for customers to deploy AMD Instinct in their software environment.
Over the last six months, we've added support for more AMD AI hardware and operating systems.
We've integrated open source libraries like vLLM and frameworks like JAX.
We've enabled support for state-of-the-art attention algorithms.
We've improved computation and communication libraries, all of which have contributed to significant increases in gen AI performance for MI300.
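As a small illustration of that vLLM integration, here is a hedged offline-inference sketch; the model choice is an arbitrary example, and on Instinct hardware the same script runs on a ROCm build of vLLM:

```python
from vllm import LLM, SamplingParams

# vLLM handles continuous batching and paged attention under the hood.
# The checkpoint here is an illustrative choice, not a required one.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.7, max_tokens=64)

outputs = llm.generate(["Why does HBM capacity matter for LLM inference?"], params)
print(outputs[0].outputs[0].text)
```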
Now, with all of these latest ROCm updates, MI300X delivers significantly better inferencing
performance compared to the competition on some of the industry's most demanding and popular models.
That is, we deliver 1.3 times more performance on Meta's latest Llama 3 70B model compared to the H100,
and 1.2 times more performance on Mistral's Mixtral 8x7B model.
We've also expanded our work with the open source AI community.
More than 700,000 Hugging Face models now run out of the box using ROCm on MI300X.
This is a direct result of all of our investments in development and test environments that ensure a broad range of models work on instinct.
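One reason so many Hub models run unmodified is that ROCm builds of PyTorch expose the familiar "cuda" device API, so existing device-placement code just works. A minimal sketch, using an arbitrary small model as the example:

```python
import torch
from transformers import pipeline

# On a ROCm build of PyTorch, these report the AMD GPU without code changes.
print(torch.cuda.is_available())        # True with an Instinct GPU present
print(torch.cuda.get_device_name(0))    # e.g. an AMD Instinct device string

generator = pipeline(
    "text-generation",
    model="facebook/opt-125m",          # arbitrary small Hub model
    device=0,                           # first GPU; no CUDA-specific code
)
print(generator("AI PCs are", max_new_tokens=20)[0]["generated_text"])
```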
The industry is also making significant progress at raising the level of abstraction at which developers code to GPUs.
We want to do this because people want choice, in the end,
and we're really happy to say that we've made significant progress with our partners to enable this.
For example,
our close collaboration with OpenAI is ensuring full support of MI300X with Triton,
providing a vendor-agnostic option to rapidly develop highly performant LLM kernels.
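For a sense of what that vendor-agnostic path looks like, here is a minimal Triton kernel; the same source compiles for supported NVIDIA and AMD GPUs, and the block size and tensors are arbitrary illustrative choices:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    # Each program instance handles one BLOCK-sized slice of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

# On ROCm builds of PyTorch, "cuda" targets AMD GPUs such as MI300X.
x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)
add_kernel[grid](x, y, out, x.numel(), BLOCK=1024)
assert torch.allclose(out, x + y)
```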
And we've also continued to make excellent progress adding support for AMD AI hardware into the leading frameworks like PyTorch
and TensorFlow. Now, we're also working very closely with the leading AI developers to optimize their models for MI300.
So I'm very excited to welcome Christian Laforte,
CTO and Co-CEO of Stability AI, an important AMD partner known for delivering the breakthrough Stable Diffusion open-access AI models.
Hello Christian, how are you?
Thank you.
I'm great.
It's a great honor to be here to represent my colleagues and everyone else who makes Stability AI a really interesting player in the ecosystem.
Well, you know, we've shown a lot today.
You're known for delivering these breakthrough open-access AI models that generate images, video, language, code, all of these things.
Can you share some insights into how these models are pushing the boundaries of what's possible?
Yes.
We're seeing incredible gains in productivity in every industry.
And many were made possible because we did the crazy thing and we released our models and our source code for free.
And so this allowed millions of developers and enthusiasts and thousands of
researchers to adapt their models to make new discoveries at record pace and to create new applications extremely fast.
Take, for instance, touching up old family photos to improve their resolution or quality
or maybe to remove someone you never, ever want to see again in your whole life.
Doing this, well, used to take years of experience and sometimes hours of tedious work for each image.
Now, applications like Stable Assistant and Stable Artisan, and a lot of other applications
that leverage Stable Diffusion can allow anyone to create and edit images in seconds.
And we're seeing similar gains in productivity not just in images,
but in the other research areas that we're involved in: language, coding, music, speech, and 3D.
And all of those together, we aim to soon boost, by at least 10x, the productivity of filmmaking and video game content.
That's fantastic, Christian.
Now, I understand you have some big news to tell the audience today.
Yes.
So, basically, the wait for Stable Diffusion 3 is almost over.
We appreciate the community's patience and understanding as we dedicated extra effort to improve its quality and safety.
Today we're announcing that on June 12th,
we will release the Stable Diffusion 3 Medium model for everyone to download. A lot of work went into this, and we're really excited to see what the community will end up doing with it.
One thing that is maybe not obvious to non-technical people is that it used to be that the frontier of research
was standalone models like Stable Diffusion.
But nowadays what's happening is a natural evolution:
these models are getting combined together in all kinds of novel ways, and by releasing them openly we enable lots
of people to help discover the best way to bring those together and unlock new use cases.
So this SD3 Medium,
it's an optimized version of SD3 that achieves unprecedented visual quality and that the community will be able to improve for their own specific needs, to help us
discover the next frontier
of generative AI.
It will, of course, run super fast on the MI300.
And it's also compact enough to run on the Ryzen AI laptops that you've just announced.
So, here's an image produced with Stable Diffusion 3; we challenged it to illustrate what the famous Taiwan night markets look like.
That's very nice.
It looks very nice.
So, if you look really, really closely, you'll see it's not quite photorealistic, but
I think it captured really well the different elements of the text prompt, and it's an especially long text prompt.
So it captured the pedestrians that are walking,
the street that is made of stones, the fact that it is during the night, the trees, and so on.
So, basically, SD3 is able to do this thanks to several improvements,
including the multimodal diffusion transformer architecture that allows it to understand visual concepts and text prompts far better than previous models.
It supports both simple prompts,
so you don't need to become an expert at these,
but you can also use much more complex ones, and it will try to bring together all of the different
elements of it.
And SD3 excels at all kinds of artistic styles and photorealism.
So here's an example that is actually a really challenging one,
comparing it with our previous-generation model, Stable Diffusion XL, that we released less than a year ago.
It's especially challenging because it involves hands, which are notoriously hard for these models to replicate.
It involves repeating patterns like the strings on the guitar and the frets.
And these are really challenging for these models to understand and draw accurately.
So notice how SD3 generated more realistic details, like the shape of the guitar and the hands.
And if you look really, really closely, you notice that there are a few imperfections here and there.
So it's not quite perfect, but a big improvement over the previous generation.
Now, that's fantastic, Christian.
And, you know, I know that your team's been working a lot on SD3.
What's your experience been like with MI300?
It's wonderful.
192 gigabytes of HBM, that's really a game changer.
Having more memory basically unlocks new models,
and it's often the number one factor that will help us train bigger models faster
and more efficiently.
And I'll give an example that we've actually just encountered in collaborating with AMD.
So we have this creative upscaler feature in our API.
And the way it works is
that it can take an old photo or an old image that
is less than one megapixel and really blow up the resolution and improve the quality at the same time.
And so with this creative upscaler,
we were happy when we were able to reach 30 megapixels on the H100.
But once we ported our code over to the MI300,
which by the way took pretty much no effort, we were able to reach 100 megapixels.
And you know, content creators, they want more pixels.
So it makes a huge difference.
And the fact that we didn't have to really make any effort to actually achieve this, it's a big step up.
So researchers and engineers are really going to love the incredible memory capacity and
the bandwidth advantages that the AMD Instinct GPUs deliver out of the box.
So Lisa,
moving forward, we'd really love to collaborate more closely with AMD because we'd like to create a new state-of-the-art video model.
We need a lot more memory,
we need a lot more compute to do this,
and so we'd love to collaborate more closely with your team to achieve this and to release this for the whole world to enjoy.
That sounds fantastic.
It sounds like you need some GPUs.
Yes.
Thank you.
You're in the right place for that.
Thank you so much, Christian.
Have a great day.
You can see all of the innovation that's happening in such a short amount of time.
Now, earlier in the show, I was joined by Microsoft's Pavan Davuluri, who shared the
great work that we're doing together on Copilot+ PCs.
Microsoft is also one of our most strategic data center
partners, and we've been working very closely with them on our EPYC and Instinct roadmaps. To hear more about our partnership
and how Microsoft is using MI300X across their infrastructure,
here's Microsoft Chairman and CEO Satya Nadella. We're in the midst of a massive AI platform shift, with the promise to transform how we
live and work.
We have committed to partnering broadly across the industry to make this vision real.
That's why our deep partnership with AMD,
which has spanned multiple computing platforms from the PC to custom silicon for Xbox, and now to AI, is so important to us.
As Pavan highlighted, we are excited to partner with you to deliver these new Ryzen AI-powered Copilot+ PCs.
And we were also thrilled to announce last month that we were the first cloud to deliver general availability of virtual machines using AMD's MI300X accelerator.
It's a massive milestone for both our companies.
And it gives our customers access to very impressive performance and efficiency for their most demanding workloads.
In fact, it offers today the leading price performance for GPT-4 workloads.
This is just the start.
We are very committed to our collaboration with AMD,
and we'll continue to push AI progress forward together across the cloud and edge to bring new value to our joint customers.
Thank you all so very much.
Thank you so much, Satya.
We're so proud of our work with Microsoft.
And as you heard from Satya,
MI300 delivers the best price performance today for GPT-4 workloads and is being deployed broadly across Microsoft's AI compute infrastructure.
Now, let me show you one more example of MI300 being used to power OpenAI's Wanderlust travel assistant, built on GPT-4.
So again, let's start by letting the tool know that we're interested in Taiwan and that we're going to be attending Computex.
And you might ask something about, you know, who's the opening keynote?
Yeah.
But...
Got it right.
Now, let's also ask Wanderlust, what other interesting sites should we see in Taipei?
And you can see, almost instantly, we get lots of options of things to do near the convention center.
But if we want to narrow it down to just a few,
because we only have a day,
we can ask it to plan a day for us,
and that day would include things like Elephant Mountain and Taipei 101, and you get the full itinerary.
I think it just gives you an example of the power of AI.
Wanderlust on MI300 looks wonderful,
but it really shows the power of these assistive agents and how easy it is for developers to seamlessly integrate gen AI models into
their applications, so that we can make AI work
for all of us.
Now, the customer response to MI300 has been overwhelmingly positive, and it's just so clear that the demand for AI is accelerating going forward.
We're really just at the beginning of a decade-long mega-cycle for AI, and to address this incredible demand,
I have a very exciting roadmap
to show you.
We launched MI300X last year with leadership inference performance, memory size, and capabilities.
And we have now expanded our roadmap, so it's on an annual cadence.
That means a new product family every year.
Later this year,
we plan to launch the MI325X with more and faster memory, followed by our MI350 series in 2025 that will use our new CDNA4 architecture.
And both the MI325 and 350 series will leverage the same industry standard universal baseboard OCP server design used by MI300,
and what that means is that our customers can very quickly adopt this new technology.
And then in 2026, we'll deliver another brand-new architecture, CDNA Next, in the MI400 series.
So let me show you a little bit more, starting with MI325X.
MI325X extends our leadership in generative AI with up to 288 gigabytes of ultra-fast HBM3E memory and six terabytes per second of memory bandwidth, and it uses the same infrastructure as MI300, which makes it easy for customers to transition.
Now, let me show you some competitive comparisons. Against the competition, MI325X offers twice the memory, 1.3 times faster memory bandwidth, and 1.3 times more peak compute performance.
And this larger memory capacity matters; you heard what Christian said about the importance of memory. A single server with eight MI325X accelerators can run advanced models of up to one trillion parameters. That's double the size supported by an H200 server.
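That one-trillion-parameter figure lines up with a quick back-of-the-envelope check, sketched below. The sketch assumes FP16 weights at two bytes per parameter and ignores activations and KV cache; those assumptions are mine for illustration, not AMD's sizing methodology.

# Rough memory check for the one-trillion-parameter claim. Assumes
# FP16 weights (2 bytes per parameter); activations, KV cache, and
# framework overhead are ignored in this illustrative sketch.
GB = 10**9
params = 1_000_000_000_000            # one trillion parameters
weights_gb = params * 2 / GB          # about 2,000 GB of weights

server_hbm_gb = 8 * 288               # eight MI325X accelerators: 2,304 GB
print(f"weights ~{weights_gb:.0f} GB vs {server_hbm_gb} GB of HBM3E")
# -> weights ~2000 GB vs 2304 GB of HBM3E: the weights fit in one server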
And then moving into 2025, we'll introduce our CDNA4 architecture, which will deliver the biggest generational leap in AI performance.
The MI350 series will be built with advanced three-nanometer process technology, will add support for FP4 and FP6 data types, and will again drop into the same infrastructure as MI300 and MI325.
We are super excited about the AI performance of CDNA4.
So if you just take a look at the history: when we launched CDNA3, we delivered eight times more AI performance compared to our prior generation. And with CDNA4, we're on track to deliver a 35-times increase. That's a 35-times increase in performance compared to CDNA3.
And when you compare the MI350 series to B200, Instinct supports up to 1.5 times more memory and delivers 1.2 times more performance.
Overall, we are very excited about our multi-year Instinct and ROCm roadmaps.
And I can't wait to bring all of this new performance to our AI customers.
Now, I have one more topic I'd like to talk about today. In addition to our focus on Instinct, we've also made significant progress driving the development of high-performance networking for AI infrastructure.
AI network fabrics need to support fast switching rates with very low latency, and they must scale to connect thousands of accelerator nodes.
At AMD,
we believe that the future of AI networking must be open,
open to allow everyone in the industry to innovate and drive the best solutions together.
So for both inferencing and training, it's actually critical to scale up the performance of hundreds of accelerators, connecting the GPUs in a rack or pod with an incredibly fast, highly resilient interconnect so they can work as a single compute node to run the largest models with the fastest responses.
Last week, I'm very happy to say, many of the largest chip, cloud, and systems companies came together to announce plans to develop an open standard for a high-performance fabric that can efficiently connect hundreds of accelerators.
We call this Ultra Accelerator Link, or UALink.
It's an optimized load-store fabric designed to run at high data rates, and it leverages AMD's proven Infinity Fabric technology.
We actually believe UALink will be the best solution for scaling accelerators of all types, not just GPUs, and it will be a great alternative to proprietary options.
The UALink 1.0 standard is on track for later this year, with chips supporting UALink already well into development from multiple vendors.
Now, the other part of training large models is the need for scale-out performance, connecting multiple accelerator pods to work together in at-scale installations, often spanning hundreds of thousands of GPUs.
A broad group of industry leaders formed the Ultra Ethernet Consortium last year to address this.
Ultra Ethernet is a high-performance technology with leading signaling rates.
It has extensions such as RoCE for RDMA to efficiently move data between nodes,
and it has a new set of innovations developed specifically for AI supercomputers.
It's incredibly scalable.
It offers the latest switching technology from leading vendors such as Broadcom and Cisco, and above all, it's open.
Open means that as an industry, we can innovate on top of UEC to solve the problems that need solving, and the industry can work together to build out the best possible high-performance interconnect for AI and HPC.
So, when you look ahead, what does this mean? It means we have all the pieces. With UALink and Ultra Ethernet, we have the complete networking solution for highly performant, highly interoperable, and resilient AI data centers that can run the most advanced workloads.
So I hope you can see now that AMD is the only company that can deliver the full set of CPU, GPU, and networking solutions to address all of the needs of the AI data center. And we have accelerated our roadmaps to deliver even more innovation across both our Instinct and EPYC portfolios, while also working with an open ecosystem of other leaders to deliver industry-leading networking solutions.
Now, it's been a wonderful morning.
We have so much that we talked about, so let me just wrap things up.
We showed you a lot of new products today, from our latest Ryzen 9000 desktop and third-gen Ryzen AI notebook processors with leadership compute and AI performance, to our single-chip Versal Gen 2 series that will bring more AI capabilities to the edge, to our next-generation Turin processors that extend the leadership and efficiency of our EPYC portfolio, and our expanded set of Instinct accelerators that will deliver an annual cadence of higher performance.
And what I can say is this is an incredible time to be in the technology industry. It's an incredible pace of innovation, and I couldn't be more excited about all of the work that we're going to do together in high-performance and AI computing as an industry.
So a very, very special thank you to all of our partners, and especially to all of our partners here in Taiwan and around the world.
Thank you for being such a great audience and have a great Computex 2024.
Thank you.
