AMD at Computex 2024: AMD AI and High-Performance Computing with Dr. Lisa Su
We will now proceed to the AMD opening keynote speech.
TCA Chairman James Huang will help us introduce our keynote speaker.
Please again welcome James to the stage.
Now the moment you have been waiting for.
As Chair and CEO of AMD, Dr. Lisa Su has led the company's transformation into a powerhouse of high-performance computing.
Under her visionary leadership, AMD has achieved remarkable success.
Dr. Su was recently named the 2024 Chief Executive of the Year by Chief Executive magazine,
recognizing her role in one of the most spectacular achievements in the technology sector.
Dr. Su's influence extends beyond AMD.
She has been a key advocate for the integration of AI across industries, emphasizing its transformative power.
Her commitment to innovation and collaboration is evident in her leadership,
which focuses on the development of cutting-edge solutions while fostering an inclusive and forward-looking company culture.
Now, on behalf of Computex,
I am very pleased to welcome our old friend, Dr. Lisa Su, but we are going to share a video from AMD first.
AMD makes the limitless potential of AI possible,
from AI PCs to edge to cloud, powered by some of the most advanced GPUs, CPUs, and AI engines on the planet,
and by an open software approach that's accessible to all.
Together with our partners, AI from AMD helps make imagination, innovation, breakthroughs, and healing possible.
Peace of mind and thrills possible.
The impossible is now possible.
Ladies and gentlemen, please join me in welcoming Dr. Lisa Su, Chair and CEO of AMD.
Thank you. Thank you so much.
Good morning. Taiwan, hello everyone!
Thank you, James, for that very, very warm introduction, and to everyone joining us today in Taipei and from around the world as we open Computex 2024.
Every year,
Computex is such an important event for our industry as we bring together all members of the ecosystem to share new products,
to talk about new innovations, and really discuss the future of technology.
But this year is even more special with the rapid innovation around AI and all of the new technology everywhere.
It is actually the biggest and the most important Computex ever, and I'm so honored to be here to open the show.
Now, we have a lot of new products and news.
Let's just go ahead and get started.
Now at AMD, we're all about pushing the envelope in high performance and adaptive computing to help solve the world's most important challenges.
From cloud and enterprise data centers,
to 5G networks, to healthcare, industrial, automotive, PCs, and gaming, AMD is everywhere, powering the lives of billions of people every day.
AI is our number one priority,
and we're at the beginning of an incredibly exciting time for the industry as AI transforms virtually every business,
improves our quality of life, and touches every part of the computing market.
AMD is uniquely positioned to power the end-to-end infrastructure that will define the AI computing era.
From massive cloud servers and enterprise clusters to the next generation of AI-enabled intelligent embedded devices.
Now, to deliver all of these leadership AI solutions, we're focused on three priorities.
First, it's delivering a broad portfolio of high-performance, energy-efficient compute
engines for AI training and inference, including CPUs, GPUs, and NPUs.
Second, it's about enabling an open, proven, and developer-friendly ecosystem that really
ensures that all of the leading AI frameworks, libraries, and models are fully enabled on AMD hardware.
And third, it's about partnership.
It's really about co-innovating with our partners,
including the largest cloud OEM software and AI companies in the world, as we work together to bring the best AI solutions to the market.
Now today,
we're going to talk about a lot of new technologies and products,
including our brand new Zen5 core, which is the highest performance and most energy efficient core we've ever built.
And our next-generation XDNA 2 NPU core that enables leadership performance and capabilities for AI PCs.
And we're also going to be joined by a number of our
partners as we launch our new Ryzen Notebook and desktop processors and preview our data center CPU and GPU portfolio for this exciting AI world.
So let's go ahead and get started with gaming PCs.
Now, at AMD, we love gaming.
Hundreds of millions of gamers everywhere use AMD technology,
from the latest Sony and Microsoft consoles to the highest-end gaming PCs to new handheld devices like the Steam Deck
and Legion Go. But today, I'm excited to show you what's next for PC gaming with Ryzen.
Our new Ryzen 9000 CPUs are the world's fastest consumer PC processors,
bringing our new Zen 5 core to the AM5 platform with support for the latest I/O and memory technologies, including PCIe 5.0 and DDR5.
I'm happy to show you now our brand new Zen 5 Core.
Zen 5 is actually the next big step in high-performance CPUs.
It's a ground-up design that's extremely high performance and also incredibly energy efficient.
You're going to see Zen 5 everywhere, from data centers to PCs.
And when you look at the technology behind this, there is so much new technology.
We have a new parallel dual-pipeline front end.
What this does is improve branch prediction accuracy and reduce latency.
It also enables us to deliver much more performance for every clock.
We also designed Zen 5 with a wider CPU engine and instruction window to run more instructions in parallel for leadership compute throughput and efficiency.
As a result,
compared to Zen 4, we get double the instruction bandwidth, double the data bandwidth, and double the AI performance with full AVX-512 throughput.
All of this comes together in the Ryzen 9000 series,
and we're delivering an average of 16% higher IPC across a broad range of application benchmarks and games compared to Zen 4.
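For a rough sense of what an IPC claim like this means: at a fixed clock, delivered performance scales roughly as IPC times frequency. A toy calculation, with illustrative numbers only (not AMD's benchmark methodology):

```python
# Hypothetical model for illustration: performance ~ IPC x clock frequency.
def relative_performance(ipc_uplift: float, clock_uplift: float = 0.0) -> float:
    """Combined speedup from a fractional IPC gain and a fractional clock gain."""
    return (1.0 + ipc_uplift) * (1.0 + clock_uplift)

# A 16% average IPC gain alone, at the same clock, is a 1.16x speedup:
print(f"{relative_performance(0.16):.2f}x")
# If a part also clocked 5% higher, the gains would multiply:
print(f"{relative_performance(0.16, 0.05):.3f}x")
```

The multiplication (rather than addition) is why even modest IPC and clock gains compound into a noticeably faster generation.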
So now, let me show you the top of the line, Ryzen 9 9950X for the very first time.
Here you go.
We have 16 Zen 5 cores, 32 threads, up to 5.7 gigahertz boost, a large 80-megabyte cache, and a 170-watt TDP.
This is the fastest consumer CPU in the world.
Okay, so let's take a look at some of the performance.
So when you compare it to the competition,
the 9950X delivers significantly more compute performance across a broad suite of benchmarks.
In some of them, like Blender, that take advantage of the AVX-512 instruction throughput,
we're actually seeing up to 56% faster performance than the competition.
And in 1080p gaming,
and we know all of our fans love gaming,
the 9950X delivers best-in-class gaming performance across a wide range of popular titles.
Now, with desktops, we know that enthusiasts want an infrastructure that lets you upgrade across multiple product generations, and with Ryzen, we've done just that.
Our original Ryzen platform,
Socket AM4,
launched in 2016, and now we're approaching its ninth year, and we have 145 CPUs and APUs across 11 different product families in Socket AM4.
And we're actually still launching new products.
We have a few Ryzen 5000 CPUs that are coming next month.
And we're taking this exact same strategy with Socket AM5, which we now plan on supporting through 2027 and beyond.
So you're going to see AM5 processors from us for many, many years to come.
In addition to the top-of-the-stack Ryzen 9 9950X,
we're also announcing 12-, 8-, and 6-core versions that will bring the leadership performance of Zen 5 to mainstream price points,
and all of these go on sale in July.
So, now let's shift gears from desktops to laptops, and there's going to be a lot of discussion about laptops at Computex this year.
AMD has been actually leading the transition to AI PCs since we introduced our first generation of Ryzen AI in January last year.
Now, AI is actually revolutionizing the way we interact with PCs,
enabling more intelligent experiences that will make the PC an even more essential part of our daily lives.
AI PCs enable many new experiences that were simply not possible before.
These are things like real-time translations that will allow us to collaborate in new ways,
things like generative AI capabilities that accelerate content creation,
and also we each want our own customized digital assistant that really will help us decide what we need
to do and what we should do next.
To enable all of this, we actually need much, much better AI hardware.
And that's why we're so excited to announce today our third-generation Ryzen AI processors.
Our new Ryzen AI series delivers a significant increase in compute and AI performance and sets the bar for what a Copilot+ PC should do.
Thank you, Drew.
Here we go. This is Strix.
Strix is our next-generation processor for ultra-thin and premium notebooks, and it combines our new Zen 5 CPU,
faster RDNA 3.5 graphics, and the new XDNA 2 NPU.
And when you look at what we have, it really is all of the best technology on one chip.
We have a new NPU that delivers an industry-leading 50 TOPS.
We're going to talk about TOPS a lot today: 50 TOPS of compute that can power new AI experiences at very low power.
We have our new Zen 5 core that enables all the compute performance for ultra-thin notebooks.
And we have faster RDNA 3.5 graphics that really brings best-in-class application acceleration, as well as console-level gaming, to notebooks.
Now, we have a couple of SKUs.
The Ryzen AI 9 HX 370 has 12 Zen 5 cores, 24 threads, 36 megabytes of cache, the most integrated NPU, and our latest RDNA graphics.
Strix is simply the best mobile system.
So let me talk a little bit about what's special in this new NPU.
NPUs are relatively new, and they're really there for all of these AI applications and workloads.
Now, compared to our prior generation, XDNA2 features a large array of 32 AI tiles with double the multitasking performance.
It's also an extremely efficient architecture that delivers up to two times better energy efficiency than our prior generation when running gen AI workloads.
And if you look at the performance of Strix
compared to other chips in the market,
and there are a lot of chips coming out with new NPUs,
XDNA 2 delivers the highest performance: a leadership 50 TOPS of INT8 AI performance.
And what this means is that third-gen Ryzen AI will deliver the best NPU-powered experiences in a Copilot+ PC.
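As a sanity check on what a TOPS figure means: INT8 TOPS is conventionally quoted as MACs per cycle, times two operations per MAC (a multiply and an add), times clock rate. The MAC count and clock below are made-up illustrative values, not the actual XDNA 2 configuration:

```python
def tops_int8(macs_per_cycle: int, clock_ghz: float) -> float:
    """INT8 TOPS = MACs/cycle x 2 ops per MAC (multiply + add) x clock rate."""
    return macs_per_cycle * 2 * clock_ghz * 1e9 / 1e12

# Illustrative only: e.g., 12,800 INT8 MACs per cycle at 2.0 GHz:
print(round(tops_int8(12_800, 2.0), 1))  # 51.2 (TOPS)
```

The same formula explains why vendors quote INT8 rather than FP16 TOPS: halving the data width typically doubles the MACs per cycle, and therefore the headline number.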
But let me just go a little bit deeper so you understand the technology.
You see, every NPU is actually not the same when it comes to generative AI capabilities.
Different NPUs actually support different data types.
And that says something about the accuracy and the performance of the devices.
So, for generative AI, 16-bit floating-point data types are great for accuracy, but they actually sacrifice performance.
And the current standard for NPUs is actually 8-bit integer data types.
They prioritize performance, but they sacrifice accuracy.
And what this means is that developers really have a tough choice to make between offering
either a more accurate solution or a more performant solution.
Now, XDNA 2 is the first NPU to support block FP16.
And what that means is block FP16 actually combines the accuracy of 16-bit data with the performance of 8-bit data.
This represents a huge leap forward
and enables developers to run complex models natively, without any quantization steps, at full speed, and with no compromise.
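The tradeoff being described can be sketched in a few lines of NumPy. This is a toy model of block floating point versus per-tensor INT8, with a made-up block size and power-of-two scale rule; it is not the actual XDNA 2 block FP16 encoding, which isn't detailed in the keynote:

```python
import numpy as np

def quantize_per_tensor_int8(x):
    """Plain INT8 with one scale for the whole tensor (the common NPU path)."""
    scale = np.max(np.abs(x)) / 127.0
    return np.clip(np.round(x / scale), -127, 127) * scale

def quantize_block_fp(x, block=8, mant_bits=8):
    """Toy block floating point: each block of values shares one exponent,
    so every element stores only a short integer mantissa, while the shared
    exponent lets each block keep its own dynamic range."""
    out = np.empty_like(x)
    qmax = 2 ** (mant_bits - 1) - 1  # 127 for 8-bit mantissas
    for i in range(0, len(x), block):
        b = x[i:i + block]
        # power-of-two scale chosen so the block's largest value fits in qmax
        scale = 2.0 ** np.ceil(np.log2(np.max(np.abs(b)) / qmax))
        out[i:i + block] = np.clip(np.round(b / scale), -qmax, qmax) * scale
    return out

# One block of large activations next to one block of small ones:
x = np.array([90.0, -75.0, 60.0, 50.0, -40.0, 30.0, 20.0, 10.0,
              0.010, -0.008, 0.006, 0.005, -0.004, 0.003, 0.002, 0.001])
print(quantize_per_tensor_int8(x)[8:])  # small block collapses to zeros
print(quantize_block_fp(x)[8:])         # block scaling preserves it
```

Under one global INT8 scale, the entire small block rounds to zero; with a per-block exponent, the same 8-bit mantissa budget recovers those values to within a couple of percent, which is the accuracy-at-INT8-speed argument in a nutshell.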
And so let me show you what this looks like.
When you look at this example, these are three images generated by the popular Stable Diffusion XL Turbo model.
We used the same prompt with no quantization or retraining for all three.
And the only difference is actually the data type.
So INT8 is on the left, which is what most NPUs are using.
Block FP16 is in the middle, which is what XDNA 2 has.
And FP16 is on the right, which is the more traditional format.
And as you can see, the two FP16 images look much better, with no real differences between the two.
And it is only because our NPU supports block FP16 that Ryzen AI is capable of generating the significantly better images in the same time that it takes to generate the INT8 image.
And this is just an example of why we believe that NPUs with the right data types are the best for the next generation PCs.
And that is why we believe that XDNA 2 is the best NPU in the industry.
Now, Microsoft is a great partner, really leading the AI era, and we've been working
very, very closely with them to bring Copilot+ PCs to market with Strix.
So to hear more about the work we're doing together,
I'd like to welcome Pavan Davuluri, Corporate Vice President, Windows Devices at Microsoft, to the stage.
You know, I know it's been a super busy time for you guys.
So what's going on? Can you tell us a little bit about what's been happening?
Thank you, Lisa, for having us here today.
It is an honor to be here at Computex with all of you.
It has been a really busy couple of weeks for us here at Microsoft.
We announced a new category of PCs built for AI, the Copilot+ PCs.
To realize the full power of AI in a PC, we re-engineered the entire system, from the chip through every layer of Windows.
These are the most performant, most intelligent PCs, and we are thrilled to partner with AMD on Strix-based Copilot+ PCs.
Thank you.
We are too.
Lisa, I truly believe we're at an inflection point here, where AI is making computing radically more intelligent and personal,
and we've collaborated with AMD since day one on this.
I'm very excited about that.
There's no question.
Microsoft and you and your team have really led, you know, this AI PC era.
We've always talked about user experiences.
Can you talk a little bit about, you know, what you were thinking with Copilot+ PCs and this integration between operating system, hardware, and software?
Sure, absolutely.
The first thing I think customers will see with Copilot+ PCs is that they're just simply outstanding PCs.
These devices will have leading performance and best-in-class battery life, and every app will work great on these machines.
Now, for those next-generation AI experiences, how about we take a look, Lisa?
Sounds good.
So what you just saw in those devices and those experiences is on-device AI that is powerful enough to keep up with all of the experiences we want and efficient enough to be always running.
For example,
you just saw Recall, which helps users instantly find anything on their PC, and that's only
possible because we can semantically index content in the background, which requires always-on, high-performance AI.
Cocreator lets you generate high-quality images by drawing in Paint, and we can do that with fast on-device image generation.
We have Live Captions,
which will translate your audio in real time on the PC,
switching between languages. But I truly think what you just saw, Lisa, is just the beginning.
We built this thing called the Windows Copilot Runtime, which is effectively our library of APIs that lets developers access new AI capabilities built into Windows.
And those capabilities are also backed by Microsoft's commitments around responsible AI.
And I truly think we are going to be blown away by what partners build, in addition to what Microsoft is bringing.
I completely agree.
I think this is really bringing the entire ecosystem together.
And, you know, one of the things that you and I have talked a lot about is the importance of the hardware,
and it's all about how we give you enough power so that you can run these Copilot+ PCs.
So can you talk a little bit about your vision there?
Sure, absolutely.
With Copilot+ PCs, we want to make it possible to deliver these next-gen AI experiences by using on-device
capabilities, and to do that in concert with the cloud.
On-device AI, really for us, means faster response times, better privacy, lower costs.
But that means running models that have billions of parameters in them on PC hardware.
Compared to traditional PCs even from just a few years ago,
we're talking 20 times the performance and up to 100 times the efficiency for AI workloads.
And to make that possible, every Copilot+ PC needs an NPU that's capable of at least 40 TOPS.
And we're deeply grateful for the close partnership with AMD.
We're thrilled that Strix Point's NPU delivers an incredible 50 TOPS.
That is super, super powerful for us.
We're always ready.
The thing with that,
of course,
is that it means we're efficiently delivering these Copilot+ experiences,
but it also gives us headroom for the next generation of AI, and we're at the start of that runway.
Of course,
Copilot+ PCs complement the power of the NPU with at least 16 gigabytes of RAM and 256 gigabytes of storage.
So I truly think these devices are built for the era of AI that's coming.
You know, one of the things I can share is that as we talk, you guys are always pushing us to give you more.
You're always saying, more TOPS, Lisa, more TOPS.
What are you doing with all those TOPS?
What's your vision for the future?
I do remember those conversations, Lisa.
It takes a lot of daring, just so you know.
I can only imagine.
We're deeply excited about those commitments, and frankly, about the deep collaboration across our teams to bring that to life.
For us, these breakthrough experiences require us to run these billion-parameter models
always on the device.
And that requires high-performance NPUs to power them.
And really,
thanks to our deep partnership,
we've been able to seamlessly cross-compile and execute over 40 on-device models on these AMD NPUs, which is very meaningful for us.
We took advantage of all of the low-level software and hardware capabilities of the AMD silicon
so we do not lose any performance or efficiency.
Also, these high performance NPUs are really the best way to drive overall PC performance.
Getting to 50 tops, for example, is a quantum leap for us, for sure.
And it's really much, much more impactful relative to what you could do with just a CPU or a GPU alone.
The other thing that excites me is that these powerful NPUs then free up the CPUs and the GPUs for workloads where they shine.
So I'm excited to see what developers will do with this going forward.
We are super excited as well.
Thank you, Pavan, for being here.
Thank you for your partnership.
And thank you for leading the industry.
Thank you.
So, in addition to Microsoft, we're also working with all of the leading software developers,
including Adobe, Epic Games, SolidWorks, Sony, Zoom, and many others to accelerate the adoption of AI-enabled PC apps.
And by the end of 2024, we're on track to
have more than 150 ISVs developing for AMD AI platforms across content creation, consumer, and gaming applications.
To give us a look at some of these upcoming co-pilot plus PCs,
let me welcome our next guest, a very close partner and good friend, Enrique Lores, HP President and CEO.
Enrique, thank you so much for being here.
It's always fun to talk about what's next and what our teams have been working on.
Actually, thank you for having me here, and congratulations on all the announcements.
There's more to come.
Now, look, Enrique, you and I've talked a lot about the intersection of AI and hybrid work in recent months.
What are you seeing in the industry?
I think this is actually what makes many of the announcements that we're making today very exciting.
It's not only about the technology improvements that we're going to see,
which you have explained extremely well; it's about how they are going to help employees and companies meet their goals.
What we see today is a significant tension between all of us as companies, who want to continue to improve the productivity of
our teams, and our teams, who are looking for increased
flexibility and the ability to meet their personal and private goals. And within that,
technology and AI can really help to bridge that gap, because it can help to improve productivity and at
the same time give the flexibility that our teams are looking for.
And we look at AI PCs as the first instance of this change.
They will enable increased productivity.
We're going to talk about some of the new functionalities, and you're going to see how unbelievable they are.
But at the same time, they make sure that employees can deliver on their goals and meet their productivity goals.
Yeah, and we've talked a lot about how in this hybrid world, people are really wanting all of these different features.
Can you talk a little bit about that?
I think what we have learned during the last years
is that what is really critical for all of
us who develop the systems that our teams are going to be using is that we co-engineer the solution.
It is no longer about one company working on the operating system,
another on the chips, another on the hardware; we need to understand what experience we are building and deliver that experience together.
And this is something that we started a few years ago,
and the teams have been learning how to make that happen and how to co-engineer these solutions.
And all the products that we have introduced during the last year, especially, for example, the family we introduced two weeks ago, show that.
And this is going to be even more important as we show the new AI PCs that we are going to be bringing to market,
because we have made an effort to integrate the new processors, the chips, into the solutions that we are going to be bringing together.
And we are incredibly excited about the new family of products we will be launching in a few weeks.
So I think you have something to show us, is that right?
Okay. I think so.
And since this is actually the new generation that we have done together, we can show it together.
That's wonderful.
This is the next-generation OmniBook.
Where we integrate the latest Ryzen AI 300 series
that, as Lisa was saying before, will be the first product to have 50 TOPS integrated in the device.
And performance, as Pavan was saying, is critical, because it will enable us to continue to deliver incredible experiences to our customers.
If you ask me what I'm most excited about,
it is something that is going to be very close to many people in the room.
We at HP have a very large team here in Taipei,
and we spend many hours in video conferences.
And as you can notice, I have a strong Spanish accent,
and I know that for the team here, understanding my accent is sometimes difficult.
So just imagine that you can get real-time translation, so I can speak my Spanish English.
That's pretty good.
Come on.
That's going to make a big difference in productivity.
No, I think, by the way, your English is pretty good, Enrique, but I understand.
My Chinese also needs to be...
Look, we love it.
I mean,
we love the OmniBook and all the work that we've done together with the third-gen Ryzen AI processors,
but let's actually give everyone a preview of gen AI running on the OmniBook.
So let me show you again the popular Stable Diffusion XL Turbo model,
which is generating some high-quality images of locations around Taiwan based on some simple text prompts.
So starting with the white cliffs of Taroko Gorge National Park,
Sun Moon Lake with nice fall colors, Taipei 101, and then finally the peak of Jade Mountain.
And all of this is running on the OmniBook.
You're seeing it for the first time.
And the reason those pictures are so beautiful is because we have a very, very powerful NPU.
We've co-engineered the system together.
We have the Block FP16 data support that I talked about.
And you can see these beautiful photorealistic images almost instantaneously.
Yes, and just imagine the productivity this is gonna provide to product managers and creatives who are gonna start creating their designs.
With this solution, they will really be able to accelerate that work.
So unbelievable.
Thank you so much, Enrique.
Thank you for your partnership.
I can't wait for everything that we're gonna bring out together.
Thank you, Lisa.
Great to be here.
Thank you.
So, I showed you earlier that third-gen Ryzen AI has the most powerful NPU, but you also need the best
CPU and GPU to deliver the best PC experience possible.
So let's take a look at some of that other performance.
When we compare the Ryzen AI 300 series to all of the latest x86 and Arm CPUs from our competitors,
you can see why we say Strix is really the best notebook CPU in the market.
Whether you're looking at single threaded responsiveness,
productivity applications,
content creation,
or multitasking,
third-gen Ryzen AI processors deliver significantly more performance,
often beating the competition by a large double-digit percentage across a broad range of use cases.
Now let's welcome another one of AMD's closest partners in the development of CoPilot Plus PCs.
Let's welcome Luca Rossi, president of Lenovo Intelligent Devices Group.
Hey, good morning, Lisa.
Wonderful to see you, Luca.
Thank you so much for joining us today.
We appreciate the partnership; Lenovo and AMD are doing so much together.
Yeah, so thanks for having me, Lisa. Strong partnerships are core to Lenovo's strategy.
Our long-term partnership with AMD, as you know, Lisa, spans over 25 years and is a testament to this. Together,
I think we have driven incredible innovations across PCs, mobile, servers, tablets, and edge computing.
For example, our ThinkStation P620 was the first Threadripper Pro workstation to deliver unprecedented performance
and flexibility to power AI rendering and workflows.
Beyond hardware,
the AI Engine+ software in our Legion gaming laptops uses machine learning
and integrates with AMD Ryzen processors to dynamically adjust settings and tailor epic gaming experiences.
Our AMD-powered devices,
ThinkPad, ThinkBook, Yoga, and Legion, are all well equipped to run AI applications, accelerate video editing, enhance 3D rendering, and elevate gaming to new heights.
So, Lisa, we are very excited about all the innovation that AMD is introducing, and Lenovo definitely will be a great partner to deliver it to the market.
We can do this because of our global scale, top-notch engineering and design, and the operational excellence of the world's number one.
Thank you so much, Luca.
And when we think about all the work we're doing together,
today we're talking about third-gen Ryzen AI devices, and I know your team has done a lot,
our teams have done a lot together.
Can you talk a little bit about some of the AI experiences that you have?
Yeah, yeah, yeah, pleasure.
So later this year,
we are going to launch Lenovo AI laptops with the third gen Ryzen AI processors for consumers through our yoga franchisee for commercial,
with our legendary ThinkPad brand and force more than medium businesses through our ThinkBook lineup.
And matter if you are a creator, an professional or a startup entrepreneur, Lenovo have the perfect co-pilot plus laptop with Tergen.
And congratulations operating at the industry leading over 50 tops.
Congratulations for that, Lisa.
Thank you.
So we'll have also some exclusive Lenovo AI experiences coming to the market this year.
One is Creator Zone.
We co-engineered this with AMD, fine-tuning the AI model,
and it is exclusive Lenovo software tailor-made for creators, providing tools and features to boost creativity and productivity.
And maybe let's take a look at how this software works.
So first, let me introduce AI Now; that's our natural-language agent that runs locally on the device.
And one of the things that makes this so special is its personal knowledge base.
With the right permissions, AI Now can interpret user data to provide faster output.
You have just seen AI Now going through a script and generating both a thumbnail and a description to post on YouTube and share with the world.
So see, it was very easy, right?
Now, let's say the user wants to create images of a fish to post on social media.
All they have to do is to use Lenovo Creator Zone.
That's our other IP.
With text-to-image, Creator can generate an image based off any idea.
And if the image needs further refining, the sketch-to-image function can generate one.
And all of these images were created with the same prompt, locally on the device, with a built-in responsible-AI check feature.
Fantastic.
So we have invested significantly in R&D, as you know, Lisa, and we built unique Lenovo IP
for running on-device AI workloads,
including LLM compression and performance-improving algorithms, and we are confident that our third-gen Ryzen AI offering will stand out from the competition.
And last but not least,
we also have created Smart Connect, another Lenovo IP that unifies AI PCs, tablets, smartphones, and other IoT devices into the same Lenovo ecosystem.
That's fantastic, just wonderful to see all of these pieces come together.
Now, Luca, you've been holding something and I think you're going to show us what it is.
Well, I wasn't supposed to even show this yet, since this is something we will not announce until later this year, but, Lisa, you're right.
I felt it's just too exciting to keep to myself.
So straight from our R&D lab, this is the first-ever sneak peek at our new Yoga laptop, powered by third-gen Ryzen AI.
That's beautiful.
Maybe we want to do left and right.
I can't share more for today,
but I can tell you this device represents a significant leap forward in next-generation AI PCs.
It will include some of the exclusive Lenovo IP AI features that I mentioned.
And this is just the beginning.
We cannot wait to share more together and bring these transformative AI PCs to the world.
Thank you for having me and thanks everyone.
Thank you so much, Luca. Thank you.
You can see, we get very excited about our products.
Next, I'd like to welcome one of the most important visionaries and innovators in the
Taiwan ecosystem and a very, very close partner, Jonney Shih, Chairman of ASUS.
Hi, Lisa.
Thank you.
Thank you so much for being here.
Thank you, Lisa.
Yeah.
I think it's really my great honor to join you on stage,
especially since you are now a legend of the computing industry and the pride of Taiwan.
Yes. Thank you.
Jonney, you are actually a true visionary.
I think we all have so much respect for you.
You've shaped this industry for so many years.
Can you just tell me a little bit about how, you know, the landscape of computing is changing and how do you see AI?
Yes.
The AI PC will be one of the most disruptive innovations of our lifetime.
The ubiquitous AI era is the mega paradigm shift that we have long envisioned at ASUS.
And I'm so overjoyed that it's finally becoming a reality.
The world will be full of AI models
that come in different forms and sizes: super-large, like the 1.8-trillion-parameter models with MoE;
big, medium, small, and even tiny, like less than 1 billion parameters;
from the cloud to the edge, to PCs and devices like phones and robots.
AI PCs will play an extremely critical role in this new distributed, hybrid AI ecosystem.
Imagine an AI PC with small-language-model types of AI brains, capable of acting as a
personal agent that can understand and help you with your personal needs,
preferences, and even work, while complementing the super brain in the cloud with local advantages: security, privacy, and personalization.
All the while, offloading the cloud computing needs, especially for inferencing.
Isn't it incredible?
This will benefit user productivity, like video editing, design work, and scientific computing, and a lot more.
This vision is amplified and possible because of our partnership with AMD and the launch of the third gen Ryzen AI.
We are definitely innovating at the forefront of AI PCs together.
It is so inspiring, Johnny, to hear you talk with such passion.
We can feel your passion.
So can you tell us a little bit about your new portfolio of AI PCs with third-gen Ryzen AI?
Of course, Lisa.
Later today at 4 p.m.,
we will be unveiling a range of cutting-edge AI PCs across our portfolio, with brand-new ZenBook, ProArt, VivoBook, ASUS TUF, and ROG laptops,
all powered by the third-gen AMD Ryzen AI processors.
These new laptops are equipped
with the most powerful NPU, with 50 TOPS, and the superior AMD Zen 5 architecture that leads the industry in compute and AI performance.
The third-gen Ryzen AI processor is the catalyst to bring personalized AI computing to everyone, from content creators to gamers to business professionals,
and to empower them like never before.
This advancement gives the new ZenBook higher AI performance than the MacBook, while making it thinner and lighter as well.
ASUS is so proud and honored to be the first OEM partner to make third-gen Ryzen AI systems available to consumers.
It will be ready for purchase in July.
Isn't it incredible?
Thank you.
These are super beautiful systems, Johnny, and you know it's it's also about the experiences for content creation and creativity.
Can you tell us a little bit about that?
Sure.
We have been working closely with AMD to integrate and optimize the incredible power of Ryzen AI processors in our ASUS Copilot+ PC lineup.
This enables us to create exclusive and unprecedented AI apps to empower users to be more efficient and creative than ever before.
A great example is one of our recently launched AI apps, called StoryCube.
For those of us who use multiple devices, we love it.
StoryCube is an AI-powered digital asset management app
designed to provide you with a seamless and efficient file-organizing experience.
It can act as a handy assistant by automatically identifying your loved ones' faces, and even detecting and sorting your media into various scenes such as scenic road trips,
skiing adventures, or adorable puppy moments.
With the 50-TOPS capability of the Ryzen AI NPU,
StoryCube can drastically shorten AI categorization time, from tens of seconds running on the CPU to just a blink of an eye.
Thank you.
You know,
Jonney, it's really exciting what we're doing on AI PCs, but AMD and ASUS have also had a very long history of partnering across motherboards, graphics
cards, and systems.
Can you comment on some of that going forward?
Sure.
It has been a great history together, creating incredible products.
ASUS was even the first to push Ryzen gaming systems, which received an incredible response from users.
Last year, ASUS introduced the first ROG Ally handheld device, which also adopted the AMD Z1 Extreme chip.
We have solid leadership in the AMD Ryzen 9 segment, with 60% market share.
And we are excited about expanding our partnership to offer AI solutions for specific industries like healthcare, education, and smart cities.
With the goal of revolutionizing these sectors with powerful, on-device AI applications.
That's why we are so excited to work together in the next-gen AI PC space: by combining cutting-edge AMD silicon
like Ryzen AI processors with ASUS software expertise rooted in our Design Thinking philosophy,
we can push the boundaries of AI PC innovation and deliver truly groundbreaking AI experiences to users.
Jonney, all I can say is you are an inspiration to us all.
Thank you so much for everything that you've done for our industry and thank you for your partnership with AMD.
Thank you.
We have a partnership.
Thank you very much.
So I hope you got a feel for all of the customer excitement around third-gen Ryzen AI PCs.
I'm very happy to say that the first notebooks will be available in July and we have more
than 100 consumer and commercial notebook design wins.
from ASUS, HP, Lenovo, MSI, and more, a sign of things to come.
So now let's transition from PCs to the Edge,
where our embedded and adaptive solutions are bringing AI to a diverse set of markets and devices.
AMD platforms are already broadly deployed at the Edge.
In healthcare, AMD chips are improving patient outcomes by enhancing medical imaging analysis, accelerating research, and assisting surgeons with precision robotics.
In automotive, AMD AI solutions are powering the most advanced safety systems.
And in industrial, customers are using AMD technology for analytics and machine vision applications.
We are number one in adaptive computing today,
and thousands of companies have adopted our XDNA AI, adaptive, and embedded technologies to power their products and services.
Let me just give you a few examples.
So Illumina is a global leader in genomics,
and they use EPYC CPUs and AMD adaptive SoCs with their SpliceAI software to identify previously undetectable mutations in patients with rare genetic diseases.
In automotive, Subaru's industry-leading EyeSight ADAS system uses
Versal to analyze every frame captured by the front camera, and that allows them to identify and alert the driver to possible safety hazards.
Hitachi Energy uses AMD adaptive computing products in their widely-deployed,
high-voltage direct current solutions to detect potential electrical issues before they become large problems and cause power outages.
And Canon has adopted Versal to power the AI-based free-viewpoint video system that captures high-resolution video from over 100 cameras simultaneously,
and that allows viewers to experience live events from virtually any angle.
Now, AI at the edge is actually a hard problem.
It requires the ability to do pre-processing,
inferencing, and post-processing, all at the edge. And only AMD has all of the pieces needed to accelerate end-to-end AI at the edge.
So we combine adaptive computing engines for pre-processing sensor and other data, with AI engines for inferencing,
and then high-performance embedded compute cores for post-processing and decision-making.
Today, doing this requires three separate systems.
And with our new Versal AI Edge Gen 2 series,
we bring all of this leadership compute together to create the first adaptive solution that integrates pre-processing,
inferencing, and post-processing in a single chip.
So today we're announcing early access for our next gen Versal platform.
More than 30 of our strategic partners are already developing Edge AI devices powered by our new single chip Versal solution.
And we are incredibly excited about the opportunity to
enable AI at the edge, and we see significant opportunities to extend our embedded market leadership with these new technologies.
Okay, now let's turn to the data center.
We've built the industry's broadest portfolio of high-performance CPU, GPU, and networking products.
And when you look at modern data centers today, they actually run many different workloads.
They range from traditional IT applications to smaller enterprise LLMs to large-scale AI applications.
And you need different compute engines for each of these workloads.
Now, AMD has the full portfolio of high-performance CPUs and GPUs to address all of these workloads:
from our EPYC processors that deliver leadership performance on general-purpose and mixed AI workloads,
to our industry-leading Instinct GPUs that are built for accelerating AI applications at scale.
Today I'm going to share details of our next generation data center CPU and GPU offerings.
So let's start first with CPUs.
If you take a look at the CPU market,
EPYC is actually the processor of choice for cloud computing,
powering internal workloads for all of the largest hyperscalers, and more than 900 public instances from all the major cloud providers.
Every day, billions of people around the world use cloud services powered by EPYC.
That includes Facebook, Instagram, LinkedIn, Microsoft Teams, Zoom, Netflix, WeChat, WhatsApp, and many, many more.
All of that is on EPYC.
Now, we launched EPYC in 2017, and with every
generation, more and more customers have adopted EPYC because of our leading performance, our leading energy efficiency, and our leading total cost of ownership.
And we're very proud to say that we're at 33% share now and growing.
Now, when you look at today's data centers, most data centers are actually powered by processors that are more than five years old.
And when you look at the virtualization performance of our latest generation server CPUs,
the new technology is so much better that Epic delivers five times more performance compared to those legacy processors.
And even comparing to the best processors today from the competition, our performance is one and a half times faster.
Many enterprises today are actually looking to modernize their general purpose computing infrastructure and add new AI capabilities, often within the same footprint.
And by refreshing their data centers with fourth-gen EPYC, they can accomplish exactly that.
You can actually replace five legacy servers with a single EPYC server.
That reduces rack space by 80% and consumes 65% less energy.
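As a quick sanity check of those consolidation figures (my own arithmetic, with a hypothetical power draw for illustration, not AMD data), the numbers are internally consistent:

```python
# 5:1 consolidation from the talk: one new server replaces five legacy ones.
legacy_servers, new_servers = 5, 1
rack_space_saved = 1 - new_servers / legacy_servers
print(f"rack space saved: {rack_space_saved:.0%}")         # 80%

# Hypothetical power figures (illustrative only, not AMD numbers):
legacy_watts_each = 400
legacy_total_watts = legacy_servers * legacy_watts_each    # 2000 W
# A 65% energy reduction implies the one replacement server may draw up to:
new_total_watts = legacy_total_watts * (1 - 0.65)          # 700 W
print(f"replacement power budget: {new_total_watts:.0f} W")
```

So the 80% rack-space figure follows directly from the 5:1 ratio, while the 65% energy figure is an independent claim about total power draw.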
Now, many enterprise customers also want to run a combination of general-purpose and AI workloads without adding GPUs.
And EPYC is, again, the best option for that.
Looking at EPYC, we are 1.7 times faster when running the industry-standard TPCx-AI benchmark
that measures end-to-end AI pipelines across different use cases and algorithms.
Fourth-gen EPYC is clearly the industry's best server CPU, but we're always pushing the envelope to deliver more performance.
So I have something to show you today.
It's actually a preview of our upcoming fifth-gen EPYC processor, codenamed Turin.
So please take a look at Turin for the very first time.
Turin features 192 cores and 384 threads and has 13 different chiplets built in three and six nanometer process technology.
There's a lot of technology on Turin.
It supports all of the latest memory and I/O standards and is a drop-in replacement for our existing fourth-gen EPYC platforms.
Thank you, Drew.
Turin will extend EPYC's leadership in general-purpose and high-performance computing workloads.
So let's take a look at some of that performance.
NAMD is a very compute-intensive scientific software package that simulates complex molecular systems and structures.
When simulating a 20 million atom model, a 128-core version of Turin is more than three times faster than the competition's best.
enabling researchers to more quickly complete models that can lead to breakthroughs in drug research, material science, and other fields.
Now, Turin also excels at AI inferencing performance when running smaller large language models, so I want to show you a demo here.
Now, what this demo compares is the performance of Turin when running a typical enterprise
deployment of Llama 2 virtual assistants, with a guaranteed response latency to ensure a high-quality user experience.
Both servers begin by loading multiple Llama 2 instances, with each assistant being asked to summarize an uploaded document.
Right away, you can see that the Turin server on the right is adding double the number of sessions in the same amount of time, while responding to user requests significantly faster than the competition.
And while the other server soon reaches its maximum number of sessions, because it can't support the latency requirements beyond that,
Turin continues scaling and delivers a sustained throughput of nearly four times more tokens per second.
That means when you use Turin, you need less hardware to do the same work.
And in addition to leadership summarization performance,
Turin also delivers leadership performance across a number of other enterprise AI use cases,
including two and a half times more performance when translating large documents, and more than five times better performance when running a support chatbot.
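The demo's methodology, adding concurrent sessions until a latency target would break and then comparing sustained throughput, can be sketched with a toy fair-sharing model. All numbers below are invented for illustration; they are not the measured results from the demo:

```python
def max_sessions(server_tokens_per_sec: float,
                 tokens_per_response: int = 256,
                 latency_slo_sec: float = 8.0) -> int:
    """With fair sharing, n concurrent sessions each get 1/n of the server's
    token throughput, so one response takes tokens * n / throughput seconds.
    Return the largest n that still meets the latency target."""
    n = 0
    while tokens_per_response * (n + 1) / server_tokens_per_sec <= latency_slo_sec:
        n += 1
    return n

# Hypothetical servers: a baseline vs. one with ~4x sustained tokens/sec.
baseline = max_sessions(server_tokens_per_sec=1000.0)
faster = max_sessions(server_tokens_per_sec=4000.0)
print(baseline, faster)  # 31 125 -- ~4x the throughput sustains ~4x the sessions
```

The point of the model is that under a fixed latency SLO, session capacity scales roughly linearly with sustained token throughput, which is why the higher-throughput server keeps adding sessions after the other one stops.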
Our customers are super excited about Turin, and I know many of our partners are actually here in the audience today.
And I want to say we're on track to launch in the second half of this year.
So now let's turn to data center GPUs, with some big updates on our Instinct accelerators.
We launched MI300 last December, and it has quickly become the fastest-ramping product in AMD history. Microsoft, Meta, and Oracle have all adopted MI300.
Every major server OEM is offering MI 300 platforms.
And we have built deep partnerships with a broad ecosystem of CSP and ODM partners.
Again, many, many thanks to our ODM partners who are here today offering Instinct solutions.
Now, if you look at today's enterprise generative AI
workloads, MI300X provides out-of-the-box support for all of the most common models, including GPT, Llama, Mistral, Phi, and many more.
We've made so much progress in the last year on our ROCm software stack,
working very closely with the open-source community at every layer of the stack, while adding new features
and functionality that make it incredibly easy for customers to deploy AMD Instinct in their software environments.
Over the last six months, we've added support for more AMD AI hardware and operating systems.
We've integrated open-source libraries like vLLM and frameworks like JAX.
We've enabled state-of-the-art
attention algorithms, and we've improved computation and communication libraries, all of which have contributed to significant increases in gen AI performance for MI300.
Now, with all of these latest ROCm updates, MI300X delivers significantly better inferencing
performance compared to the competition on some of the industry's most demanding and popular models.
That is,
we deliver 1.3 times more performance on Meta's latest Llama 3 70B model compared to H100,
and 1.2 times more performance on Mistral's 7B model.
We've also expanded our work with the open-source AI community.
More than 700,000 Hugging Face models now run out of the box using ROCm on MI300X.
This is a direct result of all of our investments in development and test environments that ensure a broad range of models work on Instinct.
The industry is also making significant progress at raising the level of abstraction at which developers code to GPUs.
We want to do this because people want choice in the industry.
And we're really happy to say that we've made significant progress with our partners to enable this.
For example,
our close collaboration with OpenAI is ensuring full support of MI300X with Triton, providing a vendor agnostic option to rapidly develop highly performant LLM kernels.
And we've also continued to make excellent progress adding support for AMD AI hardware to leading frameworks like PyTorch,
TensorFlow, and others. We're also working very closely with leading AI developers to optimize their models for MI300.
So I'm very excited to welcome Christian Laforte,
CTO and co-CEO of Stability AI, an important AMD partner known for delivering the breakthrough Stable Diffusion open-access AI models.
Hello, Christian, how are you?
Thank you, Lisa.
I'm great.
It's a great honor to be here to represent my colleagues and everyone else who makes Stability AI a really interesting player in the ecosystem.
Well, you know, we've shown a lot of Stability AI today.
You're known for delivering these breakthrough open access AI models that generate images, video, language, code, all these things.
Can you share some insights into how these models are pushing the boundaries of what's possible?
Yes, we're seeing incredible gains in productivity in every industry.
And these were made possible because we did the crazy thing and released our models and our source code for free.
This allowed millions of developers and thousands of researchers to adapt our models, to make new discoveries at record pace,
and to create new applications extremely fast.
Take, for instance, touching up old family photos to improve their resolution or quality or maybe to remove someone you never, ever want to see
again in your whole life.
Doing this well used to take years of experience, and sometimes hours of tedious work for each image.
Now applications like Stable Assistant and Stable Artisan, and a lot of other applications that leverage Stable Diffusion, allow anyone
to create and edit images in seconds. And we're seeing similar gains in productivity in the other areas we're involved in: language, coding,
music, speech, and 3D.
And combining all of those together, we aim to soon boost, by at least 10x, the productivity of filmmaking and video game creation.
That's fantastic, Christian.
Now, I understand you have some big news to tell the audience today.
Yes.
So basically, the wait for Stable Diffusion 3 is almost over.
We appreciate the community's patience and understanding as we dedicated extra effort to improve its quality and safety.
Today we're announcing that on June 12, we will release the Stable Diffusion 3 medium model for everyone to download.
A lot of work went into this, and we're really excited to see what the community will end up doing with it.
One thing that is maybe not obvious to non-technical people is that it used to be that the frontier
of research led to these models, like Stable Diffusion.
But nowadays there's a natural evolution happening:
these models are going to combine together in all kinds of novel ways.
And by releasing them openly, we basically allow millions of people to help discover the best ways to bring those together and unlock new use cases.
So SD3 Medium is an optimized version of SD3 that achieves unprecedented visual quality, and that the community will be able to improve for their own specific needs, helping us discover
the next frontier of generative AI.
It will, of course, run super fast on the MI300, and it's also compact
enough to run on the Ryzen AI laptops that you've just announced.
So, here's an image produced with Stable Diffusion 3. We challenged it to illustrate what the famous Taiwan night markets look like.
It looks very nice.
So if you look really,
really closely, you'll see it's not quite photorealistic, but I think it captured the different elements of the text prompt really well.
And it's especially impressive when you think that it was generated much faster than it took to type this long text prompt.
It captured the pedestrians, the street made of stones, the fact that it is nighttime, and so on.
So SD3 is able to do this using a bunch of new innovations,
including the multimodal diffusion transformer architecture, that allow it to understand visual concepts and text prompts far better.
than previous models.
It supports both simple prompts,
so you don't need to become an expert at these,
but you can also use much more complex ones and it will try to bring together all of the different elements of it.
And SD3 excels at all kinds of artistic styles and photorealism.
Here's an example that is actually really challenging, comparing it with our previous-generation model, Stable Diffusion XL, which we released less than a year ago.
It's especially challenging because it involves hands, which are notoriously hard for these models to replicate.
It also involves repeating patterns, like the strings of the guitar and the threads,
and these are really challenging for these models to understand and draw accurately.
You can see how SD3 generated more realistic details, like the shape of the guitar and the hands.
And if you look really, really closely, you'll notice that there are a few imperfections here and there.
So it's still not quite perfect, but a big improvement over the previous generation.
That's fantastic, Christian, and I know that your team's been working a lot on SD3.
What's your experience been like with MI300X?
It's wonderful.
192 gigabytes of HBM, that's really a game changer.
Having more memory basically unlocks new models,
and it's often the number one factor that helps us train bigger models faster and more efficiently.
And I'll give an example that we've actually just encountered in collaborating with AMD.
So we have this creative upscaler feature in our API.
Basically, the way it works is that it can take an old photo or an old image that is less than one megapixel and really blow up the
resolution while improving the quality at the same time.
And with this creative upscaler, we were happy when we were able to reach 30 megapixels on the H100.
But once we ported our code over to the MI300,
which by the way took pretty much no effort, we were able to reach 100 megapixels.
And you know, content creators want more pixels.
So it makes a huge difference.
And the fact that we didn't have to really make any effort to achieve this, yeah, it's a big step up.
Our researchers and engineers really got a lot out of the increased
memory capacity and the bandwidth advantages that the AMD Instinct GPUs deliver out of the box.
So Lisa,
moving forward, we'd really love to collaborate more closely with AMD because we'd like to create a new state-of-the-art video model.
We need a lot more memory, and we need your team, to achieve this and to release it for the whole world to enjoy.
That sounds fantastic.
It sounds like you need some GPUs.
Yes.
Thank you.
You're in the right place for this.
Thank you so much, Christian.
Thank you.
Have a great Computex.
You can see all of the innovation that's happening in such a short amount of time.
Now, earlier in the show, I was joined by Microsoft's Pavan Davuluri, who highlighted the great work that we're doing together on Copilot+ PCs.
Microsoft is also one of our most strategic data center partners, and we've been working very closely with them on our EPYC and Instinct roadmaps.
To hear more about our partnership and how Microsoft is using MI300X across their infrastructure,
here's Microsoft Chairman and CEO Satya Nadella.
Thank you so much, Lisa.
Great to be with all of you at Computex.
We're in the midst of a massive AI platform shift.
with the promise to transform how we live and work.
We have committed to partnering broadly across the industry to make this vision real.
That's why our deep partnership with AMD,
which has spanned multiple computing platforms, from PC to custom silicon for Xbox, and now to AI is so important to us.
As Pavan highlighted, we are excited to partner with you to deliver these new Ryzen AI-powered Copilot+ PCs.
And we're also thrilled that, as we announced last month, we were the first cloud to deliver general
availability of virtual machines using AMD's MI300X accelerator.
It's a massive milestone for both of us,
and it gives our customers access to very impressive performance and efficiency for their most demanding AI workloads.
In fact, it offers today the leading price performance for GPT-4 workloads.
This is just the start.
We are very committed to our collaboration with AMD
and we'll continue to push AI progress forward together across the cloud and edge to bring new value to our joint customers.
Thank you all so very much.
Woo!
Thank you so much, Satya.
We're so proud of our work with Microsoft and as you heard from Satya,
MI 300 delivers the best price performance today for GPT-4 workloads and is being deployed broadly across Microsoft's AI compute infrastructure.
Now, let me show you one more example of MI300 being used to power OpenAI's Wanderlust travel assistant, built on GPT-4.
So, again, let's start by letting the tool know that we're interested in Taiwan and that we're going to be attending Computex.
And you might ask something about, you know, who's the opening keynote?
Now, let's also ask Wanderlust, what other interesting sites should we see in Taipei?
And you can see, almost instantly, we get lots of options of things to do near the convention center.
But if we want to narrow it down to just a few,
because we only have a day,
we can ask it to plan a day for us,
and that day would include things like Elephant Mountain and Taipei 101, and you get the full itinerary.
I think it just gives you an example of the power of AI.
Wanderlust on MI300 looks wonderful,
but it really shows the power of these assistive
agents, and how easy it is for developers
to seamlessly integrate gen AI models into their applications so that we can make AI extremely helpful for all of us.
Now, the customer response to MI300 has been overwhelmingly positive,
and it's clear that the demand for AI is just accelerating going forward.
We're really just at the beginning of a decade-long mega cycle for AI.
And to address this incredible demand, I have a very exciting roadmap to show you.
We launched MI300x last year with leadership inference performance, memory size, and capabilities, and have now expanded our roadmap so it's on an annual cadence.
That means a new product family every year.
Later this year,
we plan to launch the MI325X with more and faster memory,
followed by our MI350 series in 2025, which will use our new CDNA4 architecture.
Both the MI325 and MI350 series will leverage the same industry-standard universal baseboard OCP server design used by MI300,
and what that means is that our customers can very quickly adopt this new technology.
And then in 2026, we will deliver another brand-new architecture, CDNA Next, in the MI400 series.
So let me show you a little bit more.
Starting with MI325: MI325 extends our leadership in generative AI with up to 288 gigabytes of ultra-fast HBM3E
memory, with six terabytes per second of memory bandwidth,
and it uses the same infrastructure as MI300, which makes it easy for customers to transition.
Now let me show you some competitive data.
Compared to the competition, MI325 offers twice the memory, 1.3 times faster memory bandwidth, and 1.3 times more peak compute performance.
And based on this larger memory capacity, and you heard what Christian said about the importance of memory,
a single server with eight MI325X accelerators can run advanced models of up to one trillion parameters.
That's double the size supported by an H200 server.
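A quick back-of-the-envelope check of that capacity claim (my arithmetic, not AMD's, and it ignores KV-cache and activation memory): one trillion parameters stored in 16-bit precision need about 2 TB of weights, which fits within the combined HBM of eight MI325X accelerators:

```python
params = 1_000_000_000_000           # 1 trillion parameters
bytes_per_param = 2                  # FP16/BF16 weights
weights_tb = params * bytes_per_param / 1e12
hbm_tb = 8 * 288 / 1000              # eight accelerators x 288 GB HBM3E each
print(f"weights: {weights_tb:.1f} TB, HBM available: {hbm_tb:.3f} TB")
assert weights_tb < hbm_tb           # weights alone fit in aggregate HBM
```

By the same arithmetic, an eight-GPU server with 141 GB per accelerator would offer about 1.13 TB, which is why halving the per-GPU memory roughly halves the largest model that fits.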
And then moving into 2025, we'll introduce our CDNA4 architecture, which will deliver the biggest generational leap in AI performance in our history.
It's built with advanced 3-nanometer process
technology, adds support for FP4 and FP6 data types, and will again drop into the same infrastructure as MI300 and MI325.
We are super excited about the AI performance of CDNA4.
So if you just take a look at that history,
when we launched CDNA3, we were eight times more AI performance compared to our prior generation.
And with CDNA4, we're on track to deliver a 35 times increase.
That's a 35 times increase in AI performance compared to CDNA3.
And when you compare MI350 series to B200, Instinct supports up to 1.5 times more memory and delivers 1.2 times more performance overall.
We are very excited about our multi-year Instinct and ROCm roadmaps,
and I can't wait to bring all of this new performance to our AI customers.
Now I have one more topic I'd like to talk about today, in addition to our focus on Instinct.
We've also made significant progress driving the development of high performance AI networking infrastructure.
AI network fabrics need to support fast switching rates with very low latency, and they must scale to connect thousands of accelerator nodes.
At AMD, we believe that the future of AI networking must be open.
Open to allow everyone in the industry to innovate and drive the best solutions together.
So for both inferencing and training,
it's actually critical to scale up the performance of hundreds of accelerators,
connecting the GPUs in a rack or pod with an incredibly fast,
highly resilient interconnect, so they can work as a single compute node to run the largest models at the fastest speeds.
Last week,
I'm very happy to say that many of the largest chip, cloud, and systems companies came together to announce plans to develop an open standard for a high-performance fabric
that can efficiently connect hundreds of accelerators.
We call this Ultra Accelerator Link, or UALink.
It's an optimized load/store fabric designed to run at high data rates, and it leverages AMD's proven Infinity Fabric technology.
We actually believe UALink will be the best solution for scaling accelerators of all types,
not just GPUs, and it will be a great alternative to proprietary options.
The UALink 1.0 standard is on track for later this year, with chips supporting UALink already well into development from multiple vendors.
And now the other part of training large models is also the need for scale-out performance,
connecting multiple accelerator pods to work together with at-scale installations, often spanning hundreds of thousands of GPUs.
A broad group of industry leaders formed the Ultra Ethernet Consortium last year to address this challenge.
And UEC is a high-performance Ethernet technology with leading signaling rates.
It has extensions such as RoCE for RDMA to efficiently move data between nodes,
and it has a new set of innovations developed specifically for AI supercomputers.
It's incredibly scalable.
It leverages the latest switching technology from leading vendors such as Broadcom,
Cisco, and others.
And above all,
it's open.
Open means that, as an industry, we can innovate on top of UEC, solve the problems that need solving, and work together to build out the best possible high-performance interconnect for AI and HPC.
So, when you look ahead, what does this mean?
That means we have all the pieces.
We have UALink and Ultra Ethernet.
And now we have the complete networking solution for highly performant,
highly interoperable, and highly resilient AI data centers that can run the most advanced frontier models.
So I hope you can see now that AMD is the only company that can deliver the full set
of CPU,
GPU, and networking solutions to address all of the needs of the modern data center. And we have accelerated our roadmaps to deliver even more innovation across both our Instinct
and EPYC portfolios, while also working with an open ecosystem of other leaders to deliver industry-leading networking solutions.
Now, it's been a wonderful morning.
We have so much that we talked about, so let me just wrap things up.
We showed you a lot of new products today: from our latest Ryzen 9000 desktop and third-gen
Ryzen AI notebook processors with leadership compute and AI performance, to our single-chip
Versal AI Edge Gen 2 series that will bring more AI capabilities to the edge, to our next-generation
Turin processors that extend the leadership of our EPYC portfolio, and our expanded set of Instinct accelerators that will deliver an annual cadence of higher performance.
And what I can say is this is an incredible time to be in the technology industry.
It's an incredible pace of innovation.
And I couldn't be more excited about all of the work that we're going to do together in high performance and
AI computing as an industry.
So a very, very special thank you to all of our partners who joined us today.
Microsoft, HP, ASUS, Lenovo, and Stability AI.
And especially thank you for all of our partners here in Taiwan and around the world.
Thank you for being such a great audience.
Have a great Computex 2024.
Thank you.