Record Labels Take A.I. Music to Court
Machines don't learn, right? Machines match patterns, and then they finish the pattern based on predictive algorithms or models, right? That's not what humans do. Humans have lived experiences, they have souls, they have genius. They actually listen, get inspired, and then they come out with something different, something new.
I'm curious what you make of the Stop Killer Robots movement. A bunch of leaders in AI, including Elon Musk and Demis Hassabis of Google DeepMind, all pledged not to develop autonomous weapons.
Do you think that was a good pledge or do you support autonomous weapons?
I mean, I think autonomous weapons are now kind of a reality in the world, and, you know, if you're not willing to fight with autonomous weapons, then you're gonna lose.
This was a partnership that McDonald's struck with IBM.
It's like it just keeps tallying up McNuggets and starts charging them more than $200.
But look,
I, for one, am very glad this happened because for so long now, I've wondered what does IBM do, and I have no idea.
And now, if it ever comes up again, I'll say, oh, that's the company that made the McDonald's stop working.
I'm Kevin Roose, a tech columnist at The New York Times.
I'm Casey Newton from Platformer.
And this is Hard Fork.
This week, the record labels sue two leading AI music apps, accusing them of copyright infringement.
RIAA CEO Mitch Glazier joins us to make the case.
Then, we go inside the military's tech turmoil with Chris Kirchhoff, who ran the Pentagon's office in Silicon Valley and has a new book out called Unit X. And finally, a round of Hat GPT.
Casey, today I learned something new. You know, I'm in New York, I'm visiting some friends, I'm going to some weddings, and I'm at the New York Times building. And I learned just today that there's an entire podcast studio at the Times building that I've never seen. That's how big the New York Times is.
It's just full of, you know, nooks and crannies that very few people have ever seen with their own eyes.
Yeah, so up on the 28th floor, apparently there's a gleaming new audio temple.
I hear it's very fancy, but I've never been.
So right after we tape today, I'm going to go up there and I'm going to see the promised land.
You know what I would do if I got to see the studio, Kevin, if I were in New York?
What's that?
I would sneak in and I'd get a little pocket knife and I'd just uh,
carve, uh, Kevin plus Casey forever into one of the brand new desks.
And I would dare them to say anything to me about it.
Yeah, let's not let you up there. I'm going to actually ask security about this specifically.
Can you imagine Ezra Klein sits down to interview, like, the Secretary-General of the United Nations, and he just sees carved into the desk:
Casey plus Kevin forever.
Casey was here.
Suck it, Clyde.
Now Kevin, not a lot of people know this, but we have something interesting in common.
What's that?
Well, we were a couple of the few teenagers who managed to survive the Napster era without
getting sued by the Recording Industry Association of America.
Yes, although one of my friends actually did get sued by the recording industry and had to pay thousands of dollars.
Is he still in prison, or did he get out?
No, he got out.
He's fine.
Oh, thank God.
Thank God.
Well, look, Kevin, it's always a strange day when you find yourself siding with the RIAA.
And yet when I heard this week's news, I thought, well, yeah. Let's talk about it.
Yeah, so this week the record industry sued two leading AI music companies and is maybe trying to shut them down.
So the companies that the music labels sued are Udio and Suno. We've talked about them a little bit on this show before.
Basically, these are tools that sort of work like ChatGPT: you can type in a prompt.
You can say, make me a country western song about a bear fighting a dolphin, and it'll do that.
But basically these companies have come under a lot of criticism for allowing people to create songs without compensating the original artists.
Like other AI companies, these companies do not say where they're getting their data.
Suno is releasing statements using words like transformative and completely new outputs,
basically arguing that this is all fair use and that they don't owe anything to the holders of the copyrighted songs that they were presumably using to train their models.
Um, but we'll see how the courts see that.
Well, if you've never heard one of these, Kevin, and I know you have, we should play a clip, just so people get a sense of just how closely these services can mimic artists you might be familiar with.
So we're about to hear a song called Prancing Queen and this was made with Suno.
Can you believe what they're doing to Abba Kevin?
You know I actually saw an Abba cover band once many years ago and that was better than the Abba cover band.
You know what I liked about that clip: it's like, if I had six beers and someone shoved me onto a karaoke stage and said sing Dancing Queen from memory, that's exactly what it would have sounded like.
So we wanted to get to the bottom of this.
So we reached out to the RIAA, and they offered up chairman and CEO Mitch Glazier. So we're going to bring him on and ask him what this lawsuit is all about.
Let's do it.
All right, Mitch Glazier, welcome to Hard Fork.
Thanks.
Thanks for having me.
So make your case that these two AI music companies violated copyright law.
It's a pretty easy case to make.
They copied basically the entire history of recorded music.
They stored it.
Then they used it by matching it to prompts so that they rejiggered the ones and zeros.
Basically, they took chicken and made chicken salad and then said they don't have to pay for the chickens.
Well, some people would say that this is a transformative use, that no matter what you put into a Udio or a Suno, you're not going to get back the original track.
You're going to get something that has been transformed.
What do you make of that case?
Well, there is such a thing as transformative use.
It's actually a pretty important doctrine.
It's supposed to help encourage human creativity, not substitute for it.
There was a really important Supreme Court case on this issue, thank God, that just happened last year, where they kind of dispelled this notion that, you know, anytime you take something and splash a little bit of color on it, it's transformative.
That's not what that means, and this is very similar.
Mitch, you said that these companies have scraped the entire history of recorded music and used it to train their models, but I read through the complaint that came out, and there isn't sort of direct evidence, right?
There's no smoking gun.
They haven't said outright, yes, we did train on all this copyrighted music.
Presumably, that is something you hope will come out in the course of this case, but do you actually need to be able to prove that
they did use copyrighted music in order to win this case?
Can the lawsuit succeed without that?
I think ultimately we do have to show that they copied the music,
but they can't hide their inputs and then say,
sorry, we're not going to tell you what we copied, so you're not allowed to sue us for what we copied.
That they can't do.
So what we were able to do was show in the complaint that there's no way they could have come up with these outputs without copying all of this on the input side. It's sort of this equitable doctrine, in fancy legal terms, that says you're not allowed to hide the evidence and then say you can't sue me.
Right.
Well, on that point, one of my favorite parts of the Suno lawsuit is where it discusses Suno reproducing what are called producer tags, which is when a producer says their name at the start or end of a song.
What does it mean that Suno can nail a perfect Jason Derulo?
Well, thank God Jason Derulo likes to say his name in the beginning of his songs, right?
And, you know, in the blender, that piece wasn't ripped apart enough.
And so that was sort of one of those smoking guns where we're able to show,
if you look at the output, right, and Derulo's tag is in the output, I think they copied the Jason Derulo song.
Yeah, so one of the arguments we've heard from AI companies, not just AI music companies,
but also companies that train language models, is that these machines, these models, they're basically learning the way that humans learn.
They're not just sort of regurgitating copyrighted materials.
They are learning to generate.
I want to just read you Suno's response that they gave to the Verge and have you share your thoughts on it.
Suno said, quote, we would have been happy to explain this to the corporate record labels that filed this lawsuit. And in fact, we tried to do so. But instead of entertaining a good faith discussion, they've reverted to their old lawyer-led playbook. Suno is built for new music, new uses, and new musicians. We prize originality.
What do you make of that?
Yeah, I love this argument.
I love that machines are original, and machines and humans are the same. If you just use human words around machines, like learning, well, you know, then there's no difference between us. If you read a book, it's the same as copying it on a Xerox machine and then mixing all the words around and then coming out with something new. It has nothing to do with the fact that they actually happened to take all of these human-created works.
You're right, machines basically match a user's prompt with an analysis of patterns in what they've copied. And then they finish the pattern based on predictive algorithms or models. That's not what humans do. Humans have lived experiences. They have souls. They have genius. They actually listen, get inspired. And then they come out with something different. They don't blend around patterns based on machine-based algorithms.
So, nice try, but I don't think that that argument is very convincing. And I also love that they say that, you know, the creators and their partners are the ones that have resorted to, like, the old legal playbook. As if they're not resorting to, oh, we can do this, it's based on fair use, it's transformative, we're going to seek forgiveness instead of permission.
Well, I mean, you also have the investor in the company, who you quote in the lawsuit, because he said this to a news outlet, saying, I don't know if I would have invested in this company if they had to deal with the record labels, because then they probably wouldn't have been able to do what they needed to do, which was to sort of Hoover up all this music without paying for it.
Yeah, that's in the legal world what we call a bad fact, and that is a bad fact for the other side.
You don't want your investors saying,
gee, if they had really done this the legal way, I don't think I would have invested because it's just too hard.
It's just too hard to do it the legal way.
Mitch, we've seen other lawsuits come out in the past year from media companies, including
the New York Times, which sued OpenAI and Microsoft last year alleging similar types of copyright violations.
How similar or different from the sort of text-based copyright arguments is the argument that you are making against these AI music generation companies?
I think the arguments are the same that you have to get permission before you copy.
It's just basic copyright law.
The businesses are very different.
And I think, looking at sort of the public reports on the licensing negotiations going on between the news media and companies like OpenAI, you know, news is dynamic. It has to change every single day. And so there needs to be a feed every single day for the input to actually be useful for the output.
Music is catalog, right? You record the song once.
It's there forever.
You know, you don't have to change it.
You don't have to feed the beast every single day.
So I think the business models are quite different, but I think that the legal basis is very similar.
Well, and does that suggest that for you all, it's actually essential that you are able to capture the value of the back catalogs for training, whereas for these media outlets, there's a better chance of securing ongoing revenue?
I think that's right.
I also think that we have an artistic intent element that's very, very different.
You know, it's one thing for somebody to say you can copy this into your input.
It's another to say that you can then change it
so that the output uses the work of the artist but it doesn't match their artistic intent.
And, you know, to sort of what Kevin was saying earlier, they're saying, look, you know, we had discussions, what's your problem?
Well, the problem is we work with human artists
who care about the output, and so they need to have a role and a place in deciding how their art's being used.
Yeah, my understanding is that it's actually gotten much more difficult and expensive to sample
lately than it used to be in ways that I don't really like.
I'd probably like to see more sampling than we do,
but it seems like something changed around the time that the song Blurred Lines came out, and now all of a sudden everybody has to license, you know, even just a whisper of familiarity. Is there anything in whatever led to that situation that you expect you'll bring to this lawsuit?
Well, I think sampling is actually a pretty good example, because samples are licensed today, and there's plenty of sampling going on.
Now, does it mean that anybody can sample anything they want without permission?
No.
Do we have to have clearance departments that go out,
whether you're talking about,
you know, a video, or a movie, or another song, and get those rights, especially from, you know, publishers and prior artists?
That's called ownership, and you actually get to control your own art and what you do, and it's not a simple process all the time.
I mean, it takes work. I'm sure that our companies get frustrated in trying to do clearances, but it's what you've got to do.
Yeah, there have been some companies that have faced copyright challenges over AI generative products that have responded by basically limiting their own tools. You can't refer to a living artist in a prompt; it won't give you a response, basically to try to quell some of these concerns.
Would that satisfy your concerns or are you trying to shut these things down altogether?
They're trying to confuse the issue.
They're pretending that this is about the output.
The lawsuit is about the input, right?
So actually, by saying you can't type Jason Derulo's name, you can't type Adele's name, what they're basically doing there is further hiding the input.
They're making it so that you can't see what they copied and they're pretending that this is all about the output in order to say,
look, we're putting guardrails on this thing.
That's not what this lawsuit's about.
This lawsuit is about them training their model on all of these sound recordings,
not on limiting prompts on the output to further hide the input, but it's clever, it's clever.
Okay, so you want to shut this down?
Well, I think we call it an injunction, Kevin.
We would like to shut down their business as it's operating now,
which is something illegally trained on our sound recordings with output that doesn't reflect the artist's integrity.
Yes.
Does that mean that we want to shut down AI generators or AI companies?
No.
There's 50 companies that are already licensed by the music industry.
And I think it's important,
and this differs a lot from,
I think, the old days, but nobody's scared of this technology as in they want to shut down the technology.
Everybody wants to use the technology.
They definitely see sort of good AI versus bad AI.
Good AI complements artists, helps them stretch music, helps them in the creation of music.
Bad AI takes from them, gives no attribution, no compensation, asks no permission, and then generates something that's a bunch of garbage.
Yeah, I know some artists who would say they want to shut down this stuff entirely, that
they don't think there's any good form of it.
But you mentioned the old days, and I want to ask you about this. I think a lot of my fellow millennials think of the RIAA as the group that went around suing teenagers for pirating music during the Napster era.
The RIAA has also sued a bunch of other file-sharing and music-sharing platforms, and actually fought the initial wave of streaming music services like
Spotify because there was this fear that these all-you-can-eat streaming services would eat into CD sales.
Now, of course, we know that streaming wasn't the death of music or music labels; it actually ended up sort of saving the music industry.
Do you think there's a danger here that actually these AI music generation programs could ultimately
be great for music labels just like Spotify was and that you might be trying to cut off
something productive before it's actually had the chance to mature?
I don't think it's really the same at all.
I think that there's an embrace of AI, and there was well before these generators came out, or well before OpenAI, especially within the tech-content partnerships that have existed and have grown and matured and gotten sophisticated through the streaming age.
So, you know, even though the RIAA's job is to be the boogeyman and to, you know, go out there and enforce rights, which we do with zeal and hopefully a smile while doing our job.
Here, I think that really what we're trying to do is create a marketplace, like streaming, where there are partnerships and both sides can grow and evolve together, because
the truth is you don't have one without the other.
You know, record companies don't control their prices.
They don't control their distribution.
They're now gateways, not gatekeepers.
The democratization of the music industry has changed everything.
And I think they're seeking the same kind of relationships with AI companies that they have with streaming companies today.
What would a good model look like?
I mean,
there are reports this week that YouTube is in talks with record labels about paying them a lot of money to license songs for their AI music generation
software. Is that the solution here, that there will be sort of these platforms that pay record labels, and then they get to use those labels' songs in training their models?
Do you think it's fine to use AI to generate music as long as the labels get paid or is there sort of a larger
objection to the way that these models work at all?
I think it works as long as it's done in partnership with the artists and at the end of the day it moves the ball forward for the label and the artist.
The YouTube example is interesting because that's really geared towards YouTube shorts.
It's geared towards fans being able to use generated music to put with their own videos for 15 or 30 seconds.
That's an interesting business model.
BandLab is a tool for artists. Splice, Beatport, Focusrite, Output, Waves, Native Instruments, Oberheim, even every digital audio workstation that's now using AI...
I mean, there are so many AI companies that have these bespoke agreements and different types of tools that are meant to be done with the artistic community
that I think the outliers are the Sunos and the Udios,
who frankly are not very creative in trying to help with, you know, human ingenuity.
There are different reactions to the rise of AI among artists.
Some people clearly seem to want no part of it.
On the other hand, we've seen musicians like Grimes saying, here, take my voice, make whatever you want. We'll figure out a way to share the royalties if any of your songs becomes a hit.
I'm curious, you know, if you're able to get the deals that you want, do you expect any controversy within the artist community, and artists saying, hey, why'd you sell my back catalog to this blender? I don't want to be part of that.
Yeah, look, I think artists are entitled to be different, and there are gonna be artists, I think, Kevin, you said earlier, you know artists who are so scared of this that they just do want to shut the whole thing down. They just don't want their music and their art touched, right? I know directors of movies who can't stand that the formatting is different for an airplane. Like, that's their baby, and they just don't want it touched. Then there are artists who are like, I'm fine being experimental. I'm fine having fans take it and change it and do something with it.
All of that is good.
They're the artist, right?
I mean, it's their art.
Our job is to invest in them, partner with them, help find a market for them.
But at the end of the day, if you're trying to find a market for an artist's work, and they don't want that work in the market, it's not going to work.
Have you listened to much AI-generated music?
Are there any songs you've heard that you thought that's actually kind of good?
Yeah.
So I think the sort of overdubbing, voice-and-likeness thing is a little bit better than some of the, you know, simple prompts on these AI generators like Udio and Suno.
But I heard Billie Eilish's voice on a Revivalists song, and I was like, wow, she should cover this song.
It was really great. It just kind of seemed like a perfect fit, and it's fun to play with those things. But again, in that case, I think Billie Eilish gets to decide if her voice is used on something. I think she gets to decide if she wants to do a cover. I don't think that it's up to, you know, whoever overdubbed it to be able to do that.
I did do a bunch of prompts, as you can imagine, on some of these services, trying to see, like, what happens if you just put in a few words, like a simple country song, and then what happens if you put in, like, 20 different descriptors. And what's amazing is, every 10 seconds you get a new song. So if you don't like it, just put in a few more words and it rejiggers the patterns. And you can start getting to a point where you're like, OK, the lyrics kind of suck, but it's not terrible. And, you know, we are only six months into the huge progression of this technology. If you had listened to, you know, a prompt where you were allowed to put in Jason Derulo or Mariah Carey six months ago versus now, you'd notice a marked improvement.
And that's one of the reasons why we needed to get out there now.
We needed to bring this suit.
We need the courts to settle this issue so that we can move forward on a thriving marketplace before the technology gets so good that it is a seismic threat to the industry.
Yeah, I've seen a lot of support for
this lawsuit among people,
you know,
I follow who are more inclined to side with artists and musicians,
but there have also been some tech industry folks who think this is all kind of,
you know, like the RIAA is just sort of anti-progress, anti-technology. I even saw one tech executive call you the ultimate decels, which in Silicon Valley is sort of like the biggest insult. Decels are people who want to basically stop technological progress, basically Luddites.
What do you make of that line of argument from the valley?
You know, this has been the same argument that the Valley has had since 1998.
To me, that's a 30-year-old argument.
If you look at the marketplace today, where Silicon Valley thrives is when rights are in place and they form partnerships, and then they grow into sophisticated global leaders, where they can tweak their deals, you know, every couple of years, and come up with new products that allow them to feed these devices that are nothing without the content on them. And, you know, there's always sort of this David versus Goliath thing, no matter what side you're on. But if you think about it, music is a $17 billion industry in the United States. I think one tech company's cash on hand is five times that, right? Not to mention their huge market caps, right? But they are completely dependent on the music that these geniuses create in order to thrive, and to say that these creators are stopping their progress, I think, is sort of laughable.
I think what's much more threatening is if you move fast and break things without partnerships. What are you threatening on the tech side, with the no-holds-barred, you know, culture-destroying, machine-led world?
So what happens next?
The lawsuits have been filed.
This tends to take a long time.
But what can we look forward to? Will there be sort of scandalous emails unearthed in discovery that you'll post to your website, or what can we look forward to here?
Well, moving forward in discovery, I think we'll be prohibited from posting anything that we learn.
Oh, man.
I know.
You think you're just...
If you want to just send them to hardfork@nytimes.com, that's fine.
I live for that stuff.
But we will, of course, follow the rules.
But, you know, we have filed in the districts where these companies reside. And I hope that, in a year or so, we will actually get to the meat of this.
Because if you think about it, the judge has to decide, when they raise fair use as a defense:
Is this fair use or not?
Right?
And that is something that, you know, has to be part of the beginning part of the lawsuit.
So we're hopeful that in a short time, we will actually get a decision, and that it sends the right message to investors and to new companies. But, you know, when I say a short time, in legal terms that means, you know, a year or two.
Like there's a right way and a wrong way to do this.
Doors are open for the right way.
I think there's a story here about startups that are sort of moving fast, breaking things, asking for forgiveness, not permission.
But I also think there's a story here that maybe we haven't talked about about restraint because I know that a lot of the big AI companies had
tools years ago that could generate music, but they did not release them.
I remember hearing a demo from someone who worked at one of the big AI companies maybe two years ago of one of these kinds of tools.
But I think they understood, they were scared, because they knew that the record industry is very organized, it has this kind of history of litigation, and they understood that they were likely to face lawsuits if they let this out into the public.
So, have you had discussions with the bigger AI companies, the more established ones
that are working on this stuff,
or are they just sort of intuiting correctly that they would have a lot of legal problems on their hands if they let this stuff out into the general
public?
You know,
you're raising a point that I don't think is discussed often enough, which is that there are companies out there that deserve credit for restraint.
And part of it is that they know that we would bring a lawsuit and in the past we haven't been shy and that's useful.
But part of it is also because these are their partners now.
There are real business relationships here and human relationships here between these companies.
And so I think they're moving towards a world where their natural instinct is to approach their partners and see if they can work with them.
You know, I know that YouTube did sort of its Dream Track experiment, approached artists, approached record companies. That was sort of like the precursor, or the beta, to whatever they might be discussing now for what's going to go on Shorts, which we talked about earlier.
And I'm sure that there are many others, but you're right.
Yes, there are going to be companies like Suno and Udio that just seek investment, want to make a profit, and steal stuff.
But there is restraint and constructive action by a lot of companies out there who do view the creators as their partners.
Yeah.
Well, it's a really interesting development and I look forward to following it as it progresses.
Yeah, let me know if you need any more information, and thanks so much for this discussion and conversation. I don't think I've been able to talk about this in more than a three-minute clip before, so if I sound like I'm a soundbite machine, that's why.
We're going to take this interview and put it through a blender and extrude something that sounds like you on the other side.
Alright, thanks so much Mitch, thanks for coming by.
Thanks, guys.
When we come back, we're going inside the Pentagon with Chris Kirchhoff, the author of Unit X.
Well, Casey, let's talk about war.
Let's talk about war.
What is it good for?
Some say absolutely nothing.
Others say the opposite.
So I've been wanting to talk about AI and technology and the military for a while in the show now,
because I think what's really flying under the radar of kind of the mainstream tech press these days is that there's just been a huge shift in Silicon Valley toward making
things for the military and the US military in particular.
Years ago,
it was the case that most of the big tech companies,
they were very reluctant to work with the military,
to sell things to the Department of Defense, to make products that could be used in war.
They had a lot of ethical and moral quandaries about that, and employees did too.
But we've really seen a shift over the past few years, with a wave of startups working in defense tech, making things that are designed to be sold to the military and to national security forces.
We've also just seen a big effort at the Pentagon to modernize their infrastructure,
to update their technology, to not get beat by other nations when it comes to having the latest and greatest weapons.
Chris Kirchhoff is the co-author, along with Raj Shah, of a book called Unit X. Chris is sort of a longtime defense tech guy.
He's been working in this area for a long time.
He was involved in a number of tech projects for the military.
He worked at the National Security Council during the Obama administration.
Fun fact: he was the highest-ranking openly gay advisor in the Department of Defense for years.
And importantly, he was a founding partner of something called the Defense Innovation Unit, or DIU.
It also goes by the name Unit X,
which is basically this little experimental division that was set up about a decade ago by the Department of Defense to try to basically bring the Pentagon's technology up to date.
And he and Raj Shah,
who was another founding partner of the DIU,
just wrote a book called Unit X that basically tells the story of how the Pentagon sort of realized that it had
a problem with technology and set out to fix it.
So I just thought we should bring in Chris to talk about some of the changes that
he has seen in the military when it comes to technology and in Silicon Valley when it comes to the military.
Let's do it.
Chris Kirchhoff, welcome to Hard Fork.
So I think people hear a lot about the military and technology and they kind of assume that
there are, like, very futuristic things happening inside the Pentagon that we'll hear about at some point in the future.
But a lot of what's in your book is actually about old technology and how underwhelming some of the military's technological prowess is.
Your book opens with an anecdote about your co-author actually using a Compaq personal digital assistant because it had better navigation tools than the navigation system on his $30 million jet.
That was sort of how you introduced the fact that the military is not quite as technologically sophisticated as many people might think.
So I'm curious, when you first started your work with the military, what was the state of the technology?
Well, it's really interesting.
I mean,
you go to the movies,
and we have all seen Mission Impossible and James Bond, and wouldn't it be wonderful if that actually were the reality behind the curtain?
But when you open up the curtain, you realize that actually in this country, there are two entirely different systems of technological production.
There's one for the military, and then there's one for everything else.
And we dramatize this in the image on the cover of our book, Unit X. We show an iPhone, and sitting on top of the iPhone is an F-35, the world's most advanced fighter jet, a fifth-generation stealth fighter known as a flying computer for its incredible sensor fusion and weapons suites.
But the thing about the F-35 is that its design was actually finalized in 2001, and it did not enter operations until 2016.
And a lot happened between 2001 and 2016,
including the invention of the iPhone, which by the way, has a faster processor in it than the F-35.
And if you think about the F35 over the subsequent years, there's been three technological upgrades to it.
And we're now at, what, almost the iPhone 16.
And once you understand that,
you understand why it was really important that the Pentagon thought about establishing a Silicon Valley office to start accessing this whole other
technology ecosystem that is faster and generally a lot less expensive than the firms that produce technology for the military.
Yeah.
I remember years ago,
I interviewed your former boss Ash Carter,
the former secretary of defense who died in 2022,
and I sort of expected that he'd want to talk about all the newfangled stuff that the Pentagon was making,
things like drones and stealth bombers,
but instead we ended up talking about procurement, which is basically how the government buys stuff, whether it's a fighter jet or an iPhone.
And I remember him telling me that procurement was just unbelievably complicated,
and it was a huge part of what made government and the military in particular so inefficient and kind of backwards technologically.
Describe how the military procures things, and then what you discovered about how to maybe short-circuit that process or make it more efficient.
You know, if you're looking to buy a nuclear aircraft carrier or a submarine, you can't really go on Amazon and price-shop for that.
I learned that the hard way, by the way.
And so in those circumstances,
when the government is representing the taxpayer and buying one large military system,
a multi-billion-dollar system from one vendor, it's really important that the taxpayer not be overcharged.
And the Pentagon has developed a really elaborate system of procurement to ensure that it can control how production happens, the cost of individual items.
And that works okay if you're in a situation where you have the government and one firm that makes one thing.
It doesn't make any sense for goods that multiple firms make or that are just available on the consumer market.
And so one of the challenges we had out here in Silicon Valley when we first set up the Defense Innovation Unit was trying to figure out how to work with startups and tech companies,
who, it turns out, weren't interested in working with the government.
And the reason why is that the government typically buys defense technology through something called the Federal Acquisition Regulation,
which is a little bit like the Old Testament.
It's this dictionary-size book of regulations, and letting a contract takes 18 to 24 months.
If you're a startup, your investors tell you not to go down that path for a couple of reasons.
One, you're not going to make enough money before your next valuation; you're going to have to wait too long; you're going to go out of business before the government actually closes the sale.
And two,
even if you get that first contract, it's totally possible another firm with better lobbyists is going to take it right back away from you.
So at Defense Innovation Unit, we had to figure out how to solve that paradox.
Part of what I found interesting about your book was just these accounts that you gave of
these clever loopholes that you and your team found around some of the bureaucratic slowness at the Pentagon.
And in particular, this loophole that one of your staffers found that allowed you to purchase technology much, much more quickly.
Tell that story, and maybe that'll help people understand the systems that you were up against.
It is an amazing story.
We knew when we arrived in Silicon Valley that we would fail unless we figured out a different way to contract with firms,
and in the first weeks of the office,
this 29-year-old staff member named Lauren Dailey,
the daughter actually of a tank commander, whose way of serving was to become a civilian in the Pentagon and work on acquisition,
happened to be up late at night, because she's a total acquisition nerd, reading the just-released National Defense Authorization Act,
which is another dictionary-sized compendium of law that comes out every year,
and she was flipping through it trying to find new provisions in law that might change how acquisition worked,
and sure enough in section 815 of the law,
she found a single sentence that she realized somebody had placed there that changed everything,
and that single sentence would allow a completely different kind of contracting mechanism,
called other transaction authorities,
that was actually first invented during the space race to allow NASA, during the Apollo era, to contract with mom-and-pop suppliers.
And so she realized that this provision would allow us not only to use OTAs to buy technology,
but, and this is the really important part, if it worked,
if it was successful in the pilot, we could immediately go buy it at scale, buy it in production.
We didn't have to recompete it.
There would be no pause, no 18-month pause between demonstrating your technology and having the department buy it.
And when Lauren brought this to our attention, we thought, oh boy, this really is something.
So, we flew Lauren to Washington.
We had her meet with the head of acquisition policy at the Department of Defense,
and in literally three weeks,
we changed 60 years of Pentagon policy to create a whole new way to buy technology that to this day has been used to purchase $70 billion of technology for the Department
of Defense.
You just said that the reason some Silicon Valley tech companies didn't want to work with the military is because of this sort of arcane and complicated procurement process,
but there are also real moral objections among a lot of tech companies and tech workers.
In 2018,
Google employees famously objected to something called Project Maven,
which was a project the company had planned with the Pentagon that would have used its AI image-recognition software to improve weapons and things like that.
And there have been just a lot of objections over the years from Silicon Valley to working with the military, to becoming defense contractors.
Why do you think that was, and do you think that's changed at all?
It's completely understandable.
So few Americans serve in uniform.
Most of us don't actually know somebody who's in the military.
It's really easy here in Silicon Valley, where the weather's great.
Sure, you read headlines in the news, but the military is not something that you encounter in your daily life.
You join a tech company to make the world better, to develop products that are going to help people.
You don't join a tech company assuming that you're going to make the world a more lethal place.
But at the same time,
Project Maven was actually something that I got a chance to work on at the Defense Innovation Unit, along with a whole group of people.
And tell us about...
Remind us what Project Maven was.
So Project Maven was an attempt to use artificial intelligence and machine learning to take a whole bunch of footage,
surveillance footage,
that was being captured in places like Iraq and Afghanistan,
and other military missions, and to use machine learning to label what was found in this footage.
So it was a tool to essentially automate work that otherwise would have taken human analysts hundreds of hours to do,
and it was used primarily for intelligence and reconnaissance and force protection.
So Project Maven, this is no...
But when you talk about military systems, there's really a lot of unpacking you have to do.
The headline that got Project Maven in trouble said, you know, Google working on secret drone project.
And it made it look as if Google was partnering with the Defense Innovation Unit and the Department of Defense to build offensive weapons to support the U.S.
drone campaign.
And that's not at all what was happening.
What was happening is Google was building tools that would help our analysts process the incredible
amount of data flowing off many different observation platforms in the military.
Right.
But employees objected to this.
They made a big case that Google should not participate in Project Maven, and eventually the company pulled out of the project.
But speaking of Project Maven,
I was curious because there was some reporting from Bloomberg this year that showed that the military has actually used Project Maven's technology as
recently as February to identify targets for airstrikes in the Middle East.
So isn't that exactly what the Google employees were worried about with Project Maven back when you were working on it at the Defense Department?
Isn't that exactly what they were scared would happen?
Well, Project Maven, when Google was involved, was very much a pilot R&D project, and it
has since transitioned, actually, into much more of an operational phase, and it's being used in a number of places.
In fact, it's actually being used in Ukraine, as well, to help the U.S.
identify targets in Ukraine.
And this,
again, speaks to, I think, a sea change in Silicon Valley since that original protest of 3,000 Google employees over Project Maven. The world has changed a lot.
You know, we have a land war going on in Europe, on the border of NATO, and in fact that war, the Ukraine conflict, has mobilized a lot of people in Silicon Valley to want to try and help support Ukraine's quest to defend its territory.
And so I think we're in a very different time and moment right now as people watching the
news realize that our security is actually quite a bit more fragile than we might have first imagined.
I mean,
I think one reaction that our listeners may have to this is they are very concerned about the use of AI and other technologies by the military.
And I also hear from a lot of people at the tech companies who are really concerned about some of the use cases.
I remember, during the Project Maven controversy,
talking with people at Google who were part of the protest movement, and some of the things that they would say to me were like,
well, if I wanted to work for a defense contractor, I would have gone to work for Lockheed Martin or Raytheon.
I'm curious, like, what moral argument would you make to someone who maybe says, look, I did not sign up
to make weapons of war.
I'm an AI engineer.
I work on large language models, or I work on image recognition stuff.
What do you tell that person if you're working at the DIU trying to persuade them that it's okay to sell or license that technology to the Pentagon?
I think you tell them that we're at an extraordinary moment in the history of war, where everything is changing. And I'll just give you a couple of data points. A few weeks ago,
the United States asked the Ukrainian military to pull back from the front lines all 31 of the M1A1 Abrams tanks that we had deployed to Ukraine to allow their military to better repel the Russian invasion.
These are the most advanced tanks, not only in our inventory, but in the inventory of any one of our allies.
And they were getting whacked by $2,000 Russian kamikaze drones. $2,000 drones killing tanks.
What does that tell me?
That tells me that a century of mechanized warfare that began in the First World War is over.
And if you're building an army that's full of tanks,
you are now the emperor with no clothes. And I'll give you a couple other data points.
Hamas kicked off the largest ground war in the Middle East since the 1973 Arab-Israeli war with its attack
on Israel on the 7th of October, threatening to destabilize the Middle East into a wider war.
How did they do it?
They did it by taking quadcopters and using them to drop grenades on the generators powering the Israeli border towers.
That's what allowed the fighters to pour over the border. Another data point:
Houthi rebels in Yemen right now are holding hostage 12%
of global shipping in the Red Sea, because they're using autonomous sea drones, missiles, and loitering munitions to harass shipping. And so
we're at this moment where the arsenal of democracy that we have, these incredibly powerful militaries
full of things like aircraft carriers and tanks, is wielding weapons that are no longer as effective as they were 10 years ago.
And if our military doesn't catch up to our adversaries quickly, we may be in a situation
where we don't have the advantage we once did, and we have to think very differently about our security if that's the case.
It sounds like you're saying that the way to stop a bad guy with an AI drone is a good guy with an AI drone.
Am I hearing you right that you're saying that we have to have such overwhelmingly powerful
lethal technology in our military that other countries won't mess with us?
I totally hear you, and frankly I hear all the people who years ago were affiliated with the Stop Killer Robots movement.
These weapons are awful things.
They do awful things.
But, you know, at the same time, there's a deep literature on something called
strategic stability that comes out of the Cold War,
and, you know, part of that literature focuses on the proliferation of nuclear weapons, and the fact that the proliferation of
nuclear weapons has actually reduced great-power conflict in the world, because nobody actually wants to get in a nuclear exchange.
Now, would it be a good idea for everybody in the world to have their own nuclear weapon?
Probably not, so all these things have limits.
But that's an illustration of how strategic stability, in other words, a balance of power, can actually reduce the chance of conflict in the first place.
Yeah.
I'm curious what you make of the Stop Killer Robots movement.
There was a petition or an open letter that went around years ago that was signed by a bunch of leaders in
AI, including Elon Musk and Demis Hassabis of Google DeepMind; they all pledged not to develop autonomous weapons.
Do you think that was a good pledge or do you support autonomous weapons?
I mean, I think autonomous weapons are now kind of a reality in the world.
I mean, we're seeing this on the front lines of Ukraine.
And if you're not willing to fight with autonomous weapons, then you're going to lose.
So there's this former OpenAI employee, Leopold Aschenbrenner, who recently released a long manifesto called Situational Awareness.
And one of the predictions that he makes is that by about 2027,
the US government would recognize that superintelligent AI was such a threat to the world order that AGI,
sort of artificial general intelligence, would become functionally a project of the national security state, something like an AGI Manhattan Project.
There's other speculation out there that maybe at some point the government will have to nationalize an OpenAI or an Anthropic.
Are you hearing any of these whispers yet?
Like people starting to game this out at all?
Well, I confess I haven't made it all the way through the 155 pages of that long manifesto.
It is very long.
You could summarize it with ChatGPT, though.
Fantastic.
But these are important things to think about, because it could be that in certain kinds of conflicts, whoever has the best AI wins.
And if that's the case,
and if AI is getting exponentially more powerful,
then to take things back to the iPhone and the F-35,
it's going to be really important that you have the kind of AI of the iPhone variety.
You want the AI that's new every year.
You don't want the F-35 with the processor that was baked in in 2001, and that's only taking off from a runway in 2016.
So I do think it's very important for folks to be focused on AI.
Where it all goes, though, is a lot of speculation.
If you had to bet, in 10 years, do you think that the AI companies will still be private, or do you think
the government will have stepped in and gotten way more
interested in maybe taking one of them over?
Well, I'd make the observation that we all watched Oppenheimer, especially employees at AI firms.
They seem to love that film.
And nuclear technology is what national security strategists would call a point technology.
It's sort of zero to one; either you have it or you don't.
And AI is not going to end up being a point technology.
It's a very broadly diffuse technology that's going to be applied not only in weapons systems, but in institutions.
It's going to be broadly diffused around the economy.
And for that reason,
I think it's unlikely, or less likely anyway, that we're going to end up in a situation where somebody has the bomb and somebody doesn't.
I think the gradations are going to be smoother.
Part of what we've seen in other industries as technology moves in and modernizes things is that often things become cheaper.
It's cheaper to do things using the latest technology than it is to do them using outdated technology.
Do you think some of the work that you've done at DIU trying to modernize how the Pentagon works is going to result in smaller defense
budgets being necessary going forward?
Could the two trillion dollars that the DOD has budgeted for this year
be one trillion or half a trillion in the coming years because of some of these modernizations?
You're giving us a raise, Kevin.
I think it's more like 800 billion.
Well, I'm sorry, I got that answer from Google's AI overview, which also told me to eat rocks and put glue on my pizza.
I'm sure the Secretary would love that answer, if he had that large of a budget.
Well, it's certainly true that for a lot less money now,
you can have a really destructive effect on the world, as drone pilots in Ukraine and elsewhere in the world are showing.
I think it's also true that the US military has a whole bunch of legacy weapons
and systems that unfortunately are kind of like museum relics, right?
I mean, if our most advanced tank can be destroyed by a drone, it might be time to retire our tank fleet.
If our aircraft carriers cannot be defended against a hypersonic missile attack,
it's probably not a good idea to sail one of our aircraft carriers anywhere near an advanced adversary.
So I think it is an opportune moment to really look at what we are spending our money on at the Defense Department, and to remember the goal of our nation's
founders, which is to spend what we need to on defense and not a penny more.
So I hear you saying that it's very important for the military to be
prepared technologically for the world we're in and that means working with Silicon Valley,
but is there anything more specific that you want to share that you think either side needs to be doing here, or something specific that you want
to see out of that collaboration?
Well, I think one of the main goals of the Defense Innovation Unit was literally to get the two groups talking.
Before Defense Innovation Unit was founded, a secretary of defense hadn't been to Silicon Valley in 20 years.
That's almost a generation.
So Silicon Valley invents the mobile phone, it invents cloud computing, it invents AI, and nobody from the DOD
bothers to even come and visit.
And that's a problem.
And just bringing the two sides into conversation is itself, I think, a great achievement.
Well, Chris, thank you so much for joining us.
Thank you, Chris.
When we come back, we'll play another round of HatGPT.
All right, Kevin, well, it's time once again for HatGPT.
This, of course, is our favorite game.
It's where we draw new stories from the week out of a hat,
and we talk about them until one of us gets sick of hearing the other one talk and says, stop generating.
That's right.
Now, normally we pull slips of paper out of a hat, but due to our remote setup today,
I will instead be pulling virtual slips of paper out of a laptop.
But for those of you following along on YouTube,
you will still see that I do have one of the HatGPT hats here,
and I will be using it for comic effect throughout the segment.
Will you put it on, actually?
Because if we don't need it to draw slips out of, you might as well be wearing it.
I might as well be wearing it.
Yeah, it looks so good.
Thank you so much, and thank you once again to the listener who made this for us.
You're a true fan.
So good.
Perfect.
All right, Kevin, let me draw the first slip out of the laptop.
Ilya Sutskever has a new plan for safe superintelligence.
Ilya Sutskever is, of course, the OpenAI co-founder who was part of the coup against Sam Altman last year,
and Bloomberg reports that he is now introducing his next project,
a venture called Safe Super Intelligence,
which aims to create a safe,
powerful artificial intelligence system within a pure research organization that has no near-term intention of selling AI products or services.
Well, it's very interesting on a number of levels, right?
In a sense,
this is kind of a mirror image of what happened several years ago, when a bunch of safety-minded people left OpenAI after disagreeing with Sam Altman and
started an AI safety-focused research company.
That was Anthropic.
And so the newest twist in this whole saga is that Ilya Sutskever,
who was,
you know,
very concerned about safety and how to make superintelligence that was smarter than humans,
but also not evil and not going to destroy us, has done, you know, something very similar.
But I have to say, I don't quite get it.
I mean, he's not saying much about the project.
But part of the reason that these companies sell these AI products and services is to get the money to buy all the expensive equipment that you need to train these giant models.
And I just don't know,
like, if you don't have any intention of selling this stuff before it becomes AGI, how are you paying for the AGI?
Do you have a sense of that?
No, I don't.
I mean, Daniel Gross, who is one of Ilya's co-founders here, has basically said, don't worry about fundraising.
Like, we are going to be able to fundraise as much as we need for this.
So I guess we will see.
But yeah,
it does feel a bit strange to have someone like Ilya saying he's going to build this totally without commercial motive,
in part because he has said it before, right?
Like, this is what is so funny about this.
It truly just is a case where the circle of life keeps repeating: a small band of people get together and they say, we want to build a very
powerful AI system and we're going to do it very safely, and then, bit by
bit, they realize, well, actually, we don't think it's being built safely.
We're going to form a breakaway faction.
So if you're playing along at home, I believe this is the second breakaway faction to break away from OpenAI, after Anthropic.
And I look forward to Ilya quitting this company eventually to start a newer, even more safe company somewhere else.
The really, really safe superintelligence.
Yeah, at his next company, you've never seen safety like this: they wear helmets everywhere in the office, and they just have keyboards.
Yeah. All right, stop generating. All right, pick one out of the hat, Kevin.
All right, five men convicted of operating JetFlix, one of the largest illegal streaming sites in the US.
This is from Variety.
JetFlix was a sort of pirated streaming service that charged $9.99 a month while claiming to host more than 183,000 TV episodes,
which is more than the combined catalogs of Netflix, Hulu, Voodoo, and Amazon Prime video.
Ooh, that sounds great.
I'm gonna open an account.
Yeah.
So, the Justice Department says this was all illegal and the five men who were charged
with operating it were convicted by a federal jury in Las Vegas.
According to the court documents and the evidence that was presented at the trial,
this group of five men were basically pulling episodes of TV from piracy services and then hosting them on their own service.
It does not appear to have been a particularly sophisticated scam.
It's, what if we did this for a while and charged people money and then got caught?
Well, I think this is very sad because here finally you have some people who are willing
to stand up and fight inflation and what does the government do?
They come in and they say, knock it off.
I will say, though, Kevin, I think I can actually point to the mistake that these guys made.
What's that?
So instead of scraping these 183,000 TV episodes and selling them for $9.99 a month,
what they should have done was feed them all into a large language model, and then you can sell them to people for $20 a month.
So next time when these guys get out of prison,
I hope they get in touch with me because I have a new business idea for them.
All right, stop generating.
All right.
Here's a story called 260 McNuggets? McDonald's Ends AI Drive-Through Tests Amid Errors.
This is from the New York Times.
After a number of embarrassing videos showing customers fighting with its AI-powered drive-through technology,
McDonald's announced it was ending its three-year partnership with IBM.
In one TikTok video, friends repeatedly tell the AI assistant to stop as it adds hundreds of Chicken McNuggets to their order.
Other videos show the drive-through technology adding nine iced teas to an order, refusing
to add a Mountain Dew, and adding unrequested bacon to ice cream.
Kevin, what the heck is going on at McDonald's?
Well, as a fan of bacon ice cream, I should say, I want to get to one of these McDonald's
before they take this thing down.
Me too.
Did you see any of these videos?
I haven't.
No, but we should watch one of them together.
The caption is, the McDonald's robot is wild.
and it shows the screen at the drive-through where it is just tallying up McNuggets and starts charging them more than $200.
Here's my question.
Why is everyone just rushing to assume that the AI is wrong here?
Maybe the AI knows what these gals need.
Because Kevin, here's the thing.
When superintelligence arrives, we're going to think that we're smarter than it.
But it's going to be smarter,
you know, so there's going to be a period of adjustment as we sort of, you know, get used to having our new AI masters.
Have you been to a drive-through that used AI to take your order yet?
No, I mean, I don't even really understand what was the AI here?
Was this like an Alexa thing where I said, you know, McDonald's add 10 McNuggets or like, what was actually happening?
No, so this was a partnership that McDonald's struck with IBM.
And, basically, this was like technology that went inside the little menu things that have the microphone and the speaker in them.
And instead of having a human say, what would you like, it would just say, what would you like?
And then you'd say it, and it would recognize it and put it into the system.
So it could sort of eliminate that part of the labor of the drive-through.
Got it.
Well, look, I, for one, am very glad this happened, because for so long
now, I've wondered, what does IBM do?
And I have no idea.
And now if it ever comes up again, I'll say, oh, that's the company that made the McDonald's stop working.
We should say it's not just McDonald's; a lot of other companies are starting to use this technology.
I actually think this is probably, you know, inevitable.
This will get better.
They will iron out some of the kinks.
But I think there will probably still need to be a human in the loop on this one.
All right, stop generating.
OK, well, Kevin, let's talk about what happened when 20 comedians got AI to write their routines.
This is in the MIT Technology Review.
Google researchers found that although popular AI models from OpenAI and Google were effective at simple
tasks, like structuring a monologue or producing a rough first draft, they struggled to produce
material that was original,
stimulating, or, crucially, funny. And I'd like to read you an example LLM joke, Kevin.
Please.
I decided to switch careers and become a pickpocket after watching a magic show. Little did I know the only thing disappearing would be my reputation.
Wocka wocka wocka. Hey, I got a laugh out of you, Kevin. What are you making of this? Are you surprised?
No, but this is interesting.
Like, this has been something that, you know, critics of large language models have been saying for years.
It's like, well, it can't tell a joke.
And, you know, I should say I've had funny experiences with large language models, but never after like asking them to tell me a joke.
Yeah.
Like, remember when I said to Sydney, take my wife, please.
I get no respect, I tell you.
No, but this is interesting, because this was a study that was
actually done by researchers at Google DeepMind, and basically they had a group of comedians try writing some jokes with their language model.
And in the abstract,
it says that most of the participants in this study felt that the large language models did not succeed as a creativity support tool by producing bland and biased comedy
tropes, which they describe in this paper as being akin to cruise ship comedy material from the 1950s, but a bit less racist.
So these comedians were not impressed by these language models' ability to tell jokes.
You're an amateur comedian. Have you ever used AI to come up with jokes?
No, I haven't.
And I have to say,
like, I think I understand the technological reason why these things aren't funny, Kevin, which is that comedy is very up to the minute, right?
For something to be funny,
it's typically something that is on the edge of what is currently thought to be socially acceptable, or what is surprising
within a social context, and that changes all the time. And these models, they are trained
on decades and decades and decades of text, and they just don't have any way of figuring
out what would be a really fresh thing to say. So maybe they'll get there eventually, but as they're built right now,
I'm truly not surprised that they're not funny.
All right, stop generating.
All right, next one: Waymo ditches the waitlist and opens up its robotaxis to everyone in San Francisco.
This is from the Verge.
Since 2022, Waymo has made rides in its robotaxi service available only to people who were approved off of a waitlist.
But as of this week, they're opening it up to anyone who wants to ride in San Francisco.
Casey, what do you make of this?
Well, I am excited that more people are going to get to try this.
This has, as you've noted, Kevin, become kind of the newest tourist attraction in San
Francisco: when you come here, you see if you can find somebody to give you a ride in one of these self-driving cars.
And now everyone is just going to be able to come here and download the app and use it immediately.
I have to say, I am scared about what this is going to mean for the wait times on Waymo.
I've been taking Waymo more lately, and it often will
take 12 or 15 or 20 minutes to get a car, and now everyone can download the app.
I'm not expecting those wait times to go down.
Yeah, I hope they are also simultaneously adding more cars to the Waymo network because this is going to be very popular.
You're saying they need cars.
They do.
I'm worried about the wait times,
but I'm also worried about the condition of these cars because I've noticed in my last few rides, they're a little dirtier.
Oh, wait, really?
Yeah, I mean, they're still pretty clean, but, you know, I did see like a takeout container.
Oh, really?
The other day.
Oh my God.
And so I just want to know how they plan to keep these things from becoming filled with people's crap.
All right, stop generating.
All right, last one.
This one comes from the Verge: TikTok's AI tool accidentally let you put Hitler's words in a paid actor's mouth.
TikTok mistakenly posted a link to an internal version of an AI digital avatar tool that apparently had no guardrails in place.
This was a tool that was supposed to let businesses generate ads using AI, with paid actors and
an AI voice-dubbing thing that would make the actors repeat whatever you wanted them to say, endorse your product or whatever.
But very quickly,
people found out that you could use this tool to repeat excerpts of Mein Kampf or
bin Laden's letter to America, or to tell people to drink bleach and vote on the wrong day.
And that was its recipe for a happy Pride celebration.
Listen, obviously this is a very sort of silly story.
It sounds like everything involved here was a mistake.
And I think if you're making some sort of digital AI tool that is meant to generate ads,
you do want to put safeguards around that because otherwise people will exploit it.
That said,
Kevin, I do think people need to start getting comfortable with the fact that people are just going to be using these AI creation tools to do a bunch of kooky and crazy
stuff.
Like what?
Like, you know, in the same way that people use Photoshop to make nude or offensive images, and we don't storm the gates of Adobe saying shut down Photoshop, the same thing is going to happen with these digital AI tools.
And while I do think that there are some notable differences, and it sort of, you know, varies on a case-by-case basis, and if you're making a tool for creating ads, it feels different, there are just going to be a lot of digital tools like this that use AI to make stuff, and other people are going to use them to make offensive stuff.
And when they do, we should hold the people accountable.
Yeah, I agree with that, and I also think, like, this sort of product is not super worrisome to me.
It obviously should not be reading excerpts from Mein Kampf.
They obviously did not mean to release this.
I assume that when they do, you know, fix it, it will be much better, but this is not, like, a thing that is creating deepfakes of people without their consent.
This is a thing where if you have a brand,
you can choose from a variety of stock avatars that are created from people who actually get paid to have their likenesses used commercially.
So the specific details of this one don't bother me much, but it does open up some new licensing opportunities for us.
Like, we could have an AI set of avatars that could be out there advertising crypto tokens or whatever.
And I, for one, am excited to see how people use that.
Oh, man.
Well, and if TikTok weren't banned, we could probably make a lot of money that way.
But, you know, we're out of luck.
Yeah, get it while it's good.
Alright, close up the hat!
Do you know, the one time we recorded an episode, we did record it concurrently with Ezra Klein recording an episode.
Yeah, I tried to make eye contact with him for the entire time we recorded the episode. Not one time did that man look out the window.
He's that focused.
He's that dialed in.
He's that in the zone.
He cannot be distracted. You really were, you were, like, waving your hands.
Yes. That man is on permanent do-not-disturb mode, and he will not be flapped.