HUGE LM Studio Update | Multi-Models with AutoGen ALL Local
Hey and welcome back to day 27 of 31.
We're almost done, but today I have a wonderful update that came from LM Studio.
I talked about it briefly in yesterday's video, but what I'm talking about is multi-model sessions in LM Studio with AutoGen.
No, I didn't misspeak; I don't mean multimodal, I mean multiple models.
Now, what that means is that with a recent big LM Studio update, we can have more than one model running on one server.
All we need to do is adjust the model property in the config list,
and we can have multiple config lists, each with a different model loaded from LM Studio.
Let's take a look and see what I mean.
Okay, I'll put a video in the description where I talk about LM Studio: how to download and install it, get it running, and find your way around it.
But in the latest update, we have the multi model sessions where you can load and prompt multiple local models simultaneously.
So how does that work?
Well, once you download, install, and run LM Studio, you'll be greeted with this screen.
And the first thing you do is actually download at least two different models.
So on the home page here, you can just look at some of these, like Phi-2 or Qwen.
You can come down here to get Zephyr; just go ahead and download a couple of these.
And then there is a new tab on the left-hand side here called Playground.
So you'll click Playground, you'll see that they have a multi-model session here, and you just click go.
And now what we can do is load a couple of models.
So up here, it says select models to load.
Choose this; I'm going to choose Phi-2 first. Once that one's done, choose another one;
I'm going to choose StableLM Zephyr because it's also a smaller model. Okay,
and once you have that done, just come over here on the left-hand side, click Start Server, and we're up and running.
Now, let's create an AutoGen file where we can see how we have two different models from one LM Studio instance running at the same time
with the agents.
So what we'll have is two different LLM configs.
I'm going to name the first one zephyr and the second one phi2.
Now, how we distinguish them is in our config list: we have a model, a base URL, and an API key.
With the new AutoGen update, we can set the API key to "lm-studio" so it recognizes LM Studio.
Then we have the base URL.
This is the same; it has always been the same for LM Studio.
Now we have a model name. Really, this is the model identifier from the model that we've loaded, right?
I'll show you where to find this in a minute.
Whichever assistant agent has the Zephyr LLM config in its definition is going to use the Zephyr model. Now for Phi-2, another LLM config:
we have the config list, and then the model, which is the identifier for the Phi-2 model. Then we have the base URL,
which is the same, and the API key,
which is also the same. Also, I might not have mentioned this before, but for the cache seed, you can set this to None,
meaning it will never cache your results, so every time you run this the output can be different.
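The two config lists described above can be sketched as plain Python dicts. This is a minimal sketch, not the video's exact file: the base URL assumes LM Studio's default local server address, "lm-studio" is a placeholder API key, and the model identifiers are examples; copy the real API model identifiers from LM Studio for the models you actually load.

```python
# Two LLM configs pointing at the same local LM Studio server.
# Model identifiers below are assumed examples; use the "API model
# identifier" shown in LM Studio for each model you load.

LM_STUDIO_URL = "http://localhost:1234/v1"  # LM Studio's default local server

zephyr_config = {
    "config_list": [
        {
            "model": "stablelm-zephyr-3b",  # identifier for the Zephyr model (assumed)
            "base_url": LM_STUDIO_URL,
            "api_key": "lm-studio",  # placeholder so AutoGen recognizes LM Studio
        }
    ],
    # None disables caching, so every run can produce a different reply.
    "cache_seed": None,
}

phi2_config = {
    "config_list": [
        {
            "model": "phi-2",  # identifier for the Phi-2 model (assumed)
            "base_url": LM_STUDIO_URL,
            "api_key": "lm-studio",
        }
    ],
    "cache_seed": None,
}
```

Both configs share the same base URL and API key; only the model identifier differs, which is how the single server tells the two models apart.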
I have two agents: one named Phil, who's going to be using the Phi-2 model, and one named Zeph,
who's going to be using the Zephyr model. Then I just have Phil initiate a chat with Zeph, saying "Tell me a joke."
That's it; really simple.
But this is going to show you how to have multiple models working together in one LM Studio instance. Now, as I said,
we have to get this identifier.
Let's go over to LM Studio and I'll show you where to get that. So back in LM Studio, if you click, for instance,
Phi-2 over here, there's an API model identifier.
You can just copy this, and the same thing for Zephyr:
look at its API model identifier and copy that as well.
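Putting those pieces together, here is one way the whole AutoGen file could look. This is a sketch under stated assumptions: it assumes the `pyautogen` package is installed, the LM Studio server is running with both models loaded, and the model identifiers match what LM Studio reports; the agent names and `max_turns` cutoff are illustrative choices.

```python
import autogen

# Shared connection details for the single local LM Studio server.
base = {"base_url": "http://localhost:1234/v1", "api_key": "lm-studio"}

phi2_config = {
    "config_list": [{"model": "phi-2", **base}],  # identifier assumed
    "cache_seed": None,  # never cache; each run can differ
}
zephyr_config = {
    "config_list": [{"model": "stablelm-zephyr-3b", **base}],  # identifier assumed
    "cache_seed": None,
}

# Each agent carries its own llm_config, so each one talks to a different
# model even though both requests go through the same LM Studio server.
phil = autogen.ConversableAgent(
    "Phil",
    llm_config=phi2_config,
    human_input_mode="NEVER",
)
zeph = autogen.ConversableAgent(
    "Zeph",
    llm_config=zephyr_config,
    human_input_mode="NEVER",
)

# Phil kicks off the conversation; max_turns keeps the exchange short.
phil.initiate_chat(zeph, message="Tell me a joke.", max_turns=2)
```

Because the model field is the only thing that differs between the two configs, swapping in any other pair of downloaded models is just a matter of changing those identifiers.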
All right, after I ran it, as we can see here, these are the server logs for both of the models;
you can see all the tokens for the responses coming back. So it worked, right?
You can always go back to LM Studio and see how the interaction went.
Okay, now if we go back to our IDE
and look at what happened here, we see Phil started talking to Zeph: tell me a joke.
Why did the tomato turn red?
Because it saw the salad dressing.
Ha ha. Great. Awesome.
So what happened here?
So again, let's review what just happened.
We had two separate models running in one LM Studio instance.
It was open source.
It was free.
We didn't have to worry about OpenAI's API key, and they could talk to each other.
I think this was a huge update, and I think it's really going to help out.
If you haven't tried it yet, I recommend downloading it and just trying it.
It's free.
You know, they don't store any of your information, and you can use all open-source, local LLMs.
If you have any comments or anything you want to chat about, leave them down in the comments below.
Thank you for watching, and I'll see you next time.