HUGE LM Studio Update | Multi-Models with AutoGen ALL Local

Hey and welcome back to day 27 of 31.
We're almost done, but today I have a wonderful update that came from LM Studio.
I talked about it briefly in yesterday's video, but what I'm talking about is multiple models in LM Studio with AutoGen.
No, I didn't misspeak: I mean multi-model, not multi-modal.
In a recent big update to LM Studio, we can now have more than one model running on one server.
And all we need to do is adjust the model property in the config list,
and we can have multiple config lists, each with a different model loaded from LM Studio.
Let's take a look and see what I mean.
Okay, I'll put a video in the description where I talk about LM Studio: how to download it, install it, get it running, and find your way around it.
But in the latest update, we have multi-model sessions, where you can load and prompt multiple local models simultaneously.
So how does that work?
Well, once you download, install, and run LM Studio, you'll be greeted with this screen.
And the first thing you do is actually download at least two different models.
So on the home page here, you can just look at some of these, like Phi-2 or Qwen.
You can come down here to grab Zephyr, and just go ahead and download a couple of these.
And then there is a new tab on the left-hand side here called Playground.
So you'll click Playground.
You'll see that they have a multi model session here.
Just click go.
And now what we can do is load a couple of models.
So up here, it says select models to load.
Choose this; I'm going to choose Phi-2 first. Once that one's done, choose another one.
I'm going to choose StableLM Zephyr, because it's also a smaller model.
Okay, and once you have that done, just come over here on the left-hand side and click start server, and we're up and running now.
Let's create an AutoGen file where we can see how we have two different models from one LM Studio instance running at the same time
with the agents.
So what we'll have is two different LLM configs.
I'm going to name the first one Zephyr and the second one Phi-2.
Now, how we distinguish them is in our config list: we have a model, a base URL, and an API key.
With the new AutoGen update, we can set the API key to lm-studio so it recognizes LM Studio.
Then we have a base URL.
This is the same; it has always been the same for LM Studio.
Now we have a model name. Really, this is the model identifier from the model that we've loaded, right?
So I'll show you where to find this in a minute.
Whichever assistant agent has Zephyr in its LLM config definition is going to use the Zephyr model.
Now for Phi-2, there's another LLM config.
We have the config list, and then the model; this is the identifier for the Phi-2 model.
Then we have the base URL, which is the same, and the API key, which is also the same.
Also, I might not have mentioned this before, but you can set the cache seed to None,
meaning it will never cache your results, and every time you run this, the output will always be different.
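The two configs described above can be sketched like this. This is a minimal sketch, assuming LM Studio's default local endpoint on port 1234; the model identifier strings here are placeholders, so copy the real "API model identifier" values from your own LM Studio.

```python
# Two separate LLM configs pointing at the same local LM Studio server.
# The model identifiers below are assumptions; use the "API model
# identifier" shown in LM Studio for each model you loaded.
LM_STUDIO_URL = "http://localhost:1234/v1"  # LM Studio's default local endpoint

zephyr_config = {
    "config_list": [
        {
            "model": "stablelm-zephyr-3b",  # assumed identifier, copy yours
            "base_url": LM_STUDIO_URL,
            "api_key": "lm-studio",  # tells AutoGen this is LM Studio
        }
    ],
    "cache_seed": None,  # never cache results; every run is fresh
}

phi2_config = {
    "config_list": [
        {
            "model": "phi-2",  # assumed identifier, copy yours
            "base_url": LM_STUDIO_URL,
            "api_key": "lm-studio",
        }
    ],
    "cache_seed": None,
}
```

Both configs share the same base URL and API key; only the model identifier changes, which is what lets one server serve two different agents.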
I have two agents: one named Phil, who's going to be using the Phi-2 model, and one named Zeph,
who's going to be using the Zephyr model. Then I just have Phil initiate a chat with Zeph, saying tell me a joke.
That's it. Really simple.
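The agent setup above can be sketched like this. This is an assumption-laden sketch, not the exact file from the video: it assumes pyautogen is installed, the LM Studio server is running with both models loaded, and the model identifier strings are placeholders for the real ones from LM Studio. It needs that live server, so it won't run standalone.

```python
# Sketch of two AutoGen agents, each backed by a different local model
# served from one LM Studio instance. Model identifiers are assumptions.
import autogen

LM_STUDIO_URL = "http://localhost:1234/v1"  # LM Studio's default endpoint

phi2_config = {
    "config_list": [{"model": "phi-2",  # assumed identifier
                     "base_url": LM_STUDIO_URL,
                     "api_key": "lm-studio"}],
    "cache_seed": None,  # don't cache; fresh output every run
}
zephyr_config = {
    "config_list": [{"model": "stablelm-zephyr-3b",  # assumed identifier
                     "base_url": LM_STUDIO_URL,
                     "api_key": "lm-studio"}],
    "cache_seed": None,
}

# Phil runs on Phi-2, Zeph runs on Zephyr: two models, one server.
phil = autogen.AssistantAgent(name="Phil", llm_config=phi2_config)
zeph = autogen.AssistantAgent(name="Zeph", llm_config=zephyr_config)

# Phil kicks off the conversation; max_turns (recent pyautogen versions)
# caps the back-and-forth so the two assistants don't chat forever.
phil.initiate_chat(zeph, message="Tell me a joke.", max_turns=2)
```

The only thing distinguishing the two agents is which config they receive; everything else is stock AutoGen.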
But this is going to show you how to have multiple models working together in one LM Studio instance. Now, as I said,
we had to get this identifier here.
Let's go over to LM Studio and I'll show you where to get that. So, back in LM Studio, if you click, for instance,
on Phi-2 over here, there's an API model identifier.
You can just copy this, and the same thing for Zephyr:
you can just look at the API model identifier and copy that as well.
All right, after I ran it, as we can see here, these are the server logs for both models, and it worked. Right here you can see all the tokens for the responses coming back. So it worked, right?
So you can just go back to LM Studio and see how the interaction went.
Okay, now if we go back to our IDE
and look at what happened here, we see Phil started talking to Zeph: tell me a joke.
Why did the tomato turn red?
Because it saw the salad dressing.
Ha ha. Great.
Okay.
Awesome.
What happened here?
Okay.
So again, let's review what just happened.
We had two separate models working in one running LM Studio instance.
It was open source.
It was free.
We didn't have to worry about OpenAI API keys, and the models could talk to each other.
I think this was a huge update, and I think this is really going to help out.
If you haven't tried it yet, I recommend downloading it and just trying it.
It's free.
You know, they don't store any of your information, and you can use all open-source local LLMs.
If you have any comments or anything you want to chat about, leave them down in the comments below.
Thank you for watching, and I'll see you in the next one.