HUGE LM Studio Update | Multi-Models with AutoGen ALL Local

Hey and welcome back to day 27 of 31.
We're almost done, but today I have a wonderful update that came from LM Studio.
I talked about it briefly in yesterday's video, but what I'm talking about is multi-model sessions in LM Studio with AutoGen.
No, I didn't mispronounce that, and I don't mean multi-modal; I really do mean multi-model.
What that means is that with a recent big update to LM Studio, we can now have more than one model running on one server.
All we need to do is adjust the model property in the config list,
and we can have multiple config lists, each with a different model loaded from LM Studio.
Let's take a look and see what I mean.
Okay, I'll put a video in the description where I talk about LM Studio: how to download and install it, get it running, and find your way around it.
But in the latest update, we have multi-model sessions, where you can load and prompt multiple local models simultaneously.
So how does that work?
Well, once you download, install, and run LM Studio, you'll be greeted with this screen.
And the first thing you do is actually download at least two different models.
So on the home page here, you can just look at some of these, like Phi-2 or Qwen.
You can come down here to get Zephyr, and just go ahead and download a couple of these.
And then there is a new tab on the left-hand side here called Playground.
So you'll click Playground.
You'll see that they have a multi-model session here.
Just click Go.
And now what we can do is load a couple of models.
So up here, it says select models to load.
Choose this; I'm going to choose Phi-2 first. Once that one's done, choose another one.
I'm going to choose StableLM Zephyr because it's also a smaller model.
Okay, and once you have that done, just come over here on the left-hand side and click Start Server, and we're up and running.
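If you want a quick sanity check that the server is actually up before touching AutoGen, you can hit LM Studio's OpenAI-compatible models endpoint. This is a minimal sketch, assuming the default local address and port 1234; adjust it if you changed the server settings.

```python
import json
import urllib.request

# List the models currently loaded in LM Studio's local server.
# Assumes the default address; change the port if your server uses a different one.
with urllib.request.urlopen("http://localhost:1234/v1/models") as resp:
    print(json.dumps(json.load(resp), indent=2))
```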
Now let's create an AutoGen file where we can see how we have two different models, from one LM Studio server, running at the same time
with the agents.
So what we'll have is two different LLM configs.
I'm going to name the first one Zephyr and the second one Phi-2.
Now, how we distinguish them is in our config list: we have a model, a base URL, and an API key.
With the new AutoGen update, we can set the API key to "lm-studio" so it recognizes LM Studio.
Then we have a base URL.
This is the same; it's always been the same for LM Studio.
Now we have a model name. Really, this is the model identifier for the model that we've loaded, right?
So I'll show you where to find this in a minute,
Whichever assistant agent has the Zephyr LLM config in its definition is going to use the Zephyr model. Now, for Phi-2, there's another LLM config:
we have the config list, and then the model, which is the identifier for the Phi-2 model. Then we have the base URL,
which is the same, and the API key,
which is also the same. Also, I might not have mentioned this before, but for the cache seed you can set this to None,
meaning that it will never cache your results, so every time you run this the output will be different.
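To make that concrete, here's a rough sketch of the two LLM configs in Python. The model identifiers, port, and variable names are placeholders I'm assuming for illustration; copy the actual identifiers from LM Studio, as shown in a minute.

```python
# Two LLM configs, one per model loaded in LM Studio.
# Replace the "model" values with the identifiers your LM Studio instance shows.
phi2_config = {
    "config_list": [
        {
            "model": "phi-2",                        # Phi-2's model identifier (placeholder)
            "base_url": "http://localhost:1234/v1",  # LM Studio's local server
            "api_key": "lm-studio",                  # dummy key so AutoGen recognizes LM Studio
        }
    ],
    "cache_seed": None,  # never cache, so each run gives a fresh response
}

zephyr_config = {
    "config_list": [
        {
            "model": "stablelm-zephyr-3b",           # Zephyr's model identifier (placeholder)
            "base_url": "http://localhost:1234/v1",
            "api_key": "lm-studio",
        }
    ],
    "cache_seed": None,
}
```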
I have two agents: one named Phil, who's going to be using the Phi-2 model, and one named Zeph,
who's going to be using the Zephyr model. Then I just have Phil initiate a chat with Zeph, saying "tell me a joke".
That's it, really simple.
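Here's a minimal sketch of that part, reusing the two config dicts from above with AutoGen's ConversableAgent. The agent names and the joke prompt come from the video; the other options (like max_turns and human_input_mode) are my assumptions to keep the exchange short and fully automated.

```python
from autogen import ConversableAgent

# Phil runs on Phi-2, Zeph runs on Zephyr; both hit the same LM Studio server.
phil = ConversableAgent(
    name="Phil",
    llm_config=phi2_config,
    human_input_mode="NEVER",  # no human in the loop
)

zeph = ConversableAgent(
    name="Zeph",
    llm_config=zephyr_config,
    human_input_mode="NEVER",
)

# Phil kicks off the conversation with Zeph.
phil.initiate_chat(zeph, message="Tell me a joke.", max_turns=2)
```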
But this is going to show you how to have multiple models working together in one LM Studio server. Now, as I said,
we had to get this identifier from somewhere.
Let's go over to LM Studio and I'll show you where to get that. So back at LM Studio, if you click, for instance,
let's just click Phi-2 over here, there's an API model
identifier. You can just copy this, and the same thing for Zephyr:
you can just look at the API model identifier and then copy that as well.
All right, after I ran it, as we can see here, these are the server logs for both of the models, and it worked. Right here it's streaming all the tokens for the responses
coming back. So it worked, right?
You can go back to LM Studio, look, and you can see how the interaction went.
Okay, now if we go back to our IDE
and look at what happened here, we see Phil started talking to Zeph: tell me a joke.
Why did the tomato turn red?
Because it saw the salad dressing.
Ha ha. Great.
Okay.
Awesome.
What happened here?
Okay.
So again, let's review what just happened.
We had two separate models working in one running LM Studio instance.
It was open source.
It was free.
We didn't have to worry about OpenAI's API key, and they could talk to each other.
I think this was a huge update, and I think it's really going to help out.
If you haven't tried it yet, I recommend downloading it and just trying it.
It's free.
You know, they don't store any of your information, and you can use all open-source local LLMs.
If you have any comments or anything you want to chat about, leave them down below.
Thank you for watching and I'll see you next time.