Complete Coze tutorial: Building an AI chatbot from scratch
Welcome, everyone.
Today, we're going to walk you through the process of creating powerful AI chatbots on Coze.
Coze allows you to build AI bots on top of large language models like GPT-4.
Today, we're going to do this in a question and answer format.
So, I'll play the role of a developer or a user of
Coze, and Joshua will be guiding us through the process of creating a chatbot.
So, Josh, what are we looking at now?
So, right now, we're actually looking at the Coze Bot Store.
And the Bot Store is packed with community bots built by people like you in the audience, people in our community.
And you can see bots that are recommended.
You can also see bots that help you with learning, bots that are characters,
bots that can help you with your writing or generate images, and so forth.
There's lots of things that you can do with these bots, and it's not just about making personal assistants.
Sometimes you can make an AI friend, or a game, or, you know, some type of bot that can help you with consultation.
So there's lots of different things that you can explore through this bot store,
and you can use these bots right here in the Coze workspace,
or use some of these other buttons here to get to
some of the popular chat apps that Coze bots are available on, like Discord and Telegram.
Awesome.
So this is a place, I guess, where I can get inspiration on
the kinds of bots I can build and the use cases of AI chatbots.
I can see there is learning, characters, productivity, image generation, writing.
Pretty much anything.
Yeah, pretty much anything.
Sometimes, you might not have to go and create a bot all over again.
The whole point of this is to also help you move along a lot faster.
Let's say there's a bot that's already available in the store, or something
close to what you thought about building.
Maybe it's ready to go and you can use it right away,
use it for inspiration later, or maybe you'll be satisfied with that bot that's already in the store.
Awesome.
And maybe you can click into one of these bots so we can see what it looks like?
Yeah.
So we have recommended bots here, but we can also go to public configuration as well, because this allows us to get some other information, like seeing how the prompt was made.
So if I go here and talk with this GitHub expert bot,
you can see that I already have some suggested questions ready to go.
So, what are some popular projects recently?
If we just ask our bot,
this GitHub expert bot is going to use different plugins,
like the GitHub plugin,
which is basically connected to the GitHub API,
and the browser plugin to browse websites. It's using GPT-4, and it's going through different popular projects that it's recommending from GitHub.
If we go here to this prompt button outside of our conversation, we can actually see the Coze workspace and how it was built.
We can see the persona and prompt that it took to create this bot,
the different plugins or tools that we use, and any other type of skills that we give it,
and you can also have a conversation with the bot, duplicate it, and modify it yourself if the bot is available to you through public configuration.
So, yeah, that's how you're able to pick up a bot straight from the Bot Store.
Cool.
So, yeah, I think just from looking at the store, I'm already getting a lot of ideas on the types of bots I can create.
So I've created custom GPTs before with OpenAI's custom GPT builder.
So how is Coze different from that?
So the GPT builder is great.
We've seen some great creations come out of there.
But Coze gives you a lot of other features that you probably won't come across when you are using the GPT Builder.
So with Coze,
you have features like knowledge bases that take in units from a whole bunch of different types of sources,
like PDFs, Excel sheets, APIs, and websites, and workflows to help with multi-step processes.
With Coze,
you can also publish to really popular chat applications like Discord,
Slack, Facebook Messenger, and more, straight from the interface, and it's really easy to deploy your bot in places where it's super portable.
You can take it from place to place and share it with your friends. Coze also gives you access to GPT-4,
GPT-3.5, and pretty much those different OpenAI models for free. So,
there's no need for you to go spend a bunch of money every month.
It's completely free to use,
and you have lots of other features to build on top of your bot to personalize it,
to make sure that you're getting accurate information, and the bot feels more like it's yours.
Cool.
Can't wait to get started.
So how can I get started with creating a chatbot?
So actually I'm going on a trip to Japan next month.
Maybe we can build a bot to help me to plan my trip to Japan.
Yeah, it's actually pretty easy to get started.
And honestly,
even though it's easy to get started and go in with this,
there's so many ways you can build on top of your bot and add more complexities to it.
So let's go over to our create a bot button at the top.
And let's give our bot a name.
So we can just call this bot Plan a Trip.
We also have a large language model that's built into our bot's description field.
So a lot of the description fields that we have run through LLMs as well, so that
it can understand the context of what you're writing about and it can help with your bot's creation and production as well.
So: you are a bot that helps me plan my next trip.
Also, with Coze, you can generate a profile image or you can use one that you have already.
So if I go here and I get an image, I could just have it there as my profile picture for my bot.
And once we create our bot here, we'll get taken to the Coze workspace.
And the Coze workspace has so many different features as well to help us with the bot.
One of the first places that I would start is persona and prompts.
This is where we will design our bot's persona,
talk about the features that we want it to have, and we can do this all with natural language.
So if I go over here to my persona and prompt, and we want to talk about, you know,
what our bot is doing,
we can just say here: you are the Plan a Trip bot, your job is to help me plan my next trip.
You have the ability to recommend the best historical sites, restaurants, and hotels.
So with Coze,
it's really cool because I can write this prompt, it's pretty simple, but I can also use Coze to optimize my prompt even further.
So if I want my prompt to actually have more detail and,
you know, be a better prompt for my bot to understand,
I can go here to this Optimize Prompt button at the top, and there we go.
You don't need to be any type of prompt expert or someone that's like really great at prompt engineering.
Coze does this for you right from the platform, and you can also press retry if you don't like the prompt that it generates.
But we can use this prompt, and what you'll notice is that it's in markdown format. Markdown format is pretty special with AI because it
actually helps the AI really understand the emphasis on some of its characteristics when you're using markdown.
So this actually works a lot better than plain text.
So if I have my character section here, it knows from this markdown way of writing "character" that this is emphasized and that it needs to pay
attention to how we are describing the character of our bot.
That's really convenient, so I don't need to be a prompt wizard to write a
really cool prompt, and I can see it's very structured: it's divided into character, skills, and constraints.
Yeah, you don't need to be a prompt expert at all. Again, you can follow this format, and if I wanted to add more
skills myself, I can always go back in here and add another skill, like skill number four,
or I can add more to the character or add more to the constraints.
So you don't have to stay stuck with what is being written here.
You can continue to use this format to help you build bots outside of Coze as well.
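As a rough illustration, an optimized prompt of the kind described here tends to look something like the sketch below. The section names follow what's mentioned in the conversation (character, skills, constraints); the specific wording is only an example, not the exact output Coze generated in the video.

```
# Character
You are Plan a Trip, a friendly travel-planning assistant for trips to Japan.

## Skills
1. Recommend the best historical sites for the destination the user gives.
2. Recommend restaurants that match the user's cuisine preferences and rating requirements.
3. Recommend hotels near the user's points of interest.
4. (Add more skills here as needed.)

## Constraints
- Only discuss topics related to trip planning.
- Ask a clarifying question when the user's preferences are unclear.
```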
Cool.
So by the way, which large language model is the bot utilizing right now?
That's a great question.
So if you go over here, right above skills, you'll see GPT-4 8K, which is great.
That's what's set for the model.
And remember, it's completely free.
We're using Coze,
but if we back out a little bit here to the model configuration panel that pops up, we can actually go here to see the options.
And right now we have available GPT-4 Turbo 128k, GPT-4 8k, and GPT-3.5 16k.
And just remember, when you're using large language models,
it really depends on the use case that you have for your large language model,
because sometimes you don't need all the tokens in order to work with your bot or with your large language model.
Everybody has different needs, so just because it's GPT-4 doesn't mean it's better for your use case.
And you have lots of other options here in this model configuration,
so you can change the temperature,
which helps with being able to give you a more precise answer based on what you're asking,
or giving you a less random response.
You can change your response length, dialogue rounds, and so forth.
So there's lots of ways you can customize the use of the large language model when you're creating the bot.
Got it.
So now I think we've basically programmed the chat bot to have a certain persona and skills.
Should we test it?
Does it work?
Yeah, it will definitely work right now and let's see how it looks, right?
So the thing to pay attention to is, again, we have all these skills.
So this is before we're adding anything else to the bot, we're just working from the prompt, which works great.
So let's zoom in over here to our chat.
Right now we have a comment above that's kind of blocking the- Okay.
I just removed it.
Yeah.
Cool.
So if we go here to the chat, we can test it out in the platform before we even publish our bot.
We could just write, I am planning a trip to Ginza Tokyo.
Can you recommend some restaurants for me to go to?
I am looking for four stars and above.
So if we go to our bot, we'll see what it generates, and it actually starts to try to reason with us, right?
So it asks: could you please let me know your preferences in terms of cuisine?
So I can just say that I am looking for ramen.
So, perfect, based on my specific preference, it's able to go here and give us some
recommendations on some ramen spots nearby in Ginza, Tokyo.
But there are a lot of other things that we can do to improve this and make it a lot better.
Right now it's basically using GPT's own training data to answer the question, right?
Yeah, so it's using what's built into the training data from GPT-4 that's already been created, right?
However, there's lots of ways for us to make this a lot more relevant to information that's
available to us in real time or information that is maybe only privately available to us
through a lot of these different skills and memory functions that we have with Coze.
So what I would tell you is that this does work great so far,
but there's so many ways that we can make this more personalized towards our use case, in ways that might not be available
from GPT-4 alone,
and we can use some of these skills and features to basically fine-tune how we are actually working with the bot.
So yeah,
it only works off the knowledge of what's on GPT-4 for now,
but we can have plugins,
workflows, knowledge, and variables, and all these other features that we have available to us to make it even more comprehensive.
Okay, can you show me what plugins can do?
Yeah, plugins are awesome, right?
So I'm sure everybody here is aware of what an API is, the application programming interface.
And this helps you get information from different services across the web.
So if you go here to add plugin, you'll see that we have plugins like Google web search.
We can search the web and find different things, like different things you'd search for on the web.
This works just like a search engine.
So if you want to add a plugin, all you have to do is click.
We also have plugins for text-to-image generation, like DALL-E 3, right?
So if you want an image to be generated from a certain prompt that you put in, you can do this with DALL-E 3.
But for our use case here, we are taking a trip to Ginza, right?
So how about we go over here to convenient living and we add some things that we might like, like TripAdvisor.
Now, you can see here with TripAdvisor, you have lots of different functions.
And when you see these functions, these functions actually come from the TripAdvisor API.
So, each function here would be like the function that you would see in the API documentation.
So, with Coze you don't need to code any of this. You just need to have the URL for
the endpoint of the API, and you'll get access to all these different functions depending on the API.
So, you know, I'm taking this trip and I think I'm gonna need all of these different functions that are available to me from TripAdvisor.
So I wanna be able to search the airport,
search some flights, hotels, search information by my location and why not add some stuff from Yelp as well?
So Yelp is gonna help us find different businesses, restaurants and give us recommendations.
So how about we add all these different functions that are available to us, like the reviews,
the phone number, and the business search, which will come in handy
later if we add all these different functionalities as well.
So if we go back like this we'll be able to see you know all of our plugins that can be of use and
there's lots of other things you can have.
There are like over 200 plugins, and if there's a plugin you don't see but you want to use,
you can also create your own. Yeah,
using an external API, and we can look into that, we can talk about that later.
So for this bot we now have TripAdvisor, which can help us recommend
places to stay at and visit, and Yelp, which can recommend restaurants. Yep.
Creating your own plugin is super easy to do.
All you need to do is just have that endpoint, like I mentioned before, and you'll be able to work from there.
There's lots of documentation on how to add a plugin.
So there might be something that's not available to you, that you'll be able to add as a plugin later on.
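To make the "you just need an endpoint" point a bit more concrete, here's a minimal sketch of the sort of HTTP endpoint you could stand up yourself and then register as a custom plugin. The route name, parameters, and response shape are assumptions for illustration, not the Coze plugin specification or the Yelp API.

```python
# A hypothetical endpoint that a custom plugin could point at.
# Requires: pip install fastapi uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/restaurants")
def search_restaurants(location: str, cuisine: str = "any"):
    """Return a small, fixed list of results; a real service would query a data source."""
    return {
        "businesses": [
            {
                "name": "Example Ramen",
                "location": location,
                "cuisine": cuisine,
                "rating": 4.5,
                "url": "https://example.com/example-ramen",
            }
        ]
    }
```

Once an endpoint like this is deployed, its URL and parameter documentation are what you'd feed into the "create your own plugin" flow described above.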
So, now we have all these different functions that are available to us from these APIs.
And if I want my bot to utilize something from my plugins as well,
I can even specify in my prompt when, in a certain scenario, it should use a specific function, like searching for a business.
So if you see, I have the business search function here, and Coze also has a copy button
here, so you can copy the name of this function and you can actually put it inside of your prompt as well.
So if I want it to recommend restaurants,
I can write: based on the user's preferences and specified destinations,
make use of, let's change it to, the business search function from the Yelp plugin to recommend the top three.
We do the same thing there, all right?
So if I ask a similar type of question,
we'll get our Yelp API to use this function when we're discussing suggested restaurants for my bot to recommend.
So if I go here, I am looking for Japanese curry.
Let me just correct, Japanese curry.
You'll see that it's using our Yelp plugin now,
and again,
this is without use of any type of coding,
and it's using all the plugins and APIs that we have here in order to generate this response using Yelp.
And if you click on the links, it will actually direct you to the relevant Yelp page.
Yeah.
So open it up and take a look.
So you can also see at the bottom as well,
it says that Yelp is there,
but if you open this up, you'll see that I get taken to this actual page on Yelp to get myself some Japanese curry.
Right?
So let's go back to our page here, and yeah.
So that's pretty cool.
By the way, how does the AI bot know when to use this plugin?
You basically just tell it in natural language in the prompt.
Yeah, so like I showed before, if you go here again, you can customize your prompt. Even though I optimized it, I can customize it even further.
And in one of the skills we talked about suggesting restaurants.
And I can always create another skill too that can utilize any of these other functions as well.
Sometimes you'll automatically
see that it will use the plugin itself based on the context of the conversation.
But if we want to really specify and make sure that it's using the ones that we needed to in certain situations like finding the best restaurants,
we could write here within our skill that we want to use the search business function.
And we know that the search business function comes from the Yelp plugin, right?
That's right here.
Oh, if you go next to the business search, there is a copy
button that allows you to copy the function in one click, and then you can
just paste it in the prompt to let the bot know when it should use that function.
Cool.
So what if I want the response to look more structured?
What if I want to include both say images and text at the same time?
Yeah, that's a really good question.
So Coze actually has a pretty new feature as well that helps you organize and structure that.
These card data bindings help you structure your responses in the way that you would expect,
based on the function of the API.
So this is the interface that comes available.
And if we just click one of the card styles here, let's just try card style number one.
Let's bring our attention over here to this preview of the card effect.
So if you look at number one, this is the binding data.
So one, two, and three all have different places of data that you wanna actually have available.
So the title, we could put the name of the restaurant.
Then we can have the description of the restaurant here, if that's available to us in the API, and then
a picture of the restaurant that the Yelp API we're using provides.
So for this card binding data, we'll just turn it to a vertical list, because that's how I want it to be formatted,
so the restaurants recommended to me will be stacked on top of one another.
And then I can have my max length be five here.
Even if in my prompt I write,
you know,
the top three eateries,
I can do so,
but let's just put five for now, just to leave some room just in case, and then we can select an array right here.
So this array that we're selecting is based on the data object that's within the Yelp API that's being used for the Yelp plugin.
So we have this business array here, which will be the list of the different businesses based on the search.
And then if we go here to select our data, remember we're looking for the title.
So let's try to look through the data that's available to us
and see where we can find the name of the actual restaurant. So if we go here,
we can just scroll through, and we see that we have name.
So name, we're going to test this out and take a look and this should be the name of the actual restaurant.
We want to get the description.
So if we go here to the description,
we can go and search through and find out which field would be the most like closest to the description.
So I know that some APIs don't have the best naming conventions and just so happens that Yelp doesn't have the best as well,
but title just so happens to be the description on the Yelp API.
Then we can go here to get our image, which would be section number three over here, right?
And our image would just be image URL or the actual image.
So in this instance, we're going to choose image URL for this, all right?
So if we go down, we can also have the LLM determine the text output.
But for now, we'll just leave that as is, and then there's also the option to jump to a link on click.
Yeah, so this will allow people to jump to the actual Yelp page by clicking on the card, right?
So then we have our actual URL that we have here, and then we can click Confirm.
Alright, so now our card binding is created.
So I write again, I am looking, let me zoom in a bit.
I am looking for Sushi in Ginza Tokyo.
We can see that it's using the Yelp plugin, and then there we go.
There are our card data bindings that are available to us,
and it gave it to us in a different format than the one here.
If we use this format without using the card bindings,
it's leaving it up to the LLM to decide how we want things to be formatted,
whereas the card bindings actually go based on the functions that we have available in the API,
and it will actually organize it the way you want it to.
And a cool thing about the card bindings is that if you click
one of the cards that are there, you actually get taken to the link that we selected in the menu as well.
So it makes it really easy for our users to use this as well.
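To make the mapping concrete, here's a small sketch of what the card binding is effectively doing with the plugin response: picking fields out of the businesses array and placing them into card slots. The response shape is a simplified assumption based only on the fields chosen in the demo (name, title, image_url, url), not the full Yelp schema.

```python
# Illustrative mapping from a plugin response to card data (vertical list, max five cards).
sample_response = {
    "businesses": [
        {
            "name": "Sushi Place",
            "title": "Edomae sushi counter in Ginza",
            "image_url": "https://example.com/sushi.jpg",
            "url": "https://www.yelp.com/biz/sushi-place",
        }
    ]
}

def to_cards(response, max_cards=5):
    """Build card data: title, description, image, and the click-through link."""
    return [
        {
            "title": b.get("name"),
            "description": b.get("title"),  # Yelp's "title" field is what the demo uses as the description
            "image": b.get("image_url"),
            "link": b.get("url"),
        }
        for b in response.get("businesses", [])[:max_cards]
    ]

print(to_cards(sample_response))
```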
Okay, this answer looks a lot prettier than what it was.
And, again, there's so many different formats that you can use with the card bindings and this is just based on one of the functions.
So not every function that you have available here needs a card binding,
but it is nice to have and it gives it a nice look for your user that's really intuitive for them to use.
So, now that the travel bot can leverage external tools like Yelp and TripAdvisor to
recommend restaurants, hotels and places to visit, what else can we do to make this bot even more powerful?
Yeah.
So, you know, these are plugins, or skills, or tools, right. You hear a lot about AI using tools and so
forth.
But how can we make this AI bot feel like it's a more personalized bot based on the things that we do or preferences that we have when we're traveling,
right, for example.
And if we have this travel bot,
we can use the variables feature to set memory, basically to personalize responses based on what we're actually into.
So we can add a variable here and we can add a variable for your trip preferences.
So maybe you're somebody that likes to go on hiking trails a lot more, or maybe someone that loves
to look at historical sites in a city, or somebody that only likes to eat omakase as a meal.
We can create a variable here for preferences, or we can name it trip preferences, whatever we like, and we can also put a description that says, you know,
this is for trip preferences, such as nature, culture, and food preferences.
So you can write a description like that, or something that's even more elaborate, to create the variable. And if we save it, and then I write that my trip preferences are going on hikes
and seeing historical sites, right?
If we do that,
we'll see with our memory function here,
with keyword memory,
that it looks at the keywords in the memory of our actual conversation with the bot, and it will save the variable.
Now when I discuss with my bot further about what I want to do
while I'm in Ginza, it will actually start to give me responses based on the preferences or the variable that I've given it.
So it makes sure that you have an even more personalized experience with your bot,
which you don't really see on a lot of other platforms that help you create chatbots without having to code anything at all.
So I guess this allows the bot to remember properties or preferences of each user, so that it can give more customized responses in the future.
Yeah, absolutely.
This variable is specific to each user, right? And as a developer, can you see the variable of each user?
Yeah, that's a good question.
So, yeah, you're absolutely right that, you know, these variables will help you personalize your bot even more.
You can continue to add more and more variables as well.
You know, it's not just, you're not just stuck with one variable.
And as the bot creator myself, you know, when I release this to the Bot Store, my variable definitions will be available.
However, I won't be able to see anything that you're saving in your variable that's personal to you at all.
So you won't have to worry about me or any other bot creator being able to see what variables you store in that section.
So this information is private to each bot user.
Absolutely.
Yeah.
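A quick conceptual sketch of what's being described: the creator defines the variable name and its description, while each user's value is stored separately and isn't visible to the creator. Coze handles this storage for you; the dictionary below is only an illustration of the behavior, not its API.

```python
# The creator defines the schema (name + description used as context by the LLM).
variable_schema = {
    "trip_preferences": "Trip preferences, such as nature, culture, and food preferences."
}

# Values are kept per user; the bot creator never sees another user's values.
user_variables = {}

def save_variable(user_id, name, value):
    user_variables.setdefault(user_id, {})[name] = value

save_variable("user_123", "trip_preferences", "going on hikes and seeing historical sites")
print(user_variables["user_123"]["trip_preferences"])
```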
Cool.
I think this bot is looking pretty good.
But I see there are a lot of other functions there, like knowledge.
Can you show me how knowledge works?
Yeah, knowledge is like one of my favorite features when it comes to using Coze.
So with knowledge bases, you're able to reference and use content that you have
stored in different files,
websites, Excel sheets, APIs, Notion, Google Drive, and you can use these files in order for your bot to have more context on how to
answer questions.
So with retrieval augmented generation,
it works exactly like that, where you can create a RAG pipeline without having to code anything
at all. You just upload your files, and then, just like how we used keyword memory and the plugin,
you'll see how the response is generated using the knowledge skill as well.
So if we go here and we want to add some knowledge, you can go create your own knowledge base as well.
I'll show you an even better use case with this.
So let's go over here to our Coze documentation for a bit.
So with the Coze documentation, there's plenty of different sections that are here.
And it's really hard to go from page to page and upload each one of these pieces of knowledge to our Coze bots.
But let's just go back to our Coze store,
and I have another bot that I want to show you, of how this is actually used and ready to go.
So let's go over here to our personal bots, and then we'll also just go here to this Coze Assistant that I created.
So I have this Coze bot here, and it's a Coze Assistant bot that helps
our community learn more about the Coze documentation.
So this is a similar bot to the one in our Discord server, which you can join to chat with the community,
but you can also chat with the bot there as well, and it will answer questions that you have about the Coze documentation.
So we already have a persona and prompt here that's written.
And then I have knowledge bases as well.
So with knowledge bases,
if I go here and I add knowledge,
you'll see inside of our knowledge base as well,
we have all of our different pages from our documentation that's there.
When you want to add a unit,
you can actually go here to online data, PDF, text, doc, Notion docs, or table for Excel sheets as well.
And we can just go to online data.
So if I go here to our Coze documentation,
and we take this root that's here, coze.com/docs,
we can just copy that, and we can actually add every single page that's here, based on this being the root of our URL.
And we can actually get every single page from the documentation
into our knowledge base.
So let's go over here to our Coze knowledge, and go here to online data.
I'll just show you what it looks like as if I was uploading it for the first time.
So in order for me to take a website that has multiple pages,
I want to start from the root,
which would be docs. If I went to docs slash publishing, it would be coze.com/docs/publishing, right?
So we want to start from the root. Before that, we check
batch addition,
and we can actually go here to import
all of the different pages that are in our documentation, without having to go from page to page to upload them.
So you'll see that it's importing each page, and then you'll see it has coze.com/docs pages, like the store plugin page,
you know,
and it's going through each one of these different pages in our documentation to upload. And then from there,
you're able to take the knowledge base that we're creating, the Coze knowledge,
and you just add it to your actual bot, and it will be here.
And if I ask Coze, how do I create a workflow?
It will use knowledge, and it will search through the knowledge, which would be the documentation, and generate a response for us, right?
So it's super powerful.
It's one of the best ways for you to personalize your bot and give it some type of memory,
so the conversation has more context to it.
And another really great feature about knowledge is going here to automatic call.
So automatic call helps you customize your knowledge base and your responses even further.
So, you have a search strategy that you can customize, but then another thing that I want
you to take a look at is the minimum matching degree.
So this minimum matching degree is basically a measurement of how much you want your bot to actually use knowledge bases.
So I want it to use knowledge bases most of the time.
So I'm just going to lower this minimum matching degree a bit, because I want it to keep going back to our knowledge bases.
Another thing is that this type of setting helps with the hallucinations of your bot.
Sometimes you give it knowledge and it has the knowledge,
but sometimes it doesn't know how to use it or when to actually use it as well.
So in order to lower the amount of hallucinations with AI, you can do this to help, you know, retrieve information from your knowledge base.
But then also, you can go here to your actual model configuration and you can lower the temperature.
And the temperature, again, like I explained earlier, with this model configuration will help us get a more precise answer with less random information.
So this is how we can use knowledge bases to help lower the amount of hallucinations,
give our conversations more context with our bot, and we can create a RAG pipeline super easily without having to code anything complicated.
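For readers who want a feel for what's happening under the hood, here's a rough sketch of a retrieval step with a "minimum matching degree" style threshold. Coze implements this for you; the scoring function, threshold value, and sample chunks below are illustrative assumptions, not its implementation.

```python
# Toy retrieval step: keep only chunks whose similarity to the question clears a threshold.
from difflib import SequenceMatcher

knowledge_chunks = [
    "To create a workflow, open your bot's workspace, go to Workflows, and click Create.",
    "Plugins connect your bot to external APIs such as Yelp or TripAdvisor.",
]

def retrieve(question, chunks, min_matching_degree=0.2):
    """Return chunks that clear the matching threshold, best matches first."""
    scored = [(SequenceMatcher(None, question.lower(), c.lower()).ratio(), c) for c in chunks]
    return [chunk for score, chunk in sorted(scored, reverse=True) if score >= min_matching_degree]

context = retrieve("How do I create a workflow?", knowledge_chunks)
# The retrieved chunks are then passed to the model as extra context,
# which is what keeps answers grounded and reduces hallucinations.
print(context)
```

Lowering the threshold means the bot reaches for the knowledge base more often; raising it means it falls back to the model's own knowledge more often.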
Yeah, I can see how this is super helpful if I want to create a customer service chatbot
for my product or if I have a product with like a ton of documentation,
I can just feed it all to the bot in one go and then it will be able
to create a chatbot that can help me answer these frequently asked questions.
And for example,
we have a Discord server for code users,
and then we just have a code assistant chatbot that helps users answer questions about codes in the
Discord because we can easily publish that bot into the Discord server.
Yeah, and again, one of the great things is that
as you continue to publish,
which we'll go into much later in this talk, you can always come back and publish more changes to your bot.
So if you have more knowledge,
more variables and more other features that you want to add, you can come back and add more features to this.
You can keep customizing the card bindings.
You keep customizing the automatic call here as well.
So there's lots of ways that you can continue to make your bot even better and test it over time.
Yeah, going back to the temperature of the large language model, I guess the temperature will really depend on what bots I'm building.
So if I'm building a,
say, creative storytelling bot, I might move the temperature higher because I want it to be wilder and a little more unpredictable, whereas if I
want to build a customer service chat bot,
I really need the answer to be more precise,
and I want the bot to be able to draw correct answers from my knowledge base rather than random things.
So that case, I might want to lower it.
Yeah, I agree.
I think all these different features get used in tandem with one another, where you're changing one setting in one place
and it also works together with another setting in another area of the workspace.
So I wouldn't be afraid of using model configuration.
If you don't totally understand it, we have documentation on it as well that can help you with it.
There's always different use cases as to why you would want your temperature to be higher or lower or your max response length.
Just because it's GPT-4 or GPT-4 Turbo
doesn't mean it's better.
It really depends on your use case, so maybe you don't need answers as lengthy
as GPT-4 Turbo might give you, and you just need some simpler answers
that GPT-3.5 can give you, using fewer tokens.
So it's really up to you,
and that's why the Coze workspace is a great way for you to test out your bots before you publish them.
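As a quick summary of the trade-off just discussed, here are two illustrative configuration profiles. The keys and values are assumptions for the sketch (they mirror the settings named in the conversation: model, temperature, response length, dialogue rounds), not Coze's actual configuration format.

```python
# Illustrative model-configuration profiles for two very different bots.
creative_story_bot = {
    "model": "GPT-4 Turbo (128k)",
    "temperature": 0.9,          # higher -> wilder, less predictable output
    "max_response_length": 2000,
    "dialogue_rounds": 10,       # how many previous turns are carried as context
}

customer_service_bot = {
    "model": "GPT-3.5 (16k)",    # shorter, cheaper answers are often enough
    "temperature": 0.2,          # lower -> more precise, grounded in the knowledge base
    "max_response_length": 500,
    "dialogue_rounds": 5,
}
```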
Cool.
What about workflows?
Yeah, workflows are also really interesting, too.
So workflows are a cool skill because they help
you combine all these different skills we talked about, like plugins,
knowledge bases, variables, databases. All those different features can come together to create a multi-step process.
And this multi-step process helps you enable orchestration between different nodes which would be all these different skills in order to
create a stable response based on what you're actually looking for.
So let me show you what that means, because it's probably one of the most complex features of Coze, but I wouldn't be afraid of it.
Don't shy away from workflows.
It's probably also, again, one of the most useful tools when you're using Coze, because it helps your users get a more structured,
and I wouldn't say predictable, but yeah, a more structured answer based on what you're asking it.
And I'm going to show you this through a bot that I also have a video on, which is an NBA bot.
And I created a workflow that helps you generate NBA scores that happened in the past.
And even ones that are happening in real time, if there's an NBA game today,
we'll be able to give you updates on scores.
So take a look at that.
So if I just go here to my workflow that I have for the NBA scores, we could break it all down.
But I think with workflows,
again, a really powerful piece of this is right here: the nodes. So pay attention, and I'm trying to zoom in here.
Oh, it's zooming in on the page, but pay attention to our basic nodes here.
We have large language models in here.
So I'll show you the use case for the large language model that I have in this workflow,
a code node that helps you with a lot of different things.
I use it for parsing information that I'm getting from a response from an API that I turn into a plugin.
You can put knowledge bases, conditions, variables, databases, and messages.
There's lots of nodes.
And then you can also put plugins in here as a node, which we have for the NBA plugin that I created.
And then you can also put a workflow and combine it with another workflow, which is also like really interesting.
Let's go here to our starting node and see what it all looks like.
So first, if we back out, you'll see that I have a few different nodes that are connecting to one another, right?
If I go here, you'll see in our starting node we have our date, and this date is actually the date I want to search.
So my NBA bot is searching by date;
we can take the date that we have and then run it through our API.
But I think it might be worth testing this first
and seeing how it looks, and I'll go from node to node so you can see the response generate.
So if we go to test this node, we'll put in the date.
So we'll do 2024, dash,
the 19th of April, and we'll go here, and we'll see that it accepted our first node, so that succeeded, right?
Then it went to our second node here.
So, our first node took in the date and then we passed it to our, we're passing inputs and outputs basically.
So, our input was the date and then we're passing that input to our node here that's going
to be running through an NBA API that I found.
Now, if you want to look at the result, this is how the payload would
look, right?
It's all listed out there.
And you can see in the output
what the payload would have: all the years that are available in the API,
the names of the players, all-star teams, all that.
So we have lots of information about NBA games and NBA players from this API.
And we might not need all this information, right?
It's a lot.
And that's why we use this code node up here in order to parse this data.
So we're actually going to truncate our data that we're getting.
And you can see here from this response.
If you see the difference in length of this here compared to our final output, there is quite a difference.
But you can also open up an IDE inside of here, and in this code, well, let me just show you.
But in this code here,
we just have our profile,
our home team, our away team, the game date and that's all that we want for our bot for right now.
If I wanted to go find the,
you know, the officials that officiated the game,
the referees, I could find them, you know. And I just created a function here to get player stats on a specific date.
So you can use JavaScript or Python to do this.
So although Coze is a platform where you don't have to code,
if you are a developer, this helps you get the information exactly how you want it.
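Here's a short sketch of what such a parsing code node might look like in Python, in the spirit of the demo: take the raw payload and keep only the home team, away team, scores, and game date. The input field names (e.g. "games", "home_team") are assumptions about the NBA API payload, not the actual schema used in the video.

```python
# Hypothetical code node: truncate the raw API payload to only what the bot needs.
def main(params: dict) -> dict:
    """Return a compact list of game results plus a count, for the next node to read."""
    games = params.get("games", [])
    results = [
        {
            "game_date": g.get("date"),
            "home_team": g.get("home_team", {}).get("name"),
            "away_team": g.get("away_team", {}).get("name"),
            "home_score": g.get("home_team_score"),
            "away_score": g.get("away_team_score"),
        }
        for g in games
    ]
    return {"game_results": results, "game_count": len(results)}
```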
So now that I generated that JSON response from the API using the code node and the plugin,
I can then pass it over to a large language model.
So let's just organize this a little bit more.
So we have our large language model that it passes down to,
and this large language model is basically reading the JSON response that I got from this code node.
And it makes it more readable.
So when we display our results here, you'll see that right here is actually the input that we got from the last node.
Remember what I said:
the nodes are connecting from inputs to outputs, right?
That's how we're passing information in this multi-step process.
We're passing it from one node to the other, basically like an assembly line.
So we have the game,
the game information that's here,
but in our output,
when our LLM reads it all,
it puts it out in this raw format that's a lot more readable for us rather than having to go through all this JSON that's just listed here.
And so.
Now that I have that done, I can pass it over to my ending node.
Now, I could deliver it to my user just like that, because I got the
scores, but the ending node gives you a really great way to organize your content, kind of similar to the card bindings,
where I can now take this information that I got
from each of these nodes beforehand and organize the output based on the variables that I have available.
So here I just have my game results, and then I have the date,
the count of games that are there, and then in my answer content I can write:
there were X amount of games that happened on this date, and here are the results.
And the results we were able to make more readable in our large language model and then our results will end up like this.
There were two games
on the 19th of April, and then here are the results.
And it gives us our scores in the same format that our large language model was able to produce.
And you can do this with any API.
This is just a specific way that I did this workflow,
and you can make workflows and customize them however you'd like, and you can make it as complex as you need to.
And if you don't want to use a code node, you don't have to, but it definitely helps a lot.
So I guess,
like, in this case, the reason you need a workflow to do this, instead of just having the large language model give you the output, is because, one, the API is giving you a ton of information.
You don't need all of it to be returned to the user.
So you're using code to parse it and truncate it into only what you need,
and then you're using a large language model to format the answer so it looks neater.
So I guess a workflow is really useful if you need your answer to be in a very predictable, structured format, or if you need to execute these more
sophisticated multi-step tasks.
Yeah, absolutely.
That's exactly how, you know, workflow would be used, right?
So again, we're going through a multi-step process to give you that predictable structure every time we ask a question.
And again,
this is where you start to use all the different features of Coze together,
like your persona and prompt will play a big part in how your workflow is used.
You can use the name of your workflow inside your persona and prompt and say: make sure that you use this specific workflow for this specific task.
And you can combine that together to then make sure that you're getting that precise usage of your workflow to produce the answer that you're looking for.
Okay, we have a good question in the comments section.
Why do we need workflows if we can use plugins?
Yeah, so if you see here, actually, that's a good question.
So plugins have lots of data, lots of functions, and you might find some plugins on the store that fit your use case right away.
However, this NBA plugin, in my use case, is a custom plugin that I added to the store myself.
Now, this plugin will come with many functions,
lots of output data that I haven't had the time to truncate through code beforehand if I deploy the API myself,
or that the author of the API didn't truncate themselves either.
Now, I can use this multi-step process to run through
the plugin,
pass that JSON information to a code node,
parse out what I need,
and then pass it to a large language model to
make it more readable and have the context actually stay up to date with what we're at.
Now, workflows don't have to just be used for this use case
of using APIs, but if you ask about plugins, then this is what you would need to do.
So plugins give your bot a lot of ability to use services,
but this is how you can utilize those services and give them even more enhancements based on how you want your bot to operate.
And you're giving it that predictable response, not just something that's random.
Because if I just have a plugin, the large language model might produce something pretty random each time, even while still using the data.
And I don't want that randomness to appear all the time, so
this workflow is going to help me structure how I want my data to look later.
So the plugin can be a node in your workflow.
But the reason you still need the workflow is because sometimes the plugin
doesn't return the exact thing you need,
and you need some other way, whether it's code or an LLM, to massage your answer into what you really want.
I think we have another example. Can we show that with DALL-E?
Yeah, absolutely.
So let's go here, and that "massaging" is a great way to put it.
So we have another example here that's actually a lot less complex than the last one that I showed you.
So this one is going to be generating an image based on a description that I give it.
But not just in any type of way:
this workflow is helping produce an
image that has a particular style each time we have a conversation with the bot.
So if we left it up to DALL-E 3, which is a plugin, right, which generates an image based on text,
if we ask DALL-E 3 by itself to generate an image of a dog eating an apple, each time you ask that question,
the dog and the apple will probably be of a different art style,
it will look different,
it will still be a dog eating an apple,
but it won't be a consistent style. However,
when we're using workflows, which create this multi-step process,
we can make sure that the art style that we need
for the image that's generated will be predictable and be in the same art style that we need it to be.
So let's start with this example here where we have
the starting node that takes the input of a user describing what we want in the image.
And then we pass it over here to a code node really, really fast.
And it goes to this code node,
and this code node is going to be creating an animation style,
like an anime-style character,
you know, in the same style as Makoto Shinkai, who creates a lot of anime movies that you may be familiar with.
And we'll use his art style as the base for the images our bot generates.
It describes a 22-year-old girl character with almond-shaped hazel eyes,
a black ponytail that reaches the shoulders, a yellow shirt, blue suspenders, a white hat, right, like we were describing how we want our character to look.
And when we run this
example as well,
you'll see that it will take this same response that we have in our code node and pretty much lock it in place, so that anytime we're chatting with the bot,
we'll get the same animation style for the art that we need.
So this is why you can't leave it up to just plugins to do the work.
Although they can generate something for you, we want to make sure that we're getting a consistent type of art style with this.
And this has a lot of different use cases.
If you're someone that's into branding and you want to be able to brand something with your logo or something like that,
this is how you can use AI to generate images that are more specific to what your logo looks like or what your brand
assets are like, right?
So, there's lots of ways that you can utilize this type of workflow.
And then that passes over to the text-to-image generation plugin, which is just DALL-E 3.
And then from DALL-E 3,
you get all this output data,
and we'll just use the data from the original image.
We get the height, the width, and the URL of that image, and then the end node will output it.
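Here's a short sketch of the style-locking code node just described. The exact wording of the base prompt and the parameter names are assumptions; the idea is simply "fixed style plus fixed character plus whatever the user asked for", with the combined string handed to the text-to-image plugin.

```python
# Hypothetical code node: lock the art style and character, vary only the scene.
BASE_STYLE = (
    "Anime illustration in the style of Makoto Shinkai. "
    "A 22-year-old girl with almond-shaped hazel eyes, a black ponytail reaching "
    "her shoulders, a yellow shirt, blue suspenders, and a white hat."
)

def main(params: dict) -> dict:
    """Combine the locked base prompt with the user's scene description for the image plugin."""
    scene = params.get("user_description", "")
    return {"prompt": f"{BASE_STYLE} Scene: {scene}"}
```

Because the base prompt never changes, every generated image shares the same character and art style; only the scene described by the user varies.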
So this is how it looks in the workflow, but let's go over to our chat, which I have here as well.
You'll see in our chat that I've already been chatting with my bot, right?
Like, I have my, I asked my bot, what is she doing?
And it produces the image of a girl that has hazel almond-shaped eyes,
her hair in a ponytail down to her shoulders, a yellow shirt, blue suspenders, a white hat, just like how I described it.
And if you scroll down some more, here she is again:
yellow shirt, blue suspenders, look at her eyes, the ponytail.
So that is basically the same anime type of style that we're looking for.
If I left it up to DALL-E 3 on its own, it's just going to produce whatever style of image it decides on each time.
So this makes it a lot more specific.
So I hope that really answers your question of what's going on,
why plugins and workflows kind of work differently when you're just adding them to the bot,
and why you would want to use a plugin combined in your workflow, connected to other nodes.
And this is why.
There's lots of different scenarios and plugins are meant to be customized.
Cool.
I think in this use case,
what the code is doing is combining the user's prompt, which is different each time, with a base prompt that's consistent every single time.
It's a longer prompt for DALL-E to generate from, so the image is more consistent.
Exactly, yeah.
So, you know, again, like, this is a cool use case for an AI friend.
You know,
if you're someone that's into branding and creating assets or campaigns,
or for the company that you work for, this is something that you could use too, to make sure things are consistent and in line.
Workflows are super powerful.
We have a video on workflows, and some more to come out as well,
to teach you how to use them.
Awesome, cool.
I think we have some questions about triggers. So maybe we can go into triggers a little bit.
Okay.
Yeah, we just go inside one of the bots.
So triggers are down there, near variables and databases.
Yeah, just scroll down under the skills panel.
We can just click on it,
scroll down to trigger.
Yeah.
Yeah.
And we can click the plus button.
So if you want to create a trigger, you can just give it a name.
So this is useful for if you want your bot to send you some type of message at a particular time.
So for example, I can create an AI news chat bot that sends me the latest news on AI every morning at 9am PST.
If I publish the bot to Discord, then bot will be sending me a Discord message every morning at 9, summarizing the latest AI news.
Without having to talk to the bot, the bot will just proactively send me that message.
Yeah.
So, yeah, I'm creating an airport reminder to send me a reminder every Monday, or I could
trigger it every day up until the day of my departure.
And then I can also just have the task execution work from a plugin,
a prompt or workflow as well to help me with this trigger.
And you can trigger things off of events as well.
The event trigger,
I think,
is a little bit like Zapier,
where you're able to,
for example, every time you receive an email in your Gmail account, send it to me on the chat or something like that.
We will create more templates down the road.
We're still working on that, but for now, feel free to explore on your own.
So again, it's just based off something that's happened.
So if you do have an API or something like that, you can always use that here.
You can also allow your users to create their own triggers.
So if you check the box to
allow users to create scheduled messages,
scheduled triggers,
then when the user is chatting with the bot,
they can just say,
hey, remind me to take my medicine every day at a certain time,
and the bot will actually program that information into its own prompts to be able to remind the user.
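Conceptually, a scheduled trigger behaves like the sketch below, using the third-party `schedule` library purely as an analogue. Coze handles the scheduling itself (and the scheduled step can be a prompt, plugin, or workflow), so the names and the print statement here are illustrative, not Coze's API.

```python
# Analogue of "run this reminder every Monday at 9am" using the `schedule` library.
# Requires: pip install schedule
import time
import schedule

def airport_reminder():
    # In Coze this step would be executed as a prompt, plugin, or workflow.
    print("Reminder: check your flight details and leave for the airport on time.")

schedule.every().monday.at("09:00").do(airport_reminder)

while True:
    schedule.run_pending()
    time.sleep(60)
```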
Yeah.
Alright, so if we scroll down, we'll also see an opening dialog.
Yeah, opening dialog is pretty cool, so let's clear this chat history that we have here.
So right now I just have this opening dialog because, based on the triggers that I have,
I clicked over here to add opening questions to my dialog.
But if I go here to open dialog, I can just get rid of this.
And I can describe how my bot is going to be used, right?
So I can say I am a trip planning bot.
I can help you with your next trip.
I can find the best food and historical sites to see.
Now, I can add a question.
I can have an opening question that says: what are the best sushi
restaurants in Tokyo? Let me just correct that real quick.
How do I get to Kyoto from Tokyo, right?
Like there's lots of different things that I can add in my opening dialogue.
And opening dialogue sometimes isn't just about writing questions of like what your bot can do.
Maybe there might be someone who's not really knowledgeable in what your bot can do at all.
And you can just write an opening dialogue.
How do I use this bot, right?
And you'll see here, after I cleared the messages, like already from the beginning: I am a trip planning bot.
I help you with your next trip, and I can find the best food and historical sites to see.
So you can come up with your own thing as well.
You can also use the generation button to auto-generate your response, I mean your opening dialogue.
And then you can also click one of these questions here as well to see how your bot gets used,
and it responds right away.
And the bot will just tell you what it's going to be used for, right, based on this opening dialogue.
And then you also have other suggested questions that pop up after each piece of the conversation
that your bot answers, to help the conversation flow and move along.
So it can recommend places in other cities as well, in Europe, from the looks of it.
So...
So I guess this is a way,
it's like an onboarding for the user because when the user first started chatting with the bot,
they have no idea what the bot can do or what are some typical ways they can interact with the bot.
So this just helps the user get started.
Yeah, helps you get started, helps you get going right away.
And like you don't have to just write questions that are, you know, pretty trivial to your bot.
Sometimes people really don't know what your bot's doing and you could just write, how do I use this bot?
And it will just tell you what it does.
The opening dialog is a simple use case, but, you know, it really helps not just the bot creator but the people that are using the bot. They'll love this.
Cool.
And what about database?
So databases are really great too.
Databases work like variables in a way where one of the best ways to work with this is through conversation.
So database helps you organize your data in a tabular structure and implement different features like bookmarks or book management.
So let's say I wanted to create a contact list or something for different hotels or people that I meet on my trip.
I can use databases to do that.
So I can add a and I can edit a template or just add my own table.
table, and I can give my table a name, and I can just say contact list, it has to be the
same list, and then I can give it a description.
So remember, a lot of these description fields that we have in Coze also run through large language models to help with the context.
So your large language model works with all these different skills.
Yeah.
Every time you see a description in Coze, just think of it as a prompt to the large language model.
Because the large language model is going to read your description to decide, okay, when should I use this field?
Or when should I write data into this table?
So, um, you know, we'll just use this for single user mode.
So this is a contact list for people I meet on my trips.
I want to store.
This contact list will store names, phone numbers, and email addresses, right?
So then I can add a field here for name, and I can give it a description: name of the person I met.
And I can keep that as a string, right, and I make this necessary to add.
I can also add a field for the phone number, phone, and I'll just do:
the phone number of the
person I met.
All right,
and I can add this.
To be honest,
this is something you're probably familiar with, obviously you guys are on dev.to, so you know
phone numbers are kind of interesting, where they're not necessarily integers.
We could just put this as a string, and we can keep this necessary.
So you can always change the data type for each field that you have in your database.
And you can do email, and you can just keep email the same.
You say: email of the person I met, right?
And we don't have to make this a necessary field, right?
So we leave that unchecked.
And we can save it.
Oh, and we have to change this to a string.
And then we could say: I just met a cab driver in Ginza.
Let's just say his name is John, and his phone number is
518-517-884-70-099, and his email address is John@cabsandginza.com.
Save this to my contact list.
So we'll see
that,
and in our conversation, you'll see that it uses our database, right?
So it runs this, and if you look at our database here
and see how it's all broken down, you'll see this entire response that's here, and how it's being added to our database using SQL, right?
So that's the difference.
And we can also look at our database that's here as well, and you'll see that John is saved in our database.
So there's John, his phone number, and his email.
So databases are really powerful, because then I can reference John later and I can say to my bot,
hey, what's John's number?
And we'll be able to get John's number, and I can even call him right away.
And if you have Coze inside of Telegram, Discord, or anywhere that's portable on your mobile device or computer,
you can just have this stored inside of your chat, and you can always chat with your bot and get the information that you need based on what's saved in your database and
how your entire bot is being used.
So powerful.
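For a sense of what "being added to our database using SQL" looks like behind the natural-language conversation, here's a small sketch using SQLite. The table and column names follow the demo (name, phone, email); the exact SQL Coze generates isn't shown in the video, and the contact values below are made-up placeholders.

```python
# Illustrative only: the kind of SQL that the bot's natural-language request turns into.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE contact_list (name TEXT NOT NULL, phone TEXT NOT NULL, email TEXT)"
)

# "I just met a cab driver in Ginza... save this to my contact list" becomes an INSERT.
conn.execute(
    "INSERT INTO contact_list (name, phone, email) VALUES (?, ?, ?)",
    ("John", "555-0100", "john@example.com"),  # placeholder contact details
)

# Later, "what's John's number?" becomes a SELECT.
row = conn.execute("SELECT phone FROM contact_list WHERE name = ?", ("John",)).fetchone()
print(row[0])
```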
This is kind of an Excel sheet almost, except you don't upload an Excel file.
You query and write the data with natural language through just chatting with the bot.
Yeah, chatting with the bot is really powerful.
I don't have to waste the time of going through the exact cell in the Excel sheet and putting it in there and making sure things are in
alphabetical order.
I'll leave it up to a large language model to help me do it.
and I'll just be the one that saves the information in my database.
So databases are really, really powerful as well.
Simple helps you get the job done.
Yeah.
We can just quickly run through the other few features.
So long-term memory,
if you toggle it on,
will basically remember what you've chatted about with the bot, and then it can use that information to enhance the responses in the future.
So it's just an on and off toggle. And the file box, think of it for now as a photo album.
So let's say I took a lot of photos on my trip.
I can send those photos to the bot, and then I can save these photos in the file box, and
then I can query these photos using natural language.
And then we have some questions about text to voice.
So if we scroll all the way down, there's a voice feature, so you are able to select a voice for your bot.
So maybe we can just try adding a
voice. There are different languages you can choose, so maybe we can select an English voice and then just choose one of them,
and then once you confirm, the bot will be able to basically read out the response in the voice you selected.
But currently, if you hover over the information icon next to voices,
I think voices are only supported in certain chat apps; not all the chat apps support text to voice.
So that's just something to take note of.
Another question is, what is the difference between single and multi-user mode in database?
So we go back to the database?
And, yeah, click on edit.
Yeah, there's a query mode.
So if you hover over the little "i" next to table query mode:
multi-user mode helps developers and users read, write, and delete any data from the same channel in the table.
The read, write, and delete permissions are controlled by business logic rather than by the creator themselves.
So people will be able to add to the database rather than just the creator.
So, basically, it depends on whether you want people to only be able to read or modify their own data,
or whether you want to create a shared database between multiple people.
So it's really useful for community bots.
So if you're creating something in your Discord server between just you and your friends, or something like that,
this is really powerful. You may be on a team or in a business together,
and you want to be able to share a database of information with the bot you're using to help streamline your processes.
Using this type of feature is very powerful.
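As a rough mental model of the distinction (this is a conceptual illustration, not Coze's actual table query logic; the per-user scoping shown here is an assumption used only to contrast the two modes): in single-user mode each chat user effectively sees only their own rows, while in multi-user mode everyone in the same channel shares the same rows.

```python
# Conceptual illustration only -- not Coze's implementation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (uid TEXT, content TEXT)")
conn.execute("INSERT INTO notes VALUES ('alice', 'team standup at 10')")
conn.execute("INSERT INTO notes VALUES ('bob', 'ship the Discord bot')")

def read_notes(current_uid: str, multi_user: bool):
    if multi_user:
        # multi-user mode: everyone in the channel reads the shared data
        return conn.execute("SELECT content FROM notes").fetchall()
    # single-user mode: each user only sees rows tied to them
    return conn.execute(
        "SELECT content FROM notes WHERE uid = ?", (current_uid,)
    ).fetchall()

print(read_notes("alice", multi_user=False))  # only Alice's rows
print(read_notes("alice", multi_user=True))   # the whole shared table
```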
Okay.
We also have a feature called multi-agents that we can quickly go into.
So if we look at the bot now, it's currently a single agent. Can we hover over to single agent?
Yeah.
So this is where you can toggle between single-agent and multi-agent mode.
And I can actually bring up an example of multi-agent mode for you.
Yeah.
So think of it as creating a team of bots that each specialize in a different skill,
and they can work together to complete your task.
So this is my example here of multi-agent mode, but let's go a little bit further in.
So if we take a look here at multi-agent mode, right.
You'll see that I have a whole chat here.
Now, in multi-agent mode, you're actually able to combine different bots together to help you complete tasks.
Sometimes it's really hard for one bot to take on one prompt and then execute the task,
but if you have a team or swarm of bots that can complete a task together,
you can combine them through these different prompts and applicable scenarios,
so your bots can work together and figure out which bot needs to be used for a certain use case.
So this is a bot that I use in multi-agent mode to create a personal assistant system.
One bot is taking care of all the mail inside of my Gmail account.
Another bot is taking care of my calendar and booking and sending invites from my calendar.
Another one is updating Google Sheets, right?
So these different bots have different use cases, plugins, skills, and prompts as well.
And they're combined together to create a multi-agent experience, so that these bots can work together to give you better,
more streamlined answers instead of relying on one bot to handle one prompt.
Multi-agent mode is super powerful, and agentic AI is trending hard right now.
If you want to stay ahead of the game without having to get too technical with how to put this together,
multi-agent mode combined with workflows, skills, and plugins helps you create a really powerful bot
in a fraction of the time.
So, you know, this is our next step into how GPTs work with agentic AI.
Cool.
And the way the bot knows which task,
like which sub-bot, should be talking right now is through the
applicable scenarios on top of each agent, where you can specify when each bot should be called.
And you can give each one its own skills, and these skills can be plugins, knowledge, workflows, whatever you want.
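As a rough sketch of what those applicable scenarios do (this is an illustration, not Coze's routing code; the agent names and scenario descriptions below are made up for this example), you can think of each agent carrying a short description of when it should take over, and the orchestrator picking the matching one for each message:

```python
# Illustrative sketch of multi-agent routing -- not Coze's implementation.
agents = {
    "mail_agent":     "use when the user asks about reading or sending Gmail",
    "calendar_agent": "use when the user asks about events, bookings, or invites",
    "sheets_agent":   "use when the user asks to update or query Google Sheets",
}

def route(user_message: str) -> str:
    """Pick an agent. In Coze the model reads the applicable scenarios;
    here we keyword-match to keep the sketch self-contained."""
    msg = user_message.lower()
    if any(w in msg for w in ("email", "inbox", "gmail")):
        return "mail_agent"
    if any(w in msg for w in ("meeting", "calendar", "invite")):
        return "calendar_agent"
    if any(w in msg for w in ("sheet", "spreadsheet", "row")):
        return "sheets_agent"
    return "mail_agent"  # fallback agent

print(route("send an invite for Friday's meeting"))  # -> calendar_agent
```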
Cool.
So after I've built my bot and I'm happy with it, how do I publish it so I can share it with more people?
Yeah.
So we can even do this from our multi-agent mode, because you can publish multi-agent bots as well.
So let's go over here and publish our bot, and we can have this help us with the description of our bot.
I can press Confirm here as well,
and let's go over here to our publishing.
So inside of our publishing, let me just go here real quick, we have our changelog.
So we can actually generate the changes in our changelog,
or we can write anything that we changed ourselves, right?
So this changelog right now is using AI to generate all the things that we
changed inside of our bot to help us stay up to date with what's happening.
It's our own version of version control, basically.
And then we also have places that you can publish your bot.
So if you remember, at the beginning we showed you the bot store.
The bot store is great because you, a community member, can also create a bot and publish it to the bot store.
So if I go here and select bot store, I'm already authorized and ready to go.
And then I already showed you private configuration versus public configuration at the beginning.
So with public configuration, remember, we were able to view the prompts, and let me just scroll over here to it.
We were able to view the prompts of bots that were created, right?
So I can see the conversation and talk to a bot, but then I can also see further how it was created.
And if you're a bot creator and your bot has knowledge or databases or variables and other personal information
being used there that you don't want people to have access to, you don't have to worry about it.
It won't be available to them when you publish your bot to the store. They'll just be able to have conversations with your bot using the knowledge and everything else that you have, and you won't
have to worry about your knowledge going anywhere else. Now, right now, I won't publish this to the bot store.
However, you can publish to Discord, Telegram, Messenger, Line, Instagram, Slack, Reddit, Lark, all these different popular chat applications that you could use.
And again, it helps you make your bot portable.
You can bring it anywhere you'd like if you're using any of these apps, and you can also share it with others
without having to make a whole new account.
So if you're familiar with the GPT store,
you're going to have to create an account,
and then you're also going to have to sign up for premium to use a bot.
With Coze, you don't have to do all that.
You're able to use your bots in other places that people are most likely already using, like these other chat apps.
And you can take a look at all of that in the documentation as well.
So if you go to the Coze documentation,
let's say you go to the front page and you go to the bottom here, you can open up the documents.
And you can also go here to Publish.
In Publish, you'll see that it has all the different publishing options that you could use, like Discord, Telegram, and Instagram.
It gives you all the documentation you need to get started.
For most of these services,
all you need to do is make sure you have an API key or secret key in their developer portal,
and you can publish your bot really easily.
The easiest one to publish to is Telegram, and after that, I would say Discord is very easy to do too.
So just follow along with the documentation.
We break it down all the way from step one to the end to where it's published,
and if you have any trouble, just join the Discord and ask some questions.
We'll be happy to help.
Awesome.
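For example, for Telegram the key you need is the bot token that @BotFather gives you. A quick, optional way to sanity-check that token before pasting it into Coze's Telegram publish settings is to call Telegram's public getMe endpoint; the token below is a placeholder, and this snippet is just a convenience check, not part of the Coze flow itself.

```python
# Sanity-check a Telegram bot token before publishing (token is a placeholder).
import json
import urllib.request

TOKEN = "123456:ABC-DEF_your_botfather_token_here"  # placeholder, not a real token

with urllib.request.urlopen(f"https://api.telegram.org/bot{TOKEN}/getMe") as resp:
    info = json.load(resp)

print(info["ok"], info.get("result", {}).get("username"))
# If "ok" is True, the token is valid and ready to paste into the
# Telegram publish settings in Coze.
```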
Yeah, so if you publish it to the bot store, as you can see on the front page of the store, we also have a recommended
section. So we regularly go through all the bots submitted by our developers and make
sure we surface the best bots under the recommended section, and then we also have a banner where we can feature your bot.
So this is where you can get a lot more exposure for your bot, to get more users, more people to know about it.
So I highly recommend everyone submit their bots to the bot store.
And if you share it in the Coze Discord server,
under the Share Your Bots channel, you also have a much higher chance of having your bot featured
in the recommended section.
So this is our Discord.
We have a Share Your Bots channel where you can share your creations.
Just basically explain what your bot is and attach a link.
We also have a channel called Talk to Coze AI where you can ask questions about Coze by mentioning the Coze Assistant bot.
So the Coze Assistant bot is trained on all the Coze documentation and frequently asked questions.
So if you have any questions, feel free to join our Discord server and chat with other developers or chat with our team there.
And if you have any feedback on the product, this is also where you can post them.
Cool.
And then, yeah, so I think this is pretty much the walkthrough of Coze as a platform.
And if you have more questions, there are several resources available to you.
One is you can look at our documentation.
It's pretty comprehensive.
So if you have a question about a certain feature, just go into the relevant doc and see if you can get your answer there.
Or, if you don't want to dig through the docs, ask in the Discord, because the assistant there is trained on this documentation as well.
Or can you use the Coze onboarding bot as well?
Yeah, exactly.
So when you go inside Coze, there's a home bot that is the same Coze Assistant you can chat with.
You can also ask this bot about Coze features.
And we also have a YouTube channel,
so we're continuing to create and upload more videos on how to use Coze, best practices, and case studies.
If you have ideas or suggestions for the kind of video you want to see, feel free to ping us in Discord.
Cool.
Yeah, I think this is it.
Let me send the Discord link to the chat.
And yeah, feel free to join us there.
Yeah, this session is recorded and we will post the recording on our YouTube and on Dev community after this.
Cool.
I think this is it.
Yeah, thank you so much for joining us, and we hope you have fun building on Coze.
Thanks, everybody.
Thank you.
Bye-bye.