Introduction to Cursor - AI Code Editor ✨

Cursor is a groundbreaking code editor that forever changes the way we program.
At least, that is what some people who have already used this tool claim, praising its native integration of generative AI.
On the other hand,
this tool also faces significant criticism, particularly from advanced programmers who claim that it is a useless tool in the hands of a professional programmer.
Instead of wondering which side is right, it is much better to simply download this editor and then run it on your project.
Since Cursor is built on top of Visual Studio Code,
its interface will already be familiar to at least some of you.
I have already completed the configuration and it's not the topic of this material.
If you work with Visual Studio Code on a daily basis, there is an option in the settings to import your profile, configuration, and extensions.
Note, however, that this concerns the settings of Visual Studio Code itself; we will return to the settings specific to Cursor later.
Meanwhile, I would like us to take a look at the project that I am developing here and that we will work on in Cursor.
The goal of this project is to enable personalized interaction with large language models such as GPT-4.
An additional complication is the fact that I am using two tools for building this project that I was not familiar with until now.
The first of these is the Hono framework, which is a relatively new JavaScript framework.
Additionally, I set out to learn about the Drizzle tool, which is an ORM that I will be using to interact with the SQLite database.
There is a certain problem associated with both of them regarding large language models.
Namely, I don't know if you are aware, but the knowledge of the models is frozen in time and does not encompass all possible information.
This means that we can ask a question, for example, about Drizzle ORM and we will get the correct answer.
However, this does not mean that if we ask a question about specific functionalities, we will receive a correct answer.
For example, I went here to the documentation, to the section covering the integration of SQLite and Bun.
Here is a snippet of code about which we will now ask the model a question.
Although the model generates a response that looks correct at first glance, it is unfortunately not, or rather, it is outdated.
Either way, the code generated here is incorrect.
Here we see a comparison of exactly this fragment with the one we have in this place.
Everything seems similar, but the imports do not match.
It is particularly important to pay attention to this because in some situations the models will generate code that is not up to date,
while in others they will suggest functionalities that never existed.
We must therefore keep this in mind and remember not to switch off our own judgment when working with large language models.
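To make the import issue concrete, here is a minimal sketch of the Drizzle + Bun SQLite setup in the style the documentation showed around the time of recording; the import paths are exactly the kind of detail that outdated model knowledge tends to get wrong:

```typescript
// Current-style Drizzle + Bun SQLite setup (per the Drizzle docs at the time);
// note the "drizzle-orm/bun-sqlite" and "bun:sqlite" import paths, which
// older model output often replaces with paths that no longer exist.
import { drizzle } from "drizzle-orm/bun-sqlite";
import { Database } from "bun:sqlite";

const sqlite = new Database("sqlite.db"); // file-based SQLite database
export const db = drizzle(sqlite);        // Drizzle wrapper used for queries
```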
Interestingly, this does not mean that large language models are useless in this particular case.
Namely, if we now return to Cursor and start a chat here using Command+L, we can see that we have the option to search the internet while generating a response.
Then the model will use not only its base knowledge to respond but also the results of internet searches.
However, the support we can provide here does not end there.
Specifically, cursor allows us to add additional context in the form of documentation for selected tools.
We have both the library of default documentation available and the option to connect your own documentation.
In my case, I simply imported the documentation for Drizzle ORM and Hono here.
This way, I can now reference them by calling them in this manner.
We have confirmation here that the documentation has been added to the context and then we can ask our question.
We see that Cursor searches the internet, then also searches the documentation, and based on all this data it generates a response.
If we go down here and now compare this response with the section of the documentation that we have here,
we can see that the response is exactly what we were looking for.
This does not, of course, mean that we have solved all the problems in this way and that the answers generated by Cursor will always be correct.
We simply see here an example of how significant the context we add to the conversation is for the quality of the model's responses.
However, we must know that this context is not infinite.
Although it is becoming larger for the latest large language models,
Cursor still limits it by default,
selecting only the most important fragments of the context that we provide in the form of search results or, as in this case, documentation.
In addition to external data sources, we also have the option to include individual files or directories.
There is also the possibility to ask a question about, for example, a specific pull request or commit.
Ultimately, we also have the option to discuss the entire source code of our project, but we must remember the aforementioned context limitation.
Specifically, the questions we will be asking here will be used to search the project and find the most relevant sections possible.
This means that the quality of the responses will depend on the fragments found and in
turn their discovery will depend on the question we asked.
One can imagine that we are dealing with an advanced search engine that is capable of searching
not only based on keywords but also deepening its search and for example assessing which
of the found fragments are actually relevant from the perspective of our conversation.
In other words, the functionalities available to us here allow us to easily provide information about our project and the tools we are working with,
making the model's responses much more personalized than, for example, in ChatGPT or other similar chatbots.
Of course, it does not guarantee 100% accuracy of code generation, but it undoubtedly helps a lot.
Now, let's return to the functionality of our project, and I will just make sure that the server I configured here is running.
It seems that we can send HTTP requests to it,
which I will do using the Alice application, my interface for interacting with large language models.
I have set the address of our server and the appropriate endpoint here, which connects this graphical interface to our server.
I just need to switch to the appropriate mode now, and if I send a message, I will receive a response from the model shortly.
We can confirm this by making a small change here to display the content of the message.
As I start writing console.log here, you can see that, similarly to GitHub Copilot, I receive an inline suggestion.
Instead of accepting it, however, I will use the option that is suggested here.
Namely, I will press Command+K.
A window has been displayed here that allows me to enter a simple command, the result of which will be generated code.
Specifically, I can request to display the content of the last message.
After a moment, the code has been generated and I can now accept it by pressing the Y key or Command+Enter.
After saving the changes we can send one more message and we will see that this time the content of my message has indeed appeared here.
This means that without a doubt our connection is configured correctly here.
I will now remove this code by pressing Command+Z, and we can immediately see that I can not only undo the addition of the code but also go back to the instruction that I typed here.
It turns out that I have the option to either return to that first step or simply comment on the change that was made.
Now, however, we will commit all the changes we have made here and concentrate on our main task.
It will involve ensuring that the interaction we have here, meaning the content of the messages sent in this place,
is saved in our database.
If we now take a look at the contents of the database,
we will see that I already have tables configured here inside which I will want to store the content of messages.
The structure of this table, or tables, is of course also defined on the code side, for example at this point.
We therefore have general information here that may be useful for implementing this functionality
and for providing the appropriate context to the model, thereby increasing the chances that the generated content will be correct.
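As a rough illustration of what such a table definition can look like on the code side, here is a hypothetical Drizzle schema for a messages table; the column names are assumptions made for this example, not the project's actual code:

```typescript
// Hypothetical messages table in Drizzle's SQLite syntax; illustrative only.
import { sqliteTable, text, integer } from "drizzle-orm/sqlite-core";

export const messages = sqliteTable("messages", {
  id: integer("id").primaryKey({ autoIncrement: true }),
  threadUid: text("thread_uid").notNull(), // groups messages into one conversation
  role: text("role").notNull(),            // "user" or "assistant"
  content: text("content").notNull(),      // the message text itself
});
```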
However, contrary to what it may seem, I will not directly ask about implementing this functionality,
but rather pose a question scoped to the src directory.
This question will not only point to that directory,
but will also take into account the problem we need to solve.
Specifically, it is the ability to save messages exchanged between the user and a large language model while maintaining the current project structure.
I would like us to plan the implementation together and decide what we can start with.
For such tasks, I always use the Claude 3.5 Sonnet model here, because at the time of recording this material
it is, among other things, the best available model for tasks related to code.
And now we see how Cursor searches the content of our files and then suggests changes.
The first point states that we already have a message structure.
This is true.
Next, we receive a suggestion for creating a service that will include methods responsible for creating new messages.
I will just add that this particular suggestion does not come directly from the model itself,
but from the fact that I already had a file with that name created here, although it was empty.
However, this does not change the fact that the suggestion we have here is entirely correct
and is an element of what we are aiming for.
Next, we have a suggestion regarding the controller itself and the fact that this is where we will be saving the messages.
Although no changes have been made here, they are indeed located right here.
We see that the model suggested creating a thread identifier here and then added both messages.
At first glance, we achieve our main assumptions regarding saving interactions in the database.
Unfortunately, this does not mean that the logic we have here is correct.
Firstly, the fact that we are generating a thread identifier here means that we will only save two messages in the given conversation.
This means the model misunderstood that we actually want the ability to pass the conversation identifier at this point,
and only when it is not provided should one be generated.
The second problem is the fact that the response from the model will not always be provided in full.
The reason is that we have the option of streaming information here and as a result,
the outcome will not yet be available at this stage.
Furthermore, we will also want the array of messages provided here to not be just the
last message conveyed in this place, but a complete set of messages present in the given thread.
In other words,
we have a few problems here,
and although the code written here is correct in itself, unfortunately, it does not align with the logic we expect.
This is a perfect example that the model will not be able to do the work for us unless it receives precise requirements.
And moving forward, we also have suggestions regarding service updates and, interestingly, the model pointed out an issue concerning responses that are being streamed.
Unfortunately, it suggested a rather uninteresting solution, which involves simply waiting for
the streaming to finish, meaning the entire response being generated, and only then sending it in reply.
Of course this disrupts the entire functionality related to streaming,
as we want the response to be returned in parts, allowing us not to wait for it to be generated in full.
So we need to take all of this into account in the implementation itself.
However, for this purpose, we will not work in the chat but will move on to a functionality called Composer.
Composer is a floating window that, similar to a chat, allows us to refer to the existing context of the project and engage in conversations.
The difference is that when we enlarge it, we can paste in the instructions or descriptions of the changes we want to make.
I have described here our main goal which is the ability to save the content of
messages and then a few points outlining how I want this to be done.
So I want to avoid modifying too much of what we already have, to have the ability to
group messages within a thread, and to support both streaming responses and sending them in their entirety.
Ultimately, I want the logic to be separated into services, and at the very bottom
I also include the files that are to be modified or used as a reference for the overall coding style.
After sending this query, we see that the code is being generated by the model and soon we will see what the result will be.
It seems that the files are already ready.
In the messages service file, a class and methods have been created responsible for saving messages and retrieving threads.
It looks fine.
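A service like the one described here might look roughly as follows; this is a hedged sketch based on the narration (the method names and the imported modules are assumptions), not the file generated in the video:

```typescript
// Sketch of a messages service built on the hypothetical schema above.
import { eq } from "drizzle-orm";
import { db } from "./db";           // assumed Drizzle instance module
import { messages } from "./schema"; // assumed schema module

export class MessageService {
  // Persist a single message belonging to a thread.
  async saveMessage(data: { threadUid: string; role: string; content: string }) {
    await db.insert(messages).values(data);
  }

  // Return every message recorded for a given thread.
  async getThreadMessages(threadUid: string) {
    return db.select().from(messages).where(eq(messages.threadUid, threadUid));
  }
}
```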
Next, we have our chat endpoint, where we save the user's message and, in the case of a non-streamed response, also the content generated by the model.
Next, we also see the modified response file where we have included the logic responsible for saving the messages that are being streamed.
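The pattern described here, streaming chunks to the client while persisting the full text only once generation completes, can be sketched with the Vercel AI SDK's onFinish callback; treat this as an assumption-laden illustration (the conversation array, threadUid, and messageService are stand-ins from the surrounding discussion, not the video's code):

```typescript
// Stream the model response and save the complete text after streaming
// finishes, without blocking the stream itself.
import { streamText, type CoreMessage } from "ai";
import { openai } from "@ai-sdk/openai";

// Assumed inputs, declared only so the sketch is self-contained:
declare const conversation: CoreMessage[];
declare const threadUid: string;
declare const messageService: {
  saveMessage(m: { threadUid: string; role: string; content: string }): Promise<void>;
};

const result = await streamText({
  model: openai("gpt-4"),
  messages: conversation,
  onFinish: async ({ text }) => {
    // Runs once the whole response has been generated.
    await messageService.saveMessage({ threadUid, role: "assistant", content: text });
  },
});
// `result` can then be converted into a streaming HTTP response for the client.
```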
And we have the path configuration and everything looks fine although I would like to make a few changes.
First of all, I accept the changes and then I talk about what I would like to be changed further.
Unfortunately, we have a small problem with the size of this text field, but you can see that
I am asking for the thread identifier to be passed in the request object, not in a header.
Next, instead of the message service being passed in at this place, it should be created similarly to the LLM service.
Let's say that for the needs of this application, this will be an appropriate solution.
And finally, I want the thread identifier to be returned in the response object as well as in the response header.
I accept the changes and once again wait for them to be implemented.
We can already see here that the thread identifier is now being retrieved from the request object and if it is not available,
it will be generated here.
This is, of course, completely correct.
The rest of the logic in this file remains unchanged,
while we also see a change regarding the return of the header containing the conversation identifier.
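As a hedged sketch of the logic just described, a Hono handler along these lines would read the thread identifier from the request body, generate one only when it is missing, and return it both in the body and in a header; the route, field, and header names here are illustrative assumptions:

```typescript
// Minimal Hono endpoint sketch for the thread-identifier behavior.
import { Hono } from "hono";

const app = new Hono();

app.post("/chat", async (c) => {
  const body = await c.req.json<{ message: string; threadUid?: string }>();

  // Reuse the caller's thread id, or start a new thread when none is given.
  const threadUid = body.threadUid ?? crypto.randomUUID();

  // ... save the user message, run the model, save the assistant reply ...

  c.header("X-Thread-Uid", threadUid); // id returned via a response header
  return c.json({ threadUid });        // and in the response body
});

export default app;
```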
If we would like to make sure that no additional lines have been changed here,
we can of course switch the view and compare the individual lines that have been modified.
Meanwhile we can switch to this file where we see that the message service has been created
as I requested and the remaining parameters have been removed.
And finally, we also have the saving of the assistant's response here,
so it seems that everything is ready. I can now close our Composer and we can start the server.
However, we see that there is a small error related to importing a file,
so I will quickly fix it, and then we can see that more errors have slipped past us here.
Specifically, this concerns the path file that I overlooked in our context.
Therefore, we will return to the Composer and ask it to fix this error.
I will resize this window again to see exactly what will be modified.
At this stage, it seems that the error has been fixed.
The server has indeed been started, so now we will test the endpoints using Insomnia.
Here we have a request to the server, which I will slightly modify.
At the very beginning, it will contain only a message to our assistant.
And it indeed looks like the response has been generated,
so we should now move to the database to see that the content has been correctly saved here.
So we have a message from the user, Adam, and the assistant's reply addressed to Adam.
We also have a thread identifier that we can use to continue the conversation.
I just need to provide this identifier here using the threadUid parameter, and now we can
immediately see that we are in the same thread, as sending a repeated welcome message generated a response.
This means that we actually have additional messages here that have been linked by the same thread identifier and correctly recorded in our database.
We could possibly rename the threadUid property here for consistency with the format used in the rest of the code.
I propose that we do this with the help of a large language model and the Composer function.
I will create a new thread here and request changes to be made in our controller and response files.
We will increase the window here again to see what changes have been made.
And we can compare the names of these properties here and thus confirm that everything is correct.
Once again I will close these changes and check if the server is working correctly and then I will make one more query.
And here we have our identifier which we can pass here and sending another message will cause it to be linked to the existing thread.
We have confirmation of this here.
The identifier differs from that of the previous thread, so everything works correctly here.
We can also check the streaming option.
And here everything also looks fine.
This means that we have implemented the full functionality of saving message content in our application.
And interestingly, we did this without writing a single line of code.
At the same time, it is clear here that all of this did not happen without human involvement.
I constantly controlled this process and made key decisions while the language model generated the code in question.
Normally, I would have to spend a lot of time implementing this functionality considering
the fact that I am dealing with frameworks that I am just getting to know.
Of course,
this does not exclude the fact that I need to improve my skills in using these tools to work with them more easily in the future.
At the same time, the entry barrier, which undoubtedly exists here, is incomparably lower compared to the situation if we did not have large language models.
It can therefore be said that using Cursor allowed me to implement this functionality much faster,
which is certainly true, as recording this material took me several times longer than the implementation itself.
With the time saved, I can now do two things.
The first is simply to take a break or move on to the next functionalities.
And the second could be, for example, fixing some bugs or focusing on code optimization,
for instance by discussing with the model the best possible practices for this specific project.
Unlike search results on the internet, in this case I will receive personalized, tailored suggestions that I can then use however I deem appropriate.
And since we are already talking about extra time for solving problems, here is one:
we have a type incompatibility in TypeScript.
Namely, the objects we retrieve from the database do not match the type expected by the Vercel AI SDK.
This means that we need to perform some kind of mapping here.
We can solve this problem in several ways.
The first of these involves solving the problem manually.
And at this point, we could really conclude the topic.
Because despite the fact that Cursor has all these functionalities, it does not mean that we always have to use them.
However, I would like to show here a few approaches that we can apply.
First, we can use the "AI fix in chat" option.
In that case, the error content will be transferred to the chat and then a response will be generated.
In fact, the solution we have here is also an appropriate mapping of our messages, which resolves the issue.
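The kind of mapping that resolves this can be sketched as follows; the row shape is assumed from the earlier schema example, and the target shape follows the Vercel AI SDK's role/content message format:

```typescript
// Reshape database rows into the message format the AI SDK expects.
import type { CoreMessage } from "ai";

type MessageRow = { role: string; content: string };

function toCoreMessages(rows: MessageRow[]): CoreMessage[] {
  return rows.map((row) => ({
    role: row.role as "user" | "assistant", // the DB stores the role as plain text
    content: row.content,
  }));
}
```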
Of course,
I would like to point out that when it comes to this specific shape of objects,
if we want to expand the interaction with the model, we will also need to change these types here.
In any case, this is the second solution that we could apply here.
The next option is to continue the refactoring, for example, with the help of the Composer.
I just need to highlight this piece of code and then write, for example, a request to move the mapping to the message service.
Then the changes I am referring to will be made here, and I can accept them again and continue working on my application.
So now let's put it all together.
Cursor is primarily a quite good code editor based on the very popular Visual Studio Code.
Its basic functionalities include code autocompletion and suggestions, similar to what we know from tools like GitHub Copilot or Supermaven, for example.
Next, we also have inline prompts available, which allow for editing selected parts of the code.
We should choose this option only when the scope of the work being done is indeed very small.
Next we have a chat available whose main purpose is rather to discuss the introduced functionalities or to have a general conversation about the project.
For example,
we could ask here about the recently implemented changes,
specifically, I imported the current diff here,
which is the list of changes I have made since the last commit,
and I already see a general list of suggestions here that we could expand on and discuss further.
For example, regarding the optimization of the connection to the database, however, we will not be doing that now.
It seems to me that the concept of the chat is obvious here.
And finally, we also have the Composer, which we will actually use when we want to make changes in multiple files.
Of course, all these interactions with the model are also stored in the form of a history, allowing us to easily return to individual conversations.
However, ultimately in all these functionalities, what is truly most important is what we know about the operation of the large language models themselves.
The mentioned limits of base knowledge,
the limitations of the context window,
and the model's ability to maintain attention all translate into an overall understanding of what we can expect.
After all, we have seen examples where insufficient context resulted in incorrectly generated code that contained references to non-existent imports.
Similarly, there may be situations where the Composer will make changes that we actually
do not want, or that contain business-logic errors that our linter will not catch.
Moreover, ultimately we take responsibility for the generated code.
For example,
despite the fact that the LLM helped me a lot with the implementation of the last functionality,
as you can see, it left some unused imports here.
This is of course a simple example of some oversight.
However, we must remember that this rapid pace of implementing changes is, in many situations, only an apparent benefit.
If we were to generate a lot of code that might initially meet our needs,
at some point we could get lost in it, and we would then need more time to
understand what has been generated, making the benefit of using tools like
Cursor minimal or even non-existent.
As for the tool itself, I would like us to go through its settings now.
Namely, here we have the ability to manage our account, and as you can see, a paid
plan is required, which gives us access to premium models and fast generations; this is quite expensive, and the quota also runs out quickly.
Of course, we have the option to upgrade our plan as well as set limits, which also gives us access to the various types of models listed here.
In practice, however, as I mentioned, in almost every case we will want to use the best possible model, which at the time of recording
this material is Claude 3.5 Sonnet.
Meanwhile, returning to the settings, we see that we have defined rules for AI here.
This is nothing other than the so-called system prompt,
an instruction added to the chat whose content influences the behavior of the model.
As for me, I have simply defined a few rules here that I would like the model to follow.
Of course,
this does not mean that the model will always adhere to them,
because as I mentioned, large language models have a limited ability to maintain attention, especially in extensive contexts.
Either way, it is worth building such an instruction for yourself or, for example, getting inspired by those available on the cursor.directory website.
Here we see examples of instructions that are tailored to specific tools and technologies.
This means that we can choose a few of them and then use them either here or, for example, within the .cursorrules file.
All we need to do is place this file in the main directory of the project, and inside it we can save our instructions.
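For illustration, such a rules file is plain text; a minimal sketch in the spirit of this project (the rules themselves are invented for the example) could look like this:

```
# .cursorrules — example project rules (illustrative)
- Respond concisely and in English.
- Use TypeScript with strict typing.
- Keep the existing structure: Hono controllers, separate services, Drizzle for DB access.
```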
Interestingly, we could just as well save any other text file here, although such a file will not be automatically loaded by Cursor.
What I mean is that we can save some information here and then during the interaction with the model refer back to the content of this file.
As you can see, the model indeed made use of its content, specifically quoting a fragment of it.
This means that within such a file,
we can include either various types of rules or for example an action plan similar to the one I used earlier.
I also don't know if it's clearly visible enough,
but the ability to work with such text files can be taken to a slightly higher level.
For example, here I defined a task list and a section where the data is supposed to appear.
I can now run the Composer and then write, for example, a request to complete the tasks.
We see that in this situation the composer correctly completed all the tasks we had on the list and filled in the data section.
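As a sketch of what such a working file might contain (the file name and tasks are invented for the example):

```
# tasks.md — hypothetical working file for the Composer
## Tasks
- [ ] Add a messages table to the schema
- [ ] Save user and assistant messages per thread

## Data
(to be filled in by the model)
```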
Such an approach can be applied to the entire project,
allowing not only for task execution but also for providing additional information,
asking for feedback, or having the model pose additional questions to which answers can then also be generated.
In any case, this only shows that one can think outside the box and use Cursor not only for faster coding or for
introducing individual functionalities, but also for designing an entire work style that can be completely tailored to us.
Before we go any further, I would like to take a step back in history to the previous conversation I had in the background.
Specifically, this is an example where we utilize the ability to converse with the entire project code as well as the capability of large language models
to generate structured responses.
Specifically, I asked here to visualize how data flows within the query executed in the specified file.
Additionally, here in the codebase settings, I increased the number of search snippets to
400 and activated an additional step related to reasoning or contemplating the results.
As a result, the files were returned here and then their content was evaluated here.
Finally, we had this stage of thinking where the model considered how to generate the map.
As a result, we obtained a graph here, which I then pasted into the Mermaid tool.
As you can see,
this has led to the rendering of a graph that precisely represents how data flows in our application or within this specific entity.
Something like this can be useful during the project exploration phase.
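For context, the Mermaid input behind such a graph is just text; a tiny, hypothetical flow of the kind described here might look like this:

```mermaid
flowchart LR
  Client -->|POST /chat| Controller
  Controller --> MessageService
  MessageService --> DB[(SQLite)]
  Controller --> LLM[LLM service]
  LLM --> Controller
```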
However, it is important to remember that the model may overlook something or simply make a mistake.
I experienced this even here because I had to ask an additional question
in which I pointed out a potential error and requested an update to the graph.
The second response was already correct, or at least I no longer see any mistakes here.
Of course,
I suggest looking at this not only through the lens of this specific example,
but through the lens of the concept we applied here, which involves converting a regular model response into an automatically generated graph.
Still on the topic of the chat itself,
I would like to point out that we can work not only with text here, but also with images.
All we need to do is simply paste a screenshot here, and we will also be able to ask related questions.
I described a practical example of such a situation in one of my threads on X.
I was implementing a fix in the interface involving visual changes and appropriate state management.
At a certain stage I received responses here that were partially correct as not all requirements were met.
At that time, in addition to the message describing the problem, I also attached a screenshot that illustrates it.
Here again, it should be noted that the ability of large language models to interpret images is also limited.
We can read about this in this publication, which contains examples discussing issues such as detecting intersecting lines.
These are scenarios that we can easily encounter during, for example, coding interfaces.
Unfortunately, large language models will not be able to help us precisely in this context, although that does not mean we should not try.
In many situations conveying an image can be very helpful and enrich our instruction.
I suggest that we go back to the settings, because we have a privacy mode here that we obviously want to activate.
However, we must remember to respect the decisions of the employer or, for example, clients regarding the use of large language models.
This means that before we use tools such as Cursor,
we should consult either a supervisor or our client regarding privacy policy and the way data is processed.
In the Models tab, we have a list of available models that we can connect to and we can also add additional models here.
However, as you can see, I am rather interested in working with just one model.
In order to work with such models, it is necessary to connect API keys.
In my case, these are the OpenAI and Anthropic keys, but we also have a few additional options here.
The principle here is obvious, namely that a key allows us to utilize the given model.
Unfortunately, at least at the time of recording this material, if we want to use the Composer option, we cannot use our
own API keys but must rely on the plan available in the Cursor account settings.
The last tab we have here contains the settings that adjust the behavior of Cursor itself.
As you can see, these are quite simple settings, and each of them is described in detail.
I also assume that over time this tab will change here and new options will appear in it.
Therefore, we will not go through each of these settings in detail now, as they are largely self-explanatory.
I would just like all the settings that I have activated to be visible,
but that doesn't mean, of course, that it has to be the same in your case.
As for the settings we have here, I would just like to pause on the indexing of the project files.
Specifically, it may happen that Cursor starts generating responses using older versions of files or, for example, omitting new files.
At that point, we may want to re-synchronize the directories.
As part of the indexing itself, we also have the option to ignore selected files;
in any case, it is worth remembering this and listing them inside a .cursorignore file.
For example, that includes the node_modules directory or the .env file containing our API keys.
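A minimal .cursorignore matching what was just described might look like this (assumed location: the project root):

```
# .cursorignore — keep dependencies and secrets out of Cursor's index
node_modules/
.env
```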
I think that regarding the configuration itself and the options available here, that would be all,
so to finish, I encourage you to do two things.
The first concerns installing Cursor, adjusting its settings to your own needs,
and then implementing a very, very simple project from scratch.
It's simply about getting familiar with Cursor's mechanics, the functionalities available here, as well as the keyboard shortcuts.
All of this will allow us to use this tool more efficiently or simply make the decision that it is not for us.
On the other hand,
the second thing I encourage even more is to learn about the functioning of large language models, their current capabilities and limitations.
This knowledge will directly translate into the quality of the generated responses and thus into the value that the cursor will provide us.
Finally, I would like to add that Cursor is no longer the only code editor developing functionalities related to generative AI.
An alternative that is still in the early stages of development is the Zed editor,
as well as the tools created by JetBrains.
Unfortunately, despite my great affection and respect for that company, they remain behind in terms of implementing generative AI in their products.
Perhaps this will change soon, but for now, the best solution in this regard remains Cursor.
As for the topic of Cursor, that would be all.
Once again, I encourage you to install this tool and decide for yourself whether it is meant for you or not.
I hope that the examples and material I have shown will help you start working with this tool,
explore its capabilities more effectively, and stay aware of its limitations, as well as, above all, the responsibility that rests on us as developers for the code we create in our projects.
I personally keep my fingers crossed that generative AI tools will assist us all in our work and make programming even more enjoyable.