Adding Tools To The Chat

  • [00:00 - 00:33] Welcome back. In this lesson we will discover tools. Let's look at our demo app, where we gave the model several tools, including an addition function. If we ask for the sum of 5 and 3, the assistant uses the addition tool to compute that the result is 8. Let's take a look at the add function: it is a simple Python function that takes two numbers, a and b, and returns the sum a plus b. Note the description in the docstring and the int type hints.
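The add function described above looks roughly like this; it is a minimal sketch, and the exact docstring wording in the course code may differ:

```python
def add(a: int, b: int) -> int:
    """Add two numbers a and b and return their sum."""
    return a + b
```

The docstring and the `int` type hints are not just documentation: they become the tool description and argument schema that the model sees.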

    [00:34 - 03:32] This metadata will be used by the model to understand how to use the tool. Now let's take a look at the code for our app. Here we can see the list of all the tools. Every tool is a simple Python function, and it can encapsulate many different use cases: a call to an external API, executing code in a sandbox, a query against a semantic database or a SQL database, and so on. Here are our events: as we set up previously, we may emit many types of events as the workflow goes on.

    The first change is the bind_tools method. What we are doing is simply making sure that any time a call is made to the OpenAI API, we pass a tools parameter describing all the available tools (see the sketch after this segment).

    Next we begin the loop. Why a loop? Because we want to implement a self-correction, or reflection, pattern: should the model make a mistake, we give it the opportunity to fix it, and should the model solve only part of the problem, we give it another try to complete the task. How do we break out of this loop? First, if the model cannot solve the problem within the maximum number of iterations, we stop, because we do not want an infinite loop; that is the fallback case where we stop anyway. The most common exit, though, is when the model decides that no further tool calls are necessary; then we break the loop.

    Inside the loop we create a new message, and since there will be many types of messages this time, we make sure to give it an ID so it is easier to track what is going on with that message, and we emit an event to say the message has been created. We make sure to respect the context window size, and then we call our OpenAI endpoint. As you can see, we get two things back: the text, which is emitted as usual, and the tool calls, i.e. which tools should be called according to the model. The complete method looks a lot like the old one, with two differences: we now pass and read the tool calls, and instead of yielding the chunks we emit them as events. Once we have the tool calls, the new signature returns them, we emit all the tool calls, and it becomes our job to execute them.
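As a rough illustration of the bind_tools step, here is a minimal sketch assuming the LangChain `ChatOpenAI` client; the model name and the `tools` list are placeholders, not the course's exact code:

```python
from langchain_openai import ChatOpenAI

# Every tool is a plain Python function; `add` is the example from above.
tools = [add]

llm = ChatOpenAI(model="gpt-4o-mini")  # hypothetical model name

# bind_tools ensures every call to the OpenAI API carries a `tools` parameter
# describing the available tools (name, description, argument schema derived
# from the docstring and type hints).
llm_with_tools = llm.bind_tools(tools)
```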

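The reflection loop itself could look like the sketch below. This is an assumption-laden outline, not the course implementation: `emit` and `execute_tool_calls` are hypothetical helpers, `MAX_ITERATIONS` is a placeholder, and the course version streams text chunks as events rather than awaiting a single response.

```python
from uuid import uuid4

MAX_ITERATIONS = 5  # placeholder bound so the loop cannot run forever

async def run_agent(messages: list, emit) -> list:
    """Call the model repeatedly until it stops requesting tools."""
    for _ in range(MAX_ITERATIONS):
        message_id = str(uuid4())
        await emit({"type": "message_created", "id": message_id})

        # (trimming `messages` to fit the context window would happen here)
        response = await llm_with_tools.ainvoke(messages)
        messages.append(response)

        # Most common exit: the model requested no tools, so it is done.
        if not response.tool_calls:
            break

        # Otherwise run the tools and loop again so the model can read the
        # results and fix or complete its answer (self-correction pattern).
        messages = await execute_tool_calls(response.tool_calls, messages, emit)

    return messages
```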
    [03:33 - 05:37] As you can see, we loop and execute every tool call one by one. For each tool call, and to make sure everything is sent to the UI, we emit events: one to say the tool call has started, one in case of success, and one in case of error. Then we make the actual call: here the function will be add and the arguments will be a and b. We call the tool, and last but not least, we create a new tool message. This is a new type of message, the one used to include the result of a tool call in the message history. Note the tool call ID: it must be consistent with the previous history, meaning the model expects that tool call ID to appear in a previous assistant message. With that, we have successfully implemented our use case. As you can see, while executing everything we kept emitting events, which means that on the front end we will be able to keep the UI up to date as actions are being taken. Let me show you an example: here we ask a more complicated comparison question, and as actions are taken we update the UI. This is made possible because, whenever anything happens in the whole workflow, we emit an event. In our next lesson we will look at how to build this UI. See you soon!
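To make the tool-execution step concrete, here is a minimal sketch assuming LangChain message types; the event names, the `emit` helper, and the `TOOLS_BY_NAME` registry are hypothetical, chosen only to mirror the started/succeeded/failed events described above:

```python
from langchain_core.messages import ToolMessage

TOOLS_BY_NAME = {"add": add}  # hypothetical registry of tool name -> function

async def execute_tool_calls(tool_calls: list, messages: list, emit) -> list:
    """Run each requested tool and append a ToolMessage with its result."""
    for tool_call in tool_calls:
        await emit({"type": "tool_call_started",
                    "id": tool_call["id"], "name": tool_call["name"]})
        try:
            tool_fn = TOOLS_BY_NAME[tool_call["name"]]  # e.g. `add`
            result = tool_fn(**tool_call["args"])       # e.g. a=5, b=3 -> 8
            await emit({"type": "tool_call_succeeded", "id": tool_call["id"]})
        except Exception as exc:
            result = f"Tool error: {exc}"
            await emit({"type": "tool_call_failed", "id": tool_call["id"]})

        # The tool_call_id must match the id in the assistant message that
        # requested the tool, otherwise the API rejects the history.
        messages.append(ToolMessage(content=str(result),
                                    tool_call_id=tool_call["id"]))
    return messages
```

The ToolMessage is what puts the tool result back into the message history, so the model can read it on the next iteration of the loop.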