Exploring AI Agents - Toward Autonomous Systems

  • [00:00 - 00:50] Welcome back. In this lesson we will discover agents. Agents are a new type of AI system where we give autonomy to a model to solve a given task. Agents are very much at the technical frontier: they are an active area of research and their definitions are not fully settled, so let's take a look at a few reliable patterns to gain a better understanding. The two patterns I want to demo are tool usage and self-correction, sometimes called reflection. What is tool usage? Tool usage consists of giving the model several tools to use as needed. For instance, here you can see our demo app. We have a chat, and this chat may use an external API and some basic Python functions. Now let's say we ask a question: what is the average of the temperatures of Tokyo and Paris?
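The core idea of tool usage can be sketched in a few lines. This is a minimal illustration, not the course's actual code: tools are plain functions registered by name, and a dispatcher executes whichever tool the model requests. The names `TOOLS` and `dispatch` are hypothetical.

```python
# Minimal tool-usage sketch: tools are plain Python functions the model
# can request by name; a dispatcher runs the chosen tool and returns
# the result to the model. Names here are illustrative, not real API.
TOOLS = {
    "add": lambda a, b: a + b,
    "divide": lambda a, b: a / b,
}

def dispatch(tool_name: str, *args):
    """Execute the tool the model asked for and return its result."""
    if tool_name not in TOOLS:
        raise ValueError(f"Unknown tool: {tool_name}")
    return TOOLS[tool_name](*args)
```

In a real app, the model's response would name the tool and its arguments (for example as JSON), and the dispatcher's result would be fed back into the conversation.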

    [00:51 - 01:44] That cannot be answered by a current LLM on its own, as LLMs do not have real-time data and are not very good at mathematics. So let's ask our model. What does it do? It uses the tools one at a time, as needed. Here we see a first tool call to an external URL for the temperature in Tokyo, the same for Paris, then an addition and a division. I could also ask for the weather in London, for instance, and I would get the correct answer. So that's the tool usage pattern, and it works quite well. The second pattern is self-correction, also called reflection. Sometimes when you give a task to a large language model, it will fail on the first try. Let me show you. Here we asked for some code to be written; the code was generated and executed, and it failed because the implementation was too slow.
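The chain of four tool calls in the demo (temperature in Tokyo, temperature in Paris, an addition, a division) can be sketched as follows. Here `get_temperature` is a hypothetical stand-in for the external weather API, and the readings are hard-coded placeholders rather than real data:

```python
# Sketch of the tool-call sequence for "average temperature of Tokyo
# and Paris". get_temperature stands in for an external weather API;
# the readings are placeholder values, not real measurements.
def get_temperature(city: str) -> float:
    fake_readings = {"Tokyo": 18.0, "Paris": 12.0}  # placeholder data
    return fake_readings[city]

def add(a: float, b: float) -> float:
    return a + b

def divide(a: float, b: float) -> float:
    return a / b

# The model chains the tools: two lookups, one addition, one division.
tokyo = get_temperature("Tokyo")
paris = get_temperature("Paris")
average = divide(add(tokyo, paris), 2)
```

Each intermediate result is returned to the model, which decides which tool to call next until it can produce the final answer.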

    [01:45 - 02:22] What we did was give the model its initial task and how it failed, and the model self-corrected. It produced a second implementation, which this time takes advantage of dynamic programming to be faster. That's a very powerful pattern that can improve the metrics of your workflow by quite a lot. Let me briefly mention some exploratory patterns that do not work very well at the moment but that may be the future. The first is planning, where you ask the model to plan future actions and then execute them, and the second is multi-agent collaboration.
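The slow-then-corrected behavior described above can be illustrated with Fibonacci numbers, a classic case where a naive first attempt is too slow and a dynamic-programming rewrite fixes it. This is a reconstruction of the idea; the exact task used in the video is not clear from the transcript:

```python
# First attempt: naive recursion, exponential time. Fine for small n
# but far too slow for large inputs -- the kind of failure the model
# is shown before self-correcting.
def fib_naive(n: int) -> int:
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Self-corrected attempt: bottom-up dynamic programming, linear time.
def fib_dp(n: int) -> int:
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

In the reflection loop, the model receives the original task plus the failure (a timeout or error message) and is asked to try again; here the second attempt trades exponential recursion for a linear bottom-up pass.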

    [02:23 - 03:21] For example, let's say we build a coding agent and a product manager agent, and the agents collaborate. Current-generation models struggle a bit with this, but in the future we should expect these patterns to work. And last but not least, I wanted to show you a vision of the future as described by Andrej Karpathy, a famous machine learning researcher. He describes a future where large language models are at the center of a new stack, Software 2.0, where the LLM acts as a kind of kernel orchestrating everything, and around it you have all the other services: you have memory, you have code execution as in a classical computer, you have specialized models, video models, audio models, and you have external access such as APIs and the browser. Thanks a lot, and in our next lesson we will start building our tool usage pattern. See you soon!