Creating a FastAPI server

[00:00 - 00:09] Welcome back. In this module, we will start building the backend. Recall that in the previous module, we built a front-end for the following use case:
[00:10 - 00:20] converting English into emoji. To create this server, we will use FastAPI, a very useful Python library that lets you set up a server very quickly.
[00:21 - 00:33] It leverages Pydantic to automatically validate the parameters of the body and generate Swagger documentation on the fly. It also provides many other facilities, like middlewares.
[00:34 - 00:40] Here we can see the CORS middleware. We need to make sure that our server accepts requests from the front-end.
[00:41 - 00:48] So here we are listing all the URLs that may be used by the front-end. We also list all the methods that are accepted.
[00:49 - 00:52] And here you can see the endpoints. So we created the FastAPI server.
[00:53 - 00:58] We are declaring an endpoint, a GET endpoint. And we are also declaring the path parameters.
[00:59 - 01:10] And automatically, the path parameters will be validated by Pydantic. Now we can launch this backend with a single shell command: `uvicorn main:app --reload`.
[01:11 - 01:17] And automatically, we get the Swagger documentation, which is auto-generated. Here is the Swagger UI. You can find it at the root under /docs.
[01:18 - 01:23] Here I open my endpoint. There is the parameters form.
[01:24 - 01:28] And let's just try it with a word. Let's click Execute.
[01:29 - 01:34] And we can see that the request indeed worked. It returned the word.
[01:35 - 01:39] And the param was indeed the word we sent. So as you see, very few lines of code.
[01:40 - 01:44] So we just set up our server. In the next lesson,
[01:45 - 01:53] we will see how to combine FastAPI with LangChain, a very popular library for building AI solutions on top of LLMs.