Mocking Streams
This lesson is part of the Responsive LLM Applications with Server-Sent Events course.

[00:00 - 00:06] Welcome back. In this lesson, we will learn how to mock a stream. Why would we want to mock a stream? There are two main use cases.
[00:07 - 00:31] The first is to run the front end without the back end. The second is for testing purposes. Very often, you end up needing to run the front end without the back end. For instance, you are launching a full-stack project, but it's not clear when the back end will be created. Maybe there is uncertainty about how it will be deployed, when it will be deployed, or what stack will be used.
[00:32 - 00:52] Another very frequent problem is that there is another team owning the back end, and some change needs to be made. Maybe there was a breaking change, maybe the service is down, maybe it's unreachable behind a firewall, and so on. And you have to wait for the other team to take action.
[00:53 - 01:06] You can still work on your front end as long as your code is well structured, in such a way that you can replace the network layer as needed. This is often called hexagonal architecture, onion architecture, or clean architecture.
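The idea above can be sketched in a few lines: the UI depends on a function type, not a concrete network layer, so a mock can be swapped in when the back end is unavailable. The names here (`FetchChat`, `realFetchChat`, `mockFetchChat`, `askAssistant`) are illustrative, not the course's actual code.

```typescript
// The UI depends only on this function type, not on any network code.
type FetchChat = (prompt: string) => Promise<string>;

// Production implementation: calls the real backend (URL is a placeholder).
const realFetchChat: FetchChat = async (prompt) => {
  const res = await fetch("/api/chat", { method: "POST", body: prompt });
  return res.text();
};

// Mock implementation: used when the backend is unavailable, or in tests.
const mockFetchChat: FetchChat = async (prompt) =>
  `mock answer to: ${prompt}`;

// The rest of the app receives whichever implementation is configured.
async function askAssistant(fetchChat: FetchChat, prompt: string) {
  return fetchChat(prompt);
}
```

Because both implementations satisfy the same type, nothing downstream has to change when you swap one for the other.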
[01:07 - 01:30] Second use case: testing. Because we want to test our code, we don't want to make real API calls every time in our CI/CD pipeline, for instance, and here mocking will be very useful. How do we do it? As we saw in a previous lesson, the Streams API is a native JavaScript API, very powerful, and we are going to use it to create our own stream and to control it.
[01:31 - 01:42] So you can see the function buildMockStream. What does it do? It creates a new stream instance, a ReadableStream, and it returns a pair: a reader and a controller.
[01:43 - 01:59] The reader will be used to read the stream, and the controller will be used to control the stream. Controlling means that you can enqueue new information into the stream, you can close the stream, and so on as needed. So you get full control over the stream you created.
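A minimal sketch of what such a helper can look like (the exact course code may differ): the trick is to capture the controller inside the ReadableStream's `start` callback, so the caller can enqueue and close from outside the stream.

```typescript
// Returns both ends of a stream: the reader consumes it,
// the controller feeds it.
function buildMockStream() {
  let controller!: ReadableStreamDefaultController<Uint8Array>;
  const stream = new ReadableStream<Uint8Array>({
    start(c) {
      // Capture the controller so callers can push chunks later.
      controller = c;
    },
  });
  return { reader: stream.getReader(), controller };
}
```

Usage is then fully under the caller's control:

```typescript
const { reader, controller } = buildMockStream();
controller.enqueue(new TextEncoder().encode("hello"));
controller.close();
```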
[02:00 - 03:27] Another thing to note: because we want to encode the text into a byte stream, we use the built-in TextEncoder to encode it. As usual, you can go to MDN; the docs are amazing, with great explanations and some code samples. On the page "Using readable streams" you'll find many examples that give you a better understanding of how it works and how to use the API. Let's go over the usage of this stream. Here, we are building a new implementation of the fetchChat function, and if you look at the interface, you'll see that it can be passed to the useCompletion hook. We build the mock stream, get a reader and a controller, and build an enqueueChunk method. We use setTimeout because we want to replicate the streaming effect where events arrive one by one over time. So here, we are enqueueing a data chunk: it's a server-sent event, it's encoded, and it follows our own schema of server-sent events (data chunks, error chunks), and we enqueue it over time. And that's all. If you pass this method to the useCompletion hook, you get the standard functionality with the sample text you need. Here, I put some sample text, but you can put emoji, anything you want; you get full control. We will reuse this stream for testing. See you soon.
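The usage described above can be sketched as follows, assuming a hypothetical `fetchChatMock` with an invented sample text and event schema (the course's real interface and schema may differ): TextEncoder turns each SSE-formatted string into bytes, and setTimeout spaces the chunks out to mimic real streaming.

```typescript
// A mock "fetch" that returns a reader over fake server-sent events,
// delivered one by one over time.
function fetchChatMock(): ReadableStreamDefaultReader<Uint8Array> {
  const encoder = new TextEncoder();
  let controller!: ReadableStreamDefaultController<Uint8Array>;
  const stream = new ReadableStream<Uint8Array>({
    start(c) {
      controller = c;
    },
  });

  // Invented sample text; replace with anything, emoji included.
  const words = ["Hello", " from", " the", " mock", " stream"];

  // Enqueue one SSE data chunk per word, spaced out to mimic streaming.
  words.forEach((word, i) => {
    setTimeout(() => {
      const event = `data: ${JSON.stringify({ text: word })}\n\n`;
      controller.enqueue(encoder.encode(event));
      if (i === words.length - 1) controller.close();
    }, 50 * (i + 1));
  });

  return stream.getReader();
}
```

A consumer (such as a completion hook) just reads until `done`, exactly as it would with a real network response, which is what makes the mock a drop-in replacement.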