Rendering Completion Output

  • [00:00 - 01:14] Welcome back. In this lesson, we will learn how to render the output of our completion. As you can see, it's simply a text string, so it's very simple: you just need to insert the text into the HTML template and it's done. The one thing to know is that GPT uses Markdown syntax to format its output, so you will need a library to render the Markdown. Here we are using React Markdown, which is a very useful and easy-to-use library: you simply use the ReactMarkdown component and pass in your text. You may have some problems with line breaks, so a plugin may be needed. Here we are using remark-breaks, another npm package, which is very straightforward. Lastly, a quality-of-life improvement is to add a loading indicator and a reading indicator. The loading indicator, shown while the first token is yet to arrive, is a little pulsating dot; the reading indicator is a dot we append during streaming.
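A minimal sketch of the rendering described above, assuming a `text` prop holding the completion received so far and an `isStreaming` flag (the component and prop names are illustrative assumptions, not the course's exact code):

```tsx
import ReactMarkdown from "react-markdown";
import remarkBreaks from "remark-breaks";

type CompletionOutputProps = {
  text: string;         // completion text received so far
  isStreaming: boolean; // true while tokens are still arriving
};

// Renders the completion as Markdown; remark-breaks turns single
// newlines into <br /> so line breaks survive rendering.
export function CompletionOutput({ text, isStreaming }: CompletionOutputProps) {
  return (
    <div>
      <ReactMarkdown remarkPlugins={[remarkBreaks]}>{text}</ReactMarkdown>
      {/* Reading indicator: a dot shown while the stream is still open */}
      {isStreaming && <span>●</span>}
    </div>
  );
}
```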

    [01:15 - 01:26] We made the pulsating dot using Tailwind; you can look up the code (a rough sketch follows below). Now let's go to the next lesson, which concerns mocking the stream.
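For reference, a pulsating loading dot can be built with Tailwind's animate-pulse utility along these lines (the exact markup in the course may differ):

```tsx
// Loading indicator shown before the first token arrives:
// a small grey dot that fades in and out via Tailwind's animate-pulse.
export function PulsatingDot() {
  return (
    <span className="inline-block h-2 w-2 rounded-full bg-gray-400 animate-pulse" />
  );
}
```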