Introduction to AI Product Development
Understanding the ChatGPT revolution
Get the project source code below, and follow along with the lesson material.
Download Project Source Code
To set up the project on your local machine, please follow the directions provided in the README.md file. If you run into any issues with running the project source code, feel free to reach out to the author in the course's Discord channel.
[00:00 - 00:08] Welcome, I'm Luis and I'm excited to embark on this course with you. We will learn how to build AI products.
[00:09 - 00:15] Here you can see the homepage of ChatGPT. It's a very simple UI with a few buttons and a text input.
[00:16 - 00:18] We can write a prompt. Let's say, "Hey!"
[00:19 - 00:28] And now we get an answer in natural language from GPT, and we could carry on a conversation. This new product was a massive success.
[00:29 - 00:35] Its adoption was unprecedented. You can see that in five days it got 1 million users.
[00:36 - 00:43] In two months it got 100 million users. Bill Gates called it the start of a new age, the age of AI.
[00:44 - 00:51] I'm a tech lead at a data science company and I specialize in building AI products. Let me show you some very simple AI products.
[00:52 - 01:04] Here we can see a feature to convert English into emojis. For instance, let me input a short phrase, click on Emojify, and I get my emojis.
[01:05 - 01:13] This is the kind of feature that, ten years ago, would have been very challenging to build. Now it has become quite accessible.
[01:14 - 01:17] Next, let's see the chat. The chat has now become a classic.
[01:18 - 01:26] I'll type a message and I get a conversation that I can carry on, just like I would with a human.
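Under the hood, "carrying on" a conversation usually means resending the growing message history on every turn, since chat completion APIs are stateless. Here is a minimal sketch, with illustrative type and function names that are not taken from the course's code:

```ts
// Illustrative sketch: the client keeps the whole conversation and resends it
// on every turn so the model has the full context.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

let history: ChatMessage[] = [{ role: "user", content: "Hey!" }];

// After each reply, append the assistant message and the next user turn,
// then send the whole array back to the model.
function addTurn(
  history: ChatMessage[],
  reply: string,
  nextUserMessage: string
): ChatMessage[] {
  return [
    ...history,
    { role: "assistant", content: reply },
    { role: "user", content: nextUserMessage },
  ];
}

// Example turn (illustrative strings):
history = addTurn(history, "Hi! How can I help?", "Tell me about AI products.");
```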
[01:27 - 01:31] Lastly the agent. Suppose I want to know the temperature in Paris.
[01:32 - 01:42] With a plain model this would not be possible, because the current weather is not in the training set. It becomes possible with an agent workflow.
[01:43 - 01:49] Let me show you. I ask for the weather in Tokyo and I get the answer.
[01:50 - 01:57] What happened? Well, the model made an API call to an external service and got details about the current weather.
[01:58 - 02:02] That's an agent workflow, and we'll see how it can be built. Now let's go back to ChatGPT.
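As a rough sketch of that kind of agent workflow, assuming an OpenAI-style chat completions API with tool calling: the weather URL, tool name, and helper functions below are illustrative, not the course's implementation.

```ts
// Hypothetical external weather service used by the agent (illustrative URL).
async function getCurrentWeather(city: string): Promise<string> {
  const res = await fetch(`https://api.example.com/weather?city=${encodeURIComponent(city)}`);
  return res.text();
}

// Thin wrapper around an OpenAI-style chat completions endpoint (model name is illustrative).
async function chat(apiKey: string, body: object) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({ model: "gpt-4o-mini", ...body }),
  });
  return res.json();
}

// Agent workflow: the model decides to call the weather tool, we execute the
// call against the external service, then feed the result back so the model
// can answer in natural language.
async function askAboutWeather(apiKey: string, question: string): Promise<string> {
  const tools = [
    {
      type: "function",
      function: {
        name: "get_current_weather",
        description: "Get the current weather for a city",
        parameters: {
          type: "object",
          properties: { city: { type: "string" } },
          required: ["city"],
        },
      },
    },
  ];
  const messages: any[] = [{ role: "user", content: question }];

  const first = await chat(apiKey, { messages, tools });
  const toolCall = first.choices[0].message.tool_calls?.[0];
  if (!toolCall) return first.choices[0].message.content; // no tool needed

  const { city } = JSON.parse(toolCall.function.arguments);
  const weather = await getCurrentWeather(city);

  messages.push(first.choices[0].message);
  messages.push({ role: "tool", tool_call_id: toolCall.id, content: weather });
  const second = await chat(apiKey, { messages });
  return second.choices[0].message.content;
}
```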
[02:03 - 02:10] In this module we'll focus on system design. We will make the core architectural choices and we'll pick our stack.
[02:11 - 02:20] Now, as product builders, when we look at ChatGPT we ask ourselves: what are its core differentiators? What made it so powerful, so successful?
[02:21 - 02:25] There are essentially two. The first is generalist artificial intelligence.
[02:26 - 02:28] The second is streaming. Now, generalist: what does it mean?
[02:29 - 02:34] We already had many AI products, but they were specialists. We had products that would classify images.
[02:35 - 02:41] We had products that could do sentiment analysis. We had products that would predict the success of a marketing campaign.
[02:42 - 02:56] The GPT model is generalist: the model has learned language and the world. Thanks to API endpoints, we can consume those models as a service, and we'll see how to use those APIs.
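As a hedged sketch of what "consuming the model as a service" can look like, here is a single HTTP call to an OpenAI-style chat completions endpoint. The model name and prompts are illustrative, not the course's code.

```ts
// Minimal sketch: one generalist model behind one HTTP endpoint, and the
// "product" is largely the prompt we send (model name is illustrative).
async function complete(apiKey: string, prompt: string): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// The same call can power very different features just by changing the prompt:
// complete(key, "Translate this text into emojis only: a walk on the beach");
// complete(key, "Classify the sentiment of this review as positive or negative: ...");
```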
[02:57 - 02:59] The second one is streaming. Let me show you.
[03:00 - 03:06] I ask for a poem about builders. And I get an answer.
[03:07 - 03:10] Now we can see that the answer is streamed. Why is streaming needed?
[03:11 - 03:15] It's needed because ChatGPT is a large language model. It's huge.
[03:16 - 03:20] GPT has more than a billion parameters. It's big because it needs to be smart.
[03:21 - 03:23] But there is a price to be paid. That price is latency.
[03:24 - 03:30] It can take dozens of seconds to generate an answer, which is extraordinarily slow for a web application. The solution is streaming.
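As a sketch of what streaming consumption can look like on the client, assuming an OpenAI-style endpoint that emits server-sent-event chunks ("data: {...}" lines) when stream: true is set; the function name, model, and prompt are illustrative, not the course's code.

```ts
// Read a streamed completion incrementally and surface each token as soon as
// it arrives, instead of waiting for the full answer.
async function streamPoem(apiKey: string, onToken: (token: string) => void) {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      stream: true,
      messages: [{ role: "user", content: "Write a short poem about builders." }],
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any incomplete line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const delta = JSON.parse(line.slice("data: ".length)).choices[0]?.delta?.content;
      if (delta) onToken(delta);
    }
  }
}

// Usage: render tokens as they stream in, e.g.
// streamPoem(process.env.OPENAI_API_KEY!, (token) => process.stdout.write(token));
```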
[03:31 - 03:46] In this course we'll look into the details of how to implement streaming on the front end, on the back end, and at the network layer, and how to keep streaming when we build complex workflows with asynchronous code execution. So see you soon, and happy learning.