
Connecting to the Database

Connecting our tRPC routes to the Database

Project Source Code

Get the project source code below, and follow along with the lesson material.

Download Project Source Code

To set up the project on your local machine, please follow the directions provided in the README.md file. If you run into any issues with running the project source code, then feel free to reach out to the author in the course's Discord channel.


Lesson Transcript

  • [00:00 - 00:11] It's time for us to start connecting to our actual database from the back end. Now, I like to create architectures where I have one transaction per request by default.

  • [00:12 - 00:28] This shouldn't necessarily be the only way you access the database, but I think it's a nice default to know that any request that fails for whatever reason won't have written anything in the database. That obviously becomes a lot murkier when you start to access third party services and send emails and whatnot.

  • [00:29 - 00:43] So there's a whole slew of considerations to take into account there, but for our little app, having one transaction per request will work just fine. So we'll do that, but before we do any of that, we need to connect to the database in the first place.

  • [00:44 - 00:55] So let's start by installing Knex and creating a little function for accessing it in our back end. We're in the back end folder here, so I'll install Knex here.

  • [00:56 - 01:06] And the pg library that it needs in order to connect to the database as well. And then we'll create a little file called getDb that will just return the database.

  • [01:07 - 01:23] And that doesn't need to be in the tRPC folder because we might need it elsewhere as well. So this file simply creates a little Knex configuration, and it connects to localhost on the default port with our database, user, and so on.
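Here is a minimal sketch of what such a file could look like, assuming Knex with the pg driver; the database name, user, and password are placeholders for whatever your local setup uses:

```typescript
// getDb.ts -- a minimal sketch; connection details are placeholders.
import knex, { Knex } from "knex";

let db: Knex | undefined;

export function getDb(): Knex {
  // Create the Knex instance lazily and reuse it across calls.
  if (!db) {
    db = knex({
      client: "pg",
      connection: {
        host: "localhost",
        port: 5432,
        database: "app",
        user: "postgres",
        password: "postgres",
      },
    });
  }
  return db;
}
```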

  • [01:24 - 01:33] Obviously, this is not going to work in production, so we will need to do something else for that. But for development, this will work.

  • [01:34 - 01:45] We can now extend our public procedure with a piece of middleware that will create this transaction for us. If we go into our tRPC file here, I will just paste in this piece of code.

  • [01:46 - 01:56] So withTransaction is a piece of middleware that will basically establish a transaction. getDb returns a Knex instance, and that has a transaction function.

  • [01:57 - 02:03] So with this, there is now a transaction in place. We will then try and run the next function which is provided to us.

  • [02:04 - 02:09] This is how middleware works. So this will basically wrap whatever endpoint we're calling.

  • [02:10 - 02:22] And we will decorate the context that the endpoint is called with, adding this trx value, which is our transaction instance. Once that has succeeded, we will try and commit the transaction.

  • [02:23 - 02:32] And if any error is caught anywhere in the flow, we will roll back the transaction and then throw the error again. So in order to use that, we will add this to our public procedure.
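A sketch of the middleware just described, not necessarily the exact code pasted in the video. One detail worth noting: tRPC catches resolver errors and reports them on the result object rather than rethrowing inside the middleware, so the commit/rollback decision here branches on result.ok.

```typescript
// withTransaction middleware -- a sketch, assuming a tRPC v10+ setup and the
// getDb helper from earlier. File layout and import paths are assumptions.
import { initTRPC } from "@trpc/server";
import { getDb } from "./getDb";

const t = initTRPC.create();

export const withTransaction = t.middleware(async ({ next }) => {
  // Open a transaction explicitly so commit/rollback stays in our hands.
  const trx = await getDb().transaction();

  // Run the wrapped procedure with the transaction added to the context.
  const result = await next({ ctx: { trx } });

  // tRPC reports resolver errors on `result` instead of throwing here,
  // so commit on success and roll back otherwise.
  if (result.ok) {
    await trx.commit();
  } else {
    await trx.rollback();
  }
  return result;
});
```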

  • [02:33 - 02:47] So we'll specify .use and then withTransaction. This now means that the public procedure will be called with this trx instance in it.
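In code, that could look like this, continuing the sketch above (the variable names are whatever your tRPC setup file already uses):

```typescript
// The public procedure now runs every call inside the transaction middleware.
export const publicProcedure = t.procedure.use(withTransaction);
```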

  • [02:48 - 02:58] Let's update our endpoint to actually fetch the service types from the database instead of just returning this hard-coded array. The way we'll do that is by updating our query function here.

  • [02:59 - 03:11] So first of all, it needs to take the context, which is where the transaction lives. And then, instead of this array here, let's just fetch the service types from the database.

  • [03:12 - 03:39] I'll access the context, and as you can see here, the transaction is available to me because everything here is properly typed. So this is a Knex transaction, and the way I use it is I specify the table name in here, service_type, and then I will select everything and we'll just return these.

  • [03:40 - 03:49] Let me just update this to include the description, which was part of our schema. Oops.
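A sketch of the resulting procedure, continuing from the publicProcedure above; the router shape, the service_type table name, and the column names are assumptions based on the transcript:

```typescript
// serviceTypes query -- a sketch; adjust names to your actual router and schema.
import { publicProcedure, t } from "./trpc"; // path assumed

export const appRouter = t.router({
  serviceTypes: publicProcedure.query(async ({ ctx }) => {
    // ctx.trx is the Knex transaction added by the withTransaction middleware.
    return ctx.trx("service_type").select("id", "name", "description");
  }),
});
```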

  • [03:50 - 03:58] Now let's try and start our servers and access the web. Let's see what this looks like now.

  • [03:59 - 04:10] You can see that we get service type one, which is what I had in the database. Let's try and add some service types. If I go in here, I can quickly insert some.

  • [04:11 - 04:17] I have a little query here. Oops.

  • [04:18 - 04:21] Service type singular. So I'm adding two service types.

  • [04:22 - 04:34] So the names are furniture assembly and moving and the descriptions are basic help and help with moving that requires a truck. I don't know, those seem like possible service types that someone might offer.

  • [04:35 - 04:40] So we have inserted those two rows now and let's try and refresh here. You can see that we now have these.

  • [04:41 - 04:55] So everything seems to flow now from the database to the backend and all the way to the front end. So that's great but it's a little bit annoying that we had to remember to add the description to our schema.

  • [04:56 - 05:06] There were no squiggly lines anywhere. And in fact, as you can see here, if I hover over the service types variable here, it's this ugly any array type.

  • [05:07 - 05:15] It comes out of Knex, and Knex does have support for TypeScript. The way it works is that you provide it with a mapping from table names to interfaces.
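The hand-written version of that mapping looks roughly like this, using Knex's knex/types/tables module augmentation (the ServiceType shape is just an illustration):

```typescript
// Hand-written mapping from table names to row types, which Knex picks up
// through declaration merging.
import { Knex } from "knex";

declare module "knex/types/tables" {
  // Row type for the service_type table (shape assumed for illustration).
  interface ServiceType {
    id: number;
    name: string;
    description: string;
  }

  interface Tables {
    // Knex now knows that trx("service_type") returns ServiceType rows.
    service_type: ServiceType;
  }
}
```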

  • [05:16 - 05:24] And it just so happens that Kanel can generate that for us. So let's try and go to our Kanel configuration here and add this.

  • [05:25 - 05:34] So I will import generateKnexTablesModule, which is also from the Knex plugin for Kanel. And I will add that as one of the preRenderHooks.
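A sketch of what that configuration change could look like; the connection settings and output path are placeholders, and the project's config file may well be a plain .kanelrc.js rather than TypeScript:

```typescript
// .kanelrc.ts -- sketch only; keep whatever connection and outputPath the project already uses.
import { Config } from "kanel";
import { generateKnexTablesModule } from "kanel-knex";

const config: Config = {
  connection: {
    host: "localhost",
    database: "app",
    user: "postgres",
    password: "postgres",
  },
  outputPath: "./src/models",
  // Generate the knex-tables module alongside the regular model files.
  preRenderHooks: [generateKnexTablesModule],
};

export default config;
```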

  • [05:35 - 05:50] So if I go to packages/schema and I run Kanel, you can now see that there's this file called knex-tables. Actually, let me just make sure that we always run Schemalint as well.

  • [05:51 - 06:11] So I'll open the package.json in schema and I will add a script for generating models. This will run Schemalint first, then Kanel, and then build with TypeScript at the end to make sure that everything works.
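In package.json that could be a script along the lines of "generate-models": "schemalint && kanel && tsc"; the script name and exact commands are assumptions, so use whichever binaries the project already invokes.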

  • [06:12 - 06:23] Let's try and run that instead. There we go.

  • [06:24 - 06:43] We should now have this knex-tables file. So Kanel has generated this for us, and you can see that the service_type table is this composite table type where it uses the selector, the initializer, and the mutator respectively, and the same goes for all the other tables as well.

  • [06:44 - 06:59] So Knex needs to know about this, and the way you do that is a little bit quirky, but basically what I've found works best for me right now is using a triple-slash reference. So we can set that up in getDb.

  • [07:00 - 07:06] I like to add that at the top. So I'm disabling this linter rule because they are recommending that you don't do this.

  • [07:07 - 07:13] Maybe this should change in the future, but for now this works. So basically this is being imported now, and Knex will pick it up.
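A sketch of what that top-of-file reference could look like in getDb; the relative path and the exact lint rule being disabled are assumptions:

```typescript
// Top of getDb.ts -- pulls the generated table typings into the compilation
// so Knex's query builder becomes fully typed. Path is a placeholder.
/* eslint-disable @typescript-eslint/triple-slash-reference */
/// <reference path="../../schema/src/models/knex-tables.ts" />

import knex, { Knex } from "knex";
// ...rest of getDb unchanged
```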

  • [07:14 - 07:26] So we can now go into our router, and we should see here that this is now an array of service types. So if I try to do anything now, it will be checked by the type checker at compile time.
