Beginner Tutorials


Build a Web App with Just a Prompt: Vercel’s v0 Explained

Imagine a world where you could build an entire web application simply by describing it. That's no longer a fantasy. In this article, I'll show you how Vercel's v0 has changed web development, and by the end you'll know how to turn your written ideas into production-ready web applications! Vercel's v0 is a generative AI tool with an ambitious goal: to help people build websites and web applications more efficiently. You can think of it as ChatGPT for web developers, focused primarily on building UI components and logic for web applications. It lets you quickly turn your ideas into a live web app that people can interact with. For context, Vercel is the name of the company that created v0. It's a cloud platform that provides hosting services as well as other useful developer tools, including v0. The basic version of v0 is free (with some limitations), but it should be more than enough to get to know the tool. If you're interested in pricing, you can find it here.

Unlocking Cursor: Your Beginner's Guide to the AI-Powered IDE

Welcome to our opening article about Cursor, the AI-powered IDE. In this article, we'll cover all the basic, core knowledge you need about it — the features you'll be using most of the time. We'll do it in depth, without cutting any corners! In our next article, we'll get into even more complex topics, like advanced tips and Cursor's Cmd+K, Composer, and Agent features. But first, we'll build a really solid foundation. So, let's jump right into it! Nowadays, tech grows and moves so fast that sometimes it's hard to keep up. To stay on top, we as developers always have to be ready to embrace new tools that can increase our productivity 10x while saving us a lot of time. One such tool is Cursor, an AI-powered IDE that's transforming how developers write, debug, and optimize their code. Cursor combines artificial intelligence with the standard features of an IDE to help you easily debug your code, get smart code autocompletion, and use many other features that can boost your productivity. Cursor is forked from VS Code, one of the most popular IDEs among developers, and it retains not only the familiar, user-friendly interface but also the large ecosystem of VS Code extensions. This foundation means that anyone already familiar with VS Code will find it relatively easy to transition to Cursor.


Next-Level Cursor: Cmd+K, Composer, and Agent Unpacked

In this article, let's explore Cursor even further. Our first article (which you can find here) covered Cursor's basics and its easiest-to-understand features, such as Rules for AI, Tab autocompletion, and the Chat feature. So, if you're new to Cursor, I highly recommend you check out the previous article first. In this "Part 2" article, we'll go over the Cmd+K, Composer, and Agent features, including some use cases. So, get ready to learn how to use Cursor to its fullest potential and save an enormous amount of time. Starting from version 0.46, Cursor includes a lot of UI changes to the AI side panel. If you're currently using an older version, the UI elements mentioned in this article might not look the same for you. That's completely fine, but I highly recommend you update to the latest version so we're on the same page.

How To Set Up Auth and Store User Data With Bolt + Supabase

Welcome! This is Part 4 of our course on how to build fullstack apps with Bolt and Supabase. If you're just joining, I highly recommend you take the course in order before diving into this one. Here you can find Part 1, Part 2, and Part 3.

How Good is Good Enough? - Introduction to LLM Testing and Benchmarks

The proliferation of Large Language Models (LLMs), and their subsequent embedding into workflows in every industry imaginable, has upended much of the conventional wisdom around quality assurance and software testing. QA engineers now effectively have to deal with non-deterministic outputs, so traditional automated testing that asserts on exact outputs is partially out. Moreover, the input set for LLM-based services has equally ballooned: in the worst case, the potential input set is the entirety of human language, and even for more specialised LLMs it is a very flexible subset. This is a vast test surface with many potential points of failure, one in which it is practically impossible to achieve 100% test coverage, and the edge cases are equally vast and difficult to enumerate. It's unsurprising that we've seen bugs even in top-tier, customer-facing LLMs from the biggest companies, like Google's AI recommending users eat one small rock a day after indexing an Onion article, or Grok accusing NBA star Klay Thompson of vandalism.
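To make the non-determinism problem concrete, here's a minimal Python sketch. The `fake_llm` function is a hypothetical stand-in for a real model call (not any actual API): it returns differently phrased answers to the same prompt, which is exactly why exact-match assertions break, and why tests instead assert on properties of the output.

```python
import random

def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM call: same prompt, varying phrasing."""
    return random.choice([
        "Paris is the capital of France.",
        "The capital of France is Paris.",
    ])

answer = fake_llm("What is the capital of France?")

# Brittle: an exact-match assertion fails whenever the phrasing varies.
# assert answer == "Paris is the capital of France."

# More robust: assert a property of the output rather than its exact form.
assert "Paris" in answer
```

Real LLM test suites extend this idea with semantic-similarity checks, LLM-as-judge scoring, and statistical metrics, which later articles in this series cover.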

How To Deploy Your Web App With Netlify

Welcome! This is the sixth and final lesson on how to build fullstack apps with Bolt and Supabase. If you're just joining, you're in luck, 'cause we already have tons of content for you to enjoy while turning your app from just an idea into a deployed web application in just an afternoon. Before you dive into this lesson, here's where you can find Part 1, Part 2, Part 3, Part 4, and Part 5 if you want to get up to speed (which you probably do; otherwise, what exactly are you going to deploy? 🤔)

DeepSeek-R1 from A-to-Z

Welcome to the LLM model that's been absolutely everywhere on the Internet and news headlines in recent days – DeepSeek-R1! In this article, we take a comprehensive look at this new, industry-disrupting LLM. We'll investigate if it’s truly deserving of all the noise around it, or if there's something (i.e. censorship and GPT-4 references) more sinister going on beneath the buzz. So, brew some tea and settle in, because this is going to be an interesting ride. We're going to cover:

What Is Supabase And How It Can Replace Your Entire Backend

Welcome to the second lesson of our course on how to build complete fullstack apps in less than an afternoon with Bolt + Supabase. If you haven't yet, I highly suggest you check out Part 1 before diving in, where we talked all about the 'what', 'why', and 'how' of Bolt, and briefly about the future of AI. I think you'll benefit a lot from it. Now, back to this tutorial.

Jailbreaking DeepSeek R1: Bypassing Filters for Maximum Freedom

Large language models (LLMs) are very powerful tools that can help us with a wide range of tasks. These models are usually built with safety features meant to stop them from generating harmful, inappropriate, or otherwise restricted content. However, over time, researchers and enthusiasts have discovered ways to bypass these safeguards—a process known as jailbreaking. In this series of articles, we’re going to show you how to jailbreak one of the most popular open-source models out there: DeepSeek R1. In this opening article, we'll start with prompt jailbreaking. But don’t worry—we’re not just jumping straight into prompt examples. First, we’ll explain what jailbreaking really is, why people do it, and some of the tricky parts you should know about. Sound good? Let’s dive in! DISCLAIMER. This article is for learning and research only. The methods shared here should be used responsibly to test AI, improve security, or understand how these systems work. Please don't use them for anything harmful or unethical.

Common Statistical LLM Evaluation Metrics and what they Mean

In one of our earlier articles, we touched on statistical metrics and how they can be used in evaluation; we also briefly discussed precision, recall, and F1-score in our article on benchmarking. Today, we'll go into more detail on how to apply these metrics directly, and look at more complex metrics derived from them that can be used to assess LLM performance. Precision is a standard measure in statistics, and has long been used to measure the performance of ML systems. In simple terms, it measures how many samples are correctly categorised (true positives) out of the total set of samples predicted to be positive (true positives + false positives). If we take the simple example of an ML tool that takes a photo as input and tells you whether there is a dog in the picture, this would be:
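The definition above, precision = TP / (TP + FP), can be sketched in a few lines of Python. The counts for the dog-detector scenario are illustrative numbers, not from the article:

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of positive predictions that are actually correct."""
    return tp / (tp + fp)

# Hypothetical dog-detector run: 40 photos flagged as "contains a dog",
# 32 really do (true positives), 8 do not (false positives).
print(precision(32, 8))  # 0.8
```

In other words, when this detector says "dog", it's right 80% of the time. Note that precision says nothing about the dog photos the detector missed; that's what recall measures.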

How To Build Complete Fullstack Apps In Less Than An Afternoon With Bolt + Supabase

What if I told you that 2-3 hours from now you could have taken your app idea and transformed it into a beautiful, production-level full stack application, deployed and available on the internet, for everyone to use? If I told you something like this a couple of years ago, you’d laugh and scoff and dismiss everything I just said. In fact, this was my reaction too when I first heard someone from Supabase talk about what Bolt and Supabase combined could achieve.

How To Build Beautiful, Responsive UIs in Minutes With Bolt

Welcome! This is Part 5 of our course on how to build fullstack apps with Bolt and Supabase. If you're just joining, I highly recommend you take the course in order before diving into this one. Here you can find Part 1, Part 2, Part 3, and Part 4.

How Good is Good Enough: A Guide to Common LLM Benchmarks

In our last article, we talked about benchmarking as the highest-level method of assessing the performance of LLMs. Today, we're going to look in more detail at some of the most popular benchmarks, what they measure, and how they measure it. Note that most of the benchmarks listed below have leaderboards and question sets available somewhere public-facing if you want to dive deeper; I've also included links to papers where appropriate. Let's dive in!

Beat the AI Filter: How to Get your CV seen by Recruiters in the AI Age

It's undeniable that AI has, for better or worse, already had a huge impact on the software industry, from its practical applications at the technology level, to the changing demand for skills and experience, to the layoffs linked to AI processes taking over. One area we haven't really talked about in our articles yet is recruitment: it's no secret that AI is being used to scan CVs and automatically filter candidates. Several of my developer friends have been talking about this recently, and with a slew of layoffs in tech sending a lot of IT folks on the hunt for new jobs, I thought this would be the perfect time to dive into how AI is being used to filter candidates, and what you can do to stand out and get past the filters. Forbes estimates that 65% of employers will use AI tools to reject candidates in 2025. That's a staggeringly high number, but for folks who work in hiring or are currently job hunting, it's not a surprising one. The article offers a further breakdown of how employers plan to use, or are already using, AI in their hiring process.

How Good is Good Enough: Subjective Testing and Manual LLM Evaluation

In our previous article, we talked about the highest level of testing and evaluation for LLM models, and went into detail about some of the most commonly used benchmarks for validating LLM performance at a high level. Today, we're going to look at some more fine-grained evaluation metrics that you can use while building an LLM-based tool. Here we make the distinction between statistical metrics, that is, those computed using a statistical model, and more generalised metrics that attempt to measure the more 'subjective' elements of LLM performance (such as those used in manual testing) and that use AI to evaluate how useful a model is in its given context. In this article, we'll give an overview of the different classes of metrics used and cover human evaluation and its importance, before moving on to common statistical metrics and LLM-as-judge evaluations in the following articles.

How To Build A Fullstack App MVP in An Hour With Bolt

Hello and welcome! This is the third lesson in our series on how to build complete fullstack applications in less than an afternoon with Bolt and Supabase. In the first two lessons, we talked about what exactly Bolt is in the first place, and what Supabase is. If you want to read those first, here are Part 1 and Part 2.