Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

Jailbreaking DeepSeek R1: Bypassing Filters for Maximum Freedom

Large language models (LLMs) are powerful tools that can help us with a wide range of tasks. These models usually ship with safety features meant to stop them from generating harmful, inappropriate, or otherwise restricted content. Over time, however, researchers and enthusiasts have discovered ways to bypass these safeguards, a process known as jailbreaking. In this series of articles, we're going to show you how to jailbreak one of the most popular open-source models out there: DeepSeek R1. In this opening article, we'll start with prompt jailbreaking. But don't worry, we're not jumping straight into prompt examples. First, we'll explain what jailbreaking really is, why people do it, and some of the tricky parts you should know about. Sound good? Let's dive in!

DISCLAIMER: This article is for learning and research only. The methods shared here should be used responsibly to test AI systems, improve security, or understand how these systems work. Please don't use them for anything harmful or unethical.

Next-Level Cursor: Cmd+K, Composer, and Agent Unpacked

In this article, let's continue exploring Cursor. Our first article (which you can find here) covered Cursor's basics and its easiest-to-understand features, such as Rules for AI, Tab autocompletion, and the Chat feature. So, if you're new to Cursor, I highly recommend checking out that article first. In this "Part 2" article, we'll go over the Cmd+K, Composer, and Agent features, including some use cases. Get ready to learn how to use Cursor to its fullest potential and save an enormous amount of time. Note that starting with version 0.46, Cursor introduced a lot of UI changes to the AI side panel, so if you're on an older version, the UI elements mentioned in this article might not look the same for you. That's completely fine, but I highly recommend updating to the latest version so we're on the same page.

I got a job offer, thanks in large part to your teaching. They sent a test as part of the interview process, and this was a huge help in implementing my own Node server.

This has been a really good investment!

Advance your career with newline Pro.

Only $40 per month for unlimited access to 60+ books, guides, and courses!

Learn More

Common Statistical LLM Evaluation Metrics and What They Mean

In one of our earlier articles, we touched on statistical metrics and how they can be used in evaluation; we also briefly discussed precision, recall, and F1-score in our article on benchmarking. Today, we'll go into more detail on how to apply these metrics directly, along with more complex metrics derived from them that can be used to assess LLM performance. Take precision: a standard measure in statistics that has long been used to gauge the performance of ML systems. In simple terms, it measures how many samples are correctly categorised by a model (true positives) out of the total set of samples predicted to be positive (true positives + false positives). If we take the simple example of an ML tool that takes a photo as input and tells you whether there is a dog in the picture, this would be:

precision = (photos correctly identified as containing a dog) / (all photos the model said contain a dog)
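To make the arithmetic concrete, here's a minimal sketch (our illustration, not code from the tutorial itself) of how precision, recall, and F1-score fall out of true/false positive and false negative counts, using hypothetical dog-detector labels:

    # Minimal sketch: precision, recall, and F1 from binary labels.
    # The photo labels below are hypothetical illustration data.
    def precision_recall_f1(y_true, y_pred):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)      # dog present, model said dog
        fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)  # no dog, model said dog
        fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)  # dog present, model said no dog
        precision = tp / (tp + fp) if tp + fp else 0.0  # correct "dog" calls / all "dog" calls
        recall = tp / (tp + fn) if tp + fn else 0.0     # correct "dog" calls / all actual dogs
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1

    # 1 = photo contains a dog, 0 = it does not
    y_true = [1, 1, 0, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
    p, r, f1 = precision_recall_f1(y_true, y_pred)
    print(f"precision={p:.2f} recall={r:.2f} f1={f1:.2f}")

On these toy labels the script prints precision=0.75 recall=0.75 f1=0.75: three of the four "dog" calls were correct, and three of the four actual dogs were found.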

How Good Is Good Enough: Subjective Testing and Manual LLM Evaluation

In our previous article, we talked about the highest level of testing and evaluation for LLM models and went into detail on some of the most commonly used benchmarks for validating LLM performance at a high level. Today, we're going to look at some more fine-grained evaluation metrics that you can use while building an LLM-based tool. Here we make the distinction between statistical metrics, that is, those computed using a statistical model, and more generalised metrics that attempt to measure the more 'subjective' elements of LLM performance (such as those used in manual testing) and that use AI to evaluate how useful a model is in its given context. In this article we'll give an overview of the different classes of metrics and cover human evaluation and its importance, before moving on to common statistical metrics and LLM-as-Judge evaluations in the following articles.

How To Build Beautiful, Responsive UIs in Minutes With Bolt

Welcome! This is part 5 of our course on how to build fullstack apps with Bolt and Supabase. If you're just joining, I highly recommend taking the course in order before diving into this one. Here you can find Part 1, Part 2, Part 3, and Part 4.