
Building a Bar Chart Race with D3 and Svelte

In this article, we will create a data visualization that animates the changes in stargazer counts of popular front-end library and framework GitHub repositories over the past 15 years. Which front-end libraries and frameworks currently dominate the web development landscape, and which ones used to?
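As a taste of the core mechanic, here is a minimal sketch of what a single animation step of a bar chart race might look like with D3, the charting library the tutorial uses (the tutorial itself pairs this with Svelte for component structure). The data shape, element selection, and the updateBars helper are illustrative assumptions, not code from the tutorial.

```typescript
// Illustrative sketch of one animation frame of a bar chart race with D3.
// The data shape and helper name are assumptions, not the tutorial's code.
import * as d3 from "d3";

interface RepoStars {
  name: string;   // e.g. "react", "vue", "svelte"
  stars: number;  // stargazer count at the current point in time
}

const width = 600;
const barHeight = 24;

// Redraw the bars for one point in time, animating position and width.
function updateBars(
  svg: d3.Selection<SVGSVGElement, unknown, null, undefined>,
  frame: RepoStars[]
): void {
  const x = d3.scaleLinear()
    .domain([0, d3.max(frame, d => d.stars) ?? 1])
    .range([0, width]);

  // Sort so the most-starred repo sits at the top of the chart.
  const sorted = [...frame].sort((a, b) => b.stars - a.stars);

  svg.selectAll<SVGRectElement, RepoStars>("rect")
    .data(sorted, d => d.name)           // key by repo name so bars persist
    .join("rect")
    .transition()                         // animate between frames
    .duration(750)
    .attr("x", 0)
    .attr("y", (_, i) => i * barHeight)   // rank determines vertical position
    .attr("height", barHeight - 4)
    .attr("width", d => x(d.stars));
}
```

Calling a function like this once per time step (one frame per year of stargazer data, for example) is what produces the "race" effect of bars growing and swapping positions.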

Inside AI Agents: Core Principles and How They Remember

As AI continues to evolve, we keep finding new ways to improve it and put it to use. Today, AI has gone far beyond being just a chat tool, and one of the most significant evolutionary steps is the creation and adoption of AI agents. With agents, you can deploy AI solutions that autonomously perform real-world tasks: managing customer support, processing large amounts of information in real time, and much more. Basically, any task that benefits from real-time data and reasoning capabilities. This series of articles will help you not only grasp the fundamentals of AI agents, but also gain practical experience by building one yourself, covering the crucial theoretical concepts as well as how to apply them properly in the real world.

Replit Agent - An Introductory Guide

Learn about Replit Agent, an advanced AI-coding agent that's capable of building apps from scratch. Through natural language interactions and real-time assistance, Replit Agent sets up environments, writes code, and deploys apps, all within minutes.

Creating a React Native Mobile App with Replit Assistant and Expo

Learn how to create your first React Native mobile app with Expo and Replit Agent, an advanced AI-coding agent. This step-by-step guide teaches you how to go from an initial idea to a cross-platform mobile app in minutes, regardless of skill level.

Creating a Chrome Extension with Replit Agent

Learn how to create your first Chrome extension with Replit Agent, an advanced AI-coding agent. This step-by-step guide teaches you how to go from an initial idea to a fully functional Chrome extension in minutes, regardless of skill level.

RAG: Bridging the Gap Between AI and Real-Time Data

Today we often hear about incredible AI advancements that promise to make our lives easier. But besides developing and improving new AI models, we also keep finding new ways to use them to their full potential. One exciting technique built on LLMs is Retrieval-Augmented Generation, or RAG for short: a system that connects real-time data to the power of AI models. Large language models generate text by predicting the most probable next word, but without access to real-time or domain-specific information they produce errors, outdated answers, and hallucinations. Knowing how RAG works really raises the ceiling of your expertise as an AI engineer, so in this opening article we cover the core concepts, and in the upcoming articles we will build applications that put that knowledge into practice.
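To make the idea concrete, here is a minimal sketch of the retrieve-then-generate loop behind RAG. The retrieveDocuments and callLLM functions are hypothetical placeholders standing in for a real vector search and a real model API; the point is only the shape of the flow: retrieve relevant context, fold it into the prompt, then generate.

```typescript
// Minimal RAG sketch (TypeScript). `retrieveDocuments` and `callLLM` are
// hypothetical placeholders for a real vector store and a real LLM API.

interface Doc {
  id: string;
  text: string;
}

// Assumption: some retrieval layer (vector search, keyword search, etc.)
// returns the documents most relevant to the user's question.
async function retrieveDocuments(query: string, topK: number): Promise<Doc[]> {
  // ... call your vector store / search index here ...
  return [];
}

// Assumption: a generic chat-completion style endpoint.
async function callLLM(prompt: string): Promise<string> {
  // ... call your model provider here ...
  return "";
}

// The core RAG flow: retrieve, augment the prompt, generate.
async function answerWithRAG(question: string): Promise<string> {
  const docs = await retrieveDocuments(question, 3);
  const context = docs.map((d, i) => `[${i + 1}] ${d.text}`).join("\n");

  const prompt =
    `Answer the question using only the context below.\n` +
    `If the context is insufficient, say so.\n\n` +
    `Context:\n${context}\n\nQuestion: ${question}`;

  return callLLM(prompt);
}
```

The retrieval step is what supplies the fresh, domain-specific facts the base model lacks; the generation step then reasons over that context instead of relying purely on its training data.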

What Is LLM-as-a-Judge and Why Should You Use It?

In the last article we covered statistical metrics like Perplexity, BLEU, ROUGE and more, as well as some of the statistical concepts that underpin them, their strengths (accuracy, reliability), and their weaknesses (no subjective focus, reliance on reference texts). Between human evaluation (manual testing) and statistical measures we get a mix of high-value qualitative assessment on a small part of the test surface and a rigorous but limited view over a wider area. That still leaves a lot of middle ground uncovered! That's why there has been a push over the last few years to get coverage for the space in between - something with a level of subjectivity and nuance that also scales up. This is where LLM-as-a-Judge comes in. In our manual testing for LLMs article I compared this to a kind of ouroboros where AI validates AI - and rightly so - but that isn't necessarily a bad thing. LLMs are able to do some things better than humans, and LLM-as-a-Judge plays to those strengths, but it does not replace the need for human oversight and statistical assessment. There are also metrics that combine LLM-as-a-Judge with statistical metrics, but we'll talk more about that later.
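As a rough illustration of the pattern, here is a minimal LLM-as-a-Judge sketch. The callLLM function is a hypothetical placeholder for whatever model API serves as the judge, and the rubric and JSON schema are assumptions chosen for the example.

```typescript
// Minimal LLM-as-a-Judge sketch (TypeScript). `callLLM` is a hypothetical
// placeholder for the model API you use as the judge.

interface JudgeVerdict {
  score: number;      // e.g. 1 (poor) to 5 (excellent)
  reasoning: string;  // the judge's one-sentence explanation
}

async function callLLM(prompt: string): Promise<string> {
  // ... call your model provider here ...
  return `{"score": 0, "reasoning": ""}`;
}

// Ask a judge model to grade another model's answer against a rubric.
async function judgeAnswer(question: string, answer: string): Promise<JudgeVerdict> {
  const prompt =
    `You are an impartial evaluator. Rate the answer to the question below\n` +
    `for correctness and helpfulness on a scale of 1-5.\n` +
    `Respond with JSON: {"score": <1-5>, "reasoning": "<one sentence>"}.\n\n` +
    `Question: ${question}\nAnswer: ${answer}`;

  const raw = await callLLM(prompt);
  return JSON.parse(raw) as JudgeVerdict;
}
```

Because the verdict comes back as structured output, it can be aggregated across a large test set, which is exactly the scalable middle ground between manual review and purely statistical metrics described above.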