Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

What is Claude Co-Work

Watch: What Are Claude Cowork Projects (And Why They Change Everything) by Paul J Lipsky

Claude Co-Work is reshaping how teams approach productivity by turning AI from a chatbot into a true coworker. Unlike traditional tools that require manual input for every step, Co-Work acts as an agentic AI: it can plan, execute, and verify complex workflows autonomously. For businesses, this means tasks like organizing files, generating reports, or analyzing data no longer require constant human oversight. The shift from reactive to proactive automation is a major advantage, especially for teams juggling repetitive or multi-step workflows. As mentioned in the Features and Functionality section, this agentic architecture blends a chat-style workspace with task management tools, enabling non-technical users to delegate workflows seamlessly.

One of Co-Work's standout features is its ability to handle multi-step workflows. For example, a project manager might ask, "Turn these meeting notes into a Q1 roadmap," and Co-Work would break the task into substeps: extract key themes, align with company goals, format into a slide deck, and save it to Google Drive. This level of automation compresses tasks that once took hours into minutes. Building on concepts from the Introduction to Claude Co-Work section, the tool's agentic design was specifically engineered to bridge the gap between developers and non-technical users, making advanced automation accessible to broader teams.
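The plan-execute-verify loop described above can be sketched in miniature. Everything below is a hypothetical placeholder (Co-Work's internals are not public); it only illustrates how an agentic task decomposes into substeps that are each checked before moving on:

```python
# Illustrative sketch of an agentic plan -> execute -> verify loop.
# All functions are hypothetical stand-ins, not a Claude Co-Work API.

def plan(request):
    # A real agent would derive these substeps from the request itself.
    return ["extract key themes", "align with company goals",
            "format into a slide deck", "save to Google Drive"]

def execute(step):
    return f"done: {step}"

def verify(result):
    # A real verifier would inspect the artifact, not just a status string.
    return result.startswith("done:")

def run_agentic_task(request):
    results = []
    for step in plan(request):
        out = execute(step)
        if not verify(out):  # a real agent would retry or re-plan here
            raise RuntimeError(f"step failed: {step}")
        results.append(out)
    return results

print(run_agentic_task("Turn these meeting notes into a Q1 roadmap"))
# prints one "done: ..." entry per planned substep
```

The point of the sketch is the control flow: each substep is verified before the next runs, which is what separates an agentic workflow from a single chat completion.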

How Do Tokenizers Work

Watch: Most devs don't understand how LLM tokens work by Matt Pocock

Tokenizers are the unsung heroes of modern AI and NLP systems, bridging the gap between raw human language and the numerical representations required by machine learning models. At their core, tokenizers convert text into structured, machine-readable units, called tokens, enabling algorithms to process, analyze, and generate language at scale. Without them, models would struggle to handle the complexity and variability of natural language, from rare words to morphologically rich languages like Turkish or Bengali.

Traditional word-based tokenization splits text on spaces or punctuation, but this approach creates two major issues: huge vocabularies and poor handling of rare words. For example, a naive tokenizer might assign an "unknown" label to 5% of words in a dataset, severely limiting model performance. Sub-word tokenizers like Byte-Pair Encoding (BPE) or SentencePiece solve this by breaking words into learned sub-units (e.g., "unhappy" → "un" + "happy"). These methods reduce the unknown-word problem to near zero while keeping vocabularies manageable (typically 16,000–50,000 tokens).
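The sub-word idea can be shown with a minimal greedy longest-match splitter. Real BPE learns its merge rules from corpus statistics; this WordPiece-style sketch uses a tiny hand-picked vocabulary purely for illustration:

```python
def subword_tokenize(word, vocab):
    """Greedy longest-match sub-word split (a sketch, not real BPE:
    BPE learns merge rules from data rather than matching a fixed vocab).

    Falls back to an <unk> token only when no known piece matches,
    which is how sub-word vocabularies shrink the unknown-word problem.
    """
    tokens, i = [], 0
    while i < len(word):
        # Try the longest remaining substring first.
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append("<unk>")  # no known sub-unit covers this character
            i += 1
    return tokens

vocab = {"un", "happy", "happi", "ness", "ly"}
print(subword_tokenize("unhappy", vocab))    # ['un', 'happy']
print(subword_tokenize("unhappily", vocab))  # ['un', 'happi', 'ly']
```

Note how "unhappily" still tokenizes cleanly even though the full word is not in the vocabulary; that composability is what keeps sub-word vocabularies small while covering rare words.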

Why LLM Hallucinations Aren’t Bugs

Watch: Why Large Language Models Hallucinate by IBM Technology

LLM hallucinations aren't bugs; they're a byproduct of how these models are trained, evaluated, and incentivized to perform. Understanding this requires examining the interplay between statistical prediction, evaluation metrics, and the limitations of training data. When models generate text, they aren't solving for factual accuracy but rather selecting the most statistically likely next word. This creates a system where confident, false statements emerge as a natural consequence of the design, as detailed in The Nature of LLM Hallucinations section.

Large language models (LLMs) are trained using next-word prediction, a task that rewards statistical fluency over factual correctness. For example, OpenAI's GPT-5 "thinking-mini" model abstains from answering 52% of questions, while its counterpart o4-mini abstains just 1% of the time. The trade-off? o4-mini's hallucination rate soars to 75%, compared to 26% for GPT-5. This stark contrast reveals how evaluation metrics, which prioritize accuracy over honesty, create a "guess-and-win" incentive. Models that abstain are penalized on leaderboards, even if their uncertainty is prudent in real-world scenarios.
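The "guess-and-win" incentive falls out of simple expected-value arithmetic. The sketch below uses an illustrative penalty parameter, not any benchmark's actual scoring rule, to show why guessing dominates abstaining under plain accuracy grading:

```python
def expected_score(p_correct, abstain, penalty=0.0):
    """Expected points per question under accuracy-style grading.

    Abstaining scores 0. Guessing scores p_correct for a right answer,
    minus penalty * (1 - p_correct) for wrong ones. With penalty=0
    (typical leaderboard accuracy), any nonzero chance of being right
    beats abstaining -- hence the incentive to guess confidently.
    """
    if abstain:
        return 0.0
    return p_correct - penalty * (1 - p_correct)

# Even a 10%-confident guess beats abstaining when wrong answers cost nothing:
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0

# An explicit wrong-answer penalty flips the incentive (result is negative):
print(expected_score(0.10, abstain=False, penalty=0.25))
```

Under pure accuracy scoring the model is always rewarded for guessing; only a scheme that charges for wrong answers makes prudent abstention the rational choice.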

Using Codex Subagents to Skip Feature Testing

Codex subagents are transforming how development teams approach feature testing by automating repetitive, time-intensive tasks. Traditional software testing methods often consume 30–50% of a project's timeline, with manual testing alone accounting for up to 40% of development costs. These figures highlight a critical bottleneck: teams spend excessive time validating features that could instead be accelerated through intelligent automation. Codex subagents address this by delegating testing responsibilities to specialized AI agents, reducing reliance on manual QA cycles while maintaining code quality.

The core value of Codex subagents lies in their ability to parallelize testing workflows. Instead of waiting for a single agent to complete sequential tasks, developers can spin up multiple subagents, each focused on a distinct aspect of testing. For example, one subagent might generate unit tests, another could verify edge cases, and a third could execute integration checks. This parallelism slashes testing time by up to 70% in real-world scenarios, as reported by developers using Codex's orchestrator feature to manage four subagents simultaneously. The result is a streamlined workflow where feature validation occurs in real time, allowing teams to iterate faster without sacrificing accuracy.

Subagents also mitigate common pitfalls in AI-driven development. A key challenge in autonomous coding is duplicated or unclean code, which occurs in 60% of cases when agents operate without structured oversight. By assigning a dedicated "tester" subagent to verify outputs against predefined guidelines (e.g., rules in an AGENTS.md file), teams can catch errors early. As mentioned in the Introduction to Codex Subagents section, these configuration files define roles and constraints for subagents, ensuring alignment with project standards. For instance, one developer described how embedding testing protocols into subagent prompts eliminated 80% of code duplication issues during a front-end project. This structured approach ensures subagents adhere to project standards, reducing rework and improving long-term maintainability.
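The fan-out pattern behind that speedup can be sketched with Python's standard ThreadPoolExecutor. The three role functions below are hypothetical stand-ins, not part of the Codex API; the point is that independent testing roles run concurrently instead of in sequence:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for subagent roles; names are illustrative only.
def generate_unit_tests(feature):
    return f"unit tests for {feature}"

def verify_edge_cases(feature):
    return f"edge-case report for {feature}"

def run_integration_checks(feature):
    return f"integration results for {feature}"

def validate_feature(feature):
    """Fan the three 'subagent' tasks out in parallel instead of running
    them sequentially, mirroring the orchestrator pattern described above."""
    roles = [generate_unit_tests, verify_edge_cases, run_integration_checks]
    with ThreadPoolExecutor(max_workers=len(roles)) as pool:
        # Results are collected in submission order, so output is stable.
        futures = [pool.submit(role, feature) for role in roles]
        return [f.result() for f in futures]

print(validate_feature("login-form"))
# one result per role, in submission order
```

With real subagents each role would be an independent AI agent constrained by its AGENTS.md rules; the orchestration shape, however, is the same fan-out/join shown here.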

Using Google Colab to Prototype AI Workflows

Watch: Build Anything with Google Colab, Here's How by David Ondrej

Google Colab has become a cornerstone of modern AI workflow prototyping, driven by the exponential growth of AI adoption and the urgent need for tools that balance speed, accessibility, and scalability. Industry data reveals that 67% of Fortune 100 companies already use Colab, with over 7 million monthly active users relying on its browser-based notebooks for experimentation, collaboration, and deployment. This widespread adoption highlights Colab's role in addressing a critical challenge: the need for rapid, cost-effective prototyping as enterprises and researchers race to innovate in AI. For teams constrained by limited budgets or infrastructure, Colab's free tier, complete with GPU and TPU access, eliminates the upfront costs of cloud providers like AWS or Azure, enabling projects that would otherwise be financially prohibitive. As mentioned in the Setting Up Google Colab for AI Workflow Prototyping section, this accessibility begins with a simple browser and Google account, bypassing the need for complex local setups.

Colab's real-world impact is evident in its ability to accelerate complex workflows. For example, a developer fine-tuning a CodeLlama-7B model for smart-contract translation reduced training time from 8+ hours on a MacBook to just 45 minutes using a Colab T4 GPU. Similarly, multi-agent systems for vulnerability detection, such as those analyzing blockchain contracts, demonstrate how Colab supports full-stack prototyping, from data preparation to deploying real-time APIs. One notable case study involved a supply-chain optimization project where Ray on Vertex AI streamlined distributed training, cutting costs and improving responsiveness during global disruptions. These examples underscore Colab's role in bridging the gap between experimental ideas and production-ready solutions.
Building on concepts from the Building and Prototyping AI Workflows with Google Colab section, Colab’s seamless integration with Vertex AI and BigQuery Studio enables researchers to move from data exploration to deployment without context-switching.
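A common first step when prototyping in a notebook is confirming which runtime you actually landed on. The sketch below relies on two widely used conventions rather than official APIs: the presence of the google.colab module to detect Colab, and the nvidia-smi binary to detect a GPU runtime:

```python
import importlib.util
import shutil

def colab_runtime_info():
    """Best-effort runtime check (a sketch; both probes are common
    conventions, not guaranteed interfaces)."""
    # The google.colab module exists only inside a Colab runtime.
    in_colab = importlib.util.find_spec("google.colab") is not None
    # nvidia-smi is present on Colab GPU runtimes, absent on CPU-only ones.
    has_gpu = shutil.which("nvidia-smi") is not None
    return {"in_colab": in_colab, "has_gpu": has_gpu}

print(colab_runtime_info())
```

Running this before kicking off a long fine-tuning job avoids the classic mistake of training for hours on a CPU runtime when a T4 was one menu click away (Runtime → Change runtime type).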