Latest Tutorials

Learn about the latest technologies from fellow newline community members!

  • React
  • Angular
  • Vue
  • Svelte
  • NextJS
  • Redux
  • Apollo
  • Storybook
  • D3
  • Testing Library
  • JavaScript
  • TypeScript
  • Node.js
  • Deno
  • Rust
  • Python
  • GraphQL

    Mitigating bias in LLM‑based scoring of English language learners

    Mitigating bias in LLM-based scoring for English language learners (ELLs) requires a structured approach to ensure fairness and accuracy. Below is a summary of key strategies, challenges, and outcomes based on recent research. Different LLMs employ different bias mitigation methods: for example, GPT-4 uses data augmentation to diversify training samples, while BERT relies on bias-aware training to adjust scoring for linguistic diversity. Advanced frameworks like BRIDGE (LLM-based data augmentation) and AutoSCORE (multi-agent scoring systems) show promise in reducing subgroup bias. See the Techniques for Mitigating Bias in LLM-Based Scoring section for a comparison of these frameworks and details on their implementation.
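A first step in any of the mitigation strategies above is quantifying subgroup bias. The sketch below is a minimal, hypothetical illustration of one common check, the gap between per-subgroup mean scores; the scores and group labels are invented example data, not results from the research summarized above.

```python
# Hypothetical sketch: measuring the subgroup score gap an LLM-based
# scorer produces, before applying any mitigation. Example data only.

def subgroup_score_gap(scores, groups):
    """Return the largest difference between per-group mean scores."""
    by_group = {}
    for score, group in zip(scores, groups):
        by_group.setdefault(group, []).append(score)
    means = {g: sum(vals) / len(vals) for g, vals in by_group.items()}
    return max(means.values()) - min(means.values())

# Invented essay scores for two subgroups (illustration only).
scores = [3.2, 3.8, 2.9, 3.5, 3.1, 3.9]
groups = ["ELL", "native", "ELL", "native", "ELL", "native"]
gap = subgroup_score_gap(scores, groups)
```

A large gap on comparable-quality responses is the signal that techniques like data augmentation or bias-aware training are worth applying; rerunning the same check afterwards shows whether they helped.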

      What Is Prompt Chaining and How to Use It

      Prompt chaining is a method where complex tasks are broken into sequential subtasks, each handled by a distinct prompt. This approach ensures context is preserved between steps and allows for structured problem-solving. Below is a breakdown of key aspects, techniques, and applications, along with the benefits and challenges of each.
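The core idea, each step's output becoming the next step's input, can be sketched in a few lines. Here `call_llm` is a hypothetical stand-in for any real LLM client call, so the example runs offline; substitute your provider's SDK in practice.

```python
# Minimal serial prompt chain. `call_llm` is a placeholder (hypothetical),
# echoing a canned response so the sketch runs without an API key.

def call_llm(prompt: str) -> str:
    return f"[response to: {prompt}]"

def run_chain(task: str, step_templates: list) -> str:
    """Run templates in order; each receives the prior output via {input}."""
    output = task
    for template in step_templates:
        output = call_llm(template.format(input=output))
    return output

result = run_chain(
    "Quarterly sales dropped 12% in Q3.",
    ["Extract the key facts from: {input}",
     "Suggest three likely causes given these facts: {input}"],
)
```

Because each prompt sees the previous step's output, context flows through the chain without any one prompt having to handle the whole task.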


        How to Chain Prompts for Better LLM Flow

        Watch: Let The LLM Write The Prompt 2025 | Design Perfect Prompts for AI Agent | Prompt Mistakes (PART 1/7) by Amine DALY

        Prompt chaining enhances large language model (LLM) workflows by linking prompts sequentially or in parallel to solve complex tasks. This section breaks down techniques, metrics, and real-world use cases to help you design efficient chains. Prompt chaining methods vary in complexity and use cases. Serial chaining executes prompts one after another, ideal for tasks requiring step-by-step reasoning (e.g., data extraction followed by analysis). Parallel chaining splits tasks into simultaneous prompts, useful for multi-branch decisions or data aggregation. Hybrid approaches combine both for tasks like customer service workflows, where initial triage (parallel) triggers specialized follow-ups (serial).
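The parallel-then-serial hybrid described above can be sketched with a thread pool: independent branch prompts fan out concurrently, and a final serial step merges their outputs. `call_llm` is again a hypothetical offline placeholder, not a real client.

```python
import concurrent.futures

def call_llm(prompt: str) -> str:
    # Placeholder (hypothetical) so the sketch runs without an API key.
    return f"[response to: {prompt}]"

def parallel_then_merge(question: str, branch_prompts: list) -> str:
    """Fan out independent branch prompts, then merge with one serial step."""
    with concurrent.futures.ThreadPoolExecutor() as pool:
        branch_outputs = list(pool.map(
            lambda p: call_llm(p.format(input=question)), branch_prompts))
    merged = "\n".join(branch_outputs)
    return call_llm(f"Combine into one answer:\n{merged}")

answer = parallel_then_merge(
    "Why did customer churn increase last month?",
    ["From a billing perspective, analyze: {input}",
     "From a support-ticket perspective, analyze: {input}"],
)
```

With real (network-bound) LLM calls, running branches concurrently cuts wall-clock latency roughly to that of the slowest branch plus the merge step.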

          AI Bootcamp Success Checklist: Fine-Tuning Instructions for Real-World Application Development

          Watch: Prompt Engineering by Thinking Neuron

          The LSU Online AI Bootcamp spans 26 weeks with 200+ hours of live classes and 15+ projects, focusing on Python, TensorFlow, and OpenAI. The Virginia Tech Bootcamp emphasizes machine learning and neural networks but lacks real-time project demos. In contrast, Newline’s AI Bootcamp (and its advanced version, AI Bootcamp 2) offers 50+ hands-on labs, live project demos, and full code access, blending tools like Hugging Face, DSPy, and LangChain. Newline’s curriculum stands out with project-based learning, interactive debugging, and browser-compatible AI deployment techniques. For foundational Python skills required for these projects, see the Preparing for AI Bootcamp Success section for more details on prerequisites.

          Newline’s program excels in practical application, covering LoRA adapters, knowledge distillation, and tensor parallelism. For hands-on practice, platforms like Newline provide structured tutorials on distilling Hugging Face models for browser deployment. The curriculum includes advanced topics like RAG architectures, multi-vector indexing, and reinforcement learning (DPO, PPO), ensuring developers can build enterprise-grade AI pipelines. Unique features include Discord community access, full project source code, and career-focused labs using Replit Agent and Notion integrations. Building on concepts from the Fine-Tuning Instructions for Real-World Application Development section, these labs emphasize real-world deployment strategies.

            Pipeline Parallelism vs Data Parallelism: Which Improves Throughput?

            Watch: I explain Fully Sharded Data Parallel (FSDP) and pipeline parallelism in 3D with Vision Pro by William Falcon

            Pipeline parallelism and data parallelism are two strategies for optimizing computational workloads, particularly in deep learning and large-scale model training. The choice between them depends on factors like model size, hardware constraints, and performance goals. This section breaks down their differences through a structured comparison of key metrics, highlights practical considerations, and summarizes real-world applications.
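A back-of-envelope throughput model makes the trade-off concrete. The formulas below are standard simplifications, not taken from the tutorial: data parallelism scales near-linearly minus a gradient-sync penalty, while pipeline parallelism loses utilization to the startup/drain "bubble", which shrinks as the number of micro-batches grows. All rates and overheads in the example are assumed values.

```python
# Illustrative throughput model (assumed numbers, simplified formulas).

def data_parallel_throughput(single_device_rate, devices, sync_overhead):
    """Samples/sec: one model replica per device, minus a sync penalty."""
    return single_device_rate * devices * (1.0 - sync_overhead)

def pipeline_throughput(single_device_rate, stages, micro_batches):
    """Samples/sec: model split into stages; bubble = (S-1)/(M+S-1)."""
    bubble = (stages - 1) / (micro_batches + stages - 1)
    return single_device_rate * stages * (1.0 - bubble)

# Four devices, one device handles 10 samples/sec for the full model.
dp = data_parallel_throughput(10.0, devices=4, sync_overhead=0.10)
pp = pipeline_throughput(10.0, stages=4, micro_batches=16)
```

Under these assumptions data parallelism wins (36 vs. ~33.7 samples/sec), but the picture flips when the model no longer fits on one device, which is precisely when pipeline parallelism becomes necessary rather than optional.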