Blog

Daily bytes of AI engineering.

Bite-sized insights for building production AI systems. Expert guides, real-world patterns, and practical engineering wisdom.


Explore posts

135 posts in total

LLM Engineering

Query anonymization for RAG bias mitigation

How to strip names, roles, and demographics from queries before retrieval to reduce RAG bias. The redaction pipeline and the 3 leakage traps to avoid.

RAG · Guardrails +3 · 9 min read
AI Engineering in Practice

pip vs uv vs poetry for Python AI services

Which Python dependency manager should you use for production agent services in 2026? The install speed, lockfile story, and Docker build times compared.

Python · AI Agents +3 · 9 min read
AI Engineering in Practice

Retry patterns for LLM API errors in production

How to build retry logic that handles rate limits, timeouts, and transient failures without burning money. The backoff rules and the 3 errors you must not retry.

AI Agents · Error Handling +3 · 8 min read
LLM Engineering

Choosing the LLM judge for evaluation pipelines

How to pick the LLM that grades your LLM. The cost-quality tradeoffs, the calibration check, and why a weaker judge is sometimes the right call.

Evaluation · LLM +3 · 8 min read
LLM Engineering

Ground truth vs relevancy in RAG evaluation

Why ground truth and relevancy measure different things in RAG evals. When to use each, how to build both datasets, and the 2 metrics that matter most.

RAG · Evaluation +3 · 9 min read
LLM Engineering

Pydantic output structuring for RAG agent plans

How to use Pydantic models to force your RAG planner LLM to return structured steps. The schema, the retry loop, and why plain JSON prompts break in production.

RAG · Pydantic +3 · 8 min read
LLM Engineering

Hallucination testing for RAG pipelines

How to test a RAG pipeline for hallucinations systematically. Adversarial prompts, the out-of-scope set, and the metric that catches confabulation.

RAG · Evaluation +3 · 8 min read
LLM Engineering

Testing and evaluating RAG pipelines end to end

How to test a RAG pipeline like real software. Unit, integration, and eval tests that catch regressions before they ship. The 3-layer test strategy.

RAG · Evaluation +3 · 8 min read
LLM Engineering

Fact-checking RAG answers: grounding with verification

How to fact-check RAG answers with a second LLM pass that verifies every claim against the retrieved context. The prompt, the rejection rule, and the loop.

RAG · LLM +3 · 8 min read
LLM Engineering

Query rewriting in RAG with LLMs: the rewrite loop

How LLM-powered query rewriting fixes vague user questions before retrieval. The prompt, the multi-query fan-out, and when rewriting hurts more than helps.

RAG · LLM +3 · 8 min read
LLM Engineering

LLM-based content filtering for RAG pipelines

How to filter irrelevant retrieved chunks with a cheap LLM call before the final answer. The prompt, the batch pattern, and the 40 percent noise reduction.

RAG · LLM +3 · 8 min read
LLM Engineering

Retriever k-value tuning for RAG: the right top-k

How to pick the right k value for your RAG retriever. The 3-step tuning process, the failure modes of k=3 and k=20, and the sweet spot in between.

RAG · Vector Databases +3 · 8 min read

Weekly Bytes of AI

Technical deep-dives for engineers building production AI systems.

Architecture patterns, system design, cost optimization, and real-world case studies. No fluff, just engineering insights.



Built by Param Harrison

Co-founder of AEOsome.com and Chief Mentor at learnwithparam.com, with 15+ years building production systems. I've trained 100+ engineers on AI engineering; these programs distill what actually works into structured paths you can follow at your own pace.

Ready to go deeper?

Go beyond articles. Build production AI systems with hands-on workshops and our intensive AI Bootcamp.

Frequently asked questions

Cadence, authors, topics, and how to follow along.

What does the learnwithparam blog cover?

The blog covers production AI engineering, with a tilt toward RAG, agentic systems, and the operational layer that turns demos into services. New posts ship roughly daily and lean on patterns the mentors actually use at work, not the framework-of-the-week. Every post includes a working code snippet, a mermaid diagram, and a Monday-morning checklist you can act on.

How often are new posts published?

A new post lands roughly every day. The cadence is intentional: small, high-signal posts that solve one specific production problem each, instead of an occasional 5000-word omnibus. If you want every post in your inbox, the newsletter signup is at the bottom of every post page.

Who writes the blog posts?

Three mentors. Param Harrison (chief mentor) writes core AI engineering and RAG fundamentals. Ahmed Aleryani writes complex agentic systems and production infrastructure. Asep Bagja Priandana writes polyglot programming and tooling. Each post links to its author. They all share a tight engineer-to-engineer voice with no fluff and no AI slop.

How do I follow new posts?

Subscribe to the newsletter (signup form on every post page) to get new posts in your inbox. The blog also publishes an RSS feed at /rss.xml. For social, follow Param on Twitter and LinkedIn. Posts are tagged by category so you can filter for RAG, agents, tooling, or whichever topic matters to you.

Are the blog posts free?

Yes, every post is free with no paywall and no email-gate. The deeper hands-on courses (RAG masterclass, agent course, bootcamp) are paid because they include code, projects, and direct mentorship. The blog is the on-ramp; the courses are the deep dive.