
The New Mac Ritual: A Dev Setup Playbook

Personal log for setting up a fresh MacBook, so I don't have to think about it next time

3 min read · 24 Apr 2026
What the MCP? (Part 3): When Code LLMs Need Help

Everyone's adding MCP servers. Few are thinking about how they'll actually be called correctly.

11 min read · 08 Jan 2026
What the MCP? (Part 2): I Built Quick Call

I set out to write Part 2 about MCP. Instead, I fell into a rabbit hole...

11 min read · 10 Dec 2025
What the MCP? (Part 1)

Understanding Model Context Protocol and why it's different from function calling

21 min read · 12 Nov 2025
Inside VectorDB

How VectorDBs work under the hood

16 min read · 03 Oct 2025
Deep Dive into LoRA: A Practical Exploration

The secret sauce for fine-tuning large language models

26 min read · 31 Aug 2025
KV Caching in LLMs: A Visual Demonstration

A visual demonstration of KV caching in language models

12 min read · 01 Mar 2025
Inputs to Byte Latent Transformer

Part 2 of "All you need to know to get started with Byte Latent Transformer"

29 min read · 06 Feb 2025
Ten Trillion Tokens: Making AI Work for Every Indian Language

Building the largest multilingual LLM dataset for Indian languages at People+AI

1 min read · 29 Jan 2025
Precursors to Byte Latent Transformer

Part 1 of "All you need to know to get started with Byte Latent Transformer"

15 min read · 12 Jan 2025
Attention is all you need

Patience is all you need to learn transformers.

14 min read · 11 Jul 2024
It's LLaVA not lava!

LLaVA = Large Language and Vision Assistant ≠ 🌋

5 min read · 01 Jun 2024
Position Encoding in Transformers

How does a transformer understand the position of a token?

6 min read · 25 Apr 2024

Making Misal: India's First Competitive Marathi LLM

How we built Misal 7B/1B: pretraining, custom tokenizer, instruction tuning, and evals for a Marathi-first language model.

1 min read · 13 Apr 2024
Hello super fast blogging!

Exploring static site builders for quick blogging

2 min read · 07 Apr 2024
LSTM simplified

An in-depth and intuitive explanation of the LSTM architecture

17 min read · 21 Sep 2023
RNNs a walkthrough

A brief walkthrough of Recurrent Neural Networks

9 min read · 22 Aug 2023