Large Language Models
A Step-by-Step Guide to Building Your First Agent with OpenAI’s Agent Builder - Souvik
If you’ve explored building AI agents, you know the process can feel like a complex puzzle. Until now, it often meant juggling fragmented tools — stitching together orchestration logic, building custom connectors, creating manual evaluation pipelines, and spending weeks on frontend work just to see your agent in action. That era of complexity is coming to an end.
This week, OpenAI introduced AgentKit, a complete set of tools designed to unify the entire lifecycle of building, deploying, and optimizing agents. AgentKit provides developers with a powerful, integrated toolkit to move from idea to production faster and more reliably. Read More.
Which LLM model gives best value? A deep dive into Cost, Accuracy, and Latency for OpenAI, Gemini, Claude, etc. - Souvik
We live in a golden age of AI. Every few weeks, it seems, a new and more powerful model is released by OpenAI, Google, Anthropic, or another major lab, each claiming state-of-the-art performance. For developers and product managers, this is both a blessing and a curse. With so many incredible options, how do you choose the right LLM for your project?
The truth is that a single “#1” ranking on a leaderboard doesn’t tell the whole story. The “best” model is rarely the one with the highest score — it’s the one that strikes the right balance for your specific needs. Every choice involves a negotiation within a critical “trade-off triangle” of cost, accuracy, and latency. Read More.
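One way to make the trade-off triangle concrete is to score candidate models with explicit weights for accuracy, cost, and latency. The sketch below uses made-up model names and figures purely for illustration; real numbers would come from your own benchmarks and provider pricing pages.

```python
# Hypothetical figures for illustration only -- not real benchmark numbers.
models = {
    "model-a": {"cost_per_1k": 0.010, "accuracy": 0.92, "latency_s": 1.8},
    "model-b": {"cost_per_1k": 0.002, "accuracy": 0.85, "latency_s": 0.6},
    "model-c": {"cost_per_1k": 0.030, "accuracy": 0.95, "latency_s": 2.5},
}

def score(m, w_acc=0.5, w_cost=0.3, w_lat=0.2):
    """Higher is better: reward accuracy, penalize cost and latency.

    Cost and latency are rescaled so all three terms are roughly
    comparable in magnitude; tune the weights to your use case.
    """
    return (w_acc * m["accuracy"]
            - w_cost * m["cost_per_1k"] * 100
            - w_lat * m["latency_s"] / 10)

best = max(models, key=lambda name: score(models[name]))
```

With these (invented) numbers the cheap, fast model wins despite its lower accuracy; shifting `w_acc` upward flips the ranking, which is exactly the negotiation the triangle describes.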
LangGraph Deep Dive: A Step-by-Step Guide to Building ReAct Agents and Debugging with LangSmith - Souvik
The earliest wave of Large Language Model (LLM) applications was built on chains — simple, linear sequences of prompts and function calls. Chains worked well for straightforward tasks like summarization or Q&A, but they quickly showed their limits. Real-world problems often require agents that can reason over multiple steps, call tools when necessary, recover from mistakes, and even incorporate human feedback. Linear pipelines weren’t designed for that level of complexity.
This is where the agentic paradigm emerged. Instead of a one-shot chain, agents run in a loop: they think, take an action, observe the result, and then decide what to do next. One of the most influential frameworks for this style of reasoning is ReAct (Reasoning + Acting) — a technique that encourages LLMs to interleave internal reasoning with external tool use. Read More.
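The think–act–observe loop can be sketched in a few lines of plain Python. Everything here is a stand-in: `fake_llm` simulates a model that emits `Action:` and `Final Answer:` lines (the ReAct transcript format), and the single `calculator` tool is a toy — the article's actual implementation uses LangGraph.

```python
# Minimal ReAct-style loop; the "LLM" and tool below are illustrative stubs.
def calculator(expression: str) -> str:
    return str(eval(expression))  # toy tool; never eval untrusted input

TOOLS = {"calculator": calculator}

def fake_llm(history):
    """Stand-in for a real model: picks the next step from the transcript."""
    if not any(line.startswith("Observation:") for line in history):
        return "Action: calculator: 12 * 7"
    return "Final Answer: 84"

def react_loop(question, llm, max_steps=5):
    """Think -> act -> observe until the model emits a final answer."""
    history = [f"Question: {question}"]
    for _ in range(max_steps):
        step = llm(history)          # think
        history.append(step)
        if step.startswith("Final Answer:"):
            return step.removeprefix("Final Answer:").strip()
        if step.startswith("Action:"):
            _, tool, arg = [s.strip() for s in step.split(":", 2)]
            history.append(f"Observation: {TOOLS[tool](arg)}")  # act, observe

answer = react_loop("What is 12 * 7?", fake_llm)
```

The `max_steps` cap matters in practice: it is what keeps a confused agent from looping forever, and frameworks like LangGraph expose the same idea as a recursion limit.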
A Definitive Guide to Vector Databases for RAG: A Hands-on Guide to FAISS with Python Code - Souvik
Ever tried asking a traditional database a fuzzy question like “What articles are semantically similar to this paragraph about quantum computing?” Yeah, good luck with that.
Traditional databases are great at matching exact values: dates, customer IDs, numeric ranges. But when it comes to meaning, they’re basically like that one friend who takes everything literally. If you want to search by “what something means” instead of “what something is”, you need something more powerful.
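Searching by meaning boils down to comparing embedding vectors. The sketch below uses tiny hand-made 3-dimensional vectors and plain NumPy cosine similarity to show the idea; a real pipeline would embed text with a model and index millions of vectors in something like FAISS, as the article covers.

```python
import numpy as np

# Toy, hand-crafted "embeddings" -- a real system would generate these
# with an embedding model and store them in a vector index such as FAISS.
docs = {
    "quantum computing overview": np.array([0.9, 0.1, 0.0]),
    "classical database indexing": np.array([0.1, 0.9, 0.0]),
    "sourdough baking tips":       np.array([0.0, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend this is the embedding of a paragraph about quantum computing.
query = np.array([0.8, 0.2, 0.0])

nearest = max(docs, key=lambda title: cosine(query, docs[title]))
```

The exact-match query a traditional database excels at becomes a nearest-neighbor query here: no keyword overlap is required, only directional similarity in embedding space.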
The Definitive Guide to Chunking Strategies for RAG and LLMs - Souvik
From Fixed-Size to Semantic and Hierarchical Splits — A Practical Guide to Structuring Text for Smarter AI Systems
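The simplest of the strategies in the guide, fixed-size splitting with overlap, fits in a few lines. This is a generic sketch (character-based, with hypothetical default sizes), not the guide's exact code; semantic and hierarchical splitters replace the naive slicing with boundary detection.

```python
def fixed_size_chunks(text: str, chunk_size: int = 200, overlap: int = 50):
    """Split text into fixed-size character chunks.

    Consecutive chunks share `overlap` characters so a sentence cut at
    one chunk's boundary still appears whole at the start of the next.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
```

Fixed-size chunks are cheap and predictable but blind to structure, which is exactly the weakness that semantic and hierarchical splits address.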
