[{"data":1,"prerenderedAt":97},["ShallowReactive",2],{"blog-\u002Fblog\u002Fdont-ship-ai":3},{"id":4,"title":5,"body":6,"date":85,"description":86,"extension":87,"meta":88,"navigation":89,"path":90,"seo":91,"stem":92,"tag":93,"thumbnail":94,"youtube_url":95,"__hash__":96},"blog\u002Fblog\u002Fdont-ship-ai.md","Don't Ship AI",{"type":7,"value":8,"toc":78},"minimark",[9,13,18,21,29,36,42,48,54,58,61,68,72,75],[10,11,12],"p",{},"Just because AI is out there doesn't mean you should use it. I know — it feels like we've struck gold. Everyone is rushing to shove AI into their software. But I'm advocating for taking a step back.",[14,15,17],"h2",{"id":16},"the-email-validation-test","The email validation test",[10,19,20],{},"Take a basic example: you want to validate someone's email format. There's a million and one ways to do that already. Not the least of which is regex.",[10,22,23,24,28],{},"Sure, you ",[25,26,27],"em",{},"could"," make a call to an LLM with the user's input, asking it whether the string looks like a valid email. But there are real problems with that approach.",[10,30,31,35],{},[32,33,34],"strong",{},"It's slower."," It is way quicker for a program to process a regex and give you an instant answer than to reach out to an LLM provider. The round trip alone kills you — and local models probably aren't even that great at recognizing email formats because there aren't enough parameters to do so.",[10,37,38,41],{},[32,39,40],{},"It costs more."," It costs literally nothing to turn another CPU cycle to process a regex. Calling out to an infrastructure provider that spins up a bunch of GPUs just to answer your query? That's not free.",[10,43,44,47],{},[32,45,46],{},"It's less reliable."," Say the provider you're calling is overloaded or their servers are down. Better yet — they messed something up and the non-deterministic AI decided a gibberish string is indeed an email. 
And you put it straight in your database.",[49,50,51],"blockquote",{},[10,52,53],{},"First principles thinking, taking precautions — call it whatever you want. I don't think that just because AI is there, we should shove it into every solution.",[14,55,57],{"id":56},"ai-tools-vs-ai-in-your-stack","AI tools vs. AI in your stack",[10,59,60],{},"I've been building software with AI tools since the day they came out. I use them every day, especially Claude Code. It has absolutely accelerated the rate at which I can produce software.",[10,62,63,64,67],{},"What I ",[32,65,66],{},"haven't"," found is a genuine use case for an AI call in my actual software stack. Maybe I just haven't worked in an industry where I'd need to summarize huge heaps of data or perform sentiment analysis. But for the work I do — there's a better, cheaper, faster tool for every job I've needed done.",[14,69,71],{"id":70},"the-takeaway","The takeaway",[10,73,74],{},"AI is an incredible development tool. That doesn't automatically make it the right runtime dependency. Before you reach for an LLM call, ask yourself: is there already a deterministic, reliable, free way to solve this? Most of the time, there is.",[10,76,77],{},"If you've found truly useful ways to integrate AI into your applications, I'd genuinely like to hear about them. But until then — think twice before shipping it.",{"title":79,"searchDepth":80,"depth":80,"links":81},"",2,[82,83,84],{"id":16,"depth":80,"text":17},{"id":56,"depth":80,"text":57},{"id":70,"depth":80,"text":71},"2026-04-06","AI is everywhere, but that doesn't mean it belongs in your code.","md",{},true,"\u002Fblog\u002Fdont-ship-ai",{"title":5,"description":86},"blog\u002Fdont-ship-ai","Tech","\u002Fblog\u002Fdont-ship-ai.png","https:\u002F\u002Fyoutu.be\u002FNlraoSvGAUw","kt1ApzjII9FPU8zv_hK4o-zLXI_j96i-9jXipFVBA5Y",1775491441672]