<?xml version="1.0" encoding="UTF-8"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/"><channel><title>Husar Labs</title><description>Engineering notes on local AI infrastructure, self-hosted software, and adjacent technical work.</description><link>https://husarlabs.com/</link><language>en-gb</language><lastBuildDate>Tue, 05 May 2026 00:00:00 GMT</lastBuildDate><atom:link href="https://husarlabs.com/rss.xml" rel="self" type="application/rss+xml"/>
<item><title>A Home Assistant morning briefing that actually works</title><link>https://husarlabs.com/posts/home-assistant-morning-briefing/</link><guid isPermaLink="true">https://husarlabs.com/posts/home-assistant-morning-briefing/</guid><description>A door-triggered morning briefing that took three evenings to debug and one line to fix. Notes on layered Cast abstractions and why the working version is shorter than the broken one.</description><pubDate>Tue, 05 May 2026 00:00:00 GMT</pubDate><dc:creator>Shaun</dc:creator></item>
<item><title>Notes on BitNet and the slow rise of ternary weight models</title><link>https://husarlabs.com/posts/bitnet-ternary-models/</link><guid isPermaLink="true">https://husarlabs.com/posts/bitnet-ternary-models/</guid><description>Ternary weight models promise a step change in local inference economics. Microsoft&apos;s bitnet.cpp is the most concrete piece of infrastructure to date. A practitioner-level read on where it stands and what still needs to happen.</description><pubDate>Tue, 28 Apr 2026 00:00:00 GMT</pubDate><dc:creator>Shaun</dc:creator></item>
<item><title>Why I am not running Claude Code with Ollama as the backend, yet</title><link>https://husarlabs.com/posts/claude-code-ollama/</link><guid isPermaLink="true">https://husarlabs.com/posts/claude-code-ollama/</guid><description>Wiring a local LLM into Claude Code looks great on paper. In practice, the capability gap is still too wide for it to be a daily driver. A measured negative finding.</description><pubDate>Tue, 14 Apr 2026 00:00:00 GMT</pubDate><dc:creator>Shaun</dc:creator></item>
<item><title>Daily Slack digest with Ollama and Telegram</title><link>https://husarlabs.com/posts/slack-digest-ollama/</link><guid isPermaLink="true">https://husarlabs.com/posts/slack-digest-ollama/</guid><description>A small Python pipeline that summarises Slack channels overnight using a local LLM and posts the digest to Telegram. No third-party AI services in the loop.</description><pubDate>Tue, 07 Apr 2026 00:00:00 GMT</pubDate><dc:creator>Shaun</dc:creator></item>
<item><title>Stash Notes, a single user encrypted notes app, and what it taught me</title><link>https://husarlabs.com/posts/stash-notes/</link><guid isPermaLink="true">https://husarlabs.com/posts/stash-notes/</guid><description>A frank write-up of building a self-hosted notes app with end-to-end encryption, image attachments, and a shopping list mode. The unsexy parts took the longest.</description><pubDate>Tue, 31 Mar 2026 00:00:00 GMT</pubDate><dc:creator>Shaun</dc:creator></item>
<item><title>A browser-based vision app, served from a Mac Mini</title><link>https://husarlabs.com/posts/local-vision-app/</link><guid isPermaLink="true">https://husarlabs.com/posts/local-vision-app/</guid><description>Object detection with RF-DETR, vision-language reasoning with Gemma, speech transcription with Whisper.js, all served from a Mac Mini M4, all running without the cloud.</description><pubDate>Tue, 24 Mar 2026 00:00:00 GMT</pubDate><dc:creator>Shaun</dc:creator></item>
<item><title>Running Gemma 3 locally on a Mac Mini M4</title><link>https://husarlabs.com/posts/gemma3-mac-mini-m4/</link><guid isPermaLink="true">https://husarlabs.com/posts/gemma3-mac-mini-m4/</guid><description>What works and what doesn&apos;t when you run Gemma 3 on Apple&apos;s smallest M4 box with 32GB unified memory, with practical notes on Ollama, Open WebUI, and the VRAM ceiling.</description><pubDate>Tue, 10 Feb 2026 00:00:00 GMT</pubDate><dc:creator>Shaun</dc:creator></item>
</channel></rss>