Model Releases

  • z-lab/Qwen3.6-35B-A3B-DFlash — Diffusion-based speculative decoding that swaps the usual draft-and-verify loop for a block-diffusion approach. Promises fewer tokens wasted on bad guesses, though you’ll need to stomach custom generation code to get the speedup. 🤖
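
For context, this is the draft-and-verify loop that block-diffusion drafting replaces: a cheap draft model proposes a short run of tokens, the expensive target model verifies them, and everything after the first rejection is thrown away. A minimal toy sketch of that baseline loop; the stub models and token names are placeholders, not DFlash's generation code.

```python
# Toy sketch of classic draft-and-verify speculative decoding: a cheap draft
# model guesses a block of tokens, the expensive target model verifies them,
# and generation falls back to the target at the first rejection.
# Both models are stubs; real implementations compare token probabilities.

def draft_tokens(prefix, k=4):
    # Cheap draft model: propose the next k tokens in one go.
    return [f"<draft-{len(prefix) + i}>" for i in range(k)]

def target_accepts(prefix, token):
    # Expensive target model: accept or reject each drafted token.
    # Stubbed so roughly four out of five guesses pass.
    return len(prefix) % 5 != 4

def speculative_decode(prompt, max_len=16):
    out = list(prompt)
    while len(out) < max_len:
        drafted = draft_tokens(out)
        for tok in drafted:
            if target_accepts(out, tok):
                out.append(tok)                      # verified draft token kept for free
            else:
                out.append(f"<target-{len(out)}>")   # target resamples; rest of the draft is wasted
                break
        else:
            out.append(f"<bonus-{len(out)}>")        # whole block accepted: one bonus token from the verify pass
    return out

print(speculative_decode(["<prompt>"]))
```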

Open Source Releases

  • opencode v1.14.25 — Keeps permission configs in order, ships IntelliSense for tool keys, and finally lets Roslyn LSP handle Razor and C# without losing your working directory after shell login. 🛠️
  • qamigrate 0.5.19 — Turns Selenium Java into Playwright while keeping TestNG/JUnit/Cucumber baggage intact. Saves you from rewriting tests, but still leaves you with the same test logic you probably didn’t love in the first place. 🛠️
  • llm-notebook 1.3.8 — Scrapes Claude Code, Codex CLI, Cursor, and Obsidian into a searchable, versioned wiki of AI-assisted dev sessions. Great if you want to remember what you asked the bots last Tuesday. 📄
  • reveal-cli 0.87.0 — Walks AI agents through repos without dumping the whole codebase into context. Adapter-driven progressive disclosure keeps token burn low and curiosity high. 🛠️
  • ds2api — Proxy that normalizes DeepSeek (and friends) into OpenAI/Claude/Google-style endpoints. Multi-account rotation and Docker/Vercel deployment options mean you can swap models without rewriting client code; see the sketch after this list. 🔥
  • ait-vcs 0.4.0 — MVP that bakes AI into commits, diffs, and branches. Experimental, so treat it like a research preview, not your production trunk. 🤖
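
The payoff of an OpenAI-shaped proxy like ds2api is that existing clients never change: you repoint the base URL and keep the same SDK calls while the proxy handles account rotation and upstream quirks. A minimal sketch against a hypothetical local instance; the port, API key, and model id are illustrative assumptions, not documented defaults.

```python
from openai import OpenAI

# Assumption: a ds2api instance is running locally on port 8000 and exposes an
# OpenAI-compatible /v1 surface. URL, key, and model id below are illustrative.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="proxy-key")

resp = client.chat.completions.create(
    model="deepseek-chat",  # whichever upstream model the proxy routes this id to
    messages=[{"role": "user", "content": "Summarize this diff in one line."}],
)
print(resp.choices[0].message.content)
```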

AI Dev Tools

  • Mcp-Agent framework — Builds production-ready agents atop the Model Context Protocol with standardized tool hooks and multi-step reasoning. Less glue code, more guardrails. 🛠️
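
Whatever the SDK, the pattern is the same: each tool is declared once behind a uniform interface, and the agent loop only ever sees that interface, so adding a capability doesn't mean new glue code. A stripped-down sketch of that pattern; the Tool class, registry, and agent_step decision stub are hypothetical stand-ins, not Mcp-Agent's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

# Registry of tools behind one declarative interface (stand-in for MCP servers).
TOOLS = {
    "read_file": Tool("read_file", "Return the contents of a file", lambda p: f"<contents of {p}>"),
    "run_tests": Tool("run_tests", "Run the test suite", lambda _: "12 passed"),
}

def agent_step(goal: str, step: int) -> tuple[str, str]:
    # Stand-in for the model deciding which tool to call next and with what argument.
    return ("read_file", "README.md") if step == 0 else ("run_tests", "")

def run_agent(goal: str, max_steps: int = 2) -> list[str]:
    # Multi-step loop: pick a tool, call it through the uniform interface, record the result.
    transcript = []
    for step in range(max_steps):
        tool_name, arg = agent_step(goal, step)
        result = TOOLS[tool_name].run(arg)
        transcript.append(f"{tool_name}({arg!r}) -> {result}")
    return transcript

for line in run_agent("fix the failing test"):
    print(line)
```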

Today’s Synthesis

If you’re tired of paying token tolls every time an agent panics and dumps a repo into context, pair reveal-cli with ds2api and let the adapter do the walking. reveal-cli’s progressive disclosure keeps prompts tight and intent legible, while ds2api normalizes whatever model you want to run (DeepSeek, Claude, Gemini) behind one OpenAI-shaped keyhole. You can swap cheaper or smarter models per task without refactoring clients, and you burn fewer tokens regurgitating code the agent would only have guessed at anyway. Treat ait-vcs as the logging layer: commit messages, diff intent, and branch rationale captured upstream so agents don’t re-derive context they already had. The net effect is a workflow where context grows only as fast as intent, spend drops because models stop guessing in the dark, and you still get to swap brains when the economics change. 🤖🔥
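
Here is what that per-task swap looks like when everything speaks one OpenAI-shaped dialect: a routing table picks the model id, and the client code never changes. The proxy URL, key, and model ids are illustrative assumptions, not values from either project's docs.

```python
from openai import OpenAI

# One client against one OpenAI-compatible endpoint (assumed here to be a local
# ds2api proxy). Swapping models per task is a dictionary edit, not a refactor.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="proxy-key")

MODEL_FOR_TASK = {
    "summarize_diff": "deepseek-chat",   # cheap model for rote work
    "plan_refactor": "claude-sonnet",    # pricier model where judgment matters
}

def ask(task: str, prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL_FOR_TASK[task],
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(ask("summarize_diff", "Summarize this diff in one line: ..."))
```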