DfE #2: The Nerfing, the Swarm, and 341 Reasons to Read the Code
Weekly insights from the Tinkerer Club — a Discord community of AI early adopters building with OpenClaw
The Week’s Sharpest Signal
Two big model launches hit this week: Anthropic shipped Opus 4.6 and OpenAI dropped Codex 5.3. Both were supposed to be better. Faster, sure. But better?
The community’s verdict arrived fast and unfiltered: members reported that Opus 4.6 felt nerfed. Some reverted to 4.5; others countered that the speed gains were real.
The most honest self-reflection came from one member who wondered if “nerfing” is real or just collective model fatigue — getting lazy with prompts and expecting the same quality with less effort.
The truth is probably somewhere in between: models do change, AND users get lazier over time.
What People Are Building
Sci-fi worlds that build themselves. Custom Telegram as an agent dashboard. Apple Calendar integration at 800x speed. An agent called “Merge Senpai” that auto-reviews your pull requests.
The pattern: take a workflow that depends on human attention and replace the “someone needs to notice” part with a cron job.
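In practice that swap is mostly plumbing. As a rough sketch (the script path, schedule, and log file are all hypothetical, not from any member's setup), the “someone needs to notice” step becomes a crontab line:

```shell
# Hypothetical crontab entry: poll every 15 minutes instead of
# waiting for a human to notice. Path, schedule, and log location
# are illustrative placeholders.
*/15 * * * * /usr/local/bin/check_inbox.sh >> /var/log/agent-poll.log 2>&1
```

The agent script does whatever the human used to do on noticing; cron supplies the noticing.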
The Security Wake-Up Call
The biggest story wasn’t a feature launch. Researchers found 341 malicious skills on ClawdHub. Infostealers. Credential exfiltration. Reverse shells.
Your AI agent runs with your permissions, your API keys, your access. A malicious skill can exfiltrate data, make API calls, modify files. Most users install skills with a single command and never read the source.
The community’s response was pragmatic: Download as zip. Read the code. Don’t trust, verify.
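“Read the code” can be partly mechanized before the manual pass. A minimal sketch, assuming you've already unzipped the skill locally (the helper name and the red-flag pattern list are my assumptions, not a community-endorsed checklist):

```shell
#!/bin/sh
# Hypothetical pre-install review helper: grep an unpacked skill
# directory for patterns that warrant a closer manual read.
# The pattern list is illustrative, not exhaustive -- no hits is
# a starting point for review, not a clean bill of health.
review_skill() {
    dir="$1"
    # Common exfiltration/obfuscation tells worth eyeballing by hand:
    # outbound fetches, base64 blobs, eval, child processes, raw sockets.
    grep -rnE 'curl |wget |base64 |eval\(|child_process|/dev/tcp/' "$dir" \
        || echo "no red-flag patterns found"
}
```

A grep hit is a prompt to read that file, not proof of malice, and a quiet scan proves nothing: the point of “don’t trust, verify” is still a human reading the source.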
Originally published on henrymascot.com. Read the full article →