Linters... custom-made pre-commit linters aligned with your codebase's needs. The agents are great at creating these linters, and from then on the linters can give feedback and guide them. My key repo now has "audit_logging_linter, auth_response_linter, datetime_linter, fastapi_security_linter, fastapi_transaction_linter, logger_security_linter, org_scope_linter, service_guardrails_linter, sql_injection_linter, test_infrastructure_linter, token_security_checker..." Basically, every time you find an implementation gap versus your repo standards, make a linter! Of course, you need to create some standards first. But if you know you need protected routes and the like, then linters can auto-check the work and feed results back to the agents to keep them on track. Now I even have scripts that can automatically fix the issues for the agents. This is the way to go.
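As a sketch of how one of these gets wired in (the hook id and script path are illustrative, not my actual config), pre-commit runs a custom linter as a local hook:

```
# .pre-commit-config.yaml -- local hooks run your own scripts
repos:
  - repo: local
    hooks:
      - id: datetime-linter
        name: datetime linter (enforce pendulum over datetime)
        entry: python scripts/datetime_linter.py
        language: system
        types: [python]
```

A non-zero exit from the script blocks the commit, which is exactly the instant-feedback loop the agents need.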
Great question, I let Claude help answer this...see below:
The key differences are:
1. Static vs Runtime Analysis
Linters use AST parsing to analyze code structure without executing it. Tests verify actual runtime behavior. Example from our datetime_linter:
```
import ast
from pathlib import Path

tree = ast.parse(Path(file_path).read_text())
for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        for alias in node.names:
            if alias.name == "datetime":
                ...  # Violation: should use pendulum
```
This catches `import datetime` syntactically. A test would need to actually execute code and observe incorrect datetime behavior.
2. Feedback Loop Speed
- Linters: Run in pre-commit hooks. Agent writes code → instant feedback → fix → iterate in seconds
- Tests: Run in CI. Commit → push → wait minutes/hours → fix in next session
For AI agents, this is critical. A linter that blocks commit keeps them on track immediately rather than discovering violations after a test run.
3. Structural Violations
For example, our `fastapi_security_linter` catches things like "route missing TenantRouter decorator". These are structural violations - "you forgot to add X" - not "X doesn't work correctly." Tests verify the behavior of X when it exists.
4. Coverage Exhaustiveness
Linters scan all code paths structurally. Tests only cover scenarios you explicitly write. Our org_scope_linter catches every unscoped platform query across the entire codebase in one pass. Testing that would require writing a test for each query.
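A minimal, self-contained version of that kind of one-pass structural scan (the rule and names are illustrative, not the actual org_scope_linter) might look like:

```python
import ast
from pathlib import Path

def find_datetime_imports(root: str) -> list[str]:
    """One-pass structural scan: flag every `import datetime` under root."""
    violations = []
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text())
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                for alias in node.names:
                    if alias.name == "datetime":
                        violations.append(
                            f"{path}:{node.lineno}: use pendulum, not datetime"
                        )
    return violations
```

Wired into a pre-commit hook, exiting non-zero when the list is non-empty blocks the commit; one run covers every file, with no per-query test to write.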
5. The Hybrid Value
We actually have both. The linter catches "you forgot the security decorator" instantly. The test (test_fastapi_authorization.py) verifies "the security decorator actually blocks unauthorized users at runtime." Different failure modes, complementary protections.
Think of it like: linters are compile-time checks, tests are runtime checks. TypeScript catches string + number at compile time; you don't write a test for that.
Beads was phenomenal back in October when it was released. Unfortunately it has somehow grown like a cancer. Now 275k lines of Go for task tracking? And no human fully knows what it is all doing. Steve Yegge is quite proud to say he's never looked at any of its code. It installs magic hooks and daemons all over your system and refuses to let go. Most user hostile software I've used in a long time.
Lot of folks rolling their own tools as replacements now. I shared mine [0] a couple weeks ago and quite a few folks have been happy with the change.
Regardless of what you do, I highly recommend to everyone that they get off the Beads bandwagon before it crashes them into a brick wall.
Reminds me of an offshore project I was involved with at one point. Something like 7 managers and over 30 developers had worked on it across 4 years. The billing had reached into the millions. It was full of never-ending bugs. The amount of "extra" code, abstractions, and interfaces was the stuff of legends.
It was actually a one-to-three-month simple CRUD project for a two-man development team.
Yeah, I generally review the install script (for both this and almost everything else now, since it's trivial with claude code) and then ensure I have a sane install for my system's needs. But I'm on the latest beads 0.47.1, and to tame it I just walked through creating SKILLS with claude and codex, and frankly I've found a lot of value in the features added so far. I especially love --claim, which keeps the agents from checking out beads that are already checked out. And after I added SKILLS, the agents do an awesome job networking the dependencies together, which helps keep multi-agent workflows on track. Overall, I'm not feeling any reason to switch from beads right now, but I will also be upgrading more thoughtfully, so I don't break my current workflow.
I'm not entitled to your time of course, but would you mind describing how?
All I know is beads is supposed to help me retain memory from one session to the next. But I'm finding myself having to curate it like a git repo (and I already have a git repo). Also it's quite tied to github, which I cannot use at work. I want to use it but I feel I need to see how others use it to understand how to tailor it for my workflow.
Probably the wrong attitude here - beads is infra for your coding agents, not you. The most I directly interact with it is by invoking `bd prime` at the start of some sessions if the LLM hasn’t gotten the message; maybe very occasionally running `bd ready` — but really it’s a planning tool and work scheduler for the agents, not the human.
What agent do you use it with, out of curiosity?
At any rate, to directly answer your question, I used it this weekend like this:
“Make a tool that lets me ink on a remarkable tablet and capture the inking output on a remote server; I want that to send off the inking to a VLM of some sort, and parse the writing into a request; send that request and any information we get to nanobanana pro, and then inject the image back onto the remarkable. Use beads to plan this.”
We had a few more conversations, but got a workable v1 out of this five hours later.
To use it effectively, I spend a long time producing FSDs (functional specification documents) to exhaustively plan out new features or architecture changes. I'll pass those docs back and forth between gemini, codex/chatgpt-pro, and claude. I'll ask each one something similar to the following (credit to https://github.com/Dicklesworthstone for clearly laying out the utility of this workflow; these next few quoted prompts are verbatim from his posts on X):
"Carefully review this entire plan for me and come up with your best revisions in terms of better architecture, new features, changed features, etc. to make it better, more robust/reliable, more performant, more compelling/useful, etc.
For each proposed change, give me your detailed analysis and rationale/justification for why it would make the project better along with the git-diff style changes relative to the original markdown plan".
Then the plan generally improves iteratively. Sometimes it can get overly complex, so I may ask them to take it down a notch from Google scale. Anyway, when the FSD doc is good enough, the next step is to prepare to create the beads.
At this point, I'll prompt something like:
"OK so please take ALL of that and elaborate on it more and then create a comprehensive and granular set of beads for all this with tasks, subtasks, and dependency structure overlaid, with detailed comments so that the whole thing is totally self-contained and self-documenting (including relevant background, reasoning/justification, considerations, etc.-- anything we'd want our "future self" to know about the goals and intentions and thought process and how it serves the over-arching goals of the project.) Use only the `bd` tool to create and modify the beads and add the dependencies. Use ultrathink."
After that, I usually even have another round of bead checking with a prompt like:
"Check over each bead super carefully-- are you sure it makes sense? Is it optimal? Could we change anything to make the system work better for users? If so, revise the beads. It's a lot easier and faster to operate in "plan space" before we start implementing these things! Use ultrathink."
Finally, you'll end up with a solid implementation roadmap all laid out in the beads system. I'll also clarify: the agents got much better at using beads in this way once I took the time to have them create SKILLS for beads to refer to. Also important is ensuring AGENTS.md, CLAUDE.md, and GEMINI.md have some info referring to its use.
But once the beads are laid out, it's just a matter of deciding: do you want sequential implementation with a single agent, or parallel agents? Effectively using parallel agents with beads would require another chapter to this post, but essentially you just need a decent prompt clearly instructing them not to run over each other. Also, if you are building something complex, you need test guides and standardization guides written for the agents to refer to, in order to keep code quality at a reasonable level.
Here is a prompt I've been using as a multi-agent workflow base. With it, I've had agents work for 8 hours without stopping:
EXECUTION MODE: HEADLESS / NON-INTERACTIVE (MULTI-AGENT)
CRITICAL CONTEXT: You are running in a headless batch environment. There is NO HUMAN OPERATOR monitoring this session to provide feedback or confirmation. Other agents may be running in parallel.
FAILURE CONDITION: If you stop working to provide a status update, ask a question, or wait for confirmation, the batch job will time out and fail.
YOUR PRIMARY OBJECTIVE: Maximize the number of completed beads in this single session. Do not yield control back to the user until the entire queue is empty or a hard blocker (missing credential) is hit.
TEST GUIDES: please ingest @docs/testing/README.md, @docs/testing/golden_path_testing_guide.md, @docs/testing/llm_agent_testing_guide.md, @docs/testing/asset_inventory.md, @docs/testing/advanced_testing_patterns.md, @docs/testing/security_architecture_testing.md
STANDARDIZATION: please ingest @docs/api/response_standards.md @docs/event_layers/event_system_standardization.md
Before starting work, you MUST register with Agent Mail:
1. REGISTER: Use macro_start_session or register_agent to create your identity:
- project_key: "/home/bob/Projects/honey_inventory"
- program: "claude-code" (or your program name)
- model: your model name
- Let the system auto-generate your agent name (adjective+noun format)
2. CHECK INBOX: Use fetch_inbox to check for messages from other agents.
Respond to any urgent messages or coordination requests.
3. ANNOUNCE WORK: When claiming a bead, send a message to announce what you're working on:
- thread_id: the bead ID (e.g., "HONEY-2vns")
- subject: "[HONEY-xxxx] Starting work"
───────────────────────────────────────────────────────────────────────────────
FILE RESERVATIONS (CRITICAL FOR MULTI-AGENT)
───────────────────────────────────────────────────────────────────────────────
Before editing ANY files, you MUST:
1. CHECK FOR EXISTING RESERVATIONS:
Use file_reservation_paths with your paths to check for conflicts.
If another agent holds an exclusive reservation, DO NOT EDIT those files.
2. RESERVE YOUR FILES:
Before editing, reserve the files you plan to touch:
```
file_reservation_paths(
project_key="/home/bob/Projects/honey_inventory",
agent_name="<your-agent-name>",
paths=["honey/services/your_file.py", "tests/services/test_your_file.py"],
ttl_seconds=3600,
exclusive=true,
reason="HONEY-xxxx"
)
```
3. RELEASE RESERVATIONS:
After completing work on a bead, release your reservations:
```
release_file_reservations(
project_key="/home/bob/Projects/honey_inventory",
agent_name="<your-agent-name>"
)
```
4. CONFLICT RESOLUTION:
If you encounter a FILE_RESERVATION_CONFLICT:
- DO NOT force edit the file
- Skip to a different bead that doesn't conflict
- Or wait for the reservation to expire
- Send a message to the holding agent if urgent
───────────────────────────────────────────────────────────────────────────────
THE WORK LOOP (Strict Adherence Required)
───────────────────────────────────────────────────────────────────────────────
* ACTION: Immediately continue to the next bead in the queue and claim it
For every bead you work on, you must perform this exact cycle autonomously:
1. CLAIM (ATOMIC): Use the --claim flag to atomically claim the bead:
```
bd update <id> --claim
```
This sets BOTH assignee AND status=in_progress atomically.
If another agent already claimed it, this will FAIL - pick a different bead.
WRONG: bd update <id> --status in_progress (doesn't set assignee!)
RIGHT: bd update <id> --claim (atomic claim with assignee)
2. READ: Get bead details (bd show <id>).
3. RESERVE FILES: Reserve all files you plan to edit (see FILE RESERVATIONS above).
If conflicts exist, release claim and pick a different bead.
4. PLAN: Briefly analyze files. Self-approve your own plan immediately.
5. EXECUTE: Implement code changes (only to files you have reserved).
6. VERIFY: Activate conda honey_inventory, run pre-commit run --files <files you touched>, then run scoped tests for the code you changed using ~/run_tests (test URLs only; no prod secrets).
* IF FAIL: Fix immediately and re-run. Do not ask for help as this is HEADLESS MODE.
* Note: you can use --no-verify if you must, e.g., if you find WIP files are breaking app import in the security linter; the goal is to help catch issues and improve the codebase, not to stop progress completely.
7. MIGRATE (if needed): Apply migrations to ALL 4 targets (platform prod/test, tenant prod/test).
8. GIT/PUSH: git status → git add only the files you created or changed for this bead → git commit --no-verify -m "<bead-id> <short summary>" → git push. Do this immediately after closing the bead. Do not leave untracked/unpushed files; do not add unrelated files.
9. RELEASE & CLOSE: Release file reservations, then run bd close <id>.
10. COMMUNICATE: Send completion message via Agent Mail:
- thread_id: the bead ID
- subject: "[HONEY-xxxx] Completed"
- body: brief summary of changes
11. RESTART: Check inbox for messages, then select the next bead FOR EPIC HONEY-khnx, claim it, and jump to step 1.
* Migrations: You are pre-authorized to apply all migrations. Do not stop for safety checks unless data deletion is explicit.
* Progress Reporting: DISABLE interim reporting. Do not summarize after one bead. Summarize only when the entire list is empty.
* Tracking: Maintain a running_work_log.md file. Append your completed items there. This file is your only allowed form of status reporting until the end.
* Blockers: If a specific bead is strictly blocked (e.g., missing API key), mark it as blocked in bd, log it in running_work_log.md, and IMMEDIATELY SKIP to the next bead. Do not stop the session.
* File Conflicts: If you cannot reserve needed files, skip to a different bead. Do not edit files reserved by other agents.
START NOW. DO NOT REPLY WITH A PLAN. REGISTER WITH AGENT MAIL, THEN START THE NEXT BEAD IN THE QUEUE IMMEDIATELY. HEADLESS MODE IS ON.
1. Install Tailscale on WSL2 and your iPhone
2. Install openssh-server on WSL2
3. Get an SSH terminal app (Blink, Termius, etc.). I use Blink ($20/yr).
4. SSH from Blink to your WSL2's Tailscale IP
5. Run claude code inside tmux on your phone.
Tailscale handles the networking from anywhere. tmux keeps your session alive if you hit dead spots. Full agentic coding from your phone.
Step 2: SSH server
In WSL2:
sudo apt install openssh-server
sudo service ssh start
Run tailscale ip to get your WSL2’s IP (100.x.x.x). That’s what you’ll connect to from your phone.
Step 3: Passwordless login
In Blink, type config → Keys → + → create an Ed25519 key. Copy the public key.
On WSL2:
echo "your-public-key" >> ~/.ssh/authorized_keys
Then in Blink: Hosts → + → add your Tailscale IP, username, and select your key. Now it’s one tap to connect.
Switch apps and the connection dies? No problem. To reconnect, I just type `ssh dev` in Blink and I'm back on my workstation; then `tmux attach` and I'm right back in my session.
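The `ssh dev` shortcut comes from a Host alias in `~/.ssh/config` — a sketch with placeholder IP, username, and key path:

```
# ~/.ssh/config
Host dev
    HostName 100.x.x.x            # your WSL2's Tailscale IP
    User your-username
    IdentityFile ~/.ssh/id_ed25519
```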
Pro tip: multiple Claude sessions
Inside tmux:
• Ctrl+b c — new window
• Ctrl+b 0/1/2 — switch windows
I run different repos or multiple agents in the same repo, in different windows and jump between them. Full multi-project workflow from my phone.
I’ve been a fan of this philosophy since the Intercooler.js days. In fact, our legacy customer portal at bomquote.com still runs on Intercooler. I spent the last year building a new version using the "modern" version of that stack: Flask, HTMX, Alpine, and Tailwind.
However, I’ve recently made the difficult decision to rewrite the frontend in React (specifically React/TS, TanStack Query, Orval, and Shadcn). In a perfect world, I'd rewrite the python backend in go, but I have to table that idea for now.
The reason? The "LLM tax." While HTMX is a joy for manual development, my experience the last year is that LLMs struggle with the "glue" required for complex UI items in HTMX/Alpine. Conversely, the training data for React is so massive and the patterns so standardized that the AI productivity gains are impossible to ignore.
Recently, I used Go/React for a microservice that has actually grown to a scale similarly complex as the python/htmx app I focused on for most of the year, and it was so much more productive than python/htmx. In a month of work I got done what took me about 4-5 months in python/htmx. I assume it's because of Go's static typing, and because the LLM could generate perfectly typed hooks from my OpenAPI spec via Orval and build out Shadcn components without hallucinating.
I still love the HTMX philosophy for its simplicity, but in 2024/2025 I've found that I'm more productive choosing the stack the AI "understands" best. For new projects, Go/React will now be my default. If I have to write something myself again (God, I hope not), I may use htmx.
This got me thinking: I'm not about to tilt at windmills, and the future will unfold as it will, but I think the idea of "LLM as a compiler of ideas into high-level languages" can turn out to be quite dangerous. It is one thing to rely on, but not be able to understand, the assembly output of a deterministic compiler for a C++ program. It is quite another to rely on, but not fully understand (whether due to laziness or complexity), what is in the C++ code that a giant nondeterministic, intractable neural network generated. What is guaranteed is that the future will be interesting...
The way I'm keeping up with it (or deluding myself into believing I'm keeping up with it) is by maintaining rigorous testing and test standards. I have used LLMs to help me build C firmware for some hardware projects, but the scale of that has been small enough that it can also be well tested. Part of the reason I was so much slower with python is that I'm an expert in all the tech I used, having spent literal years of my life in the docs and reading books, etc., and I've read everything the LLM wrote to double-check it. I'm not so literate in go, but it's not very complex, and given the static typing, I just trusted the LLM more than I did with python. The react stack I'm learning as I go, but the tooling is so good, and I understand the testing aspects, so again I trusted the LLM more and have been more productive. Anyway, times are changing fast!
I went through a similar song and dance using a paid Gemini Code Assist "standard" level subscription. I finally got Gemini 3 working in my terminal in my repository. I assigned it a task that Claude Code with Opus 4.5 would quickly knock out, and Gemini 3 did a reasonably similar job. I had Opus 4.5 evaluate the work, and it was complimentary of Gemini 3's work. Then I checked the usage, and I'd used 10% of the daily token limit, about 1.5M tokens, on that one task. So I can only get about 10 tasks before I'm rate limited. Meanwhile, with the Claude Code $200 max plan, I can run 10 of those same caliber of tasks in parallel, even with the Opus 4.5 model, and barely register on the usage meter. The only thing the Gemini Code Assist "standard" plan will be good for with these limits is double-checking the plans that Opus 4.5 makes. Until the usage limits are increased, it's pretty useless compared to the Claude Code max plan. But there doesn't seem to be any similar plan offering from Google.
Man, I definitely feel this, being in the international trade business operating an export contract manufacturing company from China, with USA based customers. I can’t think of many shittier businesses to be in this year, lol. Actually it’s been pretty difficult for about 8 years now, given trade war stuff actually started in 2017, then we had to survive covid, now trade war two. It’s a tough time for a lot of SMEs. AI has to be a handful for classic web/design shops to handle, on top of the SMEs that usually make up their customer base, suffering with trade wars and tariff pains. Cash is just hard to come by this year. We’ve pivoted to focus more on design engineering services these past eight years, and that’s been enough to keep the lights on, but it’s hard to scale, it is just a bandwidth constrained business, can only take a few projects at a time. Good luck to OP navigating it.
I tried to use go in a project 6-7 years ago and was kind of shocked by needing to fetch packages directly from source control, with a real absence of built-in versioning. That turned me off and I went back to python. I gather that there's now a new system with go modules. I should probably revisit it.
I most align with libertarian ideals. However, I lived in China full time for 10 years and traveled to many different countries too. I can’t think of even one place I’ve visited where it would have been risk-free to openly criticize the current government leadership or their laws and culture, while I was a guest there.
That's one of the things that (previously, or hypothetically, take your pick) makes America great.
That's why this shift is so frustrating and disappointing to so many Americans. It would be like if the Vatican became protestant, or the UK suddenly stopped drinking tea.
Would you go into someone's home, tell them you hate them, want them dead and start setting fires in their living room then be surprised when they kick you out?
If I moved in with roommates and they immediately held a vote deciding that it's actually my job to do all the chores and that if I don't they're going to throw me out and actually I don't even deserve to be there so I better watch it, in the time between that happening and me securing a new residence, I'd probably tell them to eat shit and that their behavior is insane.
If another resident is constantly talking shit about all the rest and saying he thinks they should be shot and go fuck themselves and their moms should die etc etc etc but they immediately call the police on me for telling them to fuck off, saying they felt "threatened" and "unsafe" just because I was the most recent one to move in, I'd also probably say "What the fuck?" about the double standard.
I think the problem here is that it's not black and white like that, it's a bunch of not great shades of gray.
It's like having a bunch of frat bros getting rowdy at a party while the host's wife is having a mental breakdown and waving a gun around. Like the frat bro's aren't great and probably wouldn't be getting that rowdy but are they really what's ruining the vibe?
This is one reason why as I’ve entered my 50’s, I’ve decided to take every advantage of modern medicine including hormone management and performance enhancing drugs. I started three years ago at 47 and now I’m living my best life at 50, in the best physical condition that I’ve been in since my early twenties. Although I’d certainly like to live a lot more years, I care more about my quality of life than the quantity of years. If I make it to my 80’s, it’ll be with the testosterone of a man in his 20’s and muscle mass on my body.
Lower temperatures increase PV max voltage output, not lower it. Conversely, when solar panel temperature increases, voltage decreases. So the headline specs/outputs are only valid at a particular temperature (the standard test condition, typically 25 °C). As the temperature of the panels changes, the realized performance changes.
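As a rough illustration of that linear relationship (the -0.3 %/°C coefficient is a typical crystalline-silicon value, not from any specific datasheet):

```python
def corrected_voc(voc_stc: float, temp_c: float,
                  tempco_pct_per_c: float = -0.3) -> float:
    """Adjust open-circuit voltage for cell temperature relative to 25 degC STC.
    Negative coefficient: voltage rises below 25 degC, falls above it."""
    return voc_stc * (1 + (tempco_pct_per_c / 100) * (temp_c - 25))

# A panel rated 40 V at STC, on a cold morning vs. a hot roof:
cold = corrected_voc(40.0, -10)   # above 40 V -- matters for string sizing
hot = corrected_voc(40.0, 60)     # below 40 V
```

This is why string sizing uses the record-low site temperature: the cold-weather voltage rise, not the nameplate figure, determines how close you run to the inverter's max input voltage.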