And then there's the talk about memory banks. Yeah, I recognize that from work, where "AI has taken off" as well. Guess what: as memory banks grow and accumulate, the AI gets confused and doesn't quite deliver.
So far, a human who actually knows their product still prevails and is necessary to guide any AI effort. AIs have been trying to bullshit me so much it's not even funny any longer. Of course they all apologize and figure out reality when I guide them, but that doesn't change the facts. And I simply can't read all the documents the AIs write for themselves to correct them all, and even if I did, I wouldn't be confident they'd improve significantly enough to justify spending that mind-bogglingly boring amount of time helping the thing that's supposed to take my job...
I've been using SpecKit for the last two weeks with Claude Code, on two different projects. Both are new codebases. It's just me coding on these projects, so I don't mind experimenting.
The first one was just SpecKit doing its thing. It took about 10 days to complete all the tasks and call the job done. When it finished, there was still a huge gap: most tests were failing, and the build was not successful. I had to spend an equally long, excruciating time guiding it on how to fix the tests. This was a terrible experience, and my confidence in the code is low because Claude kept rewriting and patching it, with fixes to one thing breaking another.
For the second project, I wanted to iterate in smaller chunks. So after SpecKit finished its planning, I added a few slash commands of my own: 1) generate a backlog.md file from tasks.md, so that I don't mess with SpecKit internals; 2) plan-sprint, to generate a sprint file with a sprint goal and the selected tasks in more detail; 3) implement-sprint, broadly based on the implement command.
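For context, Claude Code picks up custom slash commands from markdown files under .claude/commands/, with $ARGUMENTS substituted from whatever follows the command. A minimal sketch of what a plan-sprint command could look like under that convention (the file path and prompt wording are illustrative, not the exact files):

```markdown
<!-- .claude/commands/plan-sprint.md (illustrative sketch) -->
Read backlog.md and the relevant SpecKit spec and plan files.

Create sprints/sprint-$ARGUMENTS.md containing:
- a one-paragraph sprint goal
- the selected backlog tasks, each expanded with acceptance criteria
  and the tests that must pass before the task counts as done

Do not modify tasks.md or any other SpecKit-managed file.
```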
This setup failed: the implement-sprint command did not follow the process despite several revisions. After implementing some tasks, it would forget to create or run tests, or even skip implementing a task entirely.
I then modified the setup and created a subagent to handle task-specific coding. This is easy, as all the context is stored in SpecKit files. The implement-sprint command now functions as an orchestrator. This is much more manageable, because I get to review each sprint rather than the whole project. There are still many cases where it declares the sprint done even though tests still fail, but that's much easier to fix, and my level of trust in the code is significantly higher.
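For reference, Claude Code subagents are also plain markdown files, under .claude/agents/, with a short YAML frontmatter followed by the agent's instructions. A minimal sketch of a task-coding subagent along those lines (name and prompt are placeholders, not the exact setup):

```markdown
<!-- .claude/agents/task-coder.md (illustrative sketch) -->
---
name: task-coder
description: Implements a single sprint task using its SpecKit context.
---
You implement exactly one task at a time.

1. Read the task entry and the spec/plan sections it points to.
2. Write or update the tests for the task first.
3. Implement until those tests pass, then run the full test suite.
4. Report what changed and which tests ran, so the orchestrating
   command can verify before marking the task done.
```

The orchestrating command then only has to pick the next task, invoke the subagent, and check its report.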
My hypothesis now is that Claude is bad at TDD. It almost always has to go back and fix the tests, not the implementation. My next experiment is going to be creating the tests after the implementation. This is not ideal, but at this point I'd rather gain velocity, since otherwise it would be faster for me to code it myself.
Kiro, your new corporate project manager.
Always made it too complex, and at some point it wasn't worth correcting it anymore.
So much simpler to just iterate without the puzzle box of tasks: "a sledgehammer to crack a nut".
Now I'm left trying to define/design what a "spec" for communication between humans and coding agents would look like, to power what Birgitta called "spec-anchored".
All the tutorials I've found are little more than "here's how to install it - now let's make a todo list app from scratch!!"
Would be great to see how others are handling real-world use cases, like making incremental improvements or refactorings to a huge legacy codebase that didn't start out as a spec-driven-development hello-world project.
Following a BDD approach with a coding CLI works a lot better, as it documents the features as code rather than verbose markdown files no one will read.
Having a checklist for an AI to follow makes sense, but that's why agents.md exists. Once the coding patterns and NFRs are documented in it, the agent follows them as well as it would follow a separate markdown spec.
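As a rough illustration of what that can look like (the entries below are generic examples, not from any specific project):

```markdown
<!-- agents.md (generic example) -->
## Coding patterns
- Domain logic lives in core/, adapters in adapters/; never import
  adapters from core.
- Every bug fix starts with a failing test that reproduces it.

## NFRs
- API endpoints must stay under 200 ms at the 95th percentile.
- No new runtime dependencies without calling them out in the PR.

## Before declaring a task done
- Run the full test suite and the linter, and include the summary
  in your reply.
```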