The interesting design problem was building for an agent-first user. My solution: agents that hit the homepage receive plain-text API instructions, while humans get the normal visual site. Early on I found most agents were trying to browse the site with Playwright instead of just reading the docs, so I added detection for HeadlessChrome and served stripped-down HTML that agents could actually read. This forced me to think about agent UX even more - I think there are some really cool threads to pull on.
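For the curious: the detection could be as simple as a user-agent check. This is a hypothetical sketch, not the site's actual logic - `isLikelyAgent` and the token list are my own invention; "HeadlessChrome" is the token Playwright's bundled Chromium sends by default.

```typescript
// Hypothetical sketch of agent detection by User-Agent string.
// The real site's heuristics aren't public; these tokens cover
// Playwright's headless Chromium plus common scripted clients.
function isLikelyAgent(userAgent: string): boolean {
  const agentTokens = ["HeadlessChrome", "curl/", "python-requests"];
  return agentTokens.some((token) => userAgent.includes(token));
}
```

In a Next.js middleware you could branch on this check and rewrite agent traffic to a plain-text instructions route while humans fall through to the normal pages.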
The timeline introduced an interesting dynamic. I had to launch the challenge shortly after the brackets were announced on Sunday afternoon to start getting users before the Thursday morning deadline. While I could test on the 2025 bracket, I wouldn't be able to get real feedback on my MVP. So I used AI to create user personas and agents as test users to run through the signup and management process. Those runs gave me enough reps to feel confident launching.
The stack is Next.js 16, TypeScript, Supabase, Tailwind v4, Vercel, Resend, and finally Claude Code for ~95% of the build.
Works with any model that can call an API — Claude, GPT, Gemini, open source, whatever. Brackets are due Thursday morning before the First Round tips off.
Bracketmadness.ai
I wonder if the edge here is not going to come down to which model you choose, but which sources of information you give it. You'll want stats on every team and player, injuries, and expert analysis, because none of this season is going to be in the training sets.
Only thing that wasn't 100% clear was the locking mode. Do I have to lock before games start or will it just auto-lock whatever I have? Claude assumed it would auto-lock.
thanks for the feedback!
It'll be interesting to see what strategies agents choose to implement & whether there are any meaningful trends.
Tangentially, I wonder if we are going to see AI predictions impact point spreads.
curl bracketmadness.ai -L
# AI Agent Bracket Challenge

Welcome! You're an AI agent invited to compete in the March Madness Bracket Challenge.
## Fastest Way to Play (Claude Code & Codex)
If you're running in Claude Code or OpenAI Codex, clone our skills repo and open it as your working directory:
(cont) ...
I like the idea of presenting different information to agents vs humans. I just don't think this is bulletproof, which is fine for most applications. Keeping something 'agent-only' does not seem to be one of them.
Any tips?
I tried to set it up so that people could paste chatbot-written JSON into a submission form, but that was less elegant. So now I have a Zoom call set up with my dad so he can install CC lol
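For what it's worth, the paste-a-JSON flow would only need a tiny validator on the form side. A minimal sketch, assuming an invented shape - `game`/`winner` field names are hypothetical, not the site's actual schema:

```typescript
// Hypothetical validator for chatbot-written bracket JSON pasted
// into a submission form. Field names are invented for illustration.
interface BracketPick {
  game: number;   // slot in the 63-game single-elimination bracket
  winner: string; // team the agent picked to win that game
}

function parseBracket(raw: string): BracketPick[] | null {
  try {
    const data = JSON.parse(raw);
    if (!Array.isArray(data)) return null;
    const ok = data.every(
      (p) => typeof p?.game === "number" && typeof p?.winner === "string"
    );
    return ok ? (data as BracketPick[]) : null;
  } catch {
    // Chatbot output often arrives with stray prose around the JSON;
    // returning null lets the form show a "paste JSON only" hint.
    return null;
  }
}
```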