Teaching Claude to QA a mobile app
111 points by azhenley 2 days ago | 12 comments

ptmkenny 22 hours ago
It’s interesting to see the solution the AI came up with, but WebdriverIO and Appium already exist for this use case, are open source like Capacitor, and are recommended by the Capacitor developers. https://ionic.io/blog/introducing-the-ionic-end-to-end-testi...
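For context, a minimal WebdriverIO configuration for driving a Capacitor Android build through Appium looks roughly like this. This is a sketch, not the article's setup; the emulator name and APK path are placeholders, though the capability keys are standard Appium ones:

```typescript
// wdio.conf.ts (sketch): point WebdriverIO at a local Appium server and a
// Capacitor Android debug build. Device name and app path are placeholders.
export const config = {
  hostname: 'localhost',
  port: 4723, // default Appium server port
  specs: ['./test/specs/**/*.ts'],
  capabilities: [{
    platformName: 'Android',
    'appium:automationName': 'UiAutomator2',
    'appium:deviceName': 'Pixel_7_API_34', // placeholder emulator name
    'appium:app': './android/app/build/outputs/apk/debug/app-debug.apk',
    'appium:autoGrantPermissions': true,
  }],
};
```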
mrbombastic 20 hours ago
Also https://maestro.dev/ is pretty good these days
maxbeech 23 hours ago
the worktree discipline failure is the most interesting part of this post to me. when claude is interactive, "cd into the wrong repo" is catchable. when it's running unattended on a schedule, you find out in the morning.

the abstraction is right - isolated worktree, scoped task, commit only what belongs. the failure is enforcement. git worktrees don't prevent a process from running `cd ../main-repo`. that requires something external imposing the boundary, not relying on the agent to respect it.
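one way to impose that boundary externally is to gate every path the agent touches through a check it can't bypass. a sketch, assuming a tool-call layer you control sits between the agent and the filesystem (the function name and paths here are hypothetical):

```typescript
import * as path from 'node:path';

// Return true only if `requested` resolves inside the worktree root.
// Run this check in the harness, outside the agent's control, before
// honoring any file-write or chdir the agent asks for.
function withinWorktree(requested: string, worktree: string): boolean {
  const root = path.resolve(worktree);
  const target = path.resolve(worktree, requested);
  return target === root || target.startsWith(root + path.sep);
}

console.log(withinWorktree('src/app.ts', '/tmp/qa-sweep'));         // inside
console.log(withinWorktree('../main-repo/README.md', '/tmp/qa-sweep')); // escape attempt
```

the same idea scales up to mounting only the worktree into a container, so `cd ../main-repo` has nothing to land on.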

what you've built (the 8:47 sweep) is a narrow-scope autonomous job: well-defined inputs, deterministic outputs, bounded time. these work well because the scope is clear enough that deviation is obvious. the harder category is "fix any failing tests" - that task requires judgment about what's in scope, and judgment is exactly where worktree escapes happen.

i've been working on tooling for scheduling this kind of claude work (openhelm.ai) and the isolation problem is front and center. separate working directories per run, no write access to the main repo unless that's the explicit task. your experience here is exactly the failure mode that design is trying to prevent.

cmeiklejohn 22 hours ago
yeah, it's curious. I sometimes ask it why it ignored what is explicitly in its memory and all it can do is apologize. I ask -- I'm using Claude with a 1M context, you have an explicit memory -- why do you ignore it? And the answer I get is "I don't know, I just didn't follow the instructions."
seba_dos1 19 hours ago
Genuine question - what else did you expect?
fragmede 18 hours ago
For it to follow the instructions I gave it. Call me naive and stupid for thinking the 1M context window on the brand new model would actually, y'know, work.
quesera 16 hours ago
That's a bit anthropomorphic though.

When LLMs become able to reflectively examine their own premises and weight paths, they will exceed the self-awareness of ordinary humans.

hgoel 10 hours ago
Just dealt with this last night with Claude repeatedly risking a full system crash by failing to ensure that the previous training run of a model ended before starting the next one.

It's a pretty strange issue, makes me feel like the 1M context model was actually a downgrade, but it's probably something weird about the state of its memory document. I wasn't even very deep into the context.

Natfan 17 hours ago
why would a further chance of context pollution be a good thing? i feel like it's easier for data to get lost in a larger context
grey-area 14 hours ago
It doesn’t reason or explicitly follow instructions, it generates plausible text given a context.
devmor 24 hours ago
Reading through this reminds me of how bot farms will regularly consist of stripped down phones that are essentially just the mainboard hooked up to a controller that simulates the externals.

After struggling to reverse engineer the mobile apps for smart home devices, I’ve considered trying to set something like this up for a single device.

darepublic 8 hours ago
I'm sorry, but just because you got the automation working doesn't mean you're getting meaningful QA from Claude analyzing your screenshots.