Lowering the skills bar needed to reverse engineer at this level could have its own AI-related implications.
It was able to decompile a React Native app (the Tesla Android app) and fully trace from "How does X UI display?" down to a network call with a payload for me to intercept.
Granted, it did it by splitting the binary into a billion txt files, each one a single function, and then rg-ing through it, but it worked.
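Roughly, a sketch of that split-then-grep workflow, assuming you already have a textual decompilation dump where each function starts with a recognizable header (the header pattern, file names, and search terms here are all invented for illustration):

```python
import pathlib
import re
import subprocess

# Assumed: a textual decompilation dump where each function begins with "function <name>(".
dump = pathlib.Path("decompiled.txt").read_text(errors="replace")
out = pathlib.Path("funcs")
out.mkdir(exist_ok=True)

chunks = re.split(r"(?m)^(?=function )", dump)
for i, chunk in enumerate(chunks):
    m = re.match(r"function (\w+)", chunk)
    name = m.group(1) if m else f"chunk{i}"
    (out / f"{i:06d}_{name}.txt").write_text(chunk)

# Then ripgrep the per-function files for whatever you're tracing (UI strings, fetch calls, URLs).
subprocess.run(["rg", "-l", r"fetch\(|https://", str(out)])
```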
Levels up the way I think about things
Sorry, I know it's horrible, but I couldn't resist.
Almost out of a Phillip K Dick novel
China has a recent history of spying on personal data. https://www.telegraph.co.uk/news/2026/01/26/china-hacked-dow...
You know, now that I'm thinking about it, I'm beginning to wonder if poor data privacy could have some negative effects.
For example a while back I wanted to map out my sleep cycle and I found a tool that charts your browser history over a 24 hour period, and it mapped almost perfectly to my sleep / wake periods.
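The charting side of that is almost trivial, for what it's worth: bin visit timestamps by hour and the empty band is your sleep window. A toy sketch against Firefox's places.sqlite (table and column names are as in stock Firefox; other browsers differ):

```python
import sqlite3
from collections import Counter
from datetime import datetime

# Firefox stores visit timestamps in places.sqlite as microseconds since the epoch.
# Copy the file out of your profile directory first, since Firefox keeps it locked.
conn = sqlite3.connect("places.sqlite")
rows = conn.execute("SELECT visit_date FROM moz_historyvisits").fetchall()

hours = Counter(datetime.fromtimestamp(ts / 1_000_000).hour for (ts,) in rows)
for h in range(24):
    print(f"{h:02d}:00 {'#' * min(hours.get(h, 0), 60)}")
```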
But keep in mind that other less obvious data sources can often lead to similar issues. For example phone accelerometer data can be used to precisely locate someone driving in a car in a city by comparing it with a street map.
In the context of the military even just inferring a comprehensive map of which people are on which shift and when they change might be considered a threat.
Very, but there are already tons of them at many different price, quality, and openness levels. A lot of manufacturers have their own protocols; there are also quasi-standards like Lab Streaming Layer for connecting to a hodgepodge of devices.
This particular data?
Probably not so useful. While it’s easy to get something out of an EEG set, it takes some work to get good-quality data that’s not riddled with noise (mains hum, muscle artifacts, blinks, etc.). Plus, brain waves on their own aren’t particularly interesting; it’s seeing how they change in response to some external or internal event that tells us about the brain.
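To give a sense of what "some work" means in practice, the usual first pass is a notch filter at the mains frequency plus a bandpass before you even look at the bands. A rough scipy sketch, with the sampling rate and band edges being typical values rather than anything device-specific:

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

fs = 250.0                             # assumed sampling rate in Hz
raw = np.random.randn(int(fs * 60))    # stand-in for one minute of single-channel EEG

# 1) Notch out mains hum (50 Hz in Europe, 60 Hz in North America).
b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
x = filtfilt(b_notch, a_notch, raw)

# 2) Bandpass roughly 0.5-40 Hz: drops DC drift and most muscle (EMG) energy.
b_bp, a_bp = butter(4, [0.5, 40.0], btype="bandpass", fs=fs)
x = filtfilt(b_bp, a_bp, x)
# Blink/eye-movement artifacts still need something smarter (ICA, regression against EOG, etc.).
```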
Google for a list of all the exceptions to HIPAA. There are a lot of things that _seem_ like they should be covered by HIPAA but are not...
Baby's gotta get some cash somewhere.
Like, don't actually do it, but I feel like there's inspiration for a sci-fi novel or short story there.
What's the real risk profile? Robbers can see you are asleep instead of waiting until you aren't home?
I have not implemented MQTT automations myself, but is there a way to encrypt them? That could be a nice-to-have.
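For what it's worth, plain MQTT is cleartext, but most brokers support TLS on port 8883 and the common client libraries expose it directly. A minimal sketch with Python's paho-mqtt (broker hostname, topic, and credentials are placeholders):

```python
import paho.mqtt.client as mqtt

# paho-mqtt 1.x constructor; 2.x additionally wants a CallbackAPIVersion argument.
client = mqtt.Client(client_id="sleep-mask-bridge")

# Verify the broker's certificate against the system CA bundle and encrypt the session.
client.tls_set(ca_certs="/etc/ssl/certs/ca-certificates.crt")
client.username_pw_set("per-device-user", "per-device-password")  # not a shared credential

client.connect("broker.example.com", 8883)   # 8883 is the conventional MQTT-over-TLS port
client.publish("home/sleep/eeg", payload=b"...", qos=1)
client.disconnect()
```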
I believe some good came from last month's decision to be more open about what apps and data can say without going through huge regulatory processes (though because we apply auditory stimulation, this doesn't apply to us). However, there should at least be regulatory requirements for data security.
We've developed all of our algorithms and processing to happen on-device, which is required anyway due to the latency that would result from Bluetooth connections, but even the data sent to the server is all encrypted. I'd think that would be the basics. How do you trust a company with monitoring, and apparently providing stimulation, if they don't take these simple steps?
I guess that’s not a huge problem, though, since all users are presumably at least anonymous.
Author or others who know: did you perform this on Linux? I imagine it lacks the tooling challenges I had with BLE on macOS.
"The ZZZ mask is an intelligent sleep mask — it allows you to sleep less while sleeping deeper. That’s the premise — but really it is a paradigm breaking computer that allows full automation and control over the sleep process, including access to dreamtime."
or if this is another sci-fi variation on the same theme, with some dev-like embellishments.
They all come with Bluetooth certified logos, as well.
The ones that don't reuse everything cost like $120, not $15.
https://www.kickstarter.com/projects/selepu/dreampilot-ai-gu...
Claude could not tell which one
It's working as intended
I have deployed open MQTT to the world for quick prototypes on non-personal (and non-healthcare) data. Once, my cloud provider told me to stop because they didn't like that it could be used for relay DDoS attacks.
I would not trust the sleep mask company even if they somehow manage to have some authentication and authorisation on their MQTT.
(Also, "We're not happy until you're not happy.")
One of the best opening paragraphs in a SF novel that I’ve ever read.
Oh, wait.
https://www.jeffgeerling.com/blog/2025/i-wont-connect-my-dis...
Amazing.
Also discovered during reverse-engineering of the devices’ communications protocols.
IoT device security is an utterly shambolic mess.
> I stumbled upon these vulnerabilities on one of the coldest days of this winter in Vancouver. An attacker using them could have disabled all Mysa-connected heaters in the America/Vancouver timezone in the middle of the night. That would include the heat in the room where my 7-month-old son sleeps.
I find it difficult to believe that a sleep mask exists with the features listed: "EEG brain monitoring, electrical muscle stimulation around the eyes, vibration, heating, audio." while also being something you can strap to your face and comfortably sleep in, with battery capacity sufficient for several hours of sleep.
I also wonder how Claude probed Bluetooth. Does Claude have access to a Bluetooth interface? Why? Perhaps it wrote a secondary program and then ran that, but the article describes it as Claude probing directly.
I'm also skeptical of Claude's ability to produce an accurate reverse-engineered Bluetooth protocol. This is at least a little more of an LLM-appropriate task, but I suspect there was a lot of chaff produced that the article writer separated from the wheat.
If any of this happened at all. No hardware mentioned, no company, no actual protocol description published, no library provided.
It makes a nice, vague, futuristic cyberpunk story, but there's no meat on those bones.
When I complained that the results were boring, it installed a Python package called 'bleak', found a set of LED lights (which I assumed are my daughter's) and tried to control them. It said the signal was too weak and got me to move around the house, whereupon it connected to them, figured out the protocol, and actually changed the lights while I was sat on her bed - where I am right now. Now I have a new party trick when she gets home! I had no idea they were Bluetooth-controlled, nor that they clearly have no security at all.
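For anyone curious what that looks like under the hood, bleak makes the scan-and-poke loop very short. A rough sketch of the same idea (the advertised name, characteristic UUID, and payload bytes are invented; every cheap LED controller uses its own):

```python
import asyncio
from bleak import BleakScanner, BleakClient

# Hypothetical write characteristic for a no-auth LED controller.
LED_CHAR_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"

async def main():
    devices = await BleakScanner.discover(timeout=10.0)
    for d in devices:
        print(d.address, d.name)

    # Assume the lights advertise a name starting with "LED".
    target = next(d for d in devices if (d.name or "").startswith("LED"))
    async with BleakClient(target.address) as client:
        # Enumerate GATT services/characteristics to figure out what is writable.
        for service in client.services:
            for char in service.characteristics:
                print(service.uuid, char.uuid, char.properties)
        # Write a raw command frame; many of these devices require no pairing or auth at all.
        await client.write_gatt_char(LED_CHAR_UUID, bytes([0x56, 0x00, 0xFF, 0x00, 0x00, 0xF0, 0xAA]))

asyncio.run(main())
```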
As for the reverse engineering, the author claims that all it took was dumping the strings from the Dart binary to see what was being sent to the Bluetooth device. It's plausible, and I would give them the benefit of the doubt here.
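That claim is easy enough to sanity-check yourself: Flutter keeps a lot of literal strings in libapp.so, so even a crude printable-ASCII scan tends to surface characteristic UUIDs and command names. A minimal stdlib sketch (you'd point it at the libapp.so you pulled out of the APK yourself):

```python
import re
import sys

# A crude reimplementation of `strings`: pull printable-ASCII runs out of a binary.
def ascii_strings(path, min_len=6):
    data = open(path, "rb").read()
    for m in re.finditer(rb"[\x20-\x7e]{%d,}" % min_len, data):
        yield m.group().decode("ascii")

# 128-bit GATT UUIDs are a good tell for BLE-related code paths.
uuid_re = re.compile(r"[0-9a-fA-F]{8}-([0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12}")

for s in ascii_strings(sys.argv[1]):   # e.g. path/to/libapp.so
    if uuid_re.search(s) or "characteristic" in s.lower() or "mqtt" in s.lower():
        print(s)
```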
Yesterday I watched it try and work around some filesystem permission restrictions, it tried a lot of things I would never have thought of, and it was eventually successful. I was kinda goading it though.
Found that in seconds. EEG, electrical stimulation, heat, audio, etc. Claims a 20-hour battery.
As to the Claude interactions, like others I am suspicious; it seems overly idealized and simplified. Claude can't search for BT devices, but you could hook it up with an MCP that does that. You can hook it up with a decompiler MCP. And on and on. But it's more involved than this story details.
So yeah, a product exists that claims to be a sleep mask with these features. Maybe someone could even sleep while wearing that thing, as long as they sleep on their back and don't move around too much. I remain skeptical that it actually does the things it claims and has the battery life it claims. This is kickstarter after all. Regardless, this would qualify as the device in question for the article. Or at least inspiration for it.
Without evidence such as Wireshark logs, programs, or protocol documentation, I'm not convinced that any of this actually _happened_.
The lack of detail makes me suspect the truth of most of the story.
These blog posts now making the rounds on HN are the usual reverse engineering stories, but made a lot more compelling simply because they involve using AI.
Never mind that the AI part isn't doing any heavy lifting and is probably just as tedious as not using AI in the first place. I am confused why the author mentions it so prominently. Past authors would not have been so dramatic; they would just have waved their hands that there was some trial and error before figuring out how the app is built. The focus would have been on the lack of auth and the funny stuff they did before reporting it to the devs.
Then there's hardening your peripheral and central device/app against the kinds of spoofing attacks that are described in this blog post.
If your peripheral and central device can securely [0] store key material, then (in addition to the standard security features that come with the Bluetooth protocol) one may implement mutual authentication between the central and peripheral devices and, optionally, encryption of the data that is transmitted across that connection.
Then, as long as your peripheral and central devices are programmed to only ever respond when presented with signatures that can be verified by a trusted public key, the spoofing and probing demonstrated here simply won't work (unless somebody reverse engineers the app running on the central device to change its behaviour after the signature verification has been performed).
To protect against that, you'd have to introduce server-mediated authorisation. On Android, that would require things like the Play Integrity API and app signatures. Then, if the server verifies that the instance of the app running on the central device is unmodified, it can issue a token that the central device can send to the peripheral for verification in addition to the signatures from the previous step.
Alternatively, you could also have the server generate the actual command frames that the central device sends to the peripheral. The server would provide the raw command frame and the command frame signed with its own key, which can be verified by the peripheral.
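To make that last variant concrete, here's a minimal sketch of server-signed command frames using Ed25519 (key handling, frame layout, and function names are all invented for illustration; the only point is that the peripheral holds the server's public key and ignores anything it can't verify):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

# --- server side: build and sign a command frame ---
server_key = Ed25519PrivateKey.generate()      # in practice, loaded from an HSM/KMS
frame = bytes([0x01, 0x2C])                    # hypothetical "set stimulation level" frame
signature = server_key.sign(frame)

# The server's public key would be baked into the peripheral's firmware at manufacture.
trusted_pub = server_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw
)

# --- peripheral side: verify before acting; the phone app only relays frame + signature ---
def handle_frame(frame: bytes, signature: bytes, trusted_pub: bytes) -> bool:
    pub = Ed25519PublicKey.from_public_bytes(trusted_pub)
    try:
        pub.verify(signature, frame)
    except InvalidSignature:
        return False   # spoofed or tampered frame: do nothing
    return True        # a real design would also include a counter/nonce to stop replays

assert handle_frame(frame, signature, trusted_pub)
```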
I guess I got a bit carried away here. Certainly, not every peripheral needs that level of security. But, into which category this device falls, I'm not sure. On the one hand, it's not a security device, like an electronic door lock. And on the other hand, it's a very personal peripheral with some unusual capabilities like the electrical muscle stimulation gizmo and the room occupancy sensor.
[0]: Like with the Android KeyStore and whichever HSMs are used in microcontrollers, so that keys can't be extracted by just dumping strings from a binary.
Are beta waves a sign that my mind is racing and wide awake, or are they the reason?
- US20030171688A1: Mind controller - Induces alpha/theta brainwaves via audio messages.
- US20070084473A1: Brain wave entrainment in sound - Modulates music for desired brain states.
- US11309858: Inducing brainwaves by sound - Adjusts volume gains for specific frequencies.
- US5036858A: Changing brain wave frequency - Generates binaural beats to alter waves.
- US3951134: Remotely altering brain waves - Monitors and modifies via RF/EM waves.
- US5306228A: Brain wave synchronizer - Uses light/sound for entrainment.
- US6587729: RF hearing effect - Transmits speech via microwaves to brain.
- US6488617: Desired brain state - Electromagnetic pulses for mind states.
- US4858612: Microwave hearing simulation - Induces sounds in auditory cortex.
- US6930235B2: EM to sound waves - Relates waves for brain influence.
- EP0747080A1: Brain wave inducing - Sine waves via speaker for alpha waves.
- US5954629A: Brain wave system - Feedback light stimulation.
- US5954630A: FM theta sound - Superposes low frequencies for theta induction.
- US5159703A: Silent subliminal - Ultrasonic carriers for brain inducement.
- US6017302A: Acoustic manipulation - Subaudio pulses for nervous system control.
After $150 in tokens, inflating GPU prices by 10%, spending $550 of VC money, and increasing the earth's temperature by 0.2 °C, Claude did what a 16-year-old who had read two blog posts about reverse engineering would do.
Coward. The only way to challenge this garbage is "Name and Shame". Light a fire under their asses. That fire can encourage them to do right, and as a warning to all other companies.
My guess is this is Luuna https://www.kickstarter.com/projects/flowtimebraintag/luuna
Perhaps the author is not a coward, but is giving the company time to respond and commit to a fix for the benefit of other owners who could suffer harm.
If that's the case then they should have deferred this whole blog post.
Identify the kickstarter product talked around in this blog post: (link)
To think some blackhat hasn't already done that is frankly laughable. What I did was the lowest of low bars these days.
We often treat doxxing the same way, prohibiting posting of easily discovered information.
If we applied the same analogy to an E. coli contamination of food, your recommendation amounts to: "If we say the company name, the company would be shamed and lose money, and people might abuse the food".
People need to know this device is NOT SAFE on your network, paired to your phone, or anything. And that requires direct and public notification.
What makes you think this is the one?
I said it was a guess, not a certainty.
It is also technically a user failure to have purchased a connected device in the first place. Does the device require a closed-source proprietary app? Closed-source non-replaceable OS? Do not buy it.
I don't want a few irrationally paranoid people bottlenecking progress and access to the latest technology and innovation.
I'm happy to broadcast my brainwaves on an open YouTube channel for the ZERO people who are interested in it.
Paranoid? Is there not enough evidence posted almost daily on HN that tech companies are constantly spying on their users through computers, Internet-of-Shit devices, phones, cars and even washing machines? You might not care about the brainwave data specifically, but there is bound to be information on your devices that you expect remains private.
Things have become so bad that I now refuse to use computers that don't run a DIY Linux distro like Arch that allows users to decide what goes into their system. My phone runs GrapheneOS because Google and Apple can't be trusted. I self host email and other "cloud" services for the same reason.
It’s kinda like “qualified investors” - you want to make sure people who are willing to do something extremely stupid can afford it and acknowledge their stupidity.
We don’t need regulation to protect those that can afford to buy protection: we need it for those who can’t.
It’s quite literally why the internet is so insecure: at many points all along the way, “hey, should we design and architect for security?” is/was met with “no, we have people to impress and careers to advance with parlor tricks to secure more funding; besides, security is hard and we don’t actually know what we are doing, so toe the line or you’ll be removed.”
You have no evidence of that, and it seems very unlikely unless you're intentionally wildly assuming the craziest possible scenario, as if you're paranoid or insane.
You do realize the user can see the tool calls running and check their real, actual output, during this process, right?
You do realize that there are several sleep masks on Kickstarter that actually have these features, right?
The user has also shared the Claude transcript:
https://gist.github.com/aimihat/a206289b356cac88e2810654adf0...
For a period of time it was popular for the industrial designers I knew to try to launch their own Kickstarters. Their belief was that engineering was a commodity that they could hire out to the lowest bidder after they got the money. The product design and marketing (their specialty) was the real value. All of their projects either failed or cost them more money than they brought in because engineering was harder than they thought.
I think we’re in for another round of this now that LLMs give the impression that the software and firmware parts are basically free. All of those project ideas people had previously that were shelved because software is hard are getting another look from people who think they’re just going to prompt Claude until the product looks like it works.
Not to say there haven't also been very good coders, who weren't outsourcing anything, who still got out over their skis with stuff they promised on Kickstarter. I worked on Star Citizen and saw the lure of inflating project scope, responding to the vox populi, go to someone's head in real time. They could still, at some point, conceivably have done what they had promised if they could just have resisted promising more stuff.
I find it odd that industrial designers wouldn't have a firmer grasp on what was involved in shipping a product than coders do, since code seems much more prone to mission creep than a physical product would be. But I totally agree that if you're used to outsourcing the build phase of whatever you do, AI is going to be the ultimate mirage.
https://xcancel.com/beneater/status/2012988790709928305
LLMs are just surfacing the fact that assessing and managing risk is an acquired, difficult-to-learn skill. Most people don't know what they don't know and fail to think about what might happen if they do something (correctly or otherwise) before they do it, let alone what they'd do if it goes wrong.
Yes, and that's okay because the classroom is a learning environment. However, LLMs don't learn; a model that releases the magic smoke in this session will be happy to release it all over again next time.
> LLMs are just surfacing the fact that assessing and managing risk is an acquired, difficult-to-learn skill.
Which makes the problem worse, not better. If risk management is a difficult skill, then that means we can't extrapolate from 'easy' demonstrations of said skill to argue that an LLM is generally safe for more sensitive tasks.
Overall, it seems like LLMs have a long tail of failures. Even while their mean or median performance is good, they seem exponentially more likely than a similarly competent human to advise something like `rm -rf /`. This is a deeply unintuitive behaviour, precisely because our 'human-like' intuition is engaged with respect to the average/median skill.
And to be fair to those people, coming to topics with a research mindset is genuinely hard and time consuming. So I can’t actually blame people for being lazy.
All LLMs do is provide an even easier way to “research”. But it’s not like people were disbelieving random Facebook posts, online scams, and word-of-mouth before LLMs.
The problem with centralisation isn’t that it gobbles up data. It’s that it allows those weights to be dictated by a small few who might choose to skew the model more favourably to the messaging they want to promote.
And this is a genuine concern. But it’s also not a new problem either. We already have that problem with news broadcasters, newspaper publications, social media ethics teams, and so on and so forth.
The new problem LLMs bring to human interaction isn’t any of the issues described above. It’s with LLMs replacing human contact in situations where you need something with a conscience to step in.
For example, conversations leading to AI promoting negative thoughts from people with mental health problems because the chat history starts to overwhelm the context window, resulting in the system prompt doing a poorer job of weighting the conversation away from dangerous topics like suicide.
This isn’t to say that the points which you’ve addressed aren’t real problems that exist. They definitely do exist. But they’ve also always existed, even before GPT was invented. We’ve just never properly addressed those problems because:
either there’s no incentive to. If you are powerful enough to control the narrative then why would you use that power to turn the narrative against you?
…or there simply isn’t a good way of solving that problem. eg I might hate stupid conspiracy theories, but censoring research is a much worse alternative. So we just have to allow nutters to share their dumb ideas in the hope that enough legitimate research is published, and enough people are sensible enough to read it, that the nutters don’t have any meaningful impact on society.
The AI is being sold as an expert, not a student. These are categorically different things.
The mistake in the post is one that can be avoided by taking a single class at a community college. No PhD required, not even a B.S., not even an electricians certificate.
So I don't get your point. You're comparing a person in a learning environment to the equivalent of a person claiming to have a PhD in electrical engineering. A student letting the magic smoke escape from a basic circuit is a learning experience (a memorable one with high impact), especially when done in a learning environment where an expert can ensure the more dangerous mistakes are unlikely or impossible. But the same action from a PhD-educated engineer would make you reasonably question their qualifications. Yes, humans make mistakes, but if you follow the AI's instructions and light things on fire, you get sued. If you follow the engineer's instructions and set things on fire, that engineer gets fired and likely loses their license.
So what is your point?
https://www.wpr.org/news/judge-sanctions-kenosha-county-da-a...
AI is indeed being understood to be an expert that replaces human judgement, and people are being hurt because of it.
Some recent examples:
* foreign languages ("explain the difference between these two words that have the same English translation", "here's a photo of a mock German exam paper and here is my written answer - mark it & show how I could have done better")
* domains that I'm familiar with but might not know the exact commands off the top of my head (troubleshooting some ARP weirdness across a bunch of OSX/Linux/Windows boxes on an Omada network)
* learning basic skills in a new domain ("I'm building this thing out of 4mm mild steel - how do I go about choosing the right type of threading tap?", "what's the difference between Type B and Type F RCCB?")
Many of these can be easily answered with a web search, but the ability to ask follow-up questions has been a game changer.
I'd love to hear from other addicts - are there areas where LLMs have really accelerated your learning?
Learned a lot about how it works, to the point I’m confident that I can go the DIY route and spend my money on AliExpress buying components instead.
Why not ask a professional solar panel installer instead? I live in an apartment; of course they would say it’s not possible to place a solar panel on my terrace. I don’t believe in things not being possible.
But I had two semesters of electronics/robotics in my CS undergrad, and I know not to trust the LLM blindly and to verify.
Basically, if you can't articulate how your typical conspiracy theorist's "research" differs from real research, then you're at greater risk. It's worth thinking about that question, as they do a lot of reading, thinking, and looking things up. It's more subtle, right?
FWIW, a thing I find LLMs really useful for is learning the vernacular of fields I'm unfamiliar or less familiar with. It is especially helpful when searches fail due to overloaded words (and, let's be honest, Google's self-elected lobotomy), but it is more a launching point. This still has the conspiracy problem, though, as it is easy to self-reinforce a belief and never consider the alternatives. Follow-up questions are nice and can really help with sifting through large amounts of information, but they certainly have a tendency to narrow the view. I think this makes learning feel faster and more direct, but having also taught (at the university level), I think it is important to learn all the boring stuff too. That stuff may not be important "now", but in a well-organized course it is going to be important "soon", and "now" is the best time to learn it. No different from how musicians need to practice boring scales and patterns, athletes need to do drills and not just learn by competing (or simulated competitions), or how children learn to write by boringly writing shapes over and over. I find the LLMs like to avoid the boring parts.
AI is a tool that can accelerate learning, or severely inhibit it. I do think the tooling is going to continue to make it easier and easier to get good output without knowing what you're doing, though.
I'll give an example. I tell people I tip by: round the decimal, divide by 10, multiply by 2 (e.g. $43.70 rounds to 44, divided by 10 is 4.4, times 2 is an $8.80 tip, roughly 20%). Nearly every time I say that, people tell me it is too difficult. This includes people with PhD STEM educations...
You're biased because you're not considering that, by definition, the student is inexperienced. Unknown unknowns. Tons of people don't know very basic things (why would they?), like circuits with capacitors being dangerous even when the power is off.
Why are you defending the LLM? Would you be as nice to a person? I'd expect not, because these threads tend to point out a person's idiocy. I'm not sure why we give greater leeway to the machine. I'm not sure why we forgive it as if it were a student learning, while someone posting similar instructions on a blog gets (rightfully) thrashed. That blog writer is almost never claiming PhD expertise.
I agree that LLMs can greatly aid in learning. But I also think they can greatly hinder learning. I'm not sure why anyone thinks it's any different than when people got access to the internet. We gave people access to all the information in the world and people "do their own research" and end up making egregious errors because they don't know how to research (naively think it's "searching for information"), what questions to ask, or how to interrogate data (and much more). Instead we've ended up with lots of conspiratorial thinking. Now a sycophantic search engine is going to fix that? I'm unconvinced. Mostly because we can observe the result.
You pinpointed a major problem with education, indeed. Personally, I think three crucial courses should be taught in school to mitigate that: 1) rational thinking, 2) learning how to learn, 3) learning how to do research.
[0] https://enlightenedidiot.net/random/feynman-on-brazilian-edu...
https://youtu.be/dSwzau2_KF8?t=1108
A friend who studied fish production did recommend not eating salmon, though, and eating trout instead (ørret in Norwegian). Based on the scientific evidence, the difference is pretty small (15% of fish not surviving for salmon vs 12% for trout). But rainbow trout does have more DHA per kg.
The operator is still a factor.
The LLM got it to “working” state, but the people operating it didn’t understand what it was doing. They just prompt until it looks like it works and then ship it.
The parents are saying they'd rather vibe code themselves than trust an unproven engineering firm that does(n't) vibe code.
THAT makes sense. Engineering was never cheap nor non-differentiating when normalized by man-hours, only when normalized by USD. If a large enough number of people get the same FALSE impression that software and firmware are now basically free, non-differentiating commodities, then there will be tons of spectacular failures in the software world in the coming years. There have already been early previews of those here.
We’re not taking about the parent commenter, we’re talking about unskilled Kickstarter operators making decisions. Not a skilled programmer using an LLM.
You could cut the statement short here, and it would still be a reasonable position to take these days.
LLMs are still complex, sharp tools - despite their simple appearance and the protestations of both their biggest fans and haters alike, the dominating factor in the effectiveness of an LLM tool on a problem is still whether or not you're holding it wrong.
Paraphrasing: LLMs are great (bad) tools for the right (wrong) job...
in the right hands,
at the right time,
in the right place...
That hasn't, universally, been my experience. Sometimes the code is fine. Sometimes it is functional, but organized poorly, or does things in a very unusual way that is hard to understand. And sometimes it produces code that might work sometimes but misses important edge cases and isn't robust at all, or does things in an incredibly slow way.
> They have no problem writing tedious guards against edge cases that humans brush off.
The flip side of that is that instead of coming up with a good design that doesn't have as many edge cases, it will write verbose code that handles many different cases in similar, but not quite the same ways.
> They also keep comments up to date and obsess over tests.
Sure but they will often make comments or tests that aren't actually useful, or modify tests to succeed instead of fixing the code.
One significant danger of LLMs is that the quality of the output is highly variable and unpredictable.
That's ok, if you have someone knowledgeable reviewing and correcting it. But if you blindly trust it, because it produced decent results a few times, you'll probably be sorry.
I have a hard time getting them to write small and flexible functions, even with explicit instructions about how a specific routine should be done. (This is really easy to see in bash scripts, as they seem to avoid using functions; but so do people, and most people suck at bash.) IME they're fixated on the end goal and do not grasp the larger context, which is often implicit, though I still find difficulty when I'm highly explicit; at which point it's usually faster to write it myself.
It also makes me question context. Are humans not doing this because they don't think about it or because we've been training people to ignore things? How often do we hear "I just care that it works?" I've only heard that phrase from those that also love to talk about minimum viable products because... frankly, who is not concerned if it works? That's always been a disagreement about what is sufficient. Only very junior people believe in perfection. It's why we have sayings like "there's no solution more permanent than a temporary fix that works". It's the same people who believe tests are proof of correctness rather than a bound on correctness. The same people who read that last sentence and think I'm suggesting to not write tests or believe tests are useless.
I'd be concerned with the LLM operator quite a bit because of this. Subtle things are important when instructing LLMs. Subtle things in the prompts can wildly change the output
In my experience that is all they do, and you constantly have to fight them to get the quality up, and then fight again to prevent regressions on every change.
My AGENTS.md is filled with specific lines to counter all of them that come up.
I’m much more worried about the reliability of software produced by LLMs.
It gave up, removed the code it had written that directly accessed the correct property, and replaced it with a new function that did a BFS through every single field in the API response object, applying a "looksLikeHttpsUrl" regex and hoping that the first valid https:// URL it found would be the correct key to use.
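For anyone who hasn't seen this failure mode, the shape of that fallback was roughly the following (my paraphrase, rendered in Python rather than the original codebase's language; "looksLikeHttpsUrl" is the only name taken from the anecdote):

```python
import re
from collections import deque

looks_like_https_url = re.compile(r"^https://\S+$")

# Breadth-first walk over every field of the response, returning the first value that
# looks like an https URL, instead of just reading the one documented property.
def find_url(response):
    queue = deque([response])
    while queue:
        node = queue.popleft()
        if isinstance(node, dict):
            queue.extend(node.values())
        elif isinstance(node, (list, tuple)):
            queue.extend(node)
        elif isinstance(node, str) and looks_like_https_url.match(node):
            return node
    return None
```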
On the contrary, the shift from pretraining driving most gains to RL driving most gains is pressuring these models to resort to new hacks and shortcuts that are increasingly novel and disturbing!
I’ve been using Opus 4.6 and GPT-Codex-5.3 daily and I see plenty of hacks and problems all day long.
I think this is missing the point. The code in this product might be robust in the sense that it follows documentation and does things without hacks, but the things it’s doing are a mismatch for what is needed in the situation.
It might be perfectly structured code, but it uses hardcoded shared credentials.
A skilled operator could have directed it to do the right things and implement something secure, but an unskilled operator doesn’t even know how to specify the right requirements.
So, will they? Probably. Can you trust the kind of LLM that you would use to do a better job than the cheapest firm? Absolutely.