So it won't detect my XReals. I purposefully bought my XReal now because it feels like they might be one of the last models released without cameras.
But theoretically I could have the XReal Eye attachment on my glasses, and could be taking video through that. I don't, but the XReal user next to me might.
Of course the USB wire hanging from my ear probably makes me look suspicious enough already that the warning probably isn't necessary either way...
> for identifying creeps nearby
> I recently had to interact with an idiot wearing meta glasses.
> Would renaming to "Nearby Glassholes" be acceptable as a PR?
> If you're wearing these glasses and recording people in public, you're asking for a sweet punch in the face.
But if I was still commuting by public transit I would have liked to use them there.
It would be really annoying to avoid using my display glasses in a place highly suited for them just because of worries about creeping out people who are creeped out by a thing my glasses are incapable of doing.
Weird, I'd assume the opposite. The meme is "tech enthusiasts vs tech workers" implying there are people who like tech and people who understand it enough to distrust it. This tech-crowd is more aligned with the latter.
It's the old rag of "tech is the tool, ethics are the user" in an era where people who are unethical have become loud and proud about it and the tech is recursive reinforcement power tools on steroids.
People don't have issues with smartphones, smartwatches, or any other "smart" stuff that isn't spyware.
The issue isn't smarts, it's supporting Stasi-as-a-service.
Your phone and watch are spyware mostly just spying on you. Sure they could be used to spy on others but the directness of an always on smart glass camera lens in one’s face is a little more jarring.
I am surprised there isn't an existing BT/BLE fingerprint table that takes more into account than just what is provided. I would assume each device, or at least each chipset, has subtle quirks that could be used to weed out some of the false positives.
The link in the README for the identifiers doesn't work because it's relative to the repo, so here it is below. I like that they did this; it's so much better than the OUI table for MAC addresses, because some companies (cough, Cisco) keep getting new ones.
https://bitbucket.org/bluetooth-SIG/public/src/main/assigned...
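For context, the company identifier lives in the advertisement's Manufacturer Specific Data field (AD type 0xFF), with the SIG company ID in the first two bytes, little-endian. A rough Python sketch of pulling it out of a raw advertising payload (the example payload and function name are illustrative, not this app's actual code):

```python
import struct

def extract_company_ids(adv_payload):
    """Walk the AD structures ([length][type][data...]) in a raw BLE
    advertising payload and return every company identifier found in
    Manufacturer Specific Data fields (AD type 0xFF)."""
    ids = []
    i = 0
    while i < len(adv_payload):
        length = adv_payload[i]
        if length == 0:  # zero-length structure terminates the payload
            break
        ad_type = adv_payload[i + 1]
        data = adv_payload[i + 2 : i + 1 + length]
        if ad_type == 0xFF and len(data) >= 2:
            # Company ID is the first two data bytes, little-endian.
            ids.append(struct.unpack_from("<H", data)[0])
        i += 1 + length
    return ids

# Example payload: a Flags structure followed by manufacturer data
# carrying company ID 0x004C (Apple), encoded little-endian as 4C 00.
payload = bytes([0x02, 0x01, 0x06,
                 0x05, 0xFF, 0x4C, 0x00, 0x10, 0x05])
print(extract_company_ids(payload))  # [76]
```

This is also why a simple table keyed on the company ID produces false positives: every product from the same vendor advertises the same identifier.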
CCTV, self driving cars etc.. are (mostly) out of the first bucket, and phones are (mostly) out of the other bucket. Ring is a good contender and is also quite disliked.
I recently switched away from my usual brand when they started shipping AI-enabled glasses. That was my small way of opting out.
there's always room for another software arms race. the personal area network is not ready and the evolution will be painful and good for someone - us, or them, without regard for what those divisions are, it's going to hurt.
https://github.com/yjeanrenaud/yj_nearbyglasses/blob/main/LI...
But as someone who can really use the features for daily use - visual assistance (low vision), an always-worn set of speakers (no need to futz around with AirPods every time I want to listen to audio without looking like a dork)... I really can't wait for Android XR smart glasses (sans display).
Shame the language makes people intrinsically hate the former by associating it with the latter without even questioning it. The idea of smart glasses is cool, the implementations are not.
All major brands have a clear indicator for when they're recording.
Someone could block that indicator out, but someone could also just go to Amazon.com and select one of hundreds of available pinhole cameras or not-smart camera glasses.
These aren't enabling an ability that hasn't been available for decades. If anything, seeing someone with major-brand smart glasses makes it more obvious.
To their credit, smart glasses are an obvious signal for me to avoid. That doesn't make me appreciate them any more.
Hidden cameras have been a thing for a long time now. Stick one in a pair of glasses and give it a super short battery life and people freak out...
Now they're being billed as fashion accessories.
Sorry, but "normalize hidden cameras" isn't a movement I can get behind.
Every person holding a cellphone up in their hands could also be pointing a camera around at people, a camera with much higher fidelity, computing power, and one that can take much longer videos.
This is just panic about a new form factor. The same thing happened when cell phones came along, with the exact same talking points.
If anything, the primary utility of smart glasses is the wearable display, not camera. YMMV, of course.
But even machine vision-capable devices can do a lot of useful things without causing you any trouble, unless your issues are more of a religious concern rather than anything substantial.
Smart glasses sans camera would address my complaints (I take no issue with smartwatches, for instance), but that admittedly decreases the utility.
I think it's an issue of perceived benefit vs perceived risk. I see the utility in both technologies, but I assign significantly higher risk to smart glasses. I really struggle to imagine widespread abuse from dashcams.
Your desire to consent to being recorded in public places does not counteract my right to record everything I can perceive in public. Period.
do you really see a relation between the two, or are you just willfully 'buying an advertisement' by trying to shape a metaphor from the social qualms that you wish to rebroadcast to people?
in other words, no -- this isn't at all similar to the companies that steal media in order to train models only to complain about similar theft from other companies targeted towards them -- but I agree with the motivation, fuck em; they're crooks...
but don't weaken metaphors simply to advertise a social injustice. If you want to do that, don't hijack conversations abroad.
> Judge Carolyn Kuhl, who is presiding over the trial, ordered anyone in the courtroom wearing AI glasses to immediately remove them, noting that any use of facial recognition technology to identify the jurors was banned.
I am not a believer in Zuckerberg's idea of humanity's future.
It gets worse.
https://www.axios.com/2025/05/02/meta-zuckerberg-ai-bots-fri...
I don't know what Zuckerberg's idea of humanity's future is but I believe it's basically inevitable that most people will be wearing always on cameras on their face in the future. The same way they carry always on phones today.
The use cases will be too compelling. There have already been demos. Ask the AI watching over your shoulder anything about your past and present and have it act on it.
I'm sure, as a hater of that future, you don't believe it. For me, I'd pick 2040 as the latest date by which people wearing always-on cameras will be as common as smartphones were in 2010, growing at or faster than smartphones did, once they actually work and look stylish. I'm not saying I'll enjoy being watched by all of those cameras. I'm saying I don't believe I'll have a choice any more than I have a choice about people having smartphones today.
I'm repeating a comment of mine from another thread, but this is not true. Recording the audio of a conversation that you aren't party to and deriving biometrics from video without consent are both broad categories that are regulated, depending on the state you're doing the "public usage" in.
If it was also illegal to (for example) input a photo of someone non-consenting into any kind of AI model or post it to any other online service? Then I don't see much problem.
People normalised installing spy doorbells, so every doorstep is centrally available to large organisations who want to do harm (government, Amazon, Meta, whoever).
I can't install a Ring doorbell if it takes a picture of the street outside my house. That was preexisting regulation (about surveillance cameras requiring permits for public spaces). Of course, people who now install Ring doorbells DO often record the street. But that's more a matter of enforcing the law.
It's easier to ban Ring from selling devices in 2010, when nobody had them, than to take them away from millions who feel their personal benefit of not having to get off the couch to see who's at the door outweighs the societal harm.
That's before the arguments about societal benefits (copaganda does well at this). You change the argument from a hypothetical "this could help stop crime" to a concrete example: "in this case we found out who robbed little old granny thanks to our surveillance network".
Furthermore, the point isn’t the pushback but the ultimate failure and thus lack of adoption. I feel like that’s fairly obvious.
This idea that all new tech faces pushback is at best ignorant and at worst a wilful deception to justify every draconian idea pushed forward by tech bros who only care about extracting money from people at all costs.
Demo, or verbatim plot of Black Mirror episode?
As much as I enjoyed Black Mirror, I thought its Season 1 episode "The Entire History of You" was entertaining but poorly conceived. It framed catching your partner cheating as a "would rather not know" thing and ignored any possible positives. The episode wasn't really about the tech; it was about a failing relationship, a cheating partner, and an untrusting, obsessive person.
In any case, in that world, which didn't have AI to review and catalog what you saw but only playback of recorded sight, some positives they could have mentioned:
* an end to almost all date rape - since it would be recorded - leaving only the ambiguous cases
* a likely decrease in various crimes - since they'd all be recorded
* harder for execs/government to make backroom deals - since there'd be recordings of them
* might end gaslighting in personal relationships
* eyewitness reports/testimony would be way more reliable
* medical symptom checking - when did some symptom start would be recorded
* better performance review - like a pilot reviewing a training landing or an athlete reviewing their own performance.
* proof of abuse by customers or by staff.
* checking your actual time spent vs your perceived time spent - "I studied for 4 hours"; checking shows you studied for 45 minutes and kept getting distracted with non-study
* less lost items - check where you left your keys, etc....
* more accountable police - everyone is recording them
* no more need to take photos for memos, since you know everything you looked at is recorded
* all car accidents recorded - easier to determine blame
Of course adding AI to all of that would add orders of magnitude more usefulness.
I'm not saying there are no downsides. As one example, every bowel movement, shower, self pleasure, sex, cold, vomit, misspoken word, awkward situation, etc. would also be recorded.
Good (or decent) science fiction is never about the tech, but about its impact on people.
To point out it was this particular person's issue and not the tech: everyone else in the show also had the tech yet was doing fine. They were shocked when they met one person who didn't have it. So clearly, from the writing itself, it was normalized and no one was having issues; otherwise they'd have all brought up the issues.
It's not always nefarious. The friction is just too high and people don't actually care about any of those things you listed as much as you might believe. If they did, we'd just as easily employ people performing audits on every interaction of every waking moment since the beginning of humanity. A nanny, if you will.
In the real world, simplicity wins. You can say it's irrational all you want. Nobody cares. Cost, reliability, and impedance are more important. No amount of engineering or economy of scale will overcome those things. Doing nothing is always an option and so this is all ultimately political.
What humanity has learned again and again is that trust is too important and intrinsic to leave it up to politics. All that will result in is brittle rules that are easily abused worse than the original problem they intended to solve. It's much easier to convince people to socialize accordingly and ignore or punish the people who refuse to comply.
Making sure that every decision in a flowchart leads somewhere is not necessarily valuable or even desirable to anyone.
With AI added, the use cases are so compelling they'll fly off the shelves once they get the form and UX right.
Everything you wrote above was said about PDAs in the 90s, and yet here we are in 2025 and 85% of the planet has a PDA, renamed the smartphone.
No it wasn't. PDAs were seen as crappy little computers, but the applications were obvious because the bigger much more impressive computers were everywhere by then. There was no question about the value of personal computing anymore.
Everything regarding probabilistic AI is either about optimization or trading off costs. All such applications are intrinsically and perpetually lost in the weeds. The use cases aren't new because "generative" is a marketing buzzword desperately trying to cover up what is actually just "imitation".
AI makes what was already possible more accessible. It is useful, but not a revolution for the layperson or even most businesses apart from bridging knowledge gaps. It's a new way to search, but iterative at best. People are in awe of the money being exchanged, but are also in denial that it's almost entirely defense spending.
If it was just a matter of cost, scale, capability, etc. then why am I not allowed to own a flying car with my existing driver's license? Why doesn't everyone own full auto guns? Why do we serve horrible food in hospitals? Why do corporate offices thrive on work that technically never needed more than one person to accomplish even before computers were commonplace? The answers are all political.
There's a very significant chunk of people who rarely if ever use the camera on their phone right now. It's not even a matter of who they are or their personal opinions. Cameras simply aren't an exclusive gateway to anything critically important. In many cases a photo or video is an objectively worse format than text.
Smartphones became common because they are now the only way to access certain information or authenticate. It's to the extent that we eliminated hard copy documents and changed publishing and proving identity irreversibly. People frequently use smartphones because they have to, and a smartphone without a working camera is still perfectly usable and always will be.
This isn't a matter of the public being wooed by a sales pitch or wanting anything in particular. Images require less accessible and reliable methods of interpretation to convey information whereas text is the information. If you're not convinced then consider that both can be generated by AI. A generated image can be convincing and so can generated text, yet we depend on special forms of text such as keys which cannot be generated by AI and any image trying to encode the same is always inferior. An image is never acceptable as a sole or even primary means of authentication. For all these reasons and more, an image is never the only format available.
Most QR codes are not permalinks. Nobody wants to print out another one or retry scanning with better lighting only to find it doesn't work. When it really matters the link is dynamic and invisible. It's baked into a script your phone runs when you perform a more interesting higher level task in an app, tap-to-pay when you arrive, etc.
I don't care to take pics of strangers, though lots of people who haven't adopted them are concerned about that.
Overall, no more Meta glasses for me; I'm waiting for Apple's. They have tons of stores to get your glasses fixed, and they don't manufacture trash that breaks! Also, maybe Apple will add a privacy feature so your pics and vids anonymize faces not in your personal network.
Quality is iffy and framing is hard, but I'd rather have a OK photo taken while playing than a great photo taken while standing apart from the action trying to get the perfect shot lined up.
I'm having trouble understanding the purpose of your comment since it seems like you're just saying the ray ban glasses are bad for a different reason.
Of course, with all new technology, people fight against it. When I wore them on rollercoasters at Cedar Point in 2024, ride attendants told me to take them off and store them in a locker at the front entrance of the park (that kid / ride attendant hated them). Yet as of Feb 2026, Six Flags now allows smart glasses to be worn through all its parks, and 7 million have been sold.
Overall, I am detailing why they are useful, why I think they will be widely adopted, and why, like many technologies before, those who are against them will adopt them too (it's a counterargument here). Sure, some creeps will use them, and with that in mind Apple may be able to solve that privacy issue, as they are a privacy company (all pics and vids taken through Apple glasses would have faces not in your network randomized/anonymized).
This is its own dystopian nightmare. No one exists in the world but those you've asserted you've met. What if you meet someone who was in the background of a picture from childhood? Can you never take your pictures off Apple?
People have been fighting against smart glasses since 2012.
Apple may end up with a feature to post-edit others out, and versions down the road from that may add a feature where you register faces for a current session and it auto-blurs everyone else. Making its own assumptions about who is in-network and who should be blurred would be a bad user experience, given all it would get wrong; more than a "privacy" company, Apple spends a lot more marketing its optimized UX -- "it just works" -- for the average person.
But I think very soon detection alone won't be enough, because most people will have glasses, phones, CCTV, etc. I think the best approach is protecting yourself: a cloak, mask, or similar that is barely visible to humans but blocks machines from scanning or recording you.
I love it! I literally thought of something similar while writing the above comment, something like an EMP that disables all nearby camera sensors for 10 minutes or so.
Some years ago, a guy got arrested (it may have been in Chicago) for running a cellphone jammer while riding the train, because he hated people talking on phones on the train.
Might be considered somewhat similar. It could definitely earn you a beatdown, if someone catches you.
On the topic: if the glasses could be affected by anything RF it would be an easy job, but it's a camera (optical sensors, etc.), and only an optical countermeasure could block that, imo.
If you wear a body cam because you feel threatened, hopefully you tell others that you're potentially recording them. The other catch is that smart glasses do more than simply record video, such as facial recognition and so on. Often these are things with privacy ramifications that neither the wearer nor the observer knows exactly.
The issue is usually that you are imposing the risks onto others without consent. I did not sign the terms and conditions of your cloud providers data collection.
You can be recorded in public; it is not a foregone conclusion that everyone can be run through data capture systems without their consent. Society is still working through that. We can still decide on a fairer outcome.
Nothing contradictory there.
Even “…when the app alerts you, smart glasses are likely nearby” might be reasonable.
https://www.reddit.com/media?url=https%3A%2F%2Fpreview.redd....
It's looking at the BLE advertising packets that they send out to everybody. The only thing stored is manufacturer ID, not a device ID (which you wouldn't be able to get anyways).
You might as well try to press charges against Apple or Google for putting readable names for nearby devices that aren't yours in the bluetooth pairing screen.
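A minimal sketch of what matching on just that broadcast manufacturer ID could look like; the ID-to-vendor mapping and function name below are placeholders I made up, not real Bluetooth SIG assignments or this app's actual code:

```python
# Hypothetical watchlist keyed on Bluetooth SIG company identifiers.
# 0x1234 is a placeholder, NOT a real assignment; a real app would
# fill this from the SIG assigned-numbers table.
GLASSES_VENDORS = {
    0x1234: "ExampleGlassesCo",
}

def classify(company_id):
    """Return a vendor name if the broadcast manufacturer ID is on the
    watchlist, else None. This keys only on the company ID a device
    advertises to everyone -- no per-device identifier is involved."""
    return GLASSES_VENDORS.get(company_id)

print(classify(0x1234))  # ExampleGlassesCo
print(classify(0x004C))  # None (0x004C is Apple, not on this watchlist)
```

The design consequence is exactly what the comment describes: nothing identifying a specific device is stored, and the same vendor ID is shared by every product that vendor ships.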
I recently had to interact with an idiot wearing meta glasses. There should be a mandatory consent requirement AND an "on air" red led.
First, note that "filming" in public is not necessarily legal in every state if you include recording audio of conversations you're not party to.
Second, the GP said it should be illegal without consent, so clearly they were talking about what they consider right, not necessarily what is.
But most importantly, "filming and photographing people in public" is also obviously not what the GP was talking about. They said:
> Filming/video and lookups of people filtered through a corporate data mining operation without their consent should also be illegal.
And, actually, extracting biometrics from video of people and tracking them/data mining them without consent is in fact not legal in several states already, and potentially federal law, depending on what they do.
* What do you mean it's allowed for people to record me while I'm telling them off?
* What do you mean I'm not allowed to remember (with high fidelity) what someone said to me?
Either way, someone thinks it's weird. This would be a criminal matter, so a jury would have to decide if you're guilty. I feel like you'd have a hard time convincing 12 jurors that you're doing something wrong here.
Except the basis of that culture would not be honour, would it? A critical mass of people scrutinizing and reporting others' actions might lead to a compliance-based culture. It's different IMO. i.e. intrinsic motivation to behave well (honour, morality, decency) versus extrinsic motivation to behave well (fear of unpopularity, law enforcement, mob reaction, etc.)
"Honor culture" or "Culture of honor" is the term for people who are thin-skinned, quick to offense, and worried more about appearances than substance.
https://en.wikipedia.org/wiki/Culture_of_honor_(Southern_Uni...
https://en.wikipedia.org/wiki/Honor_killing
It's all about a shame-based society. When someone is made to feel ashamed, they might lash out. It's practically the opposite of guilt, which is directed inwardly.
At the margins, a shamed person might commit mass murder, while a guilty person might commit suicide.
Before you get to the margin, both guilty people and shamed people might alter their behavior in beneficial ways, but they do it for subtly different reasons.
I was focused on how I think an "honourable person" behaves, which is ... IMO ... someone who behaves well regardless of whether or not someone is watching them. i.e. being guided by a personal moral compass, without cultural shame, guilt, government laws, religious conventions, or physical fear being primary motivators
But of course, if I adopt a religion's or legal system's idea of morality as my personal compass (certainly the easiest way to go, and easily installed in youth) ... then the distinction falls apart. Cheers.
That's obviously part of it, but not the entirety of it. Guiding your own behavior is different than feeling compelled to also dictate others' behavior. Honor culture is usually putatively religious, yet is diametrically opposed to "judge not lest ye be judged."
To be fully immersed in it is to feel personally slighted by any perceived transgressions against the normal order of things, and to have zero sense of proportion about which things are truly harmful to all of us, and which things are simply not how we would do things or prefer things to be done.
(hint: smart glasses encourage anti-social behaviour for online clout.)
If the parent is torn about whether this is good or bad, they're really not paying attention.
I opened this in a pretty heavily populated area in Baltimore. There likely wasn't anyone nearby using glasses, and no detections were made, but the debug log flew by absurdly quickly, likely because there are a ton of Bluetooth devices nearby.
The start scanning button doesn't change to stop scanning, but it does seem to toggle scanning.
The top bar is overlapping with the notification bar area.
The bottom is truncated slightly by my 3-button gesture bar thing. I am old and use the very ancient back, home, and multitask buttons that are always visible, because I am old.
When I first granted permission the app seemed to just lock up and wouldn't do anything until I restarted it. I gave it both the permissions it wanted and tried fiddling with stuff, but it didn't seem to redraw and I couldn't get the settings to open until after I restarted.
When I first started I think I was connected to my headset, which then disconnected after the permission request?
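On the flooding debug log: the app's actual logging code isn't shown here, but one common way to tame a log driven by repeated BLE advertisements is deduplicating identical messages within a short window. A rough Python sketch (the function name and the 5-second window are my own assumptions):

```python
import time

def make_dedup_logger(window_s=5.0):
    """Return a log function that suppresses repeats of the same
    message seen within `window_s` seconds, so one line per received
    advertisement doesn't flood the debug view."""
    last_seen = {}  # message -> timestamp of last emission

    def log(msg, now=None):
        t = time.monotonic() if now is None else now
        if t - last_seen.get(msg, float("-inf")) < window_s:
            return False  # duplicate within the window: suppressed
        last_seen[msg] = t
        print(msg)
        return True

    return log
```

Usage: wrapping each "device seen" line with this would collapse the per-advertisement spam into roughly one line per device per window, while still showing new devices immediately.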