
Most organizations don’t have the resources to let people travel without an actual event on the other end.

I could be wrong, but I suspect visas may be affected as well.

The organizers and attendees will try to coordinate something else. However, throwing something together on Zoom at such short notice is going to be a crap show.

The schedules wouldn’t survive unscathed, since speakers will start dropping out at this point.

If they do manage to get something together, they run the real risk of it being a rickety and frustrating event.


You were wise enough to avoid this; unfortunately, for most people it's "shiny tech!".

Yet more regulation? We have regulation for these glasses already?

Aren’t there countries that make it mandatory to blot out faces of people on videos if they didn’t consent?


Which examples did they cover in the book?

I’m betting this is going into some ML / data-labelling pipeline.

Yeah, moderation may instead be labelling in this case. It’s likely the same type of firm handles both sorts of work on behalf of FAANG.

Sounds plausible.

We could also toss vibe coded mess on top of this and probably get closer to the truth.


The article itself is ambiguous on this point: "At the time of the publication, Meta admitted subcontracted workers might sometimes review content filmed on its smart glasses when people shared it with Meta AI."

That could be moderation, or it could be labelling new examples for training/validation.


This feels like an instance of weasel words. One can scarcely imagine any reason to do content moderation over people’s own private and personally consumed data.

I’ve worked in trust and safety - for me this is stupid, but well below the threshold of impossible.

Hell, I know of a major firm that decided QA was not needed for their trust and safety process.

Another common issue will be Southeast Asian Arabic speakers tasked with labelling Middle Eastern Arabic content, because accents and cultural dialects are not a thing.

I’ve had people at FAANG firms cry on my shoulder, because they couldn’t get access to engineering resources at their own firms.

There was the famous case of Meta executives overriding T&S policy and telling them what content was newsworthy during the Boston bombing. In a separate incident, they told their team that cartel violence was not newsworthy when friends in London complained about it.

When you say this is fantasy, what do you mean precisely?


What I mean is: I'm not sure what they base their statement that it's "a common practice among other companies" on. Unlikely they are talking about their peer companies. I suppose if you read the sentence literally, there surely exist one or more "other companies" in the broad universe of "other companies" that routinely do this kind of stuff. But I wouldn't think anywhere serious.

I mean, given this happened and it was sent to Sama, it seems pretty clear that the images being generated from this were being sent to a labelling pipeline somewhere.

There’s probably an opt out / opt in clause somewhere in the terms and conditions, which makes it feasible for Meta (and other firms) to use this data.


Meta could at least pretend that they don't intend to capture people in their most intimate and vulnerable moments instead of slobbering on the sideline like "mm... Data..."

Imagine someone pulling up a smartphone and then recording everything that happens around them. Contrast that with someone wearing smart glasses and doing that exact same thing.

On a separate note (and this is a genuine question): are you by any chance aware of the term non-consensual intimate imagery / NCII?

I am beginning to suspect that the average HN goer isn’t aware of the scope and scale of the Trust and Safety problem.


Someone pulling up a smartphone on me would feel hostile because it's violating a social contract. Maybe I'd feel betrayed and attacked if it turned out someone was recording me using glasses, but I don't know, I don't care about dashcams and this is not that much different. I imagine it feels bad and scary for women when someone takes creepshots of them, and this tech does open opportunities for that. Maybe that would be enough for me to hate glasshats if I had a bit of empathy. But isn't the genie already out of the bottle with 'deep nude' models available for everyone forever?

No, I don't think I've heard about NCII before, and Trust and Safety sounds like some corporate PR whitewashing term to me.


They don't care. Or they refuse to realize that tech isn't the solution to it, but an amplifier of its scale.

Can tell you that my urge to take photos/record drastically dips around other people. Particularly if it were meant for any sort of commercial exploitation. Stephenson called people wired for max indiscriminate data collection/processing "gargoyles". Personally I prefer glassholes.

https://www.tabletmag.com/sections/news/articles/the-borg-of...


I admit it's hard to care for what you people can't even articulate

Why don't you state what it is you think isn’t clear?

That way we can figure out what it is that’s confusing or unclear, and then see if you find it has any moral significance.


Have you heard the term non consensual intimate fantasies? I've heard it's an even bigger problem.

Well, you would fortunately be wrong. Fantasies are commonplace and well studied in society, psychology and even in the law.

The issue is when you go from fantasy to actually enacting it, which is usually when you earn the epithet of “Creep”.

Also, why make a throwaway for this line? I take it you haven’t heard of NCII?


Don’t we already hate the invasive ad tech industry?

Aren’t there already posts and articles on how to ensure that TVs don’t farm information from us?


Yes and no.

Safety and user pain is a part of tech which seems largely ignored, even on sites like HN.

I really have no idea why this ignorance prevails; commenters seem to genuinely be unaware of what goes on in Trust and Safety processes.

I mean, most users would complain about content moderation, but their experience would be miles ahead of what most of humanity enjoys when it comes to responsiveness.

I believe this lack of knowledge, examples, and case history is causing a blind spot in tech centric conversations when it comes to the causes of the Techlash.

Unfortunately this backlash is also the perfect cover for authoritarian government action - they come across as responsive to voters while also reining in firms that are more responsive to American citizens and government officers than their own.


> The reward ... is it can be far smarter than you.... and they know far more than you

I think this solidified an idea for me. A tool that is smarter than me but inconsistent is useless.

I can work with people who are smarter than me, because I can trust them, and I can trust them to own up or be held accountable for screw ups.

With a calculator, I can only hold myself accountable. However, I cannot hold myself accountable for not knowing something I don't know.

