Hacker News | new | past | comments | ask | show | jobs | submit | fc417fc802's comments

> Are you using unique phrasings or behavioral patterns?

Why would Twitter voluntarily run that sort of query to satisfy a subpoena?

Whether it's difficult and risky for the average user depends on the threat model. "Twitter doesn't directly have my name, address, or phone number sitting in their database next to my account" is easy. Other things are more difficult.


Phrasing idiosyncrasies are publicly observable, and anyone can note - as external observers did in the Kaczynski and Hanssen cases - that a particular phrasing is quaint. It is probably true that Twitter is unlikely to run a browser fingerprinting query to de-anonymize someone tweeting spoilers from a softcore porn show. But a potential leaker has to ask: "how sure am I of that?"

Twitter won't have your various device IDs, and VPN IPs are typically shared among many clients simultaneously. You could certainly generate a suspect list, but I don't think you'll get conclusive evidence.

That said, I don't know how much browser fingerprinting Twitter might be doing, or whether fingerprints from other services could be cross-referenced. A much higher risk is probably visiting other sites both with and without the VPN using the same browser without thinking about it, and thus leaking your fingerprint or even account cookies that way. Or, if you don't run a filter, visiting a site without the VPN that embeds Twitter tracking assets would leak to them directly.


You're right that you can end up with a suspect list instead of a direct answer, but it shouldn't be hard to narrow it down from there, especially in a case like this where most people wouldn't have access to privileged info about unaired shows to begin with. It also helps if you have more than one IP address to work from. You can end up with multiple suspect lists, but only one or two people who show up on all of them.
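As a toy illustration of that last point (the names and per-IP lists here are entirely made up), intersecting several suspect lists is a one-liner:

```python
def intersect_suspects(*suspect_lists):
    """Return only the accounts that appear on every suspect list."""
    pools = [set(s) for s in suspect_lists]
    return set.intersection(*pools)

# Hypothetical suspect lists, one per observed VPN IP
vpn_ip_a = ["alice", "bob", "carol", "dave"]
vpn_ip_b = ["bob", "carol", "erin"]
vpn_ip_c = ["carol", "frank", "bob"]

print(sorted(intersect_suspects(vpn_ip_a, vpn_ip_b, vpn_ip_c)))
# only two candidates survive all three lists
```

Each additional independent list cuts the pool down multiplicatively, which is why a handful of observations is usually enough.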

The timestep effect is remarkable. I was quite surprised when, in a basic PBD simulation, I lowered the timestep into the nanosecond range (IIRC; at any rate it was really small and no longer ran in real time) just to see what would happen, and got lots of high-frequency shivering effects that looked exactly like what happens IRL when metal objects are fed through a shredder.

Never tried with that low of a timestep; I wonder if that could start causing floating point issues, which would lead to more instability.

Yeah, I did have to refactor slightly to minimize precision-related impacts.
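A small demo of the kind of problem being hinted at (this is illustrative, not the commenter's actual code): with a nanosecond-scale timestep, a single-precision accumulator can lose the increment entirely once the running value is large enough, because 1e-9 is below the spacing between adjacent float32 values near 1.0.

```python
import struct

def to_f32(x):
    """Round a Python float (double) to the nearest float32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

t = to_f32(1.0)
dt = to_f32(1e-9)
print(to_f32(t + dt) == t)   # the timestep vanishes entirely in float32
print((1.0 + 1e-9) == 1.0)   # float64 still resolves the increment
```

One common refactor is exactly this: keep time and position accumulators in double precision (or use compensated summation) even if the rest of the simulation runs in float32.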

Well he veered off of the technical and into the personal so I'm not surprised it's dead. But yeah something feels weird about this comment section as a whole but I can't quite put my finger on it.

I think rather than AI it reminds me of when (long before AI) a few colleagues would converge on an article to post supportive comments in what felt like an attempt to manipulate the narrative and even at concentrations that I find surprisingly low it would often skew my impression of the tone of the entire comment section in a strange way. I guess you could more generally describe the phenomenon as fan club comments.


It is one of the few instances where the Reddit discussion seems more normal/in-depth. See the longer comments here:

https://www.reddit.com/r/programming/comments/1sgtkdf/tailsl...

There are a few glazing comments there too though.

> Well he veered off of the technical and into the personal so I'm not surprised it's dead.

I don't know what he posted, but it is easy to see how a small fan group around Laurie could form.

She is an attractive girl not afraid to be cute (which is done so seldom by women in tech that I found a reddit thread trying to triangulate whether she is trans; I am not posting that to raise the question, but she piques people's interest), plus the impressively high effort put into niche topics, PLUS the impressively high production value to present all that.


It halves (or thirds, or quarters, etc.) all the critical resources: available CPU cores, cache space, memory bandwidth. So I expect that it's only applicable for small reads that you are reasonably certain won't be in cache, and that it can only be used extremely sparingly, otherwise it will be nothing but a massive drain.

> I cant read a blog post in the background

You can consume technical content in the background?


this is a thing people do: convince themselves they can consume technical content subconsciously. it's not how the brain works though. it will just give you the idea you are following something.

not all technical content is the same, or has the same level of importance. this video does not introduce anything that i need to be able to replicate in my work, so i don't need to catch every detail of it, just grasp the basic concepts and reasons for doing something.

Lots of people will have a show on or something while they're cooking or cleaning or doing other things. Is it worse for it to be interesting technical content with fun other stuff thrown in than if it was an episode of Friends or Frasier or Iron Chef or 9-1-1: Lone Star or The Price is Right?

I guess I'm only allowed to have The Masked Singer on while I make dinner.


if your foreground work doesn't occupy your brain, why not?

Because I prefer not to think about the hair I'm removing from my shower drain?

> using (undocumented!) channel scrambling offsets that works on AMD, Intel, and Graviton

Seems odd to me that all three architectures implement this yet all three leave it undocumented. Is it intended as some sort of debug functionality or what?


it's explained in the video, and there's no way I'll be explaining it better than her

you could however link to the timestamp where that particular explanation starts. i am afraid i don't have time to watch a one-hour video just to satisfy my curiosity.

This is approximately the section in the video titled "Memory controllers hate you" (https://www.youtube.com/watch?v=KKbgulTp3FE&t=1399s), combined with the following section.

The actual explanation starts a couple minutes later, around https://youtu.be/KKbgulTp3FE?t=1553. The short explanation is performance (essentially load balancing against multiple RAM banks for large sequential RAM accesses), combined with a security-via-obscurity layer of defense against rowhammer.
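As a rough sketch of the load-balancing idea (the mask bits below are invented for illustration; the real per-vendor offsets are exactly the undocumented part), channel selection can be modeled as the XOR/parity of a few masked physical address bits:

```python
def channel_for(addr: int, mask: int = 0x140) -> int:
    """Pick a channel as the parity (XOR-fold) of masked address bits.

    The mask here selects bits 6 and 8 purely as an example; real
    controllers use wider, undocumented bit combinations.
    """
    return bin(addr & mask).count("1") & 1

# A sequential stream of 64-byte cache lines bounces between channels
# instead of hammering one, balancing bandwidth across DRAM channels.
lines = [channel_for(i * 64) for i in range(8)]
print(lines)
```

Because the selection mixes in higher address bits rather than using one bit directly, large sequential accesses still spread evenly, and the mapping is harder for an attacker (e.g. rowhammer) to predict, which is the security-via-obscurity angle mentioned above.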


I've found Gemini useful in extracting timestamps for particular spots in videos. Presumably it works with transcriptions, given how fast it is.

The three answers it found were:

- Avoiding lock-in to them: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1914

- Competitive advantage: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1852

- Perceived Lack of Use Case: http://www.youtube.com/watch?v=KKbgulTp3FE&t=1971

Those points do actually exist in the video, I checked. If there are more, I don't know about them, as I haven't yet watched the rest of the video.


It's (mostly) not the networking layer where people pay to target us. It's the application layer that would most benefit from being forked.

Of course the problem is that what can be forked already has been. Federated social media. Distributed git hosting. However most "essential" uses are centralized and often also commercial in nature. If you fork Amazon you're ... still Amazon. That sort of thing.


Being against a ban is equivalent to requiring that something be allowed. It might or might not end up happening, but either way it is permitted.

"Requiring something be allowed" != "requiring them to have them"

> an excise tax on number of servers

We need to go full Oracle and charge an excise tax per logical CPU core. For GPUs we can count SIMT lanes.

More seriously, they should be taxed per watt, likely scaling superlinearly, because most of the externalities don't scale linearly. Any additional infrastructure requirements should be directly rolled into their electric and water bills, which is to say that they should receive a very unfavorable rate.

