Hacker News: st_goliath's comments

Oh, this is just the usual Microsoft Stockholm syndrome. I've been witnessing this for over 20 years now and have been told that it has been a thing for much longer than that.

"No, we can't switch to OpenOffice you weird Open Source hippie! I can't e-mail documents to other people anymore, nobody can open them. Besides, the UI is all different, I won't be able to find anything!"

Then Office 2007 happened, tossing out the waffle menu for the ribbon and people started receiving e-mails with strange docx/xlsx files that nobody could open. IIRC that was still an issue 3 years later.

But no, when Microsoft does it, it is different: "This is progress! Are you against progress, you weird Luddite?"

I remember, by the time Windows 8 was released (the "Kachelofen edition", a German pun on "Kacheln", the tiles: "hurr, your desktop is a tablet!"), discussing with a Unix graybeard friend in the cafeteria how long it would take until the complainers accepted that "this is the way now". I think it was him who suggested that if Microsoft sent a sales rep around to shit on people's lawns, it would take at most a year until people started defending it as the inevitable cost of technological progress.

No matter how slow and bloated the GitHub web UI gets, or how many nonsense anti-features Microsoft stuffs into it, people will accept it and find funny excuses (the network effect will be the main one).


> I think it was him who suggested that if Microsoft sent a sales rep around to shit on people's lawns, it would take at most a year until people started defending it as the inevitable cost of technological progress.

They would inevitably say that Linux was not viable because you had to buy your own fertilizer.


> sudo systemctl enable [email protected]

:-)

Let me guess, ".*@.*\..*"?


Yes, the underlying S3 texture compression algorithm was patented in the US in the late 90s. The last relevant patent expired in 2018[1].

Direct3D called its variants DXTn, later renamed to BCn. From what I recall, Microsoft had some sort of patent licensing deal that implicitly allowed Direct3D implementers to support these formats.

OpenGL had an extension called GL_EXT_texture_compression_S3TC[2].

Under "IP Status", the extension specification explicitly warns that even if you are, e.g., shipping graphics cards whose Direct3D drivers support S3TC, you may not legally be able to simply enable that feature in your OpenGL driver.

[1] https://en.wikipedia.org/wiki/S3_Texture_Compression#Patent

[2] https://registry.khronos.org/OpenGL/extensions/EXT/EXT_textu...


> For repairing a broken thing? After provably trying in vain to get the landlord to fix it?

Down the hallway from my office used to be the management of a small hotel chain. We often had lunch together and I got to hear a bunch of interesting anecdotes over the years.

Way back when they started up and didn't yet have enough cash to actually own the buildings they operated in, they rented. One of the buildings turned out to have numerous issues (holes in the roof, gaps near exterior walls, etc.), to the point that it eventually didn't pass a fire inspection. They repeatedly asked the owner to have it fixed. Pressed for time, they eventually paid someone out of their own pocket so the building would at least be up to code for the fire inspection.

From what I was told, the owner threw a tantrum over them modifying the building, terminated the contract and sued them. Successfully.

If you are a tenant in a rental apartment, you'd probably have more leniency on the legal side (compared to a company renting a business property). But still, I'd be very careful about making assumptions about the legal situation, lest you end up in some sort of Kafkaesque legal mess.

Over here at least, it is very common in apartment complexes that the apartment owner is a different person/entity than the building owner, and only the latter has the right to mess with stuff installed in the walls (e.g. plumbing) and especially stuff elsewhere in the building (e.g. an external intercom system). If you ask the landlord to fix it, the best they can do is forward that request to the building owner. If you pulled a stunt like the OP did, there's a good chance the building owner would sue your landlord.


In the US states that I know well, a residential tenant may perform necessary repairs to bring the space up to health and safety codes, and may deduct the cost from their rent. They have an obligation to notify the property manager, in advance in the case of non-emergency repairs, or after the fact otherwise. There are additional details to consider as well.

I don't know if this would apply to a commercial tenant.

But it would definitely not apply to non-violating conditions like the OP's case.


> the owner threw a tantrum over them modifying the building, terminated the contract and sued them. Successfully.

Was the unauthorized modification permanent or undoable? If the latter, I think some people should really get their judge card (or landlord card) revoked.

Did the judge at least suggest what alternative action the tenant should have taken to comply with the law and code?


Most likely the (legally) correct thing to do in the US is to first report the landlord to the relevant agency, possibly named something like Licensing and Inspections or Fair Housing or some such. Each local jurisdiction will have its own agencies for this, so do your research. Failure to respond to that would next involve a landlord-tenant lawyer.

Whether or not it's worth all the trouble and time is a different matter. For most people, I'd say reporting to relevant authorities to make the landlord's life harder without needing much continuing effort is probably worth doing, but the lawsuit side is likely to be a huge time and money sink and it's almost always easier to just move. Let the city sue them for continuing to accrue complaints of unsafe living conditions.

In the same way, a landlord cannot evict you themselves if you just fail to pay rent, but there are multiple legal mechanisms to eventually get the sheriff to do it for them. Basically, if landlord-tenant negotiation fails, I think the only legal recourse is to involve governmental third parties; otherwise you technically open yourself up to legal reprisal.


> It's interesting how they found the unused code.

From the article: the code was broken.

The breaking bug was discovered in 2023 by syzbot, a fuzzer, and was found to have been introduced in 2016. This means that probably nobody has been using UDP-Lite (at least on a recent kernel, even LTS) for quite some time now.

It is now 2026; removing UDP-Lite entirely has been proposed and discussed, and the patch set has gone through several iterations on the netdev mailing list. Apparently nobody complained that they actually do need it, and it has been merged into the netdev tree, likely ending up in the next release.


I must admit, just from reading the description, it doesn't sound as though the correct inference is that it's never been used.

"In 2023, syzbot found a null-ptr-deref bug triggered when UDP-Lite attempted to charge an skb after the total memory usage for UDP-Lite _and_ UDP exceeded a system-wide threshold, net.ipv4.udp_mem." to me reads that if the total memory usage never exceeded that threshold then the bug wouldn't trigger. So, wouldn't this bug only affect people who changed that threshold down below the current usage? Because otherwise, usage wouldn't go above the threshold anyway?

And just because the kernel is logging a deprecation notice, there's no guarantee that anyone would ever see that, depending how often it was logged.

But that said, I'd never even heard of this feature, and wouldn't be at all surprised if many routers just silently dropped these packets anyway because they didn't recognise the protocol.


The key feature used here is the '%n' format specifier, which fetches a pointer as the next argument and writes the character count so far back through it.

There is actually an interesting question here: was '%n' always in printf, or was it added at one point?

I took a cursory look at some old Unix source archives at TUHS: https://www.tuhs.org/cgi-bin/utree.pl

As far as I can tell from the PDP-11 assembly, Version 7 research Unix (relevant file: /usr/src/libc/stdio/doprnt.s) does not appear to implement it.

The 4.1BSD version of that file even explicitly throws an error, treating it as an invalid format specifier.

The implementation in a System III archive looks suspiciously similar to the BSD one, also throwing an error.

Only in a System V R4 archive (relevant file: svr4/ucblib/libc/port/stdio/doprnt.c) did I find an implementation of "%n" that works as expected.

I guess it was added at some point to System V and through that eventually made it into POSIX?


I think it was first introduced in 4.3 BSD Tahoe (released June 15, 1988): https://www.tuhs.org/cgi-bin/utree.pl?file=4.3BSD-Tahoe/usr/...

This was an update to the earlier 4.3BSD (1986), which still implemented printf() in VAX assembly and didn't support the %n feature.

So %n may have originally been implemented in 4.3 BSD Tahoe and made its way into SVR4 subsequently.


In the past, you had to wait for the tubes to warm up, before you got a picture.

Nowadays, you have to wait for the thing to finish booting.

In the future, you have to wait for the ads to finish playing?


Gotta chug that verification can.


> Ever since LLM generated content proliferated we now have...

Or maybe, ever since you became aware of it, you started increasingly becoming aware of it?

See: https://en.wikipedia.org/wiki/Frequency_illusion


Nope. It is generated by LLMs, and a few people got influenced by it now.

It isn’t like em-dashes


I've definitely been writing like that for a long time.


"Nope. It isn’t like em-dashes. It is generated by LLMs, and a few people got influenced by it now."

Your comment can be rearranged into that "it's not X, it's Y" format too.


Yes, but it's not natural to say "It's not <non sequitur thing no one was talking about>. It's <amazing globally impactful thing that should make you pay attention>." That's how LLMs write though.


> The Megahertz Wars were an exciting time.

About a week ago, completely out of the blue, YouTube recommended this old gem to me: https://www.youtube.com/watch?v=z0jQZxH7NgM

A Pentium 4, overclocked to 5GHz with liquid nitrogen cooling.

Watching this was such an amazing throwback. I remember clearly the last time I saw it, which was when an excited friend showed it to me on a PC at our schools library. A year or so before YouTube even existed.

By 2005, my Pentium 4 Prescott at home had some 3.6GHz without overclocking, 4GHz models for the consumer market were already announced (but plagued by delays), but surely 10GHz was "just a few more years away".


IIRC, part of the GHz problem is that very long pipelines like that of the Pentium 4 tend to show increasing benefits at higher clocks. If you can keep the pipeline full then the system reaps the benefits. Sort of like a drag racer - goes very fast in a straight line but terrible on corners.

But with longer pipelines comes larger penalties when the pipeline needs to be flushed, so the P4 eventually hit a wall and Intel returned to the late Pentium 3 Tualatin core, refining it into the Pentium M which later evolved into the first Core CPUs.


Only just last year did someone goose a PC CPU to 9.13 GHz:

https://www.tomshardware.com/pc-components/cpus/core-i9-1490...


> I wonder how many out there seriously think we could ever completely rid ourselves of the CPU. It seems to be a rising sentiment.

This sentiment is not a recent thing. Ever since GPGPU became a thing, there have been people who first hear about it, don't understand processor architectures and get excited about GPUs magically making everything faster.

I vividly recall a discussion with some management type back in 2011, who was gushing about getting PHP to run on the new Nvidia Teslas, how amazingly fast websites will be!

Similar discussions also spring up around FPGAs again and again.

The more recent change in sentiment is a different one: the "graphics" origin of GPUs seems to have been lost to history. I have met people (plural) in recent years who thought (surprisingly long into the conversation) that I meant Stable Diffusion when talking about rendering pictures on a GPU.

Nowadays, the 'G' in GPU probably stands for GPGPU.


The dream, I think, has always been heterogeneous computing. The closest today is probably Apple, with their CPUs combining different core types and a GPU with unified memory (someone with more knowledge of computer architecture could probably correct me here).

Have a CPU, GPU, FPGA, and other special-purpose chips like neural accelerators, all with unified memory, somehow pipelining specific workloads to whichever chip handles them best.

I wasn't really aware people thought we would be running websites on GPUs.


The field explored this direction before in vector computers with high bandwidth memory (Cray etc).

