
Just want to say that a decade ago Warframe was the first game I ever played on WINE when I was first learning Linux in school. If it hadn't been so friendly and easy to keep playing I wouldn't have the skills and job I do today. Thank you!


For me at least I just use the prebuilt MicroG-flavor ROMs at https://lineage.microg.org/

This comes preloaded with the MicroG settings app, so no need to install the extra FDroid repo. But otherwise yes, Aurora Store gets you access to all necessary proprietary apps.


Thanks a lot, I did not know about the Aurora Store.


That was me that filed the Itanium test suite failure. :)


Ah, porting to HP Superdome servers. It’s like being handed a brochure describing the intricate details of the iceberg the ship you just boarded is about to hit in a few days.

A fellow traveler, ahoy!


I worked on the Superdome servers back in the day. What a weird product. I still can't believe it was a profitable division (at my time circa 2011).

HP was going through some turbulent waters in those days.


Yes, some good times despite all the work.


The Itanic was kind of great :). I'm convinced it helped sink SGI.


Itanium did its most important job: it killed everything but ARM and POWER.


Sunk by the Great Itanic?


Why was the sinking of SGI great?


Oh, that wasn't the intent. I meant two separate things. The Itanic itself was kind of fascinating, but mostly panned (hence the nickname).

SGI's decision to build out Itanium systems may have helped precipitate their own downfall. That was sad.


Still makes me sad. I partly think a major reason for the demise was that it was simply built too soon: compiler tech wasn't nearly good enough to handle the ISA.

Nowadays, given the effort that has gone into making SIMD effective, I'd think modern compilers would have an easier time taking advantage of that unique and strange uarch.


VLIW has a fatal flaw in how it was used in these systems: you cannot run general-purpose, dynamically scheduled workloads unless you combine the JIT engine and the scheduler (there is prior art for this). It is the exact same problem as trying to run multiple compute kernels on a GPU at the same time. A VLIW machine with an OS and runtime built on a higher-level language, Wasm or the JVM, could foreseeably support dynamic workloads where the main CPU was VLIW.

Now, if they had been designed as GPU-like devices for processing data, the Fortune 1000 would never have needed Hadoop.


SGI and HP! Intel should have a statue of Rick Belluzzo on their campus.


One of the best books on Linux architecture I've read was the one on the Itanium port.

I think that's because Itanic broke a ton of assumptions.


This sounds interesting. Can you share the title of the book?


iwd does not wrap wpa_supplicant, it's a from-scratch implementation and a much nicer one at that.


It looks like it wraps NetworkManager or ConnMan - both of which wrap around wpa_supplicant for wifi. So yep, iwd wraps wpa_supplicant.

Architecture diagram on home page: https://iwd.wiki.kernel.org/


You're reading that diagram backwards.

NetworkManager and ConnMan can optionally _USE_ iwd as a backend _INSTEAD OF_ wpa_supplicant.

iwd does _NOT_ use NetworkManager/ConnMan

Source: gentoo user who explicitly avoids the buggy disaster that is NetworkManager+wpa_supplicant whenever possible


I noticed my error. To be fair, I don't think I read the diagram backwards. I think the diagram is drawn backwards instead. In it, iwd seems to talk to the iwd backend via D-Bus as they're close together.

Or maybe that's a diagram technique I'm just not used to.


It reads as a fairly normal diagram to me; NetworkManager has an iwd backend component/plugin that talks over D-Bus to iwd, which in turn stands on top of ell which in turn stands on the kernel (which itself contains a bunch of components of interest).


NetworkManager can use iwd as a wifi backend instead of wpa_supplicant, but NM isn't needed, as iwd can also manage the networks on its own. iwd should never run at the same time (on a single network interface) as wpa_supplicant, as wpa_supplicant is (almost?) entirely superseded by it.
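For anyone wanting to try this: switching NetworkManager to the iwd backend is a small drop-in config on most distros. The path below is the conventional one; adjust for your distro:

```ini
# /etc/NetworkManager/conf.d/wifi_backend.conf
[device]
wifi.backend=iwd
```

Then restart NetworkManager, and make sure wpa_supplicant isn't also grabbing the interface.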


Oh ok, so I misread the arch diagram. I thought that iwd was talking to the iwd backend, which would then delegate to NM or CM.

It looks like you're right [0], so I stand corrected.

It also looks like it can't fully replace wpa_supplicant though:

> IWD and the NM backend are work in progress and the capabilities are still limited.

[0] https://iwd.wiki.kernel.org/networkmanager


That paragraph of the article looks to be ~6 years out of date according to the NetworkManager version number it lists, around the time of the initial iwd release, and the whole article seems to be at least 2-3 years out of date as well, since iwd is well into version 2.x now.

Distros like Ubuntu have defaulted to iwd as the NM backend for Wi-Fi for a couple of years (and now in the LTS version). It really is quite a popular and stable replacement for wpa_supplicant.


Alright, thanks for the information! :)


Can you please share the tool in question? I have been desperately looking for something like this for my sandbox project.


Here you go [1], [2]. It's not completely ready yet - but it's usable. It should be OK if you plan to just modify or reuse parts of it. It currently supports a btrfs backend. The plain-directory backend and packaging of the tool are not done yet - but they shouldn't be too hard. I was keeping them for tomorrow. Meanwhile, you can use asciidoctor to convert the docs if you need to refer to them.

[1] https://git.sr.ht/~gokuldas/genpac

[2] https://crates.io/crates/genpac


This is the real answer. I have a paid domain and am still unable to get contact or transfer off (I have attempted this with all known registrars that support .tk; Freenom simply fails to respond to the transfer request).


Google Translate recently moved translated web pages to domains like this. If you plug a webpage into GT it will put the translated content under <domain>-<tld>.translate.goog. This user's actual domain is https://retr0.id


Oof. This will not be the last time that decision causes a problem.


With nginx I also set the return code to 444 on the default virtual host. This is not a real HTTP status code; instead it tells nginx to kill any connection to this vhost at the TCP level.
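A minimal sketch of such a catch-all vhost (the listen directives are generic, adjust to taste):

```nginx
# Default server for requests that match no other server_name.
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    # 444 is nginx-specific: close the connection without
    # sending any response at all.
    return 444;
}
```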


I have used a default host with a self-signed certificate and 444 for a while. One piece of advice was to make it support only the NULL cipher, but I did not succeed in doing that; I don't remember the details now.

However, many scanners still end up with a full 400. Either their implementations are that bad, or they intentionally send corrupted requests to try to exploit some vulnerability. I have not dug any deeper.


For HTTPS, since 1.19.4 you can reject the TLS handshake early: https://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_...
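Something like this, if I remember the directive right:

```nginx
# Default TLS server: abort the handshake for unknown SNI names
# instead of presenting any certificate (nginx >= 1.19.4).
server {
    listen 443 ssl default_server;
    ssl_reject_handshake on;
}
```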


I use dnscrypt-proxy[0] which round-robins to a bunch of upstream servers, plus encryption.

[0] https://github.com/DNSCrypt/dnscrypt-proxy
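For reference, the relevant bits of a dnscrypt-proxy.toml look roughly like this (server names here are purely illustrative; pick yours from the public-resolvers list):

```toml
# Illustrative excerpt from dnscrypt-proxy.toml
server_names = ['cloudflare', 'quad9-dnscrypt-ip4-nofilter-pri']

# 'p2' randomly picks between the two fastest resolvers;
# 'random' spreads queries across all of them
lb_strategy = 'p2'
require_dnssec = true
```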


I also use this for OTP tokens instead of my phone!

https://github.com/tadfisher/pass-otp
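In case it demystifies things: the codes pass-otp produces are plain RFC 6238 TOTP, i.e. HMAC-SHA1 over a 30-second time counter. A minimal Python sketch (not pass-otp's actual implementation, which I believe delegates to oathtool):

```python
import base64
import hmac
import struct
import time
from hashlib import sha1

def totp(secret_b32, for_time=None, digits=6, period=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the current time-step counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if for_time is None else for_time) // period
    digest = hmac.new(key, struct.pack(">Q", counter), sha1).digest()
    # Dynamic truncation per RFC 4226.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # -> 287082
```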

