AzN1337c0d3r's comments

Were you born after 2001? Do you remember those planes that flew into the buildings?

Private planes can do the same thing.


And the TSA wouldn’t do anything to stop that

Hell, the TSA doesn't do much to prevent that on commercial flights, but requiring private flights to start going through commercial-style security would be completely pointless.


Inconveniencing wealthy people might create motivation to fix the problem.

Doesn't work.

If TSA were added, there still wouldn't be any lines at private terminals.


Even if you're flying commercial, wealthy people can just pay Perq Soleil $250 a pop to waltz them through the employee line with no wait.

What's new about this with the H2 chip?

My H1-chipped USB-C AirPods Max (OG) seem to switch seamlessly between my iPhone, iPad, and MacBook Pro already.


If your gen 1 are already excellent for you, there’s no reason to upgrade, same as there’s no reason to get a new phone or laptop every year. My wired headphones are ten plus years old and will be fine for a couple more decades; my gen1 Max, at a fifth the price, are also fine and will be fine until their Bluetooth becomes too old (which may be ten or twenty years these days). Both benefit from earcup swaps occasionally (but gen1 lightning needs them more often than usb-c.)

If you’re unsatisfied with Transparency mode on your gen1 then the gen2 will give you Adaptive which is a big improvement (especially so if you wear them outdoors or around other people). Same improvement that the AirPods had, if you’re familiar with that.

If you use them for videoconferencing, the lower latency and higher quality headset codec may be worth upgrading. They retain value on the used market so long as you unpair them from Find My an hour before you sell them and have a purchase receipt.

I suspect there might be some slight power savings for your transmitting devices if both sides support Bluetooth 5.3, but I would not expect that to be significant or advertised.


Most workstation-class laptops (e.g. Lenovo P-series, Dell Precision) have 4 DIMM slots and you can get them with 256 GB (at least, before the current RAM shortages).

There's also the Ryzen AI Max+ 395 that has 128GB unified in laptop form factor.

Only Apple has the unique dynamic allocation though.


Yep, I have a 13" gaming tablet with the 128 GB AMD Strix Halo chip (Ryzen AI Max+ 395, what a name). Asus ROG Flow Z13. It's a beast; the performance is totally disproportionate to its size & form factor.

I'm not sure what exactly you're referring to with "Only Apple has the unique dynamic allocation though." On Strix Halo you set the fixed VRAM size to 512 MB in the BIOS, and you set a few Linux kernel params that enable dynamic allocation to whatever limit you want (I'm using 110 GB max at the moment). LLMs can use up to that much when loaded, but it's shared fully dynamically with regular RAM and is instantly available for regular system use when you unload the LLM.


What operating system are you using? I was looking at this exact machine as a potential next upgrade.


Arch with KDE, it works perfectly out of the box.

I configured/disabled RGB lighting in Windows before wiping and the settings carried over to Linux. On Arch, install & enable power-profiles-daemon and you can switch between quiet/balanced/performance fan & TDP profiles. It uses the same profiles & fan curves as the options in Asus's Windows software. KDE has native integration for this in the GUI in the battery menu. You don't need to install asus-linux or rog-control-center.

For local AI: set VRAM size to 512 MB in the BIOS, add these kernel params:

ttm.pages_limit=31457280 ttm.page_pool_size=31457280 amd_iommu=off

Pages are 4 KiB each, so 120 GiB = 120 x 1024^3 / 4096 = 31457280 pages

To check that it worked: sudo dmesg | grep "amdgpu.*memory" will report two values. VRAM is what's set in BIOS (minimum static allocation). GTT is the maximum dynamic quota. The default is 48 GB of GTT. So if you're running small models you actually don't even need to do anything, it'll just work out of the box.
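The steps above, sketched for a GRUB setup (file paths assumed; adjust for systemd-boot or another bootloader):

```shell
# Sanity-check the page math: 120 GiB / 4 KiB pages
echo $(( 120 * 1024 * 1024 * 1024 / 4096 ))   # 31457280

# Append to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub:
#   ttm.pages_limit=31457280 ttm.page_pool_size=31457280 amd_iommu=off
# then regenerate the config and reboot:
#   sudo grub-mkconfig -o /boot/grub/grub.cfg

# After reboot, verify (VRAM = BIOS minimum, GTT = dynamic quota):
#   sudo dmesg | grep "amdgpu.*memory"
```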

LM Studio worked out of the box with no setup, just download the appimage and run it. For Ollama you just `pacman -S ollama-rocm` and `systemctl enable --now ollama`, then it works. I recently got ComfyUI set up to run image gen & 3d gen models and that was also very easy, took <10 minutes.

I can't believe this machine is still going for $2,800 with 128 GB. It's an incredible value.


You may want to check whether OpenRGB is able to configure the RGB. You could even do some fun stuff like changing the color once a training run is done.


I use OpenRGB to turn off all the RGB crap on my desktop machine. Unfortunately you have to leave OpenRGB running, and it takes a constant 0.5% of CPU. I wish there were a "norgb" program that would simply turn off RGB everywhere and not use any CPU while doing it.
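For what it's worth, OpenRGB's CLI may already cover the one-shot case on some hardware (flags assumed from current OpenRGB releases; check `openrgb --help`):

```shell
# One-shot: set every detected device to black, then exit.
# This only persists on controllers that support a hardware "static" mode;
# direct-mode-only devices revert once the process exits, which is why
# people end up leaving the daemon running.
openrgb --mode static --color 000000
```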


Yeah, I second the norgb option. It's even more annoying when OpenRGB just randomly hangs scanning devices and now I'm stuck with rainbows I can't turn off!


Brilliant!


Really appreciate this response! Glad to hear you are running Arch and liking it.

I've been a long-time Apple user (and long-time user of Linux for work + part-time for personal), but have been trying out Arch and hyprland on my decade+ old ThinkPad and have been surprised at how enjoyable the experience is. I'm thinking it might just be the tipping point for leaving Apple.


I just did! Warmly encouraging you to try it out! I managed to put Omarchy on an external SSD on my old MacBook Pro 2019; I rarely boot into macOS now. It's been a long time since I enjoyed using a computer SO MUCH!


> Only Apple has the unique dynamic allocation though.

What do you mean? On Linux I can dynamically allocate memory between CPU and GPU. Just have to set a few kernel parameters to set the max allowable allocation to the GPU, and set the BIOS to the minimum amount of dedicated graphics memory.


Maybe things have changed, but the last time I looked at this, it was max 96 GB to the GPU. And it isn't dynamic in the sense that you still have to tweak the kernel parameters, which requires a reboot.

Apple has none of this.


On Strix Halo you can get at least 120 GB to the GPU (out of 128 GB total); I'm using this configuration.

Setting the kernel params is a one-time initial setup thing. You have 128 GB of RAM, set it to 120 or whatever as the max VRAM. The LLM will use as much as it needs and the rest of the system will use as much as it needs. Fully dynamic with real-time allocation of resources. Honestly, I literally haven't thought about it since setting those kernel args a while ago.

So: add "ttm.pages_limit=31457280 ttm.page_pool_size=31457280" to your kernel command line, reboot, and that's literally all you have to do.

Oh, and even that is only needed because the AMD driver defaults to something like 35-48 GB max VRAM allocation. It is fully dynamic out of the box; you're only configuring the max VRAM quota with those params. I'm not sure why they chose that number for the default.


You do have to set the kernel parameters once to set the max GPU allocation, I have it set to 110 GiB, and you have to set a BIOS setting to set the minimum GPU allocation, I have it set to 512 MiB. Once you've set those up, it's dynamic within those constraints, with no more reboots required.

On Windows, I think you're right, it's max 96 GiB to the GPU and it requires a reboot to change it.


Intel has had dynamic allocation since the Intel 830 (2001) for the Pentium III Mobile. Everything always did, especially platforms with iGPUs like the Xbox 360.

Only Apple and AMD have APUs with iGPUs fast enough to be relevant for large local LLM (>7B) use cases.


Insurance is likely using that same data to adjust rates.


Left-right concurrency control might be a good fit for this problem.

https://concurrencyfreaks.blogspot.com/2013/12/left-right-co...
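For anyone who doesn't want to click through, here's a minimal lock-based Python sketch of the idea (the real algorithm uses atomic counters so reads are wait-free; this is illustrative only):

```python
import threading

class LeftRight:
    """Two copies of the data: readers use one, writers mutate the other,
    then swap and wait for old readers to drain before replaying the write."""

    def __init__(self, factory):
        self.instances = [factory(), factory()]  # the two copies
        self.read_idx = 0                        # copy current readers use
        self.version = 0                         # counter pair readers mark
        self.counts = [[0, 0], [0, 0]]           # [version][arrived, departed]
        self._rlock = threading.Lock()           # protects counts (sketch only)
        self._wlock = threading.Lock()           # serializes writers

    def read(self, fn):
        with self._rlock:
            v = self.version
            self.counts[v][0] += 1               # announce arrival under version v
        try:
            return fn(self.instances[self.read_idx])
        finally:
            with self._rlock:
                self.counts[v][1] += 1           # announce departure

    def write(self, fn):
        with self._wlock:
            idx = self.read_idx
            fn(self.instances[1 - idx])          # mutate the copy readers aren't on
            self.read_idx = 1 - idx              # publish: new readers see new copy
            old_v, self.version = self.version, 1 - self.version
            while True:                          # wait for old-version readers
                with self._rlock:
                    arrived, departed = self.counts[old_v]
                if arrived == departed:
                    break
            fn(self.instances[idx])              # replay the write on the old copy
```

Readers never wait on writers; writers pay the cost of waiting for old readers and applying each mutation twice.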


Bought by Broadcom, now implementing the classic strategy of leveraging vendor lock-in to milk customers.


The increase is massive (I've heard 5x over existing contracts in some places).


Ah, 5x? At $WORK, the low-code tool vendor that is used to build the monolith (and that of our sister company) was bought by a private equity firm. Our sister company will face a 7x increase. Another fun thing is that the license is based on a percentage of licensing cost to their customers.

Their game is clearly to squeeze very hard for a few years, and then deprecate the product. I can't imagine that there are companies that are fine with such price hikes.


Not only that, add dips in quality. For instance, VMware was famous for stutter-free graphics; now it's a 15 FPS show.

Milking customers is already thin ice, but in combination with declining quality it's a death sentence.


Fork?


Are you under the impression that VMware is free open source software?


Yes, I was, actually.


Fair enough, you're one of today's (un)lucky 10,000! (https://xkcd.com/1053/)

VMware has always been proprietary; there's been some handwringing a few times about the fact that they borrow from FOSS quite a bit.

https://www.zdnet.com/article/linux-developer-abandons-vmwar...


Not to be pedantic, but people have died from software programming bugs being a primary contributing factor. One example: Therac-25 (https://en.wikipedia.org/wiki/Therac-25)


I only meant this in relation to the CrowdStrike incident that was mentioned in the comment I replied to. The standards and regulations in those other industries have changed dramatically (for the better) since Therac-25.


I mean, that was over 40 years ago. Same thing for the Ariane 5 failure, which is a staple of safety-critical classes (at least in Europe); it's not getting any younger.

If all the examples you can conjure are decades old*, is it any wonder that people don't really take it seriously? Software powers the whole world, and yet the example of critical failure we constantly hear about is close to half a century old?

I think the more insidious thing is all the "minor" pains being inflicted by software bugs, which, when summed up, reach a crazy level of harm. It's just diluted, so it's less striking. But even then, it's hard to say whether the alternative of not using software would have been better overall.

* maybe they've added Boeing 737 Max to the list now?


> If all the examples you can conjure are decades old

They're not ALL the examples I can conjure up. MCAS would probably be an example of a modern software bug that killed a bunch of people.

How about the 1991 failure of the Patriot missile to intercept a Scud missile due to a software bug that didn't account for clock drift, costing 28 lives?
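The arithmetic behind that one is striking: the system counted time in 0.1 s ticks in a 24-bit register, and 0.1 has no exact binary representation. A back-of-the-envelope reproduction of the widely cited GAO figures (not the exact flight code):

```python
# 24-bit truncation of 0.1 leaves a tiny per-tick error that never resets
err_per_tick = 0.1 - 209715 / 2**21   # ~9.5e-8 s lost per 0.1 s tick
ticks = 100 * 3600 * 10               # ~100 hours of continuous uptime
drift = err_per_tick * ticks
print(round(drift, 2))                # ~0.34 s of accumulated clock drift
# At Scud closing speeds (~1,700 m/s) that's over half a kilometer of
# tracking error -- enough to miss the intercept window entirely.
```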

Or the 2009 loss of Air France 447 where the software displayed all sorts of confusing information in what was an unreliable airspeed situation?

Old incidents are the most likely to be widely disseminated, which is why they're the most likely to be discussed, but discussion revolving around old events doesn't mean the situation isn't happening now.


In aviation, accidents never happen because of just a single factor. MCAS was mainly an issue of inadequate pilot training for that feature; AF447 was complete incompetence from the pilots. (The captain, when he returned to the cockpit, quickly realized what was happening, but it was too late.)


There's almost never a death where there is a single factor, in aviation or otherwise. You can always decompose systems into various layers of abstractions and relationships. But software bugs are definitely a contributing cause.


I would submit Google's TPUs are not GPUs.

Similarly, Tenstorrent seems to be building something that you could consider "better", at least insofar as the goal is to be open.


> Back in the real world, no race team would agree that their cars should disintegrate after one race.

Weren't F1 teams basically doing this by replacing their engines and transmissions until the rules introduced penalties for component swaps in 2014?


If you go back further than that, teams used to destroy entire engines for a single qualifying session.

The BMW turbocharged M12/M13 that was used in the mid-eighties put out about 1,400 horsepower at 60 PSI of boost pressure, but it may have been even more than that because there was no dyno at the time capable of testing it.

They would literally weld the wastegate shut for qualifying, and it would last for about 2-3 laps: outlap, possibly warmup lap, qualifying time lap, inlap.

After which the engine was basically unusable, and so they'd put in a new one for the race.


Current examples would be drag racing cars that have motors that are designed and used in a way that they only survive for about 800 total revolutions.


Yup, cigarette money enabled all kinds of shenanigans: engine swaps for qualification, new engines every race, spare third cars, it goes on. 2004 was the first year the rules specified that engines must last the entire race weekend and introduced penalties for swaps.


> cigarette money enabled all kinds of shenanigans.

It still does. New Zealand has a crop of tobacco funded politicians.


>New Zealand has a crop of tobacco funded politicians.

when they leave politics do they just rapidly age and dissolve like that guy in the Indiana Jones film?


F1 income is way, way higher than in the 80s.


Even today F1 teams are allowed 4 engine replacements before taking a grid place penalty, and those penalties still show up regularly enough. So nobody is making "reliable" F1 engines.

You can see this really on display with the AMG ONE. It's a "production" car using an F1 engine that requires a rebuild every 31,000 miles.


Don't highly optimized drag racers do this? I mean, a clutch that in normal operation gets heated until it glows can't be very durable.


They don't just specify 12 smaller cables for nothing when 2 larger ones would do. There are mechanical-compatibility concerns here: 12 wires have a smaller allowable bend radius than 2 larger ones with the same ampacity.
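As a rough illustration of the bend-radius point (hypothetical gauges; minimum bend radius scales roughly with conductor diameter):

```python
import math

d_small = 2.0                                   # mm, each of the 12 wires (made-up gauge)
area_total = 12 * math.pi * (d_small / 2) ** 2  # total copper cross-section

# Two conductors carrying the same total copper area:
d_large = 2 * math.sqrt(area_total / (2 * math.pi))  # = d_small * sqrt(6)

print(round(d_large / d_small, 2))  # 2.45: each big wire is ~2.4x the diameter,
                                    # so roughly 2.4x the minimum bend radius
```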


One option is to use two very wide, thin, insulated copper sheets as the cable. It still has a good bend radius in one dimension but is able to sink a lot of power.

