Hacker News | Ristovski's comments

A similar tool to this (includes interactive TUI) is https://github.com/wagoodman/dive


While they may look similar at first glance, Dive and Cek target different use cases. Dive is great at visualizing layer content and analyzing image efficiency, but requires the Docker daemon and can't extract file contents. Cek is daemonless (works with any container runtime or none at all) and focuses on providing a programmatic interface: `ls`, `tree`, `cat`, etc. for exploring a container's overlay filesystem and layers.
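A minimal illustration of the daemonless idea (not Cek's actual implementation; the function name and structure here are my own): a `docker save`-style image tarball can be walked with nothing but the Python standard library, no container daemon involved. `manifest.json` naming the layer tarballs is the standard `docker save` layout.

```python
import io
import json
import tarfile

def list_layer_files(image_tar_path):
    """List the files in each layer of a `docker save`-style image
    tarball, reading the tar directly -- no container daemon needed."""
    layers = {}
    with tarfile.open(image_tar_path) as image:
        # manifest.json names the layer tarballs inside the image tar
        manifest = json.load(image.extractfile("manifest.json"))
        for layer_name in manifest[0]["Layers"]:
            layer_bytes = image.extractfile(layer_name).read()
            # each layer is itself a tar archive of that layer's files
            with tarfile.open(fileobj=io.BytesIO(layer_bytes)) as layer:
                layers[layer_name] = layer.getnames()
    return layers
```

Extracting a single file (`cat`) is the same walk, just returning the matching member's bytes instead of names.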


Dive is a very nice tool. I've been using it for years.


Dive is awesome, it just tends to be a bit slow and eats up a lot of RAM when inspecting big images...


FastEndpoints is nice, but they don't seem to plan on supporting NativeAOT anytime soon: https://github.com/FastEndpoints/FastEndpoints/issues/565#is....


For those wondering what the exact attack vector is, the AMD advisory has some details:

> Improper validation in a model specific register (MSR) could allow a malicious program with ring0 access to modify SMM configuration while SMI lock is enabled, potentially leading to arbitrary code execution. [0]

[0]: https://www.amd.com/en/resources/product-security/bulletin/a...


> nvtop stands for NVidia TOP, a (h)top like task monitor for NVIDIA GPUs.

The GitHub repo[0] disagrees:

> NVTOP stands for Neat Videocard TOP, a (h)top like task monitor for GPUs and accelerators.

I have been using this on AMD for a long time now.

[0]: https://github.com/Syllo/nvtop


One of the major downsides of the Hexagon DSP is that it's near-impossible to actually run anything on it unless you somehow get your hands on an unprovisioned/unlocked SoC.

The HLOS (High-level OS) running on the Hexagon requires every "applet" to be signed by either the Qualcomm root cert or the OEM's cert. Usually, every phone has a set of generic Hexagon applets (or "skeleton libs") that are provided and signed by the OEM, which seem to be freely usable to offload some computational work to the DSP (mainly FastCV et al - https://developer.qualcomm.com/sites/default/files/docs/qual...). Those of course come with their own bugs: https://research.checkpoint.com/2021/pwn2own-qualcomm-dsp/

On some older SoCs, you were able to use a TOCTOU (Time of check to time of use) exploit to bypass the signature check by patching the applet loader shim in-memory, once it itself got authenticated: https://github.com/geohot/freethedsp/ (I have personally ported this to the msm8953, and it seems to work)
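The shape of that bug, sketched in Python (illustrative only, not the actual freethedsp exploit; `is_signed` is a stand-in for the real verification): the loader reads the applet once to authenticate it and again to use it, so anything that can swap the contents between the two reads slips past the check.

```python
def is_signed(blob: bytes) -> bool:
    # stand-in for the real signature verification
    return blob.endswith(b"SIGNED")

def vulnerable_loader(read_blob):
    """Check-then-use: read_blob() is called once for the signature
    check and again for 'execution' -- the classic TOCTOU window."""
    if not is_signed(read_blob()):       # time of check
        raise PermissionError("unsigned applet")
    return read_blob()                   # time of use

# An attacker who can swap the blob between the two reads wins the race:
blobs = iter([b"benign applet SIGNED", b"attacker payload"])
loaded = vulnerable_loader(lambda: next(blobs))
assert loaded == b"attacker payload"     # check passed, payload loaded
```

Patching the loader shim in memory after it has itself been authenticated is the same pattern: the "second read" is execution of code that was verified in a state it no longer has.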


I am not surprised. When I worked at Qualcomm my main gripe was how closed and secretive they were about everything. The tech underneath was pretty cool, although nothing spectacular in my opinion. I don't think I ever saw anything that deserved all that secrecy, at least in the GPU.

When I switched to NVidia I was surprised to find a much more open ecosystem with good public documentation. NVidia did have some tasty secret sauce stuff that they didn't expose outright, but they did what they could to empower developers to make the best use of the underlying hardware. They strike the right balance between openness and maintaining a competitive advantage, in my view.

Just my opinion based on working in both companies for a number of years. Thankfully I no longer have a dog in that fight.


I am an ex QCOMer and agree with everything here. We always said it was a legal firm with a tech problem. That stranglehold on IP really holds the company back, IMO. Sure, the licensing model made $$$, but they lost a lot of goodwill in the tech community.


Hello,

> The HLOS (High-level OS) running on the Hexagon requires every "applet" to be signed by either the Qualcomm root cert or the OEMs cert

That hasn't been true for quite a few years now :) See the Unsigned PDs, which are allowed for general purpose compute since at least sm8150 (Snapdragon 855).

Note that the article you mention says this about it:

> Signature-free dynamic shared objects are run inside an Unsigned PD, which is the user PD limited in its access to underlying DSP drivers and thread priorities. An Unsigned PD is designed to support only general computing applications.


I spent way too much time trying to make use of it with Halide and was not successful. Are you saying that this is now possible? I am the developer of an app which would greatly benefit from it.


Yes. Note, however, that the Pixel line shipped with Hexagon access restricted for non-platform Android apps. But on other devices, things should just work.


This whole approach makes little sense for a developer (not to mention a user). When a consumer buys a phone at particular price point, they expect it to offer some level of performance. Now if devs can offload to these accelerators on a tiny subset of devices in the market, it will by definition lead to a fragmented user experience (and a ton more dev work). Why bother?

I am becoming convinced that CPU (and maybe GPU) is the only viable accelerator on Android devices. All these fancy accelerators are just for phone makers to do their own thing (mainly camera crap). Might as well make it part of the ISP.

Also, I fear Apple is going to eat Android's lunch at this rate :(


The new Brew MP?


You just gave me PTSD flashbacks. Man I am getting old.


I find this one funny. When I was working at qcom, it was surprising to see that BREW was still not gone from the monorepo in the 2020s (though no longer used by anybody, of course).



Sorry, but that is just a completely optional 3rd-party library.

Zig has been compiling to AVR targets successfully for a very long time.


I believe that should be possible, yes.

Here is an interesting paper about Computrace: https://www.coresecurity.com/sites/default/files/private-fil...


For anyone interested in efivars, I have an old blog post about essentially the same thing but going a bit more in-depth, including how to actually modify the efivar entries: https://ristovski.github.io/posts/inside-insydeh2o/


Thanks! I've updated to point at your post, since it's much more detailed.


An interesting comment made in the Phoronix forums:

> My theory is that fixing the Spectre V2 vulnerability on a hardware level would lead to fundamental architecture changes that AMD is not willing to make, because it may add so much more complexity to the architecture or it may just be too inconvenient. They probably realized that optimizing the code paths that the Linux kernel utilizes on the default mitigations mode is faster, simpler and it may involve less deep changes, while still being secure.

> As far as I know, pretty much every CPU architecture that implements speculative execution is vulnerable to some version of Spectre, so note that this is not a fundamental flaw of AMD64.


I am terrified to think what AMD's predictor structure is if it's easier for them to do _this_ than it is to simply add privilege tags to their predictors. I don't personally buy this explanation anyways; trying to optimize retpolines in hardware would be an absolute pain in the ass and require an insane amount of synchronization with the backend since retpolines always trash the RAS.


I would guess it's probably physics. Specifically, the signal-path routing for the predictor core must be pretty heavily optimized, and is probably where AMD (and Intel) have invested heavily in advanced design software - their secret sauce - to push the chip right to the edge of what semiconductor physics can achieve.

The branch predictor is one of the most highly optimized pieces of the CPU core. Lots of discussion has been had about how the arm architecture's frontend is simpler, so for example Apple's chips have way more execution units. Intel and AMD's latest designs have also expanded the number of execution units, but the frontend instruction decode and dispatch is the "serial" part of the process, reading the incoming instruction stream. And the x86 instruction set is hard to decode, with a lot of variation in the number of bytes per instruction. So for the instruction decoder to even know there's a branch coming up is a "hard problem," and then it predicts which way the branch will go.
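A toy model of why that's serial (made-up one-byte opcodes, nothing like real x86 encoding): with variable-length instructions, the offset of instruction N+1 isn't known until instruction N's length has been decoded, whereas a fixed-length ISA gives every boundary up front.

```python
# Hypothetical opcode -> total instruction length, in bytes
LENGTHS = {0xA0: 1, 0xB0: 2, 0xC0: 4}

def find_boundaries(code: bytes):
    """Serially walk a variable-length byte stream: each step depends
    on having decoded the previous instruction's length first."""
    offsets, pc = [], 0
    while pc < len(code):
        offsets.append(pc)
        pc += LENGTHS[code[pc]]   # can't know the next offset any sooner
    return offsets

def fixed_boundaries(code: bytes, width: int = 4):
    """Fixed-length ISA: every boundary is known immediately, so many
    decoders can start working on the stream in parallel."""
    return list(range(0, len(code), width))
```

For example, `find_boundaries(bytes([0xA0, 0xB0, 0x00, 0xC0, 0x00, 0x00, 0x00]))` returns `[0, 1, 3]`, and each of those offsets only becomes known after the previous instruction is decoded.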


That seems like a disaster brewing. The whole spectre family of vulnerabilities is a side effect of CPUs keeping state around to optimize things and leaking that data between privilege levels.

I mean, in the humorous extreme: imagine if some enterprising group at AMD got together and realized they could "optimize" all that retpoline code by making the RET instruction aware of the branch prediction cache!

"Fundamental architecture changes" are, in fact, what is actually required here.


Occam's Razor would suggest a Linux bug.


Or, to stay in the realm of AMD doing something on purpose here: maybe they can detect that the kernel is running with mitigations and then just let the CPU do all the unsafe speculation, while without the mitigations they disable a lot of the speculative stuff, which is somehow even slower than the former case.


Occam's Razor would suggest that Phoronix' benchmarks are broken.



That's the Spectre V2 from the OP title. What point is this link meant to convey?


Probably that they had a feeling they wanted to refute it somehow but figured an article would be more authoritative, except the one they picked was chosen for its headline rather than its content.


Yes, it certainly seems that way.


I wonder, do those sensors have some built-in battery that lasts a long time, but ultimately the whole sensor needs to be replaced due to the electronics being potted-in?

Or is it so low power that it can use some sort of piezoelectric/MEMS power source that charges it as the wheel is spinning?


They are battery powered. The battery lasts a few years, and conserves charge by only reporting pressure every minute after motion is detected.
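That duty cycling is what makes a multi-year life plausible. A back-of-the-envelope sketch with made-up but plausible numbers (every constant here is an assumption for illustration, not a datasheet value):

```python
CELL_MAH = 220.0      # assumed coin-cell capacity, mAh
SLEEP_UA = 10.0       # assumed average sleep current, microamps
TX_MA = 8.0           # assumed current during an RF report, mA
TX_SECONDS = 0.010    # assumed duration of one report
TX_PER_DAY = 60       # one report/minute for ~1 hour of driving per day
HOURS_PER_DAY = 24.0

def battery_life_years():
    """Estimate battery life from always-on sleep current plus the
    tiny extra charge spent on duty-cycled pressure reports."""
    sleep_mah_per_day = (SLEEP_UA / 1000.0) * HOURS_PER_DAY
    tx_mah_per_day = TX_MA * (TX_SECONDS / 3600.0) * TX_PER_DAY
    days = CELL_MAH / (sleep_mah_per_day + tx_mah_per_day)
    return days / 365.0
```

With these numbers the cell lasts roughly 2.5 years, and almost all of the budget goes to sleep current rather than transmissions, which is why aggressive duty cycling (only reporting while in motion) is the lever that stretches it to "a few years."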


Yes, they have a battery that lasts a few years. And typically, the battery alone cannot be changed, the whole unit must be replaced.

In my area, all franchised tire shops will refuse to install new tires on your car without also installing brand-new TPMS sensors, regardless of the age of the existing sensor. "Sorry, it's corporate policy."


When I bring my bike to the tire shop for new tires, the guy always cuts the valve stem off and puts a new one in. Might just be a safety/liability thing, since they're also made of rubber, which degrades.


Tire Review says [0]

> the sensors are usually powered by 3-volt lithium ion batteries, but some use 1.25-volt nickel metal hydride batteries. There are developments underway that promise battery-less sensors in the future, having the potential to dramatically change TPMS markets.

Also, YouTube has a number of videos on how to change out the batteries.

[0] https://www.tirereview.com/changing-tpms-sensor-batteries/


The battery should last about 10 years. Ideally, the tire needs replacing before the battery runs out anyway.


Yep, it's a small battery and low-power device.

