> There is zero need for hardware assistance for decoding JPEG or JPEG XL, so that difference is a non-difference.
This depends on the system requirements, doesn't it? Suppose you're compositing a low-safety-impact video stream with (well, under) safety-impacting information in an avionics application, and you're currently using a direct GMSL link. There's an obvious opportunity to cost-down and weight-down the system by shifting to a lightly compressed stream over an existing shared Ethernet bus, and MJPEG is a reasonable option for this application (as is H.264, and other options -- trade study needed). When considering replacing JPEG with JPEG XL in this implementation, what's your plan for providing partitioning between the "extremely high quality" but QM software implementation and the DAL components? Are you going to dedicate a core to avoid timing interference? At that point you're spending more silicon than a dedicated JPEG decoder would take. You likely already have an FPGA in the system for doing the compositing itself, but what's the area trade-off between an existing "extremely high quality" JPEG XL hardware decoder and the JPEG one that you've been using for decades?
I don't doubt that in a world where everything is an iPhone (with a token nod to Android), "someone already wrote the code once and it's good enough" is sufficient. But there's a huge field of software engineering where complexity and quality drive decision making, and JPEG XL really is much more complex than JPEG Classic Flavor.