
What's wrong with JPEG?


Nothing in particular.

Though there are other lossy encoding schemes developed since that either produce better results at the same data size or comparable results with better compression (or both). But the improvements have not been large enough to make the industry as a whole consider it worth the effort to support these other formats (and in some cases there are licensing/patent issues that would make support legally/financially onerous on top of the coding/testing required).


I would add that the "reference implementation" of jpeg encoding and decoding is very easy to compile and to use, reasonably efficient, and completely free software. There has never been a corresponding jpeg2000 implementation, for example, leading to the fast demise of the arguably superior format.
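
To illustrate how little ceremony the reference implementation (libjpeg) demands, here is a minimal decode sketch modelled on the library's own example.c; the file name and the lack of real error handling are just for illustration:

    /* build: cc decode.c -ljpeg  (assumes libjpeg or libjpeg-turbo is installed) */
    #include <stdio.h>
    #include <jpeglib.h>

    int main(int argc, char **argv)
    {
        struct jpeg_decompress_struct cinfo;
        struct jpeg_error_mgr jerr;
        FILE *f;

        if (argc < 2 || !(f = fopen(argv[1], "rb")))
            return 1;

        cinfo.err = jpeg_std_error(&jerr);   /* default error handler: print and exit */
        jpeg_create_decompress(&cinfo);
        jpeg_stdio_src(&cinfo, f);
        jpeg_read_header(&cinfo, TRUE);
        jpeg_start_decompress(&cinfo);

        /* one scanline of output_width * output_components samples */
        JSAMPARRAY row = (*cinfo.mem->alloc_sarray)
            ((j_common_ptr)&cinfo, JPOOL_IMAGE,
             cinfo.output_width * cinfo.output_components, 1);

        while (cinfo.output_scanline < cinfo.output_height)
            jpeg_read_scanlines(&cinfo, row, 1);  /* pixels are now in row[0] */

        jpeg_finish_decompress(&cinfo);
        jpeg_destroy_decompress(&cinfo);
        fclose(f);
        return 0;
    }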


JPEG2000 is not actually much better and I always found the artifacts to be kind of unpleasant. Wavelets are not very good psychovisually because they make the image blurry; they're also more complicated to decode and cost more memory.

It's not a good idea to enable too many file formats in a browser, because every new decoder brings new security issues, so a new format really needs to be a huge improvement. I also think WebP was a mistake for this reason; it's not good enough.


Actually, JPEG 2000 seems pretty badass. I believe Apple is the only major vendor that supports JPEG 2000 out of the box.

From "JPEG 2000: The Better Alternative to JPEG That Never Made it Big": https://petapixel.com/2015/09/12/jpeg-2000-the-better-altern...

JPEG 2000 is a much better image solution than the original JPEG file format. Using a sophisticated encoding method, JPEG 2000 files can compress files with less loss of, what we might consider, visual performance. In addition, the file format is less likely to be affected by ‘bit errors’ and other file system errors due to its more efficient coding structure.

Those who choose to save their files in the JPEG 2000 standard can also choose between utilizing compression or saving the file as lossless to retain original detail. A higher dynamic range is also supported by the format with no limit of an image’s bit depth. Together, these abilities created a much better alternative than the original JPEG solution.


> In addition, the file format is less likely to be affected by ‘bit errors’ and other file system errors due to its more efficient coding structure.

This is confusing. I think they mean that a file is less likely to be corrupt if it's smaller, which is debatable. But I wouldn't use a newer codec just to make smaller files; I'd make them higher quality at the same size. In that case you need redundancy, which is the opposite of compression efficiency.

> A higher dynamic range is also supported by the format with no limit of an image’s bit depth.

JPEG supports this (the standard allows 12-bit sample precision), but most decoders don't, because pixel depth is not something you can just abstract away. Do JPEG2K decoders actually support 10/12-bit? HEIF does.
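
You can see the 8-bit assumption directly in libjpeg: the header reports the file's precision, but a stock 8-bit build won't decode anything else. A rough sketch, assuming a decode context cinfo that has just run jpeg_read_header (as in the earlier example):

    /* after jpeg_read_header(&cinfo, TRUE) */
    if (cinfo.data_precision != 8) {
        /* the JPEG standard also allows 12-bit precision, but a libjpeg
           built with BITS_IN_JSAMPLE == 8 will refuse such files anyway */
        fprintf(stderr, "unsupported precision: %d bits\n", cinfo.data_precision);
        return 1;
    }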


A file format can be notably more or less resilient to bit errors. It can be the difference between getting slightly different output, a garbled image, or an outright failure to decode anything.

BTW, compression efficiency is orthogonal to, not the opposite of, the kind of structured redundancy you would want. As a thought experiment, imagine that the last step of coding is encrypting the data with a publicly known key. The theoretical redundancy remains the same, but good luck¹ getting your data back if you get a bit error.

¹ Imagine a variable-length single-block cipher was used, multi-round CBC or something, so that a single flipped bit diffuses through the entire file.
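
The effect is easy to see with any dense coder. A toy sketch, using zlib's DEFLATE as a stand-in for an entropy-coded image stream (not JPEG itself, purely an illustration): flip one bit in the middle and the decode typically fails outright instead of producing one slightly wrong byte.

    /* build: cc bitflip.c -lz */
    #include <stdio.h>
    #include <zlib.h>

    int main(void)
    {
        unsigned char src[4096], comp[8192], out[8192];
        uLongf clen = sizeof comp, olen = sizeof out;

        /* repetitive but non-trivial input, standing in for image data */
        for (int i = 0; i < (int)sizeof src; i++)
            src[i] = "The quick brown fox "[i % 20];

        compress(comp, &clen, src, sizeof src);

        comp[clen / 2] ^= 0x01;   /* a single bit error mid-stream */

        /* typically returns Z_DATA_ERROR: one flipped bit desyncs the
           Huffman stream and/or breaks the checksum, so you get nothing
           usable back rather than one wrong pixel */
        int rc = uncompress(out, &olen, comp, clen);
        printf("uncompress returned %d (Z_OK is %d)\n", rc, Z_OK);
        return 0;
    }

A format that inserts resync points into the stream (JPEG's optional restart markers, for instance) can instead contain the damage to one strip of the image.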



