Legally, not much. Machine learning image enhancement is essentially probabilistic guessing of the in-between pixels of an image. If you use it for actual work rather than art, you're asking to have your evidence thrown out of court, and you run into the same misrecognition problems humans face.
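To make the "guessing in-between pixels" point concrete, here's a toy sketch in plain Python (no real enhancement model; the function name and values are my own illustration). Even the simplest upscaler fabricates pixel values that the sensor never recorded; an ML model just makes fancier, learned guesses:

```python
def bilinear_upscale(img, new_w, new_h):
    """Upscale a 2D list of grayscale values via bilinear interpolation."""
    h, w = len(img), len(img[0])
    out = []
    for j in range(new_h):
        # Map each output coordinate back into source coordinates
        y = j * (h - 1) / (new_h - 1)
        y0 = int(y)
        y1 = min(y0 + 1, h - 1)
        fy = y - y0
        row = []
        for i in range(new_w):
            x = i * (w - 1) / (new_w - 1)
            x0 = int(x)
            x1 = min(x0 + 1, w - 1)
            fx = x - x0
            # Blend the four nearest captured pixels
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

# A 2x2 captured patch upscaled to 3x3: the center pixel is pure inference
patch = [[0, 100],
         [100, 200]]
up = bilinear_upscale(patch, 3, 3)
print(up[1][1])  # 100.0 -- a pixel value the sensor never recorded
```

The interpolated center value is a plausible guess, not a measurement; a learned model replaces the fixed blending weights with weights trained on other images, which is exactly why its output can't carry evidentiary weight on its own.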
I agree it seems absurd on the face of it (forgive me), but with enhancement techniques going mainstream elsewhere, I could see it happening with surveillance applications. Sure, the false positive rate will go up, but it might generate legitimate leads that crack cases that would otherwise go unsolved, even if the enhanced footage itself can't be relied upon in an evidentiary capacity.
Speaking of which, the Pixel 3's new enhancement feature[0] is now mainstream. If enhanced images end up mistaken for originals, it opens an evidentiary Pandora's box. Then again, much poorer-quality image enhancement has been around forever (e.g. "digital zoom").