
1. Gaussian blur is just a spatial convolution (recall from signal processing). If a network is susceptible to adversarial examples, it will still be susceptible after a Gaussian blur, assuming the adversary knows you're applying one; if they don't know, that's just security by obscurity, and they'll find out eventually. (There's a short blur-as-convolution sketch after point 3.)

2. A sequence of frames does not solve the issue, because you can have a sequence of adversarial examples. It would certainly make the physical process of projecting onto the camera harder, but not much harder than the original problem of projecting a single image onto the camera.

3. Using something conventional like LIDAR as a backup is the right approach IMO, and I totally agree with you there. But Tesla and lots of other companies aren't doing that because it's too expensive.
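
To make the blur-as-convolution point concrete, here's a minimal sketch (numpy/scipy only; the kernel size and sigma are arbitrary illustration values, not anything a real system uses):

    # Minimal sketch: a Gaussian blur is literally convolution with a Gaussian kernel.
    import numpy as np
    from scipy.signal import convolve2d

    def gaussian_kernel(size=5, sigma=1.0):
        ax = np.arange(size) - (size - 1) / 2.0
        xx, yy = np.meshgrid(ax, ax)
        kernel = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return kernel / kernel.sum()

    def gaussian_blur(image, size=5, sigma=1.0):
        # 'same' keeps the blurred image the same shape as the input
        return convolve2d(image, gaussian_kernel(size, sigma),
                          mode="same", boundary="symm")

    # The blur is a fixed linear map, so an adversary who knows it can treat
    # blur + network as one composite model and optimise a perturbation against that.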



1. If that's the case, perhaps another kind of blurring? "Intriguing properties of neural networks" (https://arxiv.org/pdf/1312.6199.pdf, page 6) has examples of radically different classifications that I don't think would occur naturally or survive a blur with some random element, let alone two moving cameras and a sequence of images. As the title says, it's an intriguing property, not necessarily a huge problem. (A sketch of what such a randomised blur might look like follows after point 3.)

2. I honestly can't think of a situation where this could occur. It's the equivalent of kids shining lasers into the eyes of airline pilots, except the kids would need a PhD in deep learning and specialised equipment to pull it off. A hacker pushing a malicious software update over the network sounds much more plausible than attacking the system through its vision while it's travelling.

3. This is the real point in the end, I guess: this Google presentation (https://www.youtube.com/watch?v=tiwVMrTLUWg) shows that the first autonomous cars to be sold will be very sophisticated, with multiple systems and a lot of traditional software engineering. Hopefully LIDAR costs will come down.
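
For what I mean by a blur with a random element, here's a rough sketch (the `model` callable, the sigma range, and the frame averaging are all hypothetical placeholders, not a claim about how any real system works; grayscale frames assumed for simplicity):

    # Hypothetical randomised preprocessing: draw a fresh blur sigma for every frame,
    # so a single fixed perturbation has to survive many different blurs at once.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def random_blur(image, rng, sigma_range=(0.5, 2.0)):
        sigma = rng.uniform(*sigma_range)
        return gaussian_filter(image, sigma=sigma)

    def classify_sequence(frames, model, rng=None):
        rng = rng or np.random.default_rng()
        # Average predictions over independently blurred frames; an attacker
        # now has to fool most random draws, not just one fixed pipeline.
        preds = [model(random_blur(frame, rng)) for frame in frames]
        return np.mean(preds, axis=0)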


1. Those are examples for a network that does not use blurring. You have to be careful because, remember, the adversary can tailor their examples to whatever preprocessing you use. So the adversarial examples for a network with blurring would look completely different, but they would still exist. Randomness could just force the adversary to optimise over a distribution of examples, which might mean they fool you half the time instead of all the time. I wouldn't trust my intuition here, though: whether there is some random scheme that is provably resilient, whether they're all provably vulnerable, or what error bounds on resilience can be proven, is really a question for the machine learning theory researchers. (A sketch of attacking through a known blur follows after point 2.)

2. The problem of projecting an image onto a car's camera already implies you'd be able to do it for a few seconds.
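
To illustrate why a known, fixed blur doesn't help, here's a rough PyTorch sketch of crafting an adversarial example with the blur inside the forward pass (the `model`, `eps`, and kernel parameters are placeholders for illustration, not anyone's actual system):

    # If the defender's blur is known and fixed, put it inside the forward pass
    # and take the gradient through it.
    import torch
    import torch.nn.functional as F

    def gaussian_kernel(size=5, sigma=1.0):
        ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2.0
        xx, yy = torch.meshgrid(ax, ax, indexing="ij")
        k = torch.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        k = k / k.sum()
        return k.view(1, 1, size, size).repeat(3, 1, 1, 1)  # one kernel per RGB channel

    def blur(x, kernel):
        # depthwise conv == the same spatial Gaussian blur the defender applies
        return F.conv2d(x, kernel, padding=kernel.shape[-1] // 2, groups=3)

    def fgsm_through_blur(model, x, label, eps=8.0 / 255):
        kernel = gaussian_kernel()
        x = x.detach().clone().requires_grad_(True)
        loss = F.cross_entropy(model(blur(x, kernel)), label)  # gradient flows through the blur
        loss.backward()
        # One FGSM step; a real attack would iterate, but the point stands:
        # blur + network is still differentiable, so it still has adversarial examples.
        return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()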



