
> A fairly easy way to introduce rotation invariance in DCNNS is to perform random rotations on the inputs during training. Likewise for scale invariance.

It is a bit silly to call these invariances: different filter/kernel combinations will be activated when a rotated or scaled input is encountered, so the individual filters themselves are not rotation or scale invariant. The network as a whole can only deal with the rotations and scales it encountered during training, while having to learn 'redundant' features to a certain extent.

It will get the job done for many tasks, but it's a brute-force sort of approach that complicates the learning process (i.e. covering more scales and rotations requires more filters, hence a more complex network that is harder to train).
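To make the augmentation approach concrete, here is a minimal NumPy-only sketch of random-rotation augmentation applied to a training batch. The `rotate_nn` and `augment_batch` helpers are illustrative names of my own, not from any framework; a real pipeline would typically use a library transform (e.g. torchvision's `RandomRotation`) instead of hand-rolling the resampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def rotate_nn(img, angle_deg):
    """Rotate a 2-D image about its centre, nearest-neighbour sampling,
    zero-filling pixels that fall outside the source image."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    theta = np.deg2rad(angle_deg)
    ys, xs = np.indices((h, w))
    # Inverse mapping: for each output pixel, locate its source pixel.
    src_y = cy + (ys - cy) * np.cos(theta) - (xs - cx) * np.sin(theta)
    src_x = cx + (ys - cy) * np.sin(theta) + (xs - cx) * np.cos(theta)
    sy = np.rint(src_y).astype(int)
    sx = np.rint(src_x).astype(int)
    valid = (sy >= 0) & (sy < h) & (sx >= 0) & (sx < w)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out

def augment_batch(batch, max_angle=30.0):
    """Apply an independent random rotation to each image in the batch,
    as done per epoch during training-time augmentation."""
    angles = rng.uniform(-max_angle, max_angle, size=len(batch))
    return np.stack([rotate_nn(img, a) for img, a in zip(batch, angles)])
```

Note that the augmented images are genuinely different inputs: a rotated copy activates a different set of filters, which is exactly why this yields robustness to the sampled rotations rather than true invariance.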

I think there's definitely a lot that can be learnt from (classical) signal processing in order to come up with a much more elegant and efficient solution.


