
I don't find it so surprising that, out of the vast number of possible small perturbations, there are a few that cause the image to be misclassified. I suppose it is interesting that you can systematically find such perturbations. But is there anything here which suggests that a neural network which does well on a test set won't continue to do well so long as the images given to it are truly "natural"?


The interesting thing is that the same perturbed image gets misclassified across networks.

According to the blog post, I can build two NNs with different architectures and train each on its own random subset of a collection of dog and cat pictures. If I distort a random picture until network A misclassifies it, then network B will also misclassify it, despite having a different structure and a different training set (a sketch of this experiment is below).

I don't think it's obvious that network B will fail as well.
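For concreteness, here's a rough sketch of that experiment, assuming PyTorch. The one-step fast-gradient-sign perturbation is just a stand-in for whatever optimizer the paper actually used, and the architectures, data, and epsilon are made-up placeholders:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, eps):
        # Nudge x in the direction that most increases the loss.
        x = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x), y).backward()
        return (x + eps * x.grad.sign()).detach()

    # Two architecturally different classifiers (hypothetical shapes).
    net_a = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(),
                          nn.Linear(256, 10))
    net_b = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 10))

    # ... train net_a and net_b on disjoint random subsets of the data ...

    x = torch.rand(16, 1, 28, 28)          # stand-in image batch
    y = torch.randint(0, 10, (16,))        # stand-in labels
    x_adv = fgsm_perturb(net_a, x, y, eps=0.1)

    fooled_a = (net_a(x_adv).argmax(1) != y).float().mean().item()
    fooled_b = (net_b(x_adv).argmax(1) != y).float().mean().item()
    print(f"fooled A: {fooled_a:.2f}, transferred to B: {fooled_b:.2f}")

The surprising claim is that fooled_b stays high even though the perturbation was computed only from net_a's gradients.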


You're right. I guess I just don't like that it's titled "The Flaw Lurking In Every Deep Neural Net", when in fact neural nets will continue to classify new natural data as well as ever.

I agree that what you point out is very interesting.



