Many machine-learning algorithms seem to work very well but we don't know why. If you look at a neural net trained for voice recognition, it's often very hard to understand why it makes the choices it makes. Why should we care? Here are a few of several reasons.
* Trust: How do we know that the neural net is acting correctly? Beyond checking input/output pairs, we can't do much other analysis. Different applications require different levels of trust. It's okay if Netflix makes a bad movie recommendation, but less so if a self-driving car recommends a wrong turn.
* Fairness: Examples abound of algorithms trained on data that learn the intended and unintended biases in that data (see O'Neil [30]). If you don't understand the program, how do you figure out the biases?
* Security: If you use machine learning to monitor security systems, you won't know what exploits still exist, especially if your adversary is being adaptive. If you can understand the code, you could spot and fix security leaks. Of course, if adversaries have the code, they might find exploits.
* Cause and effect: Right now, you can, at best, check that a machine-learning algorithm's output correlates with the kind of outcomes you desire. Understanding the code might help us understand the causality in the data, leading to better science and medicine.
The P vs NP question turns 50. The situation as it is developing today may be an optimum: we can solve many of the toughest NP-complete problems in practice and yet cryptography remains unscathed.
As a Czech, I think it would be a fun sci-fi novel: some guy in a basement proving P=NP, secretly cashing out a lot of Bitcoin... and when it becomes public news, rejecting the Millennium Prize just to mess with them (to "establish a tradition").
The complexity class BQP (bounded-error quantum polynomial time) is not widely believed to contain NP-complete problems. Instead, if it is indeed different from BPP, it probably contains problems intermediate between P and NP-complete, which we know to exist (so long as P != NP).
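The existence of such intermediate problems, assuming P differs from NP, is Ladner's theorem; a minimal statement of it, in notation not used elsewhere in this piece, is:

```latex
% Ladner's theorem (1975): if P and NP differ, then NP contains
% problems that are neither in P nor NP-complete.
\[
  \mathsf{P} \neq \mathsf{NP}
  \;\Longrightarrow\;
  \exists\, L \in \mathsf{NP} \setminus \mathsf{P}
  \ \text{such that}\ L\ \text{is not NP-complete.}
\]
```

No natural problem is known to be NP-intermediate; candidates often mentioned include factoring and graph isomorphism, and factoring is exactly the kind of problem BQP is known to handle.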