Neural networks and deep learning systems are typically black boxes: their internal decision-making is opaque even to their creators. This opacity leads to problems that tend to be underemphasized in the rush to promote these systems.
I find this article absurd. If I were to create a neural network, the very second thing I would program into it would be the capability to log WHY it did the thing I programmed it to do. Are you really telling me that the tools available to us right now are incapable of this?
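The trouble is that a trained network has no "reason" sitting anywhere to be logged: its decision is just arithmetic over learned weights. The closest thing to an answer is a post-hoc attribution method. As a rough illustration (a made-up one-layer network with arbitrary weights, using the common gradient-times-input saliency heuristic, not anyone's production tooling), here is what "logging why" actually amounts to:

```python
import numpy as np

# Tiny one-layer "network": y = sigmoid(w . x + b).
# Weights and input are invented purely for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = np.array([1.0, -2.0, 0.5, 3.0])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = w @ x + b
y = sigmoid(z)

# There is no stored reason to log -- only this arithmetic.
# A post-hoc "explanation" is an attribution, e.g. the
# gradient-times-input saliency heuristic:
grad = sigmoid(z) * (1.0 - sigmoid(z)) * w   # dy/dx for this model
saliency = grad * x                          # per-input contribution

print("output:", y)
print("per-input attribution:", saliency)
```

Even here the "explanation" is a vector of numbers, not a human-readable reason, and for a real multi-layer model the attribution is an approximation, not a log of the decision process.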