Namely: Hyperparameter Search, Convolution Variants, Network-in-Network, Weight Sharing, Pruning, and Quantization.

The following is a very brief, non-technical summary of a chapter in the author’s dissertation.

Motives

A very deep neural network designed to solve hard problems might take a few seconds to run even on modern computers equipped with hardware accelerators for linear algebra operations. Similarly, smaller neural networks…