Given a problem, most approaches to machine learning experiments currently involve a lot of guesswork. This talk presents a line of research that connects machine learning to the experimental sciences, making experimental design measurable and predictable. I summarize several articles that are currently under submission to various conferences. First, I introduce an information-theoretic model of neural networks and show how to use it to analytically determine the capacity of different neural network architectures. This allows the efficiency of different architectures to be compared independently of any task. Second, I introduce a heuristic to estimate the neural network capacity required for a given dataset and labeling. This allows a better estimate of the required size of a neural network for a given problem. I then abstract from neural networks to machine learning in general and explain adversarial examples as the result of input redundancies. That is, properly sizing machine learning experiments not only dramatically speeds up the learning process but also helps prevent adversarial examples. Last but not least, I show how to further reduce the number of machine learning parameters for multimedia data by using front-end perceptual compression, on both audio (Fraunhofer IMT) and visual (ImageNet, CIFAR, MNIST) data. The presentation concludes with a hands-on demo.
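To give a taste of the first result, here is a minimal sketch of a task-independent capacity estimate for a dense feed-forward network. It assumes the first-order rule that a single threshold neuron with n inputs can memorize roughly n + 1 bits of labeling (its VC dimension) and that summing over all neurons yields a rough upper bound for the whole network; the function name and layer sizes are illustrative, not taken from the submitted articles.

```python
def mlp_capacity_bits(layer_sizes):
    """Rough upper bound, in bits, on the memorization capacity of a
    fully connected network.

    layer_sizes -- e.g. [784, 64, 10]: input width followed by the
    width of each dense layer.
    """
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        # Each of the n_out neurons sees n_in inputs plus a bias,
        # contributing about n_in + 1 bits under the assumption above.
        total += n_out * (n_in + 1)
    return total


if __name__ == "__main__":
    # Compare two architectures with similar parameter budgets,
    # independent of any particular task.
    print(mlp_capacity_bits([784, 64, 10]))      # one wide hidden layer
    print(mlp_capacity_bits([784, 32, 32, 10]))  # two narrower layers
```

Because the bound depends only on the architecture, two candidate networks can be compared before any data is seen.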
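For the second result, the following is an illustrative stand-in for a capacity-requirement heuristic, not necessarily the one from the submitted article: it assumes that the complexity of a labeling can be gauged by counting how often the class label changes when samples are sorted along a simple projection (here, the sum of the input features), with each change costing roughly one threshold decision.

```python
import numpy as np


def capacity_requirement_bits(X, y):
    """Heuristic estimate of the bits a network needs to memorize labeling y.

    X -- array of shape (n_samples, n_features)
    y -- array of integer class labels, shape (n_samples,)
    """
    order = np.argsort(X.sum(axis=1))        # project each sample to a scalar
    sorted_labels = y[order]
    transitions = np.count_nonzero(np.diff(sorted_labels))  # label changes
    n_classes = len(np.unique(y))
    # Each transition needs roughly one threshold, and each threshold
    # decision carries about log2(n_classes) bits.
    return transitions * max(1.0, np.log2(n_classes))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    y_easy = (X.sum(axis=1) > 0).astype(int)        # linearly separable labels
    y_rand = rng.integers(0, 2, size=1000)          # random labels
    print(capacity_requirement_bits(X, y_easy))     # small requirement
    print(capacity_requirement_bits(X, y_rand))     # requirement near n_samples
```

A separable labeling yields a small estimate, while a random labeling approaches one bit per sample, which is what sizing a network to the problem is meant to exploit.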
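Finally, a minimal sketch of a perceptual-compression front end for image data: each image is round-tripped through a low-quality JPEG encoder before training, so the model only sees perceptually relevant content. Pillow's JPEG quality parameter stands in here for the perceptual codecs discussed in the talk; the function and settings are assumptions for illustration.

```python
import io

import numpy as np
from PIL import Image


def jpeg_front_end(image, quality=25):
    """Round-trip a grayscale uint8 image array through JPEG compression."""
    buf = io.BytesIO()
    Image.fromarray(image, mode="L").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))


if __name__ == "__main__":
    # A random 28x28 image as a placeholder for an MNIST digit.
    img = (np.random.default_rng(0).random((28, 28)) * 255).astype(np.uint8)
    compressed = jpeg_front_end(img)
    print(img.shape, compressed.shape)  # same shape, less perceptual detail
```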