Neural Information Dynamics

Information-theoretic quantities separate and measure key elements of distributed computation in neural systems, such as the storage, transfer, and modification of information. In this way, they help us better understand the computational algorithm implemented by a neural system under investigation. Such understanding cannot be reached by detailed biophysical modeling alone, as David Marr already pointed out in his classic tri-level hypothesis. In other words, we may understand well, at the biophysical level, why brain signals look the way they do, yet not what these signals contribute to information processing proper; we also demonstrate this explicitly in a simple toy model. Indeed, information-theoretic methods can provide the missing link between neural dynamics, which can be modeled at the biophysical level, and the computational algorithms implemented by these dynamics. We demonstrate the use of two such information-theoretic techniques, transfer entropy and local active information storage, on MEG source data, local field potentials, and voltage-sensitive dye imaging data, by way of three examples. In the first example, a time-resolved analysis of information transfer between MEG sources in various perceptual closure tasks is used to test psychological theories about the algorithm our brains use to perceive objects when sensory information is incomplete or ambiguous. In the second example, we show that information storage is reduced in patients suffering from autism spectrum disorder. Last, we demonstrate how to analyze information processing in primary visual cortex using local active information storage, a recently introduced variant of the information storage measure that can be applied locally in time and space.
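To make the two measures concrete, the following is a minimal sketch of naive plug-in (frequency-count) estimators for transfer entropy and local active information storage on discretized time series. The history lengths, variable names, and toy data here are illustrative assumptions, not the estimators used in the analyses above, which on continuous MEG, LFP, or imaging data would require embedding optimization, bias correction, and significance testing.

```python
# A minimal sketch, assuming discretized (e.g. binned or binarized) time
# series and short, fixed history lengths; names and parameters here are
# illustrative, and the naive plug-in estimation is for exposition only.
import numpy as np
from collections import Counter

def _probs(*seqs):
    """Empirical joint probabilities over aligned symbol tuples."""
    n = len(seqs[0])
    return {key: cnt / n for key, cnt in Counter(zip(*seqs)).items()}

def local_active_information_storage(x, k=2):
    """Local AIS a(t) = log2[ p(x_t | x_{t-k..t-1}) / p(x_t) ], one value
    per time step, so storage can be inspected locally in time."""
    past = [tuple(x[t - k:t]) for t in range(k, len(x))]
    nxt = [x[t] for t in range(k, len(x))]
    p_joint, p_past, p_next = _probs(past, nxt), _probs(past), _probs(nxt)
    return np.array([
        np.log2(p_joint[(p, s)] / (p_past[(p,)] * p_next[(s,)]))
        for p, s in zip(past, nxt)
    ])

def transfer_entropy(x, y):
    """TE_{X->Y} with history length 1:
    sum over states of p(y_t, y_{t-1}, x_{t-1})
      * log2[ p(y_t | y_{t-1}, x_{t-1}) / p(y_t | y_{t-1}) ]."""
    y_t, y_p, x_p = y[1:], y[:-1], x[:-1]
    p3 = _probs(y_t, y_p, x_p)   # p(y_t, y_{t-1}, x_{t-1})
    p2s = _probs(y_p, x_p)       # p(y_{t-1}, x_{t-1})
    p2t = _probs(y_t, y_p)       # p(y_t, y_{t-1})
    p1 = _probs(y_p)             # p(y_{t-1})
    return sum(
        p * np.log2(p * p1[(yp,)] / (p2s[(yp, xp)] * p2t[(yt, yp)]))
        for (yt, yp, xp), p in p3.items()
    )

# Toy check: y is a one-step delayed copy of a random binary source x,
# so TE_{X->Y} should approach 1 bit while y itself stores ~0 bits.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 100_000)
y = np.roll(x, 1)
print(f"TE(x -> y)  = {transfer_entropy(x, y):.3f} bits")               # ~1.0
print(f"mean AIS(y) = {local_active_information_storage(y).mean():.3f}")  # ~0.0
```

The toy check mirrors the point of the abstract: the delayed copy stores nothing by itself (near-zero active information storage), yet a full bit per sample is transferred into it, a distinction that biophysical signal features alone would not reveal.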