In everyday life we perceive sound in three dimensions. That is, without looking at a sound source, we can tell, with reasonable precision, its location in space relative to us. We can do this because our brains process the sound signals that reach our eardrums in a manner that is unique to each of us. This should not be surprising, as everyone's morphology is different (especially that of the outer ear), and this affects the sound reaching our eardrums in a highly idiosyncratic way. The processing our brains perform is tuned to our unique morphologies, so swapping ears with someone else, for instance, would lead to a disorienting listening experience. To enable certain types of 3D sound reproduction, one key task is to devise mathematical models that describe the effects our individual morphologies have on the sound we hear. The current project focuses on this task.
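To make the modeling task concrete: the effect of a listener's morphology on sound arriving from a given direction is commonly captured as a pair of linear filters, one per ear (head-related impulse responses, HRIRs), and spatial rendering then amounts to filtering a mono signal with each of them. The sketch below is only illustrative and is not the model developed in this project; the placeholder HRIRs (`hrir_left`, `hrir_right`) are made-up filters that merely mimic an interaural time and level difference for a source off to the listener's left.

```python
import numpy as np

# Placeholder HRIRs for a single direction (hypothetical values).
# Real HRIRs would be measured or modeled per listener; here the right
# ear simply receives a delayed, attenuated copy of the sound, roughly
# mimicking a source located to the listener's left.
hrir_left = np.zeros(64)
hrir_left[0] = 1.0          # direct, full-level arrival at the left ear
hrir_right = np.zeros(64)
hrir_right[8] = 0.6         # later and quieter at the right ear

def binauralize(mono, hrir_l, hrir_r):
    """Render a mono signal to two ear signals by filtering with each HRIR."""
    return np.convolve(mono, hrir_l), np.convolve(mono, hrir_r)

# Example: render a short click binaurally.
click = np.zeros(32)
click[0] = 1.0
left, right = binauralize(click, hrir_left, hrir_right)
```

Fed to headphones, the delay and level difference between `left` and `right` are among the cues the brain uses to localize the source; a personalized model would replace the placeholder filters with ones matched to the individual listener.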