TY - JOUR
T1 - Rendering localized spatial audio in a virtual auditory space
JF - IEEE Transactions on Multimedia
Y1 - 2004
A1 - Zotkin, Dmitry N.
A1 - Duraiswami, Ramani
A1 - Davis, Larry S.
KW - 3-D audio processing
KW - Audio databases
KW - audio signal processing
KW - audio user interfaces
KW - augmented reality
KW - data sonification
KW - Digital signal processing
KW - head related transfer functions
KW - head-related transfer function
KW - Interpolation
KW - Layout
KW - perceptual user interfaces
KW - Real time systems
KW - Rendering (computer graphics)
KW - Scattering
KW - spatial audio
KW - Transfer functions
KW - User interfaces
KW - virtual audio scene rendering
KW - virtual auditory spaces
KW - virtual environments
KW - Virtual reality
KW - virtual reality environments
AB - High-quality virtual audio scene rendering is required for emerging virtual and augmented reality applications, perceptual user interfaces, and sonification of data. We describe algorithms for creation of virtual auditory spaces by rendering cues that arise from anatomical scattering, environmental scattering, and dynamical effects. We use a novel way of personalizing the head-related transfer functions (HRTFs) from a database, based on anatomical measurements. Details of algorithms for HRTF interpolation, room impulse response creation, HRTF selection from a database, and audio scene presentation are presented. Our system runs in real time on an office PC without specialized DSP hardware.
VL - 6
SN - 1520-9210
CP - 4
M3 - 10.1109/TMM.2004.827516
ER -