We have profitably used many of the known properties of the inner ear
in our spectral models. For example, the peak-dominance of audio
perception matches well with the ``unreasonably effective'' sinusoidal
model. Similarly, as MPEG audio coding and S+N+T models show, most of
the information in a typical sound can, on average, be eliminated
inaudibly.
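As a rough illustration of the peak-dominance idea, the following
sketch picks the dominant spectral peaks of a single analysis frame,
which is the handful of numbers a sinusoidal model would retain for
that frame. The test tone, Blackman window, and -40 dB threshold are
arbitrary choices for this example, not anything specified in the text.

import numpy as np
from scipy.signal import get_window, find_peaks

# Sketch: sinusoidal modeling of one frame by spectral peak picking.
# The test tone, window, and threshold are illustrative choices only.
fs = 8000                                   # assumed sample rate (Hz)
n = np.arange(1024)
x = 0.8*np.sin(2*np.pi*440*n/fs) + 0.3*np.sin(2*np.pi*880*n/fs)

w = get_window('blackman', len(x))          # low sidelobes for clean peaks
mag_db = 20*np.log10(np.abs(np.fft.rfft(x*w)) + 1e-12)

# Keep only the dominant peaks; a sinusoidal model represents the
# whole frame by this short list of (frequency, amplitude) pairs.
peaks, _ = find_peaks(mag_db, height=mag_db.max() - 40)
for k in peaks:
    print(f"peak near {k*fs/len(x):7.1f} Hz, {mag_db[k]:6.1f} dB")

Running this prints just two peaks, near 440 and 880 Hz: the entire
1024-sample frame is summarized by two sinusoidal parameters each.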
An interesting observation from the field of neuroscience is the
following
[82]:
``... most neurons in the primary auditory cortex A1 are silent
most of the time ...''
This experimental fact indicates the existence of a much sparser
high-level model for sound in the brain. We know that the cochlea of
the inner ear is a kind of real-time spectrum analyzer. The question
then becomes: how is the ``ear's spectrogram'' processed and
represented at higher levels of audition, and how can we devise
efficient algorithms that achieve comparable results?
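To make the sparsity analogy concrete, here is a minimal sketch that
computes an ordinary power spectrogram, a crude stand-in for the
``ear's spectrogram'', and counts how few time-frequency bins carry
essentially all of the signal energy. The chirp test signal and the
99%-of-energy criterion are assumptions of this example, not anything
from the text.

import numpy as np
from scipy.signal import spectrogram, chirp

# Sketch: how sparse is a spectrogram?
fs = 8000
t = np.arange(2*fs) / fs
x = chirp(t, f0=200, f1=2000, t1=t[-1])       # simple gliding tone

_, _, S = spectrogram(x, fs=fs, nperseg=512)  # power spectrogram
p = np.sort(S.ravel())[::-1]                  # bin powers, descending
cum = np.cumsum(p) / p.sum()
k = np.searchsorted(cum, 0.99) + 1            # bins holding 99% of energy
print(f"{k} of {p.size} bins ({100*k/p.size:.2f}%) hold 99% of the energy")

For a narrowband signal like this chirp, only a few percent of the
bins are needed, loosely echoing the observation that most units are
silent most of the time.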