Practical tips to optimize hearing aid fittings with musicians

Reading Time: 6 Minutes

It seems reasonable to assume that the main requirement for hearing aids is to improve the understanding of speech. This requirement drives hearing aid development to emphasize speech signal processing, complemented by individual adjustments for personal preferences. Listening to music, however, poses additional challenges, as it covers a wide range of musical styles, instruments, and performance conditions. Musicians who wear hearing aids have a further requirement: they also use their hearing to control their performance. Although hearing aids currently provide advanced signal processing, fitting musicians requires some additional fine-tuning. Our clinical trial showed that musicians prefer playing music with an optimized music program. This blog post introduces our reflections and our fitting approach for optimizing the music program with musicians.

Limits of traditional gain-frequency response for music

A key element of the perceived sound quality of a hearing aid is how the required amplification is determined for each user based on the audiogram and personal data. The gain-frequency response of the hearing aid is computed by the selected fitting rationale with a focus on a speech signal. These fitting rationales use a frequency importance function (DePaolis et al., 1996) to emphasize the frequencies that contribute most to speech intelligibility. For example, when you enter a flat hearing loss in the fitting software, you don’t get a flat response from the hearing aid. This approach works well for a speech signal but cannot be generalized to situations with music. More specifically, live music, perceived at the ear of a musician, is much louder than speech and covers a wider frequency range. If we apply the frequency importance function for speech intelligibility to the gain calculation for the music program, the amplified signal could be perceived as too bright, lacking fullness and warmth (Figure 1). This can be corrected in the fitting software by smoothing the gain curves and avoiding any potential over-amplification of the highest frequencies.
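The idea of smoothing a speech-oriented gain curve for a music program can be sketched as follows. This is a minimal illustration, not a prescription: the per-band gain values, the number of bands, and the 3 dB high-frequency reduction are all invented for the example, and real fitting software applies its own rationale.

```python
# Hypothetical sketch: smoothing a per-band gain curve for a music program.
# The band gains (dB) below are illustrative values only.

def smooth_gains(gains, window=3):
    """Simple moving average across adjacent frequency bands."""
    half = window // 2
    smoothed = []
    for i in range(len(gains)):
        lo = max(0, i - half)
        hi = min(len(gains), i + half + 1)
        smoothed.append(sum(gains[lo:hi]) / (hi - lo))
    return smoothed

# Illustrative speech-rationale gains per band (e.g., 250 Hz ... 8 kHz):
speech_gains = [10, 12, 18, 25, 30, 33, 35]
music_gains = smooth_gains(speech_gains)

# Reduce the highest two bands slightly to avoid over-amplification:
music_gains = [g if i < 5 else g - 3 for i, g in enumerate(music_gains)]
```

The smoothing evens out band-to-band jumps, and the final step mirrors the advice above of pulling back the highest frequencies so the music program sounds less bright.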

Figure 1: Dynamic and frequency range (left) of a speech signal and a live music signal (at the ear of a musician), shown in relation to the human audible range. The frequency importance function (top-right) for words, sentences, and continuous discourse, based on DePaolis et al. (1996), highlights the frequency range contributing the most to speech intelligibility. Reybrouck et al. (2019) propose a model that divides all detectable sounds into musical “frequency zones” together with their “feels” (bottom-right). The sound from hearing aids optimized for speech, prioritizing frequencies that improve speech intelligibility, might not be perceived as appropriate for music.

Individual needs and solutions for musicians

As the perception of each musical instrument might be affected by its acoustical properties and the musician’s hearing loss, there is no standard solution for fitting hearing aids to musicians. Our approach is to optimize the gain-frequency response based on the musician’s feedback during the fitting session: musicians should play their instrument in the clinic so that corrections made in the fitting software can be immediately validated or rejected. The changes in the fitting software should also target the corresponding frequency range and input level. It is therefore important to know the acoustical properties of the instrument being played. For example, the range of the fundamental frequency is between 200 and 3,500 Hz for a violin, between 60 and 700 Hz for a French horn, and between 28 and 3,950 Hz for a piano. This also means that a “low” note and a “high” note are not absolute concepts but depend strongly on the instrument in question. Audiologists can use Figure 2 to associate a specific note or instrument range with a frequency in order to better understand what musicians are talking about.

Figure 2: Chart for use in counseling, showing the frequency ranges of different musical instruments. The range on the left represents the fundamental frequency, which determines the pitch of the note. Harmonics, shown in the transparent color, contribute to the perceived sound quality and help discriminate between instruments.
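The note-to-frequency association shown in the chart follows directly from equal temperament, where each semitone step multiplies the frequency by 2^(1/12) and A4 is tuned to 440 Hz. A short sketch using MIDI note numbers (A4 = note 69):

```python
def note_to_freq(midi_note):
    """Equal-temperament frequency (Hz) with A4 (MIDI note 69) = 440 Hz."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

# A0 (MIDI 21), the lowest piano key:
print(round(note_to_freq(21), 1))   # 27.5 Hz
# G3 (MIDI 55), the violin's lowest open string:
print(round(note_to_freq(55), 1))   # 196.0 Hz
```

These values match the fundamental ranges quoted above: the piano’s lowest note sits near 28 Hz, and the violin’s lowest fundamental is just under 200 Hz.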

These differences imply that the audiologist must link musical notions to audiological concepts. Sound level is another example: O’Brien et al. (2013) recorded sound levels between 60 and 107 dB LAeq, with peak levels between 101 and 130 dB LC,peak, in a sound-treated practice room during solitary practice. The level of live music at the musician’s ear is thus much higher than that of speech, since conversational speech in quiet is expected to range between 60 and 70 dB SPL. What counts as a soft, normal, or loud signal therefore shifts upward when moving from speech to music. This implies that the gain for an input level of 80 dB in the fitting software might have the strongest influence on the perceived sound quality. Increasing the maximum power output (MPO) can also be carefully explored, as it can reduce perceived distortion from the hearing aid amplification.
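One way to picture this shift: many fitting tools expose gain handles at soft, moderate, and loud input levels. The sketch below assumes handles at roughly 50, 65, and 80 dB SPL (often labeled G50/G65/G80); the exact levels vary between products and are used here purely for illustration.

```python
# Assumed gain handles; actual fitting software may use different levels.
HANDLES = {"G50 (soft)": 50, "G65 (moderate)": 65, "G80 (loud)": 80}

def nearest_handle(input_level_db):
    """Return the gain handle closest to a measured input level."""
    return min(HANDLES, key=lambda h: abs(HANDLES[h] - input_level_db))

print(nearest_handle(65))   # conversational speech -> G65 (moderate)
print(nearest_handle(95))   # live music at the musician's ear -> G80 (loud)
```

Typical music levels land on the loudest handle, which is why adjusting the 80 dB input gain tends to matter most for the music program.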

Fitting protocol for musicians

The fitting protocol that we used in our clinical trial was split into two phases: the hearing aid pre-fitting and the optimization process. In the first phase, the Live Music Program was used as a starting point for a music performance program with the following characteristics:

  1. Extended input dynamic range before the A/D converter to code loud signals without clipping up to 113 dB SPL. This setting avoids the amplification of a distorted signal.
  2. Gain target adapted for music listening situations with less prescribed amplification. The MPO can be increased, but comfort must be verified with the musician while they are playing.
  3. Adaptive features are disabled, and the microphone is set to an omnidirectional mode. There should be no detrimental signals in a live music environment and no assumption about the spatial distribution of the sound sources.
  4. The need to keep frequency lowering active must be tested with each hearing aid user. There is a balance to be found between audibility (Frequency Composition ON) and sound quality (Frequency Composition OFF).
  5. The effect of the adaptive feedback canceller might be audible for certain instruments and certain notes. Disabling the Feedback Manager must be carefully tested as there is a trade-off between expected gain and hearing aid stability.
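The headroom reasoning behind point 1 is simple arithmetic: the extended input range codes signals up to 113 dB SPL before the A/D converter, while O’Brien et al. (2013) measured peaks up to 130 dB. A minimal sketch of that comparison:

```python
# Extended input dynamic range limit from point 1 above (dB SPL).
AD_LIMIT_DB_SPL = 113

def clipping_margin(peak_db_spl):
    """Positive result = headroom left; negative = the input would clip."""
    return AD_LIMIT_DB_SPL - peak_db_spl

print(clipping_margin(101))  # lowest measured peak: 12 dB of headroom
print(clipping_margin(130))  # loudest measured peak: -17, still clips
```

Even an extended input range can be exceeded by the loudest instrumental peaks, which is why distortion should still be checked with the musician playing at full level.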

The gain-frequency response is then adjusted during the optimization process. This involves the active participation of the musician, who should play across the entire frequency and dynamic range of their instrument. The ideal is a neutral response from the hearing aid, i.e., all notes must be even to ensure the best possible control of the instrument during the performance. Some notes might not be audible, or other notes might be perceived as too loud or resonant. Try first to correct the reported problems with the trimmers in the amplification screen, based on the conversion chart between notes and frequencies (Figure 2). This approach might be enough for many situations; however, there might be some limitations, especially for notes in the lower frequency range. In this case the solution might be found by changing the acoustical coupling, i.e., the vent diameter, the insertion depth of the ear mold, or the dome type or size.

Finding an optimized custom solution takes more time than using the default solution, as the optimization process might require multiple fitting sessions. However, this extra effort is rewarding for the audiologist: musicians need the best possible fitting to continue enjoying music, and they will be grateful for it. Satisfied hearing-impaired musicians will acknowledge the audiologist’s competence and recommend their skills.

You can find more fitting tips for music in this document.


DePaolis, R. A., Janota, C. P., & Frank, T. (1996). Frequency Importance Functions for Words, Sentences, and Continuous Discourse. Journal of Speech, Language, and Hearing Research, 39(4), 714–723.

O’Brien, I., Driscoll, T., & Ackermann, B. (2013). Sound exposure of professional orchestral musicians during solitary practice. The Journal of the Acoustical Society of America, 134(4), 2748–2754.

Reybrouck, M., Podlipniak, P., & Welch, D. (2019). Music and Noise: Same or Different? What Our Body Tells Us. Frontiers in Psychology, 10.



About the author:

Christophe Lesimple
Christophe is a Clinical Research Audiologist and has worked for Bernafon since 2011. He contributes to various aspects of development, such as working on concepts, running clinical trials, and analyzing data. Besides his activities with Bernafon, he teaches research methods and statistics at the University of Lyon. In his private time, Christophe likes to play music and volunteer for a hearing-impaired association.