Mention the name Dr Carol Flexer, and most audiologists and speech therapists worldwide will nod in recognition. Dr Flexer, an expert in children's auditory brain development, hosted a sparkling session in Dublin on May 7 for professionals ranging from audiologists and speech therapists to social workers, educators, researchers and cochlear implant support teams.
Dr Flexer's last visit to Dublin coincided with my own cochlear implant surgery, which made last Thursday's session all the more relevant in light of my new auditory experiences since then.
Floor Question: Dual-Language Families And Pragmatics
Dual-language families who struggle to use listening and spoken language have options.
Personal prediction: as the global AVT community grows, regional centres of excellence will emerge, with a corresponding rise in the quality of materials and resources provided.
Possible responses for dual-language families include:
- Online dual-language family resources in the languages of the ‘parent’ countries.
- AVT coaching via telepractice from a therapist in the relevant language or country.
- Remote assessments for literacy and language proficiency, again via telepractice.
The Day’s Main Learning Points
1. Ears Are The Doorway To Our Brains
Most people think we hear with our ears, but in fact we hear with our brains. Babies' brains undergo 90% of their lifetime growth between birth and age three. If a child's brain lacks access to sound at this critical learning stage, there is no firm foundation on which to build future learning, and the child will be disadvantaged socially and educationally.
2. Fit An FM System To A Child At An Early Stage
With hearing difficulties being a neurological emergency for newborns, Dr Flexer stressed the necessity of fitting an FM system to a child's hearing devices early on. The child's brain then receives clear speech from their caregiver throughout the day, as if that person were standing next to them, without distortion from background noise or reverberation.
3. Singing Teaches Conversational Nuances
As someone fairly new to hearing conversational nuances, I found this point resonated. Singing trains the brain to identify pitches and tones like those heard in conversational flow – nuances that make or break meaning, depending on the listener's and speaker's context.
4. Pace Your Speaking Rate For Successful Comprehension
Decoding of spoken language improves by 40% when speakers slow their speech from the average 200 words per minute to 140 words per minute – a point that Toastmasters and public speakers will be familiar with. For children who hear through digital devices, paced speech gives vital time to decode the sounds that are heard and to understand what is being said.
5. Parameters For Degrees Of Deafness
Seventy-five per cent of deafness falls in the mild to moderate range, with 25% in the severe to profound range. Any child with hearing difficulties, no matter how mild, will struggle to detect certain pitches and needs routine monitoring to ensure language targets are met.
Many thanks to ONE for this quality learning and for the real-time captions provided.