Indiana Speech-Language-Hearing Association

Poster Demonstrations

Friday, April 12

10:00 am – 11:30 am

P01 Impact of Smart Frames as Assistive Technology: A Case Study

Katya Babe, Saint Mary’s College; Syd Brooks, DePaul University; Tsuki Larkins, Saint Mary’s College; Olivia Rice, Saint Mary’s College; Christina Corso, PhD, CCC-SLP, Saint Mary’s College

Amazon Echo Smart Frames are wearable smart speakers that are controlled by voice commands and a touchpad. Typical smart speakers with voice control (SSVCs) allow individuals with physical disabilities to use voice commands to gather information, control their environment, communicate with friends and family, and perform leisure activities, such as making phone calls, playing music and turning lights on and off. Smart frames share the same capabilities as other SSVCs on the market and, in theory, make those capabilities more portable and accessible. Currently, there is limited research and discussion on the use of wearable technology, such as smart frames, as a form of assistive technology for individuals who have physical disabilities. A case study has been conducted with an individual who has a spinal cord injury to gain their opinions and expertise about using smart frames as assistive technology and the impact the frames have on their independence. The presentation will cover the methodology of the study, which uses the HAAT Model as the theoretical framework to assess smart frames as a new form of assistive technology; the benefits and challenges of using smart frames compared to traditional SSVCs (e.g., cost of frames vs. SSVCs, accessibility of device features for frames and SSVCs); the impact that using smart frames has on the participant’s quality of life; and future directions from the study outcomes.

Learner Outcomes: Participants will be able to:

  • Identify and describe beneficial use of smart frames as assistive technology for everyday independence.
  • Compare the benefits of using smart frames as opposed to smart home technology.

Instructional Level: Intermediate │ Poster

P02 Sepsis-Associated Encephalopathy and the Role of the Speech-Language Pathologist

Jennifer Vascil, BS, Indiana University South Bend

Sepsis-associated encephalopathy (SAE) is a sepsis-related sequela defined by diffuse cerebral dysfunction caused by the body’s immune response to infection, with no evidence of brain infection or involvement of other encephalopathies (Chaudhry & Duggal, 2014; Encephalitis – Symptoms and Causes, n.d.). Over the past decade, diagnosis and treatment of sepsis have improved, leading to increased survivability. Of the 14 million patients who survive sepsis yearly, over 2.3 million survive with ongoing physical, psychological and cognitive impairments (Prescott & Angus, 2018). These long-term impairments shift the cost burden from hospitals to long-term care facilities as survivors transition out of ICUs with conditions that often lead to significant complications after discharge (Huang et al., 2019). Though survival rates are rising, there are no definitive guidelines or protocols for providing these survivors post-sepsis rehabilitative care (Freeman-Sanderson et al., 2022). In treating cognitive impairment in sepsis patients, an SLP’s scope includes attention, memory, problem-solving and executive functioning. Given the importance of post-sepsis care for patients with long-term cognitive impairment after ICU discharge, this study seeks to answer questions such as: What knowledge and training experiences do SLPs report regarding sepsis-associated encephalopathy, and do years of professional experience, practice setting, and related factors impact knowledge or attitudes regarding cognitive-communication services for individuals with sepsis-associated encephalopathy?

Learner Outcomes: Participants will be able to:

  • Identify and differentiate between sepsis-associated encephalopathy and other common encephalopathies.
  • Describe different educational and professional knowledge and training experiences regarding sepsis-associated encephalopathy.
  • Identify whether an SLP’s years of professional experience impact their knowledge or attitudes regarding providing cognitive-communication services for individuals with sepsis-associated encephalopathy.

Instructional Level: Intermediate │ Poster

P03 Vision Specialists’ Perspectives on Smart Frames: The Benefits/Challenges of Wearables

Grace Layman; Ruby Meza; Christina Corso, PhD, CCC-SLP, from Saint Mary’s College

This presentation will discuss the results of a survey study conducted with optometrists and opticians to gain more information about Smart Frames. Smart Frames with an artificial intelligence agent (AIA) are currently capable of creating shortcuts that respond to what people need when they need it (Amazon, 2023), and have the potential to be a life-changing accessibility tool (Fulton, 2022). The research questions were designed with the HAAT Model (Cook & Polgar, 2015), specifically the assistive technology portion, as the foundation, in an effort to consider the factors that most impact the individual in conjunction with the assistive device. Results will address optometrists’ and opticians’ optical-related knowledge, their comfort level working with Smart Frames and their experience working with individuals who have disabilities. The expense, available brands and effectiveness of Smart Frames among different eye care clinics will be discussed. The impact of demographic information (i.e., region, profession, and years in practice) and the perspectives of vision specialists will be reviewed with attendees.

Learner Outcomes: Participants will be able to:

  • Identify different brands of Smart Frames.
  • Describe the important features of Smart Frames such as cost, insurance coverage, prescription specifications, wearability (comfort during wear, durability and versatility), etc.
  • Summarize eye care professionals’ perspectives on the benefits and challenges of Smart Frames for individuals with physical disabilities.

Instructional Level: Intermediate │ Poster

P04 Efficacy of Accent Modification

Emma Hake, Purdue University

There is a growing number of individuals living and working in the United States who are non-native speakers of English. Many individuals with accents report negative social experiences that may lead them to seek out accent modification services. Accent modification is a voluntary service in which participants are explicitly taught aspects of the English language, which can include pronunciation of vowels and consonants as well as stress and intonation patterns. A review of the literature was conducted to investigate whether accent modification services are effective in increasing intelligibility for adult non-native speakers of English. This review revealed that several different program types (segmental, suprasegmental and mixed) are effective in increasing intelligibility. Additionally, participants reported increases in outcome measures such as confidence in communication and self-esteem, which suggests quality of life improvements. Further research is needed regarding satisfaction with accent modification programs themselves, rather than only their outcomes.

Learner Outcomes: Participants will be able to:

  • Describe accent modification and provide examples of program components.
  • Summarize the impacts of accent modification programs on intelligibility and participant reported outcomes.
  • List ethical considerations related to accent modification.

Instructional Level: Intermediate │ Poster

P05 Electronic Media, Hearing Loss and Language Development: A Pilot Study

Sarah Mahnesmith, Butler University

Previous research suggests infant-directed speech, characterized by exaggerated intonation, clear sounds, repetition and simple grammatical structure, draws the attention of children with typical hearing as well as children with hearing loss (e.g., Fernald, 1992; Wang et al., 2017). However, the quantity and quality of language during caregiver-infant interactions may decrease in the presence of electronic media. Children with sensorineural hearing loss (SNHL) experience greater difficulties with language development in noisy environments (Beer et al., 2014). Thus, media may not only limit language input but also create a more complex listening situation for children. The purpose of this study is to explore the relationship between electronic media exposure in the homes of children with SNHL and normal hearing (NH) and later language assessments to provide novel information on the impacts of media on the language development of children with hearing loss. The home auditory environments of five children with SNHL and four children with NH at nine months of hearing age are currently being evaluated using Language ENvironment Analysis (LENA) recordings. The first five minutes of each waking hour of one day-long recording are coded for conversational turns (CTs), caregiver statements without child response, child vocalizations without caregiver response, the presence of television or media, the use of adult- or infant-directed speech, and other situational factors. It is predicted that the quantity and quality of caregiver speech will decrease with media exposure, leading to lower language assessment scores for children with SNHL. Findings will clarify proper media use and caregiver speech strategies for children with SNHL to ensure appropriate language development.

Learner Outcomes: Participants will be able to:

  • Identify and describe how electronic media might affect the quality and quantity of caregiver speech directed towards children with and without hearing loss.
  • Identify and describe the impacts of electronic media on caregiver speech and subsequent language development in children with hearing loss.
  • Identify and describe proper media use and caregiver speech techniques and strategies to utilize with children with and without hearing loss to ensure appropriate language development.

Instructional Level: Intermediate │ Poster

P06 Effects of Music on Executive Functioning in Children

Abigail Dame; Tonya Bergeson, PhD, from Butler University

The present study investigates the impact of music listening and singing on sustained attention in typically developing school-aged children. Participants (N=12) were randomly assigned to one of three conditions: music listening, singing, or paced speech, during which they focused on a predetermined keyword presented in two musical pieces. In the listening condition, participants raised their hand when they heard or said the keyword. In the paced speech and singing conditions, the children omitted only that target word while speaking or singing the other lyrics. We measured baseline inhibition and executive functioning (Stroop Color Word test) and attention (Sustained Attention Response Task) before and after completing the two songs. Additionally, a mood survey was administered to both children and parents at the beginning and end of the study. Results indicated no significant differences in executive functioning and attention pre- to post-test difference scores across the music-singing, paced-speech, and listening conditions (all ps > .05). However, the survey revealed improved mood among children following the music-singing task (parent ratings, p = .026; child ratings, p = .073), despite children also reporting increased fatigue. In summary, the three tasks did not differentially affect executive functioning and attention but did improve mood, suggesting that music can mediate the cognitive effort associated with such tests. In the future, music could be incorporated into interventions for children with executive functioning difficulties, such as children who stutter, who could benefit from enjoyable cognitive interventions.

Learner Outcomes: Participants will be able to:

  • Identify the distinct characteristics of music, such as listening, singing, and paced speech, to determine their respective impacts on cognitive functions like sustained attention and executive functioning in children.
  • Identify and describe the potential benefits of music-based interventions for specific populations with executive functioning deficits based on mood and arousal.
  • Summarize how future research can further examine whether musical activities increase attention and executive functioning.

Instructional Level: Intermediate │ Poster

P07 The Importance of Hearing Conservation Education in College Orientation

Madison McNeill, BS; Anne Sommer, AuD, CCC-A, from Purdue University

The purpose of this study is to examine students’ perspectives on their noise exposure and whether hearing conservation education should be part of the freshman orientation curriculum. Purdue University hosts Boiler Gold Rush before the start of every fall semester to help freshmen feel more comfortable with the transition to college. This large group orientation includes many events that can expose students to hazardous noise levels with unknown effects on their hearing. In addition to Boiler Gold Rush, many students experience high levels of noise at other commonly attended events such as football games, tailgates and parties/concerts. This study used an online survey to collect data from Purdue University undergraduate freshmen (class of 2026) about their experience with noise at the above events. Finally, students were asked to watch an educational video on how hearing works, what noise is and what steps they can take to help prevent noise-induced hearing loss. After the video, the students were asked to share their perspectives on the effectiveness of the video, hearing protection and hearing conservation education.

Learner Outcomes: Participants will be able to:

  • Identify and describe the importance of hearing conservation education at college orientation.
  • Define the three (3) key topics in hearing conservation training.
  • Describe how much incoming college students know about the importance of hearing conservation.

Instructional Level: Introductory │ Poster

P08 Code-Switching and Code-Mixing in Bilingual AAC

Kaitlyn Lange, BA, Indiana University

This project investigated bilingual (Spanish-English) augmentative and alternative communication (AAC) to support clinical recommendations and provide foundational information to further technological development and future research. Three guiding questions were addressed: (1) Which AAC programs make it easiest to code-switch and code-mix between Spanish and English? Specifically, can someone use a word or phrase from the other language in the same number of steps as a word or phrase from the same language? (2) What restrictions exist in the selected AAC programs for bilingual (Spanish/English) use, specifically regarding code-switching and code-mixing? (3) Based on the review of the programs included, which program is most recommended for bilingual Spanish/English use, and why? Four identified experts recommended nine programs for inclusion in this project. Five programs were excluded due to a lack of a preprogrammed bilingual setup that allowed for code-switching and code-mixing. The four programs that fit the inclusion criteria were LAMP Words for Life, TD Snap Core First, Proloquo2Go Advanced Core, and Unidad 84. These programs were then analyzed based on general features, linguistic features and bilingual features to answer the three guiding questions. This study contributes to the understanding of current bilingual AAC options and provides recommendations for clinical selection, future investigation, and technological development.

Learner Outcomes: Participants will be able to:

  • Identify necessary features of bilingual AAC.
  • Summarize the differences between the four bilingual AAC programs included in this project.
  • Identify and describe the implications of the results of this project.

Instructional Level: Intermediate │ Poster

P09 An AuD Collaboration: How Early Interventionists Can Collaborate With Audiologists

Anna Toovey, CCC-SLP, St. Joseph Institute for the Deaf; Shannon Van Hyfte, AuD, Purdue University

Early Hearing Detection and Intervention (EHDI) providers and audiologists have been working alongside families to provide early and consistent services for deaf and hard of hearing (D/HoH) children. Traditionally, audiologists see patients and families within a clinical setting approximately every eight weeks and communicate results to the family and early interventionist(s). Allied professionals often work weekly in the home or other preferred setting and communicate progress with the family and periodically with the audiologist. Collaboration and interprofessional education (IPE) practices have become increasingly common in patient care (IPEC, 2016). Family values and family involvement are the center around which practitioners operate, and critical conversations are necessary to meet the families’ goals and needs for their child’s communication. We will highlight one collaboration model that has increased the amount of auditory testing data that can be obtained at a younger age. These results guide the audiologist in programming amplification for better overall communication for the child. These improvements result in better speech and language outcomes (Tomblin et al., 2015), as well as family buy-in/ownership. Anecdotally, families report increased satisfaction with the improved communication as well as the opportunity to have coordinated visits for overlapping care. The audiologist and early intervention clinician maintain updates on patient/family goals and positively challenge one another to effect change. This collaboration model may be of interest to professionals seeking to provide comprehensive care for the children and families they serve.

Learner Outcomes: Participants will be able to:

  • Identify shared interprofessional skills to target.
  • Identify critical/key conversations among families and professionals.
  • List and outline implementation of practical skills and outcomes.

Instructional Level: Intermediate │ Poster

P10 #InstagramMoms: How Does Mothers’ Instagram Use Influence Child Language Development?

Tierney Maurer; Grace Harahan; Maura Todd; Elaine Stribley; Emily Ziccardi; Jamie Gindorf; Sophie Waldvogel, from Butler University

Social media has become a constant presence in our everyday lives. New mothers use Instagram to record the ups and downs of everyday life, and many of these mothers are mom-fluencers who also advertise brands of baby- or mom-related products. We also know from previous research that the quantity and quality of speech to babies is related to later language development. The current study aims to examine how mothers’ use of Instagram influences their infant-directed speech. In our lab, a pilot study revealed that mothers on Instagram decreased their use of infant-directed speech and made eye contact with their babies in approximately 5-46% of videos, particularly when the camera was in selfie mode. The purpose of the current study is to examine the effects of mothers’ Instagram use on child language development. Researchers are observing a subset of mothers’ public Instagram footage (n=13) recorded when their infants reached around 24 months of age. We are currently transcribing these videos using Systematic Analysis of Language Transcripts software. We will also use this software to analyze features of the conversations such as child word count, mean length of utterance, and mother-child turn-taking behaviors. Because active use of Instagram may disrupt typical mother-infant interactions, we hypothesize that limited use of infant-directed speech may negatively impact children’s later language development.

Learner Outcomes: Participants will be able to:

  • Describe the importance of infant-directed speech for language development.
  • Identify the influence of social media use on characteristics of mothers’ speech to babies such as turn-taking and eye contact.
  • Identify and assess the impact social media has on children’s later language development.

Instructional Level: Introductory │ Poster

P11 Beyond Babbling: Language and Music in the Homes of Children With Cochlear Implants

Tonya Bergeson, PhD; Brianna Karras, from Butler University

This research study compares the language and music input throughout an average day in families who have infants with cochlear implants to that in families of infants without hearing loss. Through collaboration with The Ohio State University, recordings of daily interactions in 13 families were obtained using a Language ENvironment Analysis (LENA) device. The device was worn by the target child and recorded all of the environmental sounds near that child for up to 16 hours. The children with typical hearing were 9 months of age and the children with cochlear implants were 9 months post-implantation. Researchers were blind to the hearing status of each child. In this study, we focused on the first five minutes of every hour of the day. We coded characteristics of the interactions such as adult-directed speech, infant-directed speech, media speech, and music from either media sources or caregivers. Preliminary results revealed that less music was present in homes than expected based on previous literature. The source of most of this music was the television; parents typically did not sing or perform music for their children in these samples. The amount of infant-directed speech varied significantly across families. These results will help caregivers and clinicians of children with cochlear implants determine how they can improve children’s language and music environments. Future research in this study will explore the potential relationship between language and music input and children’s later language development.

Learner Outcomes: Participants will be able to:

  • Summarize the importance of language and music in the homes of children with and without hearing loss.
  • Describe the impacts of infant-directed language and music on subsequent language development in children with hearing loss.
  • Identify strategies caregivers can use to increase the amounts of spoken language and music when interacting with children to maximize language development.

Instructional Level: Introductory │ Poster

P12 Interprofessional Dysphagia Care: Safety and Need for Guidelines

Marissa Van De Weg, Purdue University Fort Wayne; Naomi Gurevich, PhD, Purdue University Fort Wayne; Danielle Osmelak, EdD, Governors State University

Speech-language pathologists (SLPs) play a primary role in treating dysphagia and implementing compensatory treatments intended to reduce its impact on nutritional intake and quality of life. This often involves a diet with modified consistency of solids and viscosity of fluids to provide a safer consistency that helps compensate for swallowing dysfunction. For solids, a more restrictive diet is safer if it compensates for impaired swallow function. For liquids, a more restrictive consistency can pose greater risk if the thicker viscosity does not compensate for dysfunction and the bolus is aspirated. Honey-thick liquids have been associated with higher incidence of pneumonia, dehydration, and additional adverse effects for people with dementia. SLPs are trained to determine whether thickening liquids is appropriate for patients on a case-by-case basis. Nurses are often the first line of defense in recognizing swallowing difficulties and referring to SLPs. Although an extensive literature search produced no documentation of a formal recommendation to support this practice, nurses regularly cite permission to downgrade dysphagia diets without SLP consult. Medical SLPs’ experiences with nurses’ diet modification practice patterns were explored via a descriptive survey design. Most participants (86%) reported exposure to nursing staff claiming permission to modify dysphagia diets without consulting speech pathology. Early career SLPs in medical settings are especially at risk for this practice. We discuss implications and the need for guidelines.

Learner Outcomes: Participants will be able to:

  • Describe two specific nursing dysphagia practice patterns with diet modification.
  • Identify current dysphagia diet modification trends specific to work settings.
  • Identify nurses’ diet modification practice patterns when SLPs are not consulted.

Instructional Level: Introductory │ Poster

P13 Inner Speech in the Daily Lives of People With Post-Stroke Aphasia

Allison Harris; Bethany Yagoda, from Indiana University Bloomington

This exploratory, preliminary feasibility study evaluated the extent to which adults with chronic aphasia (N=23) report experiencing inner speech in their daily lives by leveraging qualitative methodology. The presence of inner speech was assessed via structured interview at three time points over the course of three weeks. Specific components of inner speech will be discussed, including the use of inner speech to problem solve, motivate oneself, control emotion, and process positive and negative emotions. Most individuals in the sample used inner speech to talk to themselves about these things, with positive emotions being the least frequently discussed component. Quotes from participants will be shown, and relationships with demographic information (e.g., time post-stroke) will be discussed. The ability to understand the inner speech of a person with aphasia uniquely enables clinicians and researchers to understand the complex process of living with aphasia.

Learner Outcomes: Participants will be able to:

  • Define inner speech.
  • Identify and describe typical experiences of inner speech in people with aphasia.
  • Identify and describe how inner speech may be integrated into clinical services for aphasia.

Instructional Level: Introductory │ Poster

P14 STAR - Sentence Treatment for Aphasia Recovery: Implicit Learning through Structural Priming

Katelin Rainey, BS; Hannah Brownd, BA; Jiyeon Lee, PhD, from Purdue University

Difficulty producing sentences is pervasive in persons with aphasia (PWA). Yet, available treatment options are limited, and they require explicit re-learning of complex grammatical rules, which is often difficult for PWA. Research suggests that humans learn to produce different types of sentences through experience-based implicit learning, called structural priming (Chang, Griffin, & Bock, 2006; Pickering & Ferreira, 2008). Growing evidence suggests that after reading and hearing prime sentences, PWA can re-use similar sentence structures in their own future production more frequently and successfully (Man et al., 2019; Lee et al., 2023). However, less evidence is available for the long-term and generalization effects of this approach. This presentation showcases the clinical impact of structural priming treatment in creating lasting and generalized recovery of aphasia, based on two cases of single-subject treatment design. Two participants with Broca’s aphasia (A1, A2) completed baseline testing, 12-15 sessions of oral reading structural priming treatment, and up to 2-month follow-up sessions. A1 was trained on dative sentences (e.g., the boy is giving a guitar to the man) in person and A2 on active sentences (e.g., the girl is chasing the man) over Zoom. Both participants showed significant improvements on trained and untrained stimuli, and the improvements were maintained at follow-up testing. They also showed generalized improvements in discourse and other aphasia measures. These findings suggest the clinical efficacy of structural priming treatment for improving sentence production across sentence types, tasks, and modes of treatment delivery. Further clinical implications will be discussed.

Learner Outcomes: Participants will be able to:

  • Describe common sentence production difficulties in PWA.
  • Identify and explain the structural priming paradigm.
  • Describe different treatment outcome measures, including acquisition, generalization, and maintenance effects.

Instructional Level: Intermediate │ Poster

P15 The Diverse Nature of Preschool Peers and Language Outcome Effects

Brooklynn Ledger, BA, Purdue University

In recent years, preschool attendance has reached an all-time high, with over half of children ages three to six attending preschool. Preschools are an early source of language exposure, so these environments should be designed to foster language development and future academic success. This project explores how peer characteristics—specifically, socioeconomic and disability status—play a role in language development. While the research is limited, it suggests that the ideal classroom composition includes children from varying socioeconomic backgrounds as well as children with and without documented disabilities. Professionals should promote interaction among children from diverse backgrounds to support language outcomes.

Learner Outcomes: Participants will be able to:

  • List the main sources of language input that preschoolers are exposed to.
  • Identify peer characteristics that have a positive effect on language outcomes.
  • Summarize ideal preschool classroom composition to support language development.

Instructional Level: Intermediate │ Poster

P16 Effects of SFA and PCA Treatments in Mandarin-Speaking Adults With Aphasia

Peng Zhang, MA, Purdue University

Semantic Feature Analysis (SFA) and Phonological Component Analysis (PCA) are two common word-retrieval interventions. Although previous studies have confirmed the effectiveness of the two treatments for word retrieval impairments, it was unclear whether the treatments generated improvement in untreated items and connected speech and whether the treatment effects obtained in Indo-European languages extend to typologically different languages such as Mandarin Chinese. The current study examined the effects of SFA and PCA treatment in Mandarin speakers with aphasia in terms of three outcome measures: acquisition, generalization, and maintenance. We also explored the factors underlying changes in naming after SFA and PCA therapies. Five native Mandarin Chinese speakers with aphasia received 12 sessions of SFA treatment and 12 sessions of PCA treatment in a randomized order (five days per week, 40 minutes per session). Three different statistical methods were used to analyze the treatment outcomes: the McNemar test, effect sizes, and the conservative dual criterion (CDC) method (Fisher et al., 2003). The results showed that regardless of the locus of deficit (semantic or phonological), all participants improved in naming words treated with the PCA treatment. Three of four participants significantly benefited from the SFA treatment. Regarding generalization effects, SFA treatment effects generalized to untreated but semantically related words in three participants (P1, P3, P4), and three participants (P1, P4, P5) notably improved on naming untreated but phonologically related words after the PCA treatment. In addition, baseline cognitive ability was significantly correlated with overall treatment effects (r = .93, p = .02).

Learner Outcomes: Participants will be able to:

  • Formulate individualized Semantic Feature Analysis and Phonological Component Analysis treatments for people with aphasia.
  • Analyze data from experiments/research with multiple baseline designs.
  • Identify the key factors influencing naming therapy outcomes in individuals with aphasia.

Instructional Level: Intermediate │ Poster

P17 Addressing Hearing Health Equity in Indiana Using Precision Audiology

Joshua Alexander, PhD; Michael Heinz, PhD; Maureen Shader, PhD; Ananth Grama, PhD; Edward Bartlett, PhD; Jennifer Simpson, AuD, from Purdue University

Achieving hearing health equity presents significant challenges across Indiana, particularly for rural, minority, and economically disadvantaged populations. Disparities in hearing aid usage are pronounced: white adults and those with higher socioeconomic status are over twice as likely to use hearing aids compared to Black, Hispanic, and lower-income or less-educated individuals. Untreated hearing loss is linked to several comorbidities, including depression, anxiety, poorer cognition and physical health, and increased falls, leading to 46% higher healthcare costs. Notably, nearly 70% of rural residents with occupational noise exposure report hearing loss, emphasizing the need for targeted interventions, given that about 50% of hearing loss cases are preventable. These disparities contribute to underemployment, limited access to healthcare, and lower quality of care. Additionally, the high costs of hearing aids, the stigma around hearing loss, and unequal access to quality care further challenge efforts to address hearing needs in minority and underrepresented communities. The underrepresentation of diverse populations in hearing research and clinical audiology further exacerbates health disparities, limits the generalizability of research findings, and hinders effective interventions. To combat these issues, the Accessible Precision Audiology Research Center in Indianapolis will engage with diverse groups. It aims to raise awareness about the impact of untreated hearing loss and available management options through community outreach, standardized audiological evaluations, and free hearing screenings. By leveraging an open-source database and AI-powered analysis tools, the center seeks to advance precision audiology, enabling more personalized and effective hearing care solutions and fostering a deeper understanding of hearing health across the socioeconomic spectrum.

Learner Outcomes: Participants will be able to:

  • Describe the epidemiology and impact of hearing loss in Indiana.
  • Identify barriers to hearing health equity.
  • Describe strategies for improving hearing health equity.

Instructional Level: Introductory │ Poster

P18 Does Sentence Type Affect the Perception of Accent Distance?

Malachi Henry, MA, CCC-SLP; Sami Branson; Gabrielle Kilaras; Tessa Bent, PhD, from Indiana University Bloomington

Similar to adults, children are sensitive to the presence of an unfamiliar first language (L1) or second language (L2) accent. A few recent studies also suggest that children are sensitive to an accent’s perceived distance from the predominant local accent. However, many studies utilize stimuli that are fairly homogeneous, typically simple declarative sentences (e.g., “Father forgot the bread.”), making it difficult to definitively say that sensitivity to accent exists in broader linguistic contexts. To address this gap, thirty adults, eight 6-year-olds, and eight 12-year-olds with typical speech, language, and hearing completed six ladder tasks. In the ladder tasks, the participants ranked 24 talkers based on their perceived distance from the local accent. The accents included were both L1 (Midland American, Southern American, Jamaican, Irish, New Zealand, and Ghanaian English) and L2 (Hebrew-, Russian-, Portuguese-, Turkish-, Cantonese-, and Vietnamese-accented English). Utterance length, sentence type (e.g., compound, simple, interrogative, or declarative), and number of pauses were investigated in relation to accent distance. Preliminary results indicated that children and adults were sensitive to accent distance, and listeners rated L1 accents as less distant from the local accent than L2 accents. Significant interactions between sentence type and length uniquely predicted accentedness ratings. The results suggest that perceived accent distance is shaped not only by talker properties but also by the linguistic characteristics of the stimuli. Researchers should consider using a range of stimuli for studies examining accented speech perception, including varying sentence lengths, morphosyntactic structures, and sentence types.

Learner Outcomes: Participants will be able to:

  • Describe different kinds of accents in an inclusive and meaningful way.
  • Describe characteristics of speech stimuli that contribute to accent distance judgments.
  • Summarize the development of accented speech perception in children.

Instructional Level: Introductory │ Poster