Despite a large body of research on the neural bases of single-talker speech comprehension, relatively few studies have examined speech comprehension in a multi-talker environment. In the classic "cocktail party" situation, where multiple speakers talk concurrently, listeners must segregate a target speech signal from a cacophony of other sounds, a task that is especially challenging for hearing-impaired listeners. Specifying how different linguistic units in competing speech streams are encoded in the brain is essential for understanding speech comprehension difficulty in both normal-hearing and hearing-impaired listeners. Using functional magnetic resonance imaging (fMRI), the current project examines normal-hearing and hearing-impaired participants' neural responses while they listen to two speakers concurrently reading different sections of a story. Comparing the fMRI data from normal-hearing and hearing-impaired listeners will provide novel insights into how speech comprehension difficulty manifests at different levels of linguistic structure under challenging listening conditions.