Despite a large body of research on the neural bases of single-talker speech comprehension, relatively few studies have examined speech comprehension in multi-talker environments. Specifying how different linguistic units in competing speech streams are encoded in the brain is essential for understanding the comprehension difficulties faced by both normal-hearing and hearing-impaired listeners. Using electroencephalography (EEG), the proposed project will record neural responses from normal-hearing and hearing-impaired participants while they listen to two talkers concurrently reading different sections of a story. These neural signals will then be fitted with time-series predictors derived from a linguistically informed neural network model. Comparing model fit to the EEG data between normal-hearing and hearing-impaired listeners will provide novel insights into how speech comprehension difficulty manifests at different levels of linguistic units under challenging listening conditions.
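
To make the planned analysis concrete, the sketch below illustrates one common way such a fit can be computed: a time-lagged ridge regression (a temporal response function) of the EEG on predictor time series. This is a minimal sketch under stated assumptions, not the project's actual pipeline; the predictor set, sampling rate, lag window, regularization strength, and synthetic data are all illustrative placeholders.

```python
# Minimal sketch of an encoding-model fit, assuming a ridge-regression
# temporal response function (TRF). All numbers and predictor names are
# illustrative assumptions; synthetic data stand in for real recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

sfreq = 128                              # EEG sampling rate in Hz (assumed)
n_times = 60 * sfreq                     # one minute of data (placeholder)
n_channels = 64                          # number of EEG channels (assumed)
lags = np.arange(0, int(0.6 * sfreq))    # 0-600 ms lag window (assumed)

# Hypothetical time-series predictors, e.g. acoustic envelope and a
# model-derived word-level regressor for each of the two talkers.
predictors = rng.standard_normal((n_times, 4))
eeg = rng.standard_normal((n_times, n_channels))

def lagged_design(x, lags):
    """Stack time-shifted copies of each predictor column (zero-padded)."""
    cols = []
    for lag in lags:
        shifted = np.zeros_like(x)
        shifted[lag:] = x[:x.shape[0] - lag]
        cols.append(shifted)
    return np.concatenate(cols, axis=1)

X = lagged_design(predictors, lags)

# Cross-validated prediction accuracy (R^2) per channel; contiguous folds
# respect the temporal structure of the data.
scores = np.zeros(n_channels)
for train, test in KFold(n_splits=5).split(X):
    model = Ridge(alpha=1e3).fit(X[train], eeg[train])
    pred = model.predict(X[test])
    ss_res = ((eeg[test] - pred) ** 2).sum(axis=0)
    ss_tot = ((eeg[test] - eeg[test].mean(axis=0)) ** 2).sum(axis=0)
    scores += (1 - ss_res / ss_tot) / 5

print("mean cross-validated R^2 per channel:", scores.round(3))
```

A cross-validated R^2 of this kind, computed separately for predictors tied to each linguistic level and each talker, is one example of a model-fit measure that could then be compared between normal-hearing and hearing-impaired listener groups.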