Year | Citation | Score
2020 |
Singla K, Chen Z, Atkins DC, Narayanan S. Towards End-2-end Learning for Predicting Behavior Codes from Spoken Utterances in Psychotherapy Conversations. Proceedings of the Conference. Association For Computational Linguistics. Meeting. 2020: 3797-3803. PMID 36751434 DOI: 10.18653/v1/2020.acl-main.351 |
0.332 |
|
2022 |
Hagedorn C, Lu Y, Toutios A, Sinha U, Goldstein L, Narayanan S. Variation in compensatory strategies as a function of target constriction degree in post-glossectomy speech. Jasa Express Letters. 2: 045205. PMID 35495774 DOI: 10.1121/10.0009897 |
0.793 |
|
2022 |
Gurunath Shivakumar P, Georgiou P, Narayanan S. Confusion2Vec 2.0: Enriching ambiguous spoken language representations with subwords. Plos One. 17: e0264488. PMID 35245327 DOI: 10.1371/journal.pone.0264488 |
0.328 |
|
2021 |
Lynn E, Narayanan SS, Lammert AC. Dark tone quality and vocal tract shaping in soprano song production: Insights from real-time MRI. Jasa Express Letters. 1: 075202. PMID 34291230 DOI: 10.1121/10.0005109 |
0.65 |
|
2021 |
Lim Y, Toutios A, Bliesener Y, Tian Y, Lingala SG, Vaz C, Sorensen T, Oh M, Harper S, Chen W, Lee Y, Töger J, Monteserin ML, Smith C, Godinez B, ... ... Narayanan SS, et al. A multispeaker dataset of raw and reconstructed speech production real-time MRI video and 3D volumetric images. Scientific Data. 8: 187. PMID 34285240 DOI: 10.1038/s41597-021-00976-x |
0.839 |
|
2021 |
Hagedorn C, Kim J, Sinha U, Goldstein L, Narayanan SS. Complexity of vocal tract shaping in glossectomy patients and typical speakers: A principal component analysis. The Journal of the Acoustical Society of America. 149: 4437. PMID 34241468 DOI: 10.1121/10.0004789 |
0.814 |
|
2021 |
Tian Y, Lim Y, Zhao Z, Byrd D, Narayanan S, Nayak KS. Aliasing artifact reduction in spiral real-time MRI. Magnetic Resonance in Medicine. PMID 33728700 DOI: 10.1002/mrm.28746 |
0.8 |
|
2021 |
Hebbar R, Papadopoulos P, Reyes R, Danvers AF, Polsinelli AJ, Moseley SA, Sbarra DA, Mehl MR, Narayanan S. Deep multiple instance learning for foreground speech localization in ambient audio from wearable devices. Eurasip Journal On Audio, Speech, and Music Processing. 2021: 7. PMID 33584835 DOI: 10.1186/s13636-020-00194-0 |
0.302 |
|
2021 |
Zhao Z, Lim Y, Byrd D, Narayanan S, Nayak KS. Improved 3D real-time MRI of speech production. Magnetic Resonance in Medicine. PMID 33452722 DOI: 10.1002/mrm.28651 |
0.805 |
|
2020 |
Lim Y, Bliesener Y, Narayanan S, Nayak KS. Deblurring for spiral real-time MRI using convolutional neural networks. Magnetic Resonance in Medicine. PMID 32710516 DOI: 10.1002/Mrm.28393 |
0.528 |
|
2020 |
Toutios A, Xu M, Byrd D, Goldstein L, Narayanan S. How an aglossic speaker produces an alveolar-like percept without a functional tongue tip. The Journal of the Acoustical Society of America. 147: EL460. PMID 32611190 DOI: 10.1121/10.0001329 |
0.804 |
|
2020 |
Harper S, Goldstein L, Narayanan S. Variability in individual constriction contributions to third formant values in American English /ɹ/. The Journal of the Acoustical Society of America. 147: 3905. PMID 32611162 DOI: 10.1121/10.0001413 |
0.599 |
|
2020 |
Kim J, Toutios A, Lee S, Narayanan SS. Vocal tract shaping of emotional speech. Computer Speech & Language. 64. PMID 32523241 DOI: 10.1016/j.csl.2020.101100 |
0.349 |
|
2020 |
Kumar M, Kim SH, Lord C, Lyon TD, Narayanan S. Leveraging Linguistic Context in Dyadic Interactions to Improve Automatic Speech Recognition for Children. Computer Speech & Language. 63. PMID 32431473 DOI: 10.1016/J.Csl.2020.101101 |
0.444 |
|
2020 |
Arevian AC, Bone D, Malandrakis N, Martinez VR, Wells KB, Miklowitz DJ, Narayanan S. Clinical state tracking in serious mental illness through computational analysis of speech. Plos One. 15: e0225695. PMID 31940347 DOI: 10.1371/Journal.Pone.0225695 |
0.33 |
|
2020 |
Park TJ, Han KJ, Kumar M, Narayanan S. Auto-Tuning Spectral Clustering for Speaker Diarization Using Normalized Maximum Eigengap. Ieee Signal Processing Letters. 27: 381-385. DOI: 10.1109/Lsp.2019.2961071 |
0.578 |
|
2019 |
Flemotomos N, Georgiou P, Atkins DC, Narayanan S. Role specific lattice rescoring for speaker role recognition from speech recognition outputs. Proceedings of the ... Ieee International Conference On Acoustics, Speech, and Signal Processing. Icassp (Conference). 2019: 7330-7334. PMID 36811087 DOI: 10.1109/icassp.2019.8683900 |
0.309 |
|
2019 |
Alexander R, Sorensen T, Toutios A, Narayanan S. A modular architecture for articulatory synthesis from gestural specification. The Journal of the Acoustical Society of America. 146: 4458. PMID 31893678 DOI: 10.1121/1.5139413 |
0.389 |
|
2019 |
Chen W, Lee NG, Byrd D, Narayanan S, Nayak KS. Improved real-time tagged MRI using REALTAG. Magnetic Resonance in Medicine. PMID 31872918 DOI: 10.1002/Mrm.28144 |
0.808 |
|
2019 |
Sorensen T, Zane E, Feng T, Narayanan S, Grossman R. Cross-Modal Coordination of Face-Directed Gaze and Emotional Speech Production in School-Aged Children and Adolescents with ASD. Scientific Reports. 9: 18301. PMID 31797950 DOI: 10.1038/S41598-019-54587-Z |
0.37 |
|
2019 |
Sorensen T, Toutios A, Goldstein L, Narayanan S. Task-dependence of articulator synergies. The Journal of the Acoustical Society of America. 145: 1504. PMID 31067947 DOI: 10.1121/1.5093538 |
0.653 |
|
2019 |
Imel ZE, Pace BT, Soma CS, Tanana M, Hirsch T, Gibson J, Georgiou P, Narayanan S, Atkins DC. Design feasibility of an automated, machine-learning based feedback system for motivational interviewing. Psychotherapy (Chicago, Ill.). PMID 30958018 DOI: 10.1037/Pst0000221 |
0.323 |
|
2019 |
Chen W, Byrd D, Narayanan S, Nayak KS. Intermittently tagged real-time MRI reveals internal tongue motion during speech production. Magnetic Resonance in Medicine. PMID 30919494 DOI: 10.1002/Mrm.27745 |
0.822 |
|
2019 |
Toutios A, Blaylock R, Goldstein L, Narayanan SS. Toward cross-speaker articulatory modeling. The Journal of the Acoustical Society of America. 146: 3085-3085. DOI: 10.1121/1.5137718 |
0.589 |
|
2019 |
Hagedorn C, Sorensen T, Lammert A, Toutios A, Goldstein L, Byrd D, Narayanan S. Engineering Innovation in Speech Science: Data and Technologies. Perspectives of the Asha Special Interest Groups. 4: 411-420. DOI: 10.1044/2018_Pers-Sig19-2018-0003 |
0.754 |
|
2019 |
Proctor M, Walker R, Smith C, Szalay T, Goldstein L, Narayanan S. Articulatory characterization of English liquid-final rimes. Journal of Phonetics. 77: 100921. DOI: 10.1016/J.Wocn.2019.100921 |
0.595 |
|
2018 |
Lim Y, Zhu Y, Lingala SG, Byrd D, Narayanan S, Nayak KS. 3D dynamic MRI of the vocal tract during natural speech. Magnetic Resonance in Medicine. PMID 30390319 DOI: 10.1002/Mrm.27570 |
0.806 |
|
2018 |
Vaz C, Ramanarayanan V, Narayanan S. Acoustic Denoising using Dictionary Learning with Spectral and Temporal Regularization. Ieee/Acm Transactions On Audio, Speech, and Language Processing. 26: 967-980. PMID 30271810 DOI: 10.1109/Taslp.2018.2800280 |
0.736 |
|
2018 |
Lammert AC, Shadle CH, Narayanan SS, Quatieri TF. Speed-accuracy tradeoffs in human speech production. Plos One. 13: e0202180. PMID 30192767 DOI: 10.1371/Journal.Pone.0202180 |
0.723 |
|
2018 |
Lim Y, Lingala SG, Narayanan SS, Nayak KS. Dynamic off-resonance correction for spiral real-time MRI of speech. Magnetic Resonance in Medicine. PMID 30058147 DOI: 10.1002/Mrm.27373 |
0.576 |
|
2018 |
Gupta R, Audhkhasi K, Jacokes Z, Rozga A, Narayanan S. Modeling multiple time series annotations as noisy distortions of the ground truth: An Expectation-Maximization approach. Ieee Transactions On Affective Computing. 9: 76-89. PMID 29644011 DOI: 10.1109/Taffc.2016.2592918 |
0.76 |
|
2018 |
Ramanarayanan V, Tilsen S, Proctor M, Töger J, Goldstein L, Nayak KS, Narayanan S. Analysis of speech production real-time MRI. Computer Speech & Language. 52: 1-22. DOI: 10.1016/J.Csl.2018.04.002 |
0.834 |
|
2017 |
Nasir M, Baucom BR, Georgiou P, Narayanan S. Predicting couple therapy outcomes based on speech acoustic features. Plos One. 12: e0185123. PMID 28934302 DOI: 10.1371/Journal.Pone.0185123 |
0.329 |
|
2017 |
Töger J, Sorensen T, Somandepalli K, Toutios A, Lingala SG, Narayanan S, Nayak K. Test-retest repeatability of human speech biomarkers from static and real-time dynamic magnetic resonance imaging. The Journal of the Acoustical Society of America. 141: 3323. PMID 28599561 DOI: 10.1121/1.4983081 |
0.631 |
|
2017 |
Van Segbroeck M, Knoll AT, Levitt P, Narayanan S. MUPET-Mouse Ultrasonic Profile ExTraction: A Signal Processing Tool for Rapid and Unsupervised Analysis of Ultrasonic Vocalizations. Neuron. 94: 465-485.e5. PMID 28472651 DOI: 10.1016/J.Neuron.2017.04.005 |
0.327 |
|
2017 |
Hagedorn C, Proctor M, Goldstein L, Wilson SM, Miller B, Gorno-Tempini ML, Narayanan SS. Characterizing Articulation in Apraxic Speech Using Real-Time Magnetic Resonance Imaging. Journal of Speech, Language, and Hearing Research : Jslhr. 1-15. PMID 28314241 DOI: 10.1044/2016_Jslhr-S-15-0112 |
0.836 |
|
2017 |
Lingala SG, Zhu Y, Lim Y, Toutios A, Ji Y, Lo WC, Seiberlich N, Narayanan S, Nayak KS. Feasibility of through-time spiral generalized autocalibrating partial parallel acquisition for low latency accelerated real-time MRI of speech. Magnetic Resonance in Medicine. PMID 28185301 DOI: 10.1002/Mrm.26611 |
0.605 |
|
2017 |
Lander-Portnoy M, Goldstein L, Narayanan SS. Using real time magnetic resonance imaging to measure changes in articulatory behavior due to partial glossectomy. The Journal of the Acoustical Society of America. 142: 2641-2642. DOI: 10.1121/1.5014684 |
0.596 |
|
2017 |
Harper S, Goldstein L, Narayanan SS. Quantifying labial, palatal, and pharyngeal contributions to third formant lowering in American English /ɹ/. The Journal of the Acoustical Society of America. 142: 2582-2582. DOI: 10.1121/1.5014445 |
0.545 |
|
2017 |
Bone D, Lee C, Chaspari T, Gibson J, Narayanan S. Signal Processing and Machine Learning for Mental Health Research and Clinical Applications [Perspectives]. Ieee Signal Processing Magazine. 34: 195-196. DOI: 10.1109/Msp.2017.2718581 |
0.488 |
|
2016 |
Gupta R, Bone D, Lee S, Narayanan S. Analysis of engagement behavior in children during dyadic interactions using prosodic cues. Computer Speech & Language. 37: 47-66. PMID 28713198 DOI: 10.1016/J.Csl.2015.09.003 |
0.331 |
|
2016 |
Gupta R, Audhkhasi K, Lee S, Narayanan S. Detecting paralinguistic events in audio stream using context in features and probabilistic decisions. Computer Speech & Language. 36: 72-92. PMID 28713197 DOI: 10.1016/J.Csl.2015.08.003 |
0.779 |
|
2016 |
Li M, Kim J, Lammert A, Ghosh PK, Ramanarayanan V, Narayanan S. Speaker verification based on the fusion of speech acoustics and inverted articulatory signals. Computer Speech & Language. 36: 196-211. PMID 28496292 DOI: 10.1016/J.Csl.2015.05.003 |
0.82 |
|
2016 |
Xiao B, Huang C, Imel ZE, Atkins DC, Georgiou P, Narayanan SS. A technology prototype system for rating therapist empathy from audio recordings in addiction counseling. Peerj. Computer Science. 2. PMID 28286867 DOI: 10.7717/Peerj-Cs.59 |
0.319 |
|
2016 |
Toutios A, Narayanan SS. Advances in real-time magnetic resonance imaging of the vocal tract for speech science and technology research. Apsipa Transactions On Signal and Information Processing. 5. PMID 27833745 DOI: 10.1017/ATSIP.2016.5 |
0.379 |
|
2016 |
Lingala SG, Zhu Y, Kim YC, Toutios A, Narayanan S, Nayak KS. A fast and flexible MRI system for the study of dynamic vocal tract shaping. Magnetic Resonance in Medicine. PMID 26778178 DOI: 10.1002/Mrm.26090 |
0.585 |
|
2016 |
Ramanarayanan V, Van Segbroeck M, Narayanan SS. Directly data-derived articulatory gesture-like representations retain discriminatory information about phone categories. Computer Speech & Language. 36: 330-346. PMID 26688612 DOI: 10.1016/J.Csl.2015.03.004 |
0.762 |
|
2016 |
Kazemzadeh A, Gibson J, Georgiou P, Lee S, Narayanan S. A Socratic epistemology for verbal emotional intelligence. Peerj. 2016. DOI: 10.7717/Peerj-Cs.40 |
0.739 |
|
2016 |
Kim J, Ramakrishna A, Lee S, Narayanan S. Relations between prominence and articulatory-prosodic cues in emotional speech. Speech Prosody. 893-896. DOI: 10.21437/Speechprosody.2016-183 |
0.367 |
|
2016 |
Lee S, Byrd D, Narayanan S. Representations of electromagnetic articulography data for tongue shaping and vocal tract configuration. Journal of the Acoustical Society of America. 139: 2221-2221. DOI: 10.1121/1.4950660 |
0.356 |
|
2016 |
Papadopoulos P, Tsiartas A, Narayanan S. Long-Term SNR Estimation of Speech Signals in Known and Unknown Channel Conditions. Ieee/Acm Transactions On Audio, Speech, and Language Processing. 24: 2495-2506. DOI: 10.1109/Taslp.2016.2615240 |
0.354 |
|
2016 |
Eyben F, Scherer KR, Schuller BW, Sundberg J, Andre E, Busso C, Devillers LY, Epps J, Laukka P, Narayanan SS, Truong KP. The Geneva Minimalistic Acoustic Parameter Set (GeMAPS) for Voice Research and Affective Computing. Ieee Transactions On Affective Computing. 7: 190-202. DOI: 10.1109/Taffc.2015.2457417 |
0.655 |
|
2015 |
Xiao B, Imel ZE, Georgiou PG, Atkins DC, Narayanan SS. "Rate My Therapist": Automated Detection of Empathy in Drug and Alcohol Counseling via Speech and Language Processing. Plos One. 10: e0143055. PMID 26630392 DOI: 10.1371/Journal.Pone.0143055 |
0.328 |
|
2015 |
Lajonchere CM, Wheeler BY, Valente TW, Kreutzer C, Munson A, Narayanan S, Kazemzadeh A, Cruz R, Martinez I, Schrager SM, Schweitzer L, Chklovski T, Hwang D. Strategies for Disseminating Information on Biomedical Research on Autism to Hispanic Parents. Journal of Autism and Developmental Disorders. PMID 26563948 DOI: 10.1007/S10803-015-2649-5 |
0.732 |
|
2015 |
Lammert AC, Narayanan SS. On Short-Time Estimation of Vocal Tract Length from Formant Frequencies. Plos One. 10: e0132193. PMID 26177102 DOI: 10.1371/Journal.Pone.0132193 |
0.695 |
|
2015 |
Kim J, Kumar N, Tsiartas A, Li M, Narayanan SS. Automatic intelligibility classification of sentence-level pathological speech. Computer Speech & Language. 29: 132-144. PMID 25414544 DOI: 10.1016/J.Csl.2014.02.001 |
0.329 |
|
2015 |
Bone D, Goodwin MS, Black MP, Lee CC, Audhkhasi K, Narayanan S. Applying machine learning to facilitate autism diagnostics: pitfalls and promises. Journal of Autism and Developmental Disorders. 45: 1121-36. PMID 25294649 DOI: 10.1007/S10803-014-2268-6 |
0.766 |
|
2015 |
Metallinou A, Yang Z, Lee C, Busso C, Carnicke S, Narayanan S. The USC CreativeIT database of multimodal dyadic interactions: from speech and full body motion capture to continuous emotional annotations. Language Resources and Evaluation. 50: 497-521. DOI: 10.1007/s10579-015-9300-0 |
0.799 |
|
2014 |
Can D, Gibson J, Vaz C, Georgiou PG, Narayanan SS. Barista: A Framework for Concurrent Speech Processing by USC-SAIL. Proceedings of the ... Ieee International Conference On Acoustics, Speech, and Signal Processing / Sponsored by the Institute of Electrical and Electronics Engineers Signal Processing Society. Icassp (Conference). 2014: 3306-3310. PMID 27610047 DOI: 10.1109/ICASSP.2014.6854212 |
0.312 |
|
2014 |
Lammert A, Goldstein L, Ramanarayanan V, Narayanan S. Gestural Control in the English Past-Tense Suffix: An Articulatory Study Using Real-Time MRI. Phonetica. 71: 229-48. PMID 25997724 DOI: 10.1159/000371820 |
0.831 |
|
2014 |
Bone D, Lee CC, Narayanan S. Robust Unsupervised Arousal Rating: A Rule-Based Framework with Knowledge-Inspired Vocal Features. Ieee Transactions On Affective Computing. 5: 201-213. PMID 25705327 DOI: 10.1109/Taffc.2014.2326393 |
0.531 |
|
2014 |
Kim J, Lammert AC, Ghosh PK, Narayanan SS. Co-registration of speech production datasets from electromagnetic articulography and real-time magnetic resonance imaging. The Journal of the Acoustical Society of America. 135: EL115-21. PMID 25234914 DOI: 10.1121/1.4862880 |
0.78 |
|
2014 |
Narayanan S, Toutios A, Ramanarayanan V, Lammert A, Kim J, Lee S, Nayak K, Kim YC, Zhu Y, Goldstein L, Byrd D, Bresch E, Ghosh P, Katsamanis A, Proctor M. Real-time magnetic resonance imaging and electromagnetic articulography database for speech production research (TC). The Journal of the Acoustical Society of America. 136: 1307. PMID 25190403 DOI: 10.1121/1.4890284 |
0.811 |
|
2014 |
Ramanarayanan V, Lammert A, Goldstein L, Narayanan S. Are articulatory settings mechanically advantageous for speech motor control? Plos One. 9: e104168. PMID 25133544 DOI: 10.1371/Journal.Pone.0104168 |
0.836 |
|
2014 |
Bone D, Lee CC, Black MP, Williams ME, Lee S, Levitt P, Narayanan S. The psychologist as an interlocutor in autism spectrum disorder assessment: insights from a study of spontaneous prosody. Journal of Speech, Language, and Hearing Research : Jslhr. 57: 1162-77. PMID 24686340 DOI: 10.1044/2014_Jslhr-S-13-0062 |
0.559 |
|
2014 |
Bone D, Li M, Black MP, Narayanan SS. Intoxicated Speech Detection: A Fusion Framework with Speaker-Normalized Hierarchical Functionals and GMM Supervectors. Computer Speech & Language. 28. PMID 24376305 DOI: 10.1016/J.Csl.2012.09.004 |
0.33 |
|
2014 |
Skordilis ZI, Ramanarayanan V, Goldstein L, Narayanan SS. Experimental evaluation of the constant tongue volume hypothesis. The Journal of the Acoustical Society of America. 136: 2143-2143. DOI: 10.1121/1.4899732 |
0.767 |
|
2014 |
Lee S, Kim J, Narayanan S. A comparison study of emotional speech articulations using the principal component analysis method. Journal of the Acoustical Society of America. 135: 2199-2199. DOI: 10.1121/1.4877178 |
0.379 |
|
2014 |
Blaylock R, Lammert A, Goldstein L, Narayanan S. Gestural coordination of the velum in singing can be different from coordination in speech. The Journal of the Acoustical Society of America. 135: 2199-2199. DOI: 10.1121/1.4877177 |
0.796 |
|
2014 |
Lammert A, Narayanan S. Development of a parametric basis for vocal tract area function representation from a large speech production database. Journal of the Acoustical Society of America. 135: 2198-2198. DOI: 10.1121/1.4877170 |
0.72 |
|
2014 |
Audhkhasi K, Zavou AM, Georgiou PG, Narayanan SS. Theoretical analysis of diversity in an ensemble of automatic speech recognition systems. Ieee Transactions On Audio, Speech and Language Processing. 22: 711-726. DOI: 10.1109/Taslp.2014.2303295 |
0.764 |
|
2014 |
Malandrakis N, Potamianos A, Hsu KJ, Babeva KN, Feng MC, Davison GC, Narayanan S. Affective language model adaptation via corpus selection. Icassp, Ieee International Conference On Acoustics, Speech and Signal Processing - Proceedings. 4838-4842. DOI: 10.1109/ICASSP.2014.6854521 |
0.308 |
|
2014 |
Lee CC, Katsamanis A, Black MP, Baucom BR, Christensen A, Georgiou PG, Narayanan SS. Computing vocal entrainment: A signal-derived PCA-based quantification scheme with application to affect analysis in married couple interactions. Computer Speech and Language. 28: 518-539. DOI: 10.1016/J.Csl.2012.06.006 |
0.497 |
|
2013 |
Metallinou A, Grossman RB, Narayanan S. Quantifying atypicality in affective facial expressions of children with autism spectrum disorders. Proceedings / Ieee International Conference On Multimedia and Expo. Ieee International Conference On Multimedia and Expo. 2013: 1-6. PMID 25302090 DOI: 10.1109/ICME.2013.6607640 |
0.776 |
|
2013 |
Lammert A, Proctor M, Narayanan S. Interspeaker variability in hard palate morphology and vowel production. Journal of Speech, Language, and Hearing Research : Jslhr. 56: S1924-33. PMID 24687447 DOI: 10.1044/1092-4388(2013/12-0211) |
0.7 |
|
2013 |
Zu Y, Narayanan SS, Kim YC, Nayak K, Bronson-Lowe C, Villegas B, Ouyoung M, Sinha UK. Evaluation of swallow function after tongue cancer treatment using real-time magnetic resonance imaging: a pilot study. Jama Otolaryngology-- Head & Neck Surgery. 139: 1312-9. PMID 24177574 DOI: 10.1001/Jamaoto.2013.5444 |
0.517 |
|
2013 |
Lammert A, Goldstein L, Narayanan S, Iskarous K. Statistical Methods for Estimation of Direct and Differential Kinematics of the Vocal Tract. Speech Communication. 55: 147-161. PMID 24052685 DOI: 10.1016/J.Specom.2012.08.001 |
0.809 |
|
2013 |
Ghosh PK, Narayanan SS. On smoothing articulatory trajectories obtained from Gaussian mixture model based acoustic-to-articulatory inversion. The Journal of the Acoustical Society of America. 134: EL258-64. PMID 23927234 DOI: 10.1121/1.4813590 |
0.539 |
|
2013 |
Ramanarayanan V, Goldstein L, Narayanan SS. Spatio-temporal articulatory movement primitives during speech production: extraction, interpretation, and validation. The Journal of the Acoustical Society of America. 134: 1378-94. PMID 23927134 DOI: 10.1121/1.4812765 |
0.78 |
|
2013 |
Ramanarayanan V, Goldstein L, Byrd D, Narayanan SS. An investigation of articulatory setting using real-time magnetic resonance imaging. The Journal of the Acoustical Society of America. 134: 510-9. PMID 23862826 DOI: 10.1121/1.4807639 |
0.843 |
|
2013 |
Lammert A, Proctor M, Narayanan S. Morphological variation in the adult hard palate and posterior pharyngeal wall. Journal of Speech, Language, and Hearing Research : Jslhr. 56: 521-30. PMID 23690566 DOI: 10.1044/1092-4388(2012/12-0059) |
0.674 |
|
2013 |
Proctor M, Bresch E, Byrd D, Nayak K, Narayanan S. Paralinguistic mechanisms of production in human "beatboxing": a real-time magnetic resonance imaging study. The Journal of the Acoustical Society of America. 133: 1043-54. PMID 23363120 DOI: 10.1121/1.4773865 |
0.83 |
|
2013 |
Zhu Y, Kim YC, Proctor MI, Narayanan SS, Nayak KS. Dynamic 3-D visualization of vocal tract shaping during speech. Ieee Transactions On Medical Imaging. 32: 838-48. PMID 23204279 DOI: 10.1109/Tmi.2012.2230017 |
0.546 |
|
2013 |
Audhkhasi K, Narayanan S. A globally-variant locally-constant model for fusion of labels from multiple diverse experts without using reference labels. Ieee Transactions On Pattern Analysis and Machine Intelligence. 35: 769-83. PMID 22732663 DOI: 10.1109/Tpami.2012.139 |
0.772 |
|
2013 |
Lammert A, Hagedorn C, Proctor M, Goldstein L, Narayanan S. Interspeaker variability in relative tongue size and vowel production. The Journal of the Acoustical Society of America. 134: 4205-4205. DOI: 10.1121/1.4831436 |
0.823 |
|
2013 |
Ramanarayanan V, Goldstein L, Narayanan S. Motor control primitives arising from a dynamical systems model of vocal tract articulation. The Journal of the Acoustical Society of America. 134: 4168-4168. DOI: 10.1121/1.4831280 |
0.774 |
|
2013 |
Parrell B, Lammert A, Narayanan S, Goldstein L. Simulations of sound change resulting from a production-recovery loop. The Journal of the Acoustical Society of America. 134: 4167-4167. DOI: 10.1121/1.4831273 |
0.773 |
|
2013 |
Lu LH, Lammert A, Ramanarayanan V, Narayanan S. A comparative cross-linguistic study of vocal tract shaping in sibilant fricatives in English, Serbian and Mandarin using real-time magnetic resonance imaging. The Journal of the Acoustical Society of America. 133: 3611-3611. DOI: 10.1121/1.4806727 |
0.83 |
|
2013 |
Ramanarayanan V, Lammert A, Goldstein L, Narayanan S. Does articulatory setting provide some mechanical advantage for speech motor action? Proceedings of Meetings On Acoustics. 19. DOI: 10.1121/1.4806708 |
0.837 |
|
2013 |
Hsieh FY, Goldstein L, Byrd D, Narayanan S. Pharyngeal constriction in English diphthong production. Proceedings of Meetings On Acoustics. 19. DOI: 10.1121/1.4806707 |
0.61 |
|
2013 |
Yang Z, Ramanarayanan V, Byrd D, Narayanan S. An examination of the articulatory characteristics of prominence in function and content words using real-time magnetic resonance imaging. The Journal of the Acoustical Society of America. 133: 3567-3567. DOI: 10.1121/1.4806518 |
0.732 |
|
2013 |
Smith C, Proctor M, Iskarous K, Goldstein L, Narayanan S. On distinguishing articulatory configurations and articulatory tasks: Tamil retroflex consonants. Proceedings of Meetings On Acoustics. 19. DOI: 10.1121/1.4800521 |
0.61 |
|
2013 |
Lammert A, Narayanan S. On instantaneous vocal tract length estimation from formant frequencies. Journal of the Acoustical Society of America. 133: 60027-60027. DOI: 10.1121/1.4799393 |
0.704 |
|
2013 |
Malandrakis N, Potamianos A, Iosif E, Narayanan S. Distributional Semantic Models for Affective Text Analysis. Ieee Transactions On Audio, Speech, and Language Processing. 21: 2379-2392. DOI: 10.1109/Tasl.2013.2277931 |
0.384 |
|
2013 |
Busso C, Mariooryad S, Metallinou A, Narayanan S. Iterative Feature Normalization Scheme for Automatic Emotion Detection from Speech. Ieee Transactions On Affective Computing. 4: 386-397. DOI: 10.1109/T-Affc.2013.26 |
0.814 |
|
2013 |
Kazemzadeh A, Lee S, Narayanan S. Fuzzy Logic Models for the Meaning of Emotion Words. Ieee Computational Intelligence Magazine. 8: 34-49. DOI: 10.1109/Mci.2013.2247824 |
0.765 |
|
2013 |
Richard G, Sundaram S, Narayanan S. An overview on perceptually motivated audio indexing and classification. Proceedings of the Ieee. 101: 1939-1954. DOI: 10.1109/JPROC.2013.2251591 |
0.554 |
|
2013 |
Xiao B, Georgiou PG, Lee CC, Baucom B, Narayanan SS. Head motion synchrony and its correlation to affectivity in dyadic interactions. Proceedings - Ieee International Conference On Multimedia and Expo. DOI: 10.1109/ICME.2013.6607480 |
0.455 |
|
2013 |
Black MP, Katsamanis A, Baucom BR, Lee CC, Lammert AC, Christensen A, Georgiou PG, Narayanan SS. Toward automating a human behavioral coding system for married couples' interactions using speech acoustic features. Speech Communication. 55: 1-21. DOI: 10.1016/J.Specom.2011.12.003 |
0.721 |
|
2013 |
Metallinou A, Katsamanis A, Narayanan S. Tracking continuous emotional trends of participants during affective dyadic interactions using body language and speech information. Image and Vision Computing. 31: 137-152. DOI: 10.1016/J.Imavis.2012.08.018 |
0.816 |
|
2013 |
Ettelaie E, Georgiou PG, Narayanan SS. Unsupervised data processing for classifier-based speech translator. Computer Speech and Language. 27: 438-454. DOI: 10.1016/J.Csl.2012.03.001 |
0.797 |
|
2013 |
Schuller B, Steidl S, Batliner A, Burkhardt F, Devillers L, Müller C, Narayanan S. Paralinguistics in speech and language - State-of-the-art and the challenge. Computer Speech and Language. 27: 4-39. DOI: 10.1016/J.Csl.2012.02.005 |
0.424 |
|
2013 |
Shin J, Georgiou PG, Narayanan S. Enabling effective design of multimodal interfaces for speech-to-speech translation system: An empirical study of longitudinal user behaviors over time and user strategies for coping with errors. Computer Speech and Language. 27: 554-571. DOI: 10.1016/J.Csl.2012.02.001 |
0.672 |
|
2013 |
Li M, Han KJ, Narayanan S. Automatic speaker age and gender recognition using acoustic and prosodic level information fusion. Computer Speech and Language. 27: 151-167. DOI: 10.1016/J.Csl.2012.01.008 |
0.656 |
|
2013 |
Tsiartas A, Ghosh P, Georgiou P, Narayanan S. High-quality bilingual subtitle document alignments with application to spontaneous speech translation. Computer Speech & Language. 27: 572-591. DOI: 10.1016/J.Csl.2011.10.002 |
0.66 |
|
2013 |
Rangarajan Sridhar VK, Bangalore S, Narayanan S. Enriching machine-mediated speech-to-speech translation using contextual information. Computer Speech & Language. 27: 492-508. DOI: 10.1016/J.Csl.2011.08.001 |
0.418 |
|
2012 |
Kim YC, Proctor MI, Narayanan SS, Nayak KS. Improved imaging of lingual articulation using real-time multislice MRI. Journal of Magnetic Resonance Imaging : Jmri. 35: 943-8. PMID 22127935 DOI: 10.1002/Jmri.23510 |
0.515 |
|
2012 |
Emken BA, Li M, Thatte G, Lee S, Annavaram M, Mitra U, Narayanan S, Spruijt-Metz D. Recognition of physical activities in overweight Hispanic youth using KNOWME Networks. Journal of Physical Activity & Health. 9: 432-41. PMID 21934162 DOI: 10.1123/Jpah.9.3.432 |
0.31 |
|
2012 |
Kim J, Lammert A, Proctor M, Narayanan S. Co-registration of articulographic and real-time magnetic resonance imaging data for multimodal analysis of rapid speech. The Journal of the Acoustical Society of America. 132: 2090-2090. DOI: 10.1121/1.4755722 |
0.74 |
|
2012 |
Tan QF, Narayanan SS. Novel Variations of Group Sparse Regularization Techniques With Applications to Noise Robust Automatic Speech Recognition. Ieee Transactions On Audio, Speech, and Language Processing. 20: 1337-1346. DOI: 10.1109/Tasl.2011.2178596 |
0.742 |
|
2012 |
Metallinou A, Wollmer M, Katsamanis A, Eyben F, Schuller B, Narayanan S. Context-Sensitive Learning for Enhanced Audiovisual Emotion Classification. Ieee Transactions On Affective Computing. 3: 184-198. DOI: 10.1109/T-Affc.2011.40 |
0.787 |
|
2012 |
Morbini F, Audhkhasi K, Artstein R, Van Segbroeck M, Sagae K, Georgiou P, Traum DR, Narayanan S. A reranking approach for recognition and classification of speech input in conversational dialogue systems. 2012 Ieee Workshop On Spoken Language Technology, Slt 2012 - Proceedings. 49-54. DOI: 10.1109/SLT.2012.6424196 |
0.801 |
|
2012 |
Mitra U, Emken BA, Lee S, Li M, Rozgic V, Thatte G, Vathsangam H, Zois DS, Annavaram M, Narayanan S, Levorato M, Spruijt-Metz D, Sukhatme G. KNOWME: A case study in wireless body area sensor network design. Ieee Communications Magazine. 50: 116-125. DOI: 10.1109/Mcom.2012.6194391 |
0.759 |
|
2012 |
Audhkhasi K, Georgiou PG, Narayanan SS. Analyzing quality of crowd-sourced speech transcriptions of noisy audio for acoustic model adaptation. Icassp, Ieee International Conference On Acoustics, Speech and Signal Processing - Proceedings. 4137-4140. DOI: 10.1109/ICASSP.2012.6288829 |
0.768 |
|
2011 |
Ghosh PK, Narayanan S. Automatic speech recognition using articulatory features from subject-independent acoustic-to-articulatory inversion. The Journal of the Acoustical Society of America. 130: EL251-7. PMID 21974500 DOI: 10.1121/1.3634122 |
0.683 |
|
2011 |
Ghosh PK, Goldstein LM, Narayanan SS. Processing speech signal using auditory-like filterbank provides least uncertainty about articulatory gestures. The Journal of the Acoustical Society of America. 129: 4014-22. PMID 21682422 DOI: 10.1121/1.3573987 |
0.749 |
|
2011 |
Kim YC, Hayes CE, Narayanan SS, Nayak KS. Novel 16-channel receive coil array for accelerated upper airway MRI at 3 Tesla. Magnetic Resonance in Medicine. 65: 1711-7. PMID 21590804 DOI: 10.1002/Mrm.22742 |
0.53 |
|
2011 |
Kim YC, Narayanan SS, Nayak KS. Flexible retrospective selection of temporal resolution in real-time speech MRI using a golden-ratio spiral view order. Magnetic Resonance in Medicine. 65: 1365-71. PMID 21500262 DOI: 10.1002/Mrm.22714 |
0.587 |
|
2011 |
Black MP, Kazemzadeh A, Tepperman J, Narayanan SS. Automatically assessing the ABCs. Acm Transactions On Speech and Language Processing. 7: 1-17. DOI: 10.1145/1998384.1998389 |
0.804 |
|
2011 |
Ramanarayanan V, Goldstein L, Byrd D, Narayanan S. Statistical analysis of constriction task and articulatory posture variables during speech and pausing intervals using real-time magnetic resonance imaging. Journal of the Acoustical Society of America. 130: 2549-2549. DOI: 10.1121/1.3655194 |
0.804 |
|
2011 |
Lammert A, Ramanarayanan V, Goldstein L, Iskarous K, Saltzman E, Nam H, Narayanan S. Statistical estimation of speech kinematics from real-time MRI data. The Journal of the Acoustical Society of America. 130: 2549-2549. DOI: 10.1121/1.3655191 |
0.841 |
|
2011 |
Kim J, Lee S, Narayanan S. Detailed study of articulatory kinematics of critical articulators and dependent articulators of emotional speech Journal of the Acoustical Society of America. 130: 2549-2549. DOI: 10.1121/1.3655190 |
0.368 |
|
2011 |
Parrell B, Lammert A, Goldstein L, Byrd D, Narayanan S. Imaging and quantification of glottal kinematics with ultrasound during speech The Journal of the Acoustical Society of America. 130: 2548-2548. DOI: 10.1121/1.3655189 |
0.779 |
|
2011 |
Hagedorn C, Proctor M, Goldstein L, Narayanan S. Automatic analysis of constriction location in singleton and geminate consonant articulation using real-time magnetic resonance imaging Journal of the Acoustical Society of America. 130: 2548-2548. DOI: 10.1121/1.3655187 |
0.799 |
|
2011 |
Katsamanis A, Bresch E, Goldstein L, Narayanan S. Multipulse articulatory modeling in the Wisconsin x‐ray microbeam speech production database. The Journal of the Acoustical Society of America. 129: 2456-2456. DOI: 10.1121/1.3588073 |
0.823 |
|
2011 |
Tan QF, Georgiou PG, Narayanan S. Enhanced Sparse Imputation Techniques for a Robust Speech Recognition Front-End Ieee Transactions On Audio, Speech, and Language Processing. 19: 2418-2429. DOI: 10.1109/Tasl.2011.2136337 |
0.784 |
|
2011 |
Mower E, Mataric MJ, Narayanan S. A Framework for Automatic Human Emotion Classification Using Emotion Profiles Ieee Transactions On Audio, Speech, and Language Processing. 19: 1057-1070. DOI: 10.1109/Tasl.2010.2076804 |
0.308 |
|
2011 |
Ghosh PK, Tsiartas A, Narayanan S. Robust Voice Activity Detection Using Long-Term Signal Variability Ieee Transactions On Audio, Speech, and Language Processing. 19: 600-613. DOI: 10.1109/Tasl.2010.2052803 |
0.594 |
|
2011 |
Tepperman J, Lee S, Narayanan S, Alwan A. A generative student model for scoring word reading skills Ieee Transactions On Audio, Speech and Language Processing. 19: 348-360. DOI: 10.1109/Tasl.2010.2047812 |
0.711 |
|
2011 |
Tsiartas A, Ghosh P, Georgiou PG, Narayanan S. Bilingual audio-subtitle extraction using automatic segmentation of movie audio Icassp, Ieee International Conference On Acoustics, Speech and Signal Processing - Proceedings. 5624-5627. DOI: 10.1109/ICASSP.2011.5947635 |
0.606 |
|
2011 |
Lee C, Mower E, Busso C, Lee S, Narayanan S. Emotion recognition using a hierarchical binary decision tree approach Speech Communication. 53: 1162-1171. DOI: 10.1016/J.Specom.2011.06.004 |
0.749 |
|
2011 |
Ghosh PK, Narayanan SS. Joint source-filter optimization for robust glottal source estimation in the presence of shimmer and jitter Speech Communication. 53: 98-109. DOI: 10.1016/J.Specom.2010.07.004 |
0.568 |
|
2011 |
Yildirim S, Narayanan S, Potamianos A. Detecting emotional state of a child in a conversational computer game Computer Speech & Language. 25: 29-44. DOI: 10.1016/J.Csl.2009.12.004 |
0.77 |
|
2011 |
Lee CC, Katsamanis A, Black MP, Baucom BR, Georgiou PG, Narayanan SS. Affective state recognition in married couples' interactions using PCA-based vocal entrainment measures with multiple instance learning Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 6975: 31-41. DOI: 10.1007/978-3-642-24571-8_4 |
0.504 |
|
2011 |
Kazemzadeh A, Gibson J, Georgiou PG, Lee S, Narayanan SS. EMO20Q questioner agent Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 6975: 313-314. DOI: 10.1007/978-3-642-24571-8_38 |
0.738 |
|
2011 |
Kazemzadeh A, Lee S, Georgiou PG, Narayanan SS. Emotion twenty questions: Toward a crowd-sourced theory of emotions Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics). 6975: 1-10. DOI: 10.1007/978-3-642-24571-8_1 |
0.722 |
|
2010 |
Ghosh PK, Narayanan SS. Bark frequency transform using an arbitrary order allpass filter. Ieee Signal Processing Letters. 17: 543-546. PMID 24436628 DOI: 10.1109/Lsp.2010.2046192 |
0.478 |
|
2010 |
Bresch E, Narayanan S. Real-time magnetic resonance imaging investigation of resonance tuning in soprano singing. The Journal of the Acoustical Society of America. 128: EL335-41. PMID 21110548 DOI: 10.1121/1.3499700 |
0.745 |
|
2010 |
Ghosh PK, Narayanan S. A generalized smoothness criterion for acoustic-to-articulatory inversion. The Journal of the Acoustical Society of America. 128: 2162-72. PMID 20968386 DOI: 10.1121/1.3455847 |
0.642 |
|
2010 |
Li M, Rozgić V, Thatte G, Lee S, Emken A, Annavaram M, Mitra U, Spruijt-Metz D, Narayanan S. Multimodal physical activity recognition by fusing temporal and cepstral information. Ieee Transactions On Neural Systems and Rehabilitation Engineering : a Publication of the Ieee Engineering in Medicine and Biology Society. 18: 369-80. PMID 20699202 DOI: 10.1109/Tnsre.2010.2053217 |
0.339 |
|
2010 |
Rozgić V, Han KJ, Georgiou PG, Narayanan S. Multimodal speaker segmentation and identification in presence of overlapped speech segments Journal of Multimedia. 5: 322-331. DOI: 10.4304/Jmm.5.4.322-331 |
0.802 |
|
2010 |
Shah D, Han KJ, Narayanan SS. Robust multimodal person recognition using low-complexity audio-visual feature fusion approaches International Journal of Semantic Computing. 4: 155-179. DOI: 10.1142/S1793351X10000985 |
0.575 |
|
2010 |
Ghosh P, Narayanan S. Investigation of the inter‐articulator correlation in acoustic‐to‐articulatory inversion using generalized smoothness criterion. The Journal of the Acoustical Society of America. 128: 2291-2291. DOI: 10.1121/1.3508044 |
0.59 |
|
2010 |
Ghosh P, Narayanan S. Acoustic frame selection for acoustic‐to‐articulatory inversion. The Journal of the Acoustical Society of America. 128: 2290-2290. DOI: 10.1121/1.3508043 |
0.625 |
|
2010 |
Proctor M, Lammert A, Goldstein L, Narayanan S. Temporal analysis of articulatory speech errors using direct image analysis of real time magnetic resonance imaging. The Journal of the Acoustical Society of America. 128: 2289-2289. DOI: 10.1121/1.3508036 |
0.799 |
|
2010 |
Shin J, Georgiou PG, Narayanan S. Towards modeling user behavior in interactions mediated through an automated bidirectional speech translation system Computer Speech and Language. 24: 232-256. DOI: 10.1016/J.Csl.2009.04.008 |
0.668 |
|
2009 |
Ghosh PK, Narayanan SS. Pitch contour stylization using an optimal piecewise polynomial approximation. Ieee Signal Processing Letters. 16: 810-813. PMID 24453471 DOI: 10.1109/Lsp.2009.2025824 |
0.595 |
|
2009 |
Kalinli O, Narayanan S. Prominence Detection Using Auditory Attention Cues and Task-Dependent High Level Information. Ieee Transactions On Audio, Speech, and Language Processing. 17: 1009-1024. PMID 20084186 DOI: 10.1109/Tasl.2009.2014795 |
0.812 |
|
2009 |
Byrd D, Tobin S, Bresch E, Narayanan S. Timing effects of syllable structure and stress on nasals: a real-time MRI examination. Journal of Phonetics. 37: 97-110. PMID 20046892 DOI: 10.1016/J.Wocn.2008.10.002 |
0.803 |
|
2009 |
Ramanarayanan V, Bresch E, Byrd D, Goldstein L, Narayanan SS. Analysis of pausing behavior in spontaneous speech using real-time magnetic resonance imaging of articulation. The Journal of the Acoustical Society of America. 126: EL160-5. PMID 19894792 DOI: 10.1121/1.3213452 |
0.807 |
|
2009 |
Ananthakrishnan S, Narayanan S. Unsupervised Adaptation of Categorical Prosody Models for Prosody Labeling and Speech Recognition. Ieee Transactions On Audio, Speech, and Language Processing. 17: 138-149. PMID 19763253 DOI: 10.1109/Tasl.2008.2005347 |
0.823 |
|
2009 |
Ghosh PK, Narayanan SS. Closure duration analysis of incomplete stop consonants due to stop-stop interaction. The Journal of the Acoustical Society of America. 126: EL1-7. PMID 19603847 DOI: 10.1121/1.3141876 |
0.556 |
|
2009 |
Kim YC, Narayanan SS, Nayak KS. Accelerated three-dimensional upper airway MRI using compressed sensing. Magnetic Resonance in Medicine. 61: 1434-40. PMID 19353675 DOI: 10.1002/Mrm.21953 |
0.549 |
|
2009 |
Bresch E, Narayanan S. Region segmentation in the frequency domain applied to upper airway real-time magnetic resonance images. Ieee Transactions On Medical Imaging. 28: 323-38. PMID 19244005 DOI: 10.1109/Tmi.2008.928920 |
0.754 |
|
2009 |
Tepperman J, Bresch E, Kim Y, Goldstein L, Byrd D, Nayak K, Narayanan S. Articulatory analysis of foreign‐accented speech using real‐time MRI. The Journal of the Acoustical Society of America. 125: 2753-2753. DOI: 10.1121/1.4784612 |
0.828 |
|
2009 |
Lammert A, Bresch E, Byrd D, Goldstein L, Narayanan S. An articulatory study of lexicalized and epenthetic schwa using real time magnetic resonance imaging. The Journal of the Acoustical Society of America. 125: 2569-2569. DOI: 10.1121/1.4783736 |
0.807 |
|
2009 |
Proctor M, Goldstein L, Byrd D, Bresch E, Narayanan S. Articulatory comparison of Tamil liquids and stops using real‐time magnetic resonance imaging. The Journal of the Acoustical Society of America. 125: 2568-2568. DOI: 10.1121/1.4783732 |
0.785 |
|
2009 |
Holsinger E, Ramanarayanan V, Byrd D, Goldstein L, Gorno Tempini ML, Narayanan S. Beyond acoustic data: Characterizing disordered speech using direct articulatory evidence from real time imaging. The Journal of the Acoustical Society of America. 125: 2531-2531. DOI: 10.1121/1.4783545 |
0.804 |
|
2009 |
Bresch E, Goldstein L, Narayanan S. An analysis‐by‐synthesis approach to modeling real‐time MRI articulatory data using the task dynamic application framework. The Journal of the Acoustical Society of America. 125: 2498-2498. DOI: 10.1121/1.4783351 |
0.81 |
|
2009 |
Kim Y, Narayanan S, Nayak KS. Rapid three‐dimensional magnetic resonance imaging of vocal tract shaping using compressed sensing. The Journal of the Acoustical Society of America. 125: 2498-2498. DOI: 10.1121/1.4783350 |
0.594 |
|
2009 |
Mower E, Mataric MJ, Narayanan S. Human perception of audio-visual synthetic character emotion expression in the presence of ambiguous and conflicting information Ieee Transactions On Multimedia. 11: 843-855. DOI: 10.1109/Tmm.2009.2021722 |
0.321 |
|
2009 |
Chu S, Narayanan S, Kuo CJ. Environmental Sound Recognition With Time–Frequency Audio Features Ieee Transactions On Audio, Speech, and Language Processing. 17: 1142-1158. DOI: 10.1109/Tasl.2009.2017438 |
0.393 |
|
2009 |
Busso C, Lee S, Narayanan S. Analysis of Emotionally Salient Aspects of Fundamental Frequency for Emotion Detection Ieee Transactions On Audio, Speech, and Language Processing. 17: 582-596. DOI: 10.1109/Tasl.2008.2009578 |
0.733 |
|
2009 |
Yildirim S, Narayanan S. Automatic detection of disfluency boundaries in spontaneous speech of children using audio-visual information Ieee Transactions On Audio, Speech and Language Processing. 17: 2-12. DOI: 10.1109/Tasl.2008.2006728 |
0.774 |
|
2009 |
Sethy A, Georgiou PG, Ramabhadran B, Narayanan S. An iterative relative entropy minimization-based data selection approach for n-gram model adaptation Ieee Transactions On Audio, Speech and Language Processing. 17: 13-23. DOI: 10.1109/Tasl.2008.2006654 |
0.794 |
|
2009 |
Price P, Tepperman J, Iseli M, Duong T, Black M, Wang S, Boscardin CK, Heritage M, David Pearson P, Narayanan S, Alwan A. Assessment of emerging reading skills in young native speakers and language learners Speech Communication. 51: 968-984. DOI: 10.1016/J.Specom.2009.05.001 |
0.804 |
|
2009 |
Rangarajan Sridhar VK, Bangalore S, Narayanan S. Combining lexical, syntactic and prosodic cues for improved online dialog act tagging Computer Speech & Language. 23: 407-422. DOI: 10.1016/J.Csl.2008.12.001 |
0.385 |
|
2008 |
Sridhar VK, Bangalore S, Narayanan SS. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework. Ieee Transactions On Audio, Speech, and Language Processing. 16: 797-811. PMID 19603083 DOI: 10.1109/TASL.2008.917071 |
0.312 |
|
2008 |
Kalinli O, Narayanan S. A top-down auditory attention model for learning task dependent influences on prominence detection in speech. Proceedings of the ... Ieee International Conference On Acoustics, Speech, and Signal Processing / Sponsored by the Institute of Electrical and Electronics Engineers Signal Processing Society. Icassp (Conference). 2008: 3981-3984. PMID 19194488 DOI: 10.1109/ICASSP.2008.4518526 |
0.801 |
|
2008 |
Ananthakrishnan S, Narayanan S. Fine-grained pitch accent and boundary tone labeling with parametric F0 features. Proceedings of the ... Ieee International Conference On Acoustics, Speech, and Signal Processing / Sponsored by the Institute of Electrical and Electronics Engineers Signal Processing Society. Icassp (Conference). 2008: 4545-4548. PMID 19180228 DOI: 10.1109/ICASSP.2008.4518667 |
0.78 |
|
2008 |
Ananthakrishnan S, Ghosh P, Narayanan S. Automatic classification of question turns in spontaneous speech using lexical and prosodic evidence. Proceedings of the ... Ieee International Conference On Acoustics, Speech, and Signal Processing / Sponsored by the Institute of Electrical and Electronics Engineers Signal Processing Society. Icassp (Conference). 4518782: 5005-5008. PMID 19177175 DOI: 10.1109/ICASSP.2008.4518782 |
0.831 |
|
2008 |
Ananthakrishnan S, Narayanan S. A novel algorithm for unsupervised prosodic language model adaptation. Proceedings of the ... Ieee International Conference On Acoustics, Speech, and Signal Processing / Sponsored by the Institute of Electrical and Electronics Engineers Signal Processing Society. Icassp (Conference). 4181-4184. PMID 19132142 DOI: 10.1109/ICASSP.2008.4518576 |
0.803 |
|
2008 |
Ananthakrishnan S, Narayanan SS. Automatic Prosodic Event Detection Using Acoustic, Lexical, and Syntactic Evidence. Ieee Transactions On Audio, Speech, and Language Processing. 16: 216-228. PMID 19122857 DOI: 10.1109/Tasl.2007.907570 |
0.807 |
|
2008 |
Bulut M, Narayanan S. On the robustness of overall F0-only modifications to the perception of emotions in speech. The Journal of the Acoustical Society of America. 123: 4547-58. PMID 18537403 DOI: 10.1121/1.2909562 |
0.611 |
|
2008 |
Narayanan S, Kazemzadeh A, Black M, Tepperman J, Lee S, Alwan A. Letter sound and letter name recognition for automated literacy assessment of young children The Journal of the Acoustical Society of America. 123: 3327-3327. DOI: 10.1121/1.2933823 |
0.812 |
|
2008 |
Silva J, Narayanan S. Upper Bound Kullback–Leibler Divergence for Transient Hidden Markov Models Ieee Transactions On Signal Processing. 56: 4176-4188. DOI: 10.1109/Tsp.2008.924137 |
0.328 |
|
2008 |
Han KJ, Kim S, Narayanan SS. Strategies to improve the robustness of agglomerative hierarchical clustering under data source variation for speaker diarization Ieee Transactions On Audio, Speech and Language Processing. 16: 1590-1601. DOI: 10.1109/Tasl.2008.2002085 |
0.585 |
|
2008 |
Tepperman J, Narayanan S. Using Articulatory Representations to Detect Segmental Errors in Nonnative Pronunciation Ieee Transactions On Audio, Speech, and Language Processing. 16: 8-22. DOI: 10.1109/Tasl.2007.909330 |
0.672 |
|
2008 |
Bresch E, Kim Y, Nayak K, Byrd D, Narayanan S. Seeing speech: Capturing vocal tract shaping using real-time magnetic resonance imaging [Exploratory DSP] Ieee Signal Processing Magazine. 25: 123-132. DOI: 10.1109/Msp.2008.918034 |
0.811 |
|
2008 |
Busso C, Bulut M, Lee C, Kazemzadeh A, Mower E, Kim S, Chang JN, Lee S, Narayanan SS. IEMOCAP: interactive emotional dyadic motion capture database Language Resources and Evaluation. 42: 335-359. DOI: 10.1007/s10579-008-9076-6 |
0.771 |
|
2007 |
Wang D, Narayanan S. An Acoustic Measure for Word Prominence in Spontaneous Speech. Ieee Transactions On Audio, Speech, and Language Processing. 15: 690-701. PMID 20454538 DOI: 10.1109/Tasl.2006.881703 |
0.709 |
|
2007 |
Wang D, Narayanan SS. Robust Speech Rate Estimation for Spontaneous Speech. Ieee Transactions On Audio, Speech, and Language Processing. 15: 2190-2201. PMID 20428476 DOI: 10.1109/Tasl.2007.905178 |
0.655 |
|
2007 |
Sundaram S, Narayanan S. Automatic acoustic synthesis of human-like laughter. The Journal of the Acoustical Society of America. 121: 527-35. PMID 17297806 DOI: 10.1121/1.2390679 |
0.655 |
|
2007 |
Bresch E, Narayanan S. Robust unsupervised extraction of vocal tract variables from midsagittal real-time magnetic resonance image sequences using region segmentation The Journal of the Acoustical Society of America. 122: 3030. DOI: 10.1121/1.2942840 |
0.759 |
|
2007 |
Kim S, Lee S, Narayanan S. On voicing activity under the control of emotion and loudness Journal of the Acoustical Society of America. 122: 3019-3019. DOI: 10.1121/1.2942788 |
0.337 |
|
2007 |
Busso C, Narayanan SS. Interrelation Between Speech and Facial Gestures in Emotional Utterances: A Single Subject Study Ieee Transactions On Audio, Speech and Language Processing. 15: 2331-2347. DOI: 10.1109/Tasl.2007.905145 |
0.694 |
|
2007 |
Busso C, Deng Z, Grimm M, Neumann U, Narayanan S. Rigid Head Motion in Expressive Speech Animation: Analysis and Synthesis Ieee Transactions On Audio, Speech and Language Processing. 15: 1075-1086. DOI: 10.1109/Tasl.2006.885910 |
0.665 |
|
2007 |
Shin J, Georgiou PG, Narayanan S. Analyzing the multimodal behaviors of users of a speech-to-speech translation device by using concept matching scores 2007 Ieee 9th International Workshop On Multimedia Signal Processing, Mmsp 2007 - Proceedings. 259-263. DOI: 10.1109/MMSP.2007.4412867 |
0.656 |
|
2007 |
Kim S, Georgiou PG, Lee S, Narayanan S. Real-time emotion detection system using speech: Multi-modal fusion of different timescale features 2007 Ieee 9th International Workshop On Multimedia Signal Processing, Mmsp 2007 - Proceedings. 48-51. DOI: 10.1109/MMSP.2007.4412815 |
0.302 |
|
2007 |
Alwan A, Bai Y, Black M, Casey L, Gerosa M, Heritage M, Iseli M, Jones B, Kazemzadeh A, Lee S, Narayanan S, Price P, Tepperman J, Wang S. A system for technology based assessment of language and literacy in young children: The role of multiple information sources 2007 Ieee 9th International Workshop On Multimedia Signal Processing, Mmsp 2007 - Proceedings. 26-30. DOI: 10.1109/MMSP.2007.4412810 |
0.803 |
|
2007 |
Ananthakrishnan S, Narayanan S. Improved speech recognition using acoustic and lexical correlates of pitch accent in A N-best rescoring framework Icassp, Ieee International Conference On Acoustics, Speech and Signal Processing - Proceedings. 4: IV873-IV876. DOI: 10.1109/ICASSP.2007.367209 |
0.801 |
|
2007 |
Grimm M, Kroschel K, Mower E, Narayanan S. Primitives-based evaluation and estimation of emotions in speech Speech Communication. 49: 787-800. DOI: 10.1016/J.Specom.2007.01.010 |
0.334 |
|
2007 |
Kwon S, Narayanan S. Robust speaker identification based on selective use of feature vectors Pattern Recognition Letters. 28: 85-89. DOI: 10.1016/J.Patrec.2006.06.009 |
0.681 |
|
2006 |
Deng Z, Neumann U, Lewis JP, Kim TY, Bulut M, Narayanan S. Expressive facial animation synthesis by learning speech coarticulation and expression spaces. Ieee Transactions On Visualization and Computer Graphics. 12: 1523-34. PMID 17073374 DOI: 10.1109/Tvcg.2006.90 |
0.617 |
|
2006 |
Bresch E, Nielsen J, Nayak K, Narayanan S. Synchronized and noise-robust audio recordings during realtime magnetic resonance imaging scans. The Journal of the Acoustical Society of America. 120: 1791-4. PMID 17069275 DOI: 10.1121/1.2335423 |
0.807 |
|
2006 |
Narayanan S, Bresch E, Tobin S, Byrd D, Nayak K, Nielsen J. Resonance tuning in soprano singing and vocal tract shaping: Comparison of sung and spoken vowels The Journal of the Acoustical Society of America. 119: 3305-3305. DOI: 10.1121/1.4786284 |
0.783 |
|
2006 |
Tobin S, Byrd D, Bresch E, Narayanan S. Syllable structure effects on velum‐oral coordination evaluated with real‐time MRI The Journal of the Acoustical Society of America. 119: 3302-3302. DOI: 10.1121/1.4786264 |
0.744 |
|
2006 |
Silva J, Narayanan S. Average divergence distance as a statistical discrimination measure for hidden Markov models Ieee Transactions On Audio, Speech and Language Processing. 14: 890-906. DOI: 10.1109/Tsa.2005.858059 |
0.372 |
|
2006 |
Sethy A, Narayanan S, Parthasarthy S. A split lexicon approach for improved recognition of spoken names Speech Communication. 48: 1126-1136. DOI: 10.1016/J.Specom.2006.03.005 |
0.784 |
|
2005 |
Lee S, Yildirim S, Bulut M, Kazemzadeh A, Narayanan S. Some articulatory details of emotional speech The Journal of the Acoustical Society of America. 118: 2025-2025. DOI: 10.1121/1.4809086 |
0.825 |
|
2005 |
Narayanan S. Imaging for understanding speech communication: Advances and challenges The Journal of the Acoustical Society of America. 117: 2501-2501. DOI: 10.1121/1.4788049 |
0.389 |
|
2005 |
Kazemzadeh A, Lee S, Narayanan S. Recognition of voice onset time for use in pronunciation modeling Journal of the Acoustical Society of America. 118: 2026-2026. DOI: 10.1121/1.4785771 |
0.778 |
|
2005 |
Chu S, Narayanan S, Jay Kuo CC. Towards parameter-free classification of sound effects in movies Proceedings of Spie - the International Society For Optical Engineering. 5909: 1-9. DOI: 10.1117/12.616217 |
0.378 |
|
2005 |
Kwon S, Narayanan S. Unsupervised speaker indexing using generic models Ieee Transactions On Speech and Audio Processing. 13: 1004-1013. DOI: 10.1109/Tsa.2005.851981 |
0.707 |
|
2005 |
Potamianos A, Narayanan S, Riccardi G. Adaptive categorical understanding for spoken dialogue systems Ieee Transactions On Speech and Audio Processing. 13: 321-329. DOI: 10.1109/Tsa.2005.845836 |
0.431 |
|
2005 |
Busso C, Deng Z, Neumann U, Narayanan S. Natural head motion synthesis driven by acoustic prosodic features Computer Animation and Virtual Worlds. 16: 283-290. DOI: 10.1002/Cav.80 |
0.651 |
|
2004 |
Narayanan S, Nayak K, Lee S, Sethy A, Byrd D. An approach to real-time magnetic resonance imaging for speech production. The Journal of the Acoustical Society of America. 115: 1771-6. PMID 15101655 DOI: 10.1121/1.1652588 |
0.826 |
|
2004 |
Lee CM, Yildirim S, Bulut M, Busso C, Kazemzadeh A, Lee S, Narayanan S. Effects of emotion on different phoneme classes The Journal of the Acoustical Society of America. 116: 2481-2481. DOI: 10.1121/1.4784911 |
0.804 |
|
2004 |
Sundaram S, Narayanan S. Analysis and synthesis of laughter The Journal of the Acoustical Society of America. 116: 2481-2481. DOI: 10.1121/1.4784910 |
0.628 |
|
2004 |
Yildirim S, Lee S, Lee CM, Bulut M, Busso C, Kazemzadeh E, Narayanan S. Study of acoustic correlates associated with emotional speech The Journal of the Acoustical Society of America. 116: 2481-2481. DOI: 10.1121/1.4784909 |
0.815 |
|
2004 |
Bulut M, Yildirim S, Busso C, Lee CM, Kazemzadeh E, Lee S, Narayanan S. Emotion to emotion speech conversion in phoneme level The Journal of the Acoustical Society of America. 116: 2481-2481. DOI: 10.1121/1.4784908 |
0.814 |
|
2004 |
Tepperman J, Narayanan S. Spoken name pronunciation evaluation The Journal of the Acoustical Society of America. 116: 2480-2480. DOI: 10.1121/1.4784904 |
0.698 |
|
2004 |
Lee S, Narayanan S, Byrd D. A developmental acoustic characterization of English diphthongs Journal of the Acoustical Society of America. 115: 2628-2628. DOI: 10.1121/1.4784850 |
0.309 |
|
2004 |
Li Y, Narayanan SS, Kuo C. Adaptive speaker identification with audiovisual cues for movie content analysis Pattern Recognition Letters. 25: 777-791. DOI: 10.1016/J.Patrec.2004.01.004 |
0.306 |
|
2003 |
Lee S, Narayanan S, Byrd D. Asymmetric kinematic changes in speaking rate explored with FDA Journal of the Acoustical Society of America. 114: 2393-2393. DOI: 10.1121/1.4777982 |
0.398 |
|
2003 |
Sethy A, Narayanan S, Lee S, Byrd D. Toward data‐driven modeling of dynamic vocal‐tract data The Journal of the Acoustical Society of America. 114: 2392-2393. DOI: 10.1121/1.4777949 |
0.783 |
|
2003 |
Potamianos A, Narayanan S. Robust recognition of children's speech Ieee Transactions On Speech and Audio Processing. 11: 603-616. DOI: 10.1109/Tsa.2003.818026 |
0.42 |
|
2002 |
Narayanan S, Potamianos A. Creating conversational interfaces for children Ieee Transactions On Speech and Audio Processing. 10: 65-78. DOI: 10.1109/89.985544 |
0.393 |
|
2000 |
Espy-Wilson CY, Boyce SE, Jackson M, Narayanan S, Alwan A. Acoustic modeling of American English /r/. The Journal of the Acoustical Society of America. 108: 343-56. PMID 10923897 DOI: 10.1121/1.429469 |
0.571 |
|
2000 |
Rose R, Narayanan S, Parthasarathy S, Rosenberg A, Gajic B. Automatic speech recognition for mobile hand‐held devices Journal of the Acoustical Society of America. 108: 2575-2575. DOI: 10.1121/1.4743568 |
0.385 |
|
2000 |
Narayanan S, Alwan A. Noise source models for fricative consonants Ieee Transactions On Speech and Audio Processing. 8: 328-344. DOI: 10.1109/89.841215 |
0.588 |
|
1999 |
Narayanan S, Byrd D, Kaun A. Geometry, kinematics, and acoustics of Tamil liquid consonants Journal of the Acoustical Society of America. 106: 1993-2007. PMID 10530023 DOI: 10.1121/1.427946 |
0.407 |
|
1999 |
Lee S, Potamianos A, Narayanan S. Acoustics of children's speech: developmental changes of temporal and spectral parameters. Journal of the Acoustical Society of America. 105: 1455-1468. PMID 10089598 DOI: 10.1121/1.426686 |
0.309 |
|
1998 |
Espy‐Wilson CY, Boyce SE, Jackson MTT, Alwan A, Narayanan S. Modeling the subglottal space for American English /r/ The Journal of the Acoustical Society of America. 104: 1819-1819. DOI: 10.1121/1.423451 |
0.542 |
|
1997 |
Narayanan SS, Alwan AA, Haker K. Toward articulatory-acoustic models for liquid approximants based on MRI and EPG data. Part I. The laterals. The Journal of the Acoustical Society of America. 101: 1064-77. PMID 9035398 DOI: 10.1121/1.418030 |
0.515 |
|
1997 |
Lee S, Potamianos A, Narayanan S. Analysis of children's speech: Pitch and formant frequency Journal of the Acoustical Society of America. 101: 3194-3194. DOI: 10.1121/1.419259 |
0.366 |
|
1996 |
Bangayan P, Alwan A, Narayanan S. A transmission‐line model of the lateral approximants The Journal of the Acoustical Society of America. 100: 2663-2663. DOI: 10.1121/1.417474 |
0.54 |
|
1994 |
Narayanan S, Alwan A, Haker K. Three‐dimensional tongue shapes of sibilant fricatives The Journal of the Acoustical Society of America. 96: 3342-3342. DOI: 10.1121/1.410664 |
0.543 |
|