Publications

For the most updated version of my publications page, see my Google Scholar profile.

I Still Need Your Help: Online Information Seeking Behavior Among International Students on Reddit

Published in PLOS ONE, 2026

This study analyzes international students’ online information-seeking behavior on the f1visa subreddit before and during COVID-19, showing how the pandemic shifted discussions from employment to shared crisis-related concerns and reshaped online social capital through both informational and emotional support.

Recommended citation: Youm, S., Han, C., Yoo, H., Jang, S. H., & Dorr, B. (2026). I Still Need Your Help: Online Information Seeking Behavior Among International Students on Reddit. PLOS ONE. https://philz0918.github.io/

That Ain’t Right: Assessing LLM Performance on QA in African American and West African English Dialects

Published in Proceedings of the 9th Widening NLP Workshop, Conference on Empirical Methods in Natural Language Processing (EMNLP), 2025

This study poses equivalent QA prompts to multiple LLMs across English dialects, including African American and West African English, and finds a significant performance drop for one of the dialects.

Recommended citation: Coggins, W., McKenzie, J., Youm, S., Mummaleti, P., Gilbert, J., Ragan, E., & Dorr, B. (2025). That Ain’t Right: Assessing LLM Performance on QA in African American and West African English Dialects. In Proceedings of the 9th Widening NLP Workshop, Conference on Empirical Methods in Natural Language Processing (EMNLP), Suzhou, China, pp. 123–129. https://aclanthology.org/2025.winlp-main.21/

DETQUS: Decomposition-Enhanced Transformers for QUery-focused Summarization

Published in Proceedings of the 2025 Conference of the NAACL: Human Language Technologies (Volume 1: Long Papers), 2025

DETQUS introduces a decomposition-enhanced transformer system for query-focused table-to-text summarization, using column pruning to reduce input size and achieving gains in ROUGE-L over prior approaches.

Recommended citation: Khan, Y., Wu, X., Youm, S., Ho, J., Shaikh, A., Garciga, J., Sharma, R., & Dorr, B. (2025). DETQUS: Decomposition-Enhanced Transformers for QUery-focused Summarization. In Proceedings of the 2025 Conference of the NAACL: Human Language Technologies (Volume 1: Long Papers). https://aclanthology.org/2025.naacl-long.138/

Balancing Transparency and Accuracy: A Comparative Analysis of Rule-Based and Deep Learning Models in Political Bias Classification

Published in Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024), 2024

This study compares rule-based and deep learning systems for classifying political bias in news, examining their transparency, accuracy, and robustness to unseen data.

Recommended citation: Martinez, M., Schmer-Galunder, S., Liu, Z., Youm, S., Jayawaeera, C., & Dorr, B. (2024). Balancing Transparency and Accuracy: A Comparative Analysis of Rule-Based and Deep Learning Models in Political Bias Classification. In Proceedings of the Second Workshop on Social Influence in Conversations (SICon 2024), pp. 102–115. https://aclanthology.org/2024.sicon-1.7/

DAHRS: Divergence-Aware Hallucination-Remediated SRL Projection

Published in International Conference on Applications of Natural Language to Information Systems (NLDB), 2024

This paper proposes a hallucination-remediation technique for multilingual semantic role labeling (SRL) that combines linguistically informed alignment with greedy projection strategies.

Recommended citation: Youm, S., Mather, B., Jayawaeera, C., Prada, J., & Dorr, B. (2024). DAHRS: Divergence-Aware Hallucination-Remediated SRL Projection. In International Conference on Applications of Natural Language to Information Systems (NLDB). https://doi.org/10.1007/978-3-031-70239-6_29

Modeling Bilingual Sentence Processing: Evaluating RNN and Transformer Architectures for Cross-Language Structural Priming

Published in Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024), 2024

This paper evaluates RNN and Transformer models for replicating cross-language structural priming in Chinese-English, showing Transformers outperform RNNs in producing primed sentence structures.

Recommended citation: Zhang, D., Xiao, B., Gao, C., Youm, S., & Dorr, B. (2024). Modeling Bilingual Sentence Processing: Evaluating RNN and Transformer Architectures for Cross-Language Structural Priming. In Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024). https://aclanthology.org/2024.mrl-1.8/

Anti-Asian discourse in Quora: Comparison of before and during the COVID-19 pandemic with machine- and deep-learning approaches

Published in Race and Justice, 2023

This study compares anti-Asian discourse before and during the COVID-19 pandemic using machine- and deep-learning approaches on Quora data.

Recommended citation: Jang, S. H., Youm, S., & Yi, Y. J. (2023). Anti-Asian discourse in Quora: Comparison of before and during the COVID-19 pandemic with machine- and deep-learning approaches. Race and Justice, 13(1), 55–79. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9863840/

Understanding coronavirus disease 2019 (COVID-19) vaccine hesitancy: Evidence from the community-driven knowledge site Quora

Published in Digital Health, 2022

This study examines COVID-19 vaccine hesitancy over time and analyzes public discourse on Quora using Word2Vec and sentiment analysis.

Recommended citation: Jang, S. H., Gerend, M. A., Youm, S., & Yi, Y. J. (2022). Understanding coronavirus disease 2019 (COVID-19) vaccine hesitancy: Evidence from the community-driven knowledge site Quora. Digital Health, 8. https://journals.sagepub.com/doi/full/10.1177/20552076221145426

Analysis of Fire Accident Factors on Construction Sites Using Web Crawling and Deep Learning Approach

Published in Sustainability, 2021

This study uses web crawling and deep learning to analyze patterns and causes of fire accidents on construction sites using news media data.

Recommended citation: Kim, J., Youm, S., Shan, Y., & Kim, J. (2021). Analysis of Fire Accident Factors on Construction Sites Using Web Crawling and Deep Learning Approach. Sustainability, 13(21), 11694. https://www.mdpi.com/2071-1050/13/21/11694