Lee et al., "LLMs Can Infer Political Alignment from Online Conversations" (2026)
2026-03-11 → 2026-04-03
Byunghwee Lee, Sangyeon Kim, Filippo Menczer, Yong-Yeol Ahn, Haewoon Kwak, and Jisun An, Submitted (2026)
arXiv
@misc{lee2026llms,
  author = {Byunghwee Lee and Sangyeon Kim and Filippo Menczer and Yong-Yeol Ahn and Haewoon Kwak and Jisun An},
  title = {LLMs Can Infer Political Alignment from Online Conversations},
  year = {2026},
  eprint = {2603.11253},
  archivePrefix = {arXiv},
  primaryClass = {cs.SI},
}
LLMs can predict users’ political leanings from online conversations on platforms like DebateOrg and Reddit. The models identify words that strongly signal political alignment even though the words themselves are not overtly political. Combined with the wide availability of public social media data, this capability represents a meaningful privacy vulnerability and highlights the need for awareness of how LLMs exploit sociocultural correlations.