Conversations whose topics are only locally contextual often produce incoherent topic modeling results with standard methods. Splitting a conversation into its individual utterances avoids this problem, but the resulting data sparsity requires different methods to be considered. We implemented baseline bag-of-words topic modeling methods for regular and short text, as well as topic modeling methods based on transformer-derived sentence embeddings. These models were evaluated on topic coherence and word embedding similarity. Each method was trained on single utterances, on segments of the conversation, and on the full conversation. The results showed that utterance-level and segment-level data combined with sentence embedding methods perform better than non-sentence-embedding methods or conversation-level data. Among the sentence embedding methods, clustering with HDBSCAN performed best. We suspect that ignoring noisy utterances explains the better topic coherence and the relatively large improvement in topic word similarity.
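
To illustrate the utterance-level approach, the sketch below embeds individual utterances with a transformer-based sentence encoder and clusters the embeddings with HDBSCAN. This is a minimal illustration, not the authors' exact pipeline; the encoder name ("all-MiniLM-L6-v2"), the example utterances, and the clustering parameters are assumptions for demonstration.

    from sentence_transformers import SentenceTransformer
    import hdbscan

    # Illustrative helpline-style utterances; in practice, thousands of
    # utterances from many conversations would be clustered.
    utterances = [
        "I have been feeling very low lately.",
        "It started after I lost my job.",
        "Thank you for calling, how can I help you today?",
    ]

    # Transformer-based sentence embedding per utterance
    # (the model name is an assumption, not the one used in the paper).
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = encoder.encode(utterances)

    # Density-based clustering; HDBSCAN assigns label -1 to low-density points,
    # which corresponds to leaving noisy utterances out of any topic.
    clusterer = hdbscan.HDBSCAN(min_cluster_size=2, metric="euclidean")
    labels = clusterer.fit_predict(embeddings)

    for label, text in zip(labels, utterances):
        print(label, text)

Topic words can then be extracted per cluster, for example by scoring terms within each cluster against the rest of the corpus.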

https://doi.org/10.1016/j.teler.2024.100126
Telematics and Informatics Reports
Collaboration agreement Stichting 113 Zelfmoordpreventie / CWI
Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands

Salmi, S., van der Mei, R., Mérelle, S., & Bhulai, S. (2024). Topic modeling for conversations for mental health helplines with utterance embedding. Telematics and Informatics Reports, 13, 100126:1–100126:7. doi:10.1016/j.teler.2024.100126