Probabilistic topic models are based on the idea that documents are mixtures of topics, where a topic is a probability distribution over words. An important feature of these models is the assumption that the only observable variables are the counts of words and their co-occurrences with other words. Representing the content of words and documents with probabilistic topics has one distinct advantage over purely spatial representations such as the older Latent Semantic Analysis (LSA): each topic is individually interpretable, providing a probability distribution over words that picks out a coherent semantic cluster. In this research, we propose to process a set of de-identified transcripts of subjects using a probabilistic topic modeling technique known as Latent Dirichlet Allocation (LDA) to create individual semantic models of mental states. Eventually, the modeled mental states will be derived using Markov chain models. To date, we have created a software project called “Psymantics” capable of applying the full processing pipeline to obtain topic models from a set of de-identified transcripts of subjects.
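The Psymantics pipeline itself is not described in detail here. As an illustration of the underlying technique, the core of LDA inference can be sketched with a minimal collapsed Gibbs sampler, one common estimation method for LDA (the authors' actual implementation and hyperparameters are not specified in this abstract):

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, alpha=0.1, beta=0.01, iters=200, seed=0):
    """Minimal collapsed Gibbs sampler for LDA.

    docs: list of token lists (e.g. tokenized transcripts).
    Returns (topic_word_counts, doc_topic_counts); normalizing these
    count tables yields the topic and document distributions.
    """
    rng = random.Random(seed)
    vocab_size = len({w for d in docs for w in d})
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # tokens per topic
    z = []                                             # topic of each token
    # Randomly initialize topic assignments and count tables.
    for di, d in enumerate(docs):
        zs = []
        for w in d:
            t = rng.randrange(n_topics)
            zs.append(t)
            ndk[di][t] += 1
            nkw[t][w] += 1
            nk[t] += 1
        z.append(zs)
    for _ in range(iters):
        for di, d in enumerate(docs):
            for wi, w in enumerate(d):
                t = z[di][wi]
                # Remove the token's current assignment from the counts.
                ndk[di][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # Full conditional: p(topic k) ∝ (n_dk + α)(n_kw + β)/(n_k + Vβ).
                weights = [
                    (ndk[di][k] + alpha) * (nkw[k][w] + beta)
                    / (nk[k] + vocab_size * beta)
                    for k in range(n_topics)
                ]
                # Sample a new topic proportionally to the weights.
                r = rng.random() * sum(weights)
                for k, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        t = k
                        break
                z[di][wi] = t
                ndk[di][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return nkw, ndk
```

In practice a library implementation (e.g. scikit-learn's `LatentDirichletAllocation` or gensim's `LdaModel`) would be used on real transcript corpora; the sketch above only makes the generative assumptions of the model concrete.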
Publication status: Published - 25 Oct 2019
Event: Association for the Psychoanalysis of Culture & Society 2019 Annual Conference (APCS 2019): Displacement: Precarity & Community
Location: Rutgers University Inn and Conference Center, New Brunswick, United States
Duration: 25 Oct 2019 → 27 Oct 2019