Sudeep Bhatia
University of Pennsylvania

What do people know about real-world risk sources, and how can this knowledge be uncovered, quantified, and used to study the psychological processes involved in judging risk? We addressed this question with vector space models of semantic cognition (Turney & Pantel, 2010, Journal of Artificial Intelligence Research). These models specify human knowledge representations using high-dimensional vectors trained on large-scale natural language data. Appropriately trained representations not only describe behavior in semantic memory tasks, but have also recently been shown to predict a wide range of high-level judgment phenomena, including probability judgment, factual judgment, forecasting, and social judgment (Bhatia, 2017, Psychological Review; Bhatia, 2017, Cognition). These successes suggest that vector space models can also be used to specify knowledge representations for real-world risk sources, and thus to predict participants' risk judgments.

We tested this in three studies that elicited riskiness ratings from over 1,000 participants for over 400 different naturalistic risk sources. Participants also rated the risk sources on nine key psychometric dimensions of risk perception (Slovic, 1987, Science). We uncovered knowledge representations for the risk sources by training the Word2Vec vector space model (Mikolov et al., 2013, NIPS) on a very large database of news articles. A support vector regression mapping the 300-dimensional Word2Vec vectors onto participant ratings predicted out-of-sample risk ratings with R2 between 0.55 and 0.75 across the three studies. We compared these accuracy levels to those obtained using participant ratings on the nine psychometric risk dimensions, and found that the two approaches were roughly equivalent, even though the Word2Vec vectors required no additional participant data. Importantly, the highest accuracy was obtained by combining the Word2Vec and psychometric approaches, which yielded R2 values between 0.8 and 0.9, implying near-perfect prediction of out-of-sample risk ratings.

We modified the above technique to uncover the conceptual associates of risk, as revealed by word use in natural language. Words such as "fatal", "dangerous", and "tragic" were strongly associated with risk for all types of risk sources. Further computational analysis of over 10,000 risk associates showed that words strongly associated with risk were also high in fear and low in happiness, whereas emotions such as disgust, sadness, and anger did not have a strong relationship with risk. We also implemented a range of other similar tests probing the affective and cognitive structure of risk judgment, and, by applying the above technique to historical natural language data, we retrospectively measured changes in risk perception from the 1800s to the present. Due to space constraints, we do not report these tests here.

Overall, our results show how insights from computational linguistics can be used to uncover the knowledge representations underlying risk perception. These representations can predict participant risk ratings with very high accuracy. They can also be used to examine the affective and cognitive substrates of risk perception, and to retrospectively measure changes in risk perception over time.
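As an illustration of the prediction pipeline described above, the following Python sketch maps Word2Vec vectors onto riskiness ratings with support vector regression and estimates out-of-sample R2 by cross-validation. The embedding file, item labels, ratings, and library choices (gensim, scikit-learn) are illustrative assumptions, not the exact data or code from the studies.

import numpy as np
from gensim.models import KeyedVectors
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Load pretrained 300-dimensional Word2Vec vectors trained on news text
# (here, the publicly released Google News embeddings stand in for the
# news-article corpus described in the abstract).
vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# Placeholder risk sources and mean riskiness ratings (0-100 scale);
# the actual studies used over 400 items and participant ratings.
risk_sources = ["smoking", "skydiving", "asbestos", "pesticides", "cycling",
                "caffeine", "sunbathing", "firearms", "alcohol", "vaccines"]
mean_ratings = [85.0, 75.0, 90.0, 80.0, 35.0, 25.0, 55.0, 88.0, 70.0, 20.0]

X = np.array([vectors[w] for w in risk_sources])  # shape: (n_items, 300)
y = np.array(mean_ratings)

# Support vector regression from embeddings to ratings; cross-validation
# estimates out-of-sample predictive accuracy (R2).
model = SVR(kernel="rbf")
r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
print("Mean out-of-sample R2:", r2.mean())

# Combining the two approaches: if psychometric is an (n_items, 9) array
# of mean ratings on the nine psychometric dimensions, the combined model
# simply concatenates features: X_combined = np.hstack([X, psychometric]).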
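One plausible reading of "modifying the above technique" to uncover conceptual associates of risk is to apply the fitted regression to every frequent vocabulary word and rank words by predicted riskiness. The sketch below, continuing from the code above (and assuming gensim 4's index_to_key attribute), illustrates this reading; it is an assumption, not a description of the original analysis.

model.fit(X, y)  # refit the regression on all rated risk sources

# Score the 10,000 most frequent vocabulary words for risk association.
vocab = vectors.index_to_key[:10000]
word_vecs = np.array([vectors[w] for w in vocab])
risk_scores = model.predict(word_vecs)

# The highest-scoring words (e.g., "fatal", "dangerous", "tragic")
# are the strongest risk associates.
top = sorted(zip(vocab, risk_scores), key=lambda pair: -pair[1])[:20]
for word, score in top:
    print(word, round(float(score), 1))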
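The emotion analysis can be sketched in the same way: correlate predicted risk scores with lexicon-based fear and happiness scores. The tiny fear and happiness dictionaries below are placeholders; a real analysis would use a full emotion lexicon (for example, the NRC lexicons), and we cannot confirm which lexicon the original analysis used.

from scipy.stats import pearsonr

# Placeholder emotion scores; a full analysis would cover all 10,000+
# risk associates with scores from an emotion lexicon.
fear = {"fatal": 0.95, "dangerous": 0.90, "tragic": 0.85, "picnic": 0.05}
happiness = {"fatal": 0.02, "dangerous": 0.05, "tragic": 0.03, "picnic": 0.92}

words = list(fear)
pred_risk = model.predict(np.array([vectors[w] for w in words]))

# Correlate risk association with each emotion across words.
r_fear, _ = pearsonr(pred_risk, [fear[w] for w in words])
r_happy, _ = pearsonr(pred_risk, [happiness[w] for w in words])
print("risk-fear r:", round(r_fear, 2), "risk-happiness r:", round(r_happy, 2))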