Dimensions of Attitudes on the Affordable Care Act: An Application of Pivoted Text Scaling to Open-Ended Survey Responses

Dan Hopkins
University of Pennsylvania

Short texts such as open-ended survey responses and tweets contain valuable information about public opinion, but they can consist of only a handful of words. This succinctness makes them hard to summarize, especially when the texts rely on common words and offer little elaboration. This paper proposes a novel text scaling method to estimate low-dimensional word representations in these contexts. Intuitively, the method reduces noise from rare words and orients the scaling output toward common words, so that it can find variation in common word use even when text responses are not very sophisticated. It does this through a particular implementation of regularized canonical correlation analysis that connects word counts to word co-occurrence vectors using a sequence of activation functions. Usefully, the implementation identifies the common words on which its output is based, and we can use these as keywords to interpret the dimensions of the text summaries. It can also bring in information from out-of-sample text data to better estimate the semantic locations of words in small data sets. We apply the method to a large public opinion survey on the Affordable Care Act (ACA) in the United States and evaluate whether it produces compact, meaningful text dimensions. In these data, the method identifies a first dimension that pits concerns about the role of government against patient protections, and a second in which respondents weigh the ACA's costs against increased access to health care. These two dimensions have predictive validity: they are strongly associated with respondents' partisanship and assessments of which groups benefited from the ACA, as well as with the ACA's implementation timeline.
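To fix ideas, the core regularized-CCA step can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's pivoted text scaling implementation: the data are random toy matrices, the shapes are hypothetical, and ridge-whitened CCA via an SVD stands in for the paper's particular sequence of activation functions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical shapes): X holds document-by-word counts,
# V holds word-by-context co-occurrence vectors (e.g. from a larger corpus).
n_docs, n_words, n_contexts = 50, 20, 8
X = rng.poisson(1.0, size=(n_docs, n_words)).astype(float)
V = rng.normal(size=(n_words, n_contexts))

# Project documents into the co-occurrence space, then take a regularized
# CCA-style step: whiten each view with a ridge term before the SVD.
Y = X @ V

def whiten(M, ridge=0.1):
    C = M.T @ M / len(M) + ridge * np.eye(M.shape[1])
    evals, evecs = np.linalg.eigh(C)
    return evecs @ np.diag(evals ** -0.5) @ evecs.T

Cxy = X.T @ Y / n_docs
Wx, Wy = whiten(X), whiten(Y)
U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)

# Word loadings on the first two scaled dimensions; large-magnitude
# entries flag the common words that anchor each dimension.
loadings = Wx @ U[:, :2]
doc_scores = X @ loadings  # low-dimensional document positions
```

The ridge term plays the role the abstract attributes to regularization: it damps the influence of rare words, whose count covariances are estimated noisily, so the leading dimensions are oriented toward common word use.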
Further, using a least absolute shrinkage and selection operator (Lasso) to predict ACA attitudes, we show that very little orphaned predictive information remains in the higher dimensions. In contrast, a Lasso trained on scores from non-stabilized text scaling performs worse, instead selecting a complex combination of higher dimensions.
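The Lasso check described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's analysis: the dimension scores and attitude outcome are simulated so that only the first two dimensions carry signal, mimicking the situation in which little predictive information remains in higher dimensions.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# Hypothetical setup: `scores` holds per-respondent positions on 10 text
# dimensions; only the first two are related to the attitude outcome.
n, k = 200, 10
scores = rng.normal(size=(n, k))
attitude = 1.5 * scores[:, 0] - 1.0 * scores[:, 1] + rng.normal(scale=0.5, size=n)

# Fit a Lasso on all dimensions; when the scaling concentrates signal in
# the leading dimensions, the L1 penalty should zero out the rest.
model = Lasso(alpha=0.1).fit(scores, attitude)
selected = np.flatnonzero(model.coef_)  # indices of retained dimensions
```

If the selected set is concentrated in the leading dimensions, that is the pattern the abstract reports for the stabilized scaling; a diffuse selection across many higher dimensions corresponds to the non-stabilized case.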