
Perspectives
A NEWSLETTER OF THE ASA THEORY SECTION


Note from the Chair: Big Data/Big Theory (Part 2)

6/30/2016

John Mohr
University of California, Santa Barbara
In the first part of this essay (Perspectives, Fall 2015), I suggested things are looking pretty good for sociological theory, an optimism grounded in my appreciation of emergent sociological sub-fields where interesting theoretical work is being paired with innovative new measurement regimes to create different kinds of sociological insights.  I pointed to the field of computational sociology (or Big Data social science) as an example.  In this second part, I offer a few reasons why I think this area of research will continue to need more and better theory in the years ahead. I highlight three causes, what I call: (1) the paradigm effect, (2) the data effect, and (3) the culture effect. 
 

1. The Paradigm Effect

“What were ducks in the scientist’s world before the revolution are rabbits afterwards”
(Kuhn 1962, p. 111).
 
As described by Thomas Kuhn, paradigms do a lot of the heavy lifting for you. Many bigger issues have already been addressed, and one is left to tackle a more narrowly defined set of intellectual concerns. Kuhn explains, “…once the reception of a common paradigm has freed the scientific community from the need constantly to re-examine its first principles, the members of that community can concentrate exclusively upon the subtlest and most esoteric of the phenomena that concern it…increas[ing] both the effectiveness and the efficiency with which the group as a whole solves new problems” (p. 164). Sociologists don’t share a coherent theoretical paradigm of the sort that Kuhn described, and perhaps nobody does. But sociology does sustain various paradigm-like components that operate unevenly across what is an otherwise fragmented and often contested disciplinary space. One of the most important components is the constellation of assumptions—and related practices/technologies—that serve to organize our thoughts and actions regarding the measurement of the social world. ​​A number of different measurement projects have been put forward in sociology (from network science to conversation analysis). 

​Sociology’s dominant framework for social measurement dates back to the years around World War II when it first coalesced into something like a coherent methodological paradigm. Survey methods emerged as the dominant means of gathering social data in the postwar years, anchoring the core of what was essentially a new methodological field. Many of the relevant technologies, organizational forms, and intellectual developments of survey techniques had already been pioneered and refined in the decades before the war, but WWII was an important catalyst. Scholars from multiple disciplines were brought together in large, well resourced, practical project teams.  For example, the U.S. War Department surveyed over a half million active duty American soldiers about their experiences in combat, unit morale, racial prejudice, and many more topics (Stouffer et al. 1949).  Such efforts led to an accelerated articulation between the theory and the method of survey analysis, as well as an elevation in the legitimacy of the work. After the war, academics like Paul Lazarsfeld further theorized, formalized, and publicized these methods, facilitating their rapid institutionalization (Converse 1987; Mohr and Rawlings 2010, 2015; Platt 1996). Friedland and Alford (1991, p. 248) argue that the institutionalization of fields depends upon the development of a common core institutional logic which they define as “a set of material practices and symbolic constructions which constitutes [the field’s] organizing principles and which is available to organizations and individuals to elaborate.” They emphasize that this is driven both through coherence within the logic and through its difference from other logics. 
In this case, I think we can trace the emergence of a dominant logic of social measurement that wove together a constellation of survey techniques, material technologies, and practices, along with a deep-level set of shared understandings about what it means to measure, collect, conceptualize, and analyze data about the social world (Knorr Cetina 2001; Latour 2005; MacKenzie 2009; Mirowski 2002; Mohr and Ghaziani 2014; Wagner-Pacifici, Mohr, and Breiger 2015). 


​This dominant measurement logic emerged out of and remains grounded in the theory and practice of survey analysis. Thus, it follows a trajectory of scientific investigation that begins from a statistically controlled sample of respondents, willingly answering a set of precisely worded questions to measure both subjective and objective characteristics about themselves. These responses are then statistically extrapolated to help us understand how those characteristics are distributed, inter-related, and (ideally) causally connected across the larger population. At its heart, and in its brilliance, this is a model of discovery that depends upon the ever more efficient leveraging of scarce information to learn about some larger unmeasured social whole. 
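The efficiency of this scarcity-based paradigm can be put in numbers. Under simple random sampling, the 95% margin of error for an estimated proportion shrinks with the square root of the sample size, which is why a survey of roughly 1,000 respondents already pins down a population proportion to about ±3 percentage points. The following back-of-the-envelope sketch (the code and sample sizes are mine, not the essay’s) illustrates the point:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion estimated from a simple
    random sample of size n (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling precision requires sixteen times the respondents,
# but ~1,000 already gives roughly +/-3 percentage points:
for n in (100, 1_000, 10_000):
    print(n, round(margin_of_error(n), 3))  # 100 -> ~0.098, 1000 -> ~0.031
```

The square-root law is the whole logic of "leveraging scarce information": a tiny, carefully drawn sample stands in for the unmeasured whole.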
Problems generated by this methodological paradigm led to new sub-specialties that have consistently been both scientifically rigorous and deeply artful. Included here are technical arenas like probability theory, sampling theory, survey design, and scale analysis. But the field also includes many decades-old research traditions grounded in this socio-technological system—sub-fields like social stratification, social mobility, public opinion, and medical sociology. In such areas, the core project of survey analysis continues to be usefully updated. Two recent examples of updating are Stephen Vaisey’s (2009) dual-process model (which seeks to distinguish between two levels of thought processes in survey answers, the practical and the discursive) and Amir Goldberg’s (2011) relational class analysis (RCA) methods for analyzing the relations among responses (rather than the responses themselves). Although both signal critical advances of theory and method—Vaisey for bringing the theory of the cognitive self into our understandings of survey respondents, Goldberg for developing a relational approach to interpreting survey data of cultural meanings—both ultimately remain anchored in the traditional goal of efficiently leveraging small amounts of information.

Elsewhere, Goldberg himself has complained about the intellectual limitations of working within the dominant paradigm. In a marvelous article on analyzing Big Data entitled, “In Defense of Forensic Social Science,” Goldberg (2015) links the embrace of hypothesis testing as a master trope in social scientific work back to the problem of informational scarcity in the post-war era. Goldberg writes, during this period “[t]wo methods of research became particularly prominent: surveys and laboratory experiments. Both are costly and time-consuming, and require a significant investment in infrastructure and personnel. Such an upfront investment makes exploratory research potentially wasteful and therefore highly risky” (p. 2). Goldberg explains how this differs from his experience of working with Big Data. He writes, along “with the pain of drowning in an ocean of amorphous data also comes the liberation of unshackling oneself from the blinders of one’s limited imagination…the analytical focus shifts from thinking about the most cost-effective data that one needs to collect in order to support, or refute, a hypothesis, to figuring out how to structure a mountain of data into meaningful categories of knowledge” (p. 2). 

Content analysis comes from this same place. Again, WWII served as a crucible. Harold Lasswell led a team of social scientists at the U.S. Experimental Division for the Study of Wartime Communications. Building on existing content analysis practices, Lasswell’s group developed and refined a new set of formal procedures for systematically extracting the core bits of information from a textual corpus. This produced quantitative datasets that could be reliably employed, in the formal sense of having a reliability metric, to map the distribution of information across the larger, unmeasured textual space. Using these content analysis techniques, Lasswell’s unit employed teams of human “coders” to read foreign—especially enemy—newspapers to gather war intelligence. After the war, Lasswell worked to help refine and institutionalize the wartime methodologies into the modern research program of formal content analysis (Lasswell and Leites 1949; Lasswell, Lerner, and Pool 1952). Notice, the underlying paradigm is the same. Content analysis, from its beginning, has sought to measure meaning by extrapolating from small bits of textual information that have been carefully selected so as to best represent a larger, more complex, unmeasured—in this case, discursive—whole (Mohr, Wagner-Pacifici and Breiger 2015).

As brilliant and scientifically accomplished as both of these research programs grew to become over the decades, the underlying measurement framework defining both is the careful and calculated leveraging of scarce information.  As the contemporary era of Big Data so vividly illustrates, this is only one way to measure the social, and it is a measurement paradigm that has quickly proven inadequate for organizing the work of Big Data social scientists.  
Some problems of applying quantitative practices predicated on data scarcity to the world of big data are simply practical. Sample sizes quickly become so large that traditional statistical measures of significance are rendered useless because everything becomes significant (McFarland and McFarland 2015). There are also systematic distortions of the social world that come embedded in Big Data formats (Adams and Brückner 2015; Diesner 2015; Lewis 2015; Shaw 2015). But more than this, as Monica Lee and John Levi Martin (2015) complain, our most basic conventions for measuring the social world become a drag on scientific productivity. For example, thinking through the lens of traditional causal modeling as we engage with Big Data has left us using “new tools to accomplish old tasks. In a word, we have been trying to make things insignificant. Like a neurologically impaired subject with dilated pupils, we are putting our hands over our eyes and hoping to peep through our fingers” (Lee and Martin 2015, p. 1). Lee and Martin give the example of outliers—data points far from the central tendency—which represents a classic technical concern of traditional linear modeling.  With Big Data, 
 

a group of outliers, though a very small percentage of the total population, may still consist of thousands of people—many times more than the total respondents to the GSS. They should not be expunged as outliers but understood as a significant population in its own right…moving from one average man that poorly represents one big population to multiple average men that represent segments of the total population more accurately... (Lee and Martin 2015, p. 2) 
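The McFarland and McFarland point that “everything becomes significant” at Big Data scale follows from simple arithmetic: the standard error of a mean difference shrinks as the square root of the sample size, so a substantively trivial effect that a survey-sized sample cannot detect becomes astronomically “significant” with millions of observations. A minimal sketch (my own illustration, using a known-variance z-test on a fixed, tiny observed difference):

```python
import math

def p_value_for_diff(n, observed_diff, sd=1.0):
    """Two-sided p-value for a mean difference between two groups of
    size n each, under a known-variance z-test."""
    se = math.sqrt(2 * sd * sd / n)            # standard error shrinks as 1/sqrt(n)
    z = observed_diff / se
    return math.erfc(abs(z) / math.sqrt(2))    # two-sided normal tail probability

# The SAME substantively trivial effect (0.01 standard deviations):
print(p_value_for_diff(1_500, 0.01))      # survey-scale n: ~0.78, nowhere near significant
print(p_value_for_diff(5_000_000, 0.01))  # Big Data n: effectively zero
```

The effect has not changed; only n has. At that scale the significance test no longer discriminates between findings worth caring about and findings that are not.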
​
​
​Contrary to traditional measurement logics, Big Data social science regularly presents us with methodological conundrums about how to effectively carve into an overwhelming abundance of information in a theoretically meaningful way. Lee and Martin put it starkly, now “[l]ike a real scientist, our problem isn’t running out of information, but choosing which path to follow” (Lee and Martin 2015, p. 4). They propose a path that takes us away from the traditional measurement paradigm “… towards cartography—the construction of question-independent, though theoretically organized, reductions of information to make possible the answering of many questions” (p. 4). They envision a Blau-space-like multi-dimensional map that “allows us to scan our eyes up, down, left, and right, to draw both horizontal and vertical comparisons—how people in the population relate to each other in terms of demographics or any single surface (e.g. psychobilly concert attendance), as well as which factors contribute concert attendance for each subpopulation” (p. 3). 
Paul DiMaggio describes experiencing a similar sort of methodological epiphany while working with colleagues in computer science. DiMaggio (2015, p. 2) writes,
 

[w]hereas social scientists customarily obsess over causality and rely on formal tests of statistical significance, computer scientists using supervised models focus on results. The first topic-model presentation I attended used the method to identify public records particularly likely to require redaction, out of a set of records too immense for humans to screen by hand. The only measure that mattered was whether the models improved prediction (which they did). 
​
DiMaggio traces the implications of machine learning models for organizing scientific thinking in the analysis of social data. He notes that computer scientists are “less concerned with causality and with model confirmation than are many social scientists. It is not that they care less about getting models right; rather they understand ‘getting it right’ in a different (and I am beginning to suspect more useful) way than do most social scientists, focusing on model plausibility, utility, and descriptive, as opposed to causal, validation” (p. 2).  Optimistic about fusion, he proposes strategies linking sociological and computational perspectives together more effectively. DiMaggio writes, “the computer-science perspective is liberating, as it forces us to recognize real interpretive uncertainty and seek out appropriate and substantively relevant forms of validation fitted to specific research goals.” 
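The evaluation logic DiMaggio describes—“the only measure that mattered was whether the models improved prediction”—can be sketched in a few lines. Everything below is invented for illustration (the toy “records,” the keyword rule, and the labels are mine, not the redaction system DiMaggio attended a talk about); the point is only the shape of the test: hold out labeled data, compare a model against a baseline, and let predictive accuracy rather than a significance test decide.

```python
def keyword_model(text):
    """Toy rule: flag a record as needing redaction if it mentions a sensitive term."""
    return any(w in text for w in ("ssn", "address", "account"))

def baseline(text):
    """Majority-class baseline: never flag anything."""
    return False

# Invented held-out records with hand labels (True = needs redaction):
held_out = [
    ("meeting minutes, public agenda", False),
    ("applicant ssn and home address", True),
    ("budget summary for fiscal year", False),
    ("bank account number on file", True),
]

def accuracy(model, data):
    return sum(model(x) == y for x, y in data) / len(data)

print(accuracy(baseline, held_out))       # 0.5
print(accuracy(keyword_model, held_out))  # 1.0 -- the model "improves prediction"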
​
In sum, sociologists’ dominant paradigm for measuring the social appears increasingly out of step with the methodological problems that come to the fore as we move from an era of data scarcity into an era of Big Data social science. Lacking an established measurement paradigm to do the conceptual heavy lifting, research scientists working with Big Data will need help in theorizing where to look, how to look, what to look for, and what to make of what they are looking at. Hence, my assertion: the old measurement paradigm is beginning to crack (or, rather, is cracking in some new ways), and Big Data social scientists are going to need more and better theory—two kinds of theories, in fact. On the one hand, we need to re-theorize the practice of social measurement itself, toward more exploratory and less simple-minded hypothesis-testing approaches to the use of data. On the other, we need new (and reinvigorated) theories of the social world, theories which can now find a new and possibly more illuminating empirical footing in the plenitude of information that has begun to come our way.
​

2. The Data Effect

“There is nothing so practical as a good theory”
(Lewin 1951).
The era of Big Data changes our relationship to social measurement in many ways. Not only does it include processing larger quantities of information, it also presents us with the opportunity to: 1) evaluate different kinds of information than we have been able to analyze before, and 2) use new tools, technologies, and epistemologies for engaging with data that we have had all along. Consider text analysis. The digital revolution of the last quarter-century created new kinds of textual information streams—texting, tweeting, posting, emailing, etc. (see Part 1 of this essay). But Big Data social science is also opening up new opportunities for investigating old-style scholarly data. For example, the analysis of archival materials, long a staple of historical sociology, is being revolutionized by Big Data technologies. Peter Bearman (2015, p. 1) writes, “historians have been quietly building massive archival data structures from the extant records of crucially important institutions and contexts and making those data structures available to the public. These include—and this is only an idiosyncratic sample drawn mainly from Britain—the complete text record of the Old Bailey, the extant records of the Atlantic slave trade, and the British East India Company.” 

The explosion in digital information has been accompanied by the emergence of an equally dynamic technical field focused on analyzing these data. Tech firms, in close collaboration with research universities, have created a range of new tools for reading this expanding universe of textual data (think Google). Consider topic modeling—a way of automatically coding the thematic content of large textual corpora—which is increasingly used by both social scientists and digital humanists to radically change the scale of data queried and the kinds of questions asked (Blei, Ng, and Jordan 2003; Mohr and Bogdanov 2013). Scholars have used these tools to explore a wide array of topics. 
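To make the mechanics of “automatically coding thematic content” concrete, here is a deliberately tiny, pure-Python sketch of the collapsed Gibbs sampling idea that underlies topic models such as LDA (Blei, Ng, and Jordan 2003). The four toy “documents,” the vocabulary, and all parameter values are invented for illustration; real analyses use dedicated libraries and corpora of thousands of documents.

```python
import random
from collections import defaultdict

def toy_lda(docs, n_topics=2, n_iter=200, alpha=0.1, beta=0.01, seed=42):
    """Minimal collapsed Gibbs sampler for LDA; returns top words per topic."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    z = []                                             # topic assignment per token
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # tokens per topic
    for d, doc in enumerate(docs):                     # random initialization
        zd = []
        for w in doc:
            k = rng.randrange(n_topics)
            zd.append(k)
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
        z.append(zd)
    for _ in range(n_iter):                            # Gibbs sweeps
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]                            # remove token's current assignment
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights=weights)[0]
                z[d][i] = k                            # resample from the conditional
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return [sorted(nkw[t], key=nkw[t].get, reverse=True)[:3] for t in range(n_topics)]

# Invented mini-corpus mixing two themes (arts funding vs. social unrest):
docs = [
    ["art", "funding", "museum", "art", "grant"],
    ["crime", "unrest", "rebellion", "crime"],
    ["museum", "grant", "funding", "art"],
    ["rebellion", "unrest", "crime", "unrest"],
]
print(toy_lda(docs))
```

No human tells the sampler what the themes are; the co-occurrence structure of the words does the “coding” that Lasswell’s teams once did by hand, which is precisely what makes the technique scale to millions of documents.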


​ Daniel McFarland and colleagues (2013) analyzed the ProQuest database—containing more than a million dissertation abstracts—to study the emergence of boundaries in scientific fields. ​Paul DiMaggio’s (2013) team studied some 8,000 articles concerning public funding of the arts in 5 major U.S. newspapers over the course of a decade. They showed how political events and political leanings of the newspapers affected the types of stories that were published. Ian Miller (2013) used topic models to study Qing Dynasty Veritable records containing thousands of reports of social unrest submitted to the Chinese emperor over the course of many decades. Miller used this to show how social constructions of crime changed in Chinese society. In short, these technologies are fundamentally changing the way that scholars read and interrogate textual data.
The Digital Humanities are far out ahead of social scientists on this. As entire libraries of classical literature have been digitized and made available for computational investigation, literary scholars such as Alan Liu (2013), Charles Underwood (2015), Matthew Jockers (2013), and Franco Moretti have been radically reinventing traditional literary theory, adopting a perspective that Moretti (2013) has described as “distant reading” (to distinguish it from the traditional hermeneutic method of “close reading”). With his students and colleagues at the Stanford Literary Lab, Moretti has been a prolific innovator, turning out papers that have used network methods to analyze the plot structure of Shakespeare’s plays (Moretti 2011), and semantic structure analysis to unpack the changing logic of 70 years of “Bankspeak” (Moretti and Pestre 2015). They have measured literary style at the level of the sentence (Allison et al. 2013) and at the level of the paragraph (Algee-Hewitt et al. 2015) and used those metrics to empirically locate genre boundaries, genre splits, and various literary innovations.
​
These types of developments create the historical conditions for a new age of computational hermeneutics that builds productively on the proliferation of new text mining tools and datasets (Mohr et al. 2013, 2015). We argue this heralds the development of a style of content analysis that no longer (by techno-methodological necessity) throws away the nuances of textual information, but which instead seeks to identify as much nuance as possible. Unfettered from the limitations of Lasswell’s human coders, this new analysis relies upon a vast array of algorithmic coding and measurement devices. But, as work in the Digital Humanities illustrates, what Big Data researchers really need is more and better theory: Theories of reading, semiotics, and narrativity; theories about how identity, agency, communication, discourse, and social institutions are meaningfully ordered and constituted. Once again, the old scarcity-based measurement paradigm of content analysis provides few clues for helping contemporary text analysts to think about how to approach such a bewildering array of measurement options.

3. The Culture Effect

“The whole point of a semiotic approach to culture is…to aid us in gaining access to the conceptual world in which our subjects live so that we can, in some extended sense of the term, converse with them”
(Geertz 1973, p. 24).

​Births, deaths, marriages, incomes, occupational categories, and years of education—since sociologists started transforming social observations into measurable units, there has been a distinction between things that are more easily quantified and the wide range of cultural, cognitive, and hermeneutic qualities of social life, which have been far less easily translatable into reliable metrics (Mohr and Ghaziani 2014; Jepperson and Swidler 1994). The more data-intensive side of sociology has conventionally leaned heavily toward structural, resource, and demographic factors that seemed to be more easily quantifiable.  
​One of the most interesting things about the Big Data revolution is that it inverts this old imbalance. We are now inundated with textual data, visual data, audio data, and other kinds of highly nuanced cultural data, as the social world continues to digitize its subjective experience of selfness. Such data calls for ways to theorize language and speech, image and vision, hearing and sound. Moreover, because this information often comes through the “contextlessness” of digital space, scholars now have a comparatively harder time identifying the concrete measures of social relations, social structure, and social demography that have served as the mainstay of quantitative social science over these last many decades. 


Christopher Bail is an example of someone who has taken on this challenge in a systematic fashion. Using new mega-datasets capturing micro measures of social exchange, Bail (2014, 2015) has been carefully crafting a new style of cultural science, building from the ground up (see also de Nooy 2015). In this work, and elsewhere, we find that a range of basic ontological properties defining the parameters of our experience of social life—what counts as things, agents, context, causality, and even time itself—is being actively re-negotiated as scholars engage with the nitty-gritty demands and possibilities of Big Data (Wagner-Pacifici et al. 2015). Big Data social science thereby presents an opportunity for social scientists and humanists to re-invent the way we study culture and, simultaneously, reinvent the way that we measure the social world.

Some Conclusions

It seems to me that for many years in sociology, theory tended to race ahead of our methods. Nowadays, that relation is reversed: our methods have raced ahead of our theory, begging for more effective engagement. A recent colloquium that I edited along with Robin Wagner-Pacifici and Ronald Breiger in the online journal Big Data and Society, “Assumptions of Sociality: A Colloquium of Social and Cultural Scientists,” takes steps toward theoretical engagement with this new methodological juggernaut, with eighteen illuminating essays about Big Data and its impact on the social and human sciences. A variety of major issues are raised in these essays about the future of data analysis in our discipline.

My main point is that, for all of the reasons I have proposed here and more, social scientists working with Big Data are going to need lots of new theories, and a new generation of theorists to accomplish the work that needs to be done (Venturini, Jensen, and Latour 2015). Make no mistake, action is required. When social scientists don’t step up, physicists and engineers have no incentive to wait. The early years of Big Data have suggested that when there is a vacuum of good social scientific theory, Big Data researchers are more than happy to ad hoc a problem and call it theory. As many papers in the BD&S issue testify, naïve empiricism and large leaps of faith often result in bad social science because of the Big Gap that separates social life and the purported representation of that world with Big Data. As Breiger (2015, p. 2) puts it, “whereas many studies have been undertaken of massively large systems such as social networking sites, an under-researched question is the extent to which the behavioral findings of these studies ‘scale down,’ i.e. apply to human groups and organizations of moderate size (dozens or hundreds), where most human social life takes place and is likely to continue to do so.”
​

Big Data/Big Theory/Big Panel

With all of this at stake, I think it is incumbent upon us, as the ASA Section responsible for managing the subject of theory in American sociology, to take this matter seriously and to do so posthaste. That is why I have set aside one of our sessions in Seattle to take up this very matter. Many of the authors I have cited here will join us for an author-meets-critics session. This will be a pretty Big Panel, just the kind of thing I should think that we are going to need if we are to take on a Big Topic like Big Data/Big Theory. I hope you will be able to join us in Seattle. We’re planning on having some Big Fun.


​Acknowledgements: Thanks to Roger Friedland, Erin McDonnell, and Craig Rawlings for useful suggestions and comments on this essay (not all of which I was able to respond to). Thanks also to Ronald Breiger and Robin Wagner-Pacifici for their comments on this essay and for their collaboration on the larger project of which this is but a small piece. Finally, special thanks to Erin McDonnell and Damon Mayrl for their support, patience, and superb editorial work. 
​

References

Adams, Julia, and Hannah Brückner. 2015. “Wikipedia, Sociology, and the Promise and Pitfalls of Big Data.” Big Data and Society 2(2) DOI: 10.1177/2053951715614332.
 
Algee-Hewitt, Mark, Sarah Allison, Marissa Gemma, Ryan Heuser, Franco Moretti, and Hannah Walser. 2015. “Canon/Archive. Large-scale Dynamics in the Literary Field.” Stanford Literary Lab Pamphlet #11. Available online at https://litlab.stanford.edu/pamphlets/.

Allison, Sarah, Marissa Gemma, Ryan Heuser, Franco Moretti, Amir Tevel, and Irena Yamboliev. 2013. “Style at the Scale of the Sentence.” Stanford Literary Lab Pamphlet #5. Available online at https://litlab.stanford.edu/pamphlets/.
 
Bail, Christopher A. 2014. “The Cultural Environment: Measuring Culture with Big Data.” Theory and Society 43(3-4): 465-82.
 
______. 2015.  “Lost in a Random Forest: Using Big Data to Study Rare Events.” Big Data and Society 2(2) DOI: 10.1177/2053951715604333.
 
Bearman, Peter L. 2015. “Big Data and Historical Social Science.” Big Data and Society 2(2) DOI: 10.1177/2053951715612497.
 
Blei, David M., Andrew Y. Ng, and Michael I. Jordan. 2003. “Latent Dirichlet Allocation.” Journal of Machine Learning Research 3: 993-1022.
 
Breiger, Ronald. L. 2015. “Scaling Down.” Big Data and Society 2(2) DOI: 10.1177/2053951715602497.
 
Converse, Jean M. 1987. Survey Research in the United States: Roots and Emergence, 1890-1960. New Brunswick, NJ: Transaction Publishers.
 
De Nooy, Wouter. 2015. “Structure from Interaction Events.” Big Data and Society  2(2) DOI: 10.1177/2053951715603732.
 
Diesner, Jana. 2015. “Small Decisions with Big Impact on Data Analytics.” Big Data and Society 2(2) DOI: 10.1177/2053951715617185.
 
DiMaggio, Paul J. 2015. “Adapting Computational Text Analysis to Social Science (and Vice Versa).”  Big Data and Society 2(2) DOI: 10.1177/2053951715602908.
 
DiMaggio, Paul, Manish Naga, and David Blei. 2013. “Exploiting Affinities between Topic Modeling and the Sociological Perspective on Culture: Application to Newspaper Coverage of U.S. Government Arts Funding.” Poetics 41(6): 570–606.
 
Friedland, Roger, and Robert R. Alford. 1991. “Bringing Society Back In: Symbols, Practices, and Institutional Contradictions.” Pp. 232-63 in The New Institutionalism in Organizational Analysis, edited by Walter W. Powell and Paul J. Dimaggio. Chicago: University of Chicago Press.
 
Geertz, Clifford. 1973. “Thick Description: Toward an Interpretive Theory of Culture.” Pp. 3-30 in The Interpretation of Cultures: Selected Essays. New York: Basic Books.
 
Goldberg, Amir. 2011. “Mapping Shared Understandings Using Relational Class Analysis: The Case of the Cultural Omnivore Reexamined.” American Journal of Sociology 116(5): 1397-1436.
 
______. 2015. “In Defense of Forensic Social Science.” Big Data and Society 2(2) DOI: 10.1177/2053951715601145.
 
Jepperson, Ronald L., and Ann Swidler. 1994. “What Properties of Culture Should We Measure?” Poetics 22(4): 359-71.
 
Jockers, Matthew L. 2013. Macroanalysis: Digital Methods and Literary History. Champaign: University of Illinois Press.
 
Knorr Cetina, Karin. 2001. “Objectual Practice.” Pp. 175-188 in The Practice Turn in Contemporary Theory, edited by Theodore R. Schatzki, Karin Knorr Cetina, and Eike von Savigny. New York: Routledge.
 
Kuhn, Thomas. 1962. The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
 
Lasswell, Harold D., and Nathan Leites (eds.). 1949. Language of Politics: Studies in Quantitative Semantics. New York: George W. Stewart.
 
Lasswell, Harold D., Daniel Lerner, and Ithiel de Sola Pool. 1952. The Comparative Study of Symbols: An Introduction. Palo Alto, CA: Stanford University Press.
 
Latour, Bruno. 2005. Reassembling the Social: An Introduction to Actor-Network Theory. New York: Oxford University Press.
 
Lee, Monica, and John Levi Martin. 2015. “Surfeit and Surface.” Big Data and Society 2(2)  DOI: 10.1177/2053951715604334.
 
Lewin, Kurt. 1951. Field Theory in Social Science. New York: Harper.
 
Lewis, Kevin. 2015. “Three Fallacies of Digital Footprints.” Big Data and Society 2(2)  DOI: 10.1177/2053951715602496.
 
Liu, Alan. 2013. “The Meaning of the Digital Humanities.” PMLA 128: 409-23.
 
MacKenzie, Donald. 2009. Material Markets: How Economic Agents are Constructed. Oxford: Oxford University Press.
 
McFarland, Daniel A., and H. Richard McFarland. 2015. “Big Data and the Danger of Being Precisely Inaccurate.” Big Data and Society 2(2)  DOI: 10.1177/2053951715602495.
 
McFarland, Daniel A., Daniel Ramage, Jason Chuang, Jeffrey Heer, Christopher D. Manning, and Daniel Jurafsky. 2013. “Differentiating Language Usage through Topic Models.” Poetics 41(6): 607-25.
 
Miller, Ian Matthew. 2013. “Rebellion, Crime, and Violence in Qing China, 1722-1911: A Topic Modeling Approach.” Poetics 41(6): 626-49.
 
Mirowski, Philip. 2002. Machine Dreams: Economics Becomes a Cyborg Science. Cambridge: Cambridge University Press.
 
Mohr, John W., and Petko Bogdanov. 2013. “Topic Models: What They Are and Why They Matter.” Poetics 41(6): 545-69.
 
Mohr, John W., and Amin Ghaziani. 2014. “Problems and Prospects for Measurement in the Study of Culture.” Theory and Society 43(3-4): 225-46.
 
Mohr, John W., and Craig Rawlings. 2010. “Formal Models of Culture.” Pp. 118-128 in The Routledge Handbook of Cultural Sociology, edited by John Hall, Laura Grindstaff, and Ming-Cheng Lo. New York: Routledge.
 
______. 2015. “Formal Methods of Cultural Analysis.” Pp. 357-67 in International Encyclopedia of the Social & Behavioral Sciences, edited by James D. Wright. 2nd edition, Vol. 9. Oxford: Elsevier.
 
Mohr, John W., Robin Wagner-Pacifici, and Ronald Breiger. 2015. “Toward a Computational Hermeneutics.” Big Data and Society 2(2). DOI: 10.1177/2053951715613809.
 
Mohr, John W., Robin Wagner-Pacifici, Ronald Breiger, and Petko Bogdanov. 2013. “Graphing the Grammar of Motives in U.S. National Security Strategies: Cultural Interpretation, Automated Text Analysis and the Drama of Global Politics.” Poetics 41(6): 670-700.
 
Moretti, Franco. 2011. “Network Theory, Plot Analysis.” Stanford Literary Lab Pamphlet #2. https://litlab.stanford.edu/pamphlets/.
 
______. 2013. Distant Reading. London: Verso Books.
 
Moretti, Franco, and Dominique Pestre. 2015. “Bankspeak: The Language of World Bank Reports, 1946-2012.” Stanford Literary Lab Pamphlet #9. https://litlab.stanford.edu/pamphlets/.
 
Platt, Jennifer. 1996. A History of Sociological Research Methods in America: 1920-1960. Cambridge: Cambridge University Press.
 
Shaw, Ryan. 2015. “Big Data and Reality.” Big Data and Society 2(2). DOI: 10.1177/2053951715608877.
 
Stouffer, Samuel A., Edward A. Suchman, Leland C. DeVinney, Shirley A. Star, and Robin M. Williams, Jr. 1949. Studies in Social Psychology in World War II: The American Soldier. Vol. 1, Adjustment During Army Life. Princeton: Princeton University Press.
 
Thornton, Patricia H., William Ocasio, and Michael Lounsbury. 2012. The Institutional Logics Perspective: A New Approach to Culture, Structure, and Process. New York: Oxford University Press.
 
Underwood, Ted. 2015. “The Literary Uses of High-Dimensional Space.” Big Data and Society 2(2). DOI: 10.1177/2053951715602494.
 
Vaisey, Stephen. 2009. “Motivation and Justification: A Dual-Process Model of Culture in Action.” American Journal of Sociology 114(6): 1675-1715.
 
Venturini, Tommaso, Pablo Jensen, and Bruno Latour. 2015. “Fill in the Gap: A New Alliance for Social and Natural Sciences.” Journal of Artificial Societies and Social Simulation 18(2): 11. <http://jasss.soc.surrey.ac.uk/18/2/11.html>. DOI: 10.18564/jasss.2729.
 
Wagner-Pacifici, Robin, John W. Mohr, and Ronald L. Breiger. 2015. “Ontologies, Methodologies, and New Uses of Big Data in the Social and Cultural Sciences.” Big Data and Society 2(2). DOI: 10.1177/2053951715613810.

