Online content analysis

Online content analysis or online textual analysis refers to a collection of research techniques used to describe and make inferences about online material through systematic coding and interpretation. It is a form of content analysis applied to Internet-based communication.

History and definition

Content analysis as a systematic examination and interpretation of communication dates back to at least the 17th century. However, it was not until the rise of the newspaper in the early 20th century that the mass production of printed material created a demand for quantitative analysis of printed words.[1]

Berelson’s (1952) definition provides an underlying basis for textual analysis as a "research technique for the objective, systematic and quantitative description of the manifest content of communication."[2] Content analysis consists of categorizing units of texts (i.e. sentences, quasi-sentences, paragraphs, documents, web pages, etc.) according to their substantive characteristics in order to construct a dataset that allows the analyst to interpret texts and draw inferences. While content analysis is often quantitative, researchers conceptualize the technique as inherently mixed methods because textual coding requires a high degree of qualitative interpretation.[3] Social scientists have used this technique to investigate research questions concerning mass media,[1] media effects[4] and agenda setting.[5]

With the rise of online communication, content analysis techniques have been adapted and applied to internet research. As with the rise of newspapers, the proliferation of online content provides an expanded opportunity for researchers interested in content analysis. While the use of online sources presents new research problems and opportunities, the basic research procedure of online content analysis outlined by McMillan (2000) is virtually indistinguishable from content analysis using offline sources:

  1. Formulate a research question with a focus on identifying testable hypotheses that may lead to theoretical advancements.
  2. Define a sampling frame that a sample will be drawn from, and construct a sample (often called a ‘corpus’) of content to be analyzed.
  3. Develop and implement a coding scheme that can be used to categorize content in order to answer the question identified in step 1. This necessitates specifying a time period, a context unit in which content is embedded, and a coding unit which categorizes the content.
  4. Train coders to consistently implement the coding scheme and verify reliability among coders; a minimal sketch of such a reliability check appears after this list. This is a key step in ensuring replicability of the analysis.
  5. Analyze and interpret the data. Test hypotheses advanced in step 1 and draw conclusions about the content represented in the dataset.
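
A minimal sketch of the reliability check in step 4, written in Python under the assumption that two coders have applied the same coding scheme to a shared batch of documents; the category labels and data below are hypothetical, not taken from any particular study:

    # Inter-coder reliability sketch: percent agreement and Cohen's kappa for two
    # coders who each assigned one (hypothetical) category to six documents.
    from collections import Counter

    coder_a = ["economy", "health", "economy", "crime", "health", "economy"]
    coder_b = ["economy", "health", "crime", "crime", "health", "economy"]

    n = len(coder_a)
    # Raw percent agreement between the two coders.
    agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / n

    # Cohen's kappa corrects agreement for chance, using each coder's label distribution.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in set(coder_a) | set(coder_b))
    kappa = (agreement - expected) / (1 - expected)

    print(f"Percent agreement: {agreement:.2f}, Cohen's kappa: {kappa:.2f}")

If chance-corrected agreement is low, researchers typically revise the coding scheme or retrain coders before coding the full sample.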

Content analysis in internet research

Since the rise of online communication, scholars have discussed how to adapt textual analysis techniques to study web-based content. The nature of online sources necessitates particular care in many of the steps of a content analysis compared to offline sources.

While offline content such as printed text remains static once produced, online content can frequently change. The dynamic nature of online material, combined with the large and increasing volume of online content, can make it challenging to construct a sampling frame from which to draw a random sample. The content of a site may also differ across users, requiring careful specification of the sampling frame. Some researchers have used search engines to construct sampling frames. This technique has disadvantages because search engine results are unsystematic and non-random, making them unreliable for obtaining an unbiased sample. The sampling frame issue can be circumvented by using an entire population of interest as the sampling frame, such as tweets by particular Twitter users[6] or online archived content of certain newspapers.[7] Changes to online material can make categorizing content (step 3) more challenging. Because online content can change frequently, it is particularly important to note the time period over which the sample is collected. A useful step is to archive the sample content in order to prevent changes from being made.

Online content is also non-linear. Printed text has clearly delineated boundaries that can be used to identify context units (e.g., a newspaper article). The bounds of online content to be used in a sample are less easily defined. Early online content analysts often specified a ‘Web site’ as a context unit, without a clear definition of what they meant.[2] Researchers recommend clearly and consistently defining what a ‘web page’ consists of, or reducing the size of the context unit to a feature on a website.[2][3] Researchers have also made use of more discrete units of online communication such as web comments[8] or tweets.[6]

King (2008) used an ontology of terms trained from many thousands of pre-classified documents to analyse the subject matter of a number of search engines.[9]

Automatic content analysis

The rise of online content has dramatically increased the amount of digital text that can be used in research. The quantity of text available has motivated methodological innovations in order to make sense of textual datasets that are too large to be practically hand-coded, as had been the conventional methodological practice.[3][7] Advances in methodology, together with the increasing capacity and decreasing expense of computation, have allowed researchers to use techniques that were previously unavailable to analyze large sets of textual content.

Automatic content analysis represents a slight departure from McMillan's online content analysis procedure in that human coders are supplemented by computational methods, some of which do not require categories to be defined in advance. Quantitative textual analysis models often employ 'bag of words' methods that remove word ordering, delete words that are very common or very uncommon, and simplify words through lemmatisation or stemming, reducing the dimensionality of the text by collapsing complex words to their root forms.[10] While these methods are fundamentally reductionist in the way they interpret text, they can be very useful if they are correctly applied and validated.
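
A minimal sketch of this kind of preprocessing, assuming Python with scikit-learn and NLTK available; the example documents and the frequency thresholds for dropping very common and very rare words are illustrative assumptions:

    # Bag-of-words preprocessing sketch: word order is discarded, very common and
    # very rare words are dropped, and words are reduced to their stems.
    import re
    from sklearn.feature_extraction.text import CountVectorizer
    from nltk.stem import PorterStemmer

    documents = [
        "The council funded the new arts programme.",
        "Funding for arts programmes was cut by the council.",
        "The election campaign focused on arts funding.",
    ]

    stemmer = PorterStemmer()

    def stem_tokens(text):
        # Lowercase, split into words, and reduce each word to its root (stem).
        return [stemmer.stem(token) for token in re.findall(r"[a-z]+", text.lower())]

    vectorizer = CountVectorizer(
        tokenizer=stem_tokens,
        max_df=0.9,  # drop stems appearing in more than 90% of documents
        min_df=2,    # drop stems appearing in fewer than two documents
    )
    doc_term_matrix = vectorizer.fit_transform(documents)
    print(vectorizer.get_feature_names_out())  # the reduced vocabulary
    print(doc_term_matrix.toarray())           # document-term counts

The resulting document-term matrix is the typical input for the supervised and unsupervised methods described below.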

Grimmer and Stewart (2013) identify two main categories of automatic textual analysis: supervised and unsupervised methods. Supervised methods involve creating a coding scheme and manually coding a sub-sample of the documents that the researcher wants to analyze. Ideally, the sub-sample, called a 'training set', is representative of the sample as a whole. The coded training set is then used to 'teach' an algorithm how the words in the documents correspond to each coding category. The algorithm can then be applied to automatically analyze the remainder of the documents in the corpus.[10]
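
A minimal sketch of a supervised approach of this kind, assuming scikit-learn; the documents, category labels, and the choice of a naive Bayes classifier are illustrative assumptions rather than a method prescribed by Grimmer and Stewart:

    # Supervised sketch: a hand-coded training set 'teaches' a classifier how word
    # counts map onto categories; the fitted model then codes the remaining documents.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Hand-coded sub-sample (the training set), with one hypothetical category per document.
    train_texts = [
        "parliament debated the new budget",
        "the striker scored twice in the final",
        "tax reform dominated the committee hearing",
        "the coach praised the team's defence",
    ]
    train_labels = ["politics", "sport", "politics", "sport"]

    # Uncoded remainder of the corpus, to be coded automatically.
    remaining_texts = [
        "the minister proposed a budget amendment",
        "the goalkeeper saved a penalty in the final",
    ]

    vectorizer = CountVectorizer()                  # bag-of-words representation
    X_train = vectorizer.fit_transform(train_texts)
    classifier = MultinomialNB().fit(X_train, train_labels)

    X_remaining = vectorizer.transform(remaining_texts)
    print(classifier.predict(X_remaining))          # predicted category for each document

In practice the training set would contain many more documents, and other classifiers could take the place of naive Bayes.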

Unsupervised methods can be used when a set of categories for coding cannot be well-defined prior to analysis. Unlike supervised methods, unsupervised methods do not require human coders to train the algorithm. One key choice for researchers when applying unsupervised methods is selecting the number of categories to sort documents into, rather than defining what the categories are in advance.
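
A minimal sketch of an unsupervised approach, again assuming scikit-learn; a topic model (latent Dirichlet allocation) is used here purely as an illustration, and the choice of two topics and the example documents are assumptions:

    # Unsupervised sketch: the researcher chooses only the number of categories
    # (topics); the algorithm infers their content from word co-occurrence patterns.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    documents = [
        "the senate passed the spending bill",
        "voters backed the tax proposal in the referendum",
        "the team won the cup after a penalty shootout",
        "the midfielder signed a new contract with the club",
    ]

    vectorizer = CountVectorizer(stop_words="english")
    doc_term_matrix = vectorizer.fit_transform(documents)

    # The researcher picks the number of categories, not their content.
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topic = lda.fit_transform(doc_term_matrix)  # per-document topic proportions

    terms = vectorizer.get_feature_names_out()
    for topic_id, weights in enumerate(lda.components_):
        top_terms = [terms[i] for i in weights.argsort()[::-1][:4]]
        print(f"Topic {topic_id}: {', '.join(top_terms)}")

The researcher then inspects the most heavily weighted words in each topic and assigns substantive labels to the categories after the fact.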

Validation

Results of supervised methods can be validated by drawing a distinct sub-sample of the corpus, called a 'validation set'. Documents in the validation set can be hand-coded and compared to the automatic coding output to evaluate how well the algorithm replicated human coding. This comparison can take the form of inter-coder reliability scores like those used to validate the consistency of human coders in traditional textual analysis.
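
A minimal sketch of this comparison, assuming the hand codes and the automatically assigned codes for a hypothetical validation set are already in hand; scikit-learn's agreement metrics are used here for convenience:

    # Validation sketch: compare automatic coding with hand coding on a held-out
    # validation set. The labels below are hypothetical placeholders.
    from sklearn.metrics import accuracy_score, cohen_kappa_score

    hand_codes = ["politics", "sport", "politics", "sport", "politics", "sport"]
    automatic_codes = ["politics", "sport", "sport", "sport", "politics", "sport"]

    # Raw agreement, plus Cohen's kappa as a chance-corrected reliability score,
    # analogous to inter-coder reliability between two human coders.
    print("Agreement:", accuracy_score(hand_codes, automatic_codes))
    print("Cohen's kappa:", cohen_kappa_score(hand_codes, automatic_codes))

Low agreement on the validation set signals that the automatic coding should not be trusted without revisiting the model or the coding scheme.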

Validation of unsupervised methods can be carried out in several ways.

Challenges in online textual analysis

Despite the continuous evolution of text analysis in the social sciences, a number of methodological concerns remain unresolved.

See also

References

  1. Krippendorff, Klaus (2012). Content Analysis: An Introduction to Its Methodology. Thousand Oaks, CA: Sage.
  2. McMillan, Sally J. (March 2000). "The Microscope and the Moving Target: The Challenge of Applying Content Analysis to the World Wide Web". Journalism and Mass Communication Quarterly. 77 (1): 80–98. doi:10.1177/107769900007700107. S2CID 143760798.
  3. van Selm, Martine; Jankowski, Nick (2005). Content Analysis of Internet-Based Documents. Unpublished manuscript.
  4. Riffe, Daniel; Lacy, Stephen; Fico, Frederick (1998). Analyzing Media Messages: Using Quantitative Content Analysis in Research. Mahwah, NJ; London: Lawrence Erlbaum.
  5. Baumgartner, Frank; Jones, Bryan (1993). Agendas and Instability in American Politics. Chicago: University of Chicago Press. ISBN 9780226039534.
  6. Barberá, Pablo; Bonneau, Richard; Egan, Patrick; Jost, John; Nagler, Jonathan; Tucker, Joshua (2014). "Leaders or Followers? Measuring Political Responsiveness in the U.S. Congress Using Social Media Data". Prepared for delivery at the Annual Meeting of the American Political Science Association.
  7. DiMaggio, Paul; Nag, Manish; Blei, David (December 2013). "Exploiting affinities between topic modeling and the sociological perspective on culture: Application to newspaper coverage of U.S. government arts funding". Poetics. 41 (6): 570–606. doi:10.1016/j.poetic.2013.08.004.
  8. Mishne, Gilad; Glance, Natalie (2006). "Leave a Reply: An Analysis of Weblog Comments". Third Annual Conference on the Weblogging Ecosystem.
  9. King, John D. (2008). Search Engine Content Analysis (PhD thesis). Queensland University of Technology.
  10. Grimmer, Justin; Stewart, Brandon (2013). "Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts". Political Analysis. 21 (3): 267–297. doi:10.1093/pan/mps028.
  11. Collingwood, Loren; Wilkerson, John (2011). "Tradeoffs in Accuracy and Efficiency in Supervised Learning Methods". Journal of Information Technology and Politics, Paper 4.
  12. Gerber, Elisabeth; Lewis, Jeff (2004). "Beyond the Median: Voter Preferences, District Heterogeneity, and Political Representation" (PDF). Journal of Political Economy. 112 (6): 1364–1383. CiteSeerX 10.1.1.320.8707. doi:10.1086/424737. S2CID 16695697. Archived from the original (PDF) on October 1, 2015.
  13. Slapin, Jonathan; Proksch, Sven-Oliver (2008). "A Scaling Model for Estimating Time-Series Party Positions from Texts". American Journal of Political Science. 52 (3): 705–722.
  14. King, Gary; Keohane, Robert O.; Verba, Sidney (1994). Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press.
  15. Herring, Susan C. (2009). "Web Content Analysis: Expanding the Paradigm". In Hunsinger, Jeremy (ed.). International Handbook of Internet Research. Springer Netherlands. pp. 233–249. CiteSeerX 10.1.1.476.6090. doi:10.1007/978-1-4020-9789-8_14. ISBN 978-1-4020-9788-1.
  16. Saldaña, Johnny (2009). The Coding Manual for Qualitative Researchers. London: SAGE Publications Ltd.
  17. Chuang, Jason; Wilkerson, John D.; Weiss, Rebecca; Tingley, Dustin; Stewart, Brandon M.; Roberts, Margaret E.; Poursabzi-Sangdeh, Forough; Grimmer, Justin; Findlater, Leah; Boyd-Graber, Jordan; Heer, Jeffrey (2014). "Computer-Assisted Content Analysis: Topic Models for Exploring Multiple Subjective Interpretations". Workshop on Human-Propelled Machine Learning, Conference on Neural Information Processing Systems (NIPS). Montreal, Canada.
