7 Genius ChatGPT Hacks for Academic Writing (No Plagiarism!)

If you’ve ever used AI for academic writing, you’re probably familiar with the dilemma: where’s the line between smart support and academic misconduct?

Plagiarism can have serious consequences – ranging from poor grades to having your entire paper invalidated. But does that mean you shouldn’t use ChatGPT at all? Not at all! In fact, there are many smart and ethical ways to integrate AI into your academic writing process without falling into the plagiarism trap. And that’s exactly what this post is about!

Today, I’ll walk you through seven ways to use ChatGPT effectively and ethically in your academic writing.

1. Identifying Research Gaps and Formulating Hypotheses

You’ve done your research, but it feels like everything has already been said about your topic. ChatGPT can help you uncover unanswered research questions or methodological weaknesses in existing studies. Many researchers use this technique to develop innovative research ideas – a real advantage if you’re writing your Bachelor’s or Master’s thesis.

ChatGPT is well-known for its ability to generate creative ideas. According to a study by Lee and Chung (2024), ChatGPT outperforms traditional tech-based brainstorming methods – like Googling – by a wide margin. This is because the AI can combine seemingly unrelated concepts in novel ways, sparking new insights.

Another trick: Use ChatGPT as a peer reviewer for your ideas. Ask it to evaluate your research question from the perspective of a critical academic. This can give you valuable insights on where to refine your approach.

Once you’ve got your research question, you can also use ChatGPT to support the hypothesis-building phase. Rather than asking it to generate a ready-made hypothesis, use it to explore relevant variables or test different hypothesis formats. For example, if your topic is the impact of remote work on productivity, you might ask: “What factors influence productivity when working from home?” or “Are there studies showing a positive correlation between remote work and increased productivity? What were the methodological, theoretical, or conceptual limitations of those studies?”

Try to think a little outside the box when writing your prompts – instead of just asking it to generate this or that.

2. Conducting a More Targeted Literature Search

Many students spend hours on Google Scholar or other academic databases without a clear system or search strategy. This typically leads to two major problems: First, relevant studies are often missed because the search terms aren’t well chosen. Second, the overwhelming number of search results can make it difficult to identify the most important and high-quality sources.

Instead of just typing keywords into a search bar, ask ChatGPT for support: Request relevant key terms or alternative expressions that are commonly used in academic literature. For instance, the term “digital learning” might also appear in studies as “e-learning” or “computer-assisted learning”. By asking ChatGPT for these synonyms, you’re less likely to miss key studies.

To narrow down the flood of results, ask ChatGPT which journals are the most reputable in your field. This will help you target your search and find higher-quality sources.

Professors want to see papers that cite studies from the journals and conferences they themselves read and publish in. The quality of your references matters more than the topic match. Imagine you find a study that fits your research topic perfectly. If it’s from an obscure journal, it’s practically irrelevant to your academic field. It sounds harsh, but that’s the reality. Your reference list will be judged based on the quality of the sources it includes. So always make sure your argumentation is grounded in the most respected research.

However, having ChatGPT search for studies for you? That’s something you should seriously reconsider. In a study published in the prestigious PNAS journal, Lehr et al. (2024) tested ChatGPT’s performance across various research-related tasks. Literature searching was where the AI performed the worst. Interestingly, it did quite well when it came to advising on research ethics – which is kind of funny, if you think about it.

3. Improving Your Methodology

Choosing the right research method is one of the biggest challenges in academic writing. If you’re unsure whether to go with a literature-based approach, a qualitative design, or a quantitative design, you can ask ChatGPT: “Which empirical method is best suited to answer my research question?” This can help you weigh the pros and cons of different methods more effectively.

Ideally, you choose the method that best fits your research question. But in reality, it’s often the other way around. Your supervisor tells you, “Please conduct interviews.” And now the method is fixed – your task becomes finding a research question that fits the method.

Operationalizing your concepts is also essential. ChatGPT can help you identify suitable items or scales that have already been used in previous studies. That way, you ensure your variables are clearly defined and measurable.

You can also use the AI to critically reflect on your methodology. Ask: “What methodological weaknesses might arise in my study?” This will help you identify potential limitations early on – and address them proactively.

If you’re planning an empirical study, ChatGPT can also assist in selecting appropriate statistical techniques or methods of analysis. This ensures that your data is analyzed systematically and with solid reasoning. However, you should not let ChatGPT analyze your data directly – the risk of errors is still too high. Instead, perform your statistical tests yourself and use ChatGPT to review your approach. According to Lehr et al. (2024), ChatGPT is quite good at spotting when you’re not following statistical best practices.

4. Researching Theories

Theories are the backbone of any academic paper. ChatGPT can give you an initial orientation by suggesting theories that might be relevant to your topic. Important: AI is not an academic source! Use ChatGPT as a starting point to explore theoretical frameworks, and once you’ve identified one that fits, switch to scholarly literature for further research.

A clever strategy is to ask ChatGPT to compare different theories, for example: “Compare Theory X with Theory Y – what are their strengths and weaknesses in relation to Topic X?” This kind of overview can make the decision-making process during your “theory casting” much easier.

You can also ask ChatGPT to name specific applications or research fields where a given theory is frequently used. This helps you assess how relevant a theory might be for your project — or, conversely, highlight that a theory has already been thoroughly examined in a specific area.

Another smart approach is to ask the AI about theoretical developments or critical debates: “How has Theory X evolved over the past ten years?” or “What are the main criticisms of Theory Y?” This gives you not just a general overview, but also insight into which aspects of a theory are currently debated or evolving.

Additionally, ChatGPT can help you draw interdisciplinary connections: “Are there overlaps between Theory X from psychology and Theory Y from sociology?” This can help you uncover new perspectives for your research and possibly create original theoretical links in your paper.

5. Strengthening Your Argument

A strong academic paper relies on clear argumentation and thoughtful discussion. And this is where ChatGPT can help in a somewhat unconventional way. Imagine you’ve constructed an argument that seems perfectly logical and convincing to you — but how would a critical reviewer assess it?

Most students overlook potential weaknesses in their arguments. But you can ask ChatGPT to deliberately deconstruct your reasoning. For example, ask: “What arguments could be used to contradict my thesis?” or “How might a critic challenge my reasoning?”

This turns ChatGPT into a “devil’s advocate,” giving you a fresh perspective on your work. It also allows you to proactively incorporate counterarguments — making your argumentation more robust and resistant to criticism.
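By the way, you don’t have to do this in the chat window. If you’re comfortable with a little scripting, you can set up a reusable devil’s-advocate reviewer via the API. Here’s a minimal sketch in Python, assuming the openai package is installed and an API key is configured; the model name and prompts are placeholders, not a prescribed setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

thesis = "Remote work increases employee productivity."  # your own claim goes here

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": "You are a critical academic peer reviewer."},
        {"role": "user", "content": f"What arguments could be used to contradict my thesis: {thesis}"},
    ],
)

print(response.choices[0].message.content)
```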

6. Ensuring Argumentative Consistency

Once you’ve found strong arguments, it’s important to maintain consistency throughout your paper. Especially in longer texts, contradictions or unclear transitions can easily sneak in.

ChatGPT can help identify these inconsistencies if you ask it to systematically analyze your argument and point out any logical breaks. Try prompts like: “Identify inconsistencies or flawed reasoning in my text.”

Another common issue in academic writing is a lack of clear logical structure. Transitions between arguments might be missing, or there may be logical jumps that are hard for the reader to follow. ChatGPT can help by checking your paper for coherence and suggesting ways to improve the flow between points. A simple prompt would be: “Do you notice any logical leaps or missing links between my arguments?”

For smoother transitions, try: “How can I improve the argumentative connection between Section A and Section B?”

7. Improving Academic Writing Style

Another challenge many students face is mastering academic writing style. With ChatGPT, many simply ask the tool to rephrase their text to make it sound “better.”

But here’s what most don’t realize: ChatGPT can also help you improve your academic writing more deliberately. Instead of just asking it to rephrase your text, have it analyze and critique your writing style. For instance, you could ask: “What common writing mistakes do students in my field make?” or “Is my writing style appropriate for an academic paper?”

Keep in mind that ChatGPT has been trained on thousands of academic articles. So what it considers “academic” often reflects phrasing that frequently appears in published studies. This has led to a trend where professors joke about texts that overuse words like “crucial” or “key.” The widespread use of ChatGPT has caused an inflation of these terms. Don’t fall into this trap — and don’t let the AI drain the originality from your writing style.

One especially useful feature is that you can ask ChatGPT to adapt your style to a specific journal. If you’re planning to submit your paper to a respected journal, you can ask the AI to revise your text to match the style typically used in that publication. This helps you understand what academic writing looks like across different disciplines.

Final Thoughts – ChatGPT for Academic Writing

ChatGPT can make academic writing easier — but it’s no substitute for your own analysis and critical thinking. If used wisely, it can help you build better structures, sharpen your research question, compare theories, strengthen your arguments, and refine your writing style.

But don’t use ChatGPT to generate entire sections of your paper or accept its arguments uncritically. And above all: don’t use AI-generated citations! ChatGPT often invents sources — and that can lead to serious consequences. If you’re using an AI tool for literature searches, make sure it provides a link to the original source.

Never rely blindly on AI. Always verify academic sources yourself and treat ChatGPT as a research assistant — not a replacement for you as a researcher.

References

Lee, B. C., & Chung, J. (2024). An empirical investigation of the impact of ChatGPT on creativity. Nature Human Behaviour, 8, 1906–1914. https://doi.org/10.1038/s41562-024-01953-1

Lehr, S. A., Caliskan, A., Liyanage, S., & Banaji, M. R. (2024). ChatGPT as research scientist: Probing GPT’s capabilities as a research librarian, research ethicist, data generator, and data predictor. Proceedings of the National Academy of Sciences, 121(35), 1–9. https://doi.org/10.1073/pnas.2404328121

Axial Coding in Grounded Theory (+ Examples)

You’ve successfully completed the first step of Grounded Theory, open coding, and now your initial codes are ready. But what’s next? How do you connect these codes meaningfully? That’s exactly where axial coding comes in.

In this post, I’ll show you step-by-step how to apply axial coding, systematically structure your findings, and lay the foundation for your own mini-theory.

#1 What is axial coding?

Axial coding is the second step in the three-stage coding process according to Strauss and Corbin (1998):

  • Open coding (data is broken down and labeled as codes)
  • Axial coding (relationships between codes are established to form categories)
  • Selective coding (core categories are developed)

While open coding breaks down data into smaller thematic units (open codes), axial coding goes further. It helps you discover how these relatively concrete codes relate to each other and how they can be grouped.

This means you’re no longer working directly with your data but rather with the open codes you’ve created. Keep in mind that other Grounded Theory books might use different terminology or slightly altered techniques. Here, when we speak about axial coding, we always refer to the approach of Strauss and Corbin (1998).

#2 How does axial coding differ from open coding?

A common misconception is that open and axial coding are entirely separate processes. In reality, they often overlap. While creating your initial codes during open coding, you might already notice certain codes seem related.

Axial coding helps systematically explore these relationships. You deliberately ask questions such as:

  • Which open codes have a cause-and-effect relationship?
  • Are there patterns or regularities in the data?
  • How can the open codes be grouped?

Where open coding primarily serves a descriptive function – breaking your data into meaningful units – axial coding has an analytical function. It allows you to reach the next level of abstraction using your open codes.

Example

Imagine you’re studying how students use AI-supported learning programs. Through open coding, you’ve discovered that AI is used in various contexts, like summarizing texts, creating flashcards, or supporting exam preparation.

Here are 6 of your open codes:

  • Summarizing lecture scripts
  • Creating flashcards
  • Audio transcription of lectures
  • Live quizzes to revise learning material
  • Weighing semester goals
  • Creating study plans

But how exactly do students employ AI? Could there be a temporal relationship? This is where axial coding steps in.

You might now group these 6 open codes into logical categories. For instance, “Summarizing lecture scripts” and “Creating flashcards” fit well together under a category called “Preparation.”

“Audio transcription of lectures” and “Live quizzes to revise learning material” might be grouped as “Active Support.”

The remaining two open codes could form a category called “Strategic Planning.”

If you consider the temporal dimension, you might conclude that the categories follow this sequence: (1) Strategic Planning, (2) Preparation, and then (3) Active Support.

This is already a great connection! But wait—is something missing? Do students not use AI for post-learning review? Oops, maybe you didn’t ask this in your interviews! With this realization, you can gather new data to explore exactly that. Perhaps new open codes and a “Review” category will emerge—or maybe not. The point is, after axial coding, you can always return to data collection or open coding. Grounded Theory is iterative, not linear!
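By the way, if you like documenting your analysis outside your QDA software, even a plain data structure can capture such a grouping. Here’s a minimal sketch in Python, using the open codes from the example and the temporal order we just identified:

```python
# Axial categories built from the six open codes, listed in their temporal order
axial_categories = {
    "Strategic Planning": [
        "Weighing semester goals",
        "Creating study plans",
    ],
    "Preparation": [
        "Summarizing lecture scripts",
        "Creating flashcards",
    ],
    "Active Support": [
        "Audio transcription of lectures",
        "Live quizzes to revise learning material",
    ],
}

for category, codes in axial_categories.items():
    print(f"{category}: {', '.join(codes)}")
```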

#3 The coding paradigm by Strauss & Corbin (1998)

Strauss and Corbin (1998) offer very concrete guidance. If you’re unsure how to approach axial coding, use their coding paradigm. It helps systematically analyze and structure different aspects of a phenomenon. Let’s clarify this with a practical example.

Imagine you’re analyzing interviews with students about their use of AI tools like ChatGPT for learning. After open coding, you’ve identified several categories, including “Exam preparation,” “Skepticism towards AI,” and “Changes in learning behavior.”

Now you relate these categories to each other. Earlier, we looked at the dimension of time, but there are more aspects. You don’t need to analyze EVERYTHING in the coding paradigm—only what’s necessary for your theory to address your research question.

Phenomenon

The phenomenon describes the central theme or event you’re studying. Example: “Students use AI-supported tools for learning.”

Causal Conditions (Why does it happen?)

These conditions explain what causes or triggers the phenomenon. Example: “Students often have limited time, seeking more efficient learning methods to manage their studies better and cope with exam pressure.”

Context (Under what conditions does the phenomenon occur?)

Context describes specific circumstances or settings where the phenomenon takes place. Example: “In STEM fields such as computer science or engineering, AI usage is more widespread and socially accepted compared to humanities disciplines.”

Intervening Conditions (What additionally influences the phenomenon?)

These factors don’t directly cause but significantly affect how strongly a phenomenon occurs. Example: “Institutional university guidelines, like official prohibitions or recommendations regarding AI use, existing knowledge about AI, or students’ general technical affinity.”

Action Strategies (How do people respond?)

Here you examine how individuals react and what strategies they develop. Example: “Some students use AI tools intensively and regularly, others very selectively or critically reflectively, while still others avoid them completely.”

Consequences (What are the outcomes?)

Consequences describe the outcomes or results of the action strategies applied. Example: “Students alter their learning behavior. This can save time and enhance exam preparation, but might also result in superficial learning or dependency on AI tools.”
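If it helps you keep an overview during analysis, you can jot the paradigm’s elements down in a structured note. Here’s a minimal sketch in Python summarizing the worked example; all entries are illustrative, not prescriptive:

```python
# Coding paradigm (Strauss & Corbin, 1998) applied to the worked example
coding_paradigm = {
    "phenomenon": "Students use AI-supported tools for learning",
    "causal_conditions": ["limited time", "exam pressure", "search for efficiency"],
    "context": "STEM fields, where AI use is widespread and socially accepted",
    "intervening_conditions": ["university guidelines", "prior AI knowledge", "technical affinity"],
    "action_strategies": ["intensive use", "selective/reflective use", "complete avoidance"],
    "consequences": ["time savings", "better exam prep", "superficial learning", "AI dependency"],
}
```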

#4 Challenges in axial coding

Though axial coding is a powerful technique for theory-building, it can also pose challenges.

Distinguishing between various conditions isn’t always straightforward. Some factors can be interpreted as either causal or intervening conditions. It helps to continuously return to your data, asking: Why is something happening, and what influences it? Also, regularly ask yourself: What exactly should my theory describe? If you’re aiming for a process theory, causal conditions might be less important than mechanisms and temporal progression.

Researchers tend to draw premature conclusions. Once a pattern emerges, you might unintentionally look only for data supporting that pattern. Regularly challenge yourself: Are there alternative explanations? Ideally, use theoretical sampling to collect new data addressing your doubts or gaps in the theory at its current stage.

The coding paradigm should be applied flexibly. Not every study fits perfectly into the provided framework. Some parts may be irrelevant, others crucial. Only you can answer this clearly, keeping your research objective firmly in mind.

#5 Axial coding as the foundation for theory development

A crucial point in axial coding is that it doesn’t just help structure your data—it also paves the way to theory development.

As you identify more relationships through axial coding, a central category often emerges (sometimes two or three). These categories form the heart of selective coding—the third step of Grounded Theory—where you ascend another level of abstraction.

If, for instance, you find the most critical factor influencing AI use isn’t time pressure but trust in the technology, you might develop a theory of “Technology Acceptance in University Learning.” If you choose this direction, your codes and categories related to time pressure might become less relevant or even discarded in selective coding.

Remember, Grounded Theory is iterative. You’ll likely move back and forth between open and axial coding several times before developing a robust theoretical model.

Conclusion

Axial coding is central to Grounded Theory, enabling you to understand relationships between categories. By systematically applying Strauss and Corbin’s (1998) coding paradigm, you can uncover patterns and connections in your data, laying a solid theoretical foundation.

Once you’ve mastered axial coding, you can tackle selective coding—the final step, where you identify core categories and further develop your theory.

Questions or need help with coding? Drop a comment below!

Open Coding in Grounded Theory (+ Example)

Grounded Theory sounds like an interesting approach – until you try to apply it. Suddenly, you’re drowning in concepts like open coding, categories, and constant comparison. Where do you even start?

If open coding has you scratching your head, don’t worry, you’re in the right place. This guide breaks it down in a way that’s easy to follow, so you can confidently use this method in your research.

What Makes Grounded Theory Different?

Open coding is a part of the data analysis process in the Grounded Theory approach. If you need a refresher on the basics of Grounded Theory, check out the introductory tutorial first.

Typically, when conducting a study, you might create an interview guide, conduct a dozen expert interviews, and then analyze them using content analysis.

In Grounded Theory, things work a bit differently. Here, data collection and data analysis can alternate. You first conduct a few interviews, transcribe them, and then step back to conduct a preliminary analysis. You apply techniques such as open coding (which we will discuss in detail shortly) and develop your initial categories.

Based on these findings, you return to the field and conduct the next round of interviews. This time, your questions build on what you discovered in your first analysis. This allows you to probe more precisely, dive deeper, and identify insights that help refine your emerging mini-theory.

This process, known as theoretical sampling, is the key characteristic that differentiates Grounded Theory from other qualitative research approaches.

Now, let’s get back to open coding.

Open Coding in Grounded Theory

According to Strauss and Corbin (1998), open coding is the first of three phases in the qualitative data analysis process for Grounded Theory:

  1. Open Coding
  2. Axial Coding
  3. Selective Coding

Open coding is the first step in engaging with your data—whether it consists of interview transcripts, social media posts, notes, or other written reflections (referred to as memos in Grounded Theory terminology).

To make this process as easy as possible for you, here are five key goals to keep in mind when conducting open coding. After explaining all the steps, we will go through an example from an interview excerpt to illustrate how to apply open coding in practice.

Step 1: Identify Your Categories

Open coding is essentially the embodiment of inductive category formation, meaning you approach the data with a completely open mind, without considering existing theories.

Some scholars recommend incorporating existing theories at a later stage of your analysis—comparing the categories and relationships you have developed with what has already been established.

For open coding, however, the key is to go through your data without any preconceptions, marking similar content with the same category. In other words, everything that belongs together conceptually should be grouped into the same category.

By the way, the terms “code” and “category” mean the same thing. Coding is simply the process of categorizing.

But how exactly do you form a category?

Step 2: Abstraction Instead of Description

Rather than merely summarizing the content and using the interviewee’s wording, you should abstract the content. For example, consider this sentence from an interview transcript:

“I usually use TikTok to get the latest news.”

Instead of assigning the code “news” to this sentence, a more abstract code like “information acquisition” would be more appropriate.

According to Strauss & Corbin, categories can also have different dimensions. For instance, the category “information acquisition” could include the dimension “frequency”—how often does this information acquisition occur?

Another dimension could be “content”, referring to the type of information being acquired. In this example, it’s news, but in another case, it could be the latest fashion trends.

Dimensions of a category can also be treated as subcategories—a concept you may already be familiar with from qualitative content analysis. If you use software like MAXQDA, creating subcategories can be especially helpful.
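If you code digitally, you can mirror this structure directly. Here’s a minimal sketch in Python of how the example segment, its category, and its dimensions might be recorded; the field names are my own invention, not a MAXQDA format:

```python
# One coded segment with its abstract category and two dimensions (subcategories)
coded_segment = {
    "segment": "I usually use TikTok to get the latest news.",
    "category": "information acquisition",
    "dimensions": {
        "frequency": "usually",  # how often the information acquisition occurs
        "content": "news",       # what kind of information is acquired
    },
}
```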

Step 3: Constant Comparison

This goal revolves around the following questions:

  • Are codes being assigned consistently?
  • Are the same criteria being applied to categorize dimensions?

You can achieve consistency by continuously cross-checking previously coded text excerpts to ensure that similar meanings are being coded the same way, or adjusting if differentiation is needed.

At this stage, memos play an important role again. Here, you can jot down all your ideas as you code. What is the broader context? What could be a general explanation for what you are reading? Memos serve as a playground where you draft your initial theory.

Step 4: Achieving Saturation in Open Coding

You’ve probably heard that you should stop conducting interviews when no new information emerges. This principle is particularly associated with Grounded Theory literature.

In this context, the term theoretical saturation means that no new categories, variations, or relationships between categories emerge from your data.

However, there are two challenges with saturation:

  1. The exact point of saturation is difficult to determine objectively.
    Researchers disagree on when and how saturation is reached, and numerous articles (e.g., Saunders et al., 2018) discuss this topic.
  2. In a thesis, you don’t have time for 30 interviews.
    If you are working on a thesis, your time is limited. No one expects you to conduct interviews indefinitely until you reach this elusive saturation point. In this case, it is perfectly acceptable—after consulting with your advisor—to set a target number of interviews and stick to it.

For a fully comprehensive Grounded Theory study, the standard 12–15 interviews in a master’s thesis are usually insufficient. Therefore, it is essential to clearly define in your methodology section whether Grounded Theory is being used as a holistic approach or merely as a methodological toolkit without aiming for a fully developed Grounded Theory study.

Alternatively, you could collaborate with someone else and write a joint thesis. This way, you could impress your evaluators with 20–30 interviews and a more comprehensive Grounded Theory study.

For a PhD dissertation, the expectations are different. Here, you must address theoretical saturation and theoretical sampling, ensuring that the Grounded Theory approach is fully implemented from start to finish.

Example of Open Coding (Grounded Theory)

Let’s take this example from a fictional interview transcript:

“Wow, when I first put on the VR headset, it felt like I was in another world. I was totally surprised by how quickly you can connect with others in this Metaverse. I was in a virtual seminar room and later spoke with the professor. She was from Canada and super friendly and open to my ideas.”

The first step is to identify the key W-questions (Who? Where? What? How? Why?).

  • Where? Metaverse
  • Who? Professors and students
  • What? Virtual seminars
  • How? Through a VR headset in a virtual seminar room

The most interesting statement here is about quick social connection—which the interviewee has already highlighted by expressing surprise. You could code this part as “social connection”.

The coded segment would be: “I was totally surprised by how quickly you can connect with others in this Metaverse.”

In later interviews, you would then explore how and why this interaction happens so easily. Could it be because users are represented by avatars, which lowers the barrier to approaching others?

Maybe. Maybe not.

Only Grounded Theory can reveal the answer.

How to Create a Codebook in Qualitative Research

Are you looking for a quick yet precise guide on how to create a codebook for your next qualitative research project?

Then you’ve hit the jackpot with this article!

In this post, I’ll explain what a codebook is (in case you’re unfamiliar with it), guide you step by step through the creation process without overwhelming you, and even share how you might be able to skip most of the effort altogether while improving the validity of your qualitative data analysis.

What is a Codebook?

Before we dive in, let’s clarify that we are discussing codebook creation within qualitative research. That means the data you will be analyzing can be interview transcripts, documents, reports, videos, social media posts, and so on.

A codebook is essentially a coding manual: it provides structured guidelines for assigning categories, which represent broader thematic groupings, to units of analysis within a qualitative dataset. Each category or theme consists of specific codes, which serve as labels for classification.

Using a codebook is more common in research projects that analyze qualitative data, but do so from a more quantitative perspective. I’ll explain what this means in a second.

In “hardcore” interpretive qualitative studies, for example when using Glaser’s grounded theory approach or Braun and Clarke’s reflexive thematic analysis, a codebook can also be used—but its purpose here is a little different.

Let’s start with the “quantitative” way of analyzing qualitative data. This is done in methods such as quantitative content analysis or deductive thematic analysis. I’ve made tutorials for both of these methods, so please feel free to check them out.

In a quantitative content analysis, you assign small bits of your qualitative data to certain categories. With this method, you do not develop these categories from the data during your analysis. Instead, you define them beforehand. And how do you do that?

With a codebook!

The codebook contains all categories and descriptions of the categories, specifying how units of analysis (e.g., sentences, tweets, or images) should be classified.

The codebook may also define a numerical value (an ID) that you can assign per category: Category 1, category 2, category 3, and so on.

Example of a Codebook

Let’s look at a concrete example to make this clearer. Imagine we want to analyze tweets about COVID-19, specifically focusing on misinformation as part of our research question.

A codebook designed for this study would need to contain various categories of misinformation commonly found on social media.

Here’s an example from an actual codebook by Memon & Carley (2020):

The authors defined 16 categories into which they classified their material.

For each category, the codebook provides:

  • A detailed description
  • Examples
  • Justifications for why a particular example was classified under that category

Creating Your Own Codebook

You should only create a new codebook if, after thorough screening of the literature, you can’t find an existing one that suits your study or can be adapted to your needs.

Structure of a Codebook

A codebook, much like a scientific paper, should be well-structured for clarity. If necessary, include a table of contents for easy navigation.

Here’s a suggested structure:

#1 Introduction

A brief paragraph explaining:

  • The context in which the codebook was developed
  • What it is suitable for
  • Whether it builds upon an existing codebook (if so, specify which one)
  • The dataset used to develop the codebook

#2 Overview of Categories

Include a table summarizing all categories. Sometimes a codebook has two levels, with categories and subcategories or main codes and sub-codes. Whether you call them codes, categories, or themes depends on the method: in content analysis, researchers typically refer to categories; in thematic analysis, to themes; and so on. This means that if you are creating your own codebook, you should stick with the vocabulary of the method you want to apply it to.

#3 Description of Categories

Categories can originate in two ways:

  1. From existing literature or a previously established codebook
    • In this case, provide the citation.
  2. Developed based on your own dataset
    • If you identify a new category during your analysis, you can add it to the codebook.

Each entry in the codebook should consist of:

  1. Title of the Category
  2. Description in your own words, explaining what the category represents and the conditions under which this category applies
  3. Corresponding sub-categories that might be part of this category and what they represent
  4. Unit of analysis (e.g., tweet, comment, video, text snippet)
  5. At least one example (preferably several) from a real dataset
  6. Explanation of why the example(s) were assigned this category or sub-category

For points 5 and 6, you can use a table format similar to the linked example. The key is to keep the codebook as clear and structured as possible for ease of use.
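If you maintain your codebook digitally, a uniform record per category helps enforce this structure. Here’s a minimal sketch in Python mirroring points 1–6 above; the category shown is a hypothetical illustration, not an entry from Memon & Carley’s codebook:

```python
from dataclasses import dataclass

@dataclass
class CodebookEntry:
    title: str                  # 1. title of the category
    description: str            # 2. what the category represents and when it applies
    subcategories: list[str]    # 3. corresponding sub-categories
    unit_of_analysis: str       # 4. e.g., tweet, comment, video, text snippet
    examples: list[str]         # 5. at least one example from a real dataset
    justifications: list[str]   # 6. why each example was assigned this category

entry = CodebookEntry(
    title="Fake cure",
    description="The unit promotes an unproven remedy as a cure for COVID-19.",
    subcategories=["herbal remedies", "household chemicals"],
    unit_of_analysis="tweet",
    examples=["Garlic tea kills the virus within 24 hours!"],
    justifications=["Presents a specific unproven remedy as an effective cure."],
)
```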

#4 References

Finally, list all sources used in your codebook, just as you would in any scientific work.

Using an Existing Codebook

Creating a new codebook from scratch can be time-consuming. That’s why it’s worth checking for existing codebooks first.

Where to Find Existing Codebooks

  1. Open-Science Databases: Many researchers share datasets and resources, including codebooks, to support the academic community. Examples:
    • Zenodo
    • OSF (Open Science Framework)
  2. Contacting Authors: If a paper references a codebook but doesn’t provide it in an appendix, try emailing the authors. Researchers often appreciate interest in their work and may be happy to share their codebook.
  3. Adapting a Codebook: If you find a relevant codebook, you can modify it to fit your study. However, make sure to cite the original source and document any changes you made. If you include your adapted codebook in an appendix, provide a detailed explanation of modifications.

Codebooks in Inductive Qualitative Research

In the beginning, I mentioned that codebooks may also be used in inductive qualitative research, such as Glaserian grounded theory or reflexive thematic analysis.

The main difference here is that you are not looking for pre-defined categories. Instead, you start with a blank canvas and create all categories based on your data. The codebook is simply a tool to document your categories. This will help you and others (such as collaborators or reviewers) to better understand how the categories were built. You are essentially creating a documentation of all your categories and examples. But in contrast to quantitative content analysis and deductive thematic analysis, you are doing it during and after the analysis rather than before.

Final Thoughts

A well-structured codebook is essential for conducting research that aims to assign qualitative data to predefined categories or themes.

Whether you create one from scratch or adapt an existing codebook, being systematic, clear and consistent is key to ensuring valid and replicable results.

Quantitative Content Analysis (7-Step Tutorial)

You’ve been trying to figure out quantitative content analysis, but no matter where you look, all you find are books, papers, and information on qualitative content analysis.

Help is on the way.

Quantitative content analysis often takes a backseat to its qualitative counterpart, receiving only a brief mention in methodology books. However, if this is the method you want to apply, you need more guidance – and that’s exactly what you’ll get here.

In this article, you will learn how to conduct a quantitative content analysis in seven steps and understand the key differences from qualitative content analysis.

Quantitative vs. Qualitative Content Analysis: The Key Differences

Quantitative content analysis traces back primarily to the work of social psychologist Bernard Berelson. He defined content analysis as a “research technique for the objective, systematic, and quantitative description of the manifest content of communication” (Berelson, 1954, p. 489).

Note: This definition applies to content analysis in general, not just the quantitative approach. Naturally, this sparks debate, as the very term “quantitative” can provoke strong reactions from advocates of the qualitative research paradigm.

The subject matter of content analysis—whether qualitative or quantitative—is always somehow qualitative in nature. That is because content analysis helps us evaluate qualitative data sources, such as newspaper articles, films, social media posts, or documents. But the method itself is heavily informed by the quantitative paradigm, as its name suggests.

Quantitative content analysis systematically converts qualitative material into quantifiable data by applying structured coding schemes and statistical methods. We’ll explore how that works shortly.

Both quantitative and qualitative content analysis aim to systematically and objectively evaluate content. However, a key distinction is that the quantitative approach allows for greater intersubjective traceability, as it follows a structured and replicable coding process.

While qualitative content analysis relies more on the researcher’s judgment and interpretative creativity, quantitative content analysis follows a strict set of rules. It is designed to test theories by verifying hypotheses rather than generating new ones.

Let’s look at the 7 steps of applying quantitative content analysis.

Step #1: Theoretical Preparation for Quantitative Content Analysis

As with any research project within the quantitative paradigm, engaging with existing theories is crucial. Start by defining your research problem—what exactly do you want to investigate?

Ideally, your problem should focus on the relationship between variables that you can examine through content analysis. For example, you might study “news framing” related to climate change and the “emotions” in social media discussions.

Before conducting your quantitative content analysis, formulate hypotheses—testable assumptions about the relationships between variables. A strong hypothesis clearly defines the dependent and independent variables and ensures that the coding categories reflect these constructs. For example, in a study on climate change news framing, you might hypothesize that news articles from government-funded media use the ‘scientific consensus’ frame more often than private news outlets. Another example: “Climate change news framing (independent variable) influences the emotional responses of social media users (dependent variable).”

For a deeper dive into hypothesis formulation, check out my dedicated tutorial on the topic.

Step #2: Sampling

Now, you need to determine your sample. Suppose you want to analyze how climate change is framed in social media. Your sample could consist of a random selection of 500 tweets from major news outlets (e.g., BBC, CNN, Reuters) over the past six months.

Ensuring that the sample is representative is crucial – for example, by balancing sources from different political perspectives. A quantitative content analysis typically calls for a larger sample, because breadth matters more than depth. For a qualitative content analysis, it’s the exact opposite.
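If you’ve collected your material programmatically, drawing the random sample itself is straightforward. Here’s a minimal sketch in Python; the tweet IDs are invented for illustration:

```python
import random

random.seed(42)  # fix the seed so the sample is reproducible

# Hypothetical pool of collected tweet IDs from BBC, CNN, and Reuters
all_tweet_ids = [f"tweet_{i:05d}" for i in range(12000)]

# Simple random sample of 500 tweets, drawn without replacement
sample = random.sample(all_tweet_ids, k=500)
print(sample[:5])
```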

Step #3: Defining the Unit of Analysis

At this stage, you specify the level at which your material will be analyzed. For example, if you are studying how climate change is framed in tweets, your unit of analysis could be (1) entire tweets, (2) individual hashtags, or (3) specific phrases related to emotions (e.g., ‘climate crisis’ vs. ‘climate hoax’). If you’re analyzing a text, the unit could be a full sentence or individual words, depending on your research objective.

If you’re looking for semantic nuances, such as emotional tones, it might make sense to analyze individual words. If you’re investigating broader themes, like news “frames,” analyzing entire sentences or text sections may be more appropriate.

Step #4: Defining Descriptive Categories

Before starting the analysis, you need to establish categories for classifying your units of analysis. This involves researching existing coding manuals or codebooks in the academic literature. If none suit your purpose, you must develop your own.

For example, in a study on news framing, a coding manual would list various frame types such as ‘scientific consensus,’ ‘economic impact,’ or ‘conspiracy theory’ and provide instructions for assigning sentences, tweets, videos, or images to these categories.

Authors of coding manuals typically include example cases and detailed coding guidelines, ensuring clarity and consistency. Think of the coding manual as a structured guide for analysis, whether for your use or for others replicating your study.

If you’d like me to create a video on how to develop a coding manual, let me know in the comments!

Step #5: Quantification

Once you’ve assigned each unit of analysis to a category, count how often each category appears in your sample. For example, if analyzing 500 tweets, you might find that 40% frame climate change as a ‘scientific consensus,’ while 25% present it as a ‘conspiracy theory.’ These frequencies allow for statistical comparison and further quantitative analysis.
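Once every unit is labeled, the counting itself is easy to automate. Here’s a minimal sketch in Python with invented frame labels:

```python
from collections import Counter

# Hypothetical frame labels, one per coded tweet (500 in total)
coded_tweets = (
    ["scientific consensus"] * 200
    + ["conspiracy theory"] * 125
    + ["economic impact"] * 175
)

counts = Counter(coded_tweets)
total = len(coded_tweets)
for frame, n in counts.most_common():
    print(f"{frame}: {n} ({n / total:.0%})")
```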

The most common technique for evaluation is frequency analysis, which links category occurrences to the variables under investigation.

According to Krippendorff (1980), key techniques in quantitative content analysis include frequency analysis, contingency analysis, and cluster analysis. He emphasizes that quantitative content analysis must ensure reliability through systematic coding procedures and validation techniques. These methods help uncover statistical patterns while ensuring measurement validity and intercoder reliability.

A crucial aspect of any quantitative content analysis is ensuring reliability and validity. Intercoder reliability should be tested using Krippendorff’s Alpha or Cohen’s Kappa to ensure that different coders classify content consistently. Without strong reliability, the statistical findings of the analysis may not be meaningful.
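If you have two coders’ labels for the same units, Cohen’s Kappa can be computed with scikit-learn. Here’s a minimal sketch with invented labels:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical frame labels assigned by two coders to the same ten tweets
coder_a = ["consensus", "conspiracy", "consensus", "economic", "consensus",
           "conspiracy", "economic", "consensus", "conspiracy", "economic"]
coder_b = ["consensus", "conspiracy", "economic", "economic", "consensus",
           "conspiracy", "economic", "consensus", "consensus", "economic"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values near 1 indicate strong agreement
```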

If you are doing the analysis by yourself, you cannot calculate intercoder reliability. For this case, you may look into “intracoder” reliability.

Step #6: Statistical Analysis

Statistical analysis can be either descriptive or inferential, depending on your research question and dataset size. Descriptive statistics (e.g., frequency distributions, cross-tabulations, means, and standard deviations) summarize patterns within the data. Inferential techniques examine relationships between variables: regression models test how one variable influences another, factor analyses identify latent patterns in large datasets, and contingency analysis tests the association between categorical variables. For example, contingency analysis can reveal whether certain frames are more common in specific media sources, while a regression model can test how media framing influences audience perceptions.
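For a contingency analysis like the frame-by-outlet example, a chi-square test of independence is a common choice. Here’s a minimal sketch in Python, using an invented contingency table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical frame counts by outlet type:
# columns = consensus, conspiracy, economic
table = np.array([
    [120, 20, 60],  # government-funded outlets
    [ 80, 70, 50],  # private outlets
])

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
# A small p-value suggests frame usage and outlet type are not independent
```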

For meaningful results, your categories must be clearly operationalized and directly related to the variables under examination. Thus, problem formulation, hypothesis generation, and category selection should be well-aligned.

Step #7: Presenting the Results of Quantitative Content Analysis

When reporting your results, tables are your best friend. First, present the absolute frequencies of your categories and describe them in your own words.

Next, outline the results of your statistical tests, explaining why you chose them and what the findings mean.

Finally, state which of your hypotheses were supported and which were rejected.

Conclusion: Why Choose Quantitative Content Analysis?

Quantitative content analysis is an excellent choice when you want to test an existing theory or framework with qualitative data. Some research questions cannot be effectively addressed through traditional quantitative methods like surveys or experiments. In such cases, content analysis provides a valuable alternative.

If this sounds like what you’re looking for—then quantitative content analysis is the right method for you!

Literature on Quantitative Content Analysis

Berelson, B. (1954). Content Analysis. In G. Lindzey (Ed.), Handbook of Social Psychology. Vol. 1: Theory and Method (pp. 488–522). London: Addison-Wesley.

Krippendorff, K. (1980). Content Analysis: An Introduction to Its Methodology. Sage Publications.

Participant Observation (Research Method Explained)

What is participant observation? Where does this research method originate? In which cases is it used? And most importantly: How can you successfully conduct this method yourself, which has sometimes been called “the last great adventure of social science” (Evans-Pritchard, 1973)?

If these questions matter to you, then you’re in the right place. Grab a drink, sit back, and enjoy this article as a smooth introduction to your own ethnographic adventure.

Ethno…what? Don’t worry, we’ll get to that.

Ethnographic Research

Participant observation is a core method in ethnographic research, often simply referred to as fieldwork. The aim is to gain insights into human behavior, group dynamics, and social interactions.

The subject of study can range from an indigenous tribe in Papua New Guinea to a tech startup in a small town in Germany.

The word “ethnos” comes from ancient Greek and roughly means “people” or “nation.” This research approach has its roots in anthropology and ethnology. Historically, it was used in expeditions to remote regions or isolated islands to study the people, tribes, and cultures living there. Today, ethnographic methods are widely used in various disciplines, including sociology, education, social psychology, and even business studies.

Observation

Observation is probably the most well-known method in ethnography. Spradley (1979) describes it in very simple terms:

“I want to understand the world from your point of view. I want to know what you know in the way you know it. I want to understand the meaning of your experience, to walk in your shoes, to feel things as you feel them, to explain things as you explain them. Will you become my teacher and help me understand?”

If you have developed a research question that can be answered by describing the behavior of individuals in their natural environment, then observation is a suitable method. By observing, you can see with your own eyes what you aim to study.

In contrast, methods like expert interviews or surveys require you to rely on participants’ statements being accurate and honest.

As a result, observation is one of the empirical methods where researcher subjectivity plays the largest role. Subjectivity is common in qualitative research, but in observation, it is even more pronounced, as everything is filtered through the researcher’s own perceptions and senses.

Non-Participant Observation

In non-participant observation—just as the name suggests—you remain an outsider, merely watching without engaging in the activity. Besides the distinction between participant and non-participant observation, another key factor is whether the observation is overt or covert.

In overt observation, you ask for permission beforehand, introduce yourself, and explain why the study is being conducted and how it might be beneficial for the participants.

Covert observation, on the other hand, takes place without the knowledge of those being observed. While this might yield highly authentic insights, it is rarely used—and for good reason. Ethically, covert observation is highly problematic and would have difficulty passing an ethics committee review.

Participant Observation

The “participant” aspect of participant observation refers to the extent to which you, as the researcher, are involved in the situation. There are different roles you can take on.

Gold (1958) identified four different roles that researchers can assume in participant observation:

Complete participation

When you are already a full member of the group you are studying, such as when you observe a company where you work as a student assistant.

Active participation

When you try to engage in the same activities as the group members but are still an outsider.

Moderate participation

When you alternate between observing and participating to maintain a balanced approach.

Passive participation

When you are present but do not engage in the activities, interacting minimally with the group.

For example, if you were studying an indigenous tribe in the Amazon, you might actively take part in a spiritual ritual. This would make you highly involved in the experience, possibly giving you access to insights and conversations you might not otherwise have. This would be considered active participation. Alternatively, you could just follow along quietly, staying in the background while smiling and clapping along—this would be passive participation. In non-participant observation, you would avoid any interaction altogether.

However, active participation also has a significant drawback: the people you observe may alter their behavior simply because you are participating. This effect must always be considered and critically discussed.

Additionally, you can conduct conversations with participants. These ethnographic interviews are quite different from structured expert interviews. There is no pre-defined questionnaire; instead, conversations occur naturally within the setting. The goal is to build a respectful and trusting relationship. These interactions might take place around a campfire outside usual working hours or in an unexpected setting. Instead of recording the conversation, you take notes and later document your insights in a research diary.

The Three Phases of Participant Observation

To help you prepare for your participant observation, here are three key phases that this method can be divided into (Spradley, 1980; Flick, 2019):

Describing the Research Environment

At the beginning of your participation in a group, you are an outsider and need time to acclimate. Your presence is something new for the group members, and they must adjust to having you around. During this phase, it is advisable to remain somewhat in the background and start by thoroughly documenting the environment. Write down everything you observe—what you see, hear, and experience. Simultaneously, take the opportunity to introduce yourself and gradually establish rapport with individual members.

Focused Observations

Once you have become an accepted presence within the group, you can transition to more purposeful observations. At this stage, you can initiate targeted conversations and immerse yourself in situations that directly contribute to answering your research questions. Your observations become more structured as you begin to refine the focus of your study.

Selective Observations

In the final phase of your study, you will have already gathered significant insights and formulated preliminary answers to your research questions. Now, your objective is to seek out specific examples and supporting evidence that substantiate your findings. This phase requires critical thinking and a keen eye for patterns and consistencies in behavior.

Data Collection and Analysis in Participant Observation

When it comes to collecting data, you can take either a structured or unstructured approach. If you have created checklists, formulated guiding questions, or prepared other documentation in advance, you are following a structured approach.

Conversely, if you enter the observation setting with an open mind and an empty notebook, allowing observations to guide your documentation process, your approach is unstructured. Both methods have their advantages and limitations.

Your research diary plays a crucial role in the analysis process. Alongside taking notes during your observations, you should later expand on them in your diary, adding reflections and interpretations. To ensure that you do not overlook documentation, consider setting aside dedicated time—perhaps a few hours or an entire day—away from the field to write down your impressions in detail.

After data collection, qualitative analysis techniques can be applied to make sense of the findings. Common methods include:

  • Thematic Analysis: Identifying recurring patterns, themes, and categories within the observational data.
  • Coding: Assigning labels to different aspects of the data to systematically organize insights.
  • Narrative Analysis: Examining how observed interactions and behaviors construct meaning within a specific social context.

These approaches help translate raw observations into meaningful interpretations, allowing you to draw conclusions from your study.

Now, lace up your boots and embark on your research adventure!

What is a Histogram? (Statistics Basics)

What is a histogram in statistics? How does it visualize data? And how can this visualization help you with data analysis?

In this post, I’ll show you how to ace your next statistics exam and take your data analysis to the next level using histograms.

Histograms are a standard tool in statistics and are essential for many academic papers. To help you understand and use histograms effectively, I’ll walk you through the basics today.

Of course, I’ll also show you how to create a histogram for any dataset in no time.

1. What is a Histogram?

A histogram is a type of chart that represents a frequency distribution: the x-axis shows intervals (also called classes or bins), while the y-axis shows their corresponding frequencies.

A key characteristic of a histogram is that the bars are directly adjacent to one another, with no gaps in between. This emphasizes the continuous nature of the data, as each bar represents a range of values rather than discrete categories. This is because histograms are used for continuous data (e.g., measurements like weight, length, or time spans).

In contrast, bar charts represent categorical data (nominal data such as the number of students in different study programs like law, psychology, or business administration). That’s why bars in a bar chart are separated from each other.

It’s also crucial that the y-axis of a histogram starts at a frequency of 0. The height of each bar represents the number of data points in that interval.

If the baseline is altered, the perceived heights of the bars change, potentially distorting the actual distribution of the data. This could lead to an overestimation of low frequencies or an underestimation of high frequencies.

2. Where Are Histograms Used?

Histograms are widely used across various fields. In economics, for example, they help analyze income distribution across different demographic groups. In medicine, they assist in understanding the distribution of measurements like blood pressure or BMI within a population.

They are also crucial for fundamental statistical data analysis, such as checking whether a dataset follows a normal distribution.

3. Creating a Histogram in Statistics

Let’s create a histogram using a real-world example. We have a dataset of exam scores from the last statistics test:

53, 41, 71, 91, 99, 93, 87, 74, 97, 81, 85, 89, 78, 61, 66, 71, 86.

First, you need to create a frequency distribution table and group the scores into intervals.

The intervals must have equal width, ensuring that all bars are the same size. If the intervals are too wide, important details might be lost, whereas too narrow intervals could make the chart too complex. For this example, I’ve chosen intervals of 10 points each (40-49, 50-59, 60-69, etc.).

In statistics, class intervals for histograms are typically chosen so that the lower boundary is inclusive and the upper boundary is exclusive.

This means that the class labeled 60-69 corresponds to the half-open interval from 60 up to but not including 70. If we instead used overlapping boundaries such as 60-70 and 70-80, the value 70 would belong to two intervals, leading to ambiguity. To avoid this issue and ensure a clear, unambiguous assignment of data points to intervals, histogram intervals do not overlap.
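If you later build histograms programmatically, this convention is built in. A small Python sketch (the values are made up) shows how NumPy assigns data points to half-open intervals:

```python
import numpy as np

bins = np.arange(40, 101, 10)     # boundaries 40, 50, ..., 100
values = np.array([69, 70, 99])

# np.digitize assigns each value to a half-open interval [lower, upper),
# so 70 unambiguously belongs to the 70-79 class, not to 60-69
print(np.digitize(values, bins))  # [3 4 6] -> classes 60-69, 70-79, 90-99
```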

Now let’s look at the frequencies.

  • One student scored in the 40-49 range.
  • Another student scored between 50-59.
  • Two students scored between 60-69.
  • Four students scored between 70-79.
  • Five students scored between 80-89.
  • And four students scored between 90-99.

Now, you need to plot this data using software like Excel or R.
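If you prefer Python, a few lines of matplotlib produce the same chart; this is a minimal sketch for our exam scores:

```python
import matplotlib.pyplot as plt

scores = [53, 41, 71, 91, 99, 93, 87, 74, 97,
          81, 85, 89, 78, 61, 66, 71, 86]

# Class boundaries 40, 50, ..., 100: lower boundary inclusive,
# upper boundary exclusive (matplotlib closes only the last bin at 100)
bins = range(40, 101, 10)

plt.hist(scores, bins=bins, edgecolor="black")  # adjacent bars, no gaps
plt.xlabel("Exam score")
plt.ylabel("Number of students")
plt.title("Distribution of exam scores")
plt.show()
```

The resulting bar heights match the frequency table above: 1, 1, 2, 4, 5, and 4 students per interval.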

4. Understanding a Histogram in Statistics

Interpreting a histogram in statistics is a crucial step in understanding your collected data. A histogram provides a visual representation of how data is distributed.

It helps identify patterns and anomalies that may indicate specific trends or issues. Keep in mind that in density histograms, probabilities are represented by the area of the bars, while in frequency histograms, the bar height indicates the number of observations in each interval.
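To see the difference in practice, here is a small sketch (with simulated data) that draws a density histogram and overlays a fitted normal curve, a common visual normality check:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=70, scale=10, size=300)  # simulated measurements

# density=True rescales the bars so their total area equals 1,
# making the histogram directly comparable to a probability density
plt.hist(sample, bins=15, density=True, edgecolor="black", alpha=0.6)

x = np.linspace(sample.min(), sample.max(), 200)
plt.plot(x, stats.norm.pdf(x, loc=sample.mean(), scale=sample.std()))
plt.xlabel("Value")
plt.ylabel("Density")
plt.show()
```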

1. Data Distribution

Histograms show the frequency of data within different intervals, making it easy to assess distribution at a glance. Researchers can quickly determine whether the data is normally distributed, skewed left or right, or exhibits other patterns like bimodal distributions.

A normal distribution, often called a bell curve, means that most data points cluster around a central value, with symmetrical tails extending on both sides. In a university setting, this could represent exam scores, where most students achieve average marks, while very high or very low scores are less common.

A skewed distribution indicates that the data is asymmetrically spread. A positively skewed (right-skewed) histogram shows a concentration of low values with a few high values—such as the time students spend studying for a subject. Many may spend only a little time, while a few invest a lot. A negatively skewed (left-skewed) distribution suggests the opposite.

A bimodal distribution, featuring two peaks, may indicate the presence of two distinct groups. For example, in a class attended by both first-year and advanced students, two peaks might suggest that each group tends to score differently.
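You don’t have to judge skewness by eye alone. As a quick numerical cross-check, you can compute the sample skewness of our exam scores (a minimal sketch using scipy):

```python
from scipy import stats

scores = [53, 41, 71, 91, 99, 93, 87, 74, 97,
          81, 85, 89, 78, 61, 66, 71, 86]

# A negative value indicates a left-skewed (negatively skewed) distribution
print(round(stats.skew(scores), 2))
```

For these scores, the value is negative, which matches the left-skewed shape we’ll discuss in the interpretation section.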

2. Identifying Anomalies

Visualizing data can reveal outliers, unusual patterns, or anomalies that may warrant further investigation. The width of the intervals shows how data is grouped.

  • Narrow bars indicate a detailed distribution.
  • Wider bars provide a more generalized overview.
  • Bar height represents the number of observations in each interval—taller bars indicate higher frequencies.

3. Comparing Datasets

Histograms allow for easy comparison of two or more datasets. You can use them to examine how data is distributed under different conditions or across different groups.

4. Hypothesis Testing

Histograms can help formulate or test hypotheses about data. For example, if you hypothesize that a particular variable follows a normal distribution, a histogram gives you a quick visual check of whether that assumption is plausible before you run a formal test.

5. Decision-Making

In practice, such as in quality control, histograms are used to determine whether a business process stays within its specification limits.

5. Interpreting Our Example Histogram

To better understand a histogram in statistics, I’ll now pose a few questions about our example. Feel free to pause and try answering before checking the solutions.

  • Would you say the data is symmetric, or is it skewed left or right?

You can see that the taller bars are on the right side, with a longer tail stretching to the left. This indicates a left-skewed distribution, meaning the data has negative skewness. In other words, students scored relatively high in this exam.

  • What is the mode of this dataset?

The mode is the interval with the highest frequency. In this case, most students (five) scored between 80 and 89, making this interval the modal class.

  • How many students scored up to 69 points?

Adding the first three bars: 1+1+2 = 4 students scored up to 69 points.

  • How many students scored at least 80 points?

Adding the last two bars: 5+4 = 9 students scored at least 80 points.

  • How many students scored between 60 and 89 points?

Adding the middle bars: 2+4+5 = 11 students scored within the intervals 60-69, 70-79, and 80-89.

6. Histograms and Probabilities

Histograms help you navigate large datasets. These visual representations approximate the underlying probability distribution, which is essential for understanding how a dataset behaves.

Returning to our exam example: the bar heights indicate how many students fall within specific score ranges. But they also reflect the probability of a randomly selected student achieving a particular result.
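To make that concrete, a relative frequency is just a count divided by the sample size; here is a tiny sketch for our exam example:

```python
scores = [53, 41, 71, 91, 99, 93, 87, 74, 97,
          81, 85, 89, 78, 61, 66, 71, 86]

# 9 of 17 students scored at least 80 points, so the empirical
# probability of drawing such a student at random is 9/17
p_at_least_80 = sum(s >= 80 for s in scores) / len(scores)
print(f"P(score >= 80) = {p_at_least_80:.2f}")  # about 0.53
```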

A clustering of values around a central score suggests a normal distribution, which many statistical tests assume. The histogram helps determine whether this assumption holds or if another testing approach is needed.

Histograms also allow us to draw conclusions about an entire population from a sample, provided that the sample is representative and sufficiently large. For instance, a histogram of a class’s exam scores can provide insights into the performance of all students in the program.

All in all, a histogram is like a Swiss Army knife in statistics. If you want to dive deeper, I highly recommend Andy Field’s book Discovering Statistics.

Categories
Philosophy of Science

Phenomenology Explained Simply (Philosophy, Husserl, Method)

Have you come across the term “phenomenology” but have no clue what it actually means?

Then this article is just for you.

We’re going to break down mysterious concepts like “transcendental reduction,” “epoché,” and “Intuition of Essences.” Sounds like something far removed from empirical science as we know it? Exactly! And that’s what makes this topic so fascinating.

But don’t worry, I’ll explain phenomenology in a way that’s easy to understand, showing you how this school of thought and its methodology can be applied beyond dense philosophical texts.

The Philosophy Behind Phenomenology (Husserl)

Phenomenology as a philosophical discipline emerged in 1900 with the publication of Logical Investigations by Edmund Husserl, a foundational work in which he introduced a novel method for examining and exploring consciousness.

What a game-changer!

At the time, psychology was still a young discipline claiming to study consciousness scientifically. To understand why Husserl’s approach was so radical, we need to consider the dominant paradigm of psychology at that time.

Psychology was heavily influenced by positivism – the empirical, numerical study of psychological phenomena modeled after the natural sciences. While this remains the dominant approach in psychology today, it’s far less dogmatic than it was back then.

(If you want to learn more about positivism and how it differs from other epistemological positions, check out my tutorial on ontology, epistemology, and methodology).

Phenomenology fundamentally opposes the positivist mindset, which is why Husserl’s ideas caused such an uproar.

Husserl’s main issue with the prevailing natural science approach was that it entirely ignored the perceiving subject. Yet, things can appear differently to different individuals depending on how their consciousness presents them.

Phenomenology struggled to gain acceptance at first and still holds something of an outsider position today. Nevertheless, Husserl’s work left an indelible mark on the history of philosophy, establishing phenomenology as one of the most significant intellectual movements to originate in Europe.

The Core Idea of Phenomenology

Phenomenology focuses on the phenomenon of consciousness and its various manifestations. The term itself breaks down into “phainomenon” (that which appears) and “logos” (science or study). Phenomenology, therefore, is the science of things as they appear to us.

According to Husserl, the way things appear in our consciousness provides the most powerful basis for acquiring new knowledge.

In other words, the “exact” natural sciences are not the only valid path to discovering new insights. Husserl argued that the human sciences, which approach the world through subjective experience, can also yield valuable knowledge, especially about how we come to understand things.

Phenomenological research should focus on things as they are experienced, free from assumptions and biases. Husserl’s call to suspend biases does not mean that we must completely erase our personal perspective or way of perceiving. Rather, it aims at making us aware of how our preconceptions and assumptions shape our perception and ensuring that we account for these influences in our analysis. Recognizing and reflecting on biases is a crucial part of the phenomenological process.

Husserl identified two key principles for studying consciousness.

(Warning: things are about to get a little mind-bending.)


#1 Consciousness Is Intentional

According to Husserl, every experience in consciousness is directed toward an object. This object could be:

  • Real (e.g., the tree outside your window)
  • No longer existing (e.g., Edmund Husserl himself)
  • A mental construct (e.g., your idea of Hawaii if you’ve never been there)

That’s already quite a concept to wrap your head around, but it gets even more intriguing:

An intentional act of consciousness can be “full” or “empty.” Imagine waking up in the morning and reaching for your glasses on the bedside table. You know they are there, you need them, and you put them on.

Now imagine you misplaced your glasses the night before. You wake up with the intention of finding them. A disheveled, half-awake person stumbles around the apartment, blindly feeling for objects. An absurd scene, right? But once you understand “empty” intention (the act of searching for an object vividly present in your mind but absent from your immediate perception), it suddenly makes perfect sense.

#2 Consciousness Is Separate from Sensory Perception

The second fundamental principle of phenomenology states that sensory perception and conscious experience are distinct. When we feel or see something, we process it in our consciousness—this much is clear. Consciousness functions as the operating system that processes sensory input.

But we can also perceive things in entirely different ways. Imagine you’re in the shower, and out of nowhere, you have a brilliant idea. Where did it come from?

There was no external stimulus or sensory experience that triggered the thought. This means consciousness can be a medium for both sensory and non-sensory experiences.

If we truly want to study consciousness, Husserl argued, we must internalize both of these principles.

The Phenomenological Method

So, how exactly do we study consciousness?

According to Husserl, the process involves three steps:

  1. Describing the object under investigation
  2. Applying transcendental and phenomenological reduction
  3. Gaining an intuitive grasp of essences (Wesensschau)

Before beginning these steps, Husserl insisted that researchers must set aside all prior knowledge that does not stem from direct conscious experience. This process is called “bracketing” or epoché.

#1 Describing the Object of Investigation

The first step involves describing the experience of the object in as much detail as possible from the perspective of the experiencing subject.

In phenomenology, the researcher and the subject of research are often the same person, meaning the philosopher records their own experiences.

When applied in social sciences, phenomenological methods typically use interviews. As a researcher, your goal is to elicit the most detailed description of the subject’s experience.

A good interviewer remains as neutral as possible and encourages the subject to speak freely. For further reading on phenomenological interviews, check out the references linked in the accompanying YouTube video.

#2 Transcendental and Phenomenological Reduction

In this step, the researcher adopts a transcendental attitude, setting aside all empirical knowledge about the object. The focus is solely on the conscious experience. The conditions of this experience are philosophically examined.

For practical applications outside Husserl’s strict philosophical approach, researchers often introduce a compromise. Instead of excluding the external world entirely, they consider the “horizon”—the situational context and external influences shaping the experience.

If someone describes their experience in the metaverse, for example, researchers still acknowledge that it takes place in a virtual environment and interpret the experience accordingly.

#3 Intuition of Essences (Wesensschau)

In the final step, Husserl attempts to grasp the essence of the object as it appears to consciousness. He conducts “imaginative variations,” altering different aspects of the object to see how perception changes. If modifying an element changes the perceived essence, then that element is crucial to the phenomenon.

For contemporary phenomenologists, this step involves analyzing raw data—such as interview descriptions—using inductive logic to identify patterns and commonalities. This approach is related to methods like inductive coding or Grounded Theory.

For precise data analysis steps, refer to existing methodologies. Personally, I recommend Giorgi’s (2017) approach.

Conclusion

As you can see, phenomenology isn’t the simplest concept to grasp. If you don’t get everything right away, that’s okay; even philosophy students struggle to fully grasp Husserl’s ideas.

Don’t be intimidated by Husserl’s complex language and terminology.

If you’re interested in the methodology, which offers an exciting alternative to conventional empirical methods, I recommend starting with secondary literature such as Giorgi et al. (2017) and simply trying it out. Learning by doing!

Categories
Uncategorized

Discourse Analysis Simply Explained (Foucault, Method, Examples)

Do you want to conduct a discourse analysis for your academic paper but feel confused by all the overly complicated explanations of this method?

In this article, I’ll answer the three most important questions about discourse analysis:

  1. What is a discourse, and how can this concept be understood?
  2. How do you conduct a discourse analysis step by step?
  3. What are some examples of discourse analysis?

By the end of this article, you’ll know exactly how to proceed to turn your discourse analysis into a structured academic paper.

What Is a Discourse?

To answer this question, there’s no avoiding the work of the French philosopher Michel Foucault (1926–1984). Foucault was a fascinating thinker who, alongside many other ideas and theories, significantly developed the concept of discourse.

For Foucault, “discourse” refers to all forms of statements, such as texts, terms, or concepts, that circulate within a society and shape public dialogue about a particular topic. Discourse defines not only the language used, but also the way society thinks about that topic.

From this arise unwritten rules about how the topic is discussed and what might be considered taboo. Ultimately, discourse even determines whether and how actions are taken in relation to that issue.

Examples from Foucault’s Work

One example from Foucault’s early work is the discourse surrounding mental illness—or, as he called it, madness. He analyzed when and how society began labeling individuals as insane and what was considered “normal” or “abnormal” behavior. It’s shocking to see how little it once took to be deemed insane and excluded from society.

Over time, these boundaries have shifted, and psychiatric care is no longer relegated to hospital basements.

Later, Foucault also analyzed discourses surrounding sexuality, yielding equally fascinating results. Discourse analysis almost always addresses topics that have significant societal relevance or explosive potential.


Discourse Analysis: Knowledge and Power

Another key point to remember: Foucault realized that discourse always involves knowledge and power. Power influences discourse—not necessarily in a positive or negative way, but in ways that must be considered.

  • Power: A discourse analysis must take into account who participates in the discourse, why they do so, and what interests they represent in trying to shape it.
  • Knowledge: A discourse develops over time and contributes to an increasingly sophisticated understanding. For instance, society once knew very little about the causes of mental illness, but today’s discourse reflects a far more nuanced perspective.

Foucault conducted his discourse analyses through linguistic deconstruction and reconstruction. However, his approach did not result in a reproducible scientific method.

To conduct a discourse analysis that meets modern academic standards, we need to look at how this method has been further developed.

Discourse Analysis as a Scientific Method (5 Steps)

Much like thematic analysis or the grounded theory approach, discourse analysis has been refined and expanded by many scholars. For simplicity, we’ll focus on the work of Reiner Keller, who wrote a whole book on discourse analysis and the different approaches to it.

Keller’s book The Sociology of Knowledge Approach to Discourse (2011) should be at the top of your reading list if you’re planning to conduct a discourse analysis.

A discourse analysis isn’t some vague or overly abstract process—it follows the same principles as other qualitative methods in empirical social research. In fact, it often uses many of the same components, as we’ll see in a moment.

Here’s Keller’s 5-step process:

#1 Formulating Research Questions

The research question for a discourse analysis is no different from any other research question. However, it must be framed so that discourse analysis is the logical methodological choice to address the question.

Example Research Question:
How is climate change being discussed in political discourse in the United States?

#2 Conducting a Literature Review

Next, as with any academic work, you’ll need to review the current state of research and engage with key concepts.

If you want to take your discourse analysis a step further, you can also develop a theoretical framework. In that case, you’ll need to adjust your research question to incorporate the theory into your analysis.

Example:
How does political discourse on climate change influence public attitudes in fossil fuel-dependent regions of the United States?

In this context, the “spiral of silence” theory could be a useful framework. This theory explains why certain groups refrain from expressing their opinions when they believe they are in the minority.

#3 Data Sampling

Now it’s time to collect your data. For discourse analysis, this typically means documents, such as texts or publications, that best reflect the public discourse on your topic.

Example:
For the research question on climate change, relevant data could include campaign platforms from Democratic and Republican candidates, congressional speeches, opinion pieces from major newspapers like The New York Times or The Wall Street Journal, and environmental reports from think tanks.

The principle of theoretical sampling, which you might recognize from grounded theory, also applies here. Your sample can expand over time. For instance, you could start with a Democratic Party campaign platform and then add a contrasting perspective, such as a Republican Party campaign platform. Depending on your research question, you can iteratively build your dataset to better understand the discourse.

#4 Coding

When analyzing your data, Keller again draws on grounded theory. He suggests creating categories that summarize and link recurring aspects of the discourse. Write comments and memos, which you can then abstract into broader categories.

Example:
In the discourse on climate change, a category like “economic impacts” might emerge if discussions frequently center on how climate policies affect jobs or industry competitiveness in the United States.

The unique aspect of discourse analysis is that it doesn’t focus on individual statements or actors (as is often the case with expert interviews) but rather on how the entirety of statements and actors, including opposing positions, interact. Your goal is to uncover overarching patterns that define the discourse.
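If you code your documents digitally, a simple tally can help you see those overarching patterns. Here is a minimal, purely illustrative Python sketch; the actors and categories are invented placeholders:

```python
from collections import defaultdict

# Hypothetical coded segments: (source/actor, category) pairs from your memos
coded_segments = [
    ("Democratic platform", "economic impacts"),
    ("Republican platform", "economic impacts"),
    ("Newspaper op-ed", "climate justice"),
    ("Republican platform", "energy independence"),
    ("Democratic platform", "climate justice"),
]

# Tally categories across actors to surface discourse-wide patterns
counts = defaultdict(lambda: defaultdict(int))
for actor, category in coded_segments:
    counts[category][actor] += 1

for category, actors in counts.items():
    print(category, dict(actors))
```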

#5 Presenting Results

In your results section, explain what you’ve uncovered about the discourse in relation to your research question. It might make sense to structure your findings by actors or thematic patterns based on your categories. Use subheadings for clarity and tables to present your results concisely and accessibly.


Discussing the Results of Your Discourse Analysis

The questions below are a great way to guide your discussion. You can either work through them one by one to create a detailed overview of the discourse or pick a few key questions to focus on.

If you’re writing a term paper, it’s probably best to keep things manageable and stick to one or two questions. But for a master’s thesis, you’ll have more space to dig deeper, so tackling as many questions as possible will give you a richer, more comprehensive understanding of the discourse.

The main goal is to really analyze what’s happening in the discourse—its evolution, how it connects to other discourses, and the power dynamics driving it.

  • What triggers the emergence of a discourse, and what factors contribute to its decline or transformation?
  • What linguistic or symbolic strategies are employed to frame and convey meanings within the discourse?
  • In what ways does the discourse shape and define objects, concepts, or identities?
  • Which key events or turning points have significantly influenced the trajectory of the discourse?
  • Which actors occupy specific speaker positions, and what strategies do they use to assert or legitimize their authority?
  • Who initiates or controls the discourse, who is the intended target, and how is it received by the audience?
  • What relationships or tensions exist between this discourse and other intersecting or opposing discourses?
  • How does the discourse reflect, reinforce, or challenge prevailing social, cultural, or political contexts?
  • What power effects are produced by the discourse, and how do these effects influence or intersect with societal practices and structures?

If the scope of your work allows, try to incorporate as many of these questions as possible into your discussion.

Conclusion

With Foucault’s concept of discourse, Keller’s 5-step methodology, and the discussion questions, you’re well-equipped to conduct your own discourse analysis.

However, remember: this article is only a quick introduction to the topic. It’s meant to inspire you to dive deeper. Grab Keller’s book or search for a documentary on Michel Foucault to immerse yourself further.

Discourse analysis isn’t difficult to understand or execute. There’s no strict right or wrong—it all depends on how you approach it.

Categories
Uncategorized

Focus Group Discussion – Qualitative Research Method (Tutorial)

Are you thinking about using a focus group discussion as a qualitative research method?

If so, take 10 minutes to go through this guide.

We’ll cover everything you need to know: starting with an introduction to the method and when it’s most effective, before walking you through the process step-by-step. By the end, you’ll be fully prepared to run your first focus group discussion and analyze the results with confidence.

What Are Focus Groups in Qualitative Research?

Focus group discussions were first used in market research and later adopted in sociology. Today, they’re a recognized and versatile qualitative research method applied in a wide range of fields.

In a focus group discussion, you, as the researcher, bring together a small group of experts to discuss your research topic, with you acting as the moderator.

What makes this method unique is its ability to generate a rich variety of interpersonal interactions in a short amount of time. These interactions can uncover more detailed background information than one-on-one interviews typically provide (Krueger, 1994).

This method isn’t limited to a single group—you can organize multiple groups with different participants or reconvene the same group at various stages of your research.

Unlike observations, focus group discussions take place in a controlled setting that you design. The discussions are collaborative, encouraging the exchange of new ideas, opinions, and reactions. The aim isn’t to spark heated debates but rather to foster thoughtful and meaningful dialogue.

When Are Focus Groups Useful in Qualitative Research?

Focus group discussions are flexible and can be applied in a variety of scenarios:

1. Developing Theory on a New Topic or Phenomenon

Focus groups are especially useful when exploring a relatively new topic with an exploratory approach. This involves relying less on existing knowledge or theories and using the discussion to develop new ideas or theories.

Remember, in research we often separate data collection from data analysis. Focus groups are a classic data collection method for gathering your own insights.

2. Research on Group or Team Dynamics

For example, if you’re studying how a new software tool impacts team dynamics, you could organize a focus group where participants test the software together and share their real-time experiences, rather than interviewing each member individually.

3. Evaluation Scenarios

Focus groups are also great for evaluations—assessing how well an artifact works and whether it achieves its intended purpose.

Artifact?! What does that mean?

An artifact could be anything: a robot prototype, a learning app, a dietary guide, or even a theoretical framework. You could also use focus groups to evaluate a definition or model you’ve developed.

In all these cases, focus groups can provide deep insights.

Conducting a Focus Group Discussion in 7 Steps

#1 Selecting the Right Participants

The ideal size for a focus group is 6 to 8 participants. The group should be “small enough for everyone to share insights yet large enough to provide diversity of perceptions” (Krueger & Casey, 2000, p. 10).

Smaller groups rely more heavily on individual expertise since each participant needs to contribute more. However, smaller groups may lack diversity in perspectives, which you should address in your research discussion.

Select participants who bring relevant expertise. The goal isn’t random sampling but purposeful sampling to identify the best candidates for the discussion.

Together, participants should represent a broad spectrum of perspectives on your research topic.

#2 Creating the Right Setting

The setting of a focus group can significantly influence the conversation. Traditional methodology books often emphasize physical arrangements like seating layouts.

Today, tools like Zoom have expanded the possibilities. While virtual discussions may lose some of the group dynamics found in face-to-face settings, they allow you to gather experts from anywhere, potentially enhancing the quality of your group.

Choose between a physical or virtual setting based on your research needs, and consider the pros and cons of each. Virtual discussions are now widely accepted and can be just as effective.

#3 Preparing for Moderation

Like qualitative interviews, focus group discussions benefit from a clear guide. As the moderator, it’s your role to steer the conversation.

Hennink (2014) describes this process using a sandglass model:

  • Start broad: Introduce the topic, provide context, and thank participants. Ask for consent to record and answer any questions.
  • Narrow the focus: Conduct a brief round of introductions and ask participants about their prior experience with the topic.
  • Main discussion: Dive into the core of the discussion, asking prepared questions or assigning a collaborative task.
  • Wrap up: Conclude with reflective or follow-up questions to address anything that wasn’t discussed earlier.

#4 Briefing Your Team and Providing Materials (Optional)

If you’re working with a team, now’s the time to brief them. Sometimes it makes sense to have two people running the discussion: one person moderates, while the other supports by taking notes. If you’re organizing multiple groups, you might need additional moderators. In that case, make sure they’re properly briefed and familiar with the discussion guide.

The second point is about providing materials. If your focus group involves an interactive task, you might need supplies like posters, markers, or sticky notes.

For virtual settings, you can use tools such as an online whiteboard (Google Jamboard, for example) to achieve the same effect.

#5 Conducting and Recording

On the day of the focus group discussion, it’s best to have two recording options ready in case one fails. I usually record audio on both my phone and laptop at the same time. If possible, consider recording video too; it can be a valuable addition.

Make sure you’re familiar with data protection rules for recordings and handle the data responsibly. Store or delete the recordings in compliance with ethical guidelines to avoid any issues with your university’s ethics committee.

It’s also a good idea to take notes during the discussion. These can provide valuable supplemental data for your analysis.

#6 Transcription

The primary data source for your analysis will be the transcript of the discussion. This means either typing out everything verbatim or using a transcription tool to save time.
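One option for automating this step is an open-source speech-to-text model. The sketch below assumes you have installed the openai-whisper package and saved your recording as interview.mp3 (a hypothetical file name):

```python
import whisper

# Load a small pretrained model; larger models are slower but more accurate
model = whisper.load_model("base")

# Transcribe the focus group recording
result = model.transcribe("interview.mp3")
print(result["text"])
```

Because the model runs locally, sensitive recordings never leave your machine; still, proofread the automatic transcript carefully against the audio.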

Your dataset will include the transcript, your notes, and any creative outputs from the group, like posters or other materials.

#7 Data Analysis

Once you have your data, it’s time to move on to analysis. At this stage, we’re stepping beyond the focus group discussion itself and into the broader research framework. The method you use for analysis will depend on how the discussion fits into your overall study design.

Common approaches include grounded theory, thematic analysis, or a combination of coding techniques. This would be the perfect opportunity to explore an in-depth guide on the analysis method that best suits your research!