
Discourse Analysis Simply Explained (Foucault, Method, Examples)

Do you want to conduct a discourse analysis for your academic paper but feel confused by all the overly complicated explanations of this method?

In this article, I’ll answer the three most important questions about discourse analysis:

  1. What is a discourse, and how can this concept be understood?
  2. How do you conduct a discourse analysis step by step?
  3. What are some examples of discourse analysis?

By the end of this article, you’ll know exactly how to proceed to turn your discourse analysis into a structured academic paper.

What Is a Discourse?

To answer this question, there’s no avoiding the work of the French philosopher Michel Foucault (1926–1984). Foucault was a fascinating thinker who, alongside many other ideas and theories, significantly developed the concept of discourse.

For Foucault, “discourse” refers to all forms of statements, such as texts, terms or concepts, that circulate within a society and shape public dialogue about a particular topic. Discourse defines not only the language, but also the way society thinks about that topic.

From this arise unwritten rules about how the topic is discussed and what might be considered taboo. Ultimately, discourse even determines whether and how actions are taken in relation to that issue.

Examples from Foucault’s Work

One example from Foucault’s early work is the discourse surrounding mental illness—or, as he called it, madness. He analyzed when and how society began labeling individuals as insane and what was considered “normal” or “abnormal” behavior. It’s shocking to see how little it once took to be deemed insane and excluded from society.

Over time, these boundaries have shifted, and psychiatric care is no longer relegated to hospital basements.

Later, Foucault also analyzed discourses surrounding sexuality, yielding equally fascinating results. Discourse analysis almost always addresses topics that have significant societal relevance or explosive potential.


Discourse Analysis: Knowledge and Power

Another key point to remember: Foucault realized that discourse always involves knowledge and power. Power influences discourse—not necessarily in a positive or negative way, but in ways that must be considered.

  • Power: A discourse analysis must take into account who participates in the discourse, why they do so, and what interests they represent in trying to shape it.
  • Knowledge: A discourse develops over time and contributes to an increasingly sophisticated understanding. For instance, society once knew very little about the causes of mental illness, but today’s discourse reflects a far more nuanced perspective.

Foucault conducted his discourse analyses through linguistic deconstruction and reconstruction. However, his approach did not result in a reproducible scientific method.

To conduct a discourse analysis that meets modern academic standards, we need to look at how this method has been further developed.

Discourse Analysis as a Scientific Method (5 Steps)

Much like thematic analysis or the grounded theory approach, discourse analysis has been refined and expanded by many scholars. For simplicity, we’ll focus on the work of Reiner Keller, who wrote a whole book on discourse analysis and the different approaches to it.

Keller’s book The Sociology of Knowledge Approach to Discourse (2011) should be at the top of your reading list if you’re planning to conduct a discourse analysis.

A discourse analysis isn’t some vague or overly abstract process—it follows the same principles as other qualitative methods in empirical social research. In fact, it often uses many of the same components, as we’ll see in a moment.

Here’s Keller’s 5-step process:

#1 Formulating Research Questions

The research question for a discourse analysis is no different from any other research question. However, it must be framed so that discourse analysis is the logical methodological choice to address the question.

Example Research Question:
How is climate change being discussed in political discourse in the United States?

#2 Conducting a Literature Review

Next, as with any academic work, you’ll need to review the current state of research and engage with key concepts.

If you want to take your discourse analysis a step further, you can also develop a theoretical framework. In that case, you’ll need to adjust your research question to incorporate the theory into your analysis.

Example:
How does political discourse on climate change influence public attitudes in fossil fuel-dependent regions of the United States?

In this context, the “spiral of silence” theory could be a useful framework. This theory explains why certain groups refrain from expressing their opinions when they believe they are in the minority.

#3 Data Sampling

Now it’s time to collect your data. For discourse analysis, this typically means documents, such as texts or publications, that best reflect the public discourse on your topic.

Example:
For the research question on climate change, relevant data could include campaign platforms from Democratic and Republican candidates, congressional speeches, opinion pieces from major newspapers like The New York Times or The Wall Street Journal, and environmental reports from think tanks.

The principle of theoretical sampling, which you might recognize from grounded theory, also applies here. Your sample can expand over time. For instance, you could start with a Democratic Party campaign platform and then add a contrasting perspective, such as a Republican Party campaign platform. Depending on your research question, you can iteratively build your dataset to better understand the discourse.

#4 Coding

When analyzing your data, Keller again draws on grounded theory. He suggests creating categories that summarize and link recurring aspects of the discourse. Write comments and memos, which you can then abstract into broader categories.

Example:
In the discourse on climate change, a category like “economic impacts” might emerge if discussions frequently center on how climate policies affect jobs or industry competitiveness in the United States.

The unique aspect of discourse analysis is that it doesn’t focus on individual statements or actors (as is often the case with expert interviews) but rather on how the entirety and polarity of statements and actors interact. Your goal is to uncover overarching patterns that define the discourse.
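If you manage your codes digitally, a minimal sketch of how coded segments could be grouped into such categories might look like the following. Everything here is purely illustrative: the sources, codes, and counts are invented for the climate change example.

    from collections import defaultdict

    # Illustrative only: coded segments from different documents in the climate change discourse.
    coded_segments = [
        ("Democratic platform", "economic impacts"),
        ("Republican platform", "economic impacts"),
        ("NYT op-ed", "climate justice"),
        ("Congressional speech", "economic impacts"),
        ("Think tank report", "energy security"),
    ]

    categories = defaultdict(list)
    for source, code in coded_segments:
        categories[code].append(source)

    for code, sources in categories.items():
        print(f"{code}: {len(sources)} segment(s) across {sorted(set(sources))}")

Looking at how often a category appears across different sources, rather than within a single document, mirrors exactly this shift from individual statements to the discourse as a whole.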

#5 Presenting Results

In your results section, explain what you’ve uncovered about the discourse in relation to your research question. It might make sense to structure your findings by actors or thematic patterns based on your categories. Use subheadings for clarity and tables to present your results concisely and accessibly.


Discussing the Results of Your Discourse Analysis

The following questions are a great way to guide your discussion. You can either work through them one by one to create a detailed overview of the discourse or pick a few key questions to focus on.

If you’re writing a term paper, it’s probably best to keep things manageable and stick to one or two questions. But for a master’s thesis, you’ll have more space to dig deeper, so tackling as many questions as possible will give you a richer, more comprehensive understanding of the discourse.

The main goal is to really analyze what’s happening in the discourse—its evolution, how it connects to other discourses, and the power dynamics driving it.

  • What triggers the emergence of a discourse, and what factors contribute to its decline or transformation?
  • What linguistic or symbolic strategies are employed to frame and convey meanings within the discourse?
  • In what ways does the discourse shape and define objects, concepts, or identities?
  • Which key events or turning points have significantly influenced the trajectory of the discourse?
  • Which actors occupy specific speaker positions, and what strategies do they use to assert or legitimize their authority?
  • Who initiates or controls the discourse, who is the intended target, and how is it received by the audience?
  • What relationships or tensions exist between this discourse and other intersecting or opposing discourses?
  • How does the discourse reflect, reinforce, or challenge prevailing social, cultural, or political contexts?
  • What power effects are produced by the discourse, and how do these effects influence or intersect with societal practices and structures?

If the scope of your work allows, try to incorporate as many of these questions as possible into your discussion.

Conclusion

With Foucault’s concept of discourse, Keller’s 5-step methodology, and the discussion questions, you’re well-equipped to conduct your own discourse analysis.

However, remember: this article is only a quick introduction to the topic. It’s meant to inspire you to dive deeper. Grab Keller’s book or search for a documentary on Michel Foucault to immerse yourself further.

Discourse analysis isn’t difficult to understand or execute. There’s no strict right or wrong—it all depends on how you approach it.


Focus Group Discussion – Qualitative Research Method (Tutorial)

Are you thinking about using a focus group discussion as a qualitative research method?

If so, take 10 minutes to go through this guide.

We’ll cover everything you need to know: starting with an introduction to the method and when it’s most effective, before walking you through the process step-by-step. By the end, you’ll be fully prepared to run your first focus group discussion and analyze the results with confidence.

What Are Focus Groups in Qualitative Research?

Focus group discussions were first used in market research and later adopted in sociology. Today, they’re a recognized and versatile qualitative research method applied in a wide range of fields.

In a focus group discussion, you, as the researcher, bring together a small group of experts to discuss your research topic, with you acting as the moderator.

What makes this method unique is its ability to generate a rich variety of interpersonal interactions in a short amount of time. These interactions can uncover more detailed background information than one-on-one interviews typically provide (Krueger, 1994).

This method isn’t limited to a single group—you can organize multiple groups with different participants or reconvene the same group at various stages of your research.

Unlike observations, focus group discussions take place in a controlled setting that you design. The discussions are collaborative, encouraging the exchange of new ideas, opinions, and reactions. The aim isn’t to spark heated debates but rather to foster thoughtful and meaningful dialogue.

When Are Focus Groups Useful in Qualitative Research?

Focus group discussions are flexible and can be applied in a variety of scenarios:

1. Developing Theory on a New Topic or Phenomenon

Focus groups are especially useful when exploring a relatively new topic with an exploratory approach. This involves relying less on existing knowledge or theories and using the discussion to develop new ideas or theories.

Remember, in research we often separate data collection from data analysis. Focus groups are a classic data collection method for gathering your own insights.

2. Research on Group or Team Dynamics

For example, if you’re studying how a new software tool impacts team dynamics, you could organize a focus group where participants test the software together and share their real-time experiences, rather than interviewing each member individually.

3. Evaluation Scenarios

Focus groups are also great for evaluations—assessing how well something works and whether it achieves its intended purpose. In research, the “something” you evaluate is often called an artifact.

Artifact?! What does that mean?

An artifact could be anything: a robot prototype, a learning app, a dietary guide, or even a theoretical framework. You could also use focus groups to evaluate a definition or model you’ve developed.

In all these cases, focus groups can provide deep insights.

Conducting a Focus Group Discussion in 7 Steps

1. Selecting the Right Participants

The ideal size for a focus group is 6 to 8 participants. The group should be “small enough for everyone to share insights yet large enough to provide diversity of perceptions” (Krueger & Casey, 2000, p. 10).

Smaller groups rely more heavily on individual expertise since each participant needs to contribute more. However, smaller groups may lack diversity in perspectives, which you should address in your research discussion.

Select participants who bring relevant expertise. The goal isn’t random sampling but purposeful sampling to identify the best candidates for the discussion.

Together, participants should represent a broad spectrum of perspectives on your research topic.

2. Creating the Right Setting

The setting of a focus group can significantly influence the conversation. Traditional methodology books often emphasize physical arrangements like seating layouts.

Today, tools like Zoom have expanded the possibilities. While virtual discussions may lose some of the group dynamics found in face-to-face settings, they allow you to gather experts from anywhere, potentially enhancing the quality of your group.

Choose between a physical or virtual setting based on your research needs, and consider the pros and cons of each. Virtual discussions are now widely accepted and can be just as effective.

3. Preparing for Moderation

Like qualitative interviews, focus group discussions benefit from a clear guide. As the moderator, it’s your role to steer the conversation.

Hennink (2014) describes this process using an hourglass model:

  • Start broad: Introduce the topic, provide context, and thank participants. Ask for consent to record and answer any questions.
  • Narrow the focus: Conduct a brief round of introductions and ask participants about their prior experience with the topic.
  • Main discussion: Dive into the core of the discussion, asking prepared questions or assigning a collaborative task.
  • Wrap up: Conclude with reflective or follow-up questions to address anything that wasn’t discussed earlier.

4. Briefing Your Team and Providing Materials (Optional)

If you’re working with a team, now’s the time to brief them. Sometimes it makes sense to have two people running the discussion: one person moderates, while the other supports by taking notes. If you’re organizing multiple groups, you might need additional moderators. In that case, make sure they’re properly briefed and familiar with the discussion guide.

The second point is about providing materials. If your focus group involves an interactive task, you might need supplies like posters, markers, or sticky notes.

For virtual settings, an online whiteboard such as Google Jamboard can achieve the same effect.

5. Conducting and Recording

On the day of the focus group discussion, it’s best to have two recording options ready, just in case one fails. I usually record audio on both my phone and laptop at the same time. If possible, consider recording video too; it can be a valuable addition.

Make sure you’re familiar with data protection rules for recordings and handle the data responsibly. Store or delete the recordings in compliance with ethical guidelines to avoid any issues with your university’s ethics committee.

It’s also a good idea to take notes during the discussion. These can provide valuable supplemental data for your analysis.

6. Transcription

The primary data source for your analysis will be the transcript of the discussion. This means either typing out everything verbatim or using a transcription tool to save time.
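If you go the tool route, one possible option (my assumption here, not a requirement of the method) is the open-source Whisper model. A minimal sketch, with a hypothetical file name:

    # Install first with: pip install openai-whisper
    import whisper

    model = whisper.load_model("base")            # larger models are more accurate but slower
    result = model.transcribe("focus_group.mp3")  # hypothetical recording file

    # Save the raw transcript; you'll still need to check speaker turns and correct errors.
    with open("focus_group_transcript.txt", "w", encoding="utf-8") as f:
        f.write(result["text"])

Whatever tool you use, plan time to proofread the output and anonymize participants before analysis.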

Your dataset will include the transcript, your notes, and any creative outputs from the group, like posters or other materials.

7. Data Analysis

Once you have your data, it’s time to move on to analysis. At this stage, we’re stepping beyond the focus group discussion itself and into the broader research framework. The method you use for analysis will depend on how the discussion fits into your overall study design.

Common approaches include grounded theory, thematic analysis, or a combination of coding techniques. This would be the perfect opportunity to explore an in-depth guide on the analysis method that best suits your research!


Conducting a Qualitative Meta-Study (Simply Explained)

Do you want to analyze qualitative data for your academic work, but the idea of conducting new interviews or collecting documents feels overwhelming? A qualitative meta-study might be the perfect solution for you.

This method has been gaining traction lately. And just like literature reviews, it doesn’t require you to collect your own data…

You can simply use qualitative data from other studies to uncover new connections and develop theories. It doesn’t get much more practical than that, does it?

In this article, I’ll walk you through how to conduct a qualitative meta-study while adhering to academic standards.

1. What is a Qualitative Meta-Study?

A qualitative meta-study (QMS) is a method for combining data from multiple qualitative studies on a specific topic. While a single study often provides only one perspective, QMS allows you to integrate findings from numerous studies, offering a more comprehensive and nuanced understanding.

Meta-analyses are well-established in quantitative research, where they statistically combine numerical data from various studies to draw universal conclusions. In qualitative research, however, the focus is on narrative data, such as interviews and case studies. These are not merely aggregated but reanalyzed, often from a new theoretical perspective, to generate fresh insights.

Until recently, the process for conducting QMS lacked clarity. But in 2024, a groundbreaking paper by Habersang and Reihlen provided detailed guidelines for structuring and standardizing QMS, making the method more accessible and comparable. This paper has been met with widespread acclaim, with many researchers calling it an “instant classic.”

Let’s dive deeper into these guidelines!


2. The Three Reflective Meta-Practices by Habersang and Reihlen

To ensure qualitative meta-studies yield meaningful insights, Habersang and Reihlen propose three key reflective practices. These practices help you derive deeper understanding from your studies. We’ll cover how to select studies in the next section, but first, here’s what you need to know:

1. Translation

Different studies often use varied terminology for similar concepts. For example, one study might discuss “emotional leadership,” while another refers to “transformational leadership.” Translation involves aligning these terms into a shared language so the studies can be compared effectively.

This process goes beyond mere word alignment—it’s about understanding the underlying meaning of each concept. Your goal is to preserve the essential insights of each study while creating connections between them.

2. Abstraction

Abstraction involves distilling the details of individual studies to identify broader patterns and overarching theories. It’s about lifting the analysis to a higher level, enabling you to see commonalities across studies.

When abstracting, it’s essential to consider the unique context of each study while developing theories that apply across multiple cases. Striking the right balance between detail and generalization is key.

Developing theories might sound intimidating, but it’s not unlike other methods such as grounded theory or theoretical literature reviews. Creating new theoretical insights is often easier than it seems.

3. Iterative Interrogation

Iterative interrogation means revisiting your data repeatedly throughout the analysis to question and refine your assumptions. This process involves continuously challenging your interpretations and adapting them based on new patterns or insights that emerge.

Here, your “data” consists of direct quotes and findings from the qualitative studies you’re analyzing. While you might begin with a specific idea, the iterative process ensures your conclusions evolve as you uncover new evidence.

This constant interplay between critical questioning and discovery helps ensure your research is both innovative and grounded in clear, reproducible results.


3. Guidelines for Conducting Confirmatory QMS

Confirmatory QMS tests existing theories by comparing findings from multiple studies. The goal is to determine whether the collected data supports or challenges a particular theory. This approach is particularly useful when you want to validate a widely accepted theory or identify inconsistencies across studies.

Guidelines and Procedure:

  1. Develop a Theory-Driven, Focused Research Question
    Start with a precise research question grounded in an existing theory. This question will guide you in formulating specific hypotheses, which you’ll test using data from various studies. Example: “Does transformational leadership increase employee satisfaction in flat hierarchies?” A hypothesis derived from this question could form the basis for your QMS.
  2. Justify a Comprehensive or Selective Search Strategy
    Decide whether to conduct a comprehensive search (aiming to include as many relevant studies as possible) or a selective search (focusing on high-quality studies that align closely with your research question). Example: If you’re studying the impact of transformational leadership in startups, you might specifically look for case studies from that context.
  3. Select Homogeneous and Comparable Cases
    Choose cases that are methodologically and theoretically aligned to ensure meaningful comparisons. However, including a few outliers can be useful for testing the boundaries of your theory. Example: If comparing leadership styles, ensure the methods for measuring employee satisfaction are consistent across studies. An outlier might be a study showing that transformational leadership only works in specific cultural contexts.
  4. Synthesize Through Aggregation
    Aggregation involves combining the findings of different studies to see whether they support or contradict your hypotheses. Use deductive categories (e.g., “Supports the theory?”) and introduce inductive categories when unexpected patterns arise. The goal is to create a clear theoretical model showing how well your hypotheses hold up. A brief sketch of this kind of tallying follows after this list.
  5. Ensure Quality Through Transparency
    Document every decision and step of your analysis process. Transparency is crucial for ensuring your work can be replicated and validated by others. Maintain a detailed log covering everything from your literature search to your case selection and analysis.

4. Guidelines for Conducting Exploratory QMS

Exploratory QMS focuses on developing new theories or expanding existing ones. The goal is to explore studies for fresh patterns or explanations that might have been overlooked. This method is especially helpful when there are no clear existing theories on the topic.

Guidelines and Procedure:

  1. Develop an Open Research Question
    Keep the research question broad to allow for new ideas and theories to emerge. Refine the question as patterns or insights from the data guide you. Example: You could investigate the phases of digital transformation across organizations without relying on a predefined theoretical framework.
  2. Broad or Targeted Literature Search
    Conduct a broad search to capture diverse data or focus on particularly rich studies that provide deep insights into specific aspects of your topic. Example: Collect case studies from various industries to understand how digital transformation unfolds in different settings.
  3. Choose Heterogeneous and Diverse Cases
    Include diverse and contrasting cases to uncover new perspectives and patterns that might not emerge in a homogeneous dataset.
  4. Synthesize Through Configuration
    Reinterpret the data creatively to develop a new theoretical model, rather than forcing it into predefined categories. Goal: Generate fresh insights about the phenomenon by integrating findings from different studies.
  5. Ensure Quality Through Diversity and Depth
    The success of exploratory QMS depends on the variety and depth of the analyzed cases. The better you identify and articulate new patterns, the stronger your theoretical contribution.

Summary of QMS Types

Here’s a quick comparison of the two types of QMS:

Criterion | Confirmatory QMS | Exploratory QMS
Goal | Test and refine existing theories | Develop new theories
Research Question | Focused, theory-driven | Open, broadly defined
Hypotheses | Predefined | None; focused on discoveries
Search Strategy | Comprehensive or selective | Broad, but targeted cases also possible
Sample | Homogeneous and comparable | Heterogeneous and diverse
Synthesis | Aggregation of findings | Configuration of new theoretical models
Quality Criteria | Transparency and thorough documentation | Diversity and depth of the analyzed cases

A qualitative meta-study is certainly not the easiest method to tackle, but it’s a perfectly feasible choice for something like a master’s thesis. If you already have experience with qualitative research or are willing to invest the time to learn this method, suggesting a QMS can really impress your supervisors.

Good luck with your research!

📖 Habersang, S., & Reihlen, M. (2024). Advancing qualitative meta-studies (QMS): Current practices and reflective guidelines for synthesizing qualitative research. Organizational Research Methods.


Theoretical Literature Review According to Webster & Watson (Tutorial)

Are you struggling with writing an independent theoretical literature review, perhaps even as part of your thesis, and don’t know where to begin? Don’t worry! With the guidance from Webster and Watson (2002), you can bring structure to the chaos and impress even the harshest professor.

In this article, I’ll show you how to write a literature review based on Webster & Watson’s recommendations in 7 easy steps.

By the end of this tutorial, you’ll realize it’s not as daunting as it seems. In fact, it’s simpler than you think!

Writing a Literature Review According to Webster and Watson (2002)

Webster & Watson (2002) were the first to introduce a structured process for writing what’s now known as a “theoretical” literature review.

It’s important to note that systematic reviews originated in medicine, where their main purpose is to summarize empirical research findings.

In other disciplines, like most social sciences, the literature base is much more diverse. You’ll encounter qualitative studies, quantitative studies, mixed-methods research, and even purely conceptual papers.

The original methodology of systematic literature reviews or meta-analyses (as applied in medicine) doesn’t work well here. These methods rely on a highly uniform body of research, where nearly every study reports similar statistical tests.

That’s why the social sciences have developed what’s now referred to as a “theoretical literature review.” Some studies still use the term “systematic literature review,” even though they aren’t summarizing purely quantitative findings as originally intended.

A theoretical literature review brings together all types of literature and aims to contribute a unique theoretical perspective that goes beyond the sum of the individual studies.

This is why Webster & Watson’s (2002) article is titled “Analyzing the Past to Prepare for the Future.”

Here’s how they envision a theoretical literature review:


1. Your Literature Review Must Be Concept-Centric

At the beginning of your review, just like any academic paper, you need to identify a research gap or highlight a problem in the existing literature. The key here is to focus on a theoretical problem.

For instance, let’s say your topic is digital transformation in the workplace. That’s the phenomenon, but it’s not a theoretical concept. A theoretical concept in this context might be “identity.”

The problem shouldn’t be practically motivated (e.g., companies struggling to adopt technologies like Zoom in the workplace) but theoretically motivated. For example, what Identity Theory A claims might contradict what we observe in reality, suggesting that Theory B might provide a better explanation.

In this example, current literature might focus either on organizational identity (Who are we as a company?) or individual identity (Who am I as an employee?). However, in the context of digital transformation, these identities are deeply intertwined. We need theoretical explanations to understand this interplay and its connection to technology.

Next, your background chapters should precisely define key concepts and clearly outline the scope of your literature analysis. In our example, you might need one chapter on digital transformation in the workplace and another on identity theory.

Webster & Watson also stress the difference between concept-centric and author-centric literature analysis—a distinction that’s relevant to all academic writing, so take note!

  • Concept-centric writing begins like this:
    Concept X (e.g., identity) … (Author A; Author B)
  • Author-centric writing begins like this:
    Author A states that Concept X (e.g., identity) …

A theoretical literature review is always concept-centric – not just in writing style but also in its structure. You need a central theoretical concept – without it, you’re not writing a theoretical literature review according to Webster & Watson (2002).


2. Finding the Right Literature

To conduct a thorough literature review, the first step is to find relevant studies and papers on your topic. Webster and Watson recommend starting with the most prominent journals in your field. If your topic spans multiple disciplines, you may need to look into related fields as well. For example, research on digital transformation and identity in the workplace could appear in management, human resources, information systems, computer science, and psychology journals.

It’s crucial, however, to clearly define the scope of your review. Avoid making it too broad—this can quickly become overwhelming and dilute the impact of your theoretical contribution. Instead, focus on a specific angle or perspective that allows you to make a precise and meaningful contribution to the field.

This is where Webster and Watson adapt a key element from traditional systematic literature reviews: the structured search process. Use well-defined search terms and systematically explore academic databases. Additionally, they suggest expanding your review with two systematic search techniques, which we’ll look at next.

3. Backward and Forward Searches

Once you’ve identified some initial studies through your database search, you can expand your review using two complementary techniques: backward and forward searches.

  • Backward Search: Examine the reference lists of the studies you’ve already found. This helps you uncover older, foundational works that might be relevant to your research. It also gives you insight into how the field has evolved over time.
  • Forward Search: Use citation databases like Google Scholar to identify newer studies that have cited your initial sources. This allows you to explore the most recent research developments in your area of interest.

Example:

Let’s say you’ve found a 2022 study by Author X on remote work and identity. A backward search might lead you to earlier works by Author Y (2016) and Author Z (2020) that explore similar themes. A forward search, on the other hand, could help you discover a 2023 study by Author A that offers valuable insights into your topic.

By combining these approaches, you’ll build a comprehensive foundation for your review, covering both historical context and current developments in your field.

4. Creating a Concept Matrix

Webster and Watson strongly advise against organizing your literature by author or publication date. Instead, they recommend grouping studies by concepts. This approach helps you identify patterns across the literature and makes it easier to compare findings.

The tool they suggest for this is a concept matrix—a simple table that allows you to categorize studies based on the theoretical concepts they address. This method not only makes your analysis more systematic but also helps you identify gaps in the research.

Example Concept Matrix:

Study | Individual Identity | Organizational Identity | Inter-Organizational Identity
Author A (2015) | X | – | –
Author B (2017) | – | X | –
Author C (2020) | X | X | –

Using a concept matrix like this, you can visually map the relationships between studies, identify areas that are well-researched, and pinpoint gaps that need further exploration. This clarity not only helps you structure your review but also provides a strong foundation for your theoretical contribution.

Additionally, you can include the matrix as a figure in your final paper to make your analysis more transparent and visually appealing.
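If you prefer to keep the matrix in a script or spreadsheet rather than a word processor, here’s a minimal sketch using pandas. The studies and concepts are the invented ones from the example above:

    import pandas as pd

    # Which concepts each (fictional) study addresses.
    studies = {
        "Author A (2015)": ["Individual Identity"],
        "Author B (2017)": ["Organizational Identity"],
        "Author C (2020)": ["Individual Identity", "Organizational Identity"],
    }
    concepts = ["Individual Identity", "Organizational Identity", "Inter-Organizational Identity"]

    matrix = pd.DataFrame(
        [["X" if c in covered else "" for c in concepts] for covered in studies.values()],
        index=list(studies),
        columns=concepts,
    )
    print(matrix)
    # Columns that stay empty (here: Inter-Organizational Identity) point to possible research gaps.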


5. Theory Development

A key part of Webster and Watson’s method is developing a theoretical contribution.

A theoretical literature review isn’t just about summarizing existing studies; it’s also about proposing new theoretical ideas. This might involve creating a theoretical framework or model based on your analysis.

There are several ways to approach this, and while none of them are simple, once you understand what’s expected, your review can make a meaningful contribution.

Option 1: Develop a Theoretical Model/Framework from Scratch

With this approach, you analyze the selected literature without relying on an existing theory. For instance, you could develop a model for identity threats caused by workplace technologies, perhaps focusing on employees at the individual level.

Option 2: Build on an Existing Theoretical Model/Framework

Here, you take an established framework or model from another context and expand it. For example, there might already be a model in psychology that explains identity threats without considering technology. Your literature analysis could extend this model by incorporating a technological dimension.

6. Formulating Propositions

Webster & Watson emphasize that your review should make it easy for other researchers to build on your work and apply your ideas.

One way to do this is by formulating propositions – generalized ideas that others can test quantitatively as hypotheses or explore further using qualitative methods.

Example: Digital Transformation and Identity

Let’s say your analysis uncovers a range of findings on identity threats:

  • Author A (2015): Strategic investments in artificial intelligence threaten the identity of customer service employees.
  • Author B (2017): ChatGPT reinforces the identity of individuals in management roles.
  • Author C (2020): Increased use of Zoom weakens organizational identity.

These findings hint at broader theoretical relationships that you can summarize as propositions. In reality, you’d usually be working with far more studies than just three.

Possible Propositions:

  1. Strategic investments in artificial intelligence undermine the professional identity of employees in roles that traditionally rely on personal interactions.
  2. The use of AI technologies strengthens the professional identity of managers by supporting their decision-making and leadership roles.
  3. Increased remote work reduces employees’ sense of belonging and identification with their organization.

In your review, it’s best to focus even more narrowly than in this example, for instance by concentrating only on individuals or only on organizations.

7. Evaluating Your Model/Framework and Propositions

To support your theoretical ideas, Webster & Watson recommend drawing on three main sources:

  1. Theoretical Explanations
    Base your theoretical ideas on established scientific models and concepts. These theories help explain the “why” behind your propositions by highlighting known relationships and mechanisms. They provide a logical foundation that gives your propositions credibility.
  2. Empirical Findings
    Use evidence from related studies or similar research topics to back up your propositions. These findings show that similar relationships have already been successfully tested, even if they don’t directly address your specific topic.
  3. Practical Experiences
    Practical insights or real-world case studies can also support your propositions. These examples demonstrate how your concepts or models work in practice, complementing the theoretical and empirical foundations.

Wrap up your discussion by outlining the implications for researchers and, if relevant, for practitioners.

Now all that’s left is to write a conclusion, and your theoretical literature review is complete!

Have questions? Drop me a comment!


Always Tired? Try These 7 Fixes That Work (Scientifically Proven)!


Do you often feel tired, sluggish, and drained?

Don’t worry – you’re not alone!

In this article, I’ll explain why you constantly feel exhausted and share seven scientifically backed strategies to turn things around.

Stay awake till the end, because there’s a lot to learn, and by the time we’re done, you’ll know exactly what to do to feel energized again.

The Basics of Fatigue

Before diving into the causes and solutions, let’s first understand what fatigue actually is.

Fatigue is a complex phenomenon with both physical and mental causes. When you’re tired, your body is signaling that it needs rest or sleep to recover and recharge.

A key factor to understand here is the human sleep cycle. Our sleep consists of different phases that repeat in roughly 90-minute cycles:

  • Light Sleep: This is when you’re drifting off and can be easily woken.
  • Deep Sleep: Crucial for physical recovery and growth.
  • REM Sleep (Rapid Eye Movement): The phase where most dreaming happens, essential for mental recovery and memory.

The importance of these cycles lies in the fact that your body and mind recover in different ways during each phase. Waking up in the middle of a cycle can leave you feeling groggy, even if you’ve clocked enough hours.

Always tired? Understanding and respecting these cycles can help you sleep better and wake up refreshed.
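If you like numbers, here’s a rough back-of-the-envelope calculation based on the roughly 90-minute average. It’s only a planning aid; the cycle length and the time you need to fall asleep vary from person to person, and the values below are assumptions:

    from datetime import datetime, timedelta

    bedtime = datetime(2024, 1, 1, 23, 0)      # hypothetical: lights out at 23:00
    fall_asleep = timedelta(minutes=15)        # rough guess for how long falling asleep takes
    cycle = timedelta(minutes=90)              # average cycle length, not a personal constant

    for cycles in (4, 5, 6):                   # roughly 6 to 9 hours of sleep
        wake_time = bedtime + fall_asleep + cycles * cycle
        print(f"{cycles} cycles -> wake up around {wake_time:%H:%M}")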

Causes of Fatigue and How to Fix Them

#1 Insufficient Sleep

Too little or poor-quality sleep can leave you perpetually tired. Let’s face it—you can’t expect to bounce through the day like a ball of energy if you spend half the night scrolling TikTok, dancing, or downing tequila.

Solution: Stick to a regular sleep schedule

A consistent sleep routine helps stabilize your body’s internal clock. Try to go to bed and wake up at the same times every day—even on weekends. Yes, that means no sleeping in till noon on Sundays.

Your body loves routine, as boring as that sounds. Create a relaxing evening ritual to prepare for sleep. A warm bath, a good book, or soft music can work wonders. And ditch screens for at least an hour before bed—blue light from phones and laptops can seriously mess with your sleep.

From a scientific perspective, our body’s circadian rhythm (our internal clock) thrives on regularity. Research shows that maintaining a steady sleep schedule improves sleep quality and reduces the risk of sleep disorders.


#2 Poor Sleep Environment

Feeling always tired might be due to an unsuitable sleep environment. Too much light, noise, or an uncomfortable temperature can prevent you from getting a good night’s sleep.

Solution: Optimize your sleep environment

Ensure your bedroom is dark, quiet, and cool. Invest in blackout curtains or an eye mask, and consider earplugs if your roommate snores like a chainsaw.

Keep the room temperature comfortable—around 18°C (64°F) is ideal for most people. And if your mattress feels older than you are, it might be time for an upgrade.

Research shows that a cool, dark, and quiet environment boosts melatonin production, the hormone that regulates sleep, leading to better rest.

Personally, I struggle most in winter. When it’s still pitch-black at 7:30 a.m., all I want to do is stay in bed. Sunrise alarm clocks, which gradually brighten to mimic the rising sun, have been a game-changer for me during dark winters.


#3 Stress and Emotional Overload

Stress raises certain hormone levels that can interfere with sleep. If your mind is racing with endless worries, falling asleep can feel impossible.

Solution: Manage your stress

Stress-reducing techniques can help lower your stress levels. Try relaxation methods like meditation or breathing exercises. Even five minutes of deep breathing can work wonders. There are plenty of great apps to guide you—Waking Up, 7Mind, Calm, or Mindbuilding, to name a few.

Keeping a journal can also help. Write down everything that’s bothering you—it’s often a relief to get your worries out of your head and onto paper.

Scientifically speaking, studies show that mindfulness exercises and meditation can lower cortisol levels, the hormone associated with stress. Journaling has also been shown to effectively reduce stress, improving sleep quality.

#4 Unbalanced Diet

Poor eating habits can negatively impact your sleep. Heavy meals late at night not only give you weird dreams but also disrupt your sleep.

Solution: Adjust your diet

Avoid greasy foods and caffeine before bed. That midnight burger? Bad idea. Your body needs time to digest food, and a full stomach can make it harder to sleep. Aim to have your last meal at least two to three hours before bedtime.

Caffeine can stay in your system for six hours or more, so make your last cup of coffee no later than 3 p.m.—earlier if possible.

Caffeine blocks adenosine, the chemical that makes you feel sleepy, which is why overconsumption can ruin your sleep.

#5 Lack of Exercise

Regular physical activity supports healthy sleep. If you’re a couch potato all day, your body might not be tired enough to sleep well.

Solution: Move regularly

The best sleep-boosting exercises include cardio (like running, swimming, or cycling), yoga, and moderate strength training.

The ideal time for exercise is in the morning or early evening. Avoid intense workouts close to bedtime—your body needs time to wind down. Aim to finish your workout at least two hours before going to bed.

Exercise improves sleep quality by reducing the time it takes to fall asleep and increasing deep sleep duration. It also helps regulate your circadian rhythm and lower stress levels.

#6 Chronotype and Individual Sleep Needs

Your chronotype determines your internal clock, influencing when you feel most awake and productive. Always tired? This could mean your schedule is out of sync with your natural rhythm. Some people are early birds, buzzing with energy at 6 a.m., while others are night owls, hitting their stride at midnight.

Solution: Find the right balance

Your chronotype is largely genetic. Early birds thrive in the morning, while night owls are more active at night.

Adjust your schedule to suit your chronotype as much as possible. Night owls can schedule important tasks for the evening, while early birds can tackle their most challenging work in the morning.

Research shows that aligning your daily routine with your chronotype can improve performance and overall well-being.


#7 Too Much Pressure

Many people stress about needing exactly eight hours of sleep every night. The truth? Ideal sleep duration is highly individual.

Solution: Listen to your body

The “eight-hour rule” isn’t one-size-fits-all. Some people thrive on less sleep, while others need more. Pay attention to your body to figure out what works best for you.

Chronic sleep deprivation—less than six hours a night—can cause serious health problems, so aim for at least seven to nine hours as a general guideline. But don’t stress over occasional bad nights—that pressure can make it harder to sleep.

Bonus Tip: Wearables

If you’re unsure how to track your sleep habits, consider using technology. Wearables like the Oura Ring, Fitbit, or Apple Watch come with tools to monitor your sleep and help you understand how your body responds to different stimuli.

But beware of the over-optimization trap. Life’s best moments don’t always lead to a perfect sleep score.

Literature:

Brand, S., Holsboer-Trachsler, E., Naranjo, J. R., & Schmidt, S. (2012). Influence of mindfulness practice on cortisol and sleep in long-term and short-term meditators. Neuropsychobiology, 65(3), 109-118.

Caddick, Z. A., Gregory, K., Arsintescu, L., & Flynn-Evans, E. E. (2018). A review of the environmental parameters necessary for an optimal sleep environment. Building and Environment, 132, 11-20.

Montaruli, A., Castelli, L., Mulè, A., Scurati, R., Esposito, F., Galasso, L., & Roveda, E. (2021). Biological rhythm and chronotype: New perspectives in health. Biomolecules, 11(4), 487.

Ohayon, M. M., Lemoine, P., Arnaud-Briant, V., & Dreyfus, M. (2002). Prevalence and consequences of sleep disorders in a shift worker population. Journal of Psychosomatic Research, 53(1), 577-583.

Reichert, C. F., Deboer, T., & Landolt, H. P. (2022). Adenosine, caffeine, and sleep–wake regulation: State of the science and perspectives. Journal of Sleep Research, 31(4), e13597.

Yang, P. Y., Ho, K. H., Chen, H. C., & Chien, M. Y. (2012). Exercise training improves sleep quality in middle-aged and older adults with sleep problems: A systematic review. Journal of Physiotherapy, 58(3), 157-163.


How Many Interviews Do I Need for My Thesis?

You’re in the early stages of your thesis and have decided to conduct interviews to gather empirical data. But now comes the big question: how many interviews do you actually need? Five? Ten? Fifty?

This is one of the most common questions I get asked, and the answer is—it depends.

Don’t worry, though. In this article, I’ll walk you through how to determine the optimal number of interviews for your study.

Why Isn’t There a Single Correct Answer?

You might have heard the phrase, “There are no fixed rules in qualitative research.” But what does that really mean? Unlike quantitative research, where sample size is often determined using statistical calculations, qualitative research is more flexible. Each qualitative study has different goals and uses different methods. This variability means there’s no universal number of interviews that’s always right—just guidelines and recommendations.

Luckily, Wutich and colleagues (2024) tackled this exact question in their paper. They developed a step-by-step flowchart to help you figure out the right number of interviews for your study.

According to the authors, the number of interviews largely depends on your research goals and methods. So, the first step is to clearly define what you want to achieve with your study and what kind of insights you aim to uncover. The appropriate number of interviews will then be guided by your research goals and how deeply you want to dive into the topic.

Their paper introduces several recommendations to help you narrow down the number of interviews without fixating on a rigid number. One central concept is saturation—the point at which additional interviews no longer provide new information.


The Five Key Approaches to Determining the Number of Interviews

The flowchart begins with a fundamental question: What is your research goal? Depending on whether you aim for a broad overview or an in-depth analysis, you’ll need a different amount of data.

1. Theme (Data) Saturation

If your goal is to gain a general overview of the main themes in your research area, you should aim for theme saturation. This occurs when no new themes emerge, and you’ve identified all the key aspects of your research topic. Wutich et al. (2024) recommend about nine interviews or four focus groups for this type of saturation. Theme saturation is ideal for studies designed to provide an overview of central themes, such as identifying the main stress factors among students.

Example: Imagine you’re exploring the topic of “stress in university life” and asking students what they find stressful. If, after several interviews, responses like “exam pressure” and “time constraints” keep repeating without any new factors emerging, you’ve reached theme saturation.
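One simple way to make this judgment transparent is to track, interview by interview, how many genuinely new themes appear. A minimal sketch with invented themes from the stress example:

    # Hypothetical themes coded in each interview, in the order the interviews were conducted.
    themes_per_interview = [
        {"exam pressure", "time constraints"},
        {"exam pressure", "financial worries"},
        {"time constraints", "exam pressure"},
        {"financial worries", "time constraints"},
        {"exam pressure"},
    ]

    seen = set()
    for i, themes in enumerate(themes_per_interview, start=1):
        new = themes - seen
        seen |= themes
        print(f"Interview {i}: {len(new)} new theme(s)" + (f" -> {sorted(new)}" if new else ""))
    # Several interviews in a row without new themes suggest you're approaching theme saturation.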

2. Meaning Saturation

For studies aiming to capture not just themes but also the interpretations and meanings of these themes from the participants’ perspectives, meaning saturation is the focus. This type of saturation digs deeper into the details associated with a theme. According to Wutich et al. (2024), meaning saturation usually requires about 24 interviews or eight focus groups.

Example: You’re studying how students experience exam stress. Instead of just identifying stress factors, you aim to understand how they perceive this stress. For some, it might stem from perfectionism, while for others, it’s due to time pressure or lack of support. When you’ve captured all these perspectives and no new interpretations arise, you’ve reached meaning saturation.

3. Theoretical Saturation

This approach is common in Grounded Theory, where the goal is to develop a theory that provides new insights into a phenomenon. Theoretical saturation involves understanding patterns and connections between different themes and building a theoretical foundation. According to Wutich et al. (2024), achieving theoretical saturation typically requires 20–30 interviews or more, depending on the complexity of the theory being developed.

Example: Suppose you’re developing a process theory on stress management in university life, exploring how various strategies interact over time. To create a comprehensive theory, you need detailed data covering multiple perspectives and connections. Theoretical saturation is achieved when additional interviews no longer refine or improve your theory.

Once you reach this point, you can stop collecting data—whether it’s at 23, 35, or 42 interviews. What matters is the outcome, not the exact number of interviews.

4. Metatheme Saturation

The meta-theme analysis method was originally developed to study cultural differences in language. Over time, it evolved into a mixed-methods approach that identifies overarching themes from qualitative data. This method combines qualitative data with quantitative analyses of word meanings or codes.

In recent research, meta-theme analysis has shifted towards qualitative applications, focusing on identifying and comparing shared themes across datasets collected in different locations or cultures. Typically, 20–40 interviews per site are needed to develop a solid list of main themes and identify common variations within each site.

Example: You’re researching “stress in university life” and interviewing students in both Germany and the USA. To highlight differences and similarities between these countries, you conduct enough interviews for each group until the central themes in each group start to repeat.

5. Saturation in Salience

With saturation in salience, the focus is on identifying the topics that are most important to participants. This type of saturation often uses a method called “free listing,” where participants list the topics or challenges that matter most to them. Salience saturation is reached when the participants’ lists begin to repeat. Wutich et al. (2024) suggest that 10 detailed free lists are often enough.

Example: If you ask students to list the biggest challenges in university life, and after about 10 lists, no new topics emerge, you’ve reached saturation in salience. This method is especially useful for quickly identifying the central issues that are most relevant to your participants.


Applying the Flowchart Step by Step

Now that you’re familiar with the five types of saturation, here’s a quick guide to using the flowchart to determine the number of interviews for your study:

  1. Define Your Research Goal
    Decide whether you want an overview of a topic or deeper insights and connections, such as developing your own theory or model.
  2. Choose the Right Type of Saturation
    Select the type of saturation that aligns with your goal—for example, theme saturation for a broad overview or theoretical saturation for theory development.
  3. Set an Initial Number of Interviews
    Start with the recommendations from Wutich et al., such as nine interviews for theme saturation or 20–30 for theoretical saturation.
  4. Analyze and Adjust
    Analyze your data and check whether saturation has been reached. If new themes or meanings emerge, conduct additional interviews as needed.
  5. Draw Conclusions
    Once saturation is reached and no new insights are uncovered, you’ve identified the right number of interviews for your study.

Practical Tips for Deciding on the Number of Interviews

While the flowchart provides a solid framework, practical factors also come into play. For example, the limited time available to complete your thesis. Here are some tips for efficiently implementing the recommendations:

  • Stay Flexible: Qualitative research is dynamic. You may need to adjust the number of interviews during data collection—whether because new themes emerge or many themes begin to repeat. Start with an approximate number and adapt as needed.
  • Use Pilot Interviews: Pilot interviews are a great way to get an initial impression and test your questions. They also help you estimate how many interviews you’ll need to cover all the relevant themes.
  • Plan Time and Resources: Conducting and analyzing interviews is time-consuming. Consider how many interviews you can realistically handle without compromising the quality of your work.
  • Focus on Data Quality: A thorough analysis of fewer interviews can often be more valuable than a superficial analysis of many.

Source: Wutich, A., Beresford, M., & Bernard, H. R. (2024). Sample Sizes for 10 Types of Qualitative Data Analysis: An Integrative Review, Empirical Guidance, and Next Steps. International Journal of Qualitative Methods, 23, 1-14.


How to Identify High-Quality Academic Papers


Ever experienced this? You cite a source that seems reliable, and your professor suddenly questions whether it’s even academic. Or the source is academic, but you’re unsure whether you should cite a paper from the “32nd Gordon Conference on Cannabinoid Function in the Brain” or not.

Either way, finding good academic papers is essential.

In this article, I’ll reveal 5 indicators that will help you distinguish good scientific sources from less reliable ones.

Why is evaluating scientific sources so important?

Here’s an important point that many often underestimate: the quality of your sources directly influences the credibility of your arguments.

Science relies on building new knowledge on solid, verifiable information. Sources that meet scientific standards, such as undergoing a rigorous peer-review process, are essential for creating a stable foundation for your academic work.

Without reliable sources, you risk basing your arguments on uncertain or outdated information, which diminishes the perceived quality of your own research.

Choosing the right publication outlet

In most research disciplines, three types of publications have become the standard:

  • Books
  • Journals
  • Conference papers

Choosing the right outlet can save you valuable time and effort. If you know that a journal, a conference, or a book publisher has a good reputation, you can be fairly confident that any source published there is likely reliable. At the end of this article, I’ll explain how to differentiate between a decent, average journal and a top-tier journal—so stay tuned.

Now let’s first look at the 5 general indicators.


1. The Peer-Review Process

A strong quality marker of a good scientific source is that it has undergone a peer-review process. Peer review means that a group of (usually anonymous) experts in the field has reviewed and evaluated the work before publication. They ensure that the methodology is solid, the arguments are convincing, and the results contribute to existing research. Only papers that pass this peer-review process are published.

Unfortunately, it’s not always easy to determine whether the peer-review process for a journal or conference is robust. If you see details in articles such as submission dates, how many revisions were made, and the names of the editors, that’s already a good sign.

Peer review takes time. If a journal article was published only three months after it was submitted, that’s a sign the peer-review process may not have been very thorough.

If you come across platforms like arXiv or SSRN, be aware that articles listed there have not yet undergone a peer-review process. These are called “preprints.” Preprints have the advantage of sharing the latest research with the world quickly, but they may still contain errors. So, be cautious about citing preprints.

It’s best to combine multiple indicators. Let’s take a look at a few more.

2. The Number of Citations for an Article

The citation count shows how often other researchers have used an article as a source. A high number of citations indicates that the article is considered important or groundbreaking in its field. If there are hundreds of citations, that’s already a strong signal.

However, it’s worth taking a closer look: an article isn’t always widely cited because of its quality. Some articles are cited because they’re groundbreaking, while others may be cited because they’re controversial or even flawed. Therefore, you should always view the citation count in context and distinguish good from bad citations.

Platforms like Google Scholar or Scopus can provide insights into an article’s citations. For example, if the articles citing your original article are also frequently and positively cited, that’s a good sign. This technique is also known as “citation chaining.”

However, there are also journals and publishers that exploit this and play a dirty game with citations. They attempt to artificially generate citations to boost their journal’s reputation. You can gauge how often a journal is cited by looking at its Impact Factor.

3. Impact Factor

The Impact Factor shows how often the articles in a journal are cited on average, typically over a two-year window. A high value indicates that the research published there receives a lot of attention and is considered relevant. It is calculated by dividing the citations a journal receives in a given year to the articles it published in the previous two years by the number of articles it published in those two years. For example, an Impact Factor of 5 means that, on average, each article from the past two years was cited five times in that year.
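
To make the arithmetic concrete, here is a minimal sketch in Python. All numbers are invented purely for illustration and do not describe any real journal:

```python
# Invented numbers to illustrate the Impact Factor arithmetic described above.
citations_2024_to_2022_2023_articles = 1500  # citations received in 2024 by articles published in 2022-2023
articles_published_2022_2023 = 300           # articles the journal published in 2022-2023

impact_factor = citations_2024_to_2022_2023_articles / articles_published_2022_2023
print(f"Impact Factor: {impact_factor:.1f}")  # prints: Impact Factor: 5.0
```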

In dynamic fields like medicine or natural sciences, a high Impact Factor is often considered a sign of quality and influence. However, the Impact Factor has its limitations. In specialized fields where fewer articles are published and cited, the value is often lower, even if the journal’s research is high-quality.

There are also cases where the Impact Factor is artificially inflated through “citation cartels.” In these cases, researchers frequently cite each other’s work within the same journal to boost its Impact Factor. The open-access publisher MDPI, for instance, has been criticized for high citation rates driven by internal citations. If you encounter unusually high citation counts in such a journal, it’s worth taking a closer look at the citation sources.

4. Timeliness of a Source

The timeliness of your sources is critical, especially in rapidly evolving fields like computer science. New findings and technologies can quickly render older studies less relevant. To ensure your sources are up-to-date, aim to use materials that are no more than 3-5 years old when presenting the current state of research. Of course, when addressing groundbreaking studies or theories, older sources are indispensable.

Using outdated sources in your introduction or current state of research not only weakens your arguments but can also lead to relying on obsolete approaches. This is especially important for empirical studies: an experiment conducted in the 1980s might yield entirely different results if reevaluated with modern methods. Current literature reviews and systematic reviews provide a comprehensive overview of the state of research and help you weed out outdated sources.

5. Open Access and Paid Access

Scientific articles are not always freely accessible. There are two main ways to publish academic articles:

  • Open Access: These articles are freely accessible and cost-free, often available through platforms like PubMed, DOAJ, or directly on journal websites. The advantage is that they’re immediately and freely available—ideal for students and researchers without access to expensive databases. Many universities are increasingly promoting Open Access to facilitate access to research.
  • Paid Access: Articles in high-ranking journals, particularly Q1 journals, often require payment. These articles are usually behind a paywall and may require a per-article fee or access via a university subscription. Many institutions provide access to such articles through platforms like ScienceDirect (Elsevier), SpringerLink, or JSTOR, allowing students to read them at no additional cost.

Bonus: Journal Rankings

The simplest cross-disciplinary ranking system is the quartile classification (Q1 to Q4). It helps you compare journals within their respective fields: Q1 journals (the top 25%) are among the most cited and respected publications.

In each discipline, there are also specialized journal rankings to guide you. Let’s take business administration as an example. Here, you’ll find specific rankings that help you identify high-quality journals and assess their reputation. These rankings are invaluable when it comes to finding good academic papers.

  • FT50 – Financial Times 50 Ranking: This internationally recognized ranking is used by many MBA programs to assess the quality of research in business-related fields. The journals listed here are the best across all subfields of business administration, from marketing and management to human resources.
  • UT Dallas List: This list is even stricter, including only 24 of the world’s leading business journals. Journals on this list place the highest value on academic quality and scientific rigor. Citing articles from these journals demonstrates that you’ve engaged with the very best academic literature.

The 10 Best Books on Artificial Intelligence (AI)

Artificial Intelligence is no longer just science fiction. It’s already part of our daily lives and will reshape the world even more in the coming years. Whether it’s jobs, education, or ethical questions, the books on this list will help you understand this transformative technology. And if you want to help shape the future, you can’t afford to ignore it.

But don’t worry: you don’t need to be a computer scientist to understand how AI works and what opportunities and risks it presents.

In this article, I’ll introduce you to 10 books that not only explain the technological foundations of AI but also explore how this technology could impact our lives in the years and decades to come. By the end, you’ll be well-prepared for any conversation about AI.

1. A Brief History of Intelligence: Why the Evolution of the Brain Holds the Key to the Future of AI by Max Solomon Bennett

In A Brief History of Intelligence, Max Solomon Bennett takes you on a journey through the evolution of the human brain, explaining its connection to artificial intelligence. The book starts with the origins of our cognitive abilities, from primitive nervous systems to complex human thinking.

Bennett shows that developing AI isn’t about replicating the human brain but drawing inspiration from evolution. You’ll learn why the brain is so efficient and how AI can learn from it. At the same time, the book raises the question of what makes us uniquely human and whether AI will ever truly match us.

2. Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

In Life 3.0, Max Tegmark explores a future where AI doesn’t just create tools but thinks and acts independently. The title refers to three evolutionary stages of life:

  • Life 1.0 is biological, like bacteria adapting to their environment.
  • Life 2.0 is us, humans, who evolve culture and knowledge.
  • Life 3.0 describes beings or machines capable of designing their own hardware and software.

Tegmark focuses on the societal, ethical, and political challenges of this third stage. What happens when AI surpasses human intelligence? Who controls these technologies, and how can we ensure they act for humanity’s benefit?

The book is unique in that Tegmark doesn’t spread fear but develops concrete scenarios. Imagine AI not only boosting productivity but also solving global problems like climate change. At the same time, he warns that the same technology could become dangerous in the wrong hands.

To be honest, Tegmark’s book sketches a pretty far-out future, imagining AI hundreds of years ahead. Still, it’s a fascinating thought experiment.

3. Klara and the Sun by Kazuo Ishiguro

Kazuo Ishiguro’s novel Klara and the Sun tells the story of Klara, a robot companion for a young girl. But Klara is more than just a robot. She observes, learns, and develops a surprising understanding of the people around her. At the same time, the question always lingers: Is this genuine empathy, or just a perfect simulation?

Through Klara’s eyes, we experience a world where the boundaries between humans and machines blur. The book’s strength lies in the questions it raises: Can machines truly develop empathy? What separates a robot like Klara from a human? And what happens when people form emotional bonds with machines?

Ishiguro avoids technical explanations, focusing instead on the intimate questions of human-AI interaction.

4. Nexus by Yuval Noah Harari

Yuval Noah Harari is known for tackling the big questions of our time, and Nexus is no exception. In this book, he explores how information networks have shaped societies from the Stone Age to the modern era. Harari examines the social, political, and ethical challenges posed by technologies like AI.

A central theme of the book is the power of big data and how it could change our understanding of freedom. Harari describes how AI can predict—and potentially manipulate—our decisions. He asks: What remains of human autonomy when machines understand us better than we understand ourselves?

Nexus is not a technical book but a historical-philosophical look at the challenges and opportunities awaiting us in an AI-driven world. If you liked Harari’s previous books, you won’t be disappointed by this one.

5. The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI by Fei-Fei Li

In her autobiography, Fei-Fei Li shares her inspiring journey. Growing up in China, she moved to the U.S. as a teenager and climbed to the top of a male-dominated scientific field.

Her unique perspective as a woman from China makes this book stand out. Her work on the ImageNet project laid the foundation for many AI applications, from facial recognition to autonomous vehicles. But her book goes far beyond technical achievements.

Li addresses big questions: How can we ensure AI is not just efficient but also ethical? And why is it so important for people from diverse backgrounds to shape this technology?

6. The Sciences of the Artificial by Herbert Simon

Herbert Simon, a pioneer of artificial intelligence, offers a classic exploration of human-made artifacts in The Sciences of the Artificial. He explains that everything humans create—from simple tools to modern software—is designed to solve specific problems.

Simon introduces the concept of “bounded rationality,” describing how our decisions are often limited by incomplete information and resources. AI, he argues, can help overcome these limitations and enable better solutions.

7. Deep Utopia: Life and Meaning in a Solved World by Nick Bostrom

In Deep Utopia, Nick Bostrom asks: What happens when humanity’s biggest problems are solved? Imagine a world without climate change, disease, or poverty. Sounds perfect, right? But Bostrom shows that even a utopian world raises new questions: Where do we find meaning when all major challenges are gone?

The book combines philosophy and technology, encouraging us to think about the long-term consequences of AI—not just what it can solve but the new dilemmas it might create.

8. Co-Intelligence: Living and Working with AI by Ethan Mollick

Ethan Mollick explores how humans and AI can collaborate successfully—not as competitors but as partners. The book highlights how AI is revolutionizing work, from data analysis to creative processes.

What makes this book stand out are its practical examples. You’ll learn how AI tools can improve your workflows, whether it’s project management, data analysis, or creative tasks. At the same time, Mollick cautions against seeing AI as a cure-all, emphasizing the importance of critical thinking and human creativity.

9. AI 2041: Ten Visions for Our Future by Kai-Fu Lee and Chen Qiufan

This book combines science and fiction to illustrate how AI could transform our world. The authors present ten future scenarios based on real technological developments.

What makes AI 2041 special is its blend of well-researched facts and imaginative storytelling. Each story is paired with an analysis explaining how realistic the scenario is and the technologies that could make it possible.

10. Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World by Mo Gawdat

Former Google X executive Mo Gawdat argues in Scary Smart that AI development is not just a technical issue but a human responsibility. He explains how AI is evolving rapidly and autonomously—a development with great potential but also significant risks.

What sets this book apart is Gawdat’s focus on morality. He emphasizes that AI reflects the values we instill in it, making it our responsibility to define those values consciously. Scary Smart is not a technical manual but a call to take responsibility and actively shape the future of AI development.





Inductive Coding in Qualitative Research (Full Tutorial)

Have you chosen a qualitative method for your research and now face the challenge of creating your first codes, categories, or themes through inductive coding?

And what does that even mean?

In this article, I will walk you through the entire process of inductive coding using a step-by-step example.

At the end of this tutorial, you will have everything you need to start coding your own qualitative data.

Inductive Coding in Qualitative Research

Inductive coding is a specific technique in qualitative research. Whether you follow the recommendations of thematic analysis, content analysis, or grounded theory, all these approaches involve some form of inductive coding.

However, if you read a methods book for the first time, you might be confused about how to actually do it.

So, let’s do it together.

The Process of Inductive Coding (Example)

For our example, let’s assume you are working on a thesis about “Collaboration Using Virtual Reality in the Workplace.” During the COVID-19 pandemic, a company sent VR headsets to 10 employees and held weekly team meetings in a VR app.

You are now accompanying this study by interviewing the 10 employees about their experiences as part of your thesis.

It is important to lay the right foundation for analyzing your interviews before you even conduct them.

This means that you have a broad research objective or a more concrete research question in mind before you start your interviews.

The good thing about qualitative research is that it’s often very exploratory, looking at new and emerging topics and phenomena. This fits well with an inductive analysis, which means you don’t start with a strong theoretical framing to guide your analysis.

For mainly inductive qualitative research, you therefore need a slightly broader research question and can start without a specific theory in mind! For your interview questions, this means that they are very open, and you lead the interview to where it gets interesting, rather than structuring your questions strictly according to a theory you read about.

A suitable overarching question for our example would be: “How do knowledge workers collaborate at the team level when using a virtual reality application?”

You can get more specific if you think this question has already been addressed multiple times in previous research, but for simplicity, we’ll stick with this research question for the rest of this article.

In your literature review, you aim to become an expert in this area and check if you find helpful papers that you could build on to solve a more narrow problem that previous research has not tackled.

Deductive Approach

A deductive approach would look quite different. Suppose the company’s employees work with heavy machinery and already need to concentrate a lot. Here, you could use a theory like the “Cognitive Load Theory” to design your interview questions and guide your analysis. The theory provides specific dimensions to structure your study. These are, if you will, pre-made categories into which you sort your data, i.e., the interview quotes.

Your interview data analysis then follows a deductive approach, based on the predetermined theoretical framework.

But now let’s see how we can create codes from scratch, in a bottom-up, inductive fashion.

Inductive Coding

Inductive coding means that your codes emerge (inductively) from the material itself.

Codes are just labels that summarize a bunch of similar data. So if 3 of the employees talk about a similar issue they encountered, you give these parts in your interview transcript the same code, like “being overwhelmed by the functionalities of the virtual meeting room”.

The goal is to reduce or summarize all your material, in our example, all 10 interviews, to the essentials.

This means that you want to end up with a list of codes that are representative of your entire dataset in relation to your research objective. If someone looks at that list, they know exactly what the interviewees experienced when collaborating in VR.

This also means that if something people said is not relevant to team collaboration in VR, you don’t need to code it.

To make it a little easier, you can follow these 5 steps to build your first inductive codes. (A small illustrative sketch follows the list.)

5 Steps of Inductive Coding

  1. Determine the Unit of Analysis: In our example, this would be each complete statement of an employee about their VR collaboration experience.
  2. Paraphrase the Statements: This means cleaning up the statements from unnecessary details and writing them down clearly.
    • In our example, it could look like this: From “I often had problems with dizziness during fast movements in our VR meeting,” it becomes “dizziness during fast movements.”
  3. Set the Level of Abstraction: Be aware of how far you need to move from your material to a code, which may consist of only two or three words. It usually makes sense to perform two so-called reductions, for example, from a whole paraphrased sentence to a shorter code. The level of abstraction is then raised later in your analysis: after you have a list of maybe 50 initial codes, you can summarize them further and make them more abstract, ending up with 6 or 7 categories or themes that are more abstract than your initial codes. How this abstraction works depends on the approach you use. While the first step, creating the initial list of codes, is pretty similar in all qualitative methods that involve inductive coding, the steps that follow can be quite different. Please watch my method-specific tutorials on thematic analysis, grounded theory, and so on, if you want to learn more.
  4. Summarize the Statements into Codes: In inductive coding, it’s important to go through the statements one by one and assign each one to a code. If the next statement is “I had some difficulties when I was trying to take notes with the VR controller”, you check whether it fits into the existing code “dizziness during fast movements.” If not, you create a new one, like “difficulties with handling the hardware.”
  5. Review: Your list of codes gradually forms. At first, it makes sense to create more different codes rather than fewer. If you find your list contains 57 codes and many are similar, you can perform another summarization step and just merge those that are very similar. Reviewing means going back to the original material and comparing it with your list of codes. Does the list of codes appropriately reflect what the employees said?
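
If it helps to see the bookkeeping behind these steps, here is a minimal, purely illustrative sketch in Python. It uses the invented VR-interview quotes from this article and is not part of the method itself; a spreadsheet or dedicated QDA software (such as MAXQDA or ATLAS.ti) does the same job:

```python
# Purely illustrative: tracking statements, paraphrases, and inductive codes.
statements = [
    "I often had problems with dizziness during fast movements in our VR meeting",
    "I had some difficulties when I was trying to take notes with the VR controller",
]

# Step 2: paraphrase each statement, stripped of unnecessary detail
paraphrases = {
    statements[0]: "dizziness during fast movements",
    statements[1]: "difficulties taking notes with the controller",
}

# Step 4: assign each statement to a code; create a new code when none fits
codes = {
    "dizziness during fast movements": [statements[0]],
    "difficulties with handling the hardware": [statements[1]],
}

# Step 5: review, comparing the code list against the original material
for code, quotes in codes.items():
    print(code)
    for quote in quotes:
        print("  paraphrase:", paraphrases[quote])
        print("  original:  ", quote)
```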

Common Pitfalls in Inductive Coding

I often observe that the guidance in methods books, especially on inductive coding, is taken too dogmatically. Students often fear that deviating from the guidelines could be “wrong.”

This diligence is commendable, but if you reach a point with your data where the next step a methods book suggests doesn’t work for you, it’s up to you as a researcher to make an independent methodological decision, do it differently, and justify it in your methods section.

You can and should deviate from the plan if necessary. Qualitative methods are not standardized instruments that always look the same. They must be adapted to the specific material and tailored to the specific research question.

As long as you proceed systematically, justify your decisions, and describe them precisely, everything will be fine.


How Does an AI Detector Work and Can You Outsmart It?

Did you use a little AI help in writing your academic paper? Watch out, because an AI detector could flag your work.

More and more universities are using AI detectors to find out whether you’ve secretly involved AI tools like ChatGPT, JenniAI, or Paperpal in your writing process.

Sometimes, though, these detectors flag your work even if you didn’t use any AI at all!

No need to panic just yet or to fear handing in your paper. In this article, I’ll show you 7 secret tips to help you avoid triggering an AI detector and make your texts look more human.

What is an AI Detector and How Does it Work?

More and more students are using AI tools like ChatGPT to help them with their writing.

Sure, universities could just stop using papers as a form of assessment. But since universities aren’t too keen on changing their exam formats, and papers are actually useful for learning how to do academic work, another solution is needed.

Enter AI detectors, which try to figure out whether parts of your paper were written by an AI or if you toiled over them yourself. Naturally, universities are also jumping on this bandwagon and using these tools to assess academic work.

These detectors use algorithms to check if your paper shows patterns typical of AI-generated texts.

AI models tend to use certain sentence structures and phrases that people wouldn’t normally use. Detectors also look at flow and logic: AI outputs are often just too perfectly structured, and everything feels a bit “too smooth.”
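
To make that intuition tangible, here is a deliberately crude sketch in Python. It is not how Turnitin or any real detector works; it only measures how much sentence length varies, as a rough stand-in for the “too smooth” impression described above:

```python
# Toy illustration only: real detectors rely on trained language models.
# Low variation in sentence length stands in for the "too smooth" feel of AI text.
import re
import statistics

def sentence_length_variance(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pvariance(lengths) if len(lengths) > 1 else 0.0

human_like = "I tried it. Honestly? It broke twice, then somehow worked for three whole days."
very_smooth = "The system performs well. The results are consistent. The method is effective."

print(sentence_length_variance(human_like))   # higher value: varied, "bursty" sentences
print(sentence_length_variance(very_smooth))  # lower value: uniform, machine-like rhythm
```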

Let’s say your text stands out in one of these areas. Then the AI detector goes off. And that could make your supervisor suspicious.

At my university, for example, AI detectors aren’t allowed to be used as “proof” of an attempt to cheat.

Still, every submission goes through Turnitin, a plagiarism detection software that now also includes an AI detector. As a supervisor, I then get a score that indicates how likely it is that AI was used. What I do with that information is up to me.

Unfortunately, some detectors flag texts even if no AI was used.

So, it’d be helpful to know how to make sure your academic work doesn’t even raise suspicion in the first place.

And here’s how you can do that.

7 Secret Tips Against AI Detectors

#1 No Copy & Paste

Sounds obvious, but trust me—I’ve seen it all.

Don’t just copy texts directly from an AI tool and paste them into your work!

Sure, it’s super tempting to use copy & paste and have a chapter written in seconds.

Don’t do it.

What universities will do more and more is simply ask you about a specific part of your paper. If you don’t know the sources you’ve cited or can’t answer a simple question about “your” text, your grade will tank fast.

So feel free to use AI to get creative and generate ideas, but always rewrite the text yourself. What would be great, too, is paraphrasing your SELF-written text with AI to improve grammar and sentence structure.

That brings us to tip number two.

#2 Use Synonyms and Change Sentence Structure

Let’s say you’ve hit a writing block and just can’t move forward. So you let AI inspire you.

It happens to the best of us. Oops.

To avoid having AI-generated text sneak into your paper, you should at least change the sentence structure and use synonyms.

AI detectors can recognize sentence patterns commonly found in AI-generated texts. So, if you completely restructure your sentences, you make it harder for the AI detector to identify these patterns.

You can practice this by creating multiple variations of a sentence. Practice makes perfect here too. At first, it might be difficult, but eventually, you’ll be able to rewrite sentences quickly and easily.

After rewriting several sentences, it’s worth reading through the text to make sure everything feels coherent.

#3 Make Your Text More Human

As mentioned before, AI-generated texts often sound too good to be true. This is largely because AI uses very formal and precise language. That’s why AI detectors flag texts that are too smooth and flawless. Avoid this by trying to use a more human tone and vocabulary.

The beauty of academic writing doesn’t come from perfection but from originality. An AI can’t achieve that because it always uses the word that’s most likely to fit next.

Riemer and Peter (2024) from the University of Sydney call them “style engines.” This means generative AI is very good at mimicking a style—and that’s exactly the problem. True originality can’t come from that.

By incorporating the unpredictable into your text, you make it original and prevent an AI detector from being triggered.

#4 Keep “Higher-Level Thinking” to Yourself

AI tools often cram a lot of facts into a short section and sometimes sound generic because of it. This is another reason why AI detectors flag texts.

So avoid overloading your text with facts and raw information. Instead of just listing the bare facts, keep it brief. An academic paper isn’t a Wikipedia article; it’s an argument that unfolds gradually.

For example, you could include a theoretical perspective to look at a topic from a new angle.

An AI would only come up with that if you fed it the idea. So as long as higher-level thinking remains your responsibility, you’re on safe ground.

Let’s say you’re writing a paper on digital transformation in the service sector.

You could just describe the topic and the related literature. An AI could do that too—so don’t expect a top grade here.

But if you come up with the original idea to analyze your topic through the lens of French pragmatism, like Boltanski and Thévenot (1991), then you’re about to create an original piece of work.

Meanwhile, your classmates might use ChatGPT to churn out a paper in 30 minutes and spend the rest of the day watching Netflix.

But who do you think will know more after graduation?

If you dig into Boltanski and Thévenot (1991) and use your paper as a chance to grow intellectually, you’ve already won.

It’s about resisting the quick AI solution and investing in work that truly helps you move forward.

#5 Avoid Low-Quality Sources

Sure, you can use ChatGPT or other AI tools for research. For example, the tool Consensus is super helpful for finding suitable sources.

However, you shouldn’t just blindly trust the information. AI tools often give useful summaries and explanations, but they don’t rely on primary scientific sources. To ensure the facts are correct, cross-check the AI’s info with other sources. Use reliable sources like books, scholarly articles, or databases.

At the same time, AI might give you a source that actually exists but is from an MDPI-published journal. These are often poorly peer-reviewed and therefore highly questionable.

I would never cite such an article, and I’d grade a paper relying on such sources more harshly.

For you, this means you need to develop the ability to differentiate between good and bad sources. AI can’t do this—yet—and it’s a risk for the quality of your academic work!

#6 Know the Difference Between Support and Plagiarism

In my opinion, AI is here to stay. Learning to use tools like ChatGPT properly will be an essential skill in the future job market. That’s why I don’t think you should avoid using AI entirely in your studies.

Instead, you should start using these tools right now—just in a smart way.

Many universities agree and allow AI use, but you must be transparent about how and to what extent you used it.

It’s perfectly fine to use AI tools as a support—even for academic writing. See AI as your creative assistant, helping you develop your ideas and structure your thoughts—not as a tool that writes your entire paper for you.

I’ve already made a detailed video on AI and plagiarism, which you can find linked here.

However, AI detectors work differently than plagiarism scanners. If you use AI to paraphrase, the plagiarism scanner won’t go off, but the AI detector likely will.

So, if you want to use AI for paraphrasing or spell check, just get your supervisor’s approval. Then, write a statement disclosing this in your affidavit at the end of your paper, and you won’t have to worry about AI detectors again.

Of course, this only works if your university’s exam regulations don’t explicitly prohibit AI use. So check your university’s current AI policy beforehand.

#7 Use an AI Detector Yourself

A final tip: Before submitting your paper, run it through various AI detectors or plagiarism scanners. There are several online tools now that can detect if your text might be flagged as AI-generated.

You can test an AI detector yourself and play around with it.

If you want to try it out, for example, you can use Quillbot’s free AI detector: https://quillbot.com/ai-content-detector.

Test your own text, the AI-generated text, and something in between. You’ll be able to spot patterns and see how changes affect the score.

This knowledge will help you when writing your academic paper and applying the previous 6 tips!

Conclusion

AI detectors have become really good at spotting patterns in AI-generated texts. But they’re not infallible.

You could call this an “arms race”: AI detectors and AI tools constantly push each other forward, with one always trying to stay a step ahead of the other.

This is why no student will fail an exam solely because of an AI detector. Sure, plagiarism can be definitively proven, as that’s relatively easy to verify.

That said, this doesn’t apply to AI-generated content. There will always be some doubt. Someone could simply have a writing style that’s a lot like a generative AI tool’s. There’s no surefire way to prove a text was generated by AI.

But I can’t stress this enough: if you use AI, don’t shut off your own thinking.

Instead, think of AI as a tool that makes things easier, giving you more space for genuine creative thought. But you really have to use that space—and not waste the time you save on something else. Only then will AI help you study more effectively than people could 10 years ago, allowing you to produce truly original work.