
How Many Interviews Do I Need for My Thesis?

You’re in the early stages of your thesis and have decided to conduct interviews to gather empirical data. But now comes the big question: how many interviews do you actually need? Five? Ten? Fifty?

This is one of the most common questions I get asked, and the answer is—it depends.

Don’t worry, though. In this article, I’ll walk you through how to determine the optimal number of interviews for your study.

Why Isn’t There a Single Correct Answer?

You might have heard the phrase, “There are no fixed rules in qualitative research.” But what does that really mean? Unlike quantitative research, where sample size is often determined using statistical calculations, qualitative research is more flexible. Each qualitative study has different goals and uses different methods. This variability means there’s no universal number of interviews that’s always right—just guidelines and recommendations.

Luckily, Wutich and colleagues (2024) tackled this exact question in their paper. They developed a step-by-step flowchart to help you figure out the right number of interviews for your study.

According to the authors, the number of interviews largely depends on your research goals and methods. So, the first step is to clearly define what you want to achieve with your study and what kind of insights you aim to uncover. The appropriate number of interviews will then be guided by your research goals and how deeply you want to dive into the topic.

Their paper introduces several recommendations to help you narrow down the number of interviews without fixating on a rigid number. One central concept is saturation—the point at which additional interviews no longer provide new information.


The Five Key Approaches to Determining the Number of Interviews

The flowchart begins with a fundamental question: What is your research goal? Depending on whether you aim for a broad overview or an in-depth analysis, you’ll need a different amount of data.

1. Theme (Data) Saturation

If your goal is to gain a general overview of the main themes in your research area, you should aim for theme saturation. This occurs when no new themes emerge, and you’ve identified all the key aspects of your research topic. Wutich et al. (2024) recommend about nine interviews or four focus groups for this type of saturation. Theme saturation is ideal for studies designed to provide an overview of central themes, such as identifying the main stress factors among students.

Example: Imagine you’re exploring the topic of “stress in university life” and asking students what they find stressful. If, after several interviews, responses like “exam pressure” and “time constraints” keep repeating without any new factors emerging, you’ve reached theme saturation.
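If you want to make your judgment about saturation transparent, it can help to track, interview by interview, how many new themes each transcript adds. Here is a minimal, hypothetical Python sketch of that bookkeeping; the interviews and theme labels are invented for illustration and this is a simplified aid, not a substitute for the guidance in Wutich et al. (2024):

```python
# Minimal sketch: count how many new themes each additional interview contributes.
# The themes per interview are made up; in practice they come from your own coding.
coded_interviews = [
    {"exam pressure", "time constraints"},                     # Interview 1
    {"exam pressure", "financial worries"},                    # Interview 2
    {"time constraints", "exam pressure"},                     # Interview 3
    {"exam pressure", "time constraints", "living situation"}, # Interview 4
    {"exam pressure", "time constraints"},                     # Interview 5
]

seen_themes = set()
for i, themes in enumerate(coded_interviews, start=1):
    new_themes = themes - seen_themes
    seen_themes |= themes
    print(f"Interview {i}: {len(new_themes)} new theme(s), {len(seen_themes)} total")

# When several consecutive interviews add zero new themes, that is one
# (simplified) indicator that theme saturation may have been reached.
```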

2. Meaning Saturation

For studies aiming to capture not just themes but also the interpretations and meanings of these themes from the participants’ perspectives, meaning saturation is the focus. This type of saturation digs deeper into the details associated with a theme. According to Wutich et al. (2024), meaning saturation usually requires about 24 interviews or eight focus groups.

Example: You’re studying how students experience exam stress. Instead of just identifying stress factors, you aim to understand how they perceive this stress. For some, it might stem from perfectionism, while for others, it’s due to time pressure or lack of support. When you’ve captured all these perspectives and no new interpretations arise, you’ve reached meaning saturation.

3. Theoretical Saturation

This approach is common in Grounded Theory, where the goal is to develop a theory that provides new insights into a phenomenon. Theoretical saturation involves understanding patterns and connections between different themes and building a theoretical foundation. According to Wutich et al. (2024), achieving theoretical saturation typically requires 20–30 interviews or more, depending on the complexity of the theory being developed.

Example: Suppose you’re developing a process theory on stress management in university life, exploring how various strategies interact over time. To create a comprehensive theory, you need detailed data covering multiple perspectives and connections. Theoretical saturation is achieved when additional interviews no longer refine or improve your theory.

Once you reach this point, you can stop collecting data—whether it’s at 23, 35, or 42 interviews. What matters is the outcome, not the exact number of interviews.

4. Metatheme Saturation

The meta-theme analysis method was originally developed to study cultural differences in language. Over time, it evolved into a mixed-methods approach that identifies overarching themes from qualitative data. This method combines qualitative data with quantitative analyses of word meanings or codes.

In recent research, meta-theme analysis has shifted towards qualitative applications, focusing on identifying and comparing shared themes across datasets collected in different locations or cultures. Typically, 20–40 interviews per site are needed to develop a solid list of main themes and identify common variations within each site.

Example: You’re researching “stress in university life” and interviewing students in both Germany and the USA. To highlight differences and similarities between these countries, you conduct enough interviews for each group until the central themes in each group start to repeat.

5. Saturation in Salience

With saturation in salience, the focus is on identifying the topics that are most important to participants. This type of saturation often uses a method called “free listing,” where participants list the topics or challenges that matter most to them. Salience saturation is reached when the participants’ lists begin to repeat. Wutich et al. (2024) suggest that 10 detailed free lists are often enough.

Example: If you ask students to list the biggest challenges in university life, and after about 10 lists, no new topics emerge, you’ve reached saturation in salience. This method is especially useful for quickly identifying the central issues that are most relevant to your participants.


Applying the Flowchart Step by Step

Now that you’re familiar with the five types of saturation, here’s a quick guide to using the flowchart to determine the number of interviews for your study:

  1. Define Your Research Goal
    Decide whether you want an overview of a topic or deeper insights and connections, such as developing your own theory or model.
  2. Choose the Right Type of Saturation
    Select the type of saturation that aligns with your goal—for example, theme saturation for a broad overview or theoretical saturation for theory development.
  3. Set an Initial Number of Interviews
    Start with the recommendations from Wutich et al., such as nine interviews for theme saturation or 20–30 for theoretical saturation.
  4. Analyze and Adjust
    Analyze your data and check whether saturation has been reached. If new themes or meanings emerge, conduct additional interviews as needed.
  5. Draw Conclusions
    Once saturation is reached and no new insights are uncovered, you’ve identified the right number of interviews for your study.
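To keep these starting points in one place, here is a small lookup table as a hypothetical Python sketch; the entries simply restate the recommendations summarized above (Wutich et al., 2024) and are planning guidelines, not hard rules:

```python
# Rough starting points per saturation type, restating the guidance above.
# Treat these as planning defaults that you adjust once you check for saturation.
recommended_starting_points = {
    "theme saturation":       "about 9 interviews or 4 focus groups",
    "meaning saturation":     "about 24 interviews or 8 focus groups",
    "theoretical saturation": "20-30 interviews or more, depending on the theory",
    "metatheme saturation":   "20-40 interviews per site",
    "salience saturation":    "about 10 detailed free lists",
}

research_goal = "theme saturation"  # e.g., a broad overview of main themes
print(research_goal, "->", recommended_starting_points[research_goal])
```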

Practical Tips for Deciding on the Number of Interviews

While the flowchart provides a solid framework, practical factors also come into play. For example, the limited time available to complete your thesis. Here are some tips for efficiently implementing the recommendations:

  • Stay Flexible: Qualitative research is dynamic. You may need to adjust the number of interviews during data collection—whether because new themes emerge or many themes begin to repeat. Start with an approximate number and adapt as needed.
  • Use Pilot Interviews: Pilot interviews are a great way to get an initial impression and test your questions. They also help you estimate how many interviews you’ll need to cover all the relevant themes.
  • Plan Time and Resources: Conducting and analyzing interviews is time-consuming. Consider how many interviews you can realistically handle without compromising the quality of your work.
  • Focus on Data Quality: A thorough analysis of fewer interviews can often be more valuable than a superficial analysis of many.

Source: Wutich, A., Beresford, M., & Bernard, H. R. (2024). Sample Sizes for 10 Types of Qualitative Data Analysis: An Integrative Review, Empirical Guidance, and Next Steps. International Journal of Qualitative Methods, 23, 1-14.


How to Identify High-Quality Academic Papers


Ever experienced this? You cite a source that seems reliable, and your professor suddenly questions whether it’s even academic. Finding good academic papers is essential!

Or it is academic, but you’re unsure whether you should cite a paper from the “32nd Gordon Conference on Cannabinoid Function in the Brain” or not.

In this article, I’ll reveal 5 indicators that will help you distinguish good scientific sources from less reliable ones.

Why is evaluating scientific sources so important?

Here’s an important point that many often underestimate: the quality of your sources directly influences the credibility of your arguments.

Science relies on building new knowledge on solid, verifiable information. Sources that meet scientific standards, such as undergoing a rigorous peer-review process, are essential for creating a stable foundation for your academic work.

Without reliable sources, you risk basing your arguments on uncertain or outdated information, which diminishes the perceived quality of your own research.

Choosing the right publication outlet

In most research disciplines, three types of publications have become the standard:

  • Books
  • Journals
  • Conference papers

Choosing the right outlet can save you valuable time and effort. If you know that a journal, a conference, or a book publisher has a good reputation, you can be fairly confident that any source published there is likely reliable. At the end of this article, I’ll explain how to differentiate between a decent, average journal and a top-tier journal—so stay tuned.

Now let’s first look at the 5 general indicators.


1. The Peer-Review Process

A strong quality marker of a good scientific source is that it has undergone a peer-review process. Peer review means that a group of (usually anonymous) experts in the field has reviewed and evaluated the work before publication. They ensure that the methodology is solid, the arguments are convincing, and the results contribute to existing research. Only papers that pass this peer-review process are published.

Unfortunately, it’s not always easy to determine whether the peer-review process for a journal or conference is robust. If you see details in articles such as submission dates, how many revisions were made, and the names of the editors, that’s already a good sign.

Peer review takes time. If you see a journal article that was submitted three months before publication, that’s a sign the peer-review process may not be very thorough.

If you come across platforms like arXiv or SSRN, be aware that articles listed there have not yet undergone a peer-review process. These are called “preprints.” Preprints have the advantage of sharing the latest research with the world quickly, but they may still contain errors. So, be cautious about citing preprints.

It’s best to combine multiple indicators. Let’s take a look at a few more.


2. The Number of Citations for an Article

The citation count shows how often other researchers have used an article as a source. A high number of citations indicates that the article is considered important or groundbreaking in its field. If there are hundreds of citations, that’s already a strong signal.

However, it’s worth taking a closer look: an article isn’t always widely cited because of its quality. Some articles are cited because they’re groundbreaking, while others may be cited because they’re controversial or even flawed. Therefore, you should always view the citation count in context and distinguish good from bad citations.

Platforms like Google Scholar or Scopus can provide insights into an article’s citations. For example, if the articles citing your original article are also frequently and positively cited, that’s a good sign. This technique is also known as “citation chaining.”

However, there are also journals and publishers that exploit this and play a dirty game with citations. They attempt to artificially generate citations to boost their journal’s reputation. You can gauge how often a journal is cited by looking at its Impact Factor.

3. Impact Factor

The Impact Factor shows how often articles in a journal are cited on average—typically over a two-year window. A high value indicates that the research published there receives a lot of attention and is considered relevant. The Impact Factor is calculated by dividing the number of citations that a journal’s articles from the previous two years received in a given year by the number of articles it published in those two years. For example, a journal with an Impact Factor of 5 means that, on average, each article was cited five times.
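Written out as a simple formula (a simplified version of how the two-year Impact Factor is commonly computed, using 2025 as an example year):

```latex
\mathrm{IF}_{2025} =
  \frac{\text{citations received in 2025 to articles published in 2023--2024}}
       {\text{number of articles published in 2023--2024}}
```

So a journal that published 200 articles over those two years and collected 1,000 citations to them in 2025 would have an Impact Factor of 1,000 / 200 = 5.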

In dynamic fields like medicine or natural sciences, a high Impact Factor is often considered a sign of quality and influence. However, the Impact Factor has its limitations. In specialized fields where fewer articles are published and cited, the value is often lower, even if the journal’s research is high-quality.

There are also cases where the Impact Factor is artificially inflated through “citation cartels.” In these cases, researchers frequently cite each other’s work within the same journal to boost its Impact Factor. The open-access publisher MDPI, for instance, has been criticized for high citation rates driven by internal citations. If you encounter unusually high citation counts in such a journal, it’s worth taking a closer look at the citation sources.

4. Timeliness of a Source

The timeliness of your sources is critical, especially in rapidly evolving fields like computer science. New findings and technologies can quickly render older studies less relevant. To ensure your sources are up-to-date, aim to use materials that are no more than 3-5 years old when presenting the current state of research. Of course, when addressing groundbreaking studies or theories, older sources are indispensable.

Using outdated sources in your introduction or current state of research not only weakens your arguments but can also lead to relying on obsolete approaches. This is especially important for empirical studies: an experiment conducted in the 1980s might yield entirely different results if reevaluated with modern methods. Current literature reviews and systematic reviews provide a comprehensive overview of the state of research and help you weed out outdated sources.

5. Open Access and Paid Access – Finding Good Academic Papers

Scientific articles are not always freely accessible. There are two main ways to publish academic articles:

  • Open Access: These articles are freely accessible and cost-free, often available through platforms like PubMed, DOAJ, or directly on journal websites. The advantage is that they’re immediately and freely available—ideal for students and researchers without access to expensive databases. Many universities are increasingly promoting Open Access to facilitate access to research.
  • Paid Access: Articles in high-ranking journals, particularly Q1 journals, often require payment. These articles are usually behind a paywall and may require a per-article fee or access via a university subscription. Many institutions provide access to such articles through databases like Elsevier, Springer, or JSTOR, allowing students to access them at no additional cost.

Bonus: Journal Rankings

The simplest and cross-disciplinary ranking system for finding good academic papers is the quartile classification (Q1 to Q4). This helps you compare journals within their respective fields. Q1 journals (the top 25%) are among the most cited and respected publications.

In each discipline, there are also specialized journal rankings to guide you. Let’s take business administration as an example. Here, you’ll find specific rankings that help you identify high-quality journals and assess their reputation. These rankings are invaluable when it comes to finding good academic papers.

  • FT50 – Financial Times 50 Ranking: This internationally recognized ranking is used by many MBA programs to assess the quality of research in business-related fields. The journals listed here are the best across all subfields of business administration, from marketing and management to human resources.
  • UT Dallas List: This list is even stricter, including only 24 of the world’s leading business journals. Journals on this list place the highest value on academic quality and scientific rigor. Citing articles from these journals demonstrates that you’ve engaged with the very best academic literature.

The 10 Best Books on Artificial Intelligence (AI)


Artificial Intelligence is no longer just science fiction. It’s already part of our daily lives and will reshape the world even more in the coming years. Whether it’s jobs, education, or ethical questions, the 10 best books on AI below will help you understand this transformative technology—and if you want to help shape the future, you can’t afford to ignore it.

But don’t worry: you don’t need to be a computer scientist to understand how AI works and what opportunities and risks it presents.

In this article, I’ll introduce you to 10 books that not only explain the technological foundations of AI but also explore how this technology could impact our lives in the years and decades to come. By the end, you’ll be well-prepared for any conversation about AI.

1. A Brief History of Intelligence: Why the Evolution of the Brain Holds the Key to the Future of AI by Max Solomon Bennett

In A Brief History of Intelligence, Max Solomon Bennett takes you on a journey through the evolution of the human brain, explaining its connection to artificial intelligence. The book starts with the origins of our cognitive abilities, from primitive nervous systems to complex human thinking.

Bennett shows that developing AI isn’t about replicating the human brain but drawing inspiration from evolution. You’ll learn why the brain is so efficient and how AI can learn from it. At the same time, the book raises the question of what makes us uniquely human and whether AI will ever truly match us.

2. Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

In Life 3.0, Max Tegmark explores a future where AI doesn’t just create tools but thinks and acts independently. The title refers to three evolutionary stages of life:

  • Life 1.0 is biological, like bacteria adapting to their environment.
  • Life 2.0 is us, humans, who evolve culture and knowledge.
  • Life 3.0 describes beings or machines capable of designing their own hardware and software.

Tegmark focuses on the societal, ethical, and political challenges of this third stage. What happens when AI surpasses human intelligence? Who controls these technologies, and how can we ensure they act for humanity’s benefit?

The book is unique in that Tegmark doesn’t spread fear but develops concrete scenarios. Imagine AI not only boosting productivity but also solving global problems like climate change. At the same time, he warns that the same technology could become dangerous in the wrong hands.

To be honest, Tegmark’s book sketches a pretty far-out future, imagining AI hundreds of years ahead. Still, it’s a fascinating thought experiment.


3. Klara and the Sun by Kazuo Ishiguro

Kazuo Ishiguro’s novel Klara and the Sun tells the story of Klara, a robot companion for a young girl. But Klara is more than just a robot. She observes, learns, and develops a surprising understanding of the people around her. At the same time, the question always lingers: Is this genuine empathy, or just a perfect simulation?

Through Klara’s eyes, we experience a world where the boundaries between humans and machines blur. The book’s strength lies in the questions it raises: Can machines truly develop empathy? What separates a robot like Klara from a human? And what happens when people form emotional bonds with machines?

Ishiguro avoids technical explanations, focusing instead on the intimate questions of human-AI interaction.

4. Nexus by Yuval Noah Harari

Yuval Noah Harari is known for tackling the big questions of our time, and Nexus is no exception. In this book, he explores how information networks have shaped societies from the Stone Age to the modern era. Harari examines the social, political, and ethical challenges posed by technologies like AI.

A central theme of the book is the power of big data and how it could change our understanding of freedom. Harari describes how AI can predict—and potentially manipulate—our decisions. He asks: What remains of human autonomy when machines understand us better than we understand ourselves?

Nexus is not a technical book but a historical-philosophical look at the challenges and opportunities awaiting us in an AI-driven world. If you liked Harari’s previous books, you won’t be disappointed by this one.


5. The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI by Fei-Fei Li

In her autobiography, Fei-Fei Li shares her inspiring journey. Growing up in China, she moved to the U.S. as a teenager and climbed to the top of a male-dominated scientific field.

Her unique perspective as a woman from China makes this book stand out. Her work on the ImageNet project laid the foundation for many AI applications, from facial recognition to autonomous vehicles. But her book goes far beyond technical achievements.

Li addresses big questions: How can we ensure AI is not just efficient but also ethical? And why is it so important for people from diverse backgrounds to shape this technology?

6. The Sciences of the Artificial by Herbert Simon

Herbert Simon, a pioneer of artificial intelligence, offers a classic exploration of human-made artifacts in The Sciences of the Artificial. He explains that everything humans create—from simple tools to modern software—is designed to solve specific problems.

Simon introduces the concept of “bounded rationality,” describing how our decisions are often limited by incomplete information and resources. AI, he argues, can help overcome these limitations and enable better solutions.

7. Deep Utopia: Life and Meaning in a Solved World by Nick Bostrom

In Deep Utopia, Nick Bostrom asks: What happens when humanity’s biggest problems are solved? Imagine a world without climate change, disease, or poverty. Sounds perfect, right? But Bostrom shows that even a utopian world raises new questions: Where do we find meaning when all major challenges are gone?

The book combines philosophy and technology, encouraging us to think about the long-term consequences of AI—not just what it can solve but the new dilemmas it might create.

8. Co-Intelligence: Living and Working with AI by Ethan Mollick

Ethan Mollick explores how humans and AI can collaborate successfully—not as competitors but as partners. The book highlights how AI is revolutionizing work, from data analysis to creative processes.

What makes this book stand out as one of the 10 Best Books on AI are its practical examples. You’ll learn how AI tools can improve your workflows, whether it’s project management, data analysis, or creative tasks. At the same time, Mollick cautions against seeing AI as a cure-all, emphasizing the importance of critical thinking and human creativity.


9. AI 2041: Ten Visions for Our Future by Kai-Fu Lee and Chen Qiufan

This book combines science and fiction to illustrate how AI could transform our world. The authors present ten future scenarios based on real technological developments.

What makes AI 2041 special is its blend of well-researched facts and imaginative storytelling. Each story is paired with an analysis explaining how realistic the scenario is and the technologies that could make it possible.

10. Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World by Mo Gawdat

Former Google X executive Mo Gawdat argues in Scary Smart that AI development is not just a technical issue but a human responsibility. He explains how AI is evolving rapidly and autonomously—a development with great potential but also significant risks.

What sets this book apart, earning its place among the 10 Best Books on AI, is Gawdat’s focus on morality. He emphasizes that AI reflects the values we instill in it, making it our responsibility to define those values consciously. Scary Smart is not a technical manual but a call to take responsibility and actively shape the future of AI development.





Inductive Coding in Qualitative Research (Full Tutorial)

Have you chosen a qualitative method for your research and now face the challenge of creating your first codes, categories, or themes through inductive coding?

And what does that even mean?

In this article, I will walk you through the entire process of inductive coding using a step-by-step example.

At the end of this tutorial, you will have everything you need to start coding your own qualitative data.

Inductive Coding in Qualitative Research

Inductive coding is a specific technique in qualitative research. Whether you follow the recommendations of thematic analysis, content analysis, or grounded theory—all these approaches involve some form of inductive coding.

However, if you read a methods book for the first time, you might be confused about how to actually do it.

So, let’s do it together.

The Process of Inductive Coding (Example)

For our example, let’s assume you are working on a thesis about “Collaboration Using Virtual Reality in the Workplace.” During the COVID-19 pandemic, a company sent VR headsets to 10 employees and held weekly team meetings in a VR app.

You are now accompanying this study by interviewing the 10 employees about their experiences as part of your thesis.

It is important to lay the right foundation for analyzing your interviews before you even conduct them.

This means that you have a broad research objective or a more concrete research question in mind before you start your interviews.

The good thing about qualitative research is that it’s often very exploratory, looking at new and emerging topics and phenomena. This fits well with an inductive analysis, which means you do not start from a strong theoretical framing that guides your analysis.

For mainly inductive qualitative research, you therefore need a slightly broader research question and can start without a specific theory in mind! For your interview questions, this means that they are very open, and you lead the interview to where it gets interesting, rather than structuring your questions strictly according to a theory you read about.

A suitable overarching question for our example would be, “How do knowledge workers collaborate on a team-level when using a virtual reality application?”

You can get more specific if you think this question has already been addressed multiple times in previous research, but for simplicity, we’ll stick with this research question for the rest of this article.

In your literature review, you aim to become an expert in this area and check if you find helpful papers that you could build on to solve a more narrow problem that previous research has not tackled.


Deductive Approach

A deductive approach would look quite different. Suppose the company’s employees work with heavy machinery and already need to concentrate a lot. Here, you could use a theory like the “Cognitive Load Theory” to design your interview questions and guide your analysis. The theory provides specific dimensions to structure your study. These are, if you will, pre-made categories into which you sort your data, i.e., the interview quotes.

Your interview data analysis then follows a deductive approach, based on the predetermined theoretical framework.

But now let’s see how we can create codes from scratch, in a bottom-up, inductive fashion.

Inductive Coding

Inductive coding means that your codes emerge (inductively) from the material itself.

Codes are just labels that summarize a bunch of similar data. So if 3 of the employees talk about a similar issue they encountered, you give these parts in your interview transcript the same code, like “being overwhelmed by the functionalities of the virtual meeting room”.

The goal is to reduce or summarize all your material, in our example, all 10 interviews, to the essentials.

This means that you want to end up with a list of codes that are representative of your entire dataset in relation to your research objective. If someone looks at that list, they know exactly what the interviewees experienced when collaborating in VR.

This also means that, if something that people said is not relevant to team collaboration in VR, you don’t need to code it.

To make it a little easier, you can follow these five steps to build your first inductive codes.

5 Steps of Inductive Coding

  1. Determine the Unit of Analysis: In our example, this would be each complete statement of an employee about their VR collaboration experience.
  2. Paraphrase the Statements: This means cleaning up the statements from unnecessary details and writing them down clearly.
    • In our example, it could look like this: From “I often had problems with dizziness during fast movements in our VR meeting,” it becomes “dizziness during fast movements.”
  3. Set the Level of Abstraction: Be aware of how far you need to go from your material to a code, which may consist of only two or three words. It usually makes sense to perform two so-called reductions, for example, from a whole paraphrased sentence to a shorter code. The level of abstraction is then raised later in your analysis. After you have a list of maybe 50 initial codes or so, you can further summarize them and make them more abstract. Then you end up with 6 or 7 categories or themes, which are more abstract than your initial codes. How this abstraction works depends on the approach you use. While the first step, the initial list of codes is pretty similar in all qualitative methods that involve inductive coding, the steps that follow can be quite different. Please watch my method-specific tutorials on thematic analysis, grounded theory and so on, if you want to learn more.
  4. Summarize the Statements into Codes: In inductive coding, it’s important to go through the statements one by one and assign each one to a code. If the next statement is “I had some difficulties when I was trying to take notes with the VR controller,” you check whether it fits into an existing code such as “dizziness during fast movements.” If not, you create a new one, like “difficulties with handling the hardware.”
  5. Review: Your list of codes gradually forms. At first, it makes sense to create more different codes rather than fewer. If you find your list contains 57 codes and many are similar, you can perform another summarization step and just merge those that are very similar. Reviewing means going back to the original material and comparing it with your list of codes. Does the list of codes appropriately reflect what the employees said?
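If you want to keep your coding organized digitally (alongside or instead of a QDA tool), a simple table-like structure is enough. Here is a hypothetical Python sketch of the reduction from quote to paraphrase to code, using the VR example from above; the participant IDs and the grouping step are invented for illustration:

```python
from collections import defaultdict

# Hypothetical coding table: one row per coded statement (the unit of analysis).
# quote -> paraphrase -> code mirrors the reduction steps described above.
coding_table = [
    {
        "participant": "P03",
        "quote": "I often had problems with dizziness during fast movements in our VR meeting.",
        "paraphrase": "dizziness during fast movements",
        "code": "dizziness during fast movements",
    },
    {
        "participant": "P07",
        "quote": "I had some difficulties when I was trying to take notes with the VR controller.",
        "paraphrase": "difficulties taking notes with the controller",
        "code": "difficulties with handling the hardware",
    },
]

# Group statements by code to see which codes are supported by the most data.
statements_per_code = defaultdict(list)
for row in coding_table:
    statements_per_code[row["code"]].append(row["paraphrase"])

for code, paraphrases in statements_per_code.items():
    print(f"{code}: {len(paraphrases)} statement(s)")
```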

Common Pitfalls in Inductive Coding

I often observe that the guidance from methods books, especially on inductive coding, is perceived too dogmatically. Students often fear that deviating from the guidelines could be “wrong”.

This caution is understandable, but if you reach a point with your data where the next step a methods book suggests doesn’t work for you, it’s up to you as a researcher to make an independent methodological decision, do it differently, and justify it in your methods section.

You can and should deviate from the plan if necessary. Qualitative methods are not standard instruments that always look the same. They must be adapted to the specific material and tailored to the specific research question.

As long as you proceed systematically, justify your decisions, and describe them precisely, everything will be fine.


How Does an AI Detector Work and Can You Outsmart It?

Did you use a little AI help in writing your academic paper? Watch out, because an AI detector could flag your work.

More and more universities are using AI detectors to find out whether you’ve secretly involved AI tools like ChatGPT, JenniAI, or Paperpal in your writing process.

Sometimes, though, these detectors flag your work even if you didn’t use any AI at all!

No need to panic or fear handing in your paper just yet. In this article, I’ll show you 7 secret tips to help you avoid triggering an AI detector and make your texts look more human.

What is an AI Detector and How Does it Work?

More and more students are using AI tools like ChatGPT to help them with their writing.

Sure, universities could just stop using papers as a form of assessment. But since universities aren’t too keen on changing their exam formats, and papers are actually useful for learning how to do academic work, another solution is needed.

Enter AI detectors, which try to figure out whether parts of your paper were written by an AI or if you toiled over them yourself. Naturally, universities are also jumping on this bandwagon and using these tools to assess academic work.

These detectors use algorithms to check if your paper shows patterns typical of AI-generated texts.

That’s because AI tools tend to use certain sentence structures and phrases that people wouldn’t normally use. Detectors also check the structure and logic of a text: AI outputs are often just too perfectly structured, and everything feels a bit “too smooth.”

Let’s say your text stands out in one of these areas. Then the AI detector goes off. And that could make your supervisor suspicious.

At my university, for example, AI detectors aren’t allowed to be used as “proof” of an attempt to cheat.

Still, every submission goes through Turnitin, a plagiarism detection software that now also includes an AI detector. As a supervisor, I then get a score that indicates how likely it is that AI was used. What I do with that information is up to me.

Unfortunately, some detectors flag texts even if no AI was used.

So, it’d be helpful to know how to make sure your academic work doesn’t even raise suspicion in the first place.

And here’s how you can do that.

7 Secret Tips Against AI Detectors


#1 No Copy & Paste

Sounds obvious, but trust me—I’ve seen it all.

Don’t just copy texts directly from an AI tool and paste them into your work!

Sure, it’s super tempting to use copy & paste and have a chapter written in seconds.

Don’t do it.

What universities will do more and more is simply ask you about a specific part of your paper. If you don’t know the sources you’ve cited or can’t answer a simple question about “your” text, your grade will tank fast.

So feel free to use AI to get creative and generate ideas, but always rewrite the text yourself. What would be great, too, is paraphrasing your SELF-written text with AI to improve grammar and sentence structure.

That brings us to tip number two.

#2 Use Synonyms and Change Sentence Structure

Let’s say you’ve hit a writing block and just can’t move forward. So you let AI inspire you.

It happens to the best of us. Oops.

To avoid having AI-generated text sneak into your paper, you should at least change the sentence structure and use synonyms.

AI detectors can recognize sentence patterns commonly found in AI-generated texts. So, if you completely restructure your sentences, you make it harder for the AI detector to identify these patterns.

You can practice this by creating multiple variations of a sentence. Practice makes perfect here too. At first, it might be difficult, but eventually, you’ll be able to rewrite sentences quickly and easily.

After rewriting several sentences, it’s worth reading through the text to make sure everything feels coherent.


#3 Make Your Text More Human

As mentioned before, AI-generated texts often sound too good to be true. This is largely because AI uses very formal and precise language. That’s why AI detectors flag texts that are too smooth and flawless. Avoid this by trying to use a more human tone and vocabulary.

The beauty of academic writing doesn’t come from perfection but from originality. An AI can’t achieve that because it always uses the word that’s most likely to fit next.

Riemer and Peter (2024) from the University of Sydney call them “style engines.” This means generative AI is very good at mimicking a style—and that’s exactly the problem. True originality can’t come from that.

By incorporating the unpredictable into your text, you make it original and prevent an AI detector from being triggered.

#4 Keep “Higher-Level Thinking” to Yourself

AI tools often cram a lot of facts into a short section and sometimes sound generic because of it. This is another reason why AI detectors flag texts.

So avoid overloading your text with too many facts and too much information. Instead of just reciting the bare facts, keep it brief. An academic paper isn’t a Wikipedia article; it’s an argument that unfolds gradually.

For example, you could include a theoretical perspective to look at a topic from a new angle.

An AI would only come up with that if you fed it the idea. So as long as higher-level thinking remains your responsibility, you’re on safe ground.

Let’s say you’re writing a paper on digital transformation in the service sector.

You could just describe the topic and the related literature. An AI could do that too—so don’t expect a top grade here.

But if you come up with the original idea to analyze your topic through the lens of French pragmatism, like Boltanski and Thévenot (1991), then you’re about to create an original piece of work.

Meanwhile, your classmates might use ChatGPT to churn out a paper in 30 minutes and spend the rest of the day watching Netflix.

But who do you think will know more after graduation?

If you dig into Boltanski and Thévenot (1991) and use your paper as a chance to grow intellectually, you’ve already won.

It’s about resisting the quick AI solution and investing in work that truly helps you move forward.

#5 Avoid Low-Quality Sources

Sure, you can use ChatGPT or other AI tools for research. For example, the tool Consensus is super helpful for finding suitable sources.

However, you shouldn’t just blindly trust the information. AI tools often give useful summaries and explanations, but they don’t rely on primary scientific sources. To ensure the facts are correct, cross-check the AI’s info with other sources. Use reliable sources like books, scholarly articles, or databases.

At the same time, AI might give you a source that actually exists but is from an MDPI-published journal. These are often poorly peer-reviewed and therefore highly questionable.

I would never cite such an article, and I’d grade a paper relying on such sources more harshly.

For you, this means you need to develop the ability to differentiate between good and bad sources. AI can’t do this—yet—and it’s a risk for the quality of your academic work!

#6 Know the Difference Between Support and Plagiarism

In my opinion, AI is here to stay. Learning to use tools like ChatGPT properly will be an essential skill in the future job market. That’s why I don’t think you should avoid using AI entirely in your studies.

Instead, you should start using these tools right now—just in a smart way.

Many universities agree and allow AI use, but you must be transparent about how and to what extent you used it.

It’s perfectly fine to use AI tools as a support—even for academic writing. See AI as your creative assistant, helping you develop your ideas and structure your thoughts—not as a tool that writes your entire paper for you.

I’ve already made a detailed video on AI and plagiarism, which you can find linked here.

However, AI detectors work differently than plagiarism scanners. If you use AI to paraphrase, the plagiarism scanner won’t go off, but the AI detector likely will.

So, if you want to use AI for paraphrasing or spell check, just get your supervisor’s approval. Then, write a statement disclosing this in your affidavit at the end of your paper, and you won’t have to worry about AI detectors again.

Of course, this only works if your university’s exam regulations don’t explicitly prohibit AI use. So check your university’s current AI policy beforehand.

#7 Use an AI Detector Yourself

A final tip: Before submitting your paper, run it through various AI detectors or plagiarism scanners. There are several online tools now that can detect if your text might be flagged as AI-generated.

You can test an AI detector yourself and play around with it.

If you want to try it out, for example, you can use Quillbot’s free AI detector: https://quillbot.com/ai-content-detector.

Test your own text, the AI-generated text, and something in between. You’ll be able to spot patterns and see how changes affect the score.

This knowledge will help you when writing your academic paper and applying the previous 6 tips!

Conclusion

AI detectors have become really good at spotting patterns in AI-generated texts. But they’re not infallible.

In English, you’d call this an “arms race”: AI detectors and AI tools constantly push each other forward, with one always trying to stay a step ahead of the other.

This is why no student will fail an exam solely because of an AI detector. Sure, plagiarism can be definitively proven, as that’s relatively easy to verify.

That said, this doesn’t apply to AI-generated content. There will always be some doubt. Someone could simply have a writing style that’s a lot like a generative AI tool’s. There’s no surefire way to prove a text was generated by AI.

But I can’t stress this enough: if you use AI, don’t shut off your own thinking.

Instead, think of AI as a tool that makes things easier, giving you more space for genuine creative thought. But you really have to use that space—and not waste the time you save on something else. Only then will AI help you study more effectively than people could 10 years ago, allowing you to produce truly original work.


Bloom’s Taxonomy: The Secret Formula for Top Grades!

Have you ever wondered why some students seem to ace every exam with an A, while others, despite intense studying, barely scrape by with a C? The secret might just lie in how they use Bloom’s Taxonomy to approach their studying.

The key to successful learning isn’t just how much time you spend with your books but how smartly you use that time.

And this is where Bloom’s Taxonomy comes in—a super useful tool that teachers use. But if you know it too, it can help you improve your study strategies and boost your grades.

Basics of Bloom’s Taxonomy

Let’s start with the basics. Bloom’s Taxonomy was developed in 1956 by Benjamin Bloom and his colleagues.

Their goal was to create a classification of learning objectives that covers different levels of thinking. This classification consists of six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating.

Originally, the taxonomy was designed for educators to help them clearly define learning goals and assess student progress.

Nowadays, modern exam software and learning management systems (LMS) are increasingly incorporating features to sort and analyze questions according to the different levels of Bloom’s Taxonomy.

Great, but why should this matter to you?

In many university exams, questions are designed to cover a range of cognitive skills, as described in Bloom’s Taxonomy.

By understanding these different levels, you can better prepare for the various types of questions you’ll face in your exams. You’ll know exactly what’s expected to score full points when you see a particular keyword in the question.

The number of points awarded typically depends on the type of question. A question that tests factual knowledge (like multiple-choice) will usually be worth fewer points than one that asks you to apply knowledge (like a case study).

And if you take a closer look at the taxonomy, it becomes clear why you didn’t get that top grade in your last exam, even though you spent hours memorizing the entire script!

For example, if you only memorized facts, you’ve only covered the lower levels of the taxonomy. In a task like, “Describe the basic principles of…,” you’re only asked for knowledge.

But when a question says, “Apply the principles of… to example X and explain…,” then you’re dealing with higher levels of the taxonomy, and simply recalling facts won’t cut it.

When you see keywords like “describe, explain, apply,” you’ll know how profs structure their exams, what the expectations for top marks are, and you can tailor your exam prep and study techniques accordingly.

The Six Levels of Bloom’s Taxonomy

1. Remember

The first level is about recalling facts and basic information. In exams, these questions are often multiple-choice or short-answer questions that test simple factual knowledge. They check whether you’ve memorized basic info. You’ll need this as the foundation for deeper questions and analysis. To prepare for these types of questions, flashcards are an effective tool. Regular repetition is also crucial—schedule fixed times in your study plan to revisit and solidify what you’ve learned.

Example exam questions:

  • Define the term “photosynthesis.”
  • Name the four basic principles of bioethics according to Beauchamp and Childress.

2. Understand

The next level is understanding. Here, it’s about grasping the meaning of information and being able to explain it in your own words. Exam questions might ask you to explain concepts or clarify the significance of theories. To prep for understanding questions, discuss concepts with your classmates. Explain the concepts to each other in your own words. This deepens your understanding and helps clear up any confusion. Paraphrasing is also helpful—try summarizing complex texts in your own words. Creating concept maps or mind maps that show the relationships between different ideas can also help. This visual representation helps you grasp the bigger picture and understand how everything fits together.

Example exam questions:

  • Explain how photosynthesis works in your own words.
  • Explain the difference between microeconomic and macroeconomic models.

3. Apply

These questions test whether you can apply your theoretical knowledge in practical situations. They might ask you to apply theories and concepts to real-world problems, often using case studies or practical tasks. To prepare for application questions, regularly work on practice problems that challenge you to apply what you’ve learned in new contexts. Or look for case studies that deal with similar problems as those discussed in class and practice analyzing them.

Example exam questions:

  • Use a SWOT analysis to assess the strengths and weaknesses of a real company of your choice.
  • Apply the concept of Nash equilibrium to analyze the strategic behavior of two competing firms.

4. Analyze

Analyzing involves breaking down information and understanding the relationships between the parts. These questions often require in-depth analysis of texts, data, or theories. They test your critical thinking and ability to dissect complex information. You can practice this by reading academic papers and understanding their argument structures. Or look at how data and statistics are analyzed and interpreted. You could also create argument chains to sharpen your analytical skills.

Example exam questions:

  • Analyze the argument structure in Kant’s Critique of Pure Reason and evaluate the validity of his conclusions.
  • Compare the various theories of personality development.

5. Evaluate

Evaluating means making judgments about the value and quality of information or methods. Exam questions might ask you to compare and assess different theories or models. Prepare for this by writing critical essays where you compare different theories or models. You can also create evaluation rubrics to assess your own work and that of your peers. Through peer review processes, you can evaluate others’ work and provide feedback.

Example exam questions:

  • Evaluate the effectiveness of the European Central Bank’s current monetary policy in the context of post-COVID-19 economic recovery.
  • Critique the methodology and conclusions of the study on the effectiveness of online learning compared to in-person instruction.

6. Create

The highest level, creating, involves combining elements to develop something new and original. Exam questions at this level might ask you to formulate hypotheses or develop creative solutions to problems. Use techniques like brainstorming or mind mapping to develop new ideas. You could even participate in projects to work on your creative skills.

Example exam questions:

  • Develop a research plan to study the long-term effects of microplastics on marine ecosystems.
  • Design an innovative business model for a start-up.

Exam Prep with Bloom’s Taxonomy

So, how can you effectively use Bloom’s Taxonomy for your exam prep?

First, it helps you clearly and systematically define your learning goals. For example, when preparing for an exam, you can organize your study objectives according to the six levels of Bloom’s Taxonomy.

Start with memorizing basic facts (Remembering), then work your way through understanding the concepts (Understanding), and apply what you’ve learned in practice problems (Applying). Next, analyze complex problems (Analyzing), evaluate different solutions (Evaluating), and finally, develop new ideas or projects (Creating). This approach makes your studying more efficient and prepares you perfectly for exams.

Check out my YouTube channel for tutorials on different study techniques. Match them to the levels of the taxonomy: Spaced repetition for remembering. Active recall for remembering and understanding. Inquiry-based learning for analyzing and evaluating. The Feynman technique for applying. Design thinking for creating, and so on.

By using Bloom’s Taxonomy, you can target your preparation for different types of exam questions and optimize your study strategies.

This structured approach not only leads to better grades but also a deeper understanding and higher competence in your field.

By systematically working through this process, you’ll be fully prepared for exams, ace them, and be able to use your knowledge flexibly afterward.

Conclusion – Bloom’s Taxonomy

Bloom’s Taxonomy shows that deep and lasting learning involves multiple levels that go beyond just memorizing information.

In short, a deep understanding and the ability to apply and evaluate knowledge lead to better exam performance and top grades.

It’s about mastering knowledge and being able to use it flexibly, rather than just memorizing it temporarily.

Of course, there are exceptions. Some exams mainly test factual knowledge. Medical students in their first semester might know this all too well. In these cases, exams are 90% multiple-choice, and they “cross off” answers like there’s no tomorrow.

But now you have the ability to mentally run any kind of exam through the lens of Bloom’s Taxonomy and prepare yourself laser-focused based on that.

This puts you ahead of 99% of others.


PRISMA Literature Review (Flow Chart & Example)

Are you planning to conduct a systematic literature review and want to follow the PRISMA protocol for this?

It’s easier than you think!

In this article, I’ll explain what PRISMA is and show you exactly how you can apply it in your own literature review.

What is a PRISMA Literature Review?

PRISMA stands for “Preferred Reporting Items for Systematic Reviews and Meta-Analyses.” It’s a guideline developed to improve the process and reporting of systematic reviews and meta-analyses.

These literature-based papers are particularly valuable because they summarize the findings of many individual studies, providing a more comprehensive picture of a topic.

The PRISMA guidelines offer a standardized framework that ensures all important aspects of a systematic review are reported transparently and completely. This includes describing the search strategy, the criteria for selecting studies, the method for data extraction, and the assessment of study quality.

One important point is that PRISMA does not provide specific instructions on how to conduct the systematic review itself.

It does not include detailed steps for which databases to select or how to analyze the data. These tasks fall under the methodology of the systematic review and depend somewhat on your field. Therefore, you need to choose your own analysis method and combine it with PRISMA.

However, PRISMA helps guide you through the systematic search process step by step and documents it thoroughly.

The Goals of PRISMA

The main goals of PRISMA are:

  • Transparency: Ensuring that your search strategy is clearly and thoroughly described so that other researchers can replicate and verify your study.
  • Completeness: All relevant information must be reported to give readers a full picture of your literature search.
  • Comparability: By standardizing the reporting, it becomes easier to compare and evaluate different systematic reviews.

You can find a complete overview here: https://www.prisma-statement.org/prisma-2020.

When following the PRISMA guidelines, always make sure to cite the original source that contains the most recent version of the guidelines. The current version is PRISMA 2020. Here’s the complete reference for the PRISMA 2020 guidelines:

Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., … & Moher, D. (2021). The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. BMJ, 372.

What is the PRISMA Flow Chart?

The PRISMA flow chart, also sometimes called the PRISMA diagram, is a chart that shows how studies are selected for a systematic review.

It consists of four main phases:

  1. Identification: You search databases and other sources for studies and record the total number of studies found.
  2. Screening: You review the titles and abstracts of the studies and filter out those that are not relevant.
  3. Eligibility: You read the full text of the remaining studies and exclude those that do not fit your criteria.
  4. Inclusion: The final group of studies that will be included in your literature review or meta-analysis remains.

The PRISMA diagram helps you document the selection process clearly and ensures that nothing important is overlooked.
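If you want to keep the numbers for the flow chart tidy while you screen, a tiny tally is all you need. Here is a hypothetical Python sketch with made-up counts (your own numbers will come from your search and screening logs):

```python
# Hypothetical counts for the four phases of the PRISMA flow chart.
records_identified      = 500  # hits across all databases and other sources
duplicates_removed      = 80
excluded_title_abstract = 300  # removed during title/abstract screening
excluded_full_text      = 70   # removed after full-text assessment

records_screened          = records_identified - duplicates_removed         # 420
records_assessed_fulltext = records_screened - excluded_title_abstract      # 120
studies_included          = records_assessed_fulltext - excluded_full_text  # 50

print("Screened:", records_screened)
print("Assessed in full text:", records_assessed_fulltext)
print("Included in the review:", studies_included)
```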

In the methods section of your paper, you should mention that your systematic review followed the PRISMA guidelines.

By explicitly mentioning PRISMA in the methodology section, you ensure that readers (and your supervisor) recognize and (hopefully) appreciate the structured approach of your systematic review.

Implementing a PRISMA Literature Search

Here are a few simple steps to implement the PRISMA literature search in your own work:

  • Research: Search multiple databases, such as PubMed or Scopus, for relevant studies. Make a note of how and where you searched.
  • Study Selection: Review the studies and remove those that don’t fit your criteria. Use the PRISMA diagram to document this process. You’ll need to develop your own selection criteria.
  • Data Extraction: Gather key information from the selected studies, such as sample size, methods, and results. What exactly you extract depends on what you’re investigating.
  • Study Quality Assessment: Assess the quality of the studies to ensure they are reliable.

Example of a Literature Review Using a PRISMA Diagram

To show you how PRISMA works in practice, let’s take a look at a paper that followed the PRISMA guidelines. The systematic review by Helen Crompton and Diane Burke, “Artificial intelligence in higher education: the state of the field,” examines the use of artificial intelligence (AI) in higher education.


The PRISMA guidelines were used in this study to make the process of the systematic review transparent and complete. Here’s a simple explanation of how the PRISMA guidelines were applied:

  • Identification: The researchers conducted a literature search across several databases, identifying 341 relevant studies. Additionally, they conducted a manual search, finding 34 more studies. A manual search means that the researchers independently searched specific journals, reference lists, search engines, and websites in addition to the automated database search to ensure that no relevant studies were overlooked. Four duplicate studies were removed.
  • Screening: After removing duplicates, 371 articles remained. After reviewing the titles and abstracts, no articles were excluded, so all 371 proceeded to full-text screening.
  • Eligibility: The remaining articles were read in full and assessed. Some studies were excluded for the following reasons:
      • No original research (n = 68): These articles were not original studies, but rather reviews or commentaries.
      • Not in the field of higher education (n = 55): These studies were not related to higher education.
      • No artificial intelligence (n = 92): These studies did not deal with AI.
      • No use of AI for educational purposes (n = 18): AI was not used for educational purposes in these studies.
  • Inclusion: Finally, 138 articles were included in the systematic review. These articles were analyzed in detail and qualitatively coded to answer the study’s research questions.

Source: Crompton, H., & Burke, D. (2023). Artificial intelligence in higher education: the state of the field. International Journal of Educational Technology in Higher Education, 20(1), 22.
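If you like, you can double-check how the numbers in such a flow chart fit together. Here is a purely illustrative sketch in Python (the variable names are my own) that reproduces the counts reported by Crompton and Burke (2023):

```python
# Minimal sketch: how the counts in a PRISMA flow chart relate to each other,
# using the numbers reported by Crompton & Burke (2023).

database_hits = 341   # records found via the database search
manual_hits = 34      # records found via the manual search
duplicates = 4        # duplicate records removed

identified = database_hits + manual_hits   # 375 records identified
screened = identified - duplicates         # 371 records screened (titles/abstracts)

# Full-text exclusions with their reasons
exclusions = {
    "no original research": 68,
    "not in higher education": 55,
    "no artificial intelligence": 92,
    "no educational use of AI": 18,
}

included = screened - sum(exclusions.values())  # 371 - 233 = 138 studies included
print(f"Identified: {identified}, Screened: {screened}, Included: {included}")
```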

You just need to fill out the PRISMA flowchart with the results of your literature search and screening, and you can include it in the methods section of your paper as a figure. Super easy, right?

The PRISMA Checklist

Additionally, PRISMA offers useful resources like a checklist, available on the PRISMA website. This checklist helps ensure that systematic reviews and meta-analyses are reported in a complete and transparent manner. It consists of 27 items, organized into different sections, and serves as a guide to structure your review.

This checklist is particularly relevant if you are preparing a full systematic review for your thesis or paper.

Checklist Summary:
  • Title and Abstract: Clearly state that it is a systematic review. Provide a concise overview of the study.
  • Introduction: Outline the background and reasons for the review. Clearly define the review’s objectives and research questions.
  • Methods: Specify the inclusion and exclusion criteria for the studies. Describe the information sources and search strategies. Explain the selection and data extraction processes. Outline methods for assessing risk of bias and measures of effect. Detail how data from different studies were combined and analyzed.
  • Results: Present the results of the search and selection process, ideally using a flow diagram. Summarize the characteristics and findings of the included studies. Evaluate the risk of bias and the certainty of the results.
  • Discussion: Interpret the findings in the context of other evidence. Address the limitations of the evidence and methods. Consider the implications for practice and future research.
  • Additional Information: Provide details on the registration and protocol of the review. List both financial and non-financial sources of support. Disclose any potential conflicts of interest among the authors. Indicate the availability of data and materials.

While other PRISMA resources may be useful for high-level publications or complex meta-analyses, for your studies, the most relevant parts are the flowchart and sections of the checklist.

If you have any questions, feel free to leave a comment!


Deductive-Inductive Combination in Thematic Analysis (Tutorial)

Do you want to apply a combination of deductive and inductive thematic analysis in your qualitative research?

Is that even possible, and what should you keep in mind?

In the next few minutes, I’ll show you how to combine deductive and inductive coding, what authors like Braun and Clarke, the most cited authors on thematic analysis, think about it, and how to use this knowledge to perfect your qualitative study.

Inductive and Deductive Theme Formation in Thematic Analysis

If you’re familiar with my videos on thematic analysis, you know that this method distinguishes between two types of code and theme formation for analysing qualitative data.

#1 Building Themes Inductively

Here, you derive abstract codes from your data, for example from your interview transcripts. As the saying goes, the themes “emerge from the material.” This approach is also often referred to as “bottom-up” coding.

#2 Applying Themes Deductively

In this type of thematic analysis, you create a list of pre-defined themes from a theory or other literature.

Then, you approach the data with these themes in mind and systematically allocate your data to each theme. You can also count how often each theme occurs in your data set.

But what about a deductive-inductive combination? Does it have any advantages?

The answer is: Yes!

Deductive-Inductive Combination for Thematic Analysis

Thematic analysis was originally conceived as an inductive method, but it’s not always easy to form new codes and themes purely from the ground up. A deductive-inductive combination is therefore often a good way to get the best of both worlds.

Background

Why is this approach useful?

First, pure induction is powerful but difficult. The problem of induction, which frustrated David Hume and Karl Popper, still persists: inferring a general rule from individual cases is highly problematic. But even if you are OK with that, it is quite difficult for beginners to confidently develop themes “out of nothing.” At the same time, supervisors often encourage you to read up on theories, and incorporating existing theory into inductive research is quite a challenge that requires a lot of experience to do correctly.

Second, doing only deductive coding also has significant disadvantages. Your analysis is confined to the theoretical framework you chose in advance. Deduction doesn’t allow you to break out of this framework, which means surprising insights that might contradict prior knowledge are not captured. Yet these surprising insights are what make many research projects interesting and can potentially provide greater value for existing theory and literature.

Thus, a combination of both logics can be considered for thematic analysis.

deductive inductive combination

A deductive-inductive thematic analysis

In deductive thematic analysis, you choose your themes based on prior theoretical knowledge to guide the research process. The data is then coded and allocated into this structure (e.g., the main themes). A deductive-inductive thematic analysis starts exactly the same way.

But then, you use surprising findings or data that do not fit these themes to form new, inductive subthemes. If you find a lot of such data, you can even add a new main theme to your list of themes.

This can create theoretical added value, contributing something new to the existing knowledge – all while you enjoy the comfort of the theoretical framework you used for the deductive part of the coding.

Example

To better understand how a deductive-inductive combination works in thematic analysis, let’s look at a brief example.

Theory

Assume your study involves Identity Theory. It broadly states that individuals adopt certain characteristics, values, and norms to view themselves as unique compared to others. According to Burke and Stets (1999), three theoretical dimensions are involved in the identity formation process: Investment, Self-esteem, and Rewards.

These three theoretical dimensions could serve as a structure for your thematic analysis. According to deductive logic, three main themes would emerge:

#1 Main Theme: Investment in… X (X=your topic)

#2 Main Theme: Self-esteem based on… X (X=your topic)

#3 Main Theme: Rewards from… X (X=your topic)

You could now structure your material, such as interview passages, according to these three main themes using regular coding techniques.

Now, you could inductively add subthemes by grouping your codes.

Or, if some data doesn’t fit any of the three main themes, you can add another main theme for that “left-over” data.

Data

Imagine you have the following quote in your interview transcript:

“Earlier, I thought I would have to give up my job completely to be a mother. Now I’m more confident that I can do both if I want, or if I want to work part-time or flexibly and arrange childcare without it affecting my career.”

This quote initially falls under Self-esteem, as it deals with self-perception and confidence in overcoming obstacles.

For this quote, you could create the subtheme “self-evaluation of own competencies.” This new subtheme could be placed under the main theme “Self-esteem based on… X (X=your topic).”

You might find more quotes that fit into this subtheme, or perhaps quotes that form a completely new main theme – though this is unlikely with a well-established and often-tested theory.

You work inductively through your data until your system of main and subthemes forms a complete picture.
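If you manage your codebook digitally, it can help to keep the deductive and the inductive parts visibly separate. Here is a purely illustrative sketch in Python; the main themes follow the example above, while the subtheme entry and the “left-over” list are hypothetical placeholders:

```python
# Illustrative codebook structure for a deductive-inductive thematic analysis.
# The three main themes are set deductively (Burke & Stets' dimensions);
# subthemes are added inductively while coding the transcripts.

codebook = {
    "Investment in X": [],                 # subthemes added inductively
    "Self-esteem based on X": [
        "self-evaluation of own competencies",  # subtheme created for the quote above
    ],
    "Rewards from X": [],
}

# Segments that fit none of the deductive main themes are parked here.
# If enough of them accumulate, they can become a new, inductive main theme.
leftover_segments = []
```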

What Do Braun and Clarke Say About the Deductive-Inductive Combination?

The authors of the most cited paper on the method have made clear that it was originally intended to follow an inductive logic. Their focus on being “reflexive” as a researcher underscores this.

However, qualitative research is flexible precisely because it is not about following a guide from start to finish and never deviating from it. It is about tailoring the analytical approach to your needs and to what makes the most sense in your situation.

Therefore, Braun and Clarke would agree that combining both logics can be of value if it makes sense in your context.

Outside the Braun and Clarke bubble, many researchers see deductive-inductive code formation as a very legitimate way to conduct qualitative research, especially if you are a beginner.

As I always emphasize, you have the freedom to tailor your approach to your needs, as long as you act systematically and can logically justify your decisions in your methods chapter.

Is Deductive-Inductive the Same as Abductive?

You might have heard of the third type of reasoning: abduction. It comes into play when you encounter a particularly surprising result and try to explain it with whatever makes the most sense given the information you have at that point. You then infer a rule or category that best explains this result. However, this is quite risky, as you can’t verify whether the rule is correct.

A typical example of abduction is the detective Sherlock Holmes. He looks for clues at the crime scene and abductively infers how the events might have unfolded. These inferences are often very bold but bring him closer to the truth.

However, abduction in qualitative research isn’t really something you can plan systematically. You can’t know if you’ll find surprising results where an abductive inference could help. Here, we can only adopt abduction as a general attitude towards surprising results.

The deductive-inductive method, on the other hand, can be systematically planned and implemented.


Social Network Analysis (Introduction & Tutorial)


What is a Social Network Analysis? You’ve probably seen those colorful network graphs in newspaper articles or scientific papers. They look like a lot of work to create, right? Or maybe not?

Actually, you can conduct such an analysis without extensive programming knowledge or expensive software.

If you want to know how to do it – then you should sharpen your pencil and take notes.

In this article, I will explain everything about Social Network Analysis – where it comes from, what it’s good for, and how you can apply it. I will cover these five areas:

  1. Network Theory
  2. Applications of Social Network Analysis
  3. Data Collection
  4. Data Analysis and Visualization
  5. Overview of the Best Software Tools

By the end of this article, you’ll have all the links and further information you need to conduct your first Social Network Analysis.

#1 Network Theory

To understand Social Network Analysis, we first need to be aware of its theoretical basis: network theory. This theory comes from mathematical graph theory.

Network theory deals with the relationships between specific objects. In the context of Social Network Analysis, these objects are usually social actors. The objects and their relationships are represented as a graph, i.e., a diagram in which two or more objects (points) are connected by lines.

Nodes and Edges

In the vocabulary of Social Network Analysis, an object is called a node (or vertex). The relationship between two or more nodes is represented by edges. These are the lines between the nodes.

A relationship can be either undirected or directed. Let’s imagine our network represents the relationships between Instagram accounts of famous politicians. The nodes are the people, and the edges are the follower relationships.

If Kamala Harris follows Donald Trump, but he does not follow her back, there is a directed edge from Kamala Harris to Donald Trump, usually shown with an arrow. Kamala Harris is the starting node and Donald Trump is the ending node.

If Donald Trump also follows Joe Biden, but not Kamala Harris, Donald Trump is an adjacent node to both Kamala Harris and Joe Biden. However, Joe Biden is not an adjacent node to Kamala Harris.
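If you want to play with this yourself, the little follower network above can be built in a few lines. Here is a minimal sketch using the Python library networkx, which is just one of several possible tools:

```python
# Minimal sketch of the directed follower network described above,
# using the Python library networkx.
import networkx as nx

G = nx.DiGraph()                                 # directed graph: edges have a direction
G.add_edge("Kamala Harris", "Donald Trump")      # Harris follows Trump
G.add_edge("Donald Trump", "Joe Biden")          # Trump follows Biden

print(list(G.successors("Kamala Harris")))       # ['Donald Trump']  -> whom she follows
print(list(G.predecessors("Donald Trump")))      # ['Kamala Harris'] -> who follows him
print(G.has_edge("Donald Trump", "Kamala Harris"))  # False: he does not follow her back
```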

Centrality Measures

When facing a larger network, you might want to know certain properties of individual nodes or determine which nodes are particularly important or play a specific role in the network.

For this, you can calculate various centrality measures.


Density

The density measure describes a characteristic of the entire network: it indicates how many edges exist in the network relative to the maximum possible number of edges.

For example, it shows how many users in our group of politicians are connected with each other compared to a scenario in which everyone is connected with everyone. If all nodes are connected to each other, the density is 1, or 100%. Density is therefore always a value between 0 and 1.

Degree Centrality

Now let’s look at the centrality measures. Unlike density, they describe properties of individual nodes rather than of the whole network.

This measure indicates how many edges a node has. If Kamala Harris has 9 follower relationships (regardless of their direction), the degree of her node is 9.

For directed graphs, we distinguish between incoming edges (in-degree) and outgoing edges (out-degree).

Closeness Centrality

This measure is based on the lengths of the shortest paths between a node and all other nodes: the shorter these paths are on average, the higher the closeness centrality. It shows how central a node is within the entire network.

For example, how many contacts must Kamala Harris go through on average to reach certain politicians? The fewer, the more central she is in the network.

Betweenness Centrality

This measure indicates how often a node lies on the shortest path between two other nodes. Nodes with high betweenness centrality often lie between two or more clusters of nodes, essentially forming a bridge between them.

Eigenvector Centrality

This measure indicates how important the neighbors of a node are. The more important its neighbors are, the higher the value.

The best example of this measure is Google’s PageRank algorithm. It follows the rule that a web page is ranked higher in search results the more other important pages link to it.

So, if I have a blog post on my website and it is linked by major sites like CNN, BBC, and Forbes, that’s better than if it is linked by two local newspapers and an unknown blogger.
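To make these measures less abstract, here is a small illustrative sketch that computes density and the centrality measures discussed above with networkx. The example network is made up, and for the eigenvector-style measure I use networkx’s PageRank function, since that is the variant mentioned above:

```python
# Sketch: density and centrality measures on a small made-up follower network.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Harris", "Trump"), ("Trump", "Biden"), ("Biden", "Harris"),
    ("Obama", "Biden"), ("Obama", "Harris"),
])

print(nx.density(G))                 # edges present / maximum possible edges (0..1)

print(nx.in_degree_centrality(G))    # normalized number of incoming edges per node
print(nx.out_degree_centrality(G))   # normalized number of outgoing edges per node

print(nx.closeness_centrality(G))    # based on shortest-path distances to each node
print(nx.betweenness_centrality(G))  # how often a node lies on shortest paths

# PageRank: a node is important if important nodes point to it
print(nx.pagerank(G))
```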

#2 Applications of Social Network Analysis

Social Network Analysis has two main applications. The first is in academic research.

Social Network Analysis in Research

Theoretically, every discipline within the social sciences can use Social Network Analysis. But it goes beyond that. For example, you can also analyze and visualize citation relationships between papers, universities, and scientists.

Citation network from Stieglitz et al. (2018)


Most commonly, you’ll find Social Network Analyses in political science, communication studies, and sociology.

Social Network Analysis in Journalism

The second main application is journalism. Data journalists increasingly use network analyses and visualizations, for example, to map connections between people, companies, or organizations in their investigations.

#3 Data Collection

The basis for conducting any Social Network Analysis is data. In online research, this data is usually obtained through web scraping or via an API (for example, of a social media platform).

If you want to practice, there are plenty of datasets available online for free. You can try Google Dataset Search, Kaggle, or data.gov.

Data doesn’t always have to be collected automatically. It’s also possible to create small networks by manually entering your data into an Excel sheet or digitizing it in some other way.

For a Social Network Analysis, it is important that the data points reference each other, for example, by giving each node an ID that is then referenced by every other node it is connected to.

Only then can you calculate centrality measures and visualize a network with software.
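In practice, a simple edge list is usually enough: one row per relationship, containing the IDs (or names) of the two connected nodes. Here is a purely illustrative sketch of what that could look like and how you might load it in Python with pandas and networkx; the file name and column names are made up:

```python
# Sketch: loading a simple edge list into a directed graph.
# Hypothetical file "followers.csv" with one edge per row:
#
#   source,target
#   Harris,Trump
#   Trump,Biden
#   Obama,Biden
#
import pandas as pd
import networkx as nx

edges = pd.read_csv("followers.csv")
G = nx.from_pandas_edgelist(edges, source="source", target="target",
                            create_using=nx.DiGraph())

print(G.number_of_nodes(), G.number_of_edges())
```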

#4 Data Analysis and Visualization

Now we come to the analysis. The two most common tools for conducting a Social Network Analysis are R and Gephi.

Both programs can be downloaded and used for free. With R, you’ll need some time to get used to it, as you’ll need to learn or look up the programming language commands.

If you want to avoid programming languages entirely, I’d recommend Gephi. This software has a complete graphical user interface, and you can perform all sorts of tasks related to Social Network Analysis.

It still requires some time to learn Gephi, but there are great tutorials available on YouTube or you can get help in Gephi support groups on Facebook.

A Social Network Analysis with very large datasets requires quite a bit of computing power. To prevent your PC or laptop from reaching its limits and Gephi from crashing, you should filter your data beforehand or use a virtual machine.
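One simple way to thin out a large network before visualizing it in Gephi is to drop weakly connected nodes and export the rest. The following sketch shows the idea with networkx; the file names and the degree threshold are arbitrary examples:

```python
# Sketch: filtering a large network before opening it in Gephi.
import networkx as nx

G = nx.read_gexf("big_network.gexf")      # hypothetical input file

# Keep only nodes with at least 3 connections (threshold chosen arbitrarily)
keep = [node for node, degree in G.degree() if degree >= 3]
G_small = G.subgraph(keep).copy()

nx.write_gexf(G_small, "filtered_network.gexf")   # Gephi can open .gexf files
```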

The next steps to start your first Social Network Analysis would be:

  1. Read the foundational book on Social Network Analysis by Wasserman & Faust (1994)
  2. Get a free dataset to practice
  3. Watch YouTube tutorials on R or Gephi until you’re an expert
  4. Join Facebook groups where you can ask questions
  5. Learn by doing
  6. And don’t forget: Have fun! 🙂

Triangulation in Research (Simply Explained)


Have you come across the term triangulation while working on your research paper? You might have a rough idea of what it means, but you’re not entirely sure?

Then sit back and relax.

In this article, I will explain briefly but precisely what triangulation in research is all about. I’ll also introduce the four types of triangulation and show you how to implement this technique.

This way, you can elevate your qualitative research design to the next level and make your research methodologically robust.

Triangulation (Word Origin)

You can easily derive the meaning of triangulation from Latin. “Tri” means three and “angulus” means angle. So, triangulation involves “measuring in a triangle,” a concept that originates from land surveying.

However, outside of land measurement and geometry, empirical social research has adopted this term. And that’s what we’re focusing on now.

Triangulation in Research

When we talk about triangulation, we are on a methodological level. It’s about how a specific research design can provide as much insight as possible using one or more methods.

While it is commonly associated with qualitative research, it can also be applied in quantitative and mixed methods research.

To avoid confusing you, let’s look at the definition of “triangulation” in the research context from our colleague Flick (2008, p.12):

“Triangulation involves taking different perspectives on a subject under investigation or more generally: when answering research questions. These perspectives can be realized in different methods applied and/or different chosen theoretical approaches, both of which are related to each other.”

The goal of triangulation is to gain deeper insights than would be possible with just a single method or a single theoretical perspective.

Using the metaphor of land surveying, the position of an object can be determined more accurately when viewed from at least two different angles.

4 Types of Triangulation

To help you apply triangulation in your scientific work, here are the four most prominent types (Denzin, 1970; Flick, 2011).

#1 Method Triangulation

This form is probably the most commonly used. Denzin, the father of triangulation, even distinguishes between within-method and between-method triangulation.

Within-method triangulation could involve using two different interview guides to loosen the constraints of methodological decisions when creating the research design.

Between-method triangulation would involve adding a second, different method. In the company example discussed below, you could distribute an additional online questionnaire to the employees or evaluate the usage data of the system.

#2 Data Triangulation

With this approach, you need different data sources. The method can remain the same, as can the phenomenon you are investigating.

To vary the data sources, you can change time, location, and people. There are almost endless possibilities, as you can already triangulate within each of these dimensions.

Time

Let’s take an example. Suppose your method is limited to expert interviews. You conduct interviews in a company and want to accompany the introduction of a new logistics system. You could triangulate within the dimension of “time.”

You select your expert, such as the head of the logistics department. Provided the company agrees, you could conduct an interview with the expert at two or three different points in time.

Here you would gain wonderful insights, for example, into the time before the introduction, the introduction process, and the experiences after the system has been used in the company for some time.

Location

In the same scenario, you could also triangulate the location. You could find two other companies where the system is also being implemented. Then you interview the heads of the logistics departments in these companies.

This way, you can make comparisons and examine the phenomenon you are investigating during the system introduction from different perspectives.

Data Subjects

Additionally, and I always recommend this, you can triangulate the data subjects. In addition to the logistics manager, you could include a warehouse specialist and a mid-level manager.

Of course, you can triangulate all three dimensions, but this also increases the effort. Consider which type of data triangulation would provide the most value for answering your research question.

#3 Investigator Triangulation

With this type of triangulation, the focus briefly shifts to the researchers themselves. Two or more researchers can prevent subjective distortions, a so-called “bias,” on the part of the researcher.

In our example of expert interviews, at least two interviewers would have to be involved. It would not be enough for you to conduct interview 1 and your fellow student to conduct interview 2. You would have to do it together, take notes independently, and then compare your evaluations.

This type of triangulation is only really feasible if you work in a group.

#4 Theoretical Triangulation

The last form of triangulation is quite exciting but also not easy for novices like you and me to implement. Before analyzing your data, you must be aware of your theoretical background.

This means which theory you use to understand the data or the phenomenon. Different theories offer different perspectives. In theoretical triangulation, you would apply several different theoretical frameworks to the data and view the phenomenon from different angles.

For example, you could develop a codebook for analyzing your interviews based on a behaviorist theory. Then, analyze your transcript again, this time with a codebook developed using a different sociological or psychological theory. Your imagination is the limit here.

Of course, always provided you argue well.

Triangulation in Research: Validation vs. Balance

To fully understand the concept of triangulation, it’s worth looking at the debate that has been carried out in the research literature over the past few decades.

It was Denzin (1978) who originally proposed triangulation as a strategy for validating research results. His idea was to use an additional method, applied on a much smaller scale, to check the accuracy of an analysis.

This approach, however, has been repeatedly criticized (e.g., Mayring, 2001), leading more and more researchers to argue that the different methodological approaches or theoretical perspectives should instead be treated as equal.

It is also important to understand that the research design in qualitative methods does not always have to be strictly predefined.

Most textbooks suggest a certain approach, with steps you should take, important quality criteria, and so on.

But in qualitative research, it is always possible to deviate from a blueprint if certain circumstances in your research require it.

Triangulation, too, should be understood as an open concept rather than something that needs to follow a strict guideline.

The Difference Between Triangulation and Mixed Methods

If you’re familiar with my article on mixed methods, you might wonder what the big difference is, since different methods are combined there too.

Mixed methods and triangulation are indeed two related concepts within empirical social research. They share similarities, such as the combination of different methods.

But: Mixed methods represent an independent research strategy that explicitly combines quantitative and qualitative methods to benefit from the strengths of both approaches.

Triangulation, on the other hand, is a much broader concept, which not only involves the combination of methods (although it can) but also includes theoretical perspectives and other subjective viewpoints.

Moreover, unlike mixed methods, triangulation does not have to cross the qualitative-quantitative divide: you can, for example, combine two different qualitative methods, or even triangulate within a single method (“within-method” triangulation).

References

Denzin, N. K. (1978). Triangulation: A Case for Methodological Evaluation and Combination. Sociological Methods, 339-357.

Flick, U. (2008). Managing quality in qualitative research. London, England: Sage.

Flick, U. (2011). Introducing Research Methodology: A Beginner’s Guide to Doing a Research Project. Los Angeles: Sage.

Mayring, P. (2001). Combination and Integration of Qualitative and Quantitative Analysis. Forum Qualitative Sozialforschung / Forum: Qualitative Social Research, 2(1).