
How Many Interviews Do I Need for My Thesis?

You’re in the early stages of your thesis and have decided to conduct interviews to gather empirical data. But now comes the big question: how many interviews do you actually need? Five? Ten? Fifty?

This is one of the most common questions I get asked, and the answer is—it depends.

Don’t worry, though. In this article, I’ll walk you through how to determine the optimal number of interviews for your study.

Why Isn’t There a Single Correct Answer?

You might have heard the phrase, “There are no fixed rules in qualitative research.” But what does that really mean? Unlike quantitative research, where sample size is often determined using statistical calculations, qualitative research is more flexible. Each qualitative study has different goals and uses different methods. This variability means there’s no universal number of interviews that’s always right—just guidelines and recommendations.

Luckily, Wutich and colleagues (2024) tackled this exact question in their paper. They developed a step-by-step flowchart to help you figure out the right number of interviews for your study.

According to the authors, the number of interviews largely depends on your research goals and methods. So, the first step is to clearly define what you want to achieve with your study and what kind of insights you aim to uncover. The appropriate number of interviews will then be guided by your research goals and how deeply you want to dive into the topic.

Their paper introduces several recommendations to help you narrow down the number of interviews without fixating on a rigid number. One central concept is saturation—the point at which additional interviews no longer provide new information.


The Five Key Approaches to Determining the Number of Interviews

The flowchart begins with a fundamental question: What is your research goal? Depending on whether you aim for a broad overview or an in-depth analysis, you’ll need a different amount of data.

1. Theme (Data) Saturation

If your goal is to gain a general overview of the main themes in your research area, you should aim for theme saturation. This occurs when no new themes emerge, and you’ve identified all the key aspects of your research topic. Wutich et al. (2024) recommend about nine interviews or four focus groups for this type of saturation. Theme saturation is ideal for studies designed to provide an overview of central themes, such as identifying the main stress factors among students.

Example: Imagine you’re exploring the topic of “stress in university life” and asking students what they find stressful. If, after several interviews, responses like “exam pressure” and “time constraints” keep repeating without any new factors emerging, you’ve reached theme saturation.

2. Meaning Saturation

For studies aiming to capture not just themes but also the interpretations and meanings of these themes from the participants’ perspectives, meaning saturation is the focus. This type of saturation digs deeper into the details associated with a theme. According to Wutich et al. (2024), meaning saturation usually requires about 24 interviews or eight focus groups.

Example: You’re studying how students experience exam stress. Instead of just identifying stress factors, you aim to understand how they perceive this stress. For some, it might stem from perfectionism, while for others, it’s due to time pressure or lack of support. When you’ve captured all these perspectives and no new interpretations arise, you’ve reached meaning saturation.

3. Theoretical Saturation

This approach is common in Grounded Theory, where the goal is to develop a theory that provides new insights into a phenomenon. Theoretical saturation involves understanding patterns and connections between different themes and building a theoretical foundation. According to Wutich et al. (2024), achieving theoretical saturation typically requires 20–30 interviews or more, depending on the complexity of the theory being developed.

Example: Suppose you’re developing a process theory on stress management in university life, exploring how various strategies interact over time. To create a comprehensive theory, you need detailed data covering multiple perspectives and connections. Theoretical saturation is achieved when additional interviews no longer refine or improve your theory.

Once you reach this point, you can stop collecting data—whether it’s at 23, 35, or 42 interviews. What matters is the outcome, not the exact number of interviews.

4. Metatheme Saturation

The meta-theme analysis method was originally developed to study cultural differences in language. Over time, it evolved into a mixed-methods approach that identifies overarching themes from qualitative data. This method combines qualitative data with quantitative analyses of word meanings or codes.

In recent research, meta-theme analysis has shifted towards qualitative applications, focusing on identifying and comparing shared themes across datasets collected in different locations or cultures. Typically, 20–40 interviews per site are needed to develop a solid list of main themes and identify common variations within each site.

Example: You’re researching “stress in university life” and interviewing students in both Germany and the USA. To highlight differences and similarities between these countries, you conduct enough interviews for each group until the central themes in each group start to repeat.

5. Saturation in Salience

With saturation in salience, the focus is on identifying the topics that are most important to participants. This type of saturation often uses a method called “free listing,” where participants list the topics or challenges that matter most to them. Salience saturation is reached when the participants’ lists begin to repeat. Wutich et al. (2024) suggest that 10 detailed free lists are often enough.

Example: If you ask students to list the biggest challenges in university life, and after about 10 lists, no new topics emerge, you’ve reached saturation in salience. This method is especially useful for quickly identifying the central issues that are most relevant to your participants.


Applying the Flowchart Step by Step

Now that you’re familiar with the five types of saturation, here’s a quick guide to using the flowchart to determine the number of interviews for your study:

  1. Define Your Research Goal
    Decide whether you want an overview of a topic or deeper insights and connections, such as developing your own theory or model.
  2. Choose the Right Type of Saturation
    Select the type of saturation that aligns with your goal—for example, theme saturation for a broad overview or theoretical saturation for theory development.
  3. Set an Initial Number of Interviews
    Start with the recommendations from Wutich et al., such as nine interviews for theme saturation or 20–30 for theoretical saturation.
  4. Analyze and Adjust
    Analyze your data and check whether saturation has been reached. If new themes or meanings emerge, conduct additional interviews as needed.
  5. Draw Conclusions
    Once saturation is reached and no new insights are uncovered, you’ve identified the right number of interviews for your study.
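
If you log your codes systematically, the saturation check in step 4 can be made concrete. Here is a minimal Python sketch, assuming you record the set of codes found in each interview; the stopping rule (two consecutive interviews without new codes) is an illustrative assumption, not a threshold from Wutich et al. (2024):

```python
# Toy saturation tracker: stop once `patience` consecutive
# interviews contribute no new codes (illustrative rule).

def saturation_point(codes_per_interview, patience=2):
    """codes_per_interview: one set of codes per interview, in order."""
    seen = set()
    quiet = 0  # consecutive interviews without new codes
    for i, codes in enumerate(codes_per_interview, start=1):
        new_codes = codes - seen
        seen |= codes
        quiet = quiet + 1 if not new_codes else 0
        if quiet >= patience:
            return i  # saturation reached at interview i
    return None  # not saturated yet: conduct more interviews

interviews = [
    {"exam pressure", "time constraints"},
    {"exam pressure", "financial worries"},
    {"time constraints"},
    {"exam pressure"},
]
print(saturation_point(interviews))  # -> 4
```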

Practical Tips for Deciding on the Number of Interviews

While the flowchart provides a solid framework, practical factors also come into play, such as the limited time available to complete your thesis. Here are some tips for efficiently implementing the recommendations:

  • Stay Flexible: Qualitative research is dynamic. You may need to adjust the number of interviews during data collection—whether because new themes emerge or many themes begin to repeat. Start with an approximate number and adapt as needed.
  • Use Pilot Interviews: Pilot interviews are a great way to get an initial impression and test your questions. They also help you estimate how many interviews you’ll need to cover all the relevant themes.
  • Plan Time and Resources: Conducting and analyzing interviews is time-consuming. Consider how many interviews you can realistically handle without compromising the quality of your work.
  • Focus on Data Quality: A thorough analysis of fewer interviews can often be more valuable than a superficial analysis of many.

Source: Wutich, A., Beresford, M., & Bernard, H. R. (2024). Sample Sizes for 10 Types of Qualitative Data Analysis: An Integrative Review, Empirical Guidance, and Next Steps. International Journal of Qualitative Methods, 23, 1-14.


How to Identify High-Quality Academic Papers


Ever experienced this? You cite a source that seems reliable, and your professor suddenly questions whether it’s even academic. Knowing how to find good academic papers is essential!

Or it is academic, but you’re unsure whether you should cite a paper from the “32nd Gordon Conference on Cannabinoid Function in the Brain” or not.

In this article, I’ll reveal 5 indicators that will help you distinguish good scientific sources from less reliable ones.

Why is evaluating scientific sources so important?

Here’s an important point that many often underestimate: the quality of your sources directly influences the credibility of your arguments.

Science relies on building new knowledge on solid, verifiable information. Sources that meet scientific standards, such as undergoing a rigorous peer-review process, are essential for creating a stable foundation for your academic work.

Without reliable sources, you risk basing your arguments on uncertain or outdated information, which diminishes the perceived quality of your own research.

Choosing the right publication outlet

In most research disciplines, three types of publications have become the standard:

  • Books
  • Journals
  • Conference papers

Choosing the right outlet can save you valuable time and effort. If you know that a journal, a conference, or a book publisher has a good reputation, you can be fairly confident that any source published there is likely reliable. At the end of this article, I’ll explain how to differentiate between a decent, average journal and a top-tier journal—so stay tuned.

Now let’s first look at the 5 general indicators.


1. The Peer-Review Process

A strong quality marker of a good scientific source is that it has undergone a peer-review process. Peer review means that a group of (usually anonymous) experts in the field has reviewed and evaluated the work before publication. They ensure that the methodology is solid, the arguments are convincing, and the results contribute to existing research. Only papers that pass this peer-review process are published.

Unfortunately, it’s not always easy to determine whether the peer-review process for a journal or conference is robust. If you see details in articles such as submission dates, how many revisions were made, and the names of the editors, that’s already a good sign.

Peer review takes time. If you see a journal article that was submitted three months before publication, that’s a sign the peer-review process may not be very thorough.

If you come across platforms like arXiv or SSRN, be aware that articles listed there have not yet undergone a peer-review process. These are called “preprints.” Preprints have the advantage of sharing the latest research with the world quickly, but they may still contain errors. So, be cautious about citing preprints.

It’s best to combine multiple indicators. Let’s take a look at a few more.


2. The Number of Citations for an Article

The citation count shows how often other researchers have used an article as a source. A high number of citations indicates that the article is considered important or groundbreaking in its field. If there are hundreds of citations, that’s already a strong signal.

However, it’s worth taking a closer look: an article isn’t always widely cited because of its quality. Some articles are cited because they’re groundbreaking, while others may be cited because they’re controversial or even flawed. Therefore, you should always view the citation count in context and distinguish good from bad citations.

Platforms like Google Scholar or Scopus can provide insights into an article’s citations. For example, if the articles citing your original article are also frequently and positively cited, that’s a good sign. This technique is also known as “citation chaining.”

However, there are also journals and publishers that exploit this and play a dirty game with citations. They attempt to artificially generate citations to boost their journal’s reputation. You can gauge how often a journal is cited by looking at its Impact Factor.

3. Impact Factor

The Impact Factor shows how often articles in a journal are cited on average—typically over the past two years. A high value indicates that the research published there receives a lot of attention and is considered relevant. The Impact Factor is calculated by dividing the number of citations received by articles from the last two years by the number of articles published in those two years. For example, a journal with an Impact Factor of 5 means that, on average, each article was cited five times.
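
The arithmetic is simple enough to check yourself. A toy Python illustration with made-up numbers (no real journal data):

```python
# Two-year Impact Factor: citations received this year to articles
# published in the previous two years, divided by the number of
# articles published in those two years (numbers are invented).

citations_to_recent_articles = 1500  # e.g., citations in 2024 to 2022/2023 articles
articles_published = 300             # articles published in 2022 and 2023

impact_factor = citations_to_recent_articles / articles_published
print(impact_factor)  # -> 5.0, i.e., five citations per article on average
```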

In dynamic fields like medicine or natural sciences, a high Impact Factor is often considered a sign of quality and influence. However, the Impact Factor has its limitations. In specialized fields where fewer articles are published and cited, the value is often lower, even if the journal’s research is high-quality.

There are also cases where the Impact Factor is artificially inflated through “citation cartels.” In these cases, researchers frequently cite each other’s work within the same journal to boost its Impact Factor. The open-access publisher MDPI, for instance, has been criticized for high citation rates driven by internal citations. If you encounter unusually high citation counts in such a journal, it’s worth taking a closer look at the citation sources.

4. Timeliness of a Source

The timeliness of your sources is critical, especially in rapidly evolving fields like computer science. New findings and technologies can quickly render older studies less relevant. To ensure your sources are up-to-date, aim to use materials that are no more than 3-5 years old when presenting the current state of research. Of course, when addressing groundbreaking studies or theories, older sources are indispensable.

Using outdated sources in your introduction or current state of research not only weakens your arguments but can also lead to relying on obsolete approaches. This is especially important for empirical studies: an experiment conducted in the 1980s might yield entirely different results if reevaluated with modern methods. Current literature reviews and systematic reviews provide a comprehensive overview of the state of research and help you weed out outdated sources.

5. Open Access and Paid Access

Scientific articles are not always freely accessible. There are two main ways to publish academic articles:

  • Open Access: These articles are freely accessible and cost-free, often available through platforms like PubMed, DOAJ, or directly on journal websites. The advantage is that they’re immediately and freely available—ideal for students and researchers without access to expensive databases. Many universities are increasingly promoting Open Access to facilitate access to research.
  • Paid Access: Articles in high-ranking journals, particularly Q1 journals, often require payment. These articles are usually behind a paywall and may require a per-article fee or access via a university subscription. Many institutions provide access to such articles through databases like Elsevier, Springer, or JSTOR, allowing students to access them at no additional cost.

Bonus: Journal Rankings

The simplest and cross-disciplinary ranking system for finding good academic papers is the quartile classification (Q1 to Q4). This helps you compare journals within their respective fields. Q1 journals (the top 25%) are among the most cited and respected publications.

In each discipline, there are also specialized journal rankings to guide you. Let’s take business administration as an example. Here, you’ll find specific rankings that help you identify high-quality journals and assess their reputation. These rankings are invaluable when it comes to finding good academic papers.

  • FT50 – Financial Times 50 Ranking: This internationally recognized ranking is used by many MBA programs to assess the quality of research in business-related fields. The journals listed here are the best across all subfields of business administration, from marketing and management to human resources.
  • UT Dallas List: This list is even stricter, including only 24 of the world’s leading business journals. Journals on this list place the highest value on academic quality and scientific rigor. Citing articles from these journals demonstrates that you’ve engaged with the very best academic literature.

The 10 Best Books on Artificial Intelligence (AI)


Artificial Intelligence is no longer just science fiction. It’s already part of our daily lives and will reshape the world even more in the coming years. Whether it’s jobs, education, or ethical questions, the 10 best books on AI below will help you understand this transformative technology—and if you want to help shape the future, you can’t afford to ignore it.

But don’t worry: you don’t need to be a computer scientist to understand how AI works and what opportunities and risks it presents.

In this article, I’ll introduce you to 10 books that not only explain the technological foundations of AI but also explore how this technology could impact our lives in the years and decades to come. By the end, you’ll be well-prepared for any conversation about AI.

1. A Brief History of Intelligence: Why the Evolution of the Brain Holds the Key to the Future of AI by Max Solomon Bennett

In A Brief History of Intelligence, Max Solomon Bennett takes you on a journey through the evolution of the human brain, explaining its connection to artificial intelligence. The book starts with the origins of our cognitive abilities, from primitive nervous systems to complex human thinking.

Bennett shows that developing AI isn’t about replicating the human brain but drawing inspiration from evolution. You’ll learn why the brain is so efficient and how AI can learn from it. At the same time, the book raises the question of what makes us uniquely human and whether AI will ever truly match us.

2. Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark

In Life 3.0, Max Tegmark explores a future where AI doesn’t just create tools but thinks and acts independently. The title refers to three evolutionary stages of life:

  • Life 1.0 is biological, like bacteria adapting to their environment.
  • Life 2.0 is us, humans, who evolve culture and knowledge.
  • Life 3.0 describes beings or machines capable of designing their own hardware and software.

Tegmark focuses on the societal, ethical, and political challenges of this third stage. What happens when AI surpasses human intelligence? Who controls these technologies, and how can we ensure they act for humanity’s benefit?

The book is unique in that Tegmark doesn’t spread fear but develops concrete scenarios. Imagine AI not only boosting productivity but also solving global problems like climate change. At the same time, he warns that the same technology could become dangerous in the wrong hands.

To be honest, Tegmark’s book sketches a pretty far-out future, imagining AI hundreds of years ahead. Still, it’s a fascinating thought experiment.


3. Klara and the Sun by Kazuo Ishiguro

Kazuo Ishiguro’s novel Klara and the Sun tells the story of Klara, a robot companion for a young girl. But Klara is more than just a robot. She observes, learns, and develops a surprising understanding of the people around her. At the same time, the question always lingers: Is this genuine empathy, or just a perfect simulation?

Through Klara’s eyes, we experience a world where the boundaries between humans and machines blur. The book’s strength lies in the questions it raises: Can machines truly develop empathy? What separates a robot like Klara from a human? And what happens when people form emotional bonds with machines?

Ishiguro avoids technical explanations, focusing instead on the intimate questions of human-AI interaction.

4. Nexus by Yuval Noah Harari

Yuval Noah Harari is known for tackling the big questions of our time, and Nexus is no exception. In this book, he explores how information networks have shaped societies from the Stone Age to the modern era. Harari examines the social, political, and ethical challenges posed by technologies like AI.

A central theme of the book is the power of big data and how it could change our understanding of freedom. Harari describes how AI can predict—and potentially manipulate—our decisions. He asks: What remains of human autonomy when machines understand us better than we understand ourselves?

Nexus is not a technical book but a historical-philosophical look at the challenges and opportunities awaiting us in an AI-driven world. If you liked Harari’s previous books, you won’t be disappointed by this one.


5. The Worlds I See: Curiosity, Exploration and Discovery at the Dawn of AI by Fei-Fei Li

In her autobiography, Fei-Fei Li shares her inspiring journey. Growing up in China, she moved to the U.S. as a teenager and climbed to the top of a male-dominated scientific field.

Her unique perspective as a woman from China makes this book stand out. Her work on the ImageNet project laid the foundation for many AI applications, from facial recognition to autonomous vehicles. But her book goes far beyond technical achievements.

Li addresses big questions: How can we ensure AI is not just efficient but also ethical? And why is it so important for people from diverse backgrounds to shape this technology?

6. The Sciences of the Artificial by Herbert Simon

Herbert Simon, a pioneer of artificial intelligence, offers a classic exploration of human-made artifacts in The Sciences of the Artificial. He explains that everything humans create—from simple tools to modern software—is designed to solve specific problems.

Simon introduces the concept of “bounded rationality,” describing how our decisions are often limited by incomplete information and resources. AI, he argues, can help overcome these limitations and enable better solutions.

7. Deep Utopia: Life and Meaning in a Solved World by Nick Bostrom

In Deep Utopia, Nick Bostrom asks: What happens when humanity’s biggest problems are solved? Imagine a world without climate change, disease, or poverty. Sounds perfect, right? But Bostrom shows that even a utopian world raises new questions: Where do we find meaning when all major challenges are gone?

The book combines philosophy and technology, encouraging us to think about the long-term consequences of AI—not just what it can solve but the new dilemmas it might create.

8. Co-Intelligence: Living and Working with AI by Ethan Mollick

Ethan Mollick explores how humans and AI can collaborate successfully—not as competitors but as partners. The book highlights how AI is revolutionizing work, from data analysis to creative processes.

What makes this book stand out as one of the 10 Best Books on AI are its practical examples. You’ll learn how AI tools can improve your workflows, whether it’s project management, data analysis, or creative tasks. At the same time, Mollick cautions against seeing AI as a cure-all, emphasizing the importance of critical thinking and human creativity.


9. AI 2041: Ten Visions for Our Future by Kai-Fu Lee and Chen Qiufan

This book combines science and fiction to illustrate how AI could transform our world. The authors present ten future scenarios based on real technological developments.

What makes AI 2041 special is its blend of well-researched facts and imaginative storytelling. Each story is paired with an analysis explaining how realistic the scenario is and the technologies that could make it possible.

10. Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World by Mo Gawdat

Former Google X executive Mo Gawdat argues in Scary Smart that AI development is not just a technical issue but a human responsibility. He explains how AI is evolving rapidly and autonomously—a development with great potential but also significant risks.

What sets this book apart, earning its place among the 10 Best Books on AI, is Gawdat’s focus on morality. He emphasizes that AI reflects the values we instill in it, making it our responsibility to define those values consciously. Scary Smart is not a technical manual but a call to take responsibility and actively shape the future of AI development.


Inductive Coding in Qualitative Research (Full Tutorial)

Have you chosen a qualitative method for your research and now face the challenge of creating your first codes, categories, or themes through inductive coding?

And what does that even mean?

In this article, I will walk you through the entire process of inductive coding using a step-by-step example.

At the end of this tutorial, you will have everything you need to start coding your own qualitative data.

Inductive Coding in Qualitative Research

Inductive coding is a specific technique in qualitative research. Whether you follow the recommendations of thematic analysis, content analysis, or grounded theory—all these approaches involve some form of inductive coding.

However, if you read a methods book for the first time, you might be confused about how to actually do it.

So, let’s do it together.

The Process of Inductive Coding (Example)

For our example, let’s assume you are working on a thesis about “Collaboration Using Virtual Reality in the Workplace.” During the COVID-19 pandemic, a company sent VR headsets to 10 employees and held weekly team meetings in a VR app.

You are now accompanying this study by interviewing the 10 employees about their experiences as part of your thesis.

It is important to lay the right foundation for analyzing your interviews before you even conduct them.

This means that you have a broad research objective or a more concrete research question in mind before you start your interviews.

The good thing about qualitative research is that it’s often very exploratory, looking at new and emerging topics and phenomena. This fits well with an inductive analysis, which means you do not start with a strong theoretical framing to guide your analysis.

For mainly inductive qualitative research, you therefore need a slightly broader research question and can start without a specific theory in mind! For your interview questions, this means that they are very open, and you lead the interview to where it gets interesting, rather than structuring your questions strictly according to a theory you read about.

A suitable overarching question for our example would be, “How do knowledge workers collaborate on a team-level when using a virtual reality application?”

You can get more specific if you think this question has been addressed multiple times in previous research, but for simplicity, we’ll stick with this research question for the sake of this article.

In your literature review, you aim to become an expert in this area and check if you find helpful papers that you could build on to solve a more narrow problem that previous research has not tackled.


Deductive Approach

A deductive approach would look quite different. Suppose the company’s employees work with heavy machinery and already need to concentrate a lot. Here, you could use a theory like the “Cognitive Load Theory” to design your interview questions and guide your analysis. The theory provides specific dimensions to structure your study. These are, if you will, pre-made categories into which you sort your data, i.e., the interview quotes.

Your interview data analysis then follows a deductive approach, based on the predetermined theoretical framework.

But now let’s see how we can create codes from scratch, in a bottom-up, inductive fashion.

Inductive Coding

Inductive coding means that your codes emerge (inductively) from the material itself.

Codes are just labels that summarize a bunch of similar data. So if 3 of the employees talk about a similar issue they encountered, you give these parts in your interview transcript the same code, like “being overwhelmed by the functionalities of the virtual meeting room”.

The goal is to reduce or summarize all your material, in our example, all 10 interviews, to the essentials.

This means that you want to end up with a list of codes that are representative of your entire dataset in relation to your research objective. If someone looks at that list, they know exactly what the interviewees experienced when collaborating in VR.

This also means that, if something that people said is not relevant to team collaboration in VR, you don’t need to code it.

To make it a little easier, you can follow these 5 steps to build your first inductive codes.

5 Steps of Inductive Coding

  1. Determine the Unit of Analysis: In our example, this would be each complete statement of an employee about their VR collaboration experience.
  2. Paraphrase the Statements: This means cleaning up the statements from unnecessary details and writing them down clearly.
    • In our example, it could look like this: From “I often had problems with dizziness during fast movements in our VR meeting,” it becomes “dizziness during fast movements.”
  3. Set the Level of Abstraction: Be aware of how far you need to go from your material to a code, which may consist of only two or three words. It usually makes sense to perform two so-called reductions, for example, from a whole paraphrased sentence to a shorter code. The level of abstraction is then raised later in your analysis. After you have a list of maybe 50 initial codes or so, you can further summarize them and make them more abstract. Then you end up with 6 or 7 categories or themes, which are more abstract than your initial codes. How this abstraction works depends on the approach you use. While the first step, the initial list of codes is pretty similar in all qualitative methods that involve inductive coding, the steps that follow can be quite different. Please watch my method-specific tutorials on thematic analysis, grounded theory and so on, if you want to learn more.
  4. Summarize the Statements into Codes: In inductive coding, it’s important to go through the statements one by one and assign each one to a code. If the next statement is “I had some difficulties when I was trying to take notes with the VR controller”, you check if this fits into the existing code “dizziness during fast movements”. If not, you create a new one, like “difficulties with handling the hardware.” (A small sketch of this bookkeeping follows after this list.)
  5. Review: Your list of codes gradually forms. At first, it makes sense to create more different codes rather than fewer. If you find your list contains 57 codes and many are similar, you can perform another summarization step and just merge those that are very similar. Reviewing means going back to the original material and comparing it with your list of codes. Does the list of codes appropriately reflect what the employees said?
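
If you like to keep this bookkeeping digital, a plain dictionary is enough for a first pass. Here is a toy Python sketch of step 4; the code labels and quotes are taken from the example above, and the helper function is my own invention:

```python
# Toy code book: map each code to the interview quotes it summarizes.
codebook = {
    "dizziness during fast movements": [
        "I often had problems with dizziness during fast movements "
        "in our VR meeting.",
    ],
}

def assign(quote, code, codebook):
    """Assign a quote to an existing code, or open a new code."""
    codebook.setdefault(code, []).append(quote)

# The next statement doesn't fit the existing code, so a new one is created:
assign(
    "I had some difficulties when I was trying to take notes "
    "with the VR controller.",
    "difficulties with handling the hardware",
    codebook,
)

for code, quotes in codebook.items():
    print(f"{code}: {len(quotes)} quote(s)")
```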

Common Pitfalls in Inductive Coding

I often observe that the guidance from methods books, especially on inductive coding, is taken too dogmatically. Students often fear that deviating from the guidelines could be “wrong”.

This caution is understandable, but if you reach a point with your data where the next step a methods book suggests doesn’t work for you, it’s up to you as a researcher to make an independent methodological decision, do it differently, and justify it in your methods section.

You can and should deviate from the plan if necessary. Qualitative methods are not standard instruments that always look the same. They must be adapted to the specific material and tailored to the specific research question.

As long as you proceed systematically, justify your decisions, and describe them precisely, everything will be fine.


How Does an AI Detector Work and Can You Outsmart It?

Did you use a little AI help in writing your academic paper? Watch out, because an AI detector could flag your work.

More and more universities are using AI detectors to find out whether you’ve secretly involved AI tools like ChatGPT, JenniAI, or Paperpal in your writing process.

Sometimes, though, these detectors flag your work even if you didn’t use any AI at all!

No need to panic just yet and fear handing in your paper. In this article, I’ll show you 7 secret tips to help you avoid triggering an AI detector and make your texts look more human.

What is an AI Detector and How Does it Work?

More and more students are using AI tools like ChatGPT to help them with their writing.

Sure, universities could just stop using papers as a form of assessment. But since universities aren’t too keen on changing their exam formats, and papers are actually useful for learning how to do academic work, another solution is needed.

Enter AI detectors, which try to figure out whether parts of your paper were written by an AI or if you toiled over them yourself. Naturally, universities are also jumping on this bandwagon and using these tools to assess academic work.

These detectors use algorithms to check if your paper shows patterns typical of AI-generated texts.

AI models tend to use certain sentence structures and phrases that people wouldn’t normally use. Detectors also check the logic and flow: AI outputs are often just too perfectly structured, and everything feels a bit “too smooth.”

Let’s say your text stands out in one of these areas. Then the AI detector goes off. And that could make your supervisor suspicious.

At my university, for example, AI detectors aren’t allowed to be used as “proof” of an attempt to cheat.

Still, every submission goes through Turnitin, a plagiarism detection software that now also includes an AI detector. As a supervisor, I then get a score that indicates how likely it is that AI was used. What I do with that information is up to me.

Unfortunately, some detectors flag texts even if no AI was used.

So, it’d be helpful to know how to make sure your academic work doesn’t even raise suspicion in the first place.

And here’s how you can do that.

7 Secret Tips Against AI Detectors


#1 No Copy & Paste

Sounds obvious, but trust me—I’ve seen it all.

Don’t just copy texts directly from an AI tool and paste them into your work!

Sure, it’s super tempting to use copy & paste and have a chapter written in seconds.

Don’t do it.

What universities will do more and more is simply ask you about a specific part of your paper. If you don’t know the sources you’ve cited or can’t answer a simple question about “your” text, your grade will tank fast.

So feel free to use AI to get creative and generate ideas, but always rewrite the text yourself. What also works well is the reverse: having AI paraphrase your self-written text to improve grammar and sentence structure.

That brings us to tip number two.

#2 Use Synonyms and Change Sentence Structure

Let’s say you’ve hit a writing block and just can’t move forward. So you let AI inspire you.

It happens to the best of us. Oops.

To avoid having AI-generated text sneak into your paper, you should at least change the sentence structure and use synonyms.

AI detectors can recognize sentence patterns commonly found in AI-generated texts. So, if you completely restructure your sentences, you make it harder for the AI detector to identify these patterns.

You can practice this by creating multiple variations of a sentence. Practice makes perfect here too. At first, it might be difficult, but eventually, you’ll be able to rewrite sentences quickly and easily.

After rewriting several sentences, it’s worth reading through the text to make sure everything feels coherent.


#3 Make Your Text More Human

As mentioned before, AI-generated texts often sound too good to be true. This is largely because AI uses very formal and precise language. That’s why AI detectors flag texts that are too smooth and flawless. Avoid this by trying to use a more human tone and vocabulary.

The beauty of academic writing doesn’t come from perfection but from originality. An AI can’t achieve that because it always uses the word that’s most likely to fit next.

Riemer and Peter (2024) from the University of Sydney call them “style engines.” This means generative AI is very good at mimicking a style—and that’s exactly the problem. True originality can’t come from that.

By incorporating the unpredictable into your text, you make it original and prevent an AI detector from being triggered.

#4 Keep “Higher-Level Thinking” to Yourself

AI tools often cram a lot of facts into a short section and sometimes sound generic because of it. This is another reason why AI detectors flag texts.

So avoid overloading your text with too many facts. Instead of just listing the bare facts, keep it brief. An academic paper isn’t a Wikipedia article; it’s an argument that unfolds gradually.

For example, you could include a theoretical perspective to look at a topic from a new angle.

An AI would only come up with that if you fed it the idea. So as long as higher-level thinking remains your responsibility, you’re on safe ground.

Let’s say you’re writing a paper on digital transformation in the service sector.

You could just describe the topic and the related literature. An AI could do that too—so don’t expect a top grade here.

But if you come up with the original idea to analyze your topic through the lens of French pragmatism, like Boltanski and Thévenot (1991), then you’re about to create an original piece of work.

Meanwhile, your classmates might use ChatGPT to churn out a paper in 30 minutes and spend the rest of the day watching Netflix.

But who do you think will know more after graduation?

If you dig into Boltanski and Thévenot (1991) and use your paper as a chance to grow intellectually, you’ve already won.

It’s about resisting the quick AI solution and investing in work that truly helps you move forward.

#5 Avoid Low-Quality Sources

Sure, you can use ChatGPT or other AI tools for research. For example, the tool Consensus is super helpful for finding suitable sources.

However, you shouldn’t just blindly trust the information. AI tools often give useful summaries and explanations, but they don’t rely on primary scientific sources. To ensure the facts are correct, cross-check the AI’s info with other sources. Use reliable sources like books, scholarly articles, or databases.

At the same time, AI might give you a source that actually exists but is from an MDPI-published journal. These are often poorly peer-reviewed and therefore highly questionable.

I would never cite such an article, and I’d grade a paper relying on such sources more harshly.

For you, this means you need to develop the ability to differentiate between good and bad sources. AI can’t do this—yet—and it’s a risk for the quality of your academic work!

#6 Know the Difference Between Support and Plagiarism

In my opinion, AI is here to stay. Learning to use tools like ChatGPT properly will be an essential skill in the future job market. That’s why I don’t think you should avoid using AI entirely in your studies.

Instead, you should start using these tools right now—just in a smart way.

Many universities agree and allow AI use, but you must be transparent about how and to what extent you used it.

It’s perfectly fine to use AI tools as a support—even for academic writing. See AI as your creative assistant, helping you develop your ideas and structure your thoughts—not as a tool that writes your entire paper for you.

I’ve already made a detailed video on AI and plagiarism, which you can find linked here.

However, AI detectors work differently than plagiarism scanners. If you use AI to paraphrase, the plagiarism scanner won’t go off, but the AI detector likely will.

So, if you want to use AI for paraphrasing or spell check, just get your supervisor’s approval. Then, write a statement disclosing this in your affidavit at the end of your paper, and you won’t have to worry about AI detectors again.

Of course, this only works if your university’s exam regulations don’t explicitly prohibit AI use. So check your university’s current AI policy beforehand.

#7 Use an AI Detector Yourself

A final tip: Before submitting your paper, run it through various AI detectors or plagiarism scanners. There are several online tools now that can detect if your text might be flagged as AI-generated.

You can test an AI detector yourself and play around with it.

If you want to try it out, for example, you can use Quillbot’s free AI detector: https://quillbot.com/ai-content-detector.

Test your own text, the AI-generated text, and something in between. You’ll be able to spot patterns and see how changes affect the score.

This knowledge will help you when writing your academic paper and applying the previous 6 tips!

Conclusion

AI detectors have become really good at spotting patterns in AI-generated texts. But they’re not infallible.

You could call this an “arms race”: AI detectors and AI tools constantly push each other forward, with one always trying to stay a step ahead of the other.

This is why no student will fail an exam solely because of an AI detector. Sure, plagiarism can be definitively proven, as that’s relatively easy to verify.

That said, this doesn’t apply to AI-generated content. There will always be some doubt. Someone could simply have a writing style that’s a lot like a generative AI tool’s. There’s no surefire way to prove a text was generated by AI.

But I can’t stress this enough: if you use AI, don’t shut off your own thinking.

Instead, think of AI as a tool that makes things easier, giving you more space for genuine creative thought. But you really have to use that space—and not waste the time you save on something else. Only then will AI help you study more effectively than people could 10 years ago, allowing you to produce truly original work.


Bloom’s Taxonomy: The Secret Formula for Top Grades!

Have you ever wondered why some students seem to ace every exam with an A, while others, despite intense studying, barely scrape by with a C? The secret might just lie in how they use Bloom’s Taxonomy to approach their studying.

The key to successful learning isn’t just how much time you spend with your books but how smartly you use that time.

And this is where Bloom’s Taxonomy comes in—a super useful tool that teachers use – but if you know it too, it can help you improve your study strategies and boost your grades.

Basics of Bloom’s Taxonomy

Let’s start with the basics. Bloom’s Taxonomy was developed in 1956 by Benjamin Bloom and his colleagues.

Their goal was to create a classification of learning objectives that covers different levels of thinking. In its revised form, this classification consists of six levels: Remembering, Understanding, Applying, Analyzing, Evaluating, and Creating.

Originally, the taxonomy was designed for educators to help them clearly define learning goals and assess student progress.

Nowadays, modern exam software and learning management systems (LMS) are increasingly incorporating features to sort and analyze questions according to the different levels of Bloom’s Taxonomy.

Great, but why should this matter to you?

In many university exams, questions are designed to cover a range of cognitive skills, as described in Bloom’s Taxonomy.

By understanding these different levels, you can better prepare for the various types of questions you’ll face in your exams. You’ll know exactly what’s expected to score full points when you see a particular keyword in the question.

The number of points awarded typically depends on the type of question. A question that tests factual knowledge (like multiple-choice) will usually be worth fewer points than one that asks you to apply knowledge (like a case study).

And if you take a closer look at the taxonomy, it becomes clear why you didn’t get that top grade in your last exam, even though you spent hours memorizing the entire script!

For example, if you only memorized facts, you’ve only covered the lower levels of the taxonomy. In a task like, “Describe the basic principles of…,” you’re only asked for knowledge.

But when a question says, “Apply the principles of… to example X and explain…,” then you’re dealing with higher levels of the taxonomy, and simply recalling facts won’t cut it.

When you see keywords like “describe, explain, apply,” you’ll know how profs structure their exams, what the expectations for top marks are, and you can tailor your exam prep and study techniques accordingly.

The Six Levels of Bloom’s Taxonomy

1. Remember

The first level is about recalling facts and basic information. In exams, these questions are often multiple-choice or short-answer questions that test simple factual knowledge. They check whether you’ve memorized basic info. You’ll need this as the foundation for deeper questions and analysis. To prepare for these types of questions, flashcards are an effective tool. Regular repetition is also crucial—schedule fixed times in your study plan to revisit and solidify what you’ve learned.

Example exam questions:

  • Define the term “photosynthesis.”
  • Name the four basic principles of bioethics according to Beauchamp and Childress.

2. Understand

The next level is understanding. Here, it’s about grasping the meaning of information and being able to explain it in your own words. Exam questions might ask you to explain concepts or clarify the significance of theories. To prep for understanding questions, discuss concepts with your classmates. Explain the concepts to each other in your own words. This deepens your understanding and helps clear up any confusion. Paraphrasing is also helpful—try summarizing complex texts in your own words. Creating concept maps or mind maps that show the relationships between different ideas can also help. This visual representation helps you grasp the bigger picture and understand how everything fits together.

Example exam questions:

  • Explain how photosynthesis works in your own words.
  • Explain the difference between microeconomic and macroeconomic models.

3. Apply

These questions test whether you can apply your theoretical knowledge in practical situations. They might ask you to apply theories and concepts to real-world problems, often using case studies or practical tasks. To prepare for application questions, regularly work on practice problems that challenge you to apply what you’ve learned in new contexts. Or look for case studies that deal with similar problems as those discussed in class and practice analyzing them.

Example exam questions:

  • Use a SWOT analysis to assess the strengths and weaknesses of a real company of your choice.
  • Apply the concept of Nash equilibrium to analyze the strategic behavior of two competing firms.

4. Analyze

Analyzing involves breaking down information and understanding the relationships between the parts. These questions often require in-depth analysis of texts, data, or theories. They test your critical thinking and ability to dissect complex information. You can practice this by reading academic papers and understanding their argument structures. Or look at how data and statistics are analyzed and interpreted. You could also create argument chains to sharpen your analytical skills.

Example exam questions:

  • Analyze the argument structure in Kant’s Critique of Pure Reason and evaluate the validity of his conclusions.
  • Compare the various theories of personality development.

5. Evaluate

Evaluating means making judgments about the value and quality of information or methods. Exam questions might ask you to compare and assess different theories or models. Prepare for this by writing critical essays where you compare different theories or models. You can also create evaluation rubrics to assess your own work and that of your peers. Through peer review processes, you can evaluate others’ work and provide feedback.

Example exam questions:

  • Evaluate the effectiveness of the European Central Bank’s current monetary policy in the context of post-COVID-19 economic recovery.
  • Critique the methodology and conclusions of the study on the effectiveness of online learning compared to in-person instruction.

6. Create

The highest level, creating, involves combining elements to develop something new and original. Exam questions at this level might ask you to formulate hypotheses or develop creative solutions to problems. Use techniques like brainstorming or mind mapping to develop new ideas. You could even participate in projects to work on your creative skills.

Example exam questions:

  • Develop a research plan to study the long-term effects of microplastics on marine ecosystems.
  • Design an innovative business model for a start-up.

Exam Prep with Bloom’s Taxonomy

So, how can you effectively use Bloom’s Taxonomy for your exam prep?

First, it helps you clearly and systematically define your learning goals. For example, when preparing for an exam, you can organize your study objectives according to the six levels of Bloom’s Taxonomy.

Start with memorizing basic facts (Remembering), then work your way through understanding the concepts (Understanding), and apply what you’ve learned in practice problems (Applying). Next, analyze complex problems (Analyzing), evaluate different solutions (Evaluating), and finally, develop new ideas or projects (Creating). This approach makes your studying more efficient and prepares you perfectly for exams.

Check out my YouTube channel for tutorials on different study techniques. Match them to the levels of the taxonomy: Spaced repetition for remembering. Active recall for remembering and understanding. Inquiry-based learning for analyzing and evaluating. The Feynman technique for applying. Design thinking for creating, and so on.

By using Bloom’s Taxonomy, you can target your preparation for different types of exam questions and optimize your study strategies.

This structured approach not only leads to better grades but also a deeper understanding and higher competence in your field.

By systematically working through this process, you’ll be fully prepared for exams, ace them, and be able to use your knowledge flexibly afterward.

Conclusion – Bloom’s Taxonomy

Bloom’s Taxonomy shows that deep and lasting learning involves multiple levels that go beyond just memorizing information.

In short, a deep understanding and the ability to apply and evaluate knowledge lead to better exam performance and top grades.

It’s about mastering knowledge and being able to use it flexibly, rather than just memorizing it temporarily.

Of course, there are exceptions. Some exams mainly test factual knowledge. Medical students in their first semester might know this all too well. In these cases, exams are 90% multiple-choice, and they “cross off” answers like there’s no tomorrow.

But now you have the ability to mentally run any kind of exam through the lens of Bloom’s Taxonomy and prepare yourself laser-focused based on that.

This puts you ahead of 99% of others.


Social Network Analysis (Introduction & Tutorial)


What is a Social Network Analysis? You’ve probably seen those colorful network graphs in newspaper articles or scientific papers. They look like a lot of work to create, right? Or maybe not?

Actually, you can conduct such an analysis without extensive programming knowledge or expensive software.

If you want to know how to do it – then you should sharpen your pencil and take notes.

In this article, I will explain everything about Social Network Analysis – where it comes from, what it’s good for, and how you can apply it. I will cover these five areas:

  1. Network Theory
  2. Applications of Social Network Analysis
  3. Data Collection
  4. Data Analysis and Visualization
  5. Overview of the Best Software Tools

By the end of this article, you’ll have all the links and further information you need to conduct your first Social Network Analysis.

#1 Network Theory

To understand Social Network Analysis, we first need to be aware of its theoretical basis: network theory. This theory comes from mathematical graph theory.

Network theory deals with the relationships between specific objects. In the context of Social Network Analysis, these objects are usually social actors. These relationships and objects are represented using a graph, i.e., a diagram in which two or more objects (points) are connected by lines.

Nodes and Edges

In the vocabulary of Social Network Analysis, an object is called a node (or vertex). The relationship between two or more nodes is represented by edges. These are the lines between the nodes.

A relationship can be either undirected or directed. Let’s imagine our network represents the relationships between Instagram accounts of famous politicians. The nodes are the people, and the edges are the follower relationships.

If Kamala Harris follows Donald Trump, but he does not follow her back, there is a directed edge from Kamala Harris to Donald Trump, usually shown with an arrow. Kamala Harris is the starting node and Donald Trump is the ending node.

If Donald Trump also follows Joe Biden, but not Kamala Harris, Donald Trump is an adjacent node to both Kamala Harris and Joe Biden. However, Joe Biden is not an adjacent node to Kamala Harris.
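
If you want to play with this example yourself, you can rebuild it in a few lines with Python’s networkx library (one option besides the R and Gephi tools discussed later in this article); the follower relationships are the hypothetical ones from above:

```python
import networkx as nx

# Directed graph: an edge points from the follower to the followed account.
G = nx.DiGraph()
G.add_edge("Kamala Harris", "Donald Trump")  # Harris follows Trump
G.add_edge("Donald Trump", "Joe Biden")      # Trump follows Biden

print(list(G.successors("Donald Trump")))    # -> ['Joe Biden']
print(list(G.predecessors("Donald Trump")))  # -> ['Kamala Harris']
```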

Centrality Measures

When facing a larger network, you might want to know certain properties of individual nodes or determine which nodes are particularly important or play a specific role in the network.

For this, you can calculate various centrality measures.


Density

The density measure describes a characteristic of the entire network. It indicates how many edges there are in the network relative to the maximum possible number of edges.

For example, it shows how many users in our group of politicians are connected with each other compared to a scenario where everyone is connected with everyone. If all nodes are connected, the density is 1 or 100%. So, you always get a value between 0 and 1 for density.
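
Continuing the hypothetical networkx example, density is a single function call; for a directed graph it divides the number of existing edges by n * (n - 1), the maximum possible:

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Kamala Harris", "Donald Trump"),
    ("Donald Trump", "Joe Biden"),
])

# 2 edges out of 3 * 2 = 6 possible directed edges
print(nx.density(G))  # -> 0.3333...
```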

Degree Centrality

Now let’s look at centrality measures. They do not describe properties of the whole network but of single nodes.

This measure indicates how many edges a node has. If Kamala Harris has 9 follower relationships (regardless of direction), the degree of her node is 9.

For directed graphs, we distinguish between incoming edges (in-degree) and outgoing edges (out-degree).

Closeness Centrality

This measure indicates the average length of the shortest path between a node and all other nodes. It shows how central a node is within the entire network.

For example, how many contacts must Kamala Harris go through on average to reach certain politicians? The fewer, the more central she is in the network.

Betweenness Centrality

This measure indicates how often a node lies on the shortest path between two other nodes. Nodes with high betweenness centrality often lie between two or more clusters of nodes, essentially forming a bridge between them.

Eigenvector Centrality

This measure indicates how important the neighbors of a node are. The more important the neighbors, the higher the value.

The best example of this measure is Google’s PageRank algorithm. It follows the rule that a web page is ranked higher in search results the more other important pages link to it.

So, if I have a blog post on my website and it is linked by major sites like CNN, BBC, and Forbes, that’s better than if it is linked by two local newspapers and an unknown blogger.
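
All of these centrality measures are built into networkx. A minimal sketch on the hypothetical follower graph (the extra edges are invented so the graph is connected; PageRank is included as the eigenvector-style variant):

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("Kamala Harris", "Donald Trump"),
    ("Donald Trump", "Joe Biden"),
    ("Joe Biden", "Kamala Harris"),
    ("Joe Biden", "Donald Trump"),
])

print(G.in_degree("Donald Trump"))   # in-degree: 2 incoming edges
print(G.out_degree("Donald Trump"))  # out-degree: 1 outgoing edge

print(nx.closeness_centrality(G))    # closeness score per node
print(nx.betweenness_centrality(G))  # how often a node bridges shortest paths
print(nx.eigenvector_centrality(G, max_iter=1000))  # importance of neighbors
print(nx.pagerank(G))                # Google-style PageRank scores
```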

#2 Applications of Social Network Analysis

Social Network Analysis has two main applications. The first is in academic research.

Social Network Analysis in Research

Theoretically, every discipline within the social sciences can use Social Network Analysis. But it goes beyond that. For example, you can also analyze and visualize citation relationships between papers, universities, and scientists.

Figure: Citation network from Stieglitz et al. (2018)


Most commonly, you’ll find Social Network Analyses in political science, communication studies, and sociology.

Social Network Analysis in Journalism

The second main application is journalism. In data-driven and investigative reporting, network analyses can, for example, make the connections between actors visible.

#3 Data Collection

The basis for any Social Network Analysis is data. For online research, this data is in most cases obtained through web scraping or an API (for example, that of a social media platform).

If you want to practice, there are plenty of datasets available online for free. You can try Google Dataset Search, Kaggle, or data.gov.

Data doesn’t always have to be collected automatically. It’s also possible to create small networks by manually entering your data into an Excel sheet or digitizing it in some other way.

For a Social Network Analysis, it is important that the data points reference each other – for example, each node gets an ID, and every connection is stored as a pair of node IDs.

Only then can you calculate centrality measures and visualize a network with software.
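
A common minimal format for this is an edge list: one row per connection, each row referencing the IDs of the two nodes involved. Here is a sketch of how such a manually maintained table, exported from Excel as a CSV file, could be loaded in R – the file and column names are made up for illustration:

```r
library(igraph)

# edges.csv (hypothetical file) might look like this:
# from,to
# 1,2
# 1,3
# 2,3
edges <- read.csv("edges.csv")

# Optionally, a second table with one row per node (ID, name, ...):
# nodes <- read.csv("nodes.csv")

g <- graph_from_data_frame(edges, directed = TRUE)  # add vertices = nodes if available
```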

#4 Data Analysis and Visualization

Now we come to the analysis. The two most common tools for conducting a Social Network Analysis are R and Gephi.

Both programs can be downloaded and used for free. With R, you’ll need some time to get used to it, as you’ll need to learn or look up the programming language commands.

If you want to avoid programming languages entirely, I’d recommend Gephi. This software has a complete graphical user interface, and you can perform all sorts of tasks related to Social Network Analysis.

It still requires some time to learn Gephi, but there are great tutorials available on YouTube or you can get help in Gephi support groups on Facebook.

A Social Network Analysis with very large datasets requires quite a bit of computing power. To prevent your PC or laptop from reaching its limits and Gephi from crashing, you should filter your data beforehand or use a virtual machine.

The next steps to start your first Social Network Analysis would be:

  1. Read the foundational book on Social Network Analysis by Wasserman & Faust (1994)
  2. Get a free dataset to practice
  3. Watch YouTube tutorials on R or Gephi until you’re an expert
  4. Join Facebook groups where you can ask questions
  5. Learn by doing
  6. And don’t forget: Have fun! 🙂
Categories
Uncategorized

How to Overcome your Phone Addiction: 7 Strategies to Stay Focused

Are you trying to overcome your phone addiction but don’t know how?

In this article, I’ll share 7 strategies that have personally helped me keep my smartphone addiction under control.

This way, you can take your productivity to a new level, maintain a genuine connection with the people you love, and finally clear the fog in your mind.

1. Understand the Root Causes of Your Smartphone Addiction

How often do you look at your smartphone without really knowing why? Whether it’s during a lecture, waiting for the bus, or dining with friends – it seems we can hardly do without it. Just checking the phone quickly? And suddenly, we’re deep in the Instagram spiral or lost in an endless stream of TikTok videos. Why do we do this?

In this article, I’ll show you what’s behind our “smartphone addiction” and how we can be smarter than our smartphones. Are you ready to take back control?

Step 1 towards improvement would be to ask yourself: Why is it so hard for you to put the smartphone aside?


#1 App Development

Behind the scenes, developers know exactly what they are doing. They design apps and social media not only to capture our attention temporarily but to keep us engaged for the long term. Endless feeds and push notifications that keep bringing us back are no coincidence. They are part of a clever strategy to keep us in their apps as long as possible.

#2 Dopamine

This can also be explained on a neurobiological level: dopamine plays a key role. This neurotransmitter is released when we receive positive feedback – whether it’s a new message, a like, or another notification. And that’s exactly what keeps us glued to the screen. Imagine your own “Feel-Good Dealer” rewarding you with a sense of happiness. These little reward kicks are like candy for your brain, constantly luring you back to the screen.

#3 Instant Gratification

The phenomenon of instant gratification, a craving for immediate rewards, is another reason why it’s hard to put the smartphone away. Especially in moments of boredom, we automatically reach for the phone. It appears to be the easiest solution to fill that void. However, the instant gratification that smartphones provide can quickly become a habit that is hard to break.


#4 FOMO

Another psychological driver for constant smartphone use is the phenomenon of FOMO. Do you know the feeling? This nagging fear that you might miss out on something if you’re not constantly checking your phone? That’s the “Fear of Missing Out.”

Social networks like Instagram and TikTok amplify this fear by bombarding us with endless updates about the supposedly exciting lives of others. This constant worry about not being up-to-date drives us to compulsively check our smartphones. But the catch is: FOMO is never truly satisfied. Every new video, every new post only confirms that life goes on without us and we might be missing out even more.

2. Recognize the Negative Effects of Your Smartphone Addiction

#1 Desensitization of the Reward System

Dopamine, the substance responsible for feelings of happiness, plays a big role here. You might wonder: what’s bad about that? Well, like many things in life, too much dopamine can be problematic.

If the brain is regularly and abundantly exposed to dopamine, as is the case with constant smartphone use, desensitization can occur. This means that the dopamine receptors in the brain become less sensitive. So, you need stronger or more frequent stimuli to feel the same reward. This can lead to everyday pleasures and interactions becoming less satisfying in the long run.


#2 Increased Stress Levels

Too much dopamine can also lead to a permanent state of stress, since dopamine is involved in stress regulation. Your body is then constantly on alert, which can lead to anxiety, sleep problems, and even high blood pressure.

Imagine your brain as an inbox that’s constantly flooded with emails – from messages to games to app updates, the smartphone never stops pinging. It’s no wonder that our brains eventually become overwhelmed and overloaded with stimuli. This information overload can lead to brain fatigue and further increase stress levels.

#3 Impairment of Cognitive Functions

Over time, overstimulation by dopamine can negatively affect your cognitive abilities. If your brain is constantly seeking quick rewards, it can impair your concentration. Studies even show that excessive smartphone use can affect productivity. So, if you ever feel like you’re getting nothing done, banish your phone from the room.

An excess of dopamine can also make you more impulsive and make it harder to control your emotions. This can lead to friction in everyday life and negatively affect your relationships.

And that’s interesting because I’ve often noticed how smartphones change social interactions. For example, when I meet friends and my phone is on the table, I’m much less attentive. It’s a completely different conversation when I leave my smartphone in another room. Although smartphones should theoretically bring us closer together, they often lead to physical and emotional distance between us and our fellow human beings.

3. Set a Clear Goal to Combat Your Smartphone Addiction

Sure, you could try to completely abstain from digital devices, but realistically, that would be quite limiting. After all, smartphones have their good sides too: they not only keep us connected with friends and family but are also real lifesavers for studying and work. Thanks to emails, calendars, and a flood of apps, our lives stay organized and we stay informed.

It’s not about banning all digital devices but about making conscious decisions about their use. Who’s in control? Is it your phone, or are you using it as a tool that enriches your life? Ask yourself: What are my values? What do I want to achieve? Does using app XY align with these goals, or is it merely a distraction? By asking such questions, you can ensure that your technology use supports your goals rather than hindering them.


4. Discover Healthier Sources of Dopamine

While smartphones often only provide short moments of happiness (and can have negative effects), there are alternatives that release dopamine more slowly and sustainably – and these are usually much more fulfilling. Here are some suggestions for how you can increase your well-being in the long term and fight smartphone addiction:

Setting and achieving goals:

Setting goals is more than just a good intention. Each small success releases dopamine, significantly boosting your confidence. Whether you’re rocking your next term paper, achieving a sports goal, or just meditating more regularly – each step towards your goal brings you more inner satisfaction.

Spending time with friends:

Genuine interactions, such as relaxed dinners, spontaneous outings, or deep conversations, are particularly effective at fulfilling your need for human connection.

Learning new skills:

Learning a new skill challenges your brain and expands your perspective. Whether you’re learning a new language, a musical instrument, or knitting – the success of your learning process also releases dopamine. This is a meaningful alternative to the often empty entertainment that constant scrolling on the smartphone offers.

5. Implement a Digital Detox Routine

“Digital Detox” – that is, a conscious break from all digital devices – can be an effective method to reduce overstimulation and improve your mental well-being.

By temporarily abstaining from smartphones, tablets, and computers, you give your brain a chance to recover from the constant stimulus overload. This can improve concentration, enhance sleep, and reduce your stress levels.

A successful digital detox starts with small steps: Perhaps just an evening or a weekend without digital devices. It’s important to use these times consciously to engage in activities that you enjoy and that ground you – whether it’s reading a book, spending time in nature, or just enjoying the quiet.

These digital breaks can help you reflect on your own use of technology and overcome smartphone addiction.

However, digital detox phases, like a day offline or a week without social media, often do not lead to lasting changes. Consider it more of a reset. Although they provide a break, they don’t tackle the underlying habits that lead to our dependency. Without a real change in our daily routines and our attitude towards technology, we quickly fall back into old patterns.

6. Gradually Change Your Daily Habits

Create smartphone-free zones: Decide where and when the smartphone is off-limits. This could be during dinner, on a walk, or at a concert. These moments without a smartphone help you enjoy the experience more intensely.

Monitor your screen time: To reduce our dependency on smartphones, we need to set clear boundaries and make more conscious decisions about when and how we use our devices. Screen time monitoring apps can help you manage your time more consciously.

Restructure your daily routine: Rely on a real alarm clock instead of your phone to wake you up. This way, you’re not tempted to spend the first minutes of your day on the smartphone. Ideally, banish your phone from the bedroom altogether. If you want to know the time during the day, it’s best to rely on a clock.

Regulate your notifications: Turn off unnecessary push notifications and organize your apps into folders. This way, you’re less likely to automatically reach for your smartphone and can decide for yourself when you want to view messages and updates.

Remove distractions: Leave your smartphone in another room when you need to concentrate. This way, you work more productively.

Consciously use breaks: Try not to reach for your smartphone at every opportunity. Use waiting times instead to observe your surroundings or simply think. Such breaks can foster your creativity.

7. Adopt a Critical Perspective on Smartphone Use

Let’s be honest: It’s high time to rethink your relationship with your smartphone. I invite you to take a close look at how and why you use your smartphone and consider if there might be a better way.

Start with small steps and see for yourself how your life and well-being gradually improve. Are you ready to take on this challenge? Because remember – your smartphone should be a tool, not the center of your life.

Categories
Uncategorized

Study 3x More Effectively with Mind Maps (Picture Superiority Effect)


Goodbye, flashcards! It’s time to learn in a way that your brain will adore. Studying with mind maps will give you a turbo boost. In this article, I’ll reveal why mind maps are a real game-changer and provide you with 3 techniques to get started.

What are Mind Maps?

Imagine if your brain could speak and sketch its thoughts on paper. The result? A mind map! Mind maps are visual diagrams that organize ideas, words, tasks, or other concepts around a central theme.

They use colors, symbols, and connections to mimic the way our brain processes information. This method was popularized in the 1970s by psychologist Tony Buzan.

In German-speaking regions, the coach and author Vera Birkenbihl also advocated for mind maps as a learning method.

Why Studying with Mind Maps is So Effective

#1 They Mirror the Brain’s Way of Working

Why are mind maps so helpful? The answer lies in how our brain functions. Rather than the dry memorization of linear lists or long blocks of text, our brain prefers to structure information in a networked way, relating different pieces of data.

Mind maps do just that: they start with a central theme and branch out into related areas, similar to how our brain networks information. This method mirrors the natural way we think and learn, thus facilitating the understanding and retention of information.

#2 Visual Stimuli Enhance Memory

Visual stimuli play a crucial role in learning. Images, symbols, and colors are recognized and remembered by our brain faster and more easily than text. Mind maps leverage this so-called picture superiority effect by presenting complex information in a visually appealing and easily understandable form.

When studying with mind maps, integrate not only words and bullet points but also small sketches or symbols. This strategy helps you not just to process the learning material better but also to retain it long-term.


#3 Active Learning Instead of Passive Reception

Mind maps transform learning into an active experience. Instead of just absorbing information, they encourage you to actively think through the material, organize it, and convert it into a mind map.

While designing a mind map, you’ll discover how different concepts are interconnected. This process strengthens your critical thinking and your ability to organize complex information and place it into a broader context.

By visualizing connections between different topics in your mind map, you deepen your understanding and make it tangible.

To understand why active learning is far superior to passive learning, you can also check out my tutorial on Active Recall.

#4 Targeted Summarization of Complex Topics with Mind Maps

Mind maps allow you to condense complex subjects and extensive content to the essentials and structure them clearly. Instead of flipping through endless pages of notes, they enable you to quickly dive into the main ideas and key concepts. I have always preferred using mind maps when I needed an overview of a subject area or was preparing a presentation. This way, I always maintained a focus on the bigger picture and structure.

#5 Improved Time Management

Creating mind maps enhances your time management. Thanks to their visual layout, you can easily set priorities, organize your learning objectives, and keep track of your progress. Mind maps serve as visual learning plans. At a glance, you can see which areas have been covered and where there are gaps.

3 Techniques

There are many techniques to elevate your learning with mind maps to the next level. As I mentioned, British psychologist Tony Buzan popularized the mind map in the 1970s. Let’s take a look at his method.

The Buzan Technique

The Buzan technique is based on five central principles that together form the foundation of every effective mind map:

  1. Start with a Central Topic: Each mind map begins with a single concept placed in the center of your page. This could be the title of your course, the theme of a project, or simply an idea you want to explore.
  2. Branches for Main Ideas: From this central topic, you draw branches to the main points or key ideas associated with it. These branches should be large enough to later add sub-ideas.
  3. Sub-branches for Details: Each main branch can have further branches that contain details, examples, evidence, or other relevant information. The deeper you go, the more specific the information becomes.
  4. Use Keywords and Images: A single word or image can often represent a complex idea or concept. Buzan recommends using keywords and images wherever possible to make your thoughts more efficient and enhance your memory.
  5. Connections and Links: Draw lines or arrows to show connections between different parts of your mind map. This helps you see how ideas are interconnected and fosters a deeper understanding of the subject.

Additional Techniques

The original method developed by Tony Buzan has since been expanded and adapted for various purposes.

For instance, there is the Rico Cluster by Dr. Gabriele Rico. Here, you link your ideas as a network rather than starting from a central key concept. This is intended to support the normal communication between the right and left hemispheres of the brain.

The previously mentioned Vera F. Birkenbihl worked with her KAWA method. A KAWA is created when you find associations for the initial letters of your starting word. To better understand, let’s create a KAWA on the topic of Marketing.

  • M – Market Research
  • A – Audience Analysis
  • R – Rebranding
  • K – Keyword Research
  • E – Engagement in Social Media
  • T – Testimonials from Customers
  • I – Influencer Marketing
  • N – Neuromarketing
  • G – Guerrilla Marketing

This method is particularly helpful for brainstorming. Are you still looking for a topic for your next term paper? Using the KAWA method, you can generate additional ideas.

I believe that there isn’t ONE correct method. You need to find out for yourself what works best for you.

After all, each mind map reflects YOUR thought processes. The more you stick to a step-by-step guide, the more it may constrain your own thoughts. I recommend just getting started and experimenting.

Studying with Mind Maps: How to Become a Pro

Remember, your mind map is not fixed. Rather, it accompanies you during your learning process and allows you to be flexible.

As you absorb new information or come up with new ideas, whether through reading, listening, or discussing, you can gradually expand your mind map.

It’s important that your mind map remains dynamic and evolves with your understanding and thoughts. However, you should ensure that it doesn’t become too cluttered and that you maintain focus.

Whenever I wanted to learn a topic, I always liked to start with a mind map to keep an overview, and then created additional mind maps related to relevant companies or topics.

In the end, I didn’t have just one mind map, but perhaps eight different ones. These represented my natural thought processes much better than flashcards or linear notes could.

For exam preparation, you could use questions as the central point of a mind map. For example: “How can companies use the concept of ‘Customer Lifetime Value’ to build long-term customer relationships and increase revenue?”

Around this central question, a mind map can be built. This approach ensures that you’re not just memorizing a text answer, but gradually expanding your thoughts, discovering new connections, and developing a deeper understanding of the topic.

Digital Mind Map Tools vs. Old-School Mind Mapping on Paper

Okay, but should you now use digital mind map tools or create your mind map old-school with paper and pen?

Digital Tools for Creating Mind Maps

Welcome to the digital age of mind maps! With tools like MindNode, XMind, Coggle, and even ChatGPT, you can incorporate multimedia content into your mind maps. Whether it’s images, links, videos, or documents—all these can help make your mind map vibrant and informative. This is especially useful if you are a visual learner or want to use your mind map as the basis for a presentation.

Many digital mind map tools offer features for real-time collaboration. This means you and any team members can work on the same mind map simultaneously, no matter where you are.

This is a game-changer for group projects and joint learning sessions! Plus, your mind maps are less likely to get lost in a pile of papers since everything is stored digitally.


Old-School Mind Mapping

And yet… sometimes less is more. There’s something satisfying about bringing your ideas to life directly with pen on paper and visualizing your thought processes.

This method fosters creativity and can help you process information better. No distractions from notifications or dead batteries—just you and your thoughts.

Additionally, with old-school mind mapping on paper, you have complete freedom in design. You’re not confined to the structures or designs of software, and you can make your mind maps as simple or complex as you like.

It’s all about what works for you. You might even find that a mix of both—digital for university projects and paper for your personal brainstorming sessions—is the best solution for you.

Experiment a bit and find your own way to make the most out of studying with mind maps for yourself.

The goal is to feel organized and enjoy learning. So, grab your pen or tablet and give it a try.

Categories
Uncategorized

How to do a Deductive Thematic Analysis (Theory-Driven Qualitative Coding)

You want to conduct a deductive thematic analysis and categorize your qualitative data based on pre-existing theory and concepts?

You’re in the right place.

In this article, I’ll explain in detail and in an easy-to-understand manner how to perform a deductive thematic analysis in 5 simple steps.

What is Thematic Analysis?

To make sure that we are on the same page about thematic analysis, I will mainly refer to the understanding of the method described by Braun and Clarke (2006; 2019; 2021).

Please note that other authors have their own ideas of thematic analysis; however, the explications by Braun and Clarke seem to be the most useful to the research community.

It is important to understand that thematic analysis is flexible, which means you can apply it in different variations and customize it for your own needs.

The only thing you need to be careful about is that you don’t slap a label on your approach in your methods section and then do something completely different that is out of line with the methodological ideas of the authors you just cited (the “label”).

Flexibility in thematic analysis means that you can combine different approaches, such as inductive and deductive coding, within the same study. But you don’t have to. It’s up to you and what makes the most sense for achieving your research objectives.

In this tutorial, however, I am going to focus on applying thematic analysis in a theory-driven, deductive logic.

If you would like to learn more about inductive thematic analysis, please refer to my previous tutorial about this method.


What is Deductive Thematic Analysis?

Whereas inductive thematic analysis is data-driven (bottom-up), deductive thematic analysis is theory-driven (top-down).

This means that you do not develop your themes based on the statements you find in your qualitative data (e.g., interview transcripts) or the underlying patterns of meaning of these statements.

Instead, you take these statements, and classify them into a pre-existing theoretical structure.

For deductive thematic analysis, therefore, you need to think about theory before you enter your analysis.

Let’s look at the steps that you need to go through.

#1 Define your Themes

Option 1: Use pre-existing theory

For a research question that involves a specific theory, you should develop your set of pre-defined themes based on that theory. For instance, if the research question is: “What influence does remote work have on organizational identity?”, it references the “Organizational Identity Theory” (Whetten, 2006).

Theories in social sciences are usually based on the work of individual authors who have defined a specific model or the components of a theory. These can be dimensions, variables, or constructs. We take those and make them our pre-defined “themes”.

Themes

Choose the work of an author or team of authors and read the corresponding book or paper thoroughly. In our example, this would be Whetten’s 2006 paper. The author names three dimensions representing organizational identity: the ideational, definitional, and phenomenological dimensions. These dimensions would serve as excellent main themes for your study.

Sub-themes

The original source will provide further details on how these dimensions are defined, which you can use to form subthemes. It may also help to read additional literature that builds on or explains this theory. Often, primary sources are quite complex.

However, you can break down any theory into its components and logically assemble a list of themes and subthemes. A popular approach in thematic analysis is to take the main themes from the theory (in the example: ideational, definitional, and phenomenological aspects of organizational identity) and develop the subthemes inductively, based on the content you find in your data.

Option 2: Derive Themes from the Current State of Research

If your research question does not target a specific theory – for example, “How are remote work models implemented in the manufacturing industry?” – you don’t consult a single pre-existing theory but turn to various current studies and extract your themes from them upfront.

It’s best to work with a table where you create themes and subthemes on the left and note the source(s) from which you derived them on the right.

Examples of themes for this research question could include: “Technological Infrastructure”, “Corporate Culture”, “Work Time Models”, and so on.

It’s essential that these themes clearly emerge from your review of the literature. Don’t worry about overlooking something – you can always expand or adjust your list of themes after an initial analysis round if you notice certain contents are not covered or if you encounter new literature in the meantime.

#2 Create a Codebook

To guide your analysis, you can work with coding guidelines, which are often referred to as a codebook.

Coding in this context simply means classifying a piece of content or statement as part of a theme.

A codebook is particularly useful if you are not the only person coding the data. But it also gives your method more rigour as you systematize your coding process.

Three things are particularly important to consider when you create a codebook for your deductive thematic analysis:

2.1 Define the Themes

Cover this step with the aforementioned table and put it into the document that will serve as your codebook.

Add another column where you precisely define when a text segment belongs to a specific category or not.

You can use a concise description of the theme to do so. Make sure to use the references of the particular theory or the literature that you used to derive the theme.

2.2 Use Anchor Examples

For each theme or subtheme, you should insert at least one example in the codebook.

This example represents the respective theme. It could simply be a direct quote from your interviews; if you are analyzing social media content, it would be a tweet, and so on.

You might need to do some initial coding until you find a suitable example that you can put in your codebook.

2.3 Define Coding Rules

You can then add further comments that establish rules for how a coder should decide when a data segment is not clearly assignable to a theme.

This ensures that you act consistently throughout the coding process.

#3 Do the Coding

Consider using software such as NVivo or Taguette to support your coding.

This helps you to organize your analysis, especially if you have a lot of data.

Option 1: Coding alone

If you are coding alone, you can define checkpoints at certain percentages of the dataset where you pause and review your progress.

There is no strict rule for this, but I would recommend coding about 10% of your data and then stopping to check whether your codebook works.

If not, make changes to it and start over.

The next milestone could be somewhere around 50% of your data.

Check the distribution of data segments that you assigned to your themes and make adjustments to the coding rules if necessary.

Then go ahead and finalize the coding.

Option 2: Coding in a team

If you are coding in a team, the same milestones apply.

However, now you meet as a team and discuss your coding. Compare different examples and check with each other if you are all using the codebook as it was intended.

After you have finalized the coding, you may consider calculating an inter-rater reliability measure such as Cohen’s Kappa.

This gives you a statistical value that shows how strong the agreement between you and the other coders is.

You can only calculate it if the team members each code the same portion of the data independently.

For example, you could take 10% of your data and everyone codes it independently. Based on this coding, you calculate the inter-rater reliability and report the value in your methods section.
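
If you want to see what such a calculation looks like, here is a minimal sketch in base R. The two coders’ theme assignments are made up for illustration; packages like irr also offer ready-made functions for this:

```r
# Hypothetical theme assignments of two coders for the same 10 segments
coder_a <- c("A", "A", "B", "C", "B", "A", "C", "B", "A", "C")
coder_b <- c("A", "B", "B", "C", "B", "A", "C", "B", "A", "B")

# Cross-tabulate with identical category levels for both coders
levs <- union(coder_a, coder_b)
tab  <- table(factor(coder_a, levels = levs),
              factor(coder_b, levels = levs))

p_o   <- sum(diag(tab)) / sum(tab)                      # observed agreement
p_e   <- sum(rowSums(tab) * colSums(tab)) / sum(tab)^2  # agreement expected by chance
kappa <- (p_o - p_e) / (1 - p_e)                        # Cohen's Kappa
kappa
```

As a rough guideline, values above 0.6 are often interpreted as substantial agreement.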

If the inter-rater reliability is not good, you might have to go back to the codebook or redo some of the coding so that you reach better agreement within the team.

#4 Present the Findings

The next challenge is to translate this deductive thematic analysis into a structured and reader-friendly findings section.

I recommend balancing descriptive reporting of the results (e.g., raw anchor examples in the form of direct quotes from your data) with analytical interpretation in your own words.

Start with the structure by turning your list of themes into headings. Use subthemes, if you have any, as subheadings.

Then, add the quote examples and explain them in your own words.

Expand these explanations and examples with additional paraphrases that you consider important, and try to explain how the data connects to the pre-defined theme you have derived from literature or theory.

Always support arguments with paraphrases or direct quotes from your data.

Also, make sure to link the subchapters with appropriate transitions.

#5 Discuss Your Findings

What do these findings mean?

Use the discussion section of your paper, report, or thesis to connect back to the theory.

For a deductive thematic analysis, you must discuss your findings in light of the theory or literature you started with.

Writing an outstanding discussion is an art that goes beyond the scope of this tutorial – feel free to check out my tutorial on writing a discussion.

In any case, consider using tables and figures as additional tools to organize your findings and make it easier for the reader to spot your most important results.

Maybe there were very few data segments assigned to one particular theme? Or a lot to another?

Discuss what this means in regard to your research question.

If you have specific questions about thematic analysis, leave them in the comments.

If you want me to dive deeper into a particular topic, let me know in the comments as well.

Literature

📚 Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.

📚 Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597.

📚 Braun, V., & Clarke, V. (2021). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18(3), 328–352.

📚 Whetten, D. A. (2006). Albert and Whetten revisited: Strengthening the concept of organizational identity. Journal of Management Inquiry, 15(3), 219–234.