
Mixed Methods: Combining Qualitative and Quantitative Research


Are you pondering your research design and have been advised to look into mixed methods research?

Then you’ve come to the right place at the right time.

In the next few minutes, I will provide you with the basics of the mixed methods approach. You’ll learn when it makes sense to combine qualitative and quantitative data and analysis elements. Additionally, you’ll become familiar with the most common methodologies and some helpful foundational texts.

This way, you can quickly decide whether mixed methods are suitable for your work and where to continue your reading on this topic.

The Philosophical Backstory of Mixed Methods (Highly Simplified)

To understand the concept of mixed methods, it’s worth taking a brief look into the philosophy of science.

Epistemology refers to theories of knowledge and describes how researchers can derive knowledge from reality. In short: How is knowledge created?

In today’s research landscape, we can broadly distinguish between two prevailing epistemologies: Positivism and Interpretivism.

These camps have long debated which paradigm is the correct one. In specific research disciplines, such as business studies, sociology, or political science, both camps can be found.

Some disciplines are so dominated by one camp that the other is rarely recognized. The natural sciences, for example, are firmly positivist – meaning they assume there is a single objective natural world out there, best represented through numbers and mathematics.

Some social sciences, like psychology, adopt this view, while others sit somewhere in the middle. Here, different philosophical stances are accepted and different methodologies are followed. These methodologies are classically either qualitative (on the side of the interpretivists) or quantitative (on the side of the positivists).

As the social sciences have become much more pluralistic, combining these methodologies has become more common, and the advantages of “the other side” are increasingly appreciated.

Mixed methods were born!

Definition of Mixed Methods

To speak of mixed methods, a study design must include both qualitative and quantitative methods.

This means that the study design is intentionally developed with this combination in mind, and the research question can only be answered through the combination.

The choice of method or combination should always be closely linked to the research question, research goal, and the context of the research.

The first question you should ask yourself is:

What added value do mixed methods offer compared to a study design with only qualitative or only quantitative methods?

To help you with this, here are some advantages of mixed methods that you can use to justify your study design.

Advantages of Mixed Methods

#1 Mixed Methods can simultaneously answer open and closed research questions

What does that mean?

Through the qualitative part of the study design, you can explore and answer an open research question (e.g., How does an infodemic, a spread of misinformation, propagate on social media?).

With the quantitative part, you can test specific hypotheses (e.g., a warning label indicating unverified third-party information reduces the spread of an infodemic), which helps answer a closed research question.

Additionally, qualitative methods and open research questions often aim at developing new theories (exploratory research), while quantitative methods and closed research questions typically test existing theories (confirmatory research).

#2 Mixed Methods can offset the weaknesses of each method

For example: Qualitative interviews can add depth to a scientific study, but often only a small sample of experts can be interviewed.

If you additionally send a quantitative online survey based on your interview findings to many more people, you achieve the breadth that a purely qualitative study couldn’t provide. This could, for example, increase the statistical generalizability of your findings.

The argument works in reverse as well.

#3 Mixed Methods can yield contradictory results

This might initially sound like a disadvantage. However, it’s not. Contrasting results from both approaches can provide a deeper understanding of the phenomenon and highlight the limitations of each single methodology. This leads to more discussion points and more nuanced results.

Should mixed methods always be used then?

No, of course not. It all depends on the research question, research goal, and context. If your work aims to test existing theory, incorporating qualitative elements might not make much sense.

If you are at the early stages and exploring a topic scarcely covered in existing literature, you might focus on qualitative exploratory research only.


Implementing Mixed Methods

When implementing a mixed methods study, you need to be clear about why you are doing it. This will also determine the study design and the chronology of implementation. Here are the most common variants of mixed methods designs.

#1 Complementation

In this variant, qualitative and quantitative elements are equally prioritized and are intended to provide complementary results on the investigated phenomenon.

Order: Quantitative (50%), then qualitative (50%). Or vice versa.

#2 Completion

In this variant, one of the methods is prioritized and subsequently supported by the other to ensure the phenomenon is fully covered.

Order: Quantitative (90%), then qualitative (10%). Or vice versa.

#3 Sequential Designs

The most common mixed methods studies follow a sequential design. This means you complete one method first and then start with the other.

Explorative sequential design

A qualitative study is used to develop constructs or hypotheses, which are then tested with a quantitative study.

Order: Qualitative (exploration), then quantitative (testing).

Explanatory sequential design

First, a quantitative study is used to test hypotheses. Second, a qualitative study follows to understand “why” these results occurred.

Order: Quantitative (testing), then qualitative (explanation).

#4 Parallel Designs

The alternative to a sequential design is a parallel mixed methods design. Here, you conduct multiple methods at the same time.

In this case, the second study does not build on the results of the first, but instead, the results of both studies are compared and contrasted once they are completed.

Validating Results

For mixed methods research, validating the results is an important quality criterion.

For quantitative data, there are well-established statistics (e.g., Cronbach’s Alpha) for validating constructs, which you can calculate in SPSS or similar programs.
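To make this concrete, here is a minimal sketch of how Cronbach’s Alpha could be computed by hand, using the standard formula alpha = k/(k-1) · (1 − sum of item variances / variance of total scores). The `cronbach_alpha` function and the survey data below are invented for illustration; in practice, you would rely on SPSS, R, or a dedicated statistics package.

```python
import statistics

def cronbach_alpha(items):
    """items: list of k lists, each holding one item's scores across all respondents."""
    k = len(items)
    # Sum of the variances of the individual items
    item_variances = sum(statistics.pvariance(item) for item in items)
    # Total score per respondent (sum across all items)
    totals = [sum(scores) for scores in zip(*items)]
    total_variance = statistics.pvariance(totals)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Hypothetical scale: 3 items rated by 5 respondents on a 5-point Likert scale
survey = [
    [4, 5, 3, 4, 5],
    [4, 4, 3, 5, 5],
    [5, 5, 2, 4, 4],
]
print(round(cronbach_alpha(survey), 2))  # prints 0.81
```

A value above roughly 0.7 is conventionally read as acceptable internal consistency, though the exact threshold depends on your field and the maturity of the scale.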

For qualitative data, the validation process is much softer, and there is less consensus on the standard. Leung (2015) suggests these five criteria:

  • Does the research question align with the analysis results?
  • Is the chosen methodology suitable for answering the research question?
  • Does the research design match the methodology?
  • Is the sample appropriate for studying the phenomenon?
  • Do the results and conclusions fit the sample and the research context?

You basically apply the same techniques to ensure validity and reliability as you would for just one method (Venkatesh et al., 2013). Now you do it for two. This is why mixed methods often means more workload, but in the end, you also have a more valuable study.

What is the difference between mixed methods and Triangulation?

In mixed methods, your qualitative and quantitative parts are typically treated as roughly equal. If one method contributes only around 10%, it is more accurate to speak of triangulation.

The purpose of triangulation is to add further perspectives on the object under study, for example by involving additional researchers, another theoretical perspective, or additional data that differs from the main study.

Mixed methods is highly respected because it has some aspects of triangulation already built-in. However, you can also perform triangulation without using quantitative and qualitative elements. Therefore, mixed methods and triangulation are related concepts but not the same!

What to read next?

If you now want to deepen your understanding, I suggest you get your hands on a copy of Creswell and Clark’s “Designing and Conducting Mixed Methods Research” (2010).

It is one of the standard methods books on this topic, and it is applicable to any social science discipline!


How to Write a Peer Review Report (Real Example)

Are you curious about how to write a peer review report?

Grab your note-taking app and pay close attention.

In this article, I’ll walk you through 7 simple steps to structure your peer review report and highlight what you need to consider. I’ll also provide examples from my own reviews, which you can adapt for your use.

What is a Peer Review?

A peer review is an anonymous evaluation report on an academic paper. Writing reviews is a routine part of research work.

Peer review processes are also sometimes simulated in university seminars as graded assignments. Writing a peer review is a crucial skill for academics; let’s explore how to do it effectively.


Step 1: Overview

In the academic world, it’s customary to thank the editors for the opportunity to write a review. You can also start your peer review report with a brief summary.

“Thank you for the opportunity to review this submission. The paper, ‘Virtual Reality in Digital Education: A Network Affordance Perspective on Effective Use,’ aims to enhance our understanding of using immersive technologies in digital learning environments. It reports the results of an interview study and develops a theoretical framework to help educators integrate Virtual Reality into their curriculum.”

Step 2: Your Expertise

Next, keep in mind that a review is always anonymous. Precisely because the authors, and especially the editors who summarize the reviews, cannot look you up, they should learn about your expertise in the specific research topic.

Dedicate 1-2 sentences to describe your background in this area. You can also briefly mention how you will structure your review.

“I commend the authors for their successful paper and the results of an interesting empirical study. Having contributed to several studies in Virtual Reality myself, I consider my expertise significant. However, as I primarily use quantitative methods, I will focus less on methodology in this review and ask the editors to consider other reviewers’ opinions on this aspect. To provide the most useful feedback for improving this paper, I have divided my review into three main areas of critique.”

Step 3: Main Sections of Your Peer Review Report

There are two ways to structure the main body of your review. You can go chronologically through the paper and provide feedback on each section (introduction, literature review, results, etc.). However, this is not the approach experienced reviewers use, so I don’t recommend starting with it.

In learning how to write a peer review, I recommend using your major points of criticism as structuring elements. For example:

#1 Theoretical Motivation of the Study

#2 Lack of Transparency in Method Description

#3 Missing Theoretical Contribution
…

You may have 5 or 6 main points, but no more. These should be equally weighted. Write a similar amount for each main point.


Step 4: Constructive Criticism

In your critique, it’s essential to consider three things:

  1. Positives: For each main point, start by mentioning what you liked. When writing reviews, it’s easy to drift into very harsh criticism. Imagine how hard it can be for the authors, who have put a lot of work into their paper.
  2. Negatives: Then, bring up your criticism, but remain objective. Specify the parts of the text you had issues with, so the authors can quickly find and revise them.
  3. Suggestions for Improvement: This is the most crucial part. The quality of a review is determined by how constructive its improvement suggestions are. Provide as many specific suggestions for each point of criticism as you can. Recommend literature, better arguments, or methodological approaches that you know but found lacking in the paper.

Step 5: Minor Points

These are less dramatic points that caught your attention while reading. They can be linguistic inconsistencies or citation errors. Create a small list of these points and indicate the pages.

“Here are some additional points:

  • ‘affect’ should be ‘effect’ (p.2)
  • ‘their’ should be ‘there’ (p.3)
  • The direct quote on p.4 lacks a page number”

If there are too many minor errors, you don’t need to list them all. You’re not a proofreader, but a reviewer. Make a general comment like:

“There are numerous minor errors throughout the manuscript. A professional proofreading service should address these.”

Step 6: References

Yes, you heard right. A truly professional review includes a short reference list. Here, you list all sources you cited to support your critique or recommended to the authors. This small but significant detail elevates a mediocre review to a very good one.

This makes it as easy as possible for the authors to address your critique. They can read the sources you cited and tackle the points you raised. This can be a general guideline when writing reviews: Make it as easy as possible for the authors to implement your suggestions!

Whoever reviews YOUR review will be impressed!


Step 7: The Recommendation

Finally, you need to make a decision. What is your recommendation? Should the submission be rejected? Or should it proceed to the next round (major/minor revisions)? Or can it be accepted immediately?

Your recommendation should ideally not appear directly in your review text; most review systems provide a separate field for it. This makes it easier for the person summarizing all the reviews. As a reviewer, you make a recommendation, but the final decision is not yours. If you recommend rejection but all other reviewers disagree, your critique still needs to be addressed.

A concluding section might look like this:

“In summary, this paper addresses an interesting case for implementing Virtual Reality in digital education. Unfortunately, in its current state, it shows fundamental weaknesses, such as in theoretical motivation, transparency in methodological implementation, and theoretical contribution. I hope my critique helps the authors improve their work and find something useful in this review. Best of luck in developing this study further.”

As you can see, the recommendation is not mentioned in the text. The conclusion could lead to either revision or rejection. A good review is characterized by not pre-empting this decision.

Now that you have all the essentials, you are ready to start writing a peer review with confidence and precision. Understanding how to write a peer review ensures that your feedback is both constructive and helpful, guiding the authors toward improving their work.


How to Overcome your Phone Addiction: 7 Strategies to Stay Focused

Are you trying to overcome your phone addiction but don’t know how?

In this article, I’ll share 7 strategies that have personally helped me keep my smartphone addiction under control.

This way, you can take your productivity to a new level, maintain a genuine connection with the people you love, and finally clear the fog in your mind.

1. Understand the Root Causes of Your Smartphone Addiction

How often do you look at your smartphone without really knowing why? Whether it’s during a lecture, waiting for the bus, or dining with friends – it seems we can hardly do without it. Just checking the phone quickly? And suddenly, we’re deep in the Instagram spiral or lost in an endless stream of TikTok videos. Why do we do this?

In this article, I’ll show you what’s behind our “smartphone addiction” and how we can be smarter than our smartphones. Are you ready to take back control?

Step 1 towards improvement would be to ask yourself: Why is it so hard for you to put the smartphone aside?


#1 App Development

Behind the scenes, developers know exactly what they are doing. They design apps and social media not only to capture our attention temporarily but to keep us engaged for the long term. Endless feeds and push notifications that keep pulling us back are no coincidence. They are part of a deliberate strategy to keep us in their apps as long as possible.

#2 Dopamine

On a neurobiological level, this can also be explained. Dopamine plays a key role. This neurotransmitter is released when we receive positive feedback – whether it’s a new message, a like, or another notification. And that’s exactly what keeps us glued to the screen. Imagine your own “Feel-Good Dealer” rewarding you with a sense of happiness. These little reward kicks are like candy for your brain, constantly luring you back to the screen.

#3 Instant Gratification

The phenomenon of instant gratification, a craving for immediate rewards, is another reason why it’s hard to put the smartphone away. Especially in moments of boredom, we automatically reach for the phone. It appears to be the easiest solution to fill that void. However, the instant gratification that smartphones provide can quickly become a habit that is hard to break.


#4 FOMO

Another psychological driver for constant smartphone use is the phenomenon of FOMO. Do you know the feeling? This nagging fear that you might miss out on something if you’re not constantly checking your phone? That’s the “Fear of Missing Out.”

Social networks like Instagram and TikTok amplify this fear by bombarding us with endless updates about the supposedly exciting lives of others. This constant worry about not being up-to-date drives us to compulsively check our smartphones. But the catch is: FOMO is never truly satisfied. Every new video, every new post only confirms that life goes on without us and we might be missing out even more.

2. Recognize the Negative Effects of Your Smartphone Addiction

#1 Desensitization of the Reward System

Dopamine, the substance responsible for feelings of happiness, plays a big role here. You might wonder what’s bad about that. Well, like many things in life, too much dopamine can be problematic.

If the brain is regularly and abundantly exposed to dopamine, as is the case with constant smartphone use, desensitization can occur. This means that the dopamine receptors in the brain become less sensitive. So, you need stronger or more frequent stimuli to feel the same reward. This can lead to everyday pleasures and interactions becoming less satisfying in the long run.


#2 Increased Stress Levels

Too much dopamine can also lead to a permanent state of stress, as dopamine also affects stress regulation. Your body is then constantly on alert, which can lead to anxiety, sleep problems, and even high blood pressure.

Imagine your brain as an inbox that’s constantly flooded with emails – from messages to games to app updates, the smartphone never stops pinging. It’s no wonder that our brains eventually become overwhelmed and overloaded with stimuli. This information overload can lead to brain fatigue and further increase stress levels.

#3 Impairment of Cognitive Functions

Over time, overstimulation by dopamine can negatively affect your cognitive abilities. If your brain is constantly seeking quick rewards, it can impair your concentration. Studies even show that excessive smartphone use can affect productivity. So, if you ever feel like you’re getting nothing done, banish your phone from the room.

An excess of dopamine can also make you more impulsive and make it harder to control your emotions. This can lead to friction in everyday life and negatively affect your relationships.

And that’s interesting because I’ve often noticed how smartphones change social interactions. For example, when I meet friends and my phone is on the table, I’m much less attentive. It’s a completely different conversation when I leave my smartphone in another room. Although smartphones should theoretically bring us closer together, they often lead to physical and emotional distance between us and our fellow human beings.

3. Set a Clear Goal to Combat Your Smartphone Addiction

Sure, you could try to completely abstain from digital devices, but realistically, that would be quite limiting. Of course, smartphones also have their good sides. They not only keep us connected with friends and family but are also real lifesavers in our studies or work. Thanks to emails, calendars, and a flood of apps, our lives remain organized, and we stay informed.

It’s not about banning all digital devices but about making conscious decisions about their use. Who’s in control? Is it your phone, or are you using it as a tool that enriches your life? Ask yourself: What are my values? What do I want to achieve? Does using app XY align with these goals, or is it merely a distraction? By asking such questions, you can ensure that your technology use supports your goals rather than hindering them.


4. Discover Healthier Sources of Dopamine

While smartphones often only provide short moments of happiness (and can have negative effects), there are alternatives that release dopamine more slowly and sustainably – and these are usually much more fulfilling. Here are some suggestions for how you can increase your well-being in the long term and fight smartphone addiction:

Setting and achieving goals:

Setting goals is more than just a good intention: each small success releases dopamine, significantly boosting your confidence. Whether you’re rocking your next term paper, achieving a sports goal, or just meditating more regularly – each step towards your goal brings you more inner satisfaction.

Spending time with friends:

Genuine interactions, such as relaxed dinners, spontaneous outings, or deep conversations, are particularly effective at fulfilling your need for human connection.

Learning new skills:

Learning a new skill challenges your brain and expands your perspective. Whether you’re learning a new language, a musical instrument, or knitting – the success of your learning process also releases dopamine. This is a meaningful alternative to the often empty entertainment that constant scrolling on the smartphone offers.

5. Implement a Digital Detox Routine

“Digital Detox” – that is, a conscious break from all digital devices – can be an effective method to reduce overstimulation and improve your mental well-being.

By temporarily abstaining from smartphones, tablets, and computers, you give your brain a chance to recover from the constant stimulus overload. This can improve concentration, enhance sleep, and reduce your stress levels.

A successful digital detox starts with small steps: Perhaps just an evening or a weekend without digital devices. It’s important to use these times consciously to engage in activities that you enjoy and that ground you – whether it’s reading a book, spending time in nature, or just enjoying the quiet.

These digital breaks can help you reflect on your own use of technology and overcome smartphone addiction.

However, digital detox phases, like a day offline or a week without social media, often do not lead to lasting changes. Consider it more of a reset. Although they provide a break, they don’t tackle the underlying habits that lead to our dependency. Without a real change in our daily routines and our attitude towards technology, we quickly fall back into old patterns.

6. Gradually Change Your Daily Habits

Create smartphone-free zones: Decide where and when the smartphone is off-limits. This could be during dinner, on a walk, or at a concert. These moments without a smartphone help you enjoy the experience more intensely.

Monitor your screen time: To reduce our dependency on smartphones, we need to set clear boundaries and make more conscious decisions about when and how we use our devices. Screen time monitoring apps can help you manage your time more consciously.

Restructure your daily routine: Rely on a real alarm clock instead of your phone to wake you up. This way, you’re not tempted to spend the first minutes of your day on the smartphone. Ideally, banish your phone from the bedroom altogether. If you want to know the time during the day, it’s best to rely on a clock.

Regulate your notifications: Turn off unnecessary push notifications and organize your apps into folders. This way, you’re less likely to automatically reach for your smartphone and can decide for yourself when you want to view messages and updates.

Remove distractions: Leave your smartphone in another room when you need to concentrate. This way, you work more productively.

Consciously use breaks: Try not to reach for your smartphone at every opportunity. Use waiting times instead to observe your surroundings or simply think. Such breaks can foster your creativity.

7. Adopt a Critical Perspective on Smartphone Use

Let’s be honest: It’s high time to rethink your relationship with your smartphone. I invite you to take a close look at how and why you use your smartphone and consider if there might be a better way.

Start with small steps and see for yourself how your life and well-being gradually improve. Are you ready to take on this challenge? Because remember – your smartphone should be a tool, not the center of your life.


Study 3x More Effectively with Mind Maps (Picture Superiority Effect)


Goodbye, flashcards! It’s time to learn in a way that your brain will adore. Studying with mind maps will give you a turbo boost. In this article, I’ll reveal why mind maps are a real game-changer and provide you with 3 techniques to get started.

What are Mind Maps?

Imagine if your brain could speak and sketch its thoughts on paper. The result? A mind map! Mind maps are visual diagrams that organize ideas, words, tasks, or other concepts around a central theme.

They use colors, symbols, and connections to mimic the way our brain processes information. This method was popularized in the 1970s by psychologist Tony Buzan.

In German-speaking regions, the coach and author Vera Birkenbihl also advocated for mind maps as a learning method.

Why Studying with Mind Maps is So Effective

#1 They Mirror the Brain’s Way of Working

Why are mind maps so helpful? The answer lies in how our brain functions. Rather than the dry memorization of linear lists or long blocks of text, our brain prefers to structure information in a networked way, relating different pieces of data.

Mind maps do just that: they start with a central theme and branch out into related areas, similar to how our brain networks information. This method mirrors the natural way we think and learn, thus facilitating the understanding and retention of information.

#2 Visual Stimuli Enhance Memory

Visual stimuli play a crucial role in learning. Images, symbols, and colors are recognized and remembered by our brain faster and more easily than text. Mind maps leverage this so-called picture superiority effect by presenting complex information in a visually appealing and easily understandable form.

When studying with mind maps, integrate not only words and bullet points but also small sketches or symbols into your mind maps. This strategy helps not just to process the learning material better but also to retain it long-term.


#3 Active Learning Instead of Passive Reception

Mind maps transform learning into an active experience. Instead of just absorbing information, they encourage you to actively think through the material, organize it, and convert it into a mind map.

While designing a mind map, you’ll discover how different concepts are interconnected. This process strengthens your critical thinking and your ability to organize complex information and place it into a broader context.

By visualizing connections between different topics in your mind map, you deepen your understanding and make it tangible.

To understand why active learning is far superior to passive learning, you can also check out my tutorial on Active Recall.

#4 Targeted Summarization of Complex Topics with Mind Maps

Mind maps allow you to condense complex subjects and extensive content to the essentials and structure them clearly. Instead of flipping through endless pages of notes, they enable you to quickly dive into the main ideas and key concepts. I have always preferred using mind maps when I needed an overview of a subject area or was preparing a presentation. This way, I always maintained a focus on the bigger picture and structure.

#5 Improved Time Management

Creating mind maps enhances your time management. Thanks to their visual layout, you can easily set priorities, organize your learning objectives, and keep track of your progress. Mind maps serve as visual learning plans. At a glance, you can see which areas have been covered and where there are gaps.

3 Techniques

There are many techniques to elevate your learning with mind maps to the next level. As I mentioned, British psychologist Tony Buzan popularized the mind map in the 1970s. Let’s take a look at his method.

The Buzan Technique

The Buzan technique is based on five central principles that together form the foundation of every effective mind map:

  1. Start with a Central Topic: Each mind map begins with a single concept placed in the center of your page. This could be the title of your course, the theme of a project, or simply an idea you want to explore.
  2. Branches for Main Ideas: From this central topic, you draw branches to the main points or key ideas associated with it. These branches should be large enough to later add sub-ideas.
  3. Sub-branches for Details: Each main branch can have further branches that contain details, examples, evidence, or other relevant information. The deeper you go, the more specific the information becomes.
  4. Use Keywords and Images: A single word or image can often represent a complex idea or concept. Buzan recommends using keywords and images wherever possible to make your thoughts more efficient and enhance your memory.
  5. Connections and Links: Draw lines or arrows to show connections between different parts of your mind map. This helps you see how ideas are interconnected and fosters a deeper understanding of the subject.

Additional Techniques

The original method developed by Tony Buzan has since been expanded and adapted for various purposes.

For instance, there is the Rico Cluster by Dr. Gabriele Rico. Here, you link your ideas as a network rather than starting from a central key concept. This is intended to support the natural communication between the right and left hemispheres of the brain.

The previously mentioned Vera F. Birkenbihl worked with her KAWA method. A KAWA is created when you find associations for the initial letters of your starting word. To better understand, let’s create a KAWA on the topic of Marketing.

  • M – Market Research
  • A – Audience Analysis
  • R – Rebranding
  • K – Keeping customers loyal
  • E – Engagement in Social Media
  • T – Testimonials from Customers
  • I – Influencer Marketing
  • N – Neuromarketing
  • G – Giveaways and promotions

This method is particularly helpful for brainstorming. Are you still looking for a topic for your next term paper? Using the KAWA method, you can generate additional ideas.

I believe that there isn’t ONE correct method. You need to find out for yourself what works best for you.

After all, each mind map reflects YOUR thought processes. The more you stick to a step-by-step guide, the more it may constrain your own thoughts. I recommend just getting started and experimenting.

Studying with Mind Maps: How to Become a Pro

Remember, your mind map is not fixed. Rather, it accompanies you during your learning process and allows you to be flexible.

As you absorb new information or come up with new ideas, whether through reading, listening, or discussing, you can gradually expand your mind map.

It’s important that your mind map remains dynamic and evolves with your understanding and thoughts. However, you should ensure that it doesn’t become too cluttered and that you maintain focus.

Whenever I wanted to learn a topic, I always liked to start with a mind map to keep an overview, and then created additional mind maps related to relevant companies or topics.

In the end, I didn’t have just one mind map, but perhaps eight different ones. These represented my natural thought processes much better than flashcards or linear notes could.

For exam preparation, you could use questions as the central point of a mind map. For example: “How can companies use the concept of ‘Customer Lifetime Value’ to build long-term customer relationships and increase revenue?”

Around this central question, a mind map can be built. This approach ensures that you’re not just memorizing a text answer, but gradually expanding your thoughts, discovering new connections, and developing a deeper understanding of the topic.

Digital Mind Map Tools vs. Old-School Mind Mapping on Paper

Okay, but should you now use digital mind map tools or create your mind map old-school with paper and pen?

Digital Tools for Creating Mind Maps

Welcome to the digital age of mind maps! With tools like MindNode, XMind, Coggle, and even ChatGPT, you can incorporate multimedia content into your mind maps. Whether it’s images, links, videos, or documents—all these can help make your mind map vibrant and informative. This is especially useful if you are a visual learner or want to use your mind map as the basis for a presentation.

Many digital mind map tools offer features for real-time collaboration. This means you and any team members can work on the same mind map simultaneously, no matter where you are.

This is a game-changer for group projects and joint learning sessions! Plus, your mind maps are less likely to get lost in a pile of papers since everything is stored digitally.


Old-School Mind Mapping

And yet… sometimes less is more. There’s something satisfying about bringing your ideas to life directly with pen on paper and visualizing your thought processes.

This method fosters creativity and can help you process information better. No distractions from notifications or dead batteries—just you and your thoughts.

Additionally, with old-school mind mapping on paper, you have complete freedom in design. You’re not confined to the structures or designs of software, and you can make your mind maps as simple or complex as you like.

It’s all about what works for you. You might even find that a mix of both—digital for university projects and paper for your personal brainstorming sessions—is the best solution for you.

Experiment a bit and find your own way to make the most out of studying with mind maps for yourself.

The goal is to feel organized and enjoy learning. So, grab your pen or tablet and give it a try.


How to do a Deductive Thematic Analysis (Theory-Driven Qualitative Coding)

You want to conduct a deductive thematic analysis and categorize your qualitative data based on pre-existing theory and concepts?

You’re in the right place.

In this article, I’ll explain in detail and in an easy-to-understand manner how to perform a deductive thematic analysis in 5 simple steps.

What is Thematic Analysis?

To make sure that we are on the same page about thematic analysis, I will mainly refer to the understanding of the method described by Braun and Clarke (2006; 2019; 2021).

Please note that other authors have their own take on thematic analysis; however, the explications by Braun and Clarke seem to be the most useful to the research community.

It is important to understand that thematic analysis is flexible, which means you can apply it in different variations and customize it for your own needs.

The only thing you need to be aware of is that you should not slap a label on your approach in your methods section and then do something completely different that is not in line with the methodological ideas of the authors you just cited (the “label”).

Flexibility in thematic analysis means that you can combine different approaches such as inductive and deductive coding within the same study. But you don’t have to. It’s up to you and what makes the most sense for achieving your research objectives.

In this tutorial, however, I am going to focus on applying thematic analysis in a theory-driven, deductive logic.

If you would like to learn more about inductive thematic analysis, please refer to my previous tutorial about this method.


What is Deductive Thematic Analysis?

Whereas inductive thematic analysis is data-driven (bottom-up), deductive thematic analysis is theory-driven (top-down).

This means that you do not develop your themes based on the statements you find in your qualitative data (e.g., interview transcripts) or the underlying patterns of meaning of these statements.

Instead, you take these statements, and classify them into a pre-existing theoretical structure.

For deductive thematic analysis, therefore, you need to think about theory before you enter your analysis.

Let’s look at the steps that you need to go through.

#1 Define your Themes

Option 1: Use pre-existing theory

For a research question that involves a specific theory, you should develop your set of pre-defined themes based on that theory. For instance, if the research question is: “What influence does remote work have on organizational identity?”, it references the “Organizational Identity Theory” (Whetten, 2006).

Theories in social sciences are usually based on the work of individual authors who have defined a specific model or the components of a theory. These can be dimensions, variables, or constructs. We take those and make them our pre-defined “themes”.

Themes

Choose the work of an author or team of authors and read the corresponding book or paper thoroughly. In our example, this would be Whetten’s 2006 paper. The author names three dimensions representing organizational identity: the ideational, definitional, and phenomenological dimensions. These dimensions would serve as excellent main themes for your study.

Sub-themes

The original source will provide further details on how these dimensions are defined, which you can use to form subthemes. It may also help to read additional literature that builds on or explains this theory. Often, primary sources are quite complex.

However, you can break down any theory into its components and logically assemble a list of themes and subthemes. A popular approach in thematic analysis is to take the main themes from the theory (in the example: ideational, definitional, and phenomenological aspects of organizational identity) and develop the subthemes inductively, based on the content you find in your data.

Option 2: Derive Themes from the Current State of Research

If your research question does not target a specific theory, such as “How are remote work models implemented in the manufacturing industry?”, you cannot rely on a single pre-existing theory. Instead, turn to various current studies and extract your themes from them upfront.

It’s best to work with a table where you create themes and subthemes on the left and note the source(s) from which you derived them on the right.

Examples of categories related to the example could include: “Technological Infrastructure”, “Corporate Culture”, “Work Time Models”, and so on.

It’s essential that these themes clearly emerge from your review of the literature. Don’t worry about overlooking something – you can always expand or adjust your list of themes after an initial analysis round if you notice certain contents are not covered or if you encounter new literature in the meantime.

#2 Create a Codebook

To guide your analysis, you can work with coding guidelines, which are often referred to as a codebook.

Coding in this context simply means classifying a piece of content or statement as part of a theme.

A codebook is particularly useful if you are not the only person coding the data. But it also gives your method more rigor, as you systematize your coding process.

Three things are particularly important to consider when you create a codebook for your deductive thematic analysis:

2.1 Define the Themes

You can cover this step with the aforementioned table; put it into the document that will serve as your codebook.

Add another column where you precisely define when a text segment belongs to a specific category or not.

You can use a concise description of the theme to do so. Make sure to reference the particular theory or literature that you used to derive the theme.

2.2 Use Anchor Examples

For each theme or subtheme, you should insert at least one example in the codebook.

This example represents the respective theme. It could simply be a direct quote from your interviews; if you are analyzing social media content, it could be a tweet, for example.

You might need to do some initial coding until you find a suitable example that you can put in your codebook.

2.3 Define Coding Rules

You can then add further comments that establish rules for how a coder should decide when a data segment is not clearly assignable to a theme.

This ensures that you act consistently throughout the coding process.
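Steps 2.1 to 2.3 could, for instance, be kept as structured data rather than a text document. Here is a minimal sketch in Python; all theme names, definitions, anchor examples, and rules below are made-up placeholders, not taken from a real study:

```python
# A minimal codebook sketch for a deductive thematic analysis.
# All themes, definitions, anchor examples, and rules are illustrative
# placeholders, not from a real study.
codebook = {
    "Ideational": {
        "definition": "Statements about shared beliefs regarding "
                      "'who we are as an organization' (Whetten, 2006).",
        "anchor_example": "We've always seen ourselves as a family business.",
        "coding_rules": "Code here only when the speaker refers to internal "
                        "self-understanding, not external image.",
    },
    "Definitional": {
        "definition": "Statements invoking central, enduring, and "
                      "distinctive organizational features (Whetten, 2006).",
        "anchor_example": "Craftsmanship is what sets us apart.",
        "coding_rules": "Code here only if the feature is described as stable "
                        "over time; otherwise flag the segment for discussion.",
    },
}

# Example lookup during coding: check the rule for an unclear segment
print(codebook["Definitional"]["coding_rules"])
```

Whether you keep the codebook in a table, a document, or a data structure like this is a matter of taste; what matters is that every theme has a definition, an anchor example, and a coding rule.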

#3 Do the Coding

Consider using software such as NVivo or Taguette to support your coding.

This helps you to organize your analysis, especially if you have a lot of data.

Option 1: Coding alone

If you are coding alone, you can define milestones at certain percentages of the dataset where you pause for review.

There is no strict rule for this, but I would recommend coding the first 10% of your data, then stopping to check whether your codebook works.

If not, make changes to it and start over.

The next milestone could be somewhere around 50% of your data.

Check the distribution of data segments that you assigned to your themes and make adjustments to the coding rules if necessary.

Then go ahead and finalize the coding.

Option 2: Coding in a team

If you are coding in a team, the same milestones apply.

However, now you meet as a team and discuss your coding. Compare different examples and check with each other if you are all using the codebook as it was intended.

After you have finalized the coding, you may consider calculating an inter-rater reliability measure such as Cohen’s Kappa.

This gives you a statistical value that shows how strong the agreement is between you and the other coders.

You can only calculate it if the team members all code the same portion of the data independently.

For example, you could take 10% of your data and everyone codes it independently. Based on this coding, you calculate the inter-rater reliability and report the value in your methods section.
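If you are curious how such an agreement value comes about: Cohen’s Kappa compares the observed agreement between two coders with the agreement you would expect by pure chance. Here is a minimal sketch in Python with made-up toy codes; in practice you would likely use a statistics package:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Cohen's Kappa for two coders who coded the same segments."""
    assert len(codes_a) == len(codes_b)
    n = len(codes_a)
    # Observed agreement: share of segments both coders labeled identically
    p_observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected agreement: chance that both pick the same theme at random,
    # given each coder's own labeling frequencies
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_expected = sum(
        (freq_a[theme] / n) * (freq_b[theme] / n)
        for theme in set(codes_a) | set(codes_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Toy example: two coders assigned themes to the same five segments
coder_1 = ["ideational", "ideational", "definitional", "definitional", "ideational"]
coder_2 = ["ideational", "definitional", "definitional", "definitional", "ideational"]
print(round(cohens_kappa(coder_1, coder_2), 3))  # → 0.615
```

The two coders agree on 4 of 5 segments (0.8 observed agreement), but because chance alone would produce 0.48 agreement, Kappa lands at a more sober 0.615.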

If the inter-rater reliability is not good, you might have to go back to the codebook or redo some of the coding to reach better agreement within the team.

#4 Present the Findings

The next challenge is to translate this deductive thematic analysis into a structured and reader-friendly findings section.

I recommend balancing descriptive reporting of the results (e.g., with raw anchor examples, i.e., direct quotes from your data) and some analytical interpretation in your own words.

Start with the structure by turning your list of themes into headings. Use subthemes, if you have any, as subheadings.

Then, add the quote examples and explain them in your own words.

Expand these explanations and examples with additional paraphrases that you consider important, and try to explain how the data connects to the pre-defined theme you have derived from literature or theory.

Always support arguments with paraphrases or direct quotes from your data.

Also, make sure to link the subchapters with appropriate transitions.

#5 Discuss Your Findings

What do these findings mean?

Use the discussion section of your paper, report, or thesis to connect back to the theory.

For a deductive thematic analysis, you must discuss your findings in light of the theory or literature you started with.

Writing an outstanding discussion is an art that goes beyond the scope of this tutorial – feel free to check out my tutorial on writing a discussion.

However, consider using tables and figures as additional tools to organize your findings or make it easier for the reader to spot what your most important findings are.

Maybe very few data segments were assigned to one particular theme? Or a great many to another?

Discuss what this means in regard to your research question.

If you have specific questions about thematic analysis, leave them in the comments below.

If you want me to dive deeper into a particular topic in a separate tutorial, let me know in the comments as well.

Literature

📚 Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.

📚 Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597.

📚 Braun, V., & Clarke, V. (2021). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18(3), 328–352.

📚 Whetten, D. A. (2006). Albert and Whetten revisited: Strengthening the concept of organizational identity. Journal of Management Inquiry, 15(3), 219–234.


How Do You Become a Professor? (3 Possible Paths)


How do you become a professor?

Well, if you are considering a career in academia, then becoming a professor is the ultimate, and often only, goal.

Or are you simply curious about what trials those folks endured to earn their spot at the front of the lecture hall?

Then this article is for you.

When I was a student, I had no clue how the academic system worked.

And I bet you feel the same!

But we’re going to change that. By the end of this article, you’ll know exactly three paths that can lead to a professorship, and you’ll be able to decide whether this is something for you, or if you’d rather quickly turn your back on academia after your studies.

How Do You Become a Professor?

First, let’s explore the typical career trajectory for academics. You might be familiar with some of these steps, as they form the backbone of any academic career:

  1. PhD
  2. Postdoctoral Fellowship
  3. Assistant Professor
  4. Associate Professor
  5. Full Professor

Level 1: The PhD

Embarking on a PhD is like signing up for an academic marathon that takes around four years—if you’re lucky. Your completion time might depend on your field’s pace, your advisor’s style, and how often your experiments decide to actually work.

You can tackle a PhD as a full-time employee with teaching obligations, as is common in many countries; on a scholarship, as is often the case in the US or Australia; or as a side hustle to your full-time job. Get ready to learn, burn, and occasionally yearn for the finish line!

Level 2: Postdoctoral Fellowship

Think of it as the academic victory lap after your PhD. You’re not quite a professor yet, but you’re doing mostly research, maybe teaching a bit, and definitely networking like it’s your job (because it is).

It’s your time to shine in your field, beef up that publication pipeline, and charm future colleagues. Ready, set, research!

Level 3: Assistant Professor

This is the entry-level, tenure-track position where the academic rubber meets the road.

Here, you’ll teach, research, and contribute to university life, all while aiming for the grand prize of tenure.

Over about five to six years, you’ll need to impress with publications, teaching evaluations, and community involvement.

It’s your chance to prove you have what it takes for a long-haul career in academia. Get ready to juggle tasks and time like a pro!

Level 4: Associate Professor

The academic “level up” that comes after you’ve survived the tenure trials as an Assistant Professor.

In the US, this is typically when you have “earned” tenure, which means you can stay a professor for the rest of your career if you don’t mess up big time.

In other countries, tenure can also be granted at the assistant professor level.

Anyhow, you’ve now earned the luxury of job security and the joy of juggling even more duties.

More research, more grants, more students to mentor, and even more committee meetings.

Think of it as gaining the power to bend the academic universe, just a little bit, to your will.

Congratulations, you’re in the middle of the academic ladder — don’t look down!

Level 5: Full Professor

The academic world’s equivalent of reaching the mountaintop!

After years of research, teaching, and coffee-fueled late nights, becoming a Full Professor means you’ve published aplenty, shaped young minds, and possibly even figured out how to work the departmental photocopier.

It’s the peak where you get to enjoy the view, influence university policies, and still — yes, still — chase after the elusive work-life balance.

3 Different Paths to Becoming a Professor

What we’ve discussed so far reflects the career mechanisms of the academic system.

However, the actual achievements necessary to climb the ranks are another story.

Let’s now look at three different paths or strategies that can lead to the same goal—a professorship.

Path #1: Passion for Research

The most intuitive route to a professorship is through your talent and passion for research in your field. Here, it’s crucial how well you can translate this passion into tangible research results.

This route also often faces a major criticism of the academic system: the publish-or-perish culture. If you don’t publish enough or well enough, a career in academia is hard to achieve.

The good news? If research comes naturally to you, and you quickly see significant success, that’s a good indicator that this path might be the right one for you.

What awaits at the end of the journey, once you’ve secured a professorship?

Well, more research. It doesn’t stop. So, if research neither excites you nor comes easily, it could be challenging.

I often hear from PhD candidates that their passion lies not in research, but in teaching.

In this case, a career at a college that specializes in teaching might be suitable.

Here, it’s not research but teaching and sometimes industry experience that pave the way to a professorship.

Path #2: Through Savvy Science Management

If the university route is your choice, there’s another path I’ve often observed: savvy science management and strategic planning.

This approach allows you to anticipate and occupy niches in topics with high demand. This can aid in advancing your research because journals are eager to publish on these topics.

Or it might attract funding from third parties, such as government bodies, due to societal interest in a topic. An example is the High-Tech Agenda Bavaria in Germany, which has created around 1,000 professorships in areas like sustainable technologies and AI.

This means that a well-chosen thematic focus can aid you in appointment processes. It makes sense to align yourself in a way that your topics are likely to grow in significance in the future.

People who have secured a professorship this way are often also excellent at networking, although this is just a personal observation.

Path #3: The Roger Federer Way

The passionate researcher and the gifted networker represent two extremes. There’s also a middle path.

This path is about being a generalist.

My favorite analogy comes from the book “Range: Why Generalists Triumph in a Specialized World,” which includes the example of Roger Federer, one of the most successful tennis players of all time.

Throughout his career, Roger Federer was never the best at any single aspect of tennis.

Andy Roddick had the best serve.

Rafael Nadal had the best forehand, and Novak Djokovic had the best backhand.

However, Roger Federer was the most complete player overall, allowing him to achieve one success after another.

This analogy applies to academia as well: a generalist who can integrate diverse skills and knowledge may not stand out in one particular niche but excels by combining multiple strengths, potentially leading to a successful career in academia.

In science, as in nearly every other career, these principles apply.


Bonus Path: The Detour via other Countries

My personal favorite route to a professorship is through international experience. This aspect of the academic system is often a topic of heated debate.

This path is definitely a “to each their own” and “you have to decide for yourself” kind of deal. Moving abroad to secure a professorship isn’t something that’s expected of you.

Deciding how much other areas of your life should be sacrificed for the dream of becoming a professor is a choice you have to make yourself.

However, if you view an extended stay abroad as an opportunity for growth and a decidedly positive experience, then it could be the missing piece in your path to becoming a professor.

One of the advantages of the academic system is its compatibility across almost all national borders. The entire globe is your playing field.

If you choose to limit your playing field based on geographic factors, that will reduce your options, but that’s completely fine.

You decide, not the system.

If you have any questions about this, feel free to drop me a comment!


David Hume’s Problem of Induction (Simply Explained)


The problem of induction, as formulated by David Hume, addresses one of the most significant questions in epistemology: what can science truly know?

If you’ve ever delved into empirical research methods, you’ve likely encountered the terms induction and deduction.

While a Grounded Theory approach follows an inductive logic, an experiment relies on deductive logic. Is one better than the other? How are both connected, and why are scientific results never definitive?

The answers to all these questions are tied to Hume’s problem of induction. In this article, you’ll learn everything you need to know to hold your own in a discussion with a ninth-semester philosophy student.

Additionally, this knowledge will help you better understand and critique scientific methods. It’s definitely worth sticking around.

What is Inductive Reasoning?

In science, inductive reasoning involves deriving a general theory from the observation of a specific phenomenon.

For instance, consider an interview study where 30 interviews are conducted. The data collected is analyzed using Grounded Theory, leading to a new theory.

Induction isn’t limited to qualitative research. Any type of research that draws conclusions about a theory or natural law from observations employs induction.

This could be a statistical evaluation, where conclusions about the entire population are drawn from a sample, or it could be a physicist making repeated measurements from which she derives a natural law.

What is Hume’s Problem of Induction?

David Hume’s problem of induction is a fundamental question in epistemology that deals with whether and under what conditions inductive inferences can be considered reliable or rational.

The Scottish philosopher first raised this question in the 18th century in his work “A Treatise of Human Nature.” Although Hume initially discussed the problem only in the context of empirical science, it remains relevant to all sciences that recognize induction as a valid proof method.

And there are many.

Having a bit of knowledge about the problem of induction is certainly beneficial. It continues to be referred to as “the problem of induction” because it has yet to be solved. For over two centuries, philosophers of science have been grappling with it, including the famous Karl Popper. But more on that later.

An Example of an Inductive Inference

To better understand the problem of induction, let’s look at an example of an inductive inference.

An ornithologist conducts an observation in nature. During his research expedition, he observes 100 swans, all of which are white. That’s 100%.

Assumption 1: 100% of the observed swans are white.

From this, he concludes that all swans are white.

Conclusion 1: All swans are white.

If he reasons in this way, it doesn’t matter how many more swans he observes; he could even observe 100,000. The conclusion remains what logicians call non-compelling: the 100,001st swan could be black, and his conclusion would be false.


The Uniformity of Nature

For this conclusion to become logically rational and allow the ornithologist’s colleagues to rest easy, he must add an additional condition.

Assumption 1: 100% of the observed swans are white.

Assumption 2: All swans are similar to those already observed.

Conclusion 1: All swans are white.

This second assumption is also known as the principle of the uniformity of nature. It means that all future observations will be similar to past observations.

Or, put simply: In the future, everything will always occur as it has in the past.

So far so good.

If the principle of the uniformity of nature is true, then there is no problem of induction. The inductive conclusion would be logically valid.

But then David Hume comes into play.

He asserts: there is no logical basis for the principle of the uniformity of nature. It cannot be proven.

Hume himself and those who followed have tried to logically justify this principle, but have failed. This is partly because these attempts at justification themselves require inductive reasoning, which is subject to the problem of induction.

Hume writes:

“It is therefore impossible that any arguments from experience can prove this resemblance of the past to the future; since all such arguments are founded on the supposition of that resemblance. Let the course of things be ever so regular hitherto, that alone, by no means, assures us of the continuance of such regularity.”

If you’ve ever invested money in the stock market, then you know what he means.

Is Deduction the Solution to the Problem of Induction?

Two hundred years after Hume, another big player in the field of epistemology enters the scene: Karl Popper.

And he believes he has found the solution to the problem of induction.

Strictly speaking, he can’t solve it either; instead, he suggests simply sidestepping it. He completely agreed with David Hume that general laws cannot be derived through induction.

What one can logically do, however, is falsify general laws.

Instead of generating a theory based on an inductive conclusion, one could simply concoct a theory (form a hypothesis), and then try to falsify it.

What remains are only the theories that have not been falsified (yet).

Here, we are no longer in the realm of induction but in that of deductive reasoning (from general to specific).

For the philosophy of science, Popper’s new approach was a milestone. However, it was not the hoped-for solution to the problem of induction.


Why We Should Sometimes Trust Induction

Many philosophers later showed that even Popper’s approach to falsification relies partly on inductive reasoning.

While Popper rejected all forms of induction as irrational early in his career, he softened his stance towards the end.

He acknowledged that under certain circumstances, there might be a pragmatic justification for induction. Consider the context of medicine, for example.

If we were to completely reject induction, both doctors and patients would face a significant problem.

After diagnosing a disease, we choose a medication that has led to healing in thousands of past cases. We thus hope that the future will behave like the past and follow an inductive conclusion.

If we rejected induction as Popper originally intended, we would have no more reason to trust this medication than one that has never been tested.

Therefore, there seems to be a difference between pragmatic and purely theoretical induction. Due to these complications, the discourse in the philosophy of science largely reached a consensus that Popper could not solve the problem of induction either.


What This Means for Today’s Science

The problem of induction remains unsolved to this day. Concluding from this that science can know nothing with 100% accuracy is theoretically correct, but not practically helpful.

To better interpret the results of scientific studies, scientists must make a series of so-called judgment calls.

These are the additional assumptions we must make for science to be pragmatically implementable. That is, everyone must define for themselves what they are willing to assume, even if there is no formal logical basis for it.

As a scientist, one must therefore accept a certain risk of being wrong. How high that risk is, each researcher can decide for themselves.

Lee and Baskerville (2012) define four such judgment calls.

The first one you already know:

#1 The future will behave like the past.

The risk here is that a theory or result may no longer be true once it is applied to a new context.

#2 The conditions in the new context are similar enough to apply the theory or result there.

Imagine you’ve determined a natural law on Earth. If you apply this law to understand a phenomenon on Mars, you must assume that the conditions there are similar enough to those on Earth.

This second judgment call must also be made on a smaller scale. If you want to apply the results of a management case study from Amazon to your mid-sized company, you must assume that the conditions are similar enough to do so.

#3 The theory or natural law covers all relevant variables.

When you want to apply a theory, you must assume that it is complete and hasn’t overlooked any variable.

#4 The theory is true.

This judgment call would probably not sit well with Karl Popper. But to apply a theory, you must assume it is true, even though Popper would argue this is never possible.

References

📚 Lee, A. S., & Baskerville, R. L. (2003). Generalizing generalizability in information systems research. Information Systems Research, 14(3), 221–243. https://pubsonline.informs.org/doi/abs/10.1287/isre.14.3.221.16560

📚 Lee, A. S., & Baskerville, R. L. (2012). Conceptualizing generalizability: New contributions and a reply. MIS Quarterly, 749–761. https://www.jstor.org/stable/41703479


How Inquiry-Based Learning Can Get You Top 1% Grades

What is Inquiry-Based Learning?

Tired of memorizing your lecture notes? It’s pretty dull, right? How about starting your exam prep with questions instead of answers?

With inquiry-based learning, you dive deeper into your course material and discover connections you didn’t see before. Find out how questions can transform your learning experience.

In this article, I’ll show you the three principles behind the “inquiry-based learning” approach, how you can become more active in your learning process, and why it leads to better exam results.

The Principles of Inquiry-Based Learning

In university, your professor typically spoon-feeds you information during lectures, or you read summaries in books or your notes. That means you’re quite passive when taking in information.

You can change that with inquiry-based learning.

Inquiry-based learning is a method where you actively ask questions and independently seek answers to understand a topic.

Instead of just memorizing facts, you can be curious and think critically. You discover knowledge and connections based on the questions YOU ask, not the other way around. In short, it’s about letting your curiosity run wild.

Inquiry-based learning is based on three principles: self-directed learning, critical thinking, and the role of questions.

Self-directed learning means you take control of your learning process.

Your critical thinking is fostered as you learn to question and verify information.

And questions are your tool and starting point to discover and understand new things.

Differences from Traditional Learning Approaches

Like most other students, do you learn with flashcards? Or maybe you use practice questions and past exams?

The result is that you become very good at answering those flashcards or practice questions. But it’s unlikely that these things will be tested exactly as is in the exam.

And when a question comes up that wasn’t on your flashcards or practice questions, you struggle.

The challenge with unexpected exam questions is that they’re new and unfamiliar – you’ve never seen this kind of question before. Even if your practice questions are similar, these new questions require you to think differently to achieve the best grade.

These questions are fundamentally about identifying who really understands the material.

It’s about the ability to grasp multiple concepts simultaneously and discover connections that perhaps weren’t directly taught in the lecture. This deep understanding comes from connecting knowledge.

#1 Interleaving

And this is what you practice through inquiry-based learning. It’s all about the process:

How do things connect? Why are certain facts the way they are? So, it’s about the “why” behind the facts. Instead of just memorizing information, you try to connect topics. This aligns with the Interleaving Method.

With interleaving, you switch between different topics while learning, instead of focusing on a single topic through block learning.

Studies* show that interleaving is especially effective for problem-solving. It also promotes better long-term memory and enhances your ability to flexibly apply what you’ve learned to new situations. This is exactly what you need to tackle unexpected exam questions and get the best grade.

*Taylor, K., & Rohrer, D. (2010). The effects of interleaved practice. Applied Cognitive Psychology, 24, 837–848.

#2 Getting Practical with Inquiry-Based Learning

It’s all about recognizing connections and understanding that concepts, facts, and details only show their true meaning in comparison to others.

Let’s take an example:

In economics, a single price doesn’t tell us much without considering supply and demand. The balance between these forces helps us understand market dynamics and predict trends.

In literature, an isolated character description doesn’t mean much without understanding their relationships with other characters. The connections and conflicts between characters give stories depth and meaning, making literature richer and more engaging.

Understanding relationships gives learning its relevance. Since people tend to remember meaningful things better, these connections help us understand and retain complex topics.

Let’s consider an analogy in music:

In a song, a single note might seem insignificant without the surrounding melody. The way each note harmonizes with the others creates a beautiful tune, which gives the song its character and emotion. The context of each note within the melody and rhythm makes the music coherent and enjoyable.

Suddenly, all the pieces fit together. Instead of hearing isolated notes, you understand how they fit into the larger composition, which gives everything more meaning and solidifies and deepens your knowledge.

#3 Fostering Curiosity

I’ve already mentioned several times how important curiosity is. With some topics, it’s easy to spark a natural curiosity.

Out of genuine interest, more and more questions about the topic come to mind, and you automatically delve deeper into the subject matter. But what if you struggle with certain topics? (Which, by the way, is completely normal.)

In this case, you could rely on pre-made questions to better understand connections and their importance.

Questions like “Why is this concept important?” and “How is this related to other concepts?” help you dive deeper into the topic. Once you have the answer to one question, move on.

What new questions come up now?

It’s best to write down the answers so you can revisit your thought process later.

Linear notes (writing from left to right, top to bottom) aren’t ideal because your thought processes aren’t linear. So, it’s best to start in the middle of the page and observe how your thoughts develop.

You can also go a step further and visualize connections using mind maps.

3 Benefits of Inquiry-Based Learning

If you’re still not convinced, I’ve got three benefits of this method to motivate you to give it a try.

  1. Boost for Your Brain: Inquiry-based learning trains your brain to analyze complex problems and find creative solutions. You need this not only in your studies but also in the “real” world at work. The earlier you adopt the perspective of inquiry-based learning, the better.
  2. Bye-bye, Boredom! By pursuing your own questions, you incorporate your interests and identity into the learning process. When you follow a topic with curiosity, it becomes relevant to you. That’s why you can’t remember your neighbor’s license plate but can quote several episodes of “Friends.” You followed “Friends” with curiosity, so it was relevant to you – while your neighbor’s license plate isn’t connected to you, so it’s pretty irrelevant.
  3. Fit for the Future: The world needs people who can solve problems, and inquiry-based learning prepares you for that. It teaches you to ask questions, recognize challenges, and find creative solutions. And the best part? It makes you a lifelong learner, always open to new knowledge and experiences. In a world where the ability to adapt, think critically, and continuously learn is priceless.
Statistical Significance (Simply Explained)

“The study’s result is statistically significant” is a phrase you’ve likely heard someone use when discussing scientific research. But what exactly does that mean?

What calculation is behind statistical significance, and when is it helpful?

In this video, you will find answers to these questions, and more.

I will also explain how statistical significance can deceive us – if we forget what it cannot tell us.

This knowledge will empower you to critically review scientific studies and their results, allowing you to judge whether the arguments made are actually robust.

Firstly, let’s distinguish between ‘significance’ in everyday language and ‘statistical significance.’ We usually call something significant if it’s large or noteworthy.

However, ‘statistically significant’ doesn’t necessarily imply importance. Indeed, a statistically significant result can be quite minor and inconsequential in some cases.

Statistical significance becomes relevant when we use statistical methods to analyze quantitative datasets, especially to check if there’s a potential effect between two variables.

Imagine conducting an experiment where we manipulate one variable (like giving people a dietary supplement) and observe its effect on another (such as their training endurance).

If we find this effect to be statistically significant, it’s time to celebrate and head home, right? Well, it’s not that straightforward, but more on that later.

Statistical significance helps us determine the likelihood of a measurement result occurring by chance versus indicating a real effect.

If we deem a result statistically significant, it suggests that the result from the analysis of our sample might also apply to a wider population.

Statistical Significance and Sample Size

Typically, studies are not conducted with all individuals representing a specific group (i.e., the entire population) but with a sample from this population.

For example, if you conduct a survey, maybe 200 people participate. In an experiment, it might be 60. Or perhaps you’ve collected data from social media or businesses, involving 1000 or more subjects.

These samples always represent a population, such as all “citizens who are allowed to vote in the US” or all “higher education students” and so on. Researchers then aim to generalize the results of a survey or experiment with a small group from this population (i.e., the sample) to the whole population.

The size of these samples is crucial when interpreting significance tests.

The smaller the sample, the harder it is to detect a statistically significant relationship. This is because chance plays a greater role, and a very large effect must be present for chance to be statistically ruled out.

The larger the sample, the quicker statistically significant relationships can be measured. This is because larger samples more closely approximate the entire population, making a random result increasingly unlikely.
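
These two tendencies can be made concrete with the standard error of the mean, σ/√n, which quantifies how much a sample mean is expected to fluctuate by chance alone. A minimal Python sketch (the population standard deviation of 15 is an invented value for illustration):

```python
import math

def standard_error(sigma, n):
    """Standard error of the mean: expected chance fluctuation of a sample mean."""
    return sigma / math.sqrt(n)

sigma = 15  # assumed population standard deviation (invented for illustration)
for n in (30, 200, 1000):
    print(f"n = {n:4d}  ->  standard error = {standard_error(sigma, n):.2f}")
```

Because of the square root, quadrupling the sample size only halves the standard error – but the trend is clear: the larger the sample, the smaller the fluctuations that chance alone can explain.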

p-Value, Test Statistic, and Null Hypothesis

A central mathematical figure for testing statistical significance is the p-value. The p-value summarizes the results of a measurement and helps determine how likely it is that the result is due to chance or an actual effect. However, the magnitude of this effect cannot be determined from the p-value alone.

More specifically, the p-value is the probability that, assuming the null hypothesis is true, the test statistic will take the observed value or an even more extreme one.

Wait a moment – let’s slow down. Here we’ve introduced two new terms.

Test Statistic and Null Hypothesis

In a significance test, two hypotheses are crucial:

H0: There is no effect.

H1: There is an effect.

Through a significance test, the null hypothesis (H0) can be rejected.

For example, this might happen if the p-value is below 0.05. If so, there is reason to believe that an effect exists beyond mere chance.
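
The decision rule itself fits in a few lines of Python; a sketch (the p-values passed in are invented for illustration):

```python
ALPHA = 0.05  # conventional significance level

def decide(p_value, alpha=ALPHA):
    """Reject the null hypothesis when the p-value falls below the significance level."""
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(decide(0.031))  # below 0.05: evidence of an effect beyond mere chance
print(decide(0.200))  # above 0.05: the result is compatible with chance
```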

The test statistic, a function of potential outcomes, defines a rejection region. If the result falls into this area, the null hypothesis is to be rejected.

The size of this region is determined by the significance level, usually set at 0.05, or 5%. This was once arbitrarily established by someone (named Ronald Fisher), but sometimes the significance level is set at 0.01, or 1%.

Whether a result is statistically significant largely depends on the significance level used. However, a p-value becomes increasingly impressive the smaller it is.

Determining Statistical Significance with the Student’s t-Test

A popular test for checking significance is the so-called Student’s t-Test. It’s not named so because it’s meant to drive students to despair.

Its inventor, William Sealy Gosset, initially published his ideas on this test under his alter ego “Student.”

The t-test is a hypothesis test and is often used with small samples. It aids in deciding whether to reject the null hypothesis. The null hypothesis is represented by the t-distribution, which offers an advantage over other functions like the normal distribution for small samples.

The t-test is applied to detect statistically significant differences between two variables. It can compare the mean of one variable with the mean of another. This is the most common application of the test.

Example:

We conduct an experiment with two groups of students. Both groups take the same English exam. However, one group studied using a flashcard app, while the other did not.

We might hypothesize that the group using the app achieved better results. In a t-test, we would compare the mean test scores of both groups.

It is also possible to compare the mean of a variable with a specific target or expected value.
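
A sketch of the flashcard comparison in Python, using the two-sample (Welch) form of the t-statistic, t = (x̄₁ − x̄₂) / √(s₁²/n₁ + s₂²/n₂); all exam scores here are invented:

```python
import math
import statistics

def welch_t(a, b):
    """Two-sample t-statistic (Welch form): compares the means of two groups."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)  # n - 1 in the denominator
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

app_group = [82, 75, 90, 68, 88, 79]  # studied with the flashcard app (invented scores)
control   = [70, 66, 81, 60, 77, 72]  # studied without the app (invented scores)
print(round(welch_t(app_group, control), 2))
```

A positive t-value here means the app group's mean is higher; whether that difference is significant still depends on comparing the t-value to the t-distribution, as described below.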

The t-distribution also follows the shape of a bell curve.

For the t-test, a t-value is calculated using a specific formula. The formula for a t-test comparing a sample mean to a hypothetical mean (target value) is given by:

t = (x̄ – μ) / (s / √n)

  • x̄ is the sample mean,
  • μ is the hypothetical mean (target value),
  • s is the sample standard deviation, and
  • n is the sample size.
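
As a sketch, the formula can be applied directly in Python using only the standard library (the sample of exam scores and the target value of 100 are invented for illustration):

```python
import math
import statistics

def t_value(sample, mu):
    """One-sample t-value: t = (x-bar - mu) / (s / sqrt(n))."""
    n = len(sample)
    x_bar = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (divides by n - 1)
    return (x_bar - mu) / (s / math.sqrt(n))

scores = [104, 98, 110, 102, 99, 107, 101, 96, 105, 103]  # invented sample
print(round(t_value(scores, mu=100), 2))
```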

The t-value

The calculated t-value is then compared to the critical values from the t-distribution, based on the degrees of freedom (which, in this context, is typically n – 1) and the desired level of significance. If the t-value is close to zero, it indicates no significant difference between the sample mean and the hypothetical mean (target value). If the t-value falls in the critical region at the tails of the distribution, the difference is significant enough that the null hypothesis (no difference) should be rejected, suggesting an effect.

The critical regions (α/2) are determined by the significance level. For a two-tailed test with a significance level of 5%, you would have 2.5% in the left tail and 2.5% in the right tail of the distribution. A two-tailed test is used when the hypothesis is non-directional (“There is some effect”). The test is one-tailed when the hypothesis is directional (“There is a positive/negative effect”). In that case, the entire α (e.g., 5%) is allocated to one side of the distribution, depending on the direction of the hypothesis.
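
For example, a standard t-table gives, for 9 degrees of freedom and α = 5%, a two-tailed critical value of about 2.262 and a one-tailed critical value of about 1.833. A sketch of the two decision rules (the t-value of 1.95 is invented for illustration):

```python
# Critical values from a standard t-table for df = 9 and alpha = 0.05
T_CRIT_TWO_TAILED = 2.262  # 2.5% in each tail
T_CRIT_ONE_TAILED = 1.833  # all 5% in the upper tail

def reject_two_tailed(t):
    """Non-directional hypothesis: reject H0 if |t| exceeds the two-tailed value."""
    return abs(t) > T_CRIT_TWO_TAILED

def reject_one_tailed(t):
    """Directional (positive) hypothesis: reject H0 if t exceeds the one-tailed value."""
    return t > T_CRIT_ONE_TAILED

t = 1.95  # invented t-value
print(reject_two_tailed(t), reject_one_tailed(t))
```

Note that the same t-value of 1.95 is significant under the one-tailed rule but not under the two-tailed rule – which is why the direction of the hypothesis must be fixed before running the test.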

Summary

Statistical Significance is an important tool to assess the results of quantitative studies that aim to measure an effect between two variables. It tells us how probable it is that our result is based on an actual effect, or that the result was based on mere chance.

However, statistical significance does not tell us how big an effect is. This means that even though an effect is statistically significant, the effect might be very minimal. We can also never say with absolute certainty that the result was not created by chance – even with a statistically significant result, there is still a small probability left that there is no effect.

Theoretical Sampling in Grounded Theory (Simply Explained)

What is theoretical sampling in grounded theory and other qualitative research?

Today, we’re going to dive into this question by exploring the origin of this approach and distinguishing theoretical sampling from other types of sampling.

By the end of this video, you’ll fully understand the tradition of the term, why theoretical sampling is different, and, of course, how you can apply it in your own empirical work.

Grounded Theory (Background)

To grasp what we mean by theoretical sampling, we need to go back to the origin of the Grounded Theory methodology.

In the 1960s, sociologists Barney Glaser and Anselm Strauss developed the Grounded Theory approach together. Their aim was to counter the prevailing quantitative paradigm and its deductive logic with a structured method for inductive theory building based on qualitative data.

The goal of Grounded Theory is not to test predefined hypotheses and thereby review or refine existing theories. Instead, its main task is generating new theories based on empirical data.

What is Sampling?

Next, we need to understand what sampling involves. The term refers to the selection of a sample.

A sample is a “selection of people or objects that represents a larger population and provides information about it” (Statista, 2020). Samples play a crucial role in empirical social research as they provide access to the data to be analyzed for a research project.

Theoretical insights are drawn from the results based on the investigation of the sample. These insights are generally intended to be valid beyond the scope of the sample itself. That’s why choosing the right sample is so important.

When writing the methodology section of your academic work, you should always make a strong case for how your sample is composed and why this composition is advantageous for your research goal.

Sampling in Quantitative Research

The Statista definition I just mentioned is influenced by a core principle of quantitative research: the generalizability of statistical relationships from a small sample to a larger group of people or objects.

Let’s say 100 kindergarten teachers fill out a survey, and the results are analyzed. These results are often interpreted in a way that makes statements about all kindergarten teachers represented by the sample.

In quantitative research designs, we can broadly distinguish between random samples and non-probabilistic samples. An ideal random sample consists of a group randomly selected from all persons or objects belonging to the total population.

Implementing this is challenging, as you likely cannot access all kindergarten teachers in one country or the world. Therefore, systematic or arbitrary selection methods also exist, where you might include individuals or objects in the sample that you simply have access to.

Sampling in Qualitative Research

In qualitative research, we need different sampling techniques. Here, randomness is not crucial, but rather the researcher’s judgement.

In “Purposeful Sampling,” cases or individuals are selected who, in the researcher’s view, offer a particularly high degree of information richness in relation to the research subject.

In “Snowball Sampling,” an initial case or expert is identified. Based on the knowledge or contacts of this individual, the researcher then gains access to further interesting cases and experts.

This approach can be helpful because the researcher alone might never have noticed these cases or gained access without a facilitator.

What is Theoretical Sampling?

The sampling methods mentioned so far have one thing in common: the sampling occurs BEFORE data analysis.

And that brings us back to Grounded Theory and theoretical sampling. For Grounded Theory to function, data analysis and sampling must work closely together.

Round 1

Since there is no theory at the beginning of the process, you start with a typical Purposeful Sampling and collect data from an organization or individuals based on the most important criteria for you.

Then, you perform typical steps of the Grounded Theory approach. I won’t go into these steps here – please refer to other tutorials on my channel.

After performing open, selective, and theoretical coding according to Glaser or open, axial, and selective coding according to Strauss, you have identified one or more central theoretical concepts. You may already suspect connections between them or have identified subthemes.

The fact is, your theoretical idea is still in its infancy. To solidify it, you need new data.

Round 2 (Theoretical Sampling)

This is where theoretical sampling comes into play. This time, you make your selection deliberately, based on the theory you have developed at this point.

What does that mean exactly?

Let’s say you’re developing a theory that explains the factors influencing the identity formation of employees in the context of working-from-home.

After your initial interviews with employees, you might have found that the characteristics of their workplace technology are central to their identity-building.

However, you don’t know exactly what about the use of technology is so crucial for identity formation. Could it be the type of hardware, consisting of laptops and smartphones? Or the software tools? Or how they are used?

To learn more, you now select new individuals who have extensive knowledge in this particular area. This could, for example, be members of the IT department of the company. You could also interview the same individuals again, but this time ask targeted questions about the specific theoretical connection you want to better understand.

After the second round, your mini-theory may already be taking shape. But there’s still something you don’t know:

(For example) Why must employees work with technology that is outdated and, from the company’s perspective, actually has a negative impact on their identity formation?

To find the answer, there’s no way around speaking with decision-makers. To complete your theory, you finally interview employees in management positions.

Round 3?

Now your mini-theory looks quite solid. But have you overlooked something? After speaking again with two employees, you find they can’t tell you anything new. Your theory seems accurate.

This is your cue to stop data collection.

Theoretical Sampling according to Strauss and Corbin (1998)

Strauss and Corbin further specified theoretical sampling in their seminal book. They distinguish between four stages:

  1. Open Sampling
  2. Relational Sampling
  3. Variational Sampling
  4. Discriminate Sampling

These stages provide more structure and define individual steps, which can be particularly helpful at the beginning.

Note that the recommendations by Strauss and Corbin work well only with the coding methods they also propose (open, axial, and selective).

After Barney Glaser and Anselm Strauss had a bit of an argument, two interpretations of Grounded Theory developed: one by Glaser and the other by Strauss. Make sure you understand the differences and align your own work with one of these interpretations.

If you want to learn more about the dispute between the two and the differences between Glaserian and Straussian Grounded Theory, you can read it here.