
How to do a Deductive Thematic Analysis (Theory-Driven Qualitative Coding)

You want to conduct a deductive thematic analysis and categorize your qualitative data based on pre-existing theory and concepts?

You’re in the right place.

In this article, I’ll explain in detail and in an easy-to-understand manner how to perform a deductive thematic analysis in 5 simple steps.

What is Thematic Analysis?

To make sure that we are on the same page about thematic analysis, I will mainly refer to the understanding of the method described by Braun and Clarke (2006; 2019; 2021).

Please note that other authors have their own take on thematic analysis; however, the explications by Braun and Clarke seem to be the most useful to the research community.

It is important to understand that thematic analysis is flexible, which means you can apply it in different variations and customize it for your own needs.

The only thing you need to be careful about is that you do not slap a label on your approach in your methods section and then do something completely different that is not in line with the methodological ideas of the authors you just cited (the “label”).

Flexibility in thematic analysis means that you can combine different approaches such as inductive and deductive coding within the same study. But you don’t have to. It’s up to you and what makes the most sense for achieving your research objectives.

In this tutorial, however, I am going to focus on applying thematic analysis in a theory-driven, deductive logic.

If you would like to learn more about inductive thematic analysis, please refer to my previous tutorial about this method.


What is Deductive Thematic Analysis?

Whereas inductive thematic analysis is data-driven (bottom-up), deductive thematic analysis is theory-driven (top-down).

This means that you do not develop your themes based on the statements you find in your qualitative data (e.g., interview transcripts) or the underlying patterns of meaning of these statements.

Instead, you take these statements and classify them into a pre-existing theoretical structure.

For deductive thematic analysis, therefore, you need to think about theory before you begin your analysis.

Let’s look at the steps that you need to go through.

#1 Define your Themes

Option 1: Use pre-existing theory

For a research question that involves a specific theory, you should develop your set of pre-defined themes based on that theory. For instance, if the research question is: “What influence does remote work have on organizational identity?”, it references the “Organizational Identity Theory” (Whetten, 2006).

Theories in social sciences are usually based on the work of individual authors who have defined a specific model or the components of a theory. These can be dimensions, variables, or constructs. We take those and make them our pre-defined “themes”.

Themes

Choose the work of an author or team of authors and read the corresponding book or paper thoroughly. In our example, this would be Whetten’s 2006 paper. The author names three dimensions representing organizational identity: the ideational, definitional, and phenomenological dimensions. These dimensions would serve as excellent main themes for your study.

Sub-themes

The original source will provide further details on how these dimensions are defined, which you can use to form subthemes. It may also help to read additional literature that builds on or explains this theory. Often, primary sources are quite complex.

However, you can break down any theory into its components and logically assemble a list of themes and subthemes. A popular approach in thematic analysis is to take the main themes from the theory (in the example: ideational, definitional, and phenomenological aspects of organizational identity) and develop the subthemes inductively, based on the content you find in your data.

Option 2: Derive Themes from the Current State of Research

If your research question does not target a specific theory, such as “How are remote work models implemented in the manufacturing industry?”, you do not consult a single pre-existing theory but instead turn to various current studies and extract your themes upfront.

It’s best to work with a table where you create themes and subthemes on the left and note the source(s) from which you derived them on the right.

Examples of categories related to the example could include: “Technological Infrastructure”, “Corporate Culture”, “Work Time Models”, and so on.

It’s essential that these themes clearly emerge from your review of the literature. Don’t worry about overlooking something – you can always expand or adjust your list of themes after an initial analysis round if you notice certain contents are not covered or if you encounter new literature in the meantime.

#2 Create a Codebook

To guide your analysis, you can work with coding guidelines, which are often referred to as a codebook.

Coding in this context simply means classifying a piece of content or statement as part of a theme.

A codebook is particularly useful if you are not the only person coding the data. But it also gives your method more rigour as you systematize your coding process.

Three things are particularly important to consider when you create a codebook for your deductive thematic analysis:

2.1 Define the Themes

You can cover this step with the aforementioned table: put it in the document that will become your codebook.

Add another column where you precisely define when a text segment belongs to a specific category or not.

You can use a concise description of the theme to do so. Make sure to reference the particular theory or literature from which you derived the theme.

2.2 Use Anchor Examples

For each theme or subtheme, you should insert at least one example in the codebook.

This example represents the respective theme. It could simply be a direct quote from your interviews; if you are analyzing social media content, it could be a tweet or a post, for example.

You might need to do some initial coding until you find a suitable example that you can put in your codebook.

2.3 Define Coding Rules

You can then add further comments that establish rules for how a coder should decide when a data segment is not clearly assignable to a theme.

This ensures that you act consistently throughout the coding process.
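If you like to keep your codebook in a structured, machine-readable form in addition to the written document, here is a minimal sketch of what that could look like (assuming Python; the theme, definition, anchor example, coding rule, and source below are invented purely for illustration):

```python
# A minimal, illustrative codebook structure - not a prescribed format.
codebook = [
    {
        "theme": "Ideational dimension",
        "definition": "Statements about members' shared beliefs regarding 'who we are' "
                      "as an organization (derived from Whetten, 2006).",
        "anchor_example": "'Even when working from home, I still feel part of what this company stands for.'",
        "coding_rule": "Only code here if the segment refers to beliefs or self-understanding, "
                       "not to formal identity claims or observable routines.",
        "source": "Whetten (2006)",
    },
    # ... one entry per theme or subtheme
]

def coding_rule_for(theme: str) -> str:
    """Look up the coding rule for a theme so coders can check it quickly."""
    for entry in codebook:
        if entry["theme"] == theme:
            return entry["coding_rule"]
    raise KeyError(f"No codebook entry for theme: {theme}")

print(coding_rule_for("Ideational dimension"))
```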

#3 Do the Coding

Consider using software such as NVivo or Taguette to support your coding.

This helps you to organize your analysis, especially if you have a lot of data.

Option 1: Coding alone

If you are coding alone, you can set milestones at certain percentages of the dataset to build in time for review.

There is no strict rule for this, but I would recommend coding about 10% of the data and then stopping to check whether your codebook works.

If not, make changes to it and start over.

The next milestone could be somewhere around 50% of coding all your data.

Check the distribution of data segments that you assigned to your themes and make adjustments to the coding rules if necessary.

Then go ahead and finalize the coding.

Option 2: Coding in a team

If you are coding in a team, the same milestones apply.

However, now you meet as a team and discuss your coding. Compare different examples and check with each other if you are all using the codebook as it was intended.

After you have finalized the coding, you may consider calculating an inter-rater reliability measure such as Cohen’s Kappa.

Here you will get a statistical value that shows how strong the agreement is between you and the other coders.

You can only calculate it if all team members code the same portion of the data independently.

For example, you could take 10% of your data and everyone codes it independently. Based on this coding, you calculate the inter-rater reliability and report the value in your methods section.

If the inter-rater reliability is not good, you might have to go back to the codebook or redo some of the coding so that you reach better agreement within the team.
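To make this concrete, here is a minimal sketch of such a calculation for two coders (assuming Python with scikit-learn installed; note that Cohen’s Kappa compares exactly two coders, and the theme labels below are invented purely for illustration):

```python
# A minimal sketch: Cohen's Kappa for two coders on the same ten data segments.
from sklearn.metrics import cohen_kappa_score

# Theme assigned by each coder to the shared subset of segments (illustrative labels)
coder_a = ["ideational", "definitional", "ideational", "phenomenological", "ideational",
           "definitional", "definitional", "ideational", "phenomenological", "ideational"]
coder_b = ["ideational", "definitional", "ideational", "ideational", "ideational",
           "definitional", "phenomenological", "ideational", "phenomenological", "ideational"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's Kappa: {kappa:.2f}")
```

The resulting value is often read against rough benchmarks (for example, roughly 0.61–0.80 is commonly described as substantial agreement), but in any case report the exact value and how you interpret it in your methods section.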

#4 Present the Findings

The next challenge is to translate this deductive thematic analysis into a structured and reader-friendly findings section.

I recommend balancing descriptive reporting of the results (e.g., with raw anchor examples (=direct quotes) from your data) and some analytical interpretation (in your own words).

Start with the structure by turning your list of themes into headings. Use subthemes, if you have any, as subheadings.

Then, add the quote examples and explain them in your own words.

Expand these explanations and examples with additional paraphrases that you consider important, and try to explain how the data connects to the pre-defined theme you have derived from literature or theory.

Always support arguments with paraphrases or direct quotes from your data.

Also, make sure to link the subchapters with appropriate transitions.

#5 Discuss Your Findings

What do these findings mean?

Use the discussion section of your paper, report, or thesis to connect back to the theory.

For a deductive thematic analysis, you must discuss your findings in light of the theory or literature you started with.

Writing an outstanding discussion is an art that goes beyond the scope of this tutorial – feel free to check out my tutorial on writing a discussion.

However, consider using tables and figures as additional tools to organize your findings or make it easier for the reader to spot what your most important findings are.

Maybe very few data segments were assigned to one particular theme? Or a lot to another?

Discuss what this means in regard to your research question.

If you have specific questions about thematic analysis, leave them in the comments below.

If you want me to dive deeper into a particular topic in a separate video, let me know in the comments as well.

Literature

📚 Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101.

📚 Braun, V., & Clarke, V. (2019). Reflecting on reflexive thematic analysis. Qualitative Research in Sport, Exercise and Health, 11(4), 589–597.

📚 Braun, V., & Clarke, V. (2021). One size fits all? What counts as quality practice in (reflexive) thematic analysis? Qualitative Research in Psychology, 18(3), 328–352.

📚 Whetten, D. A. (2006). Albert and Whetten revisited: Strengthening the concept of organizational identity. Journal of Management Inquiry, 15(3), 219–234.


How Do You Become a Professor? (3 Possible Paths)


How do you become a professor?

Well, if you are considering a career in academia, then becoming a professor is the ultimate, and often only, goal.

Or maybe you are just curious about what trials those old folks endured to earn their spot at the front of the lecture hall?

Then this article is for you.

When I was a student, I had no clue how the academic system worked.

And I bet you feel the same!

But we’re going to change that. By the end of this article, you’ll know exactly three paths that can lead to a professorship, and you’ll be able to decide whether this is something for you, or if you’d rather quickly turn your back on academia after your studies.

How Do You Become a Professor?

First, let’s explore the typical career trajectory for academics. You might be familiar with some of these steps, as they form the backbone of any academic career:

  1. PhD
  2. Postdoctoral Fellowship
  3. Assistant Professor
  4. Associate Professor
  5. Full Professor
Level 1: The PhD

Embarking on a PhD is like signing up for an academic marathon that takes around four years—if you’re lucky. Your completion time might depend on your field’s pace, your advisor’s style, and how often your experiments decide to actually work.

You can tackle a PhD as a full-time employee with teaching obligations (the norm in most countries), on a scholarship (as is mostly the case in the US or Australia), or as a side hustle alongside your full-time job. Get ready to learn, burn, and occasionally yearn for the finish line!

Level 2: Postdoctoral Fellowship

Think of it as the academic victory lap after your PhD. You’re not quite a professor yet, but you’re doing mostly research, maybe teaching a bit, and definitely networking like it’s your job (because it is).

It’s your time to shine in your field, beef up that publication pipeline, and charm future colleagues. Ready, set, research!

Level 3: Assistant Professor

This is the entry-level, tenure-track position where the academic rubber meets the road.

Here, you’ll teach, research, and contribute to university life, all while aiming for the grand prize of tenure.

Over about five to six years, you’ll need to impress with publications, teaching evaluations, and community involvement.

It’s your chance to prove you have what it takes for a long-haul career in academia. Get ready to juggle tasks and time like a pro!

Level 4: Associate Professor

The academic “level up” that comes after you’ve survived the tenure trials as an Assistant Professor.

In the US, this is typically when you have “earned” tenure, which means you can stay a professor for the rest of your career if you don’t mess up big time.

In other countries, tenure can also be granted at the assistant professor level.

Anyhow, you’ve now earned the luxury of job security and the joy of juggling even more duties.

More research, more grants, more students to mentor, and even more committee meetings.

Think of it as gaining the power to bend the academic universe, just a little bit, to your will.

Congratulations, you’re in the middle of the academic ladder — don’t look down!

Level 5: Full Professor

The academic world’s equivalent of reaching the mountaintop!

After years of research, teaching, and coffee-fueled late nights, becoming a Full Professor means you’ve published aplenty, shaped young minds, and possibly even figured out how to work the departmental photocopier.

It’s the peak where you get to enjoy the view, influence university policies, and still — yes, still — chase after the elusive work-life balance.

3 Different Paths to Becoming a Professor

What we’ve discussed so far reflects the career mechanisms of the academic system.

However, the actual achievements necessary to climb the ranks are another story.

Let’s now look at three different paths or strategies that can lead to the same goal—a professorship.

Path #1: Passion for Research

The most intuitive route to a professorship is through your talent and passion for research in your field. Here, it’s crucial how well you can translate this passion into tangible research results.

This route also often faces a major criticism of the academic system: the publish-or-perish culture. If you don’t publish enough or well enough, a career in academia is hard to achieve.

The good news? If research comes naturally to you, and you quickly see significant success, that’s a good indicator that this path might be the right one for you.

What awaits at the end of the journey, once you’ve secured a professorship?

Well, more research. It doesn’t stop. So, if research neither excites you nor comes easily, it could be challenging.

I often hear from PhD candidates that their passion lies not in research, but in teaching.

In this case, a career at a college specializing in teaching might be suitable.

Here, it’s not research but teaching and sometimes industry experience that pave the way to a professorship.

Path #2: Through Savvy Science Management

If the university route is your choice, there’s another path I’ve often observed: savvy science management and strategic planning.

This approach allows you to anticipate and occupy niches in topics with high demand. This can aid in advancing your research because journals are eager to publish these topics.

Or it might attract funding from third parties, such as government bodies, due to societal interest in a topic. An example is the High-Tech Agenda Bavaria in Germany, which has created 1000 or so professorships in areas like sustainable technologies and AI.

This means that a well-chosen thematic focus can aid you in appointment processes. It makes sense to align yourself in a way that your topics are likely to grow in significance in the future.

People who have secured a professorship this way are often also excellent at networking, although this is just a personal observation.

Path #3: The Roger Federer Way

The passionate researcher and the gifted networker represent two extremes. There’s also a middle path.

This path is about being a generalist.

My favorite analogy comes from the book “Range: Why Generalists Triumph in a Specialized World,” which includes the example of Roger Federer, one of the most successful tennis players of all time.

Throughout his career, Roger Federer was never the best at any single aspect of tennis.

Andy Roddick had the best serve.

Rafael Nadal had the best forehand, and Novak Djokovic had the best backhand.

However, Roger Federer was the most complete player overall, allowing him to achieve one success after another.

This analogy applies to academia as well: a generalist who can integrate diverse skills and knowledge may not stand out in one particular niche but excels by combining multiple strengths, potentially leading to a successful career in academia.

In science, as in nearly every other career, these principles apply.


Bonus Path: The Detour via other Countries

My personal favorite route to a professorship is through international experience. This aspect of the academic system is often a topic of heated debate.

This path is definitely a “to each their own” and “you have to decide for yourself” kind of deal. Moving abroad to secure a professorship isn’t something that’s expected of you.

Deciding how much other areas of your life should be sacrificed for the dream of becoming a professor is a choice you have to make yourself.

However, if you view an extended stay abroad as an opportunity for growth and a decidedly positive experience, then it could be the missing piece in your path to becoming a professor.

One of the advantages of the academic system is its compatibility across almost all national borders. The entire globe is your playing field.

If you choose to limit your playing field based on geographic factors, that will reduce your options, but that’s completely fine.

You decide, not the system.

If you have any questions about this, feel free to drop me a comment!


David Hume’s Problem of Induction (Simply Explained)


The problem of induction, as formulated by David Hume, addresses one of the most significant questions in epistemology: what can science truly know?

If you’ve ever delved into empirical research methods, you’ve likely encountered the terms induction and deduction.

While a Grounded Theory approach follows an inductive logic, an experiment relies on deductive logic. Is one better than the other? How are both connected, and why are scientific results never definitive?

The answers to all these questions are tied to Hume’s problem of induction. In this video, you’ll learn everything you need to know to hold your own in a discussion with a ninth-semester philosophy student.

Additionally, this knowledge will help you better understand and critique scientific methods. It’s definitely worth sticking around.

What is Inductive Reasoning?

In science, inductive reasoning involves deriving a general theory from the observation of a specific phenomenon.

For instance, consider an interview study where 30 interviews are conducted. The data collected is analyzed using Grounded Theory, leading to a new theory.

Induction isn’t limited to qualitative research. Any type of research that draws conclusions about a theory or natural law from observations employs induction.

This could be a statistical evaluation, where conclusions about the entire population are drawn from a sample, or it could be a physicist making repeated measurements from which she derives a natural law.

What is Hume’s Problem of Induction?

David Hume’s problem of induction is a fundamental question in epistemology that deals with whether and under what conditions inductive inferences can be considered reliable or rational.

The Scottish philosopher first raised this question in the 18th century in his work “A Treatise of Human Nature.” Although Hume initially discussed the problem only in the context of empirical science, it remains relevant to all sciences that recognize induction as a valid proof method.

And there are many.

Having a bit of knowledge about the problem of induction is certainly beneficial. It continues to be referred to as “the problem of induction” because it has yet to be solved. For over two centuries, philosophers of science have been grappling with it, including the famous Karl Popper. But more on that later.

An Example of an Inductive Inference

To better understand the problem of induction, let’s look at an example of an inductive inference.

An ornithologist conducts an observation in nature. During his research expedition, he observes 100 swans, all of which are white. That’s 100%.

Assumption 1: 100% of the observed swans are white.

From this, he concludes that all swans are white.

Conclusion 1: All swans are white.

If he reasons in this way, it doesn’t matter how many more swans he observes. He could even observe 100,000 swans. The conclusion remains what logicians describe as non-compelling: the 100,001st swan could be black, and his conclusion would be false.


The Uniformity of Nature

For this conclusion to become logically rational and allow the ornithologist’s colleagues to rest easy, he must add an additional condition.

Assumption 1: 100% of the observed swans are white.

Assumption 2: All swans are similar to those already observed.

Conclusion 1: All swans are white.

This second assumption is also known as the principle of the uniformity of nature. It means that all future observations will be similar to past observations.

Or, put simply: In the future, everything will always occur as it has in the past.

So far so good.

If the principle of the uniformity of nature is true, then there is no problem of induction. The inductive conclusion would be logically valid.

But then David Hume comes into play.

He asserts: There is no logical basis for the principle of the uniformity of nature. It cannot be justified.

Hume himself and those who followed have tried to logically justify this principle, but have failed. This is partly because these attempts at justification themselves require inductive reasoning, which is subject to the problem of induction.

Hume writes:

“It is therefore impossible that any arguments from experience can prove this resemblance of the past to the future; since all such arguments are founded on the supposition of that resemblance. Let the course of things be ever so regular hitherto, that alone, by no means, assures us of the continuance of such regularity.”

If you’ve ever invested money in the stock market, then you know what he means.

Is Deduction the Solution to the Problem of Induction?

Two hundred years after Hume, another big player in the field of epistemology enters the scene: Karl Popper.

And he believes he has found the solution to the problem of induction.

Actually, he can’t solve it either; instead, he suggests simply ignoring it. He completely agreed with David Hume that general laws cannot be derived through induction.

What one can logically do, however, is falsify general laws.

Instead of generating a theory based on an inductive conclusion, one could simply concoct a theory (form a hypothesis), and then try to falsify it.

What remains are only the theories that have not been falsified (yet).

Here, we are no longer in the realm of induction but in that of deductive reasoning (from general to specific).

For the philosophy of science, Popper’s new approach was a milestone. However, it was not the hoped-for solution to the problem of induction.


Why We Should Sometimes Trust Induction

Many philosophers later showed that even Popper’s approach to falsification relies partly on inductive reasoning.

While Popper rejected all forms of induction as irrational early in his career, he softened his stance towards the end.

He acknowledged that under certain circumstances, there might be a pragmatic justification for induction. Consider the context of medicine, for example.

If we were to completely reject induction, both doctors and patients would face a significant problem.

After diagnosing a disease, we choose a medication that has led to healing in thousands of past cases. We thus hope that the future will behave like the past and follow an inductive conclusion.

If we rejected induction as Popper originally intended, we would have no more reason to trust this medication than one that has never been tested.

Therefore, there seems to be a difference between pragmatic and purely theoretical induction. Due to these complications, the discourse in the philosophy of science largely reached a consensus that Popper could not solve the problem of induction either.


What This Means for Today’s Science

The problem of induction remains unsolved to this day. Concluding from this that science can know nothing with 100% accuracy is theoretically correct, but not practically helpful.

To better interpret the results of scientific studies, scientists must make a series of so-called judgment calls.

These are the additional assumptions we must make for science to be pragmatically implementable. That is, everyone must define for themselves what they are willing to assume, even if there is no formal logical basis for it.

As a scientist, one must therefore accept a certain risk of being wrong. How high that risk is, each researcher decides for themselves.

Lee and Baskerville (2012) define 4 such judgment calls.

The first one you already know:

#1 The future will behave like the past.

The risk here is that a theory or result may no longer be true once it is applied to a new context.

#2 The conditions in the new context are similar enough to apply the theory or result there.

Imagine you’ve determined a natural law on Earth. If you apply this law to understand a phenomenon on Mars, you must assume that the conditions there are similar enough to those on Earth.

This second judgment call must also be made on a smaller scale. If you want to apply the results of a management case study from Amazon to your mid-sized company, you must assume that the conditions are similar enough to do so.

#3 The theory or natural law covers all relevant variables.

When you want to apply a theory, you must assume that it is complete and hasn’t overlooked any variable.

#4 The theory is true.

This judgment call would probably not sit well with Karl Popper. But to apply a theory, you must assume it is true, even though Popper would argue this is never possible.

References

Lee, A. S., & Baskerville, R. L. (2003). Generalizing generalizability in information systems research. Information Systems Research, 14(3), 221–243. https://pubsonline.informs.org/doi/abs/10.1287/isre.14.3.221.16560

Lee, A. S., & Baskerville, R. L. (2012). Conceptualizing generalizability: New contributions and a reply. MIS Quarterly, 749–761. https://www.jstor.org/stable/41703479


How Inquiry-Based Learning Can Get You Top 1% Grades

What is Inquiry-Based Learning?

Tired of memorizing your lecture notes? It’s pretty dull, right? How about starting your exam prep with questions instead of answers?

With inquiry-based learning, you dive deeper into your course material and discover connections you didn’t see before. Find out how questions can transform your learning experience.

In this video, I’ll show you the 3 principles behind the “inquiry-based learning” approach, how you can become more active in your learning process, and why it leads to better exam results.

The Principles of Inquiry-Based Learning

In university, your professor typically spoon-feeds you information during lectures, or you read summaries in books or your notes. That means you’re quite passive when taking in information.

You can change that with inquiry-based learning.

Inquiry-based learning is a method where you actively ask questions and independently seek answers to understand a topic.

Instead of just memorizing facts, you can be curious and think critically. You discover knowledge and connections based on the questions YOU ask, not the other way around. In short, it’s about letting your curiosity run wild.

Inquiry-based learning is based on three principles: self-directed learning, critical thinking, and the role of questions.

Self-directed learning means you take control of your learning process.

Your critical thinking is fostered as you learn to question and verify information.

And questions are your tool and starting point to discover and understand new things.


Differences from Traditional Learning Approaches

Like most other students, do you learn with flashcards? Or maybe you use practice questions and past exams?

The result is that you become very good at answering those flashcards or practice questions. But it’s unlikely that these exact questions will appear in the exam.

And when a question comes up that wasn’t on your flashcards or practice questions, you struggle.

The challenge with unexpected exam questions is that they’re new and unfamiliar – you’ve never seen this kind of question before. Even if your practice questions are similar, these new questions require you to think differently to achieve the best grade.

These questions are fundamentally about identifying who really understands the material.

It’s about the ability to grasp multiple concepts simultaneously and discover connections that perhaps weren’t directly taught in the lecture. This deep understanding comes from connecting knowledge.

#1 Interleaving

And this is what you practice through inquiry-based learning. It’s all about the process:

How do things connect? Why are certain facts the way they are? So, it’s about the “why” behind the facts. Instead of just memorizing information, you try to connect topics. This aligns with the Interleaving Method.

With interleaving, you switch between different topics while learning, instead of focusing on a single topic through block learning.

Studies* show that interleaving is especially effective for problem-solving. It also promotes better long-term memory and enhances your ability to flexibly apply what you’ve learned to new situations. This is exactly what you need to tackle unexpected exam questions and get the best grade.

*Taylor, K., & Rohrer, D. (2010). The effects of interleaved practice. Applied Cognitive Psychology, 24, 837–848.


#2 Getting Practical with Inquiry-Based Learning

It’s all about recognizing connections and understanding that concepts, facts, and details only show their true meaning in comparison to others.

Let’s take an example:

In economics, a single price doesn’t tell us much without considering supply and demand. The balance between these forces helps us understand market dynamics and predict trends.

In literature, an isolated character description doesn’t mean much without understanding their relationships with other characters. The connections and conflicts between characters give stories depth and meaning, making literature richer and more engaging.

Understanding relationships gives learning its relevance. Since people tend to remember meaningful things better, these connections help us understand and retain complex topics.

Let’s consider an analogy in music:

In a song, a single note might seem insignificant without the surrounding melody. The way each note harmonizes with the others creates a beautiful tune, which gives the song its character and emotion. The context of each note within the melody and rhythm makes the music coherent and enjoyable.

Suddenly, all the pieces fit together. Instead of hearing isolated notes, you understand how they fit into the larger composition, which gives everything more meaning and solidifies and deepens your knowledge.

#3 Fostering Curiosity

I’ve already mentioned several times how important curiosity is. With some topics, it’s easy to spark a natural curiosity.

Out of genuine interest, more and more questions about the topic come to mind, and you automatically delve deeper into the subject matter. But what if you struggle with certain topics? (Which, by the way, is completely normal.)

In this case, you could rely on pre-made questions to better understand connections and their importance.

Questions like “Why is this concept important?” and “How is this related to other concepts?” help you dive deeper into the topic. Once you have the answer to one question, move on.

What new questions come up now?

It’s best to write down the answers so you can revisit your thought process later.

Linear notes (writing from left to right, top to bottom) aren’t ideal because your thought processes aren’t linear. So, it’s best to start in the middle of the page and observe how your thoughts develop.

You can also go a step further and visualize connections using mind maps.


3 Benefits of Inquiry-Based Learning

If you’re still not convinced, I’ve got three benefits of this method to motivate you to give it a try.

  1. Boost for Your Brain: Inquiry-based learning trains your brain to analyze complex problems and find creative solutions. You need this not only in your studies but also in the “real” world at work. The earlier you adopt the perspective of inquiry-based learning, the better.
  2. Bye-bye, Boredom! By pursuing your own questions, you incorporate your interests and identity into the learning process. When you follow a topic with curiosity, it becomes relevant to you. That’s why you can’t remember your neighbor’s license plate but can quote several episodes of “Friends.” You followed “Friends” with curiosity, so it was relevant to you – while your neighbor’s license plate isn’t connected to you, so it’s pretty irrelevant.
  3. Fit for the Future: The world needs people who can solve problems, and inquiry-based learning prepares you for that. It teaches you to ask questions, recognize challenges, and find creative solutions. And the best part? It makes you a lifelong learner, always open to new knowledge and experiences, in a world where the ability to adapt, think critically, and continuously learn is priceless.

Statistical Significance (Simply Explained)

“When a study’s result is statistically significant,” is a phrase you’ve likely heard someone use while discussing scientific research. But what exactly does that mean?

What calculation is behind statistical significance, and when is it helpful?

In this video, you will find answers to these questions, and more.

I will also explain how statistical significance can deceive us – if we forget what it cannot tell us.

This knowledge will empower you to critically review scientific studies and their results, allowing you to judge whether the arguments made are actually robust.

Statistical Significance

Firstly, let’s distinguish between ‘significance’ in everyday language and ‘statistical significance.’ We usually call something significant if it’s large or noteworthy.

However, ‘statistically significant’ doesn’t necessarily imply importance. Indeed, a statistically significant result can be quite minor and inconsequential in some cases.

Statistical significance becomes relevant when we use statistical methods to analyze quantitative datasets, especially to check if there’s a potential effect between two variables.

Imagine conducting an experiment where we manipulate one variable (like giving people a dietary supplement) and observe its effect on another (such as their training endurance).

If we find this effect to be statistically significant, it’s time to celebrate and head home, right? Well, it’s not that straightforward, but more on that later.

Statistical significance helps us determine the likelihood of a measurement result occurring by chance versus indicating a real effect.

If we deem a result statistically significant, it suggests that the result from the analysis of our sample might also apply to a wider population.


Statistical Significance and Sample Size

Typically, studies are not conducted with all individuals representing a specific group (i.e., the entire population) but with a sample from this population.

For example, if you conduct a survey, maybe 200 people participate. In an experiment, it might be 60. Or perhaps you’ve collected data from social media or businesses, involving 1000 or more subjects.

These samples always represent a population, such as all “citizens who are allowed to vote in the US” or all “higher education students” and so on. Researchers then aim to generalise the results of a survey or experiment with a small group from this population (i.e., the sample) to the whole population.

The size of these samples is crucial when interpreting significance tests.

The smaller the sample, the harder it is to detect a statistically significant relationship. This is because chance plays a greater role, and a very large effect must be present for chance to be statistically ruled out.

The larger the sample, the quicker statistically significant relationships can be measured. This is because larger samples more closely approximate the entire population, making a random result increasingly unlikely.

p-Value, Test Statistic, and Null Hypothesis

A central mathematical figure for testing statistical significance is the p-value. The p-value summarizes the results of a measurement and helps determine how likely it is that the result is due to chance or an actual effect. However, the magnitude of this effect cannot be determined from the p-value alone.

More specifically, the p-value is the probability that, assuming the null hypothesis is true, the test statistic will take the observed value or an even more extreme one.

Wait a moment – let’s slow down. Here we’ve introduced two new terms.

Test Statistic and Null Hypothesis

In a significance test, two hypotheses are crucial:

H0: There is no effect.

H1: There is an effect.

Through a significance test, the null hypothesis (H0) can be rejected.

For example, this might happen if the p-value is below 0.05. If so, there is reason to believe that an effect exists beyond mere chance.

The test statistic, a function of potential outcomes, defines a rejection region. If the result falls into this area, the null hypothesis is to be rejected.

The size of this region is determined by the significance level, usually set at 0.05, or 5%. This was once arbitrarily established by someone (named Ronald Fisher), but sometimes the significance level is set at 0.01, or 1%.

Whether a result is statistically significant largely depends on the significance level used. However, a p-value becomes increasingly impressive the smaller it is.

Determining Statistical Significance with the Student’s t-Test

A popular test for checking significance is the so-called Student’s t-Test. It’s not named so because it’s meant to drive students to despair.

Its inventor, William Sealy Gosset, initially published his ideas on this test under his alter ego “Student.”

The t-test is a hypothesis test that is often used with small samples. It aids in deciding whether to reject the null hypothesis. Under the null hypothesis, the test statistic follows the t-distribution, which has an advantage over other functions like the normal distribution for small samples.

The t-test is applied to detect statistically significant differences between two variables. It can compare the mean of one variable with the mean of another. This is the most common application of the test.

Example:

We conduct an experiment with two groups of students. Both groups take the same English exam. However, one group studied using a flashcard app, while the other did not.

We might hypothesize that the group using the app achieved better results. In a t-test, we would compare the mean test scores of both groups.

It is also possible to compare the mean of a variable with a specific target or expected value.

The t-distribution also follows the shape of a bell curve.


For the t-test, a t-value is calculated using a specific formula. The formula for a t-test comparing a sample mean to a hypothetical mean (target value) is given by:

t = (x̄ – μ) / (s / √n)

  • x̄ is the sample mean,
  • μ is the hypothetical mean (target value),
  • s is the sample standard deviation, and
  • n is the sample size.
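For illustration, with purely hypothetical numbers: if the sample mean is x̄ = 75, the target value is μ = 70, the standard deviation is s = 10, and the sample size is n = 25, then t = (75 – 70) / (10 / √25) = 5 / 2 = 2.5.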

The t-value

The calculated t-value is then compared to the critical values from the t-distribution, based on the degrees of freedom (which, in this context, is typically n – 1) and the desired level of significance. If the t-value is close to zero, it indicates no significant difference between the sample mean and the hypothetical mean (target value). If the t-value falls in the critical region at the tails of the distribution, the difference is significant enough that the null hypothesis (no difference) should be rejected, suggesting an effect.

The critical regions (α/2) are determined by the significance level. For a two-tailed test, with a significance level of 5%, you would have 2.5% in the left tail and 2.5% in the right tail of the distribution. A two-tailed test is used when the hypothesis is non-directional (“There is some effect”). The test is one-sided when the hypothesis is directional (“There is a positive/negative effect”). In that case, the entire α (e.g., 5%) is allocated to one side of the distribution, depending on the direction of the hypothesis.
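To make this concrete, here is a minimal sketch of the flashcard-app example from above (assuming Python with SciPy installed; the exam scores are invented purely for illustration, and scipy.stats.ttest_1samp would be the analogue for the one-sample formula shown earlier):

```python
# A minimal sketch of a two-sample t-test for the flashcard-app example (invented scores).
from scipy import stats

app_group = [78, 85, 92, 70, 88, 75, 95, 80, 84, 79]      # studied with the flashcard app
control_group = [72, 68, 80, 65, 74, 77, 70, 69, 73, 71]  # studied without the app

t_value, p_value = stats.ttest_ind(app_group, control_group)  # two-tailed by default
print(f"t = {t_value:.2f}, p = {p_value:.4f}")

alpha = 0.05  # the usual 5% significance level
if p_value < alpha:
    print("Reject H0: the difference between the group means is statistically significant.")
else:
    print("Do not reject H0: the difference could plausibly be due to chance.")
```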

Summary

Statistical significance is an important tool for assessing the results of quantitative studies that aim to measure an effect between two variables. It tells us how likely it would be to obtain a result like ours if there were no real effect, and thus helps us judge whether the result reflects an actual effect or mere chance.

However, statistical significance does not tell us how big an effect is. This means that even though an effect is statistically significant, the effect might be very minimal. We can also never say with absolute certainty that the result was not created by chance – even with a statistically significant result, there is still a small probability left that there is no effect.


Theoretical Sampling in Grounded Theory (Simply Explained)


What is theoretical sampling in grounded theory and other qualitative research?

Today, we’re going to dive into this question by exploring the origin of this approach and distinguishing theoretical sampling from other types of sampling.

By the end of this video, you’ll fully understand the tradition of the term, why theoretical sampling is different, and, of course, how you can apply it in your own empirical work.

Grounded Theory (Background)

To grasp what we mean by theoretical sampling, we need to go back to the origin of the Grounded Theory methodology.

In the 1960s, sociologists Barney Glaser and Anselm Strauss developed the Grounded Theory approach together. Their aim was to counter the prevailing quantitative paradigm and its deductive logic with a structured method for inductive theory building based on qualitative data.

The goal of Grounded Theory is not to test predefined hypotheses and thereby review or refine existing theories. Instead, its main task is generating new theories based on empirical data.

What is Sampling?

Next, we need to understand what sampling involves. The term refers to the selection of a sample.

A sample is a “selection of people or objects that represents a larger population and provides information about it” (Statista, 2020). Samples play a crucial role in empirical social research as they provide access to the data to be analyzed for a research project.

Theoretical insights are drawn from the results based on the investigation of the sample. These insights are generally intended to be valid beyond the scope of the sample itself. That’s why choosing the right sample is so important.

When writing the methodology section of your academic work, you should always make a strong case for how your sample is composed and why this composition is advantageous for your research goal.


Sampling in Quantitative Research

The Statista definition I just mentioned is influenced by a core principle of quantitative research: the generalizability of statistical relationships from a small sample to a larger group of people or objects.

Let’s say 100 kindergarten teachers fill out a survey, and the results are analyzed. These results are often interpreted in a way that makes statements about all kindergarten teachers represented by the sample.

In quantitative research designs, we can broadly distinguish between random samples and non-probabilistic samples. An ideal random sample consists of a group randomly selected from all persons or objects belonging to the total population.

Implementing this is challenging, as you likely cannot access all kindergarten teachers in one country or the world. Therefore, systematic or arbitrary selection methods also exist, where you might include individuals or objects in the sample that you simply have access to.

Sampling in Qualitative Research

In qualitative research, we need different sampling techniques. Here, randomness is not crucial, but rather the researcher’s judgement.

In “Purposeful Sampling,” cases or individuals are selected who, in the researcher’s view, offer a particularly high degree of information richness in relation to the research subject.

In “Snowball Sampling,” an initial case or expert is identified. Based on the knowledge or contacts of this individual, the researcher then gains access to further interesting cases and experts.

This approach can be helpful because the researcher alone might never have noticed these cases or gained access without a facilitator.

What is Theoretical Sampling?

The sampling methods mentioned so far have one thing in common: the sampling occurs BEFORE data analysis.

And that brings us back to Grounded Theory and theoretical sampling. For Grounded Theory to function, data analysis and sampling must work closely together.

Round 1

Since there is no theory at the beginning of the process, you start with a typical Purposeful Sampling and collect data from an organization or individuals based on the most important criteria for you.

Then, you perform typical steps of the Grounded Theory approach. I won’t go into these steps here – please refer to other tutorials on my channel.

After performing open, selective, and theoretical coding according to Glaser or open, axial, and selective coding according to Strauss, you have identified one or more central theoretical concepts. You may already suspect connections between them or have identified subthemes.

The fact is, your theoretical idea is still in its infancy. To solidify it, you need new data.

Round 2 (Theoretical Sampling)

This is where theoretical sampling comes into play. This time, you make your selection deliberately, based on the theory you have developed at this point.

What does that mean exactly?

Let’s say you’re developing a theory that explains the factors influencing the identity formation of employees in the context of working-from-home.

After your initial interviews with employees, you might have found that the characteristics of their workplace technology are central to their identity-building.

However, you don’t know exactly what about the use of technology is so crucial for identity formation. Could it be the type of hardware, consisting of laptops and smartphones? Or the software tools? Or how they are used?


To learn more, you now select new individuals who have extensive knowledge in this particular area. This could, for example, be members of the IT department of the company. You could also interview the same individuals again, but this time ask targeted questions about the specific theoretical connection you want to better understand.

After the second round, your mini-theory may already be taking shape. But there’s still something you don’t know:

(For example) Why must employees work with technology that is outdated and, from the company’s perspective, actually has a negative impact on their identity formation?

To find the answer, there’s no way around speaking with decision-makers. To complete your theory, you finally interview employees in management positions.

Round 3?

Now your mini-theory looks quite solid. But have you overlooked something? After speaking again with two employees, they couldn’t tell you anything new. Your theory seems accurate.

This is your cue to stop data collection.

Theoretical Sampling according to Strauss and Corbin (1998)

Strauss and Corbin further specified theoretical sampling in their seminal book. They distinguish between four stages:

  1. Open Sampling
  2. Relational Sampling
  3. Variational Sampling
  4. Discriminate Sampling

These stages provide more structure and define individual steps, which can be particularly helpful at the beginning.

Note that the recommendations by Strauss and Corbin work well only with the coding methods they also propose (open, axial, and selective).

After Barney Glaser and Anselm Strauss had a bit of an argument, two interpretations of Grounded Theory developed: one by Glaser and the other by Strauss. Make sure you understand the differences and align your own work with one of these interpretations.

If you want to learn more about the dispute between the two and the differences between Glaserian and Straussian Grounded Theory, you can read it here.


The Peer Review Process for Scientific Journals (Simply Explained)


Would you like to peek behind the curtain and better understand the peer review process for scientific journals?

In this article, I’ll explain to you…

  • The concept behind the peer review process for scientific journals
  • The various types of peer review processes for scientific journals
  • How to determine if an article has been peer-reviewed
  • Which types of articles you should avoid referencing in your own academic writing.

What is a Peer Review Process for Scientific Journals?

To ensure quality control in science, it has become standard practice for a submitted article to be anonymously reviewed by two or more experts in the same field of research.

An article is only published if the authors can satisfactorily address the criticism raised by these “reviewers”.

The history of the peer review process as we know it dates back to 1731. The Royal Society of Edinburgh then inspired several editors of philosophical journals to have their contributions reviewed by a committee of experts (Spier, 2002).

It is also recorded that Albert Einstein had his issues with the peer review process.

In the early 20th century, Einstein primarily published in German-language journals, which at the time did not have a peer review process. When he sent an article (by mail, of course) to the prestigious Physical Review in America, he was surprised by their practice of presenting his paper to an independent expert.

In a letter, he fumed over this, withdrew his work, and published it elsewhere. He believed the comments to be nonsensical and saw no reason to address them.


Types of Peer Review Processes

The three most common types of peer review processes are single-blind, double-blind, and open peer review.

The Single-Blind Peer Review Process

Here, the reviewers know the authors’ names, but the authors do not know the reviewers’ names.

The Double-Blind Peer Review Process

In this process, both the authors and the reviewers remain anonymous. This requires an editor who knows everyone’s identity.

The Open Peer Review Process

Here, everyone knows each other at all times. When an article is published, the reviewer reports are also published.

The last one is particularly progressive because it creates a lot of transparency and allows the iterations of an article to be tracked. However, removing anonymity introduces other problems and biases.

What Happens During the Peer Review Process?

The process begins with the authors submitting their work.

The Desk Reject

The manuscript then lands “on the desk” of an editor, who has two options. Should the article be sent for peer review, or not?

If not, the authors receive a “desk reject,” meaning the article is not even sent to reviewers but is directly and irrevocably rejected by the editor.

Reasons for a desk reject vary. For example, an article might be linguistically or stylistically so far from a publishable standard that it doesn’t make sense to occupy the time of several reviewers. However, the most common reason for a desk reject is actually the fit with the journal.

Journals have specific thematic focuses, and if an article deviates from these, even if it is of high quality, it is immediately rejected.


Major and Minor Revisions

In a few cases, an article may be so good and important that it is accepted immediately after a round of brief feedback.

For instance, many journals accelerated their peer review processes temporarily during the COVID-19 pandemic. It would have made no sense to drag urgently needed research through a process that takes years.

Normally, if a manuscript passes the desk stage, it moves to an editor who will oversee the article until publication.

There are different types of editors, such as an Editor-in-Chief, Senior Editors, or Associate Editors. The “lowest” category of editors is responsible for recruiting reviewers. Sometimes this editor remains anonymous, and sometimes not.

This editor sends the article to 2-3 reviewers, sets a deadline, and then it’s a waiting game.

Once the reports come back, the editor reviews the reports and, of course, the article, and writes their own report. This usually summarizes the key points of the reviewer reports and may also include additional points noticed by the editor.

The editor also decides on the next steps for the article. They can follow the reviewers’ recommendations or override them. In either case, all reports are sent to the authors. If the editor unjustifiably overrides all reviewers, they risk trouble from above, such as from the Editor-in-Chief.

If the decision is to invite a revision, the authors receive a deadline by which they may revise their manuscript, and then the process starts over.

Ideally, the same reviewers are invited to check the revision. A “Major Revision” involves substantial changes to the manuscript, while “Minor Revisions” or a “Conditionally Accept” only require minor adjustments.

The number of rounds an article must go through depends on the journal. The most prestigious journals often have the most difficult and longest processes or the toughest “desk”.

What Happens After a Peer Review Process?

In single- and double-blind processes, reviewer reports are generally not published, even if they are anonymous. This has its advantages, such as not having to worry about offending someone when criticizing their work or even rejecting it from publication.

Editors often face the unenviable task of having to reject works from renowned author teams, subsequently facing their anger and disappointment.

The reason the peer review process works is solely due to the reputation people gain from being a reviewer or editor of a particular journal. Moreover, everyone wants their own articles to be reviewed, so you might think twice before declining such a request, especially at the start of your scientific career.

How Can You Identify Peer-Reviewed Articles?

There are essentially two ways to do this.

Option 1: Research the Outlet

You’re not sure how, but somehow you stumbled upon an article through Google Scholar or Google. Research the name of the journal or conference and visit its website. There, you will usually find information on whether it employs a peer review process or not.

But that’s not the end of the story. There are thousands of questionable journals, such as the Open Access journals published by MDPI.


Although they officially have a peer review process, it’s a joke. Their business model is that authors pay a fee, and then their article gets published. If you’re interested in a video about questionable practices in science, just leave me a comment under this article!

With established publishers, authors or universities must also pay a fee, but you can assume that the peer review process is conducted properly.

Over time, try to identify the established publishers and journals or conferences of your discipline. Citing articles of dubious origin can negatively impact your own academic work. So, even if the article fits perfectly, it might be best to steer clear.

Option 2: Filter During Your Search

If you only search databases that index peer-reviewed articles, you won’t even have to ask this question.

Find out which databases list the most important publication outlets of your discipline and limit your search to these databases.

Further Reading

Spier, R. (2002). The history of the peer-review process. Trends in Biotechnology, 20(8), 357-358.

Categories
Study Hacks

How to Review Lecture Notes: 5 Strategies for A+ Grades

How to review lecture notes effectively is a secret that I only uncovered very late in my studies.

But you don’t have to make the same mistakes that I did, and many others still do.

I wish I had known the techniques that I am about to show you much earlier.

In this video, I’ll show you 5 strategies to transform the chaos in your notebook and your mind into structured knowledge and A+ grades.

Why Revising Lectures is the Key to Success

Knowing how to review lecture notes properly is the missing piece to your puzzle. During the lecture, you collect the other pieces—facts, concepts, ideas.

In your revision, you assemble these into a meaningful whole.

This process is crucial for a deeper understanding and the long-term retention of the material.

Active learning is the key here: It’s not just about absorbing information, which you do during the lecture, but truly processing and applying it, after the lecture.

When Should you Review Your Lecture Notes?

The ideal time to revise your lecture notes is as soon as possible after attending the lecture.

Why?

Your brain processes fresh information most effectively. Based on Ebbinghaus’s forgetting curve, you should ideally begin within 24 hours.

Otherwise, according to this curve, you might forget about half of the lecture material within a day.
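
One commonly cited way to write this curve down (the exact shape varies across studies and types of material) is a simple exponential decay, where R is the share of material you still remember after time t and S is a stability constant describing how well the memory was encoded:

```latex
R(t) = e^{-t/S}
```

With an illustrative stability of S ≈ 1.44 days, retention after one day is e^{-1/1.44} ≈ 0.5, which is roughly where the “half of the material within a day” rule of thumb comes from.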

Starting your revision right after the lecture provides the best foundation for effectively embedding the information in your long-term memory.

Additionally, instructors are generally more available during the lecture period than during the exam period, when suddenly all students come with questions.

If you still don’t understand something after reviewing the lecture, it’s best to ask the teaching staff directly or attend their next office hour.

How Much Time Should You Spend on Revising Lectures?

You might be wondering, “How long should I spend revising my lectures?”

A good rule of thumb is to allocate at least half the duration of the lecture for revision. So if the lecture was 90 minutes long, try to block about 45 minutes for review within the next 24 hours.

Remember: quality over quantity.

It’s not about grinding for hours, but truly understanding the material.

If you find a topic particularly challenging, take more time. If something is clear right away, you can go through it more quickly.

During revision, you might apply techniques like the Pomodoro Technique: 25 minutes of focused work followed by a 5-minute break. Do a second session like that, and you’re done.

This helps you stay focused and productive. It’s important that you don’t just spend your time reading the script over and over. Instead, you should actively engage with the material during your revision time.

5 Strategies for Revising Your Lecture Notes Like a Pro

#1 Clarify Uncertainties

If you noticed any uncertainties or had questions during the lecture, now is the perfect time to clear things up.

If the lecture notes aren’t enough, look into the specialized literature recommended by your instructor. Sometimes the topics there are complex but explained from a different perspective.

Look for a YouTube tutorial or ask Perplexity AI.

These sources often present the material in an understandable and beginner-friendly way. And if you still have questions, don’t hesitate to get help from others.

#2 Separate Important from Unimportant – Focus on the Exam Phase

You might be at the beginning of the semester and are just revising your notes from one of the first lectures. But this is exactly when you can be smart and keep the exam period in mind.

Examine your lecture notes closely to see where the instructor placed their focus, where they explained many examples, or referenced further readings.

All these can be clues as to what might be relevant for the exam.

It’s crucial to distinguish between central concepts and less important details to make the most of your study time.

Ask yourself which information contributes to a deeper understanding of the core topics and which is more supplementary.

This way, you can set your priorities correctly, for example, if you want to start your spaced repetition sessions early.

#3 Identify Key Concepts and Central Ideas

The goal is to organize your lecture notes and check if they are complete. Focus on the main topics and central ideas of the lecture.

Begin by breaking down the lecture content into smaller segments.

Examine each topic or section individually and ask yourself:

  • What is the main message?
  • Which examples support this idea?

This analytical approach helps you understand the structure of the study material and distinguish important information from less important details.

After you have identified the central ideas, consider how they relate to each other.

  • Are there connections between different topics or concepts?
  • How do these pieces fit into the larger picture of the course?

Making such connections is crucial for deep understanding and helps you develop a comprehensive view of the material. Try representing these connections with a mind map.

#4 Make Your Lecture Revision Tangible

When revising your lecture notes, try to integrate examples or analogies to facilitate understanding of complex topics.

Ensure that these examples are closely linked to the study materials. Sometimes, using everyday situations can help make theoretical concepts tangible.

For instance, if you are trying to understand a specific economic principle, relate it to real-life shopping behavior in a supermarket.

Such real-life examples help you better understand and remember abstract ideas.

#5 Test Yourself

Pose questions about the study material to yourself and try to answer them without looking at your notes.

Focus on complex questions that require deeper understanding. By attempting to explain the concepts in your own words, you gain a clear picture of how well you truly understand the topic.

A combination that many have found to be key to success is using the flashcard app Anki and the principle of active recall.

Use practice problems or past exams to test your knowledge and see where you stand. If you encounter difficulties, do not hesitate to review those topics again.

Honest self-assessment is crucial. It’s easy to overestimate yourself and assume you’ve understood a topic. The real challenge is to be honest with yourself and admit where more work is needed.

Tools and Resources for Lecture Revision

Having the right tools and resources is indispensable for mastering the question of how to review lecture notes effectively.

#1 Digital Note-Taking Apps

Let’s start with digital note-taking apps like Evernote or OneNote. These digital tools are perfect for organizing your notes, making them searchable, and enhancing them with additional information such as images or audio recordings.

Pull what you need from your university’s Learning Management System, like Moodle or Canvas, and feed it into your “Second Brain.”

This way, everything is in one place.

#2 Research Tools

If you want to dive deeper into a subject, online databases like Google Scholar are invaluable.

Here you have access to a vast array of academic articles and studies. And if you prefer having complex topics explained to you, check out platforms like Khan Academy or Coursera.

They convey complex topics in simple and understandable ways.

#3 Flashcard Tools

For the study sessions themselves, try apps like Quizlet or Anki. They make reviewing material with flashcards really effective by leveraging spaced repetition: personalized algorithms that suggest the right topic to review at just the right time.
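
To make that idea a bit more concrete, here is a minimal sketch of how such a scheduler could work, loosely inspired by the SM-2 family of algorithms that Anki builds on; the numbers, class, and function names are illustrative assumptions, not Anki’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0   # days until the next review
    ease: float = 2.5            # multiplier that grows/shrinks with performance
    repetitions: int = 0         # successful reviews in a row

def schedule(card: Card, quality: int) -> Card:
    """Update a flashcard after a review.

    quality: self-rated recall from 0 (blackout) to 5 (perfect).
    Loosely follows the SM-2 idea: failed cards start over,
    successful cards get exponentially longer intervals.
    """
    if quality < 3:                      # failed recall -> relearn from scratch
        card.repetitions = 0
        card.interval_days = 1.0
    else:
        card.repetitions += 1
        if card.repetitions == 1:
            card.interval_days = 1.0
        elif card.repetitions == 2:
            card.interval_days = 6.0
        else:
            card.interval_days *= card.ease
        # nudge the ease factor up or down depending on how easy it felt
        card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * 0.08)
    return card

card = Card()
for q in [5, 4, 5, 3]:                   # four review sessions
    card = schedule(card, q)
    print(f"next review in {card.interval_days:.0f} days (ease {card.ease:.2f})")
```

The important takeaway is not the exact formula but the pattern: the better you recall a card, the longer the app waits before showing it again.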

#4 AI Tools

Moreover, always keep an eye out for the latest AI tools. They can assist you in all areas, from summarizing and researching to reviewing the learned material.

Browse my channel for some ideas for AI prompts you can implement in your study routine.

#5 Other People

Last but not least: don’t forget the social aspect of learning!

Study groups offer a great way to interact with your peers and learn together. Learning is often easier together, and sometimes new perspectives and solutions emerge in a group setting.

Plus, procrastinating together is more fun than alone 🙂

Categories
Study Hacks

How to be Productive without Burning Out (Slow Productivity)

Are you wondering how to be productive without burning out?

Well, I might just have the solution for you.

It’s called “Slow Productivity,” the title of Georgetown Professor and bestselling author Cal Newport’s latest book.

I’ve just finished reading the book, and… I’m impressed. So much so that I can’t resist sharing with you the key lessons.

The book deeply resonates with me because for years, I have been struggling with doing too much at the same time and often feeling stressed out. A PhD, a YouTube channel, writing a book, you name it.

In this video, I’ll introduce you to the three core principles of the “Slow Productivity” philosophy and offer my insights on how you can best put them into practice—no matter whether you are in your first job, working for yourself, or studying in college.

Who is Cal Newport and why should we listen to him?

In my videos, I often draw upon ideas from Cal Newport’s earlier books on achieving success in academia and, of course, his more renowned works “Deep Work” and “Digital Minimalism.”

Cal is a professor of computer science, consistently produces bestsellers, writes columns for The New Yorker, and hosts a podcast. So, if anyone embodies objective productivity criteria, it’s him.

In “Slow Productivity,” Cal shares his latest philosophy on knowledge work productivity, and it’s quite compelling. It’s not about cramming more into less time, which eventually wears us down.

Instead, it’s about accomplishing fewer things over an extended period—but things that truly matter.

#1 Do fewer things


Knowledge work involves using our cognitive abilities to add value to the world. This covers all sorts of tasks and jobs that can be done in an office or, these days, from home. Studying, too, can be seen as a type of knowledge work.

What many people overlook is that knowledge work is still influenced by the Taylorist paradigm of the second industrial revolution. This means tasks are split up based on expertise, people come together in one place, and work outputs are measured quantitatively.

The idea of working from Monday to Friday and then spending our paycheck in the city over the weekend also comes from that time.

But knowledge work is changing rapidly. Especially since the beginning of the Covid pandemic, we don’t always gather in one central place anymore. How do you even measure how productive an individual knowledge worker is?

Pseudo-Productivity

According to Newport, this is where pseudo-productivity comes into play, where productivity is measured based on visibility. What’s your level of “busy” during work? How fast do you respond to emails? How many meetings do you have per day, or how often do you participate in lectures?

All of these are metrics that employers and universities use to gauge the productivity of knowledge work.

But that’s nonsense.

True productivity isn’t achieved by merely showcasing visibility and “busyness,” but by working quietly on a few, valuable projects.

To get back to that kind of focused work, Newport recommends a two-tiered system for managing your active projects. Projects are knowledge tasks that can only be completed over an extended period, such as writing a term paper, launching a social media campaign, or compiling a job application portfolio.

Level 1 consists of your 3 active projects

List no more than 3 active projects. Not a single one more. Only when a project is completed does a new one take its place.

Try to work on just one of these projects per day. Mentally switching between them consumes unnecessary energy.

Level 2 is your waiting list of new projects and ideas

Whether it’s your boss, your academic advisor, your client, or yourself coming up with a new project, it goes on the waiting list along with an estimated timeframe for when you’ll get to it.

If you already have 3 active projects and 2 on the waiting list, then this project takes the 6th spot. You can even communicate this if necessary.
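
If you like to track such lists digitally, here is a tiny sketch of what this two-tier system could look like in code; the class and method names are my own illustration, not something Newport prescribes.

```python
class ProjectBoard:
    """Two-tier project tracking: at most 3 active projects, the rest wait in line."""

    MAX_ACTIVE = 3

    def __init__(self):
        self.active = []        # level 1: what you are actually working on
        self.waiting = []       # level 2: everything else, in arrival order

    def add(self, project: str) -> str:
        if len(self.active) < self.MAX_ACTIVE:
            self.active.append(project)
            return f"'{project}' is active (slot {len(self.active)} of {self.MAX_ACTIVE})."
        self.waiting.append(project)
        # overall position = active slots + place in the waiting line
        return f"'{project}' is number {len(self.active) + len(self.waiting)} in line."

    def complete(self, project: str) -> None:
        """Finishing an active project pulls the next waiting one in."""
        self.active.remove(project)
        if self.waiting:
            self.active.append(self.waiting.pop(0))

board = ProjectBoard()
for p in ["Term paper", "Job applications", "YouTube video",
          "Book chapter", "Side project", "Conference talk"]:
    print(board.add(p))   # the 6th project lands in spot 6, as in the example above
```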

#2 Work at a natural pace

By following Principle Number 1 and tackling fewer but more meaningful projects, you unlock an entirely new level of productivity.

What you achieve in 3 months doesn’t matter. That’s not the benchmark. The timeframe that truly speaks volumes is what you accomplish over the next 3 years (for example, the duration of your studies).

Did you know that Isaac Newton, Copernicus, and Marie Curie spent years, in some cases decades, working on their groundbreaking ideas before publishing them? If you were to pick any random month in their lives, they would appear quite unproductive by today’s standards.

They spent a lot of time walking, took weeks off to vacation in the countryside, and dedicated themselves to their work at a moderate pace. Yet, these individuals profoundly influenced human history with their work. They reached a level of productivity most of us will never match!

So, if you expand your time horizon wide enough, you can afford to take time for other enjoyable aspects of life and avoid short-term stress. The prerequisite, of course, is that you heed Principle Number 1 and choose projects that are meaningful and important to you, so they have the potential to make an impact over a long period.

A short-term tactic Newport recommends in this regard is ritualizing your work. Associate it with something special that inspires you.

If you’re writing a vampire novel, do it at night. If you’re working on a paper about the hotel industry, do it in the lobby of your favorite hotel. These rituals help you get into a natural flow, and the work that matters to you gets done almost effortlessly.

#3 Obsess over quality

By taking on fewer projects with Slow Productivity and only seeing results much later, you’ll inevitably have to pass up short-term opportunities and say “no” to many things.

The things you do, you do them right. And by giving yourself time, you can invest more in the quality of these things.

Examples

In my research discipline, business informatics, there are two top journals. Publishing an article in one of these journals typically takes four years. During these four years, one has to forgo small successes and appears “less productive.” However, once such an article is eventually published, it influences the discipline ten times more strongly than ten small publications one could have made in those four years.

Delaying gratification for your work for so long is incredibly difficult. But if the quality of the result is significantly better as a result, you have achieved much more “productivity.” Just slowly.

Another example comes from Amazon. A product with 4.9 stars sells 100 times more than a product with 4.5 stars. So even if it takes 10 times longer to bring a product to that level of quality, you still come out ahead: in the same time you could have launched ten 4.5-star products, yet the single 4.9-star product sells ten times more than all of them combined!

Of course, Cal Newport also has a technique to increase the quality of your projects. Simply double each of your deadlines for a project. But remember, just double. Not more. You should still feel the commitment to complete the project and put in enough work. But with the extended project timeline, you have more room to focus on quality.

But never forget that high quality also requires corresponding effort. With Slow Productivity, you shouldn’t give yourself a free pass to procrastinate. Instead, give yourself enough time to achieve true mastery in your project. Create the necessary space for creativity and the freedom to breathe, so that you can approach your work with passion.

Your results will speak for themselves. Don’t be a hamster. Be a turtle. Slow is smooth and smooth is fast.

Categories
Study Hacks

Prompt Engineering for Students (Master ChatGPT & Co.)

Prompt engineering for students might be the most important skill in 2024 and beyond – if you want to succeed in academia.

Have you ever asked ChatGPT or another AI model for advice and felt underwhelmed by the response? You might start to question all the hype—if AI can’t nail the basics, what good is it?

Hold that thought—before you pin the blame on AI, consider this: maybe the way you’re asking is part of the problem.

Yes, you heard that right! The issue might not be the AI itself but how you’re communicating with it.

In the next decade, mastering the art of crafting the right prompts—known as Prompt Engineering—will be crucial to unlocking the full potential of AI.

In this article, I’ll dive into what Prompt Engineering for students really entails and how you can start improving your interactions with ChatGPT and similar technologies immediately. Mastering this skill can not only impress your peers but also dramatically boost your productivity as a student.

Understanding Prompt Engineering for students

In today’s world, AI isn’t just a futuristic idea—it’s a part of our daily lives.

AI appears everywhere: powering search engines, guiding us through apps, and facilitating customer support through chatbots and virtual assistants.

Being able to communicate effectively with AI simplifies life and amplifies your efficiency at work, making Prompt Engineering a critical skill in the modern job market.

Good Prompt Engineering for students hinges on your ability to give precise, clear instructions. Think about it—how much time do you actually spend crafting a prompt for ChatGPT?

If it’s merely 5 seconds, that’s likely not enough. Taking a bit more time to consider your prompt can make a world of difference. A hastily typed sentence can lead to misinterpretations, whereas a thoughtfully crafted prompt, though taking a few minutes longer, can yield results that are ten times better.

How to Formulate Prompts that AI Understands

By learning how to make your requests more precise, you unleash the AI’s potential to deliver exactly the answers you’re looking for.

There are 5 basic principles that can help you successfully communicate with AI models like ChatGPT or DALL-E. These principles are Clarity, Context, Conciseness, Explicitness, and Iteration.

  1. Clarity is crucial for the AI to understand exactly what is expected of it. A clearly formulated prompt reduces misunderstandings and leads to more accurate responses. It’s about being unambiguous and leaving no room for interpretation.
  2. Context provides the AI with background information relevant to answering the query. It helps the AI understand the request in the correct frame and respond appropriately. For example, adding that a text is intended for a specialist audience can influence the type of response.
  3. Conciseness aims to keep the query as compact as possible. A long, rambling prompt can confuse the AI. Instead, the prompt should be to the point, without unnecessary details.
  4. Explicitness means that specific instructions or expectations are clearly communicated. The more precisely the request is formulated, the better the AI can deliver the desired results.
  5. Iteration and Experimentation acknowledge that not every prompt is perfect right away. Prompt Engineering for students is a process of trial and adjustment. You ask a question, analyze the AI’s response, and refine your next inquiry based on the feedback. This cyclical process helps you perfect your prompts, so you receive more accurate and relevant answers.
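
To see these five principles in action, here is a minimal sketch that compares a deliberately vague prompt with a refined one, using the official openai Python package; the model name and the wording of the prompts are illustrative assumptions.

```python
from openai import OpenAI  # pip install openai; assumes an API key in OPENAI_API_KEY

client = OpenAI()

vague_prompt = "Tell me something about peer review."

# Clarity, context, conciseness, and explicitness packed into one request
refined_prompt = (
    "You are helping a bachelor's student prepare a term paper. "          # context
    "In no more than 150 words, "                                          # conciseness
    "explain the difference between single-blind and double-blind peer review "  # clarity
    "and name one advantage of each."                                      # explicitness
)

for prompt in (vague_prompt, refined_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",              # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")
```

Iteration then happens naturally: compare the two answers, tweak the refined prompt, and run it again until the output fits your needs.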

The quality of your prompts also directly influences how developers train and improve their AI systems. Understanding which types of prompts lead to the desired responses allows developers to more specifically tailor their models to better respond to human queries.

To professionalize your prompts and systematize your Prompt Engineering, consider creating a Prompt Library today.

What is a Prompt Library?

A Prompt Library is essentially a database full of prepared questions or instructions designed to elicit specific and high-quality responses from AI language models.

These collections can range from simple request examples to complex prompt sets developed for advanced applications.

3 Benefits of a Prompt Library

  1. Time-saving and Efficiency: Having access to a collection of tested prompts allows you to interact with AI systems more quickly and effectively. It saves you the effort of constantly formulating new prompts by providing proven options right at your disposal.
  2. Quality Improvement: The prompts in a library are usually optimized and tested by you or someone else to ensure they deliver reliable results. This ensures greater consistency and quality of responses from language models.
  3. Source of Inspiration: A Prompt Library serves not only as a practical tool but also as a source of inspiration. As you browse the library, you might come across new ideas for formulating your AI queries, leading to more creative and effective prompts.

For knowledge workers and students who regularly work with AI, integrating a Prompt Library into their daily workflow can be a real game-changer. Many tools and platforms now allow for the direct integration of Prompt Libraries, simplifying access and use.

Platforms

Some platforms, like PromptHero or PromptBase, offer a wide range of prompts specifically for image generators. Here, users not only share the prompts themselves but also useful information about the creative process and the results achieved. FlowGPT, on the other hand, focuses on prompts for ChatGPT and allows users to try them out directly on the platform.

In addition to using these existing libraries, you can, of course, create your own collection. Simply use an Excel spreadsheet or a Notion page and sort your prompts by category or frequency of use.

If you’ve used individual prompts and maybe made slight adjustments, save the improved prompt each time. This keeps your Prompt Library up to date, and you save valuable time the next time you need the prompt.

Prompt Engineering for students isn’t just helpful in the short term. If you’ve been following my channel for a while, you know that lifelong learning is one of my core values.

Starting to think and act like a Prompt Engineer will not only help you with your next term paper but also in applying for a job next year, your probationary work in 2 years, your important client project in 4 years, and your big career leap in 10 years.

AI is here to stay. Whether you like it or not doesn’t matter. Acting pragmatically means making friends with AI. The sooner, the better.

Why Should You Use Prompt Engineering as a student?

Why should you dive into Prompt Engineering starting TODAY? Here are the reasons why it can be of great benefit to you – whether you’re a student, a creative, or a knowledge worker:

Sharper Research

Students and knowledge workers often rely on search engines and specialized AI-powered tools to gather information. A well-formulated prompt can help filter out more precise and relevant information from a flood of data. This is crucial for academic work, literature reviews, or gathering data for projects.

Soon, companies will use their own specialized language models. These will be integrated into products from companies like Microsoft or SAP and will have access to the databases and systems within the company.

The better you become at Prompt Engineering NOW, the more valuable your skills will be for any company or for yourself.

Save Time and Increase Your Efficiency

Effective Prompt Engineering enables you to save time by generating quicker and more targeted responses from language models and other AI systems.

And time is the most important resource you have.

With good Prompt Engineering for students, you can not only gain a productivity edge but also more time for the essentials. If you work more efficiently, there’s more time for travel, your family, or your hobbies.

Who says you still have to work 40 or 50 hours a week in the future? It’s up to you to define what the future of your work looks like.

Become More Creative

Whether you want to write an original text or design a unique graphic, AI needs specific prompts to generate useful suggestions. Learning how to formulate creative and inspiring prompts can support your creative process and lead to more innovative ideas.

A fairly well-known study by Professor Andreas Fügener and colleagues concluded that when humans and AI collaborate, the combination can outperform both the AI on its own and the human on their own.

Relying solely on AI or ignoring it will cause you to fall behind. The key is to combine your skills with those of AI.

Understand the Mechanisms Behind AI

Learning Prompt Engineering gives you better insights into how AI models work.

This knowledge enables you to use the technology responsibly and understand its limitations.

Such an understanding is valuable in an increasingly AI-driven world and can help you critically examine the ethical, technical, and social implications of using AI. This can help you position yourself as an expert and always be one step ahead of others.

Stay Flexible and Future-Proof

The ability to create effective prompts is a transferable skill that can be adapted to new AI systems and technologies.

As AI development progresses rapidly, it’s important to remain flexible and quickly adapt to new tools.

Maybe ChatGPT will no longer be relevant in a few years. But communication via natural language will remain, regardless of which tools come next.

Here, Prompt Engineering for students offers you a long-term competitive advantage. It’s not just about keeping up with current technology, but also about preparing yourself for what’s to come in the next few decades.

Applications of Prompt Engineering

Generating images and writing texts isn’t part of your job?

No matter.

Prompt Engineering is already making big waves in many areas, not only in creative fields but also in technical professions or the education sector.

In the world of technology, Prompt Engineering enables developers to work more efficiently, whether it’s creating code with tools like GitHub Copilot or automatically troubleshooting software. Researchers use AI to analyze data and make scientific discoveries by using specific prompts that direct the AI in the right direction.

Prompt Engineering is also finding applications in the education sector. It enables the creation of customized learning materials and supports interactive learning experiences that are precisely tailored to the needs of learners. Through targeted prompts, the teaching material can be dynamically adjusted, making learning more effective and interesting.

These examples are just a snippet of the many possibilities that Prompt Engineering offers. It combines creativity with technical solutions in a way that was unthinkable a few years ago.

Act Now! Your Prompt Engineering Challenge

Now it’s your turn!

Use Prompt Engineering to take your next study project, a report for your supervisor, or your side hustle to the next level.

Here’s a small challenge for you: Choose a topic you’re currently working on or a project that’s coming up. Maybe you want to conduct comprehensive research on a specific subject or analyze complex data.

Apply the principles of Prompt Engineering to make more effective use of AI tools.

Start with a clear, context-related request. For example, formulate a prompt asking ChatGPT to give you a summary of the latest research findings on a specific topic, or use an AI tool for data analysis to identify patterns in your research data.

Document your steps, the various prompts you try, and the results you obtain in your own Prompt Library. Reflect on how adjusting your prompts has influenced the AI’s responses and which techniques were most effective.

Share your discoveries and insights with your classmates, in a study group, with your colleagues, or in the comment section under this video.

Let’s discuss together how we can use Prompt Engineering to inspire others with our work.