
How Do You Become a Professor? (3 Possible Paths)


How do you become a professor?

Well, if you are considering a career in academia, then becoming a professor is the ultimate, and often the only, goal.

Or maybe you are just curious about what trials those old folks endured to earn their spot at the front of the lecture hall?

Then this article is for you.

When I was a student, I had no clue how the academic system worked.

And I bet you feel the same!

But we’re going to change that. By the end of this article, you’ll know exactly three paths that can lead to a professorship, and you’ll be able to decide whether this is something for you, or if you’d rather quickly turn your back on academia after your studies.

How Do You Become a Professor?

First, let’s explore the typical career trajectory for academics. You might be familiar with some of these steps, as they form the backbone of any academic career:

  1. PhD
  2. Postdoctoral Fellowship
  3. Assistant Professor
  4. Associate Professor
  5. Full Professor
Level 1: The PhD

Embarking on a PhD is like signing up for an academic marathon that takes around four years—if you’re lucky. Your completion time might depend on your field’s pace, your advisor’s style, and how often your experiments decide to actually work.

You can tackle a PhD as a full-time employee with teaching obligations, as in most countries; on a scholarship, as is mostly the case in the US or Australia; or as a side hustle to your full-time job. Get ready to learn, burn, and occasionally yearn for the finish line!

Level 2: Postdoctoral Fellowship

Think of it as the academic victory lap after your PhD. You’re not quite a professor yet, but you’re doing mostly research, maybe teaching a bit, and definitely networking like it’s your job (because it is).

It’s your time to shine in your field, beef up that publication pipeline, and charm future colleagues. Ready, set, research!

Level 3: Assistant Professor

This is the entry-level, tenure-track position where the academic rubber meets the road.

Here, you’ll teach, research, and contribute to university life, all while aiming for the grand prize of tenure.

Over about five to six years, you’ll need to impress with publications, teaching evaluations, and community involvement.

It’s your chance to prove you have what it takes for a long-haul career in academia. Get ready to juggle tasks and time like a pro!

Level 4: Associate Professor

The academic “level up” that comes after you’ve survived the tenure trials as an Assistant Professor.

In the US, this is typically when you have “earned” tenure, which means you can stay a professor for the rest of your career if you don’t mess up big time.

In other countries, tenure can also be granted at the assistant professor level.

Anyhow, you’ve now earned the luxury of job security and the joy of juggling even more duties.

More research, more grants, more students to mentor, and even more committee meetings.

Think of it as gaining the power to bend the academic universe, just a little bit, to your will.

Congratulations, you’re in the middle of the academic ladder — don’t look down!

Level 5: Full Professor

The academic world’s equivalent of reaching the mountaintop!

After years of research, teaching, and coffee-fueled late nights, becoming a Full Professor means you’ve published aplenty, shaped young minds, and possibly even figured out how to work the departmental photocopier.

It’s the peak where you get to enjoy the view, influence university policies, and still — yes, still — chase after the elusive work-life balance.

3 Different Paths to Becoming a Professor

What we’ve discussed so far reflects the career mechanisms of the academic system.

However, the actual achievements necessary to climb the ranks are another story.

Let’s now look at three different paths or strategies that can lead to the same goal—a professorship.

Path #1: Passion for Research

The most intuitive route to a professorship is through your talent and passion for research in your field. Here, it’s crucial how well you can translate this passion into tangible research results.

This route also often faces a major criticism of the academic system: the publish-or-perish culture. If you don’t publish enough or well enough, a career in academia is hard to achieve.

The good news? If research comes naturally to you, and you quickly see significant success, that’s a good indicator that this path might be the right one for you.

What awaits at the end of the journey, once you’ve secured a professorship?

Well, more research. It doesn’t stop. So, if research neither excites you nor comes easily, it could be challenging.

I often hear from PhD candidates that their passion lies not in research, but in teaching.

In this case, a career at a college with a focus on teaching might be suitable.

Here, it’s not research but teaching and sometimes industry experience that pave the way to a professorship.

Path #2: Through Savvy Science Management

If the university route is your choice, there’s another path I’ve often observed: savvy science management and strategic planning.

This approach allows you to anticipate and occupy niches in topics with high demand. This can aid in advancing your research because journals are eager to publish these topics.

Or it might attract funding from third parties, such as government bodies, due to societal interest in a topic. An example is the High-Tech Agenda Bavaria in Germany, which has created 1000 or so professorships in areas like sustainable technologies and AI.

This means that a well-chosen thematic focus can aid you in appointment processes. It makes sense to align yourself in a way that your topics are likely to grow in significance in the future.

People who have secured a professorship this way are often also excellent at networking, although this is just a personal observation.

Path #3: The Roger Federer Way

The passionate researcher and the gifted networker represent two extremes. There’s also a middle path.

This path is about being a generalist.

My favorite analogy comes from the book “Range: Why Generalists Triumph in a Specialized World,” which includes the example of Roger Federer, one of the most successful tennis players of all time.

Throughout his career, Roger Federer was never the best at any single aspect of tennis.

Andy Roddick had the best serve.

Rafael Nadal had the best forehand, and Novak Djokovic had the best backhand.

However, Roger Federer was the most complete player overall, allowing him to achieve one success after another.

This analogy applies to academia as well: a generalist who can integrate diverse skills and knowledge may not stand out in one particular niche but excels by combining multiple strengths, potentially leading to a successful career in academia.

In science, as in nearly every other career, these principles apply.


Bonus Path: The Detour via Other Countries

My personal favorite route to a professorship is through international experience. This aspect of the academic system is often a topic of heated debate.

This path is definitely a “to each their own” and “you have to decide for yourself” kind of deal. Moving abroad to secure a professorship isn’t something that’s expected of you.

Deciding how much other areas of your life should be sacrificed for the dream of becoming a professor is a choice you have to make yourself.

However, if you view an extended stay abroad as an opportunity for growth and a decidedly positive experience, then it could be the missing piece in your path to becoming a professor.

One of the advantages of the academic system is its compatibility across almost all national borders. The entire globe is your playing field.

If you choose to limit your playing field based on geographic factors, that will reduce your options, but that’s completely fine.

You decide, not the system.

If you have any questions about this, feel free to drop me a comment!


David Hume’s Problem of Induction (Simply Explained)


The problem of induction, as formulated by David Hume, addresses one of the most significant questions in epistemology: what can science truly know?

If you’ve ever delved into empirical research methods, you’ve likely encountered the terms induction and deduction.

While a Grounded Theory approach follows an inductive logic, an experiment relies on deductive logic. Is one better than the other? How are both connected, and why are scientific results never definitive?

The answers to all these questions are tied to Hume’s problem of induction. In this article, you’ll learn everything you need to know to hold your own in a discussion with a ninth-semester philosophy student.

Additionally, this knowledge will help you better understand and critique scientific methods. It’s definitely worth sticking around.

What is Inductive Reasoning?

In science, inductive reasoning involves deriving a general theory from the observation of a specific phenomenon.

For instance, consider an interview study where 30 interviews are conducted. The data collected is analyzed using Grounded Theory, leading to a new theory.

Induction isn’t limited to qualitative research. Any type of research that draws conclusions about a theory or natural law from observations employs induction.

This could be a statistical evaluation, where conclusions about the entire population are drawn from a sample, or it could be a physicist making repeated measurements from which she derives a natural law.

What is Hume’s Problem of Induction?

David Hume’s problem of induction is a fundamental question in epistemology that deals with whether and under what conditions inductive inferences can be considered reliable or rational.

The Scottish philosopher first raised this question in the 18th century in his work “A Treatise of Human Nature.” Although Hume initially discussed the problem only in the context of empirical science, it remains relevant to all sciences that recognize induction as a valid proof method.

And there are many.

Having a bit of knowledge about the problem of induction is certainly beneficial. It continues to be referred to as “the problem of induction” because it has yet to be solved. For over two centuries, philosophers of science have been grappling with it, including the famous Karl Popper. But more on that later.

An Example of an Inductive Inference

To better understand the problem of induction, let’s look at an example of an inductive inference.

An ornithologist conducts an observation in nature. During his research expedition, he observes 100 swans, all of which are white. That’s 100%.

Assumption 1: 100% of the observed swans are white.

From this, he concludes that all swans are white.

Conclusion 1: All swans are white.

If he reasons in this way, it doesn’t matter how many more swans he observes. He could even observe 100,000 swans, and the inference would remain what logicians call non-compelling: the 100,001st swan could be black, and his conclusion would be false.


The Uniformity of Nature

For this conclusion to become logically rational and allow the ornithologist’s colleagues to rest easy, he must add an additional condition.

Assumption 1: 100% of the observed swans are white.

Assumption 2: All swans are similar to those already observed.

Conclusion 1: All swans are white.

This second assumption is also known as the principle of the uniformity of nature. It means that all future observations will be similar to past observations.

Or, put simply: In the future, everything will always occur as it has in the past.

So far so good.

If the principle of the uniformity of nature is true, then there is no problem of induction. The inductive conclusion would be logically valid.

But then David Hume comes into play.

He asserts: there is no logical basis for the principle of the uniformity of nature, and it cannot be proven.

Hume himself and those who followed have tried to logically justify this principle, but have failed. This is partly because these attempts at justification themselves require inductive reasoning, which is subject to the problem of induction.

Hume writes:

“It is therefore impossible that any arguments from experience can prove this resemblance of the past to the future; since all such arguments are founded on the supposition of that resemblance. Let the course of things be ever so regular hitherto, that alone, by no means, assures us of the continuance of such regularity.”

If you’ve ever invested money in the stock market, then you know what he means.

Is Deduction the Solution to the Problem of Induction?

Two hundred years after Hume, another big player in the field of epistemology enters the scene: Karl Popper.

And he believes he has found the solution to the problem of induction.

Strictly speaking, he can’t solve it; instead, he suggests simply sidestepping it. He completely agreed with David Hume that general laws cannot be derived through induction.

What one can logically do, however, is falsify general laws.

Instead of generating a theory based on an inductive conclusion, one could simply concoct a theory (form a hypothesis), and then try to falsify it.

What remains are only the theories that have not been falsified (yet).

Here, we are no longer in the realm of induction but in that of deductive reasoning (from general to specific).

For the philosophy of science, Popper’s new approach was a milestone. However, it was not the hoped-for solution to the problem of induction.


Why We Should Sometimes Trust Induction

Many philosophers later showed that even Popper’s approach to falsification relies partly on inductive reasoning.

While Popper rejected all forms of induction as irrational early in his career, he softened his stance towards the end.

He acknowledged that under certain circumstances, there might be a pragmatic justification for induction. Consider the context of medicine, for example.

If we were to completely reject induction, both doctors and patients would face a significant problem.

After diagnosing a disease, we choose a medication that has led to healing in thousands of past cases. We thus hope that the future will behave like the past and follow an inductive conclusion.

If we rejected induction as Popper originally intended, we would have no more reason to trust this medication than one that has never been tested.

Therefore, there seems to be a difference between pragmatic and purely theoretical induction. Due to these complications, the discourse in the philosophy of science largely reached a consensus that Popper could not solve the problem of induction either.


What This Means for Today’s Science

The problem of induction remains unsolved to this day. Concluding from this that science can know nothing with 100% accuracy is theoretically correct, but not practically helpful.

To better interpret the results of scientific studies, scientists must make a series of so-called judgment calls.

These are the additional assumptions we must make for science to be pragmatically implementable. That is, everyone must define for themselves what they are willing to assume, even if there is no formal logical basis for it.

As a scientist, you therefore have to accept a certain risk of being wrong. How high that risk may be is up to you to decide.

Lee and Baskerville (2012) define four such judgment calls.

The first one you already know:

#1 The future will behave like the past.

The risk here is that a theory or result may no longer be true once it is applied to a new context.

#2 The conditions in the new context are similar enough to apply the theory or result there.

Imagine you’ve determined a natural law on Earth. If you apply this law to understand a phenomenon on Mars, you must assume that the conditions there are similar enough to those on Earth.

This second judgment call must also be made on a smaller scale. If you want to apply the results of a management case study from Amazon to your mid-sized company, you must assume that the conditions are similar enough to do so.

#3 The theory or natural law covers all relevant variables.

When you want to apply a theory, you must assume that it is complete and hasn’t overlooked any variable.

#4 The theory is true.

This judgment call would probably not sit well with Karl Popper. But to apply a theory, you must assume it is true, even though Popper would argue this is never possible.

References

Lee, A. S., & Baskerville, R. L. (2003). Generalizing generalizability in information systems research. Information Systems Research, 14(3), 221–243. https://pubsonline.informs.org/doi/abs/10.1287/isre.14.3.221.16560

Lee, A. S., & Baskerville, R. L. (2012). Conceptualizing generalizability: New contributions and a reply. MIS Quarterly, 749–761. https://www.jstor.org/stable/41703479


How Inquiry-Based Learning Can Get You Top 1% Grades

What is Inquiry-Based Learning?

Tired of memorizing your lecture notes? It’s pretty dull, right? How about starting your exam prep with questions instead of answers?

With inquiry-based learning, you dive deeper into your course material and discover connections you didn’t see before. Find out how questions can transform your learning experience.

In this article, I’ll show you the 3 principles behind the “inquiry-based learning” approach, how you can become more active in your learning process, and why it leads to better exam results.

The Principles of Inquiry-Based Learning

In university, your professor typically spoon-feeds you information during lectures, or you read summaries in books or your notes. That means you’re quite passive when taking in information.

You can change that with inquiry-based learning.

Inquiry-based learning is a method where you actively ask questions and independently seek answers to understand a topic.

Instead of just memorizing facts, you can be curious and think critically. You discover knowledge and connections based on the questions YOU ask, not the other way around. In short, it’s about letting your curiosity run wild.

Inquiry-based learning is based on three principles: self-directed learning, critical thinking, and the role of questions.

Self-directed learning means you take control of your learning process.

Your critical thinking is fostered as you learn to question and verify information.

And questions are your tool and starting point to discover and understand new things.


Differences from Traditional Learning Approaches

Like most other students, do you learn with flashcards? Or maybe you use practice questions and past exams?

The result is that you become very good at answering those flashcards or practice questions. But it’s unlikely that exactly those questions will show up in the exam in that form.

And when a question comes up that wasn’t on your flashcards or practice questions, you struggle.

The challenge with unexpected exam questions is that they’re new and unfamiliar – you’ve never seen this kind of question before. Even if your practice questions are similar, these new questions require you to think differently to achieve the best grade.

These questions are fundamentally about identifying who really understands the material.

It’s about the ability to grasp multiple concepts simultaneously and discover connections that perhaps weren’t directly taught in the lecture. This deep understanding comes from connecting knowledge.

#1 Interleaving

And this is what you practice through inquiry-based learning. It’s all about the process:

How do things connect? Why are certain facts the way they are? So, it’s about the “why” behind the facts. Instead of just memorizing information, you try to connect topics. This aligns with the Interleaving Method.

With interleaving, you switch between different topics while learning, instead of focusing on a single topic through block learning.

Studies* show that interleaving is especially effective for problem-solving. It also promotes better long-term memory and enhances your ability to flexibly apply what you’ve learned to new situations. This is exactly what you need to tackle unexpected exam questions and get the best grade.

*Taylor, K., & Rohrer, D. (2010). The effects of interleaved practice. Applied Cognitive Psychology, 24, 837–848.


#2 Getting Practical with Inquiry-Based Learning

It’s all about recognizing connections and understanding that concepts, facts, and details only show their true meaning in comparison to others.

Let’s take an example:

In economics, a single price doesn’t tell us much without considering supply and demand. The balance between these forces helps us understand market dynamics and predict trends.

In literature, an isolated character description doesn’t mean much without understanding their relationships with other characters. The connections and conflicts between characters give stories depth and meaning, making literature richer and more engaging.

Understanding relationships gives learning its relevance. Since people tend to remember meaningful things better, these connections help us understand and retain complex topics.

Let’s consider an analogy in music:

In a song, a single note might seem insignificant without the surrounding melody. The way each note harmonizes with the others creates a beautiful tune, which gives the song its character and emotion. The context of each note within the melody and rhythm makes the music coherent and enjoyable.

Suddenly, all the pieces fit together. Instead of hearing isolated notes, you understand how they fit into the larger composition, which gives everything more meaning and solidifies and deepens your knowledge.

#3 Fostering Curiosity

I’ve already mentioned several times how important curiosity is. With some topics, it’s easy to spark a natural curiosity.

Out of genuine interest, more and more questions about the topic come to mind, and you automatically delve deeper into the subject matter. But what if you struggle with certain topics? (Which, by the way, is completely normal.)

In this case, you could rely on pre-made questions to better understand connections and their importance.

Questions like “Why is this concept important?” and “How is this related to other concepts?” help you dive deeper into the topic. Once you have the answer to one question, move on.

What new questions come up now?

It’s best to write down the answers so you can revisit your thought process later.

Linear notes (writing from left to right, top to bottom) aren’t ideal because your thought processes aren’t linear. So, it’s best to start in the middle of the page and observe how your thoughts develop.

You can also go a step further and visualize connections using mind maps.


3 Benefits of Inquiry-Based Learning

If you’re still not convinced, I’ve got three benefits of this method to motivate you to give it a try.

  1. Boost for Your Brain: Inquiry-based learning trains your brain to analyze complex problems and find creative solutions. You need this not only in your studies but also in the “real” world at work. The earlier you adopt the perspective of inquiry-based learning, the better.
  2. Bye-bye, Boredom! By pursuing your own questions, you incorporate your interests and identity into the learning process. When you follow a topic with curiosity, it becomes relevant to you. That’s why you can’t remember your neighbor’s license plate but can quote several episodes of “Friends.” You followed “Friends” with curiosity, so it was relevant to you – while your neighbor’s license plate isn’t connected to you, so it’s pretty irrelevant.
  3. Fit for the Future: The world needs people who can solve problems, and inquiry-based learning prepares you for that. It teaches you to ask questions, recognize challenges, and find creative solutions. And the best part? It makes you a lifelong learner, always open to new knowledge and experiences, in a world where the ability to adapt, think critically, and continuously learn is priceless.

Statistical Significance (Simply Explained)

“When a study’s result is statistically significant,” is a phrase you’ve likely heard someone use while discussing scientific research. But what exactly does that mean?

What calculation is behind statistical significance, and when is it helpful?

In this article, you will find answers to these questions, and more.

I will also explain how statistical significance can deceive us if we forget what it cannot tell us.

This knowledge will empower you to critically review scientific studies and their results, allowing you to judge whether the arguments made are actually robust.

Statistical Significance

Firstly, let’s distinguish between ‘significance’ in everyday language and ‘statistical significance.’ We usually call something significant if it’s large or noteworthy.

However, ‘statistically significant’ doesn’t necessarily imply importance. Indeed, a statistically significant result can be quite minor and inconsequential in some cases.

Statistical significance becomes relevant when we use statistical methods to analyze quantitative datasets, especially to check if there’s a potential effect between two variables.

Imagine conducting an experiment where we manipulate one variable (like giving people a dietary supplement) and observe its effect on another (such as their training endurance).

If we find this effect to be statistically significant, it’s time to celebrate and head home, right? Well, it’s not that straightforward, but more on that later.

Statistical significance helps us determine the likelihood of a measurement result occurring by chance versus indicating a real effect.

If we deem a result statistically significant, it suggests that the result from the analysis of our sample might also apply to a wider population.


Statistical Significance and Sample Size

Typically, studies are not conducted with all individuals representing a specific group (i.e., the entire population) but with a sample from this population.

For example, if you conduct a survey, maybe 200 people participate. In an experiment, it might be 60. Or perhaps you’ve collected data from social media or businesses, involving 1000 or more subjects.

These samples always represent a population, such as all “citizens who are allowed to vote in the US” or all “higher education students,” and so on. Researchers then aim to generalize the results of a survey or experiment with a small group from this population (i.e., the sample) to the whole population.

The size of these samples is crucial when interpreting significance tests.

The smaller the sample, the harder it is to detect a statistically significant relationship. This is because chance plays a greater role, and a very large effect must be present for chance to be statistically ruled out.

The larger the sample, the easier it is to detect statistically significant relationships. This is because larger samples more closely approximate the entire population, making a purely random result increasingly unlikely.
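
To make this relationship concrete, here is a minimal simulation sketch in Python (assuming numpy and scipy are installed; the effect size, group sizes, and number of runs are invented purely for illustration). It repeats the same small experiment many times and counts how often a fixed, real effect shows up as statistically significant at different sample sizes.

```python
# A hedged sketch, not from the article: how sample size affects the chance
# of detecting a fixed, real effect. All numbers are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def share_significant(n, n_runs=2000, effect=0.3, alpha=0.05):
    """Simulate n_runs experiments with two groups of size n and a true mean
    difference of `effect`; return the share of runs in which the t-test
    comes out statistically significant at level alpha."""
    hits = 0
    for _ in range(n_runs):
        control = rng.normal(loc=0.0, scale=1.0, size=n)
        treatment = rng.normal(loc=effect, scale=1.0, size=n)
        _, p = stats.ttest_ind(treatment, control)
        if p < alpha:
            hits += 1
    return hits / n_runs

for n in (20, 60, 200):
    print(f"n = {n:3d} per group -> significant in {share_significant(n):.0%} of runs")
```

The true effect is identical in every run; the larger the groups, the more often it is actually detected.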

p-Value, Test Statistic, and Null Hypothesis

A central mathematical figure for testing statistical significance is the p-value. The p-value summarizes the result of a measurement and helps us judge how likely a result of this size would be if chance alone were at work rather than an actual effect. However, the magnitude of this effect cannot be determined from the p-value alone.

More specifically, the p-value is the probability that, assuming the null hypothesis is true, the test statistic will take the observed value or an even more extreme one.

Wait a moment – let’s slow down. Here we’ve introduced two new terms.

Test Statistic and Null Hypothesis

In a significance test, two hypotheses are crucial:

H0: There is no effect.

H1: There is an effect.

Through a significance test, the null hypothesis (H0) can be rejected.

For example, this might happen if the p-value is below 0.05. If so, there is reason to believe that an effect exists beyond mere chance.

The test statistic, a function of potential outcomes, defines a rejection region. If the result falls into this area, the null hypothesis is to be rejected.

The size of this region is determined by the significance level, usually set at 0.05, or 5%. This was once arbitrarily established by someone (named Ronald Fisher), but sometimes the significance level is set at 0.01, or 1%.

Whether a result is statistically significant largely depends on the significance level used. However, a p-value becomes increasingly impressive the smaller it is.

Determining Statistical Significance with the Student’s t-Test

A popular test for checking significance is the so-called Student’s t-Test. It’s not named so because it’s meant to drive students to despair.

Its inventor, William Sealy Gosset, initially published his ideas on this test under his alter ego “Student.”

The t-test is a hypothesis test and is often used with small samples. It aids in deciding whether to reject the null hypothesis. Under the null hypothesis, the test statistic follows the t-distribution, which offers an advantage over other distributions like the normal distribution for small samples.

The t-test is applied to detect statistically significant differences between two groups. It can compare the mean of one group with the mean of another, which is the most common application of the test.

Example:

We conduct an experiment with two groups of students. Both groups take the same English exam. However, one group studied using a flashcard app, while the other did not.

We might hypothesize that the group using the app achieved better results. In a t-test, we would compare the mean test scores of both groups.
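
As a rough illustration of that example, here is a short Python sketch (the exam scores below are invented and scipy is assumed to be installed) that runs an independent-samples t-test on the two groups and applies the usual 5% significance level.

```python
# A hedged sketch of the flashcard example; the scores are invented, not real data.
from scipy import stats

app_group = [78, 85, 90, 72, 88, 95, 81, 84, 79, 91]      # studied with the flashcard app
control_group = [70, 75, 80, 68, 77, 74, 82, 73, 69, 76]  # studied without the app

t_stat, p_value = stats.ttest_ind(app_group, control_group)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject H0: the difference in mean scores is statistically significant.")
else:
    print("Fail to reject H0: the difference could plausibly be due to chance.")
```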

It is also possible to compare the mean of a variable with a specific target or expected value.

The t-distribution also follows the shape of a bell curve.


For the t-test, a t-value is calculated using a specific formula. The formula for a t-test comparing a sample mean to a hypothetical mean (target value) is given by:

t = (x̄ – μ) / (s / √n)

  • x̄ is the sample mean,
  • μ is the hypothetical mean (target value),
  • s is the sample standard deviation, and
  • n is the sample size.

The t-value

The calculated t-value is then compared to the critical values from the t-distribution, based on the degrees of freedom (which, in this context, is typically n – 1) and the desired level of significance. If the t-value is close to zero, it indicates no significant difference between the sample mean and the hypothetical mean (target value). If the t-value falls in the critical region at the tails of the distribution, the difference is significant enough that the null hypothesis (no difference) should be rejected, suggesting an effect.

The critical regions (α/2) are determined by the significance level. For a two-tailed test, with a significance level of 5%, you would have 2.5% in the left tail and 2.5% in the right tail of the distribution. A two-tailed test is used when the hypothesis is non-directional (“There is some effect”). The test is one-tailed when the hypothesis is directional (“There is a positive/negative effect”). In that case, the entire α (e.g., 5%) is allocated to one side of the distribution, depending on the direction of the hypothesis.
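
To tie the formula, the degrees of freedom, and the two-tailed p-value together, here is a minimal Python sketch of the one-sample case described above (the sample values and the target value μ = 100 are invented; scipy is assumed for the t-distribution).

```python
# A hedged sketch of the one-sample t-test formula t = (x̄ – μ) / (s / √n);
# the data and the target value are invented for illustration.
import math
import statistics
from scipy import stats

sample = [102, 98, 105, 110, 99, 101, 97, 104, 108, 100]
mu = 100                                  # hypothetical mean (target value)

n = len(sample)
x_bar = statistics.mean(sample)           # sample mean
s = statistics.stdev(sample)              # sample standard deviation (n - 1 in the denominator)

t_value = (x_bar - mu) / (s / math.sqrt(n))

# Two-tailed p-value from the t-distribution with n - 1 degrees of freedom
p_value = 2 * stats.t.sf(abs(t_value), df=n - 1)

print(f"t = {t_value:.3f}, p = {p_value:.4f}")

# Cross-check against SciPy's built-in one-sample t-test
print(stats.ttest_1samp(sample, popmean=mu))
```

If the resulting t-value falls far enough into one of the tails (equivalently, if p drops below the chosen significance level), the null hypothesis of no difference is rejected.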

Summary

Statistical significance is an important tool for assessing the results of quantitative studies that aim to measure an effect between two variables. It tells us how unlikely a result like ours would be if it were based on mere chance rather than an actual effect.

However, statistical significance does not tell us how big an effect is. This means that even though an effect is statistically significant, the effect might be very minimal. We can also never say with absolute certainty that the result was not created by chance – even with a statistically significant result, there is still a small probability left that there is no effect.


Theoretical Sampling in Grounded Theory (Simply Explained)


What is theoretical sampling in grounded theory and other qualitative research?

Today, we’re going to dive into this question by exploring the origin of this approach and distinguishing theoretical sampling from other types of sampling.

By the end of this article, you’ll fully understand the tradition of the term, why theoretical sampling is different, and, of course, how you can apply it in your own empirical work.

Grounded Theory (Background)

To grasp what we mean by theoretical sampling, we need to go back to the origin of the Grounded Theory methodology.

In the 1960s, sociologists Barney Glaser and Anselm Strauss developed the Grounded Theory approach together. Their aim was to counter the prevailing quantitative paradigm and its deductive logic with a structured method for inductive theory building based on qualitative data.

The goal of Grounded Theory is not to test predefined hypotheses and thereby review or refine existing theories. Instead, its main task is generating new theories based on empirical data.

What is Sampling?

Next, we need to understand what sampling involves. The term refers to the selection of a sample.

A sample is a “selection of people or objects that represents a larger population and provides information about it” (Statista, 2020). Samples play a crucial role in empirical social research as they provide access to the data to be analyzed for a research project.

Theoretical insights are drawn from the results based on the investigation of the sample. These insights are generally intended to be valid beyond the scope of the sample itself. That’s why choosing the right sample is so important.

When writing the methodology section of your academic work, you should always make a strong case for how your sample is composed and why this composition is advantageous for your research goal.


Sampling in Quantitative Research

The Statista definition I just mentioned is influenced by a core principle of quantitative research: the generalizability of statistical relationships from a small sample to a larger group of people or objects.

Let’s say 100 kindergarten teachers fill out a survey, and the results are analyzed. These results are often interpreted in a way that makes statements about all kindergarten teachers represented by the sample.

In quantitative research designs, we can broadly distinguish between random samples and non-probabilistic samples. An ideal random sample consists of a group randomly selected from all persons or objects belonging to the total population.

Implementing this is challenging, as you likely cannot access all kindergarten teachers in one country or the world. Therefore, systematic or arbitrary selection methods also exist, where you might include individuals or objects in the sample that you simply have access to.

Sampling in Qualitative Research

In qualitative research, we need different sampling techniques. Here, randomness is not crucial, but rather the researcher’s judgment.

In “Purposeful Sampling,” cases or individuals are selected who, in the researcher’s view, offer a particularly high degree of information richness in relation to the research subject.

In “Snowball Sampling,” an initial case or expert is identified. Based on the knowledge or contacts of this individual, the researcher then gains access to further interesting cases and experts.

This approach can be helpful because the researcher alone might never have noticed these cases or gained access without a facilitator.

What is Theoretical Sampling?

The sampling methods mentioned so far have one thing in common: the sampling occurs BEFORE data analysis.

And that brings us back to Grounded Theory and theoretical sampling. For Grounded Theory to function, data analysis and sampling must work closely together.

Round 1

Since there is no theory at the beginning of the process, you start with a typical Purposeful Sampling and collect data from an organization or individuals based on the most important criteria for you.

Then, you perform typical steps of the Grounded Theory approach. I won’t go into these steps here – please refer to other tutorials on my channel.

After performing open, selective, and theoretical coding according to Glaser or open, axial, and selective coding according to Strauss, you have identified one or more central theoretical concepts. You may already suspect connections between them or have identified subthemes.

The fact is, your theoretical idea is still in its infancy. To solidify it, you need new data.

Round 2 (Theoretical Sampling)

This is where theoretical sampling comes into play. This time, you make your selection deliberately, based on the theory you have developed at this point.

What does that mean exactly?

Let’s say you’re developing a theory that explains the factors influencing the identity formation of employees in the context of working-from-home.

After your initial interviews with employees, you might have found that the characteristics of their workplace technology are central to their identity-building.

However, you don’t know exactly what about the use of technology is so crucial for identity formation. Could it be the type of hardware, consisting of laptops and smartphones? Or the software tools? Or how they are used?


To learn more, you now select new individuals who have extensive knowledge in this particular area. This could, for example, be members of the IT department of the company. You could also interview the same individuals again, but this time ask targeted questions about the specific theoretical connection you want to better understand.

After the second round, your mini-theory may already be taking shape. But there’s still something you don’t know:

(For example) Why must employees work with technology that is outdated and, from the company’s perspective, actually has a negative impact on their identity formation?

To find the answer, there’s no way around speaking with decision-makers. To complete your theory, you finally interview employees in management positions.

Round 3?

Now your mini-theory looks quite solid. But have you overlooked something? You speak with two more employees, and they can’t tell you anything new. Your theory seems to hold up.

This is your cue to stop data collection.

Theoretical Sampling according to Strauss and Corbin (1998)

Strauss and Corbin further specified theoretical sampling in their seminal book. They distinguish between four stages:

  1. Open Sampling
  2. Relational Sampling
  3. Variational Sampling
  4. Discriminate Sampling

These stages provide more structure and define individual steps, which can be particularly helpful at the beginning.

Note that the recommendations by Strauss and Corbin work well only with the coding methods they also propose (open, axial, and selective).

After Barney Glaser and Anselm Strauss had a bit of an argument, two interpretations of Grounded Theory developed: one by Glaser and the other by Strauss. Make sure you understand the differences and align your own work with one of these interpretations.

If you want to learn more about the dispute between the two and the differences between Glaserian and Straussian Grounded Theory, you can read it here.


The Peer Review Process for Scientific Journals (Simply Explained)


Would you like to peek behind the curtain and better understand the peer review process for scientific journals?

In this article, I’ll explain to you…

  • The concept behind the peer review process for scientific journals
  • The various types of peer review processes for scientific journals
  • How to determine if an article has been peer-reviewed
  • Which types of articles you should avoid referencing in your own academic writing.

What is a Peer Review Process for scientific journals?

To ensure quality control in science, it has become standard practice for a submitted article to be anonymously reviewed by two or more experts in the same field of research.

An article is only published if the authors can satisfactorily address the criticism raised by these “reviewers”.

The history of the peer review process as we know it dates back to 1731, when the Royal Society of Edinburgh inspired several editors of philosophical journals to have their contributions reviewed by a committee of experts (Spier, 2002).

It is also recorded that Albert Einstein had his issues with the peer review process.

In the early 20th century, Einstein primarily published in German-language journals, which at the time did not have a peer review process. When he sent an article (by mail, of course) to the prestigious Physical Review in America, he was surprised by their practice of presenting his paper to an independent expert.

In a letter, he fumed over this, withdrew his work, and published it elsewhere. He believed the comments to be nonsensical and saw no reason to address them.


Types of Peer Review Processes

The three most common types of peer review processes are single-blind, double-blind, and open peer review.

The Single-Blind Peer Review Process

Here, the reviewers know the authors’ names, but the authors do not know the reviewers’ names.

The Double-Blind Peer Review Process

In this process, both the authors and the reviewers remain anonymous. This requires an editor who knows everyone’s identity.

The Open Peer Review Process

Here, everyone knows each other at all times. When an article is published, the reviewer reports are also published.

The last one is particularly progressive because it creates a lot of transparency and allows the iterations of an article to be tracked. However, taking away anonymity introduces problems and biases of its own.

What Happens During the Peer Review Process?

The process begins with the authors submitting their work.

The Desk Reject

The manuscript then lands “on the desk” of an editor, who has two options. Should the article be sent for peer review, or not?

If not, the authors receive a “desk reject,” meaning the article is not even sent to reviewers but is directly and irrevocably rejected by the editor.

Reasons for a desk reject vary. For example, an article might be linguistically or stylistically so far from a publishable standard that it doesn’t make sense to occupy the time of several reviewers. However, the most common reason for a desk reject is actually the fit with the journal.

Journals have specific thematic focuses, and if an article deviates from these, even if it is of high quality, it is immediately rejected.


Major and Minor Revisions

In a few cases, an article may be so good and important that it is accepted immediately after a round of brief feedback.

For instance, many journals accelerated their peer review processes temporarily during the COVID-19 pandemic. It would have made no sense to drag urgently needed research through a process that takes years.

Normally, if a manuscript passes the desk stage, it moves to an editor who will oversee the article until publication.

There are different types of editors, such as an Editor-in-Chief, Senior Editors, or Associate Editors. The “lowest” category of editors is responsible for recruiting reviewers. Sometimes this editor remains anonymous, and sometimes not.

This editor sends the article to 2-3 reviewers, sets a deadline, and then it’s a waiting game.

Once the reports come back, the editor reviews the reports and, of course, the article, and writes their own report. This usually summarizes the key points of the reviewer reports and may also include additional points noticed by the editor.

The editor also decides on the next steps for the article. They can follow the reviewers’ recommendations or override them. In either case, all reports are sent to the authors. If the editor unjustifiably overrides all reviewers, they risk trouble from above, such as from the Editor-in-Chief.

If the decision is a revision, the authors receive a deadline by which they may revise their manuscript, and then the process starts over.

Ideally, the same reviewers are invited to check the revision. A “Major Revision” involves substantial changes to the manuscript, while “Minor Revisions” or a “Conditionally Accept” only require minor adjustments.

The number of rounds an article must go through depends on the journal. The most prestigious journals often have the most difficult and longest processes or the toughest “desk”.

What Happens After a Peer Review Process?

In single- and double-blind processes, reviewer reports are generally not published, even if they are anonymous. This has its advantages, such as not having to worry about offending someone when criticizing their work or even rejecting it from publication.

Editors often face the unenviable task of having to reject works from renowned author teams, subsequently facing their anger and disappointment.

The reason the peer review process works is solely due to the reputation people gain from being a reviewer or editor of a particular journal. Moreover, everyone wants their own articles to be reviewed, so you might think twice before declining such a request, especially at the start of your scientific career.

How Can You Identify Peer-Reviewed Articles?

There are essentially two ways to do this.

Option 1: Research the Outlet

You’re not sure how, but somehow you stumbled upon an article through Google Scholar or Google. Research the name of the journal or conference and visit its website. There, you will usually find information on whether it employs a peer review process or not.

But that’s not the end of the story. There are thousands of questionable journals, such as the Open Access journals published by MDPI.


Although they officially have a peer review process, it’s a joke. Their business model is that authors pay a fee, and then their article gets published. If you’re interested in a video about questionable practices in science, just leave me a comment under this article!

With established publishers, authors or universities must also pay a fee, but you can assume that the peer review process is conducted properly.

Over time, try to identify the established publishers and journals or conferences of your discipline. Citing articles of dubious origin can negatively impact your own academic work. So, even if the article fits perfectly, it might be best to steer clear.

Option 2: Filter During Your Search

If you only search databases that index peer-reviewed articles, you won’t even have to ask this question.

Find out which databases list the most important publication outlets of your discipline and limit your search to these databases.

Further Reading

Spier, R. (2002). The history of the peer-review process. TRENDS in Biotechnology, 20(8), 357–358.


Hermeneutics Simply Explained (Hermeneutic Circle & Gadamer)

Are you looking for someone to explain the concept of hermeneutics in simple terms? Then buckle up, because things are about to get philosophical.

In this article, in less than 10 minutes, you’ll get an overview of the following 3 things:

  1. What is the theory or philosophy behind the term hermeneutics?
  2. What is the hermeneutic circle and what is it used for?
  3. And who are the key thinkers in hermeneutics that you definitely should have heard of?

So, if you’re looking for a quick and painless overview of the topic, keep reading. By the end of this article, you’ll be at least a bit wiser than before.


What is Hermeneutics?

Hermeneutics is “the art of methodically guided understanding” (Kaus, 2022, p. 1), which means that it helps us as researchers understand the overarching structures of meaning in human life and action.

Primarily dealing with written texts, as they offer potential access to these structures of meaning, hermeneutics is particularly relevant in the humanities. However, the significance of hermeneutics extends far beyond the boundaries of individual disciplines.

It involves interpreting texts or other symbols, as well as interpreting the act of interpreting itself. It’s about how we, as researchers, can better understand the social life around us.

Hermeneutics can thus serve as an auxiliary science for various disciplines. Whether a theologian aiming to understand the Bible, a lawyer interpreting legislation, or an educator decoding youth slang – all these scenarios require guidance on interpretation.

Are you still with me? I hope so. We’re about to get to the more tangible part. Hang in there.

Philosophical Hermeneutics (Gadamer)

Philosopher Hans-Georg Gadamer takes a step further, conceiving hermeneutics not just as a tool for interpretation but as a process that touches on something much more fundamental.

In a philosophical sense, hermeneutics can also deal with how people or even entire nations understand each other.

For Gadamer’s teacher Martin Heidegger, it was already clear that hermeneutics represents a fundamental principle of human existence. That is, we humans are constantly engaged in understanding, and existence itself means to understand.

Gadamer particularly emphasized the role of language in hermeneutics. For him, understanding is always connected with language.

He also coined what I consider the most beautiful metaphor for understanding hermeneutics better. He describes hermeneutics as a never-ending conversation (“The Infinite Conversation”).

Imagine you’re a researcher looking for structures of meaning, and your data material is a text. Imagine you are you, and the text is your counterpart.

The metaphor of the infinite conversation suggests that you approach your counterpart (the text) with an open attitude.

You have certain preconceptions, which you “put at risk”. You’re open to the idea that the assumptions you entered the conversation with might be replaced by others, depending on what you learn from the conversation (with the text).

The conversation is infinite because hermeneutics repeats this adjustment of pre-knowledge and new knowledge over and over again. We’ll take a closer look at this important principle in a moment.

Another famous image used by Gadamer is the horizon. It represents the structure of meaning and the knowledge we are exposed to. The horizon affects us when we want to understand something new and provides us with orientation.

The Hermeneutic Circle

Having introduced Gadamer’s ontological considerations on hermeneutics, let’s see how this principle can be applied to concrete scientific methods.

If you’re familiar with my tutorials on qualitative content analysis or thematic analysis, this will seem familiar. These approaches can be seen as a hermeneutic process:

The analysis of qualitative data often does not proceed sequentially from start to finish. Instead, the process is dynamic. You can always return from the analysis to the research question, or from the presentation of results back to the development of categories.

This approach is fundamentally based on the hermeneutic circle. It envisages moving back and forth in spiral movements between pre-understanding and text understanding, similar to Gadamer’s infinite conversation. This principle is particularly evident in a quote from Jürgen Bolten (1985):

“Understanding a text means, therefore, to comprehend features of the text’s structure or content and its production, incorporating the text and reception history as well as reflecting on one’s own interpretation stance within a reciprocal justification relationship. The fact that there can be no false or correct interpretations, but at best more or less appropriate ones, follows from the […] historicity of the constituents of understanding and the related unfinishability of the hermeneutic spiral. […] According to the spiral movement, the interpretation regarding its hypothesis formation is subject to a mechanism of self-correction.” (pp. 362-363)

It’s also important to mention the relationship between the whole and the parts in the hermeneutic circle. Understanding a text means understanding its parts in relation to the whole and vice versa. This is why it’s called a circle or a spiral.

This principle also applies to the relationship between the text and its context, or between different texts. The interpretation always moves in a circle between understanding the individual parts and the whole.

Who Should You Know?

  1. Hans-Georg Gadamer: As mentioned, Gadamer is a key figure in hermeneutics. His work “Truth and Method” is a foundational text in the field.
  2. Martin Heidegger: Before Gadamer, Heidegger laid the groundwork for existential hermeneutics. His main work, “Being and Time,” is crucial for understanding the philosophical underpinnings of hermeneutics.
  3. Friedrich Schleiermacher: Often considered the father of modern hermeneutics, Schleiermacher emphasized the importance of understanding the author’s intention and the historical context.
  4. Wilhelm Dilthey: Dilthey further developed the concept of the hermeneutic circle and stressed the difference between explaining natural phenomena and understanding human expressions.

Challenges of Hermeneutics

The inductive reasoning and the “infinity” of the hermeneutic approach can lead to challenges.

One of the most well-known issues in the history of hermeneutics revolves around the interpretation of the Bible. Here, a particular case arises due to the Bible being written by various authors, at different times, and within different cultural epochs.

If one understands the Bible as a cohesive work and deduces the whole from its parts, things become tricky.

The second challenge of hermeneutics and the hermeneutic circle is their infinity. If knowledge can never truly be considered complete, then there’s always a certain provisional nature to it.

We can never arrive at definitive statements, but must consider everything with reservations. This can be unsatisfying in some cases.

Conclusion

Hermeneutics, then, is not just a fancy term for interpreting texts. It’s a fundamental approach to understanding the world around us, grounded in the principle that our preconceptions and the context of our understanding are always in dialogue with what we seek to understand.

Whether you’re a student, a researcher, or simply someone interested in the philosophy of understanding, grappling with hermeneutics can deepen your appreciation for the complexities of interpretation.

So, the next time you sit down to interpret a text, remember: you’re engaging in a process that philosophers have pondered for centuries, and you’re part of the infinite conversation that is understanding itself.



How to Get over Fear of Presenting in Class (7 Quick Remedies)

The announcement “In this subject, the assessment is a 45-minute presentation” immediately triggers your fear of presenting in class?

Oh no! Where’s the exit?

The mere thought of your next presentation sends your pulse racing, induces sweat, and triggers an urge to flee? Then, it’s time to conquer your fear of presentations.

Don’t worry, from personal experience, I know exactly how daunting it can be to stand in front of the entire class and have to deliver a speech.

To help you feel more confident in your next presentation, I’ll share 7 tips in this article on how to manage your stage fright and perform with confidence.

#1 Practice Makes Perfect

Practice? Well, that’s nothing new. True, but this tip is indispensable for overcoming your fear of presenting in class.

The better prepared you are for your presentation, the greater your confidence will be, and simultaneously, your fear and nervousness will decrease.

Once you’ve developed your presentation, practice delivering it. Start by presenting to yourself.

It’s often recommended to practice in front of a mirror. This is a great way to see and improve your body language.

However, it can sometimes feel odd to watch yourself in the mirror, especially if you’re just starting out and feeling uncertain.

Initially, you can practice without a mirror and go through the presentation out loud at your desk.

Or simply walk around the room while going through your speech (this is my favorite method).

Speak loudly and clearly. The greatest learning effect comes from having to actually articulate the sentences you’ve planned.

Just by presenting out loud, you’ll save yourself from long pauses for thought when it really counts, and you’ll automatically gain more confidence.

Next, you can practice your speech in front of a mirror or even record yourself on video. Even if it still feels a bit uncomfortable, give it a try!

No one but you will ever see it, and you can observe your own body language and identify weaknesses in your delivery.

A test audience is also great for practicing your presentation. Ask family or friends for feedback!

This way, you’ll go into the presentation even more confidently. It’s important to practice exactly how the presentation will be conducted at the university. So, use the same laptop, the same notes, and slides.

Don’t forget to also prepare for possible discussion questions. A stuttering discussion can ruin a lot of your hard work.

By the way, even the world’s great speakers practice before they go on stage. All professionals practice, which is why they are so good.

If you want to learn more about this, I recommend reading “Turning Pro” by Steven Pressfield.

#2 Convey Confidence with Body Language

During a presentation, it’s not just about what you say, but also how you convey it nonverbally.

Body language plays a crucial role, as it affects not only the audience but also your subconscious.

Even if you’d rather shrink and disappear into the ground, stand up straight with a firm stance, feet a bit wider than usual, pull your shoulders back, and put on a smile – you’ll automatically feel more confident.

And if you’re still panicking on the inside, then just don’t let anyone see it. “Fake it ’til you make it.”

If some of your friends are among your classmates, even better. Try to make eye contact with them. They want you to succeed, and as your cheerleaders, they boost your confidence!

From my own experience, however, you need to master Step 1, practicing the content, before you can optimize your body language.

If you have to constantly think about what to say next, you’ll forget to pay attention to your posture.

The more you practice, the more attention you can give to the details, and the better you can overcome your fear of presenting in class.

If you don’t know what to do with your arms, hold a pen or a presenter. This gives your hands something to do, and you can gesture more effectively.

Overcoming the fear of presentations

#3 Calmness through Breathing and Relaxation Exercises

In stressful situations, we tend to breathe shallowly and quickly, which amplifies our nervousness. Through targeted breathing exercises, you can interrupt this reaction, lower your pulse, and get your nervousness under control.

A simple technique is diaphragmatic breathing, where you breathe deeply into your belly and slowly exhale. This improves oxygen flow, better energizes your brain, and automatically calms you down.

Place your hand on your belly to feel if you’re really breathing deeply into it.

You can also try to take a deep breath through your nose after your normal, shallow breath. As if you were trying to fill your lungs completely with air. Repeat it 3 times.

This technique slows down your pulse and calms you down.

Relaxation exercises like progressive muscle relaxation or meditation can also help reduce your tension and strengthen your concentration.

Regular practice of these techniques can decrease your fear of presenting in class over the long term and help you approach presentations more calmly and confidently. If you’re not already using a meditation app, now is the time!

My favorite is “Waking Up” by Sam Harris.

Before your next speech, it’s definitely worth spending a few minutes on breathing and relaxation exercises. You don’t necessarily have to do the exercises and meditations right before your speech, as that wouldn’t be practical.

Immediately before your speech, however, you will remember the techniques you have practiced extensively and can apply them unnoticed and spontaneously.

#4 Embrace Your Inner Stoic

“Surely I’ll forget half of it.” “I’m not prepared at all.” “My presentation is bad, the others are much better.”

Is your inner critic in top form on the day of the presentation? We definitely need to counteract this, as it unnecessarily increases your fear of presenting in class.

You can’t change your external circumstances. But you can change how you respond to them.

  • Accept Your Emotions: It’s normal to feel fear or nervousness. Accept these feelings, but don’t let them dictate your actions.
  • Focus on What You Can Control: Concentrate on the things you can influence in your preparation and presentation, and let go of things that are beyond your control, such as the audience’s reactions.
  • Use Negative Visualization: Imagine the worst-case scenario as mental training. Visualize your presentation going wrong, but then visualize how you calmly and collectedly respond to it.

Familiarize yourself with Stoic philosophy if your emotions and fears often overwhelm you. Often, giving a situation a new, more positive framing is enough to become more composed.

Look into books by Ryan Holiday and William B. Irvine.

overcoming the fear of presentations 2

#5 Leave Nothing to Chance

You’re optimally prepared, the speech is ready, and belly breathing is already showing its effects. Time to start!

Quickly connect your laptop… Oh God, what kind of projector is this?

Why doesn’t the cable fit? And suddenly, the nervousness rises, and you’re completely thrown off.

To avoid this, familiarize yourself with the technology and the premises in advance. Arrive at least half an hour early and test the technology calmly.

Even better: Stay a little longer in the room the week before and familiarize yourself with the setup. If you want to show a video, test the sound and whether it plays through the room’s speakers.

Also, prepare backups. Prepare for the eventuality that the internet in the room doesn’t work: download all files locally to your computer, pull everything onto a USB stick, and bring your own HDMI adapter, just in case.

Be a pro. Be prepared. This way, you can overcome your fear of presentations.

#6 Focus on the Essentials

You’re deep into the topic, and so far, everything is going well. But what’s going on in the third row on the left? Why is someone yawning? And in the fifth row, someone has their head on the table!

Am I putting everyone to sleep? Even the professor just glanced out the window! Oh no! Even the Stoic gods can’t help me now. My presentation is boring and therefore bad!

Don’t make assumptions.

Try not to take everything personally. Even if someone yawns, it doesn’t automatically have to do with your presentation.

Maybe the person just slept poorly or was out late at a party. Stay focused on yourself and don’t be distracted by the audience.

I’ve had rows of students fall asleep during my lectures. But honestly, that’s happened to me before, too. It certainly wasn’t because of the lecture… 😉

It’s normal for different reactions to occur in the audience, and this doesn’t necessarily reflect the quality of your presentation. Stay calm and focus on what you have prepared well and how you want to convey the knowledge.

If they’re not listening, it’s their loss.

You do your part.

overcoming the fear of presenting in class

#7 Always Compare Yourself to Yourself

You don’t want to forget anything during the presentation, misspeak, or blush, and you definitely want to be able to answer ALL questions.

If you aim to do everything perfectly, you’re more afraid of failing. Try not to be so hard on yourself because there’s no such thing as perfection.

Even Justin Bieber has had a blackout during a concert, and news anchors mess up despite years of experience.

And who remembers it the next day?

No one.

Realize: Nothing is perfect, so you don’t have to be perfect either. If you lose the thread, pause for a moment, collect yourself, and continue.

As long as your presentation is 1% better than your last, you’re doing everything right.

You’ll find that it becomes easier to give a presentation with each one.

So, be patient with yourself and don’t expect to overcome your fear of presentations overnight.

It’s a learning process, and with each presentation, you’ll become more confident and better. University is there for you to grow and make mistakes.

That’s the only way you’ll improve.

Why Nervousness (in Moderation) Can Actually Help You Overcome Your Fear of Presenting in Class

You can alleviate the fear of presentations with the aforementioned 7 tips and a bit of patience.

However, a certain degree of nervousness before your presentation will almost certainly remain.

And that’s a good thing and often even makes your presentation better. So, you don’t need to completely overcome your fear of presenting in class.

Our body produces adrenaline in stressful situations. This hormone boost provides your body with more oxygen and energy – you’re more alert and your concentration and performance improve.

In short: The tension helps you to successfully manage the presentation.

If you view your nervousness as a positive companion, you can accept it as part of the natural process and deal with it. It’s important to recognize that you don’t have to be perfect and that it’s okay to be nervous.

Pros are nervous too. Nervous but prepared. Be a pro.


How to Write a Thesis in 2 Weeks: A 7-Step Emergency Plan

Need to write your 10,000-word thesis in 2 weeks? Oh dear! Well, let’s quickly figure out a solution.

Since you don’t have much time, the next 15 minutes of reading should be enough.

In this video, you’ll get personal emergency coaching consisting of 7 steps.

If you follow these steps one after the other in the given time, you still have a chance to submit your thesis in 2 weeks without failing.

Disclaimer

Before we start, here’s a 10-second disclaimer:

I would never, ever recommend letting it come to this. If you’re only giving yourself 2 weeks for your thesis or final paper, you’ll have your reasons. These reasons are none of my business and are entirely your own.

If your goal is to excel intellectually in your thesis and get a top grade, then this video is not for you—in that case, you should check out my other tutorials.

This article is for you if you just want to get your thesis onto paper as quickly as possible—and pass.

So, let’s get to work.

#1 Lower Expectations and Drastically Increase Priority

The first question that comes to mind in such a situation is this:

Is it even possible to write a 10,000-word thesis in 2 weeks?

Of course, it’s possible.

It is even possible without any dirty tricks, plagiarism, or Red Bull poisoning.

But only under certain conditions.

Lower Your Expectations to Zero

The first condition is to accept the situation and eliminate your expectations of a good grade or anything else. Approach the situation stoically and do your best to show yourself that you don’t give up.

That you don’t throw in the towel but make the best of the situation.

Visualize failing this thesis, and accept that possibility too.

This shouldn’t lead you to half-hearted action in the 2 weeks you have left. Rather, these 2 weeks are your chance to take on the challenge.

Now you can intensively train how to write a thesis in 2 weeks.

And if a second attempt comes, you won’t have to start from scratch.

Treat Your Thesis as Priority Number 1

Whether you can write a thesis in 2 weeks doesn’t depend on whether you can write well or are blessed with other talents. What matters now is your time management.

If you don’t make your thesis your top priority, it will be very difficult.

If you really want a chance, then your thesis must be the only thing you focus on now.

(Besides your health, but more on that later)

Make a contract with yourself and signal to the people around you that you won’t be as available as usual for a little while. Put your phone in the fridge in the morning and don’t get it out until after work.

(Please don’t actually put it in the fridge, put it in a drawer.)

#2 Your Research Strategy for a Thesis in 2 Weeks

Enough of the admonishing words, now let’s move on to the substantive strategy for your daring venture.

The first strategic decision you must make concerns your research strategy.

What do I mean by that?

Basically, your research question and the method you want to use to address it.

Scientific papers in the social sciences, and beyond, can be divided into empirical and non-empirical works.

Why No Empirical Thesis?

For an empirical research design, you need qualitative or quantitative data that you collect through a survey or interviews and then analyze.

If you can make this decision yourself, I would advise against pursuing an empirical research design in this situation.

Not because it’s more difficult or time-consuming than a conceptual or literature-based work.

The reason is that you depend on other people.

You need to get people to fill out your survey or give you an interview.

Any situation in which you rely on others should be avoided if you want to write a thesis in 2 weeks.

Last Resort: A Literature-Based Thesis?

The only scenario in which an empirical research design makes sense is if you have already collected the data or have been provided with it.

Or any other case where you completely control the implementation of your methods, such as a simulation.

Bachelor thesis in 2 weeks

So, ask yourself what dependencies you have in your strategy and eliminate them all.

If you don’t have data available, you’re left with a literature-based thesis.

You can write a review for which you collect your “data” in the form of scientific articles.

The good thing about it is that you can decide for yourself how quickly you get your literature.

You’ll learn what types of reviews there are and what the literature collection looks like in my tutorial on how to write a literature review.

Not all supervisors would expect you to write a critical review or a systematic review in a 10,000-word thesis.

Your thesis can also address a research question that you answer with an unstructured analysis of literature.

However, I would recommend choosing a recognized review strategy and implementing it step-by-step.

The advantage is that you can refer to one or two methodology articles that explain exactly how to proceed.

All you have to do is follow the instructions.

So you don’t have to waste energy thinking about how to structure your thesis or what your research question should look like.

Everything is predetermined, and you save valuable time and energy.

If you choose the research strategy of a standalone literature review, you don’t need to feel bad about writing a thesis that is less valuable than an empirical one.

The synthesis of literature is an important part of scientific practice and can lead to great results!

#3 Set Up a Work Plan to Write Your Thesis in 2 Weeks

The next step in your emergency plan is a strict time allocation. Since every minute counts, you must work with sharp deadlines that you keep for yourself.

To keep things simple, I suggest dividing the remaining time into three equal thirds.

Bachelor thesis in 2 weeks 2

1. Data Collection (first third)

2. Data Analysis (second third)

3. Text Production (third third)

Assuming you have 2 weeks available, you have 4 days and a few hours per third. Set the deadline for each third in the calendar and stick to it.

This way, you also have a sense of achievement every 4 days that motivates you to keep going.
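
A quick sanity check of the arithmetic: 14 days ÷ 3 ≈ 4.7, which is where the “4 days and a few hours” per third comes from. And if you round down to 4 full days per third, as the day-by-day plan below does, you conveniently end up with 2 spare days at the end – more on those in step #6.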

Now let’s focus on each specific third and what you need to do.

#4 Data Collection (First Third)

For simplicity, let’s assume you’ve chosen a systematic literature review as your research approach.

Literature Search & Screening (Day 1)

On this day, your goal is to gather all the literature you need. Define your search keywords and databases and try to land somewhere between 100 and 500 hits.
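
To make that concrete, a purely hypothetical search string for a database such as Scopus or Web of Science could look like this: (“remote work” OR “telework”) AND (“employee productivity” OR “job performance”). Combining synonyms with OR and linking your core concepts with AND is usually enough to steer the number of hits into that range.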

Once you’ve collected all the hits based on your keywords, the screening follows in the second half of the day. Now read the titles and abstracts and sort out anything irrelevant.

If you end up somewhere between 20 and 30 relevant articles, that’s OK.

If you’re below that, keep searching through forward and backward search. You’ll learn what that is in my other tutorial on literature reviews.

Read, Read, Read (Day 2 and Day 3)

Now make yourself comfortable somewhere where you’re undisturbed and read your 20-30 relevant articles.

No one said you can’t have fun with your turbo thesis. So go to your favorite place and start reading. You can’t get around reading. Because without input, no output. The more you read on these days, the easier text production will be later on.

Collect text passages for indirect and direct quotes in an excerpt table.

You’ll learn how to set it up in my tutorial on how to write an excerpt, where you’ll also get a table template to start with right away.

Literature Management (Day 4)

Load all your relevant articles into your literature management tool (e.g., Mendeley or Zotero) and check if all metadata are correctly entered.

If not, supplement them for each article. If you have additional literature that you already know from your studies, add it and check its metadata as well.

As a rule of thumb: if your thesis has roughly as many references as required pages, plus or minus 10%, your reference list is in a good range.

For a literature review, aim for the upper end of that range (plus 10%).
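
A quick worked example: if your thesis is required to be, say, 40 pages long, that rule of thumb suggests roughly 36–44 references; for a standalone literature review, lean toward the 44.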

Don’t skip the steps with literature management, because at the end of text production, you can generate your bibliography with one click and save valuable time.

If you still have time left, continue reading your 20-30 relevant articles.

Bachelor thesis in 2 weeks 3

#5 Data Analysis (Second Third)

The next 4 days you’ll be busy with data analysis. You’re preparing everything for the results and discussion section of your thesis here.

Analysis (Day 5-7)

For a qualitative evaluation of literature, as is the case with most review types, the analysis mainly consists of coding.

This is nothing more than forming abstract categories based on your material, which consists of your 20-30 relevant articles. You can find plenty of tutorials on coding techniques on my channel.

The goal now is to form categories that summarize the contents of all your relevant articles.
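
A purely hypothetical example: if several of your articles describe flexible scheduling, autonomy, and self-set deadlines, you might group those passages under a category like “perceived control,” while passages about supervisor feedback and peer encouragement end up under “social support.” Which categories emerge depends entirely on your own material.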

No matter which method you follow (empirical, literature-based, programming, design science), watch how the pros do it and follow their structure.

Scientific papers always follow the same blueprint. You just need to recognize the blueprint that is right for your thesis, adopt it, and fill it with your own content.

It’s not necessary to reinvent the wheel. On the contrary: Your supervisors want to recognize a blueprint that is common in their research discipline.

Creating Figures and Tables (Day 8)

Create a figure for your methodology section that reflects your data collection.

In the findings section, add tables that summarize your literature analysis.

For the discussion section, add a table or figure that abstracts your results (which are the categories you have built) and provide a small theoretical contribution (e.g., organize the categories in a small framework).

Again, I can only recommend that you take an example from existing research papers.

It is important that you create all your figures yourself and insert them into your thesis in high resolution. No pixelated images!

Detailed tutorials on writing the methodology, findings, and discussion sections can be found on my channel or in my online course.

Check out the video description for more info.

#6 Text Production (Third Third)

The last third is dedicated to text production. Don’t be intimidated by the fact that you haven’t written anything yet.

In the first and second thirds, you laid the foundation for what you’re writing about. If you had started writing on day 1, you would have been writing blindly, without knowing where the journey was going.

Normally, I would recommend not starting with the introduction. In this case, however, you have already done all the preliminary work and can “write from the top.”

Open your literature management software and your excerpts from the first third and get started.

  • Introduction and Background Section (Day 9)
  • Methodology and Findings (Day 10)
  • Discussion and Conclusion (Day 11)
  • Revision (Day 12)

On day 12, you start again with the introduction and revise all chapters so that they are linked to each other.

Use the same terms, add references where you need more evidence, and check where you can make grammar improvements.

If you’ve been counting, this emergency plan leaves 2 days left.

At least one day is a buffer for formatting. After all, you still have to create your reference list, maybe an appendix, proofread and print your work, prepare a digital submission, and so on.

The last remaining day is your wild card. Save it as long as possible and use it for unforeseen emergencies that are more important than this stupid thesis.

If everything goes well and you still have the wild-card day left after completing the second third, then use it for a break. Which brings us to the last point of the video.

#7 Mental and Physical Health Management

As stressful as it may sound to want to write a thesis in 2 weeks – a 10,000-word thesis is by no means more important than your health.

You can simply write it again and failing is not bad at all. Who cares?

Only go through such a sprint, as I have described it, if you feel physically and mentally fit. If you’re already at your limit, then listen to your body and don’t make it worse.

Your health always comes first, because if it’s out of balance, you won’t enjoy a passed thesis anyway.

Since this emergency plan requires full days of work, I recommend planning them intelligently.

Work with 90-minute deep work sessions and take breaks in between.

In the middle of the day, I recommend a longer break. Go running or go to the gym – after that, you can continue fresh.

In the evening, set a cut-off time that you don’t exceed, so that you still have enough time to wind down and get at least 8 hours of sleep.

Try not to rely too much on junk food and caffeine, but rather on food that fuels your brain as well as possible.

Over a period of 2 weeks, it makes a big difference what fuel you give your body.

And now stop procrastinating and get started – time is ticking!


Active Recall: The #1 Study Technique Behind Every A+ Exam

Why doesn’t anyone teach us how to study properly for exams?

I know students from my time at university who studied for 10 hours every day and still failed the exam. How is that possible?

What do people do differently to achieve better results in less time?

One thing I can clarify upfront: they’re not necessarily smarter.

They simply have a better study method at their disposal.

In this article, I’ll explain the Active Recall study method, which has been found to be the key to peak academic performance in numerous scientific studies.

Passive vs. Active Learning

When it comes to excelling in exams, it’s not just about the number of hours you spend studying, but rather the effectiveness of your study methods.

In a comprehensive 58-page meta-analysis of various learning approaches, Dunlosky et al. (2013)* found that commonly used techniques such as re-reading, highlighting, or summarizing notes often do not yield the desired results.

But why do these methods remain so popular?

The answer is quite simple: they are straightforward and have been the traditional way of studying for a long time.

Reading and highlighting notes are very convenient.

And who hasn’t experienced this: after reading something multiple times, you might even feel well-prepared.

However, a word of caution!

When suddenly asked for specific details, many find themselves at a loss. There’s a difference between merely recognizing information and actually recalling it.

Active Recall Learning Method

The Study Secret: How Our Brain Functions

To understand the best way to study, let’s take a brief journey into the realm of neuroscience. To make this more relatable, I’ll illustrate it using the example of Belinda.

Picture Belinda as she prepares for her upcoming exam, diligently reviewing her lecture slides repeatedly. At this very moment, various regions of her brain are operating at full capacity.

The occipital lobe is busy creating mental images of what she’s currently perusing, while the angular gyrus and the fusiform cortex are hard at work, deciphering the meanings of the words she’s reading.

Once the information has been processed, the brain dispatches it to the hippocampus – essentially the brain’s memory hub.

But here’s the twist: if you merely read through notes, only a fraction of the information tends to stick. Think of it like strengthening your muscles: it requires targeted exercises.

Similarly, your memory – especially the hippocampus – needs the right ‘workout regimen.’

This is where the Active Recall study method comes into play.

While straightforward reading primarily activates the visual aspects of your brain, the hippocampus often gets sidelined.

Hence, passive study techniques like reading scripts or highlighting notes pale in comparison to Active Recall.

What Is the Active Recall Study Method?

Active Recall operates by compelling our brains to actively engage in the study process. Instead of passively absorbing information through reading or listening, Active Recall encourages us – as the name implies – to actively retrieve information from our memory.

The act of actively recalling information trains the hippocampus and increases the likelihood that you’ll remember the information when you need to recall it later (e.g., during an exam).

When employing the Active Recall study method, you can retain what you’ve learned for a significantly extended period and apply it in various contexts.

Let’s explore five ways to incorporate the Active Recall learning method into your exam preparation.

#1 Stop and Recite

Following your reading of a section, momentarily set aside your study materials. Attempt to express the content in your own words.

Afterwards, retrieve the script and compare: What did you manage to remember, and where are the gaps? Fill in those information gaps using the script, and repeat the process until you’ve confidently internalized the content.

Active Recall Learning Method 2

#2 Flashcards

Flashcards have been a tried-and-true study method for many students for quite some time. Even the act of creating the cards, where you must articulate information precisely, serves as an effective part of the exam prep.

In today’s digital age, you can leverage tools like Anki to craft flashcards. Anki incorporates another study method that works particularly well for memorization, known as “spaced repetition.” This means that flashcards are revisited at specific intervals.

These intervals are carefully calibrated to enhance your ability to remember the content effectively.
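
To make the idea of growing intervals concrete, here is a minimal sketch in Python – my own simplified illustration, not Anki’s actual scheduling algorithm:

    from datetime import date, timedelta

    # Simplified illustration of spaced repetition (not Anki's real algorithm):
    # the review interval doubles every time you recall a card correctly
    # and drops back to one day when you fail.
    class Flashcard:
        def __init__(self, question, answer):
            self.question = question
            self.answer = answer
            self.interval_days = 1
            self.due = date.today()

        def review(self, recalled_correctly):
            if recalled_correctly:
                self.interval_days *= 2   # e.g., 1 -> 2 -> 4 -> 8 days
            else:
                self.interval_days = 1    # struggled? start over with short intervals
            self.due = date.today() + timedelta(days=self.interval_days)

    card = Flashcard("What does Active Recall train?", "Actively retrieving information from memory")
    card.review(recalled_correctly=True)
    print(card.due)   # due again in two days

The real algorithms are more sophisticated, but the principle is the same: the better you know a card, the longer it stays out of sight.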

However, a word of caution is in order: when using flashcards for studying, it’s crucial not to overlook active recall.

What do I mean by that? If you can’t immediately explain a term or concept, simply flipping the flashcard to reveal the answer isn’t enough.

This approach does not align with active recall because it doesn’t provide your brain with the opportunity to recall the information from memory. Instead, take a moment to challenge yourself to explain as much of the answer in your own words as possible before checking the back of the flashcard.

Additionally, a small tip regarding flashcards:

They often require you to condense information significantly. This can lead to a situation where you may remember numerous isolated facts but struggle to grasp the broader context or how these facts interconnect.

#3 Create Questions: Your Path to Profound Understanding

Instead of relying solely on your study materials and flashcards, try crafting your own questions related to the content you’re studying.

This strategy encourages you to critically evaluate what you’ve learned and ensures you maintain a firm grasp of the overall context.

Creating questions actively engages your learning process, allowing you to forge stronger connections between the new information and your existing knowledge.

Following each chapter or section, make a note of important questions, and later, challenge yourself to answer them without consulting your notes.
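
For example (a made-up question): after a chapter on research methods, you might note down, “What is the difference between qualitative and quantitative data, and when would I choose each?” – and then try to answer it from memory a day later without looking at your notes.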

Active Recall Learning Method 3

#4 Engage an Audience

A highly effective way to master the material is by teaching it to others.

This process activates various cognitive pathways in your brain. Try explaining the subject of your exam to your classmates, friends, or family members.

Doing so compels you to think deeply about the topic and structure your explanation logically. Furthermore, if you encounter difficulties during your explanation, it serves as an immediate indicator of areas where you need improvement.

Pro tip: In the absence of a live audience, you can even imagine one or engage in a chat with ChatGPT. You can prepare the AI with a prompt to ask you specific questions.
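
For example, a prompt along these lines (just an illustration – adapt the subject and level to your exam) works well: “Act as my examiner for an undergraduate statistics exam. Ask me one question at a time about hypothesis testing, wait for my answer, and then tell me what I got right and what I missed.”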

If you’re interested in a dedicated tutorial on using ChatGPT for exam preparation, feel free to leave a comment under this video.

#5 Utilize Past Exams as a Recipe for Success

Reviewing past exams is like a trial run for the real test. You become familiar with the question formats and develop an understanding of what to anticipate in the exam.

When you tackle previous exams under timed conditions, you’ll also hone your time management skills and gauge your level of readiness.

An additional benefit: The more you engage in this practice, the more at ease you’ll feel during the actual exam because you’ll possess insights into what lies ahead.

The Drawbacks of Active Recall

Certainly, Active Recall boasts numerous advantages, but are there any downsides? Undoubtedly!

This method can be rather demanding. It necessitates true engagement and cognitive effort, which is substantially more challenging than simply skimming through your notes or reading the script.

Particularly with complex subjects, it can be frustrating when answers don’t readily come to mind. This study approach requires stepping out of your comfort zone and demonstrating substantial initiative.

Yes, the allure of quickly consulting your notes may be strong, but trust me, the additional effort required by Active Recall is well worth it!

* Dunlosky, J., Rawson, K. A., Marsh, E. J., Nathan, M. J., & Willingham, D. T. (2013). Improving Students’ Learning With Effective Learning Techniques: Promising Directions From Cognitive and Educational Psychology. Psychological Science in the Public Interest, 14(1), 4–58. https://journals.sagepub.com/doi/abs/10.1177/1529100612453266