25 Mar

Increasing the Reliability, Validity, and Replicability of Research

In my previous blog post, I addressed the criteria that are vital for scientific research as well as thesis work in higher professional and academic education. To read that blog, click here.

In this blog, I explore in more depth how you can enhance the reliability, validity, and replicability of your research, with additional details, tips, and tricks. Since these three criteria are the cornerstones of solid research, they are discussed here at length. The tips and advice below can serve as useful tools if you are striving to conduct well-founded research and aiming for higher-quality results.

Increasing the reliability, validity, and replicability of a study

Explanation of the main criteria of good research

Let’s start at the beginning. Reliability, validity, and replicability are fundamental concepts within research methodology. They are essential to ensure the quality and credibility of research results. But what do they mean?

  • Reliability: Reliability refers to the consistency and stability of measurement results when measurements are repeated under the same conditions. A reliable instrument or method of measurement consistently yields comparable results in repeated measurements, allowing researchers to rely on the accuracy and consistency of their data. In other words, if someone wants to repeat your study, your research report should contain sufficient information to make this possible and to obtain findings comparable to your own. Click here for more details about reliability.
  • Validity: Validity refers to the extent to which a measuring instrument or test actually measures what it is intended to measure. A measurement is valid when it makes sense and is truly representative of the concept being studied. A valid measurement instrument provides accurate information about the variables it targets.
  • Replicability: Replicability refers to the ability to conduct a study or experiment again using the same methodology and procedures, but with new data or in a different setting. If the results of the repeated research are similar to the original findings, this strengthens the reliability and credibility of the original study, because it demonstrates that the results were not accidental and are reproducible under different circumstances.
    • It’s important not to confuse replicability with reproducibility. The difference between reproducibility and replicability of a study resides in the approach to data: reproducibility implies that the same results are obtained when existing data is reanalyzed, whereas replicability means that the same research can be conducted again with new data, resulting in the same outcomes.
    • Reproducibility and replicability mainly add value to scientific studies by strengthening the reliability and credibility of the results, thereby enabling other researchers to validate the findings and build further on existing research. In addition, both contribute to ensuring the validity of scientific findings and strengthening the credibility of research. The tips given below for replicability can also be used for reproducibility.
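The idea of reliability as consistency across repeated measurements can be made concrete with a simple test-retest check: measure the same respondents twice under the same conditions and correlate the two rounds. The sketch below uses only the Python standard library, and the scores are invented purely for illustration:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two measurement rounds."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores of five respondents, measured twice
# with the same instrument under the same conditions.
round_1 = [7, 5, 8, 6, 9]
round_2 = [7, 6, 8, 5, 9]

r = pearson_r(round_1, round_2)
print(f"test-retest correlation: {r:.2f}")  # a value close to 1.0 suggests a reliable instrument
```

A correlation near 1.0 indicates that the instrument produces consistent results on repetition; a low correlation is a warning that the measurement, or the conditions under which it was taken, should be re-examined.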

In conclusion, reliability focuses on the consistency of measurements, validity on the accuracy and representativeness of measurements, and replicability on the repeatability of research results. These concepts are of paramount importance in ensuring the credibility and validity of scientific research.

Now that we understand what these terms mean, let’s explore how you can enhance reliability, validity, and replicability during your research.

Fundamental research and Technical research

Before we delve into each approach, it is essential to understand that there are distinct methodologies for conducting research, namely fundamental research and technical research. Fundamental research primarily relies on desk research and field research, whereas technical research emphasizes data-driven, computer-based methodologies. Examples include business intelligence, artificial intelligence, and quantum computing, used, for instance, to conduct trend research with predictive analytics. For more insights into this, click here.

Qualitative research and Quantitative research

It is crucial to be aware that there are generally two approaches to enhancing reliability, validity, and replicability: qualitative and quantitative.

  • Qualitative research focuses on understanding complex phenomena through in-depth exploration and analysis of qualitative data. Consider opinions and attitudes, often gathered through methods like interviews and observations.
  • Quantitative research, on the other hand, aims at collecting and analyzing numerical data to identify patterns, relationships, or trends, typically through standardized surveys or observations, using statistical methods for analysis. Quantitative research further employs mathematical data extensively, where conclusions are often drawn using programs such as SPSS. To learn more about the SPSS program and quantitative methods, I refer you to this book.
    • If desired, you can read more here about the differences between qualitative and quantitative research.

In practice, professional researchers often combine both approaches, depending on the research question and the purpose of the study. This is known as mixed-methods research; combining multiple methods or sources in this way is also often referred to as triangulation.

In this blog, we do not focus on these quantitative methods, but zoom in on ways to improve qualitative results. Before we discuss qualitative research in depth, it is important to further elucidate the basic principles of research.

Main Question or Hypothesis

When it comes to research, we frequently find ourselves at the junction of two separate paths, each of which denotes the start of an investigation.

The primary path is the formulation of a main question: a research question, a general inquiry, a design question, or in some instances a management question. These labels essentially carry the same meaning and serve as compass points for the direction of the research. Their purpose? To answer a specific question, supported by subsidiary questions that illuminate the finer details of the inquiry.

The secondary path veers into the world of hypotheses. Here, one draws from existing research, theories, or presuppositions about the future to posit that certain outcomes will ensue without direct evidence yet in hand. For those yearning for a more thorough explanation of the crafting and testing of hypotheses, click here.

Whether one opts for a main question or ventures into formulating hypotheses, the approach to conducting research is strikingly similar. Focusing on a research question entails a quest for solutions to a puzzle, while hypothesis testing is an endeavor to confirm or refute presuppositions. Despite this difference, the foundational approach remains largely the same.

In both modalities, whether shaping a main question or framing hypotheses, it is imperative to make key terms such as ‘desired effects’ measurable by operationalizing them.

Operationalization of concepts/terms

Operationalization is the process by which an abstract concept or terminology is translated into concrete, measurable variables, thus making it usable in research or experiments. In our pursuit of knowledge, this means defining a concept in such a way that it can be systematically observed or measured. This step is pivotal, for it enables the researcher to collect and analyze data with objectivity and systematic precision.

By operationalizing concepts, we lend our research greater validity, ensuring that what we measure truly reflects our intended inquiry. Transforming abstract notions into quantifiable variables elevates the accuracy and reliability of research findings, and it fortifies the legitimacy of conclusions, as we scrutinize elements truly pertinent to our research question or hypothesis. You can read more about the operationalization of concepts here.

From clear objectives and requirements to proven solutions

Ensuring that the objectives of your project or study are clearly articulated at the outset is of paramount importance. Describe clearly what you are developing, be it a proof of concept, a product, or a service prototype. Likewise, define the requirements that your solution must meet early in your research, for instance by employing the MoSCoW method. These requirements, covering aspects such as feasibility, viability, and longevity, must be clearly explained, described, motivated, and where necessary quantified from the beginning.

The manner in which these requirements are tested must be meticulously documented in both the methodology and the results, as well as in the implementation plan. By the end of your endeavor, you should be able to demonstrate how your chosen prototype or solution meets these requirements and what its impact is on the initial goals of your thesis or research report.

For a deeper exploration of requirements, click here; for guidance on composing a solid research report, further reading is available here.

Increasing reliability, validity, and replicability

Increasing reliability

To increase the reliability of a study, several strategies can be applied:

  • Define the research question accurately; this establishes the focus and relevance of the study.
  • Design the research methods carefully, ensuring that they measure what is intended (validity) and that repetition of the research yields consistent results (reliability).
  • Select respondents carefully to obtain a representative sample that reflects the entire population.
  • Use standardized measuring instruments; this contributes to consistency in data collection.
  • Train researchers and interviewers to minimize human error.
  • Conduct a pilot study to test and, where necessary, adjust the research methods.
  • Document all steps of the research; this contributes to the transparency and replicability of the process.
  • Analyze data systematically using validated statistical methods.
  • Check for measurement errors and correct them where possible.
  • Finally, reflect critically on personal biases and their possible influence on the research.

  • Transferability
    • Lincoln and Guba (1985, p. 217) argue that a researcher cannot determine whether findings are transferable to another context, since the researcher does not know where they will be applied. It is therefore important that the researcher provides detailed information about the researched context, so that others can assess whether the findings are useful in their own context. This can be achieved through rich description. A rich description, also known as “thick description,” offers detailed accounts of experiences, placing cultural and social interactions in context. This allows readers to empathize with the described situation and compare it with their own experiences. Incorporating quotations and contextual information, for example, can enhance the transferability of the story. In short, a rich description tells a story that the reader can recognize and understand in different situations (Tracy, 2010).

Click here for more details on increasing reliability.

Increasing Replicability

To increase the replicability of a study, several strategies can be applied:

  • Describe the research methods in detail, enabling others to repeat the study accurately.
  • Provide clear and extensive documentation of all research steps, which promotes transparency.
  • Use standardized procedures; this contributes to consistency in the study.
  • Share raw data and make code and algorithms available, especially when using computational methods; this facilitates replication and further analysis by others.
  • Conduct pilot studies to test and refine the methods before the actual study is carried out.
  • Be transparent in data analysis: communicate clearly how the data is analyzed and interpreted.
  • Encourage others to replicate the study and publish the results; this contributes to the credibility of the findings.
  • Use checklists to ensure that all necessary information is stated and no essential steps are missed.
  • Be open to feedback and peer review, which can improve the quality of the study by integrating different perspectives and insights.

Increasing Validity

To increase the validity of your study, there are several strategies you can consider:

  • Formulate a clear research question (see above); a clearly defined question guides your study and increases its validity.
  • Choose research methods that align well with your research question; appropriate methods contribute to the validity of your study.
  • Use validated measuring tools; instruments that have previously been tested and validated yield more valid results.
  • Conduct a pilot study: by testing your methods in advance, you can identify and fix any issues.
  • Train your research team thoroughly, so everyone knows how to apply the methods correctly.
  • Check for measurement errors and correct them; identifying and correcting errors in your measurement process safeguards validity.
  • Apply triangulation: by using multiple methods or data sources, you can confirm your findings.
  • Guarantee transparency in your research process by documenting all steps, so your study is reproducible.
  • Reflect on personal bias: be aware of personal biases that can affect validity.
  • Finally, consider external validity: how generalizable are your results to other situations or populations?

  • Utilizing control variables and operationalizing terms.
    Use as many control variables as possible to measure accurately what you intend to measure. Control variables can include a person’s age, gender, region, or occupation; that is, specific characteristics of your target groups. It is therefore very important to describe in detail who your target group is at the beginning of your study, for example by using a persona. More information about control variables can be read here. Furthermore, it is important to operationalize key terms; more information about operationalizing concepts can be found here. Finally, it is essential to ensure confirmability. How you can do that is explained below.

More tips & tricks

To measure the effect of a prototype or solution and to conduct reliable and valid research, various steps and methods can be applied:

  • Ensure there is a test plan, also known as an experiment plan, included as an appendix. This plan provides a detailed description of the tests and experiments that will be conducted, including the design, procedures, and measurement instruments used. See below for more information on this.
  • Additionally, it is important to include an overview of the asked questions, also known as topic lists, as an appendix. These lists provide insight into the questions posed during interviews or surveys, and assist in interpreting the collected data. It is also advisable to include interview reports as an appendix to support your narrative in your thesis or research. These reports offer detailed information about the conversations conducted and the responses received, allowing the reader a deeper insight into the collected data and the conclusions drawn.
  • If possible, conduct a short survey to collect feedback on the prototype from the target audience and other stakeholders. A structured questionnaire offers an organized way to collect feedback. This questionnaire should be focused on specific aspects of the prototype and should be short enough not to overburden the respondents, yet sufficiently in-depth to obtain valuable insights.
  • Clearly define the variables and measurable aspects of the prototype/solution, as previously mentioned; this is called the operationalization of concepts/effects. Ensure that important terms are made measurable by operationalizing them, and then test them in practice.
  • Validate results by examining the extent to which the solution meets the criteria of desirability, feasibility, and viability. As mentioned earlier, these criteria should be defined at the start of the research and tested afterwards. This involves checking whether the prototype/solution not only meets the needs and wishes of the target groups but is also, for instance, technically and financially feasible.
  • Conduct tests among target groups, or, if direct tests are not possible, use intermediaries, contacts, or final responsible parties who are in direct contact with the target audience. They often have insight into the target audience’s wishes and can confirm the findings of the research. If that is not feasible either, approach experts to validate the results. Direct tests with the target audience naturally provide the most accurate feedback, but intermediaries such as representatives, or experts in the field, can be engaged to gather insights from the target audience’s perspective.
  • Sometimes even a single question may be enough, as with the Net Promoter Score (NPS): here, customer satisfaction and effectiveness are measured through a single question with an explanation, such as “How likely is it that you would recommend us to a friend or colleague?” This can be a quick and effective way to gain valuable insights.
  • Keep asking questions during the research, and work with a combination of closed and open questions. Closed questions help capture numerical findings, such as ratings on a scale of 1 to 10, giving you concrete numbers that substantiate your findings; for instance: “What percentage of people find the product good?” Open questions provide insight into the reasons behind the answers and make more in-depth feedback possible. For example, after assessing the usability of the prototype/solution with a closed question, you might ask with open questions which specific aspects were experienced as good or poor, such as: “Why do you think the product is good?” This opens the door to detailed explanations and provides more context behind the answers.
  • Consider using an A/B test to compare different versions of the prototype/solution and determine the preference of the target audience. An A/B test is an experimental method in which two different versions of the prototype/solution are shown to comparable audiences, after which the preference is measured. This could, for example, be done by comparing the old prototype/situation with the new one and seeing which is better received by the target audience.
  • Ask neutral questions and ask respondents to motivate their answers to get unbiased feedback. Asking neutral questions like “What do you think of the prototype?” instead of “Do you think the product will have a positive effect?” helps to get honest and unbiased feedback. Asking for the motivation behind the answers also helps to gain deeper insight into the thoughts and preferences of the respondents.
  • Avoid adding personal perceptions and opinions to the questionnaire to obtain a reliable research result: It is important to keep the questionnaire objective and not add any personal preferences or opinions that could influence the results. This ensures a purer image of the perceptions and preferences of the target audience and contributes to the validity of the research. This approach also increases the quality of your evidence. For more information on improving the quality of your evidence, see below.
  • If possible, have the same experiment carried out by different team members or colleagues and compare the collected data. This requires a clear audit trail (see below), including clear instructions and descriptions of the technique applied during the experiment, so that a colleague researcher can replicate the research.
  • To strengthen your conclusions, you could carry out prolonged observations or experiments to collect data over a longer period. For instance, by being present in the research setting for an extended time, you can better understand the culture, context, social environment, and the phenomenon being studied (prolonged engagement). This can result in greater trust and openness among the respondents, leading them to share information that would otherwise not be available. Moreover, prolonged observation provides different perspectives on the phenomenon, resulting in broader insight. This gives a more comprehensive scope to the research. Through persistent observation, a better understanding can be obtained of which characteristics are most relevant to the study, thus adding more depth.
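The Net Promoter Score mentioned above follows a fixed formula: the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6) on a 0-10 scale. The responses below are invented purely for illustration; a minimal sketch in Python:

```python
def net_promoter_score(scores):
    """NPS = % promoters (scores 9-10) minus % detractors (scores 0-6)."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / n

# Hypothetical answers to "How likely is it that you would
# recommend us to a friend or colleague?" (0-10)
responses = [10, 9, 8, 7, 9, 6, 10, 5, 9, 8]

print(f"NPS: {net_promoter_score(responses):+.0f}")  # prints "NPS: +30"
```

Note that scores of 7 and 8 (the “passives”) count in the total but in neither group, which is why the result can range from -100 (all detractors) to +100 (all promoters).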

Results confirmation, confirmability

For sound research, it is advised to have the results confirmed, also known as confirmability. Lincoln and Guba (1985) have defined confirmability (neutrality) in qualitative research as the degree of the researcher’s neutrality. This refers to the extent to which the researcher’s biases, motivation, and interests have influenced the results. Confirmability in qualitative research corresponds to objectivity in quantitative research (Bryman & Bell, 2007, pp. 40-43).

There are various methods to ensure the confirmability of research, such as a confirmability audit (audit trail), triangulation, and reflexivity.

  • Confirmability Audit (Audit Trail): An audit trail is a detailed process description of the methodological steps taken during the research. It includes a description of all actions, including the collection, cleaning, and synthesizing of raw data, process notes, a reflection report, team role distribution, respondent selection, and interim notes. The goal is to clarify the assumptions and perspectives behind the analysis (Malterud, 2001). Sometimes research audits are also conducted to establish reliability. This involves an external party assessing the research process and data analysis to ensure that the findings are consistent and reproducible. Such an external audit enhances the accuracy and validity of the research by critically reviewing interim results and processes, which can lead to adjustments in research design and additional data collection for stronger results and clearer reporting.
  • Triangulation: This involves the use of multiple research methods, data sources, or theoretical perspectives to enhance the reliability and validity of the findings by combining different viewpoints. So make use of various methods and sources to evaluate and verify the effect of the prototype/solution. This includes both quantitative and qualitative approaches and collecting feedback from various stakeholders such as users, experts, and stakeholders. By applying triangulation, researchers can demonstrate that the findings of the research are credible and verify their accuracy.
  • Reflexivity: A process whereby the researcher consciously reflects on his or her role, background, and possible influence on the research, to identify and minimize any biases.

It is also important to avoid making assumptions when describing research results. Therefore, substantiate the results and cite sources throughout.

Sometimes it is not possible to involve the entire population, or all target groups, in your research. In such cases, you can use methods such as sampling, triangulation, and audit (Finfgeld-Connett, 2010) to promote generalization.
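Simple random sampling, one basic form of the sampling just mentioned, can be sketched in a few lines. Fixing the random seed also ties back to replicability: anyone rerunning the script draws the same sample. The population of respondent IDs below is invented purely for illustration:

```python
import random

random.seed(42)  # fix the seed so the draw itself is replicable

# Hypothetical population of 200 potential respondent IDs
population = list(range(1, 201))

# Draw a simple random sample of 30 respondents, without replacement
sample = random.sample(population, k=30)

print(f"sample size: {len(sample)}")  # prints "sample size: 30"
```

Whether 30 respondents is enough depends, of course, on the population size and the precision you need; sample-size calculators or power analyses can inform that choice.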

Experiment Plan / Test Plan

To guarantee the replicability and reliability of your research, it is vitally important to maintain an experiment plan, sometimes referred to as a test plan. The main findings and insights from the experiments are summarized and examined in the primary document, while the actual experiment plan is typically appended, providing supporting substance and validation for the observations.

An experiment plan serves as a detailed blueprint of your research design, including the methodology, procedures, variables, and expected results. Accurately documenting your experiment plan not only promotes the consistency and reliability of your research, but also enables its replication by other researchers, which is essential for the validity of scientific findings. Here you can read what an experiment plan can consist of, and how you can include one in the appendix of your research to support the conclusions you describe in the main document.

Note: it is always essential to include a separate experiment plan as an appendix for each hypothesis/sub-question, as the approach and execution often differ per part and can lead to different results.

Increasing the Quality of Evidence

The reliability of evidence is of paramount importance in any research: the credibility of support for, or refutation of, a hypothesis or research question ultimately depends on it. Additionally, more evidence contributes to greater reliability and helps demonstrate that the research is valid, meaning that it measures what it intends to measure. As a researcher, it is therefore imperative to collect as much strong evidence as possible and to use it responsibly to strengthen your results.

Gathering a considerable amount of strong evidence inevitably increases the quality of your evidence. Strong evidence is derived from real events and unbiased observations. On the contrary, weak evidence often emerges from social pressure or external influences on participants during research, necessitating caution when considering its validity as evidence. For more insights on the distinctions between strong and weak evidence, and for tips on bolstering your evidence, click here.

Frequently, we also see claims that the data in artificial intelligence programs are unreliable. Here, too, it comes down to the strength of the evidence: only strong evidence used as data input can guarantee the quality of the output.

The blog article is written with reference to the following resources. Please note that the original sources are written in Dutch.

Tags: evaluation criteria, scientific research, conducting scientific research, scientific research agency, research assessment, research methodology, scientific method, research methods, research strategy, scientific analysis, research results, scientific publication, research evaluation, scientific innovation, research design, research protocols, scientific knowledge development, experiment plan, testing, test plans, research justification.
