Pretest-Posttest Designs

For many true experimental designs, pretest-posttest designs are the preferred method of comparing participant groups and measuring the degree of change that occurs as a result of treatments or interventions.


Pretest-posttest designs grew from the simpler posttest-only designs, and address some of the issues arising from assignment bias in the allocation of participants to groups.

One example is education, where researchers want to monitor the effect of a new teaching method upon groups of children. Other areas include evaluating the effects of counseling, testing medical treatments, and measuring psychological constructs. The only stipulation is that, in a true experimental design, subjects must be randomly assigned to groups to properly isolate and nullify any nuisance or confounding variables.


The Posttest Only Design With Non-Equivalent Control Groups

Pretest-posttest designs are an expansion of the posttest only design with nonequivalent groups, one of the simplest methods of testing the effectiveness of an intervention.

In this design, which uses two groups, one group is given the treatment and the results are gathered at the end. The control group receives no treatment, over the same period of time, but undergoes exactly the same tests.

Statistical analysis can then determine whether the intervention had a significant effect. One common example is in medicine: one group is given a medicine, whereas the control group is given none, allowing the researchers to determine whether the drug really works. This type of design, whilst commonly using two groups, can be slightly more complex; for example, if different dosages of a medicine are tested, the design can be based around multiple groups.
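The analysis described above can be sketched in a few lines of Python. The posttest scores below are made-up illustrative numbers, and Welch's t-test (which does not assume equal variances) is just one reasonable choice of significance test:

```python
import math
from statistics import mean, variance

def welch_t(group_a, group_b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples (unequal variances allowed)."""
    na, nb = len(group_a), len(group_b)
    va, vb = variance(group_a), variance(group_b)  # sample variances (n-1)
    se2 = va / na + vb / nb
    t = (mean(group_a) - mean(group_b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical posttest scores: treatment group vs. control group
treated = [14, 16, 15, 18, 17, 16]
control = [12, 13, 11, 14, 13, 12]
t, df = welch_t(treated, control)
```

The resulting t statistic would then be compared against the t distribution with `df` degrees of freedom to judge significance.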

Whilst this posttest-only design does find many uses, it is limited in scope and contains many threats to validity. It is very poor at guarding against assignment bias, because the researcher knows nothing about the individual differences within the control group and how they may have affected the outcome. Even with randomization of the initial groups, this failure to address assignment bias weakens the statistical power.

The results of such a study will always be limited in scope and, resources permitting, most researchers use a more robust design, of which pretest-posttest designs are one. The posttest-only design with non-equivalent groups is usually reserved for experiments performed after the fact, such as a medical researcher wishing to observe the effect of a medicine that has already been administered.


The Two Group Control Group Design

This is, by far, the simplest and most common of the pretest-posttest designs, and it is a useful way of ensuring that an experiment has a strong level of internal validity. The principle behind this design is relatively simple: subjects are randomly assigned to two groups, a test group and a control group. Both groups are pre-tested, and both are post-tested, the ultimate difference being that only one group is administered the treatment.


This design allows a number of distinct analyses, giving researchers the tools to filter out experimental noise and confounding variables. Its internal validity is strong, because the pretest ensures that the groups are equivalent. The various analyses that can be performed upon a two-group control group pretest-posttest design are (Fig 1):

Fig 1. Pretest-posttest design with control group.

  • This design allows researchers to compare the final posttest results between the two groups, giving them an idea of the overall effectiveness of the intervention or treatment. (C)
  • The researcher can see how both groups changed from pretest to posttest, whether one, both or neither improved over time. If the control group also showed a significant improvement, then the researcher must attempt to uncover the reasons behind this. (A and A1)
  • The researchers can compare the scores in the two pretest groups, to ensure that the randomization process was effective. (B)

These checks evaluate the efficiency of the randomization process and also determine whether the group given the treatment showed a significant difference.
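The comparisons labelled A, A1, B, and C above reduce to simple differences of group means. A minimal sketch, using invented scores on a hypothetical 0-100 measure:

```python
# Hypothetical group means on a 0-100 scale (illustrative numbers only)
pre_treat, post_treat = 52.0, 67.0  # treatment group, pretest and posttest
pre_ctrl, post_ctrl = 51.0, 54.0    # control group, pretest and posttest

B = pre_treat - pre_ctrl    # (B) pretest comparison: checks randomization worked
A = post_treat - pre_treat  # (A) change within the treatment group
A1 = post_ctrl - pre_ctrl   # (A1) change within the control group
C = post_treat - post_ctrl  # (C) overall posttest difference between groups

net_effect = A - A1  # gain in the treatment group beyond the control group's gain
```

A small B supports equivalent groups at baseline; a nonzero A1 flags improvement in the untreated group that the researcher must explain.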

Problems With Pretest-Posttest Designs

The main problem with this design is that it improves internal validity but sacrifices external validity to do so. There is no way of judging whether the process of pre-testing itself influenced the results, because there is no baseline measurement against groups that were never pre-tested. For example, children given an educational pretest may be inspired to try a little harder in their lessons, and both groups would then outperform children not given a pretest, so it becomes difficult to generalize the results to encompass all children.

The other major problem, which afflicts many sociological and educational research programs, is that it is often impossible, or unethical, to isolate all of the participants completely. If two groups of children attend the same school, it is reasonable to assume that they mix outside of lessons and share ideas, potentially contaminating the results. On the other hand, if the children are drawn from different schools to prevent this, the chance of selection bias arises, because randomization is not possible.

The two-group control group design is an exceptionally useful research method, as long as its limitations are fully understood. For extensive and particularly important research, many researchers use the Solomon four-group method, a design that is more costly but avoids many weaknesses of the simple pretest-posttest designs.


Martyn Shuttleworth (Nov 3, 2009). Pretest-Posttest Designs. Retrieved Sep 11, 2024 from Explorable.com: https://explorable.com/pretest-posttest-designs

The text in this article is licensed under the Creative Commons Attribution 4.0 International (CC BY 4.0) license, so it may be copied, shared, and adapted with appropriate credit and a link back to this page.


8.1 Experimental design: What is it and when should it be used?

Learning Objectives

  • Define experiment
  • Identify the core features of true experimental designs
  • Describe the difference between an experimental group and a control group
  • Identify and describe the various types of true experimental designs

Experiments are an excellent data collection strategy for social workers wishing to observe the effects of a clinical intervention or social welfare program. Understanding what experiments are and how they are conducted is useful for all social scientists, whether they actually plan to use this methodology or simply aim to understand findings from experimental studies. An experiment is a method of data collection designed to test hypotheses under controlled conditions. In social scientific research, the term experiment has a precise meaning and should not be used to describe all research methodologies.


Experiments have a long and important history in social science. Behaviorists such as John Watson, B. F. Skinner, Ivan Pavlov, and Albert Bandura used experimental design to demonstrate the various types of conditioning. Using strictly controlled environments, behaviorists were able to isolate a single stimulus as the cause of measurable differences in behavior or physiological responses. The foundations of social learning theory and behavior modification are found in experimental research projects. Moreover, behaviorist experiments brought psychology and social science away from the abstract world of Freudian analysis and towards empirical inquiry, grounded in real-world observations and objectively-defined variables. Experiments are used at all levels of social work inquiry, including agency-based experiments that test therapeutic interventions and policy experiments that test new programs.

Several kinds of experimental designs exist. In general, designs considered to be true experiments contain three basic key features:

  • random assignment of participants into experimental and control groups
  • a “treatment” (or intervention) provided to the experimental group
  • measurement of the effects of the treatment in a post-test administered to both groups

Some true experiments are more complex.  Their designs can also include a pre-test and can have more than two groups, but these are the minimum requirements for a design to be a true experiment.

Experimental and control groups

In a true experiment, the effect of an intervention is tested by comparing two groups: one that is exposed to the intervention (the experimental group , also known as the treatment group) and another that does not receive the intervention (the control group ). Importantly, participants in a true experiment need to be randomly assigned to either the control or experimental groups. Random assignment uses a random number generator or some other random process to assign people into experimental and control groups. Random assignment is important in experimental research because it helps to ensure that the experimental group and control group are comparable and that any differences between the experimental and control groups are due to random chance. We will address more of the logic behind random assignment in the next section.
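Random assignment itself is mechanically simple. A minimal sketch in Python, with participants stood in for by hypothetical ID numbers:

```python
import random

def randomly_assign(participants, seed=None):
    """Shuffle the participant pool and split it in half, so each
    person has an equal chance of landing in either group."""
    rng = random.Random(seed)  # seeded only to make the example reproducible
    pool = list(participants)
    rng.shuffle(pool)
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental, control)

# Hypothetical sample of 20 participants, represented by IDs 1..20
experimental, control = randomly_assign(range(1, 21), seed=42)
```

Because assignment depends only on the shuffle, any participant characteristic (age, severity, motivation) is expected to balance out across the two groups on average.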

Treatment or intervention

In an experiment, the independent variable is receiving the intervention being tested—for example, a therapeutic technique, prevention program, or access to some service or support. It is less common in social work research, but social science research may also use a stimulus, rather than an intervention, as the independent variable. For example, an electric shock or a reading about death might be used as a stimulus to provoke a response.

In some cases, it may be immoral to withhold treatment completely from a control group within an experiment. If you recruited two groups of people with severe addiction and only provided treatment to one group, the other group would likely suffer. For these cases, researchers use a control group that receives “treatment as usual.” Experimenters must clearly define what treatment as usual means. For example, a standard treatment in substance abuse recovery is attending Alcoholics Anonymous or Narcotics Anonymous meetings. A substance abuse researcher conducting an experiment may use twelve-step programs in their control group and use their experimental intervention in the experimental group. The results would show whether the experimental intervention worked better than normal treatment, which is useful information.

The dependent variable is usually the intended effect the researcher wants the intervention to have. If the researcher is testing a new therapy for individuals with binge eating disorder, their dependent variable may be the number of binge eating episodes a participant reports. The researcher likely expects her intervention to decrease the number of binge eating episodes reported by participants. Thus, she must, at a minimum, measure the number of episodes that occur after the intervention, which is the post-test .  In a classic experimental design, participants are also given a pretest to measure the dependent variable before the experimental treatment begins.

Types of experimental design

Let’s put these concepts in chronological order so we can better understand how an experiment runs from start to finish. Once you’ve collected your sample, you’ll need to randomly assign your participants to the experimental group and control group. In a common type of experimental design, you will then give both groups your pretest, which measures your dependent variable, to see what your participants are like before you start your intervention. Next, you will provide your intervention, or independent variable, to your experimental group, but not to your control group. Many interventions last a few weeks or months to complete, particularly therapeutic treatments. Finally, you will administer your post-test to both groups to observe any changes in your dependent variable. What we’ve just described is known as the classical experimental design and is the simplest type of true experimental design. All of the designs we review in this section are variations on this approach. Figure 8.1 visually represents these steps.

Figure 8.1 Steps in classic experimental design: sampling, then random assignment, then pretest, then intervention, then posttest.

An interesting example of experimental research can be found in Shannon K. McCoy and Brenda Major’s (2003) study of people’s perceptions of prejudice. In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Participants in the experimental group were then asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Clearly, these were not meant to be interventions or treatments to help depression, but were stimuli designed to elicit changes in people’s depression levels. Upon measuring depression scores during the post-test period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their same racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.

In addition to classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, 2010; Campbell & Stanley, 1963).  The posttest-only control group design is almost the same as classic experimental design, except it does not use a pretest. Researchers who use posttest-only designs want to eliminate testing effects , in which participants’ scores on a measure change because they have already been exposed to it. If you took multiple SAT or ACT practice exams before you took the real one you sent to colleges, you’ve taken advantage of testing effects to get a better score. Considering the previous example on racism and depression, participants who are given a pretest about depression before being exposed to the stimulus would likely assume that the intervention is designed to address depression. That knowledge could cause them to answer differently on the post-test than they otherwise would. In theory, as long as the control and experimental groups have been determined randomly and are therefore comparable, no pretest is needed. However, most researchers prefer to use pretests in case randomization did not result in equivalent groups and to help assess change over time within both the experimental and control groups.

Researchers wishing to account for testing effects but also gather pretest data can use a Solomon four-group design. In the Solomon four-group design , the researcher uses four groups. Two groups are treated as they would be in a classic experiment—pretest, experimental group intervention, and post-test. The other two groups do not receive the pretest, though one receives the intervention. All groups are given the post-test. Table 8.1 illustrates the features of each of the four groups in the Solomon four-group design. By having one set of experimental and control groups that complete the pretest (Groups 1 and 2) and another set that does not complete the pretest (Groups 3 and 4), researchers using the Solomon four-group design can account for testing effects in their analysis.

Table 8.1 Solomon four-group design
          Pretest   Intervention   Posttest
Group 1     X            X            X
Group 2     X                         X
Group 3                  X            X
Group 4                               X
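The schedule in Table 8.1 can also be expressed as a small data structure, which makes the symmetry of the design easy to check programmatically. This is just an illustrative encoding, not part of the original text:

```python
# Each group's schedule in the Solomon four-group design.
# Every group receives the posttest; pretest and intervention vary.
solomon = {
    "Group 1": {"pretest": True,  "intervention": True,  "posttest": True},
    "Group 2": {"pretest": True,  "intervention": False, "posttest": True},
    "Group 3": {"pretest": False, "intervention": True,  "posttest": True},
    "Group 4": {"pretest": False, "intervention": False, "posttest": True},
}

# Groups 1 and 2 mirror the classic design; Groups 3 and 4 drop the pretest
pretested = [g for g, s in solomon.items() if s["pretest"]]
treated = [g for g, s in solomon.items() if s["intervention"]]
```

Comparing pretested against unpretested groups is what lets researchers separate testing effects from the intervention effect.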

Solomon four-group designs are challenging to implement in the real world because they are time- and resource-intensive. Researchers must recruit enough participants to create four groups and implement interventions in two of them.

Overall, true experimental designs are sometimes difficult to implement in a real-world practice environment. It may be impossible to withhold treatment from a control group or to randomly assign participants in a study. In these cases, pre-experimental and quasi-experimental designs, which we will discuss in the next section, can be used. However, the differences in rigor from true experimental designs leave their conclusions more open to critique.

Experimental design in macro-level research

You can imagine that social work researchers may be limited in their ability to use random assignment when examining the effects of governmental policy on individuals.  For example, it is unlikely that a researcher could randomly assign some states to implement decriminalization of recreational marijuana and some states not to in order to assess the effects of the policy change.  There are, however, important examples of policy experiments that use random assignment, including the Oregon Medicaid experiment. In the Oregon Medicaid experiment, the wait list for Oregon was so long, state officials conducted a lottery to see who from the wait list would receive Medicaid (Baicker et al., 2013).  Researchers used the lottery as a natural experiment that included random assignment. People selected to be a part of Medicaid were the experimental group and those on the wait list were in the control group. There are some practical complications macro-level experiments, just as with other experiments.  For example, the ethical concern with using people on a wait list as a control group exists in macro-level research just as it does in micro-level research.

Key Takeaways

  • True experimental designs require random assignment.
  • Control groups do not receive an intervention, and experimental groups receive an intervention.
  • The basic components of a classic experiment include a pretest, posttest, control group, and experimental group.
  • Testing effects may cause researchers to use variations on the classic experimental design.
Glossary

  • Classical experimental design- uses random assignment, an experimental and control group, as well as pre- and posttesting
  • Control group- the group in an experiment that does not receive the intervention
  • Experiment- a method of data collection designed to test hypotheses under controlled conditions
  • Experimental group- the group in an experiment that receives the intervention
  • Posttest- a measurement taken after the intervention
  • Posttest-only control group design- a type of experimental design that uses random assignment, and an experimental and control group, but does not use a pretest
  • Pretest- a measurement taken prior to the intervention
  • Random assignment-using a random process to assign people into experimental and control groups
  • Solomon four-group design- uses random assignment, two experimental and two control groups, pretests for half of the groups, and posttests for all
  • Testing effects- when a participant’s scores on a measure change because they have already been exposed to it
  • True experiments- a group of experimental designs that contain independent and dependent variables, pretesting and post testing, and experimental and control groups


Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Experimental Design

First Online: 28 August 2020

Kim Koh

Synonyms: Experiments; Randomized clinical trial; Randomized trial

In quality-of-life and well-being research specifically, and in medical, nursing, social, educational, and psychological research more generally, experimental design can be used to test cause-and-effect relationships between the independent and dependent variables.

Description

Experimental design was pioneered by R. A. Fisher in the fields of agriculture and education (Fisher 1935). In studies that use experimental design, the independent variables are manipulated or controlled by researchers, which enables the testing of the cause-and-effect relationship between the independent and dependent variables. An experimental design can control many threats to internal validity by using random assignment of participants to different treatment/intervention and control/comparison groups. Therefore, it is considered one of the most statistically robust designs in quality-of-life and well-being research, as well as in...


Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago: Rand McNally & Company.


Fisher, R. A. (1935). The design of experiments . Edinburgh: Oliver and Boyd.

Kerlinger, F. N., & Lee, H. B. (2000). Foundations of behavioral research (4th ed.). Belmont: Cengage Learning.

Schneider, B., Carnoy, M., Kilpatrick, J., Schmidt, W. H., & Shavelson, R. J. (2007). Estimating causal effects: Using experimental designs and observational design . Washington, DC: American Educational Research Association.


Koh, K. (2020). Experimental Design. In: Maggino, F. (eds) Encyclopedia of Quality of Life and Well-Being Research. Springer, Cham. https://doi.org/10.1007/978-3-319-69909-7_967-2


The Perils of Ignoring Design Effects in Experimental Studies: Lessons from a Mammography Screening Trial

Beth A. Glenn,1 Roshan Bastani, and Annette E. Maxwell

1 Fielding School of Public Health and Jonsson Comprehensive Cancer Center, University of California, Los Angeles, 650 Charles Young Dr. South, Room A2-125, CHS, Los Angeles, CA 90095-6900

Threats to external validity, including pretest sensitization and the interaction of selection and an intervention, are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in the setting of a randomized trial designed to promote mammography use in a high-risk sample of women.

During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilizing two different designs (pretest-posttest control group design; posttest only control group design).

Results reveal that the intervention produced different outcomes across cohorts, dependent upon the research design used and the characteristics of the sample.

These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection and attending more closely to issues of external validity.

INTRODUCTION

The pretest-posttest control group design (Shadish, Cook, & Campbell, 2002) is one of the most frequently used study designs in behavioral and psychosocial intervention research. Although use of this design controls for the majority of threats to internal validity, several important threats to external validity remain, including the interaction of testing and the intervention (i.e., pretest sensitization effects) and the interaction between participant selection and the intervention. Pretest sensitization occurs when the effect of an intervention, assessed at follow-up, is influenced by or dependent upon the presence of a pretest. Although pretest sensitization has been proposed as a significant problem in research (Kim, 2010; Lana, 2009; Shadish et al., 2002), it is unknown how frequently it occurs. The interaction between selection and the intervention occurs when the intervention effect is specific to the particular characteristics of the study sample, and would not be present if the intervention were to be implemented in a different sub-group or in the population as a whole. These threats to external validity are frequently overlooked despite their potential to significantly influence study outcomes (Kim, 2010) and the effectiveness of policy or practice applications of the findings.

There has been a growing recognition of the need to enhance the external validity of intervention research across multiple disciplines, including by increasing the representativeness of participant samples (Glasgow et al., 2006; Green & Glasgow, 2006). However, the best methods of doing so are not clear. Suggestions have included less reliance on randomized trials with strict research protocols and greater use of alternative designs, with adoption of sophisticated statistical analyses to control for confounds (Bonell et al., 2011; Cousens et al., 2011; Glasgow, Lichtenstein, & Marcus, 2003; Green, 2001). Although implementing such suggestions may reduce the effect of selection on the outcome, the resultant loss of internal validity may be undesirable. The effect of pretest sensitization can be reduced or controlled by selecting a post-test only or Solomon four-group design (e.g., intervention and control conditions, with and without a pretest) (Shadish et al., 2002; Solomon, 1949). In addition to providing a means of controlling pretest sensitization, the Solomon design allows one to assess the presence and magnitude of pretest sensitization and the interaction of sensitization and the intervention. Although scientifically advantageous, these designs are infrequently utilized. Researchers are often hesitant to use post-test only designs because they will not be able to confirm the equality of randomized groups at baseline or, conversely, detect a failure of randomization. Solomon four-group designs are often considered infeasible or prohibitively expensive in the context of an intervention trial, given the resultant increase in the sample size required compared to a two-group design.
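The Solomon logic for detecting pretest sensitization reduces to a 2×2 comparison of cell means: if the intervention effect differs between pretested and unpretested participants, the pretest interacted with the intervention. A sketch with invented posttest means (the numbers below are illustrative, not study data):

```python
# Hypothetical posttest means for the four Solomon cells
means = {
    ("pretest", "intervention"):    70.0,  # pretested, treated
    ("pretest", "control"):         55.0,  # pretested, untreated
    ("no_pretest", "intervention"): 63.0,  # unpretested, treated
    ("no_pretest", "control"):      54.0,  # unpretested, untreated
}

# Intervention effect estimated separately with and without a pretest
effect_with_pretest = (means[("pretest", "intervention")]
                       - means[("pretest", "control")])
effect_without_pretest = (means[("no_pretest", "intervention")]
                          - means[("no_pretest", "control")])

# A nonzero difference suggests the pretest sensitized participants
# to the intervention (a testing-by-treatment interaction)
sensitization_interaction = effect_with_pretest - effect_without_pretest
```

In a real analysis this interaction would be tested with a factorial ANOVA rather than read off raw means, but the quantity being tested is the same.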

In an attempt to assess the presence of external validity threats in controlled trials, we conducted retrospective analyses utilizing data from a randomized trial designed to increase mammography use in women at increased risk for breast cancer due to a family history of the disease (Bastani, Maxwell, & Bradford, 1996; Bastani, Maxwell, Bradford, Das, & Yan, 1999). Recruitment and intervention implementation took place in three cohorts, utilizing two different designs. The same intervention was delivered both in the context of a pretest-posttest control group design and a posttest only design, providing an opportunity to observe the potential interaction between our intervention and the pretest. Further, the intervention was delivered across three cohorts of participants that varied on a number of demographic characteristics, allowing for examination of the effect of the intervention across different participant samples.

Overview of research designs

Three cohorts of first-degree relatives of breast cancer survivors were successively recruited into randomized experiments to assess the effectiveness of a mailed, personalized risk notification intervention in increasing screening mammography rates. Figure 1 provides an overview of the research designs utilized in each of the three cohorts. Cohort 1 involved a pretest-posttest control group design in a sample of predominantly white high-risk women. Cohort 2 was also predominantly white, but the research design was a post-test only control group design. Cohorts 1 and 2, in combination, simulate a Solomon four-group design (Shadish et al., 2002; Solomon, 1949), with the factors being presence or absence of a pretest and presence or absence of the intervention. The two experiments do not qualify as a pure Solomon four-group design because they were not conducted simultaneously in time, but rather were separated by a period of one year. Cohort 3 was nearly exclusively non-white and utilized a pretest-posttest control group design.

[Figure 1. Overview of the research designs used in each of the three cohorts (image not reproduced here).]

Notes: Baseline surveys (Cohorts 1 and 3) assessed knowledge, attitudes, beliefs, and behavior; follow-up surveys across all cohorts assessed content similar to the baseline surveys. R = random assignment; X = intervention; O = survey.

Recruitment of participants and data collection

Under the Statewide Cancer Reporting Act of 1985, all newly diagnosed cancer cases in California are reported to the California Cancer Registry (CCR). For all three cohorts, contact information for women diagnosed with breast cancer was received from the CCR. For Cohorts 1 and 2, the CCR identified random samples of female breast cancer survivors diagnosed in 1988 (N=2500) and 1989 (N=2500). For Cohort 3, we obtained contact information for all Latina (N=2334), African-American (N=1541), and Asian (N=1400) survivors diagnosed in California in 1989 and 1990. Contact information for these women was obtained from the registry in late 1990, 1991, and 1992, respectively. Initially, the physician of record was sent a letter to inquire about any contraindication to contacting the individual (e.g., death or incapacitation). Breast cancer survivors whose physicians provided information that would preclude contact were excluded from the sample. Next, survivors were contacted by mail to inform them of the study and solicit information on their female first-degree relatives aged 30 years or older. Eligible relatives identified in this step were then contacted for recruitment into the study, as described below.

Cohort 1 (white, with pretest) and Cohort 3 (non-white, with pretest)

Relatives in Cohorts 1 (white, with pretest) and 3 (non-white, with pretest) were sent an informational letter regarding the study and told to expect a telephone call in the next few weeks. A return form was included with the letter to allow participants to indicate good times for the telephone interview and their language preference (English or Spanish). Two to three weeks after the mailed notification, eligible relatives were contacted by telephone to recruit them into the study and to obtain baseline information on eligibility, risk factors, screening behavior, knowledge, attitudes, beliefs, and other psychosocial variables. Eligibility criteria included being the mother, sister, or daughter of the index case, being 30 years of age or older, residing in the United States or Canada, and having no personal history of breast cancer. Following the baseline survey, participants were randomized into an intervention or control group. Intervention participants received a mailed intervention consisting of a personalized, tailored risk assessment as well as a brochure and bookmark targeting high-risk women, which included messages on the importance of obtaining regular screening mammography. Approximately one year after the baseline survey, participants were re-contacted and asked to complete a posttest survey to assess screening behavior, knowledge, attitudes, beliefs, and other psychosocial variables. Participants in Cohort 1 were randomly assigned to complete the follow-up survey by mail or by telephone. All participants in Cohort 3 were invited to complete the follow-up survey by telephone.

Cohort 2 (white, no pretest)

Relatives in Cohort 2 (white, no pretest) were sent an informational letter regarding the study, accompanied by a brief risk assessment form to be returned by mail. The risk assessment form included only the items needed to assess the breast cancer risk factors used for tailoring the intervention. Unlike in Cohorts 1 and 3, knowledge, attitudes, beliefs, and screening behavior were not assessed at baseline. The protocol used to deliver the intervention (timing, content, etc.) to this cohort was identical to that used in Cohorts 1 and 3. Twelve-month follow-up data were obtained via a mailed survey identical to that used in Cohorts 1 and 3. Additional details related to participant recruitment and the intervention are reported elsewhere ( Bastani et al., 1996 ; Bastani et al., 1999 ). No differences in mammography rates at follow-up were observed between participants providing posttest data by mail versus by telephone ( Bastani et al., 1999 ). The research was approved by the institutional review board of the University of California, Los Angeles.

Sample Characteristics by Cohort

Table 1 displays response rates at the various stages of participant accrual, as well as the process used to collect follow-up data, across the three cohorts. Cohorts 1 and 2 (mostly white) are very similar with respect to response rates at each step. Cohort 3 (non-white), on the other hand, is dramatically different. Over 50% of survivors in Cohorts 1 and 2 responded to our letter requesting contact information for their first-degree relatives. In contrast, the response rate among non-white survivors (Cohort 3) was only around 25%. However, response rates among eligible relatives referred to the study by survivors were higher among non-white participants (85%, Cohort 3) than among white participants (72%, Cohorts 1 and 2 combined). Retention rates at the 12-month follow-up were high and quite similar across all three cohorts.

Table 1. Ethnic Differences in Accrual of First-Degree Female Relatives of Breast Cancer Survivors Identified through the California Cancer Registry (CCR)

Table 2 provides a comparison of the demographic characteristics of the three cohorts. Cohorts 1 and 2 (white) were very similar to one another on all demographic variables. Both cohorts were over 90% white, with relatively high levels of income and education. Also, control group posttest rates were similar in Cohorts 1 and 2 (see Table 3 ), further supporting the a priori comparability of these two groups. In contrast, Cohort 3 was mostly non-white (42% Latino, 31% African American, 27% Asian). Income and education levels were somewhat lower in Cohort 3 than in the other two cohorts, as was the proportion of participants who were married or living as married. Insurance coverage was high across all three cohorts.

Table 2. Characteristics of Respondents (%)

                                  Cohort 1    Cohort 2    Cohort 3
                                  (N=753)     (N=736)     (N=993)
Age (years)
  30-39                           27          27          33
  40-49                           26          28          27
  50-64                           25          22          26
  ≥ 65                            22          23          15
Race/ethnicity
  White                           90          93          1
  Latino                          3           1           42
  African-American                2           3           31
  Asian                           --          --          24
  Other                           4           3           2
Education
  Less than high school           4           7           13
  High school diploma             27          30          27
  1-3 years college               35          31          35
  College degree or higher        34          32          25
Marital status
  Married or living as married    71          70          61
Household income ($)
  < 20,000                        14          16          23
  20,000-29,999                   19          15          15
  30,000-39,999                   17          15          19
  40,000-49,999                   15          14          12
  ≥ 50,000                        36          41          30
Health insurance
  Yes                             93          92          88

Table 3. Screening Rates by Cohort

                                      Baseline %   Follow-Up %   Change %   P-Value
Cohort 1 (mostly white, pretest)
  Intervention (N=382)                55.0         65.2          10.2
  Control (N=371)                     54.9         57.7          2.5        .05
Cohort 2 (mostly white, no pretest)
  Intervention (N=370)                --           59.0          --
  Control (N=366)                     --           58.0          --         .84
Cohort 3 (non-white, pretest)
  Intervention (N=493)                45.0         50.1          5.1
  Control (N=502)                     47.2         47.0          -0.2       .001

Analysis of the Intervention Effect by Cohort

The main outcome of interest in all three experiments/cohorts was whether or not participants obtained a screening mammogram in the period between study enrollment and the 12-month follow-up survey. Table 3 displays screening rates at baseline and follow-up by intervention versus control condition for all three cohorts. For Cohorts 1 and 3, no differences were observed in baseline mammography rates between the intervention and control groups. To assess intervention effectiveness, change scores calculated separately for the intervention and control groups were compared directly, within each cohort, using the Mann-Whitney U test. First, two indicator variables were created for each woman to note whether she had a mammogram in the 12 months preceding baseline and in the 12 months between baseline and follow-up. For each variable, a “0” indicated no mammogram and a “1” indicated receipt of a mammogram. A change score was then created for each woman by subtracting the baseline indicator value from the follow-up value. Significantly greater increases in mammography rates were observed in the intervention groups than in the control groups for both Cohorts 1 and 3, indicating a significant intervention effect. Logistic regression analyses controlling for baseline mammography rates and covariates yielded identical results.
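The change-score comparison described above can be sketched in a few lines. This is an illustration of the technique, not the authors' code, and the binary indicators below are simulated rather than real study data:

```python
# Sketch: per-woman change scores from binary mammography indicators,
# compared between groups with a Mann-Whitney U test. Data are synthetic.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)

def change_scores(pre, post):
    """Change score per woman: follow-up indicator minus baseline indicator
    (possible values: -1, 0, or 1)."""
    return np.asarray(post) - np.asarray(pre)

# Hypothetical indicators (1 = mammogram in the 12-month window); the group
# sizes and approximate rates echo Cohort 1, but the draws are made up.
pre_i  = rng.binomial(1, 0.55, 382)   # intervention, baseline
post_i = rng.binomial(1, 0.65, 382)   # intervention, follow-up
pre_c  = rng.binomial(1, 0.55, 371)   # control, baseline
post_c = rng.binomial(1, 0.58, 371)   # control, follow-up

u, p = mannwhitneyu(change_scores(pre_i, post_i),
                    change_scores(pre_c, post_c),
                    alternative="two-sided")
print(f"U = {u:.0f}, p = {p:.3f}")
```

With only three possible change values (-1, 0, 1) the Mann-Whitney test involves heavy ties, which scipy handles with a tie-corrected normal approximation.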

In Cohort 2, direct comparison of posttest rates between the intervention and control groups yielded non-significant results in chi-square analyses. To complete the picture, posttest comparisons were also conducted in Cohort 1. A significant intervention versus control group difference at posttest was observed for Cohort 1 (p < .05).
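The Cohort 2 posttest-only comparison amounts to a chi-square test on a 2 × 2 table of screened versus not screened. A minimal sketch follows; the cell counts are reconstructed by us from the reported rates (59.0% of N=370 versus 58.0% of N=366) and rounded, so they are approximate:

```python
# Sketch of a posttest-only chi-square comparison on a 2x2 table.
# Counts are approximate reconstructions from the reported percentages.
from scipy.stats import chi2_contingency

#        screened  not screened
table = [[218, 152],   # intervention (N=370)
         [212, 154]]   # control (N=366)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2f}")
```

By default `chi2_contingency` applies the Yates continuity correction for 2 × 2 tables; either way, counts this similar yield a clearly non-significant result, consistent with the reported p = .84.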

The pattern of results revealed significant intervention effects in Cohorts 1 and 3, but not in Cohort 2, suggesting that the intervention was effective only in the presence of a pretest. This illustrates the classic external validity threat of “the interaction of testing and X” described by Campbell and Stanley in 1966, in which the pretest may prompt participants to be more receptive to the intervention. Also, a comparison of the control group rates for Cohorts 1 and 3 suggests that the pretest alone resulted in a slight increase in screening among the predominantly white women in Cohort 1 but not among the minority women in Cohort 3. Furthermore, the intervention effect appeared somewhat greater among white than among ethnic minority women, providing support for an “interaction of selection and the intervention” in which the demographic characteristics of the participants may have influenced the study outcome. Additional evidence that participant ethnicity was an important factor influencing the effectiveness of the intervention is provided in Table 4 , which displays outcomes separately for the three ethnic minority groups within Cohort 3. No intervention effect appears to be present among African Americans or Latinos (less than a 3-percentage-point difference in mammography rates between intervention and control groups). However, the effect of the intervention among Asians is substantially larger than in the other ethnic groups (a 9-percentage-point advantage for the intervention versus the control condition), although it is not statistically significant due to the small size of this group (n = 251). Therefore, in the presence of a pretest, there appears to be an unambiguous intervention effect in the white cohort. In the non-white cohort, the effect is smaller and less clearly visible; that is, the results depended on the design utilized and the particular ethnic group examined.
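The pretest × intervention interaction suggested by Cohorts 1 and 2 can be quantified as a simple difference of differences on the reported follow-up rates. This is our illustration of the contrast, computed from the Table 3 percentages, not an analysis the authors report:

```python
# Difference-in-differences sketch for the pretest x intervention
# interaction, using the reported follow-up screening rates (%).
rate = {
    ("pretest", "intervention"):    65.2,  # Cohort 1, I
    ("pretest", "control"):         57.7,  # Cohort 1, C
    ("no pretest", "intervention"): 59.0,  # Cohort 2, I
    ("no pretest", "control"):      58.0,  # Cohort 2, C
}

# Intervention effect within each pretest condition, then their difference:
effect_with_pretest = rate[("pretest", "intervention")] - rate[("pretest", "control")]
effect_without_pretest = rate[("no pretest", "intervention")] - rate[("no pretest", "control")]
interaction = effect_with_pretest - effect_without_pretest

print(f"effect with pretest:    {effect_with_pretest:.1f} points")
print(f"effect without pretest: {effect_without_pretest:.1f} points")
print(f"interaction contrast:   {interaction:.1f} points")
```

The 7.5-point effect with a pretest versus the 1.0-point effect without one is exactly the asymmetry that the interaction-of-testing threat describes.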

Table 4. Screening Rates by Ethnicity for Cohort 3

                          Baseline %   Follow-Up %   Change %   P-Value
African American
  Intervention (N=151)    44.4         50.0          5.6
  Control (N=156)         46.7         50.0          3.3        0.50
Latino
  Intervention (N=205)    46.5         46.1          -0.4
  Control (N=219)         48.5         50.0          1.5        0.73
Asian
  Intervention (N=141)    45.7         52.8          7.1
  Control (N=128)         46.4         44.6          -1.8       0.10

The present study demonstrates the effects of two often overlooked threats to external validity in randomized trials: pretest sensitization and participant selection. Examination of the intervention effect in any of the three cohorts in isolation would have led to inaccurate conclusions. The randomized pretest-posttest control group design employed in Cohorts 1 and 3 would lead us to conclude that our intervention was effective among white as well as minority participants. We would likely feel justified in encouraging wide adoption of this “evidence based” intervention in community practice. However, examination of the results of our intervention in all three cohorts collectively provides a much different picture. We discover in Cohort 2 that when the intervention was delivered to a sample of white women at high risk for breast cancer (almost identical to the Cohort 1 sample) without administering a pretest, screening rates did not increase, suggesting that our intervention was effective only when implemented following a pretest. This is a classic illustration of the interaction of the pretest and the intervention. Given that most intervention trials utilize a two-group pretest-posttest randomized design, the frequency at which pretest sensitization occurs is largely unknown. It is conceivable that researchers often erroneously assume that an intervention is effective when, in fact, the observed improvements in outcomes would occur only in the context of a pretest. This may, in part, explain the failure of many interventions found to be effective in randomized controlled trials to succeed when implemented in “real life” settings without a pretest.

Our results also provide support for the presence of an interaction of selection and the intervention, such that the intervention was not uniformly effective across all ethnic groups. The effect of our intervention was more pronounced among white than among ethnic minority women, suggesting that the intervention effect was influenced by characteristics of the sample. On taking a closer look at the effect of the intervention among ethnic minority women, we found that the effect was present only for Asian women. These results illustrate the importance of stratified and exploratory analyses to assess not only whether an intervention is effective but for whom it has an effect.

Stratified sampling (e.g., oversampling ethnic minority populations) is only one method of enhancing the ability to examine “for whom did an intervention work.” Often stratification is not feasible, and one may therefore want to consider statistical methods to explore these issues (e.g., subgroup, moderator, or responder analyses). One advantage of stratification is that one decides a priori, based on theoretical or data-based assumptions, what factors are anticipated to be related to the effect of the intervention, and an attempt is made to ensure sufficient sample sizes within each subgroup for analysis. Statistical methods of examining “for whom” an intervention is effective may be an acceptable alternative. However, statistical methods have their limits, particularly when the resultant study sample is very homogeneous or heterogeneous, when it is small, or when analyses are based primarily on post-hoc observations ( Pocock, Assmann, Enos, & Kasten, 2002 ; Senn & Julious, 2009 ; Wang, Lagakos, Ware, Hunter, & Drazen, 2007 ). Statistical methods of control also rely heavily on the quality of the measures implemented and on the assumption that all of the important concepts have been assessed. Meta-analysis is another method of reducing the biases that may occur when interpreting the results of individual studies ( Egger & Smith, 1997 ). However, the strengths of meta-analysis and data-synthesis techniques are diminished by greater heterogeneity in the existing literature ( Higgins & Thompson, 2002 ; Howard, Maxwell, & Fleming, 2000 ). In addition, meta-analysis represents only a long-term solution, since such analyses can be undertaken only after multiple studies using similar methods have been published in a particular area of research.
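One concrete reason subgroup analyses mislead is low power. A rough normal-approximation power calculation for a two-proportion test, using the Asian-subgroup sample sizes and taking the Table 4 follow-up rates as the hypothetical true rates, illustrates the problem. This is our back-of-the-envelope sketch, not an analysis from the study:

```python
# Approximate power of a two-sided two-proportion z-test at alpha = .05,
# ignoring the negligible probability mass in the opposite tail.
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def two_proportion_power(p1, n1, p2, n2, z_alpha=1.959964):
    """Normal-approximation power for detecting p1 != p2."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z_effect = abs(p1 - p2) / se
    return normal_cdf(z_effect - z_alpha)

# Asian subgroup of Cohort 3: 52.8% vs. 44.6% at follow-up, N = 141 vs. 128.
power = two_proportion_power(0.528, 141, 0.446, 128)
print(f"approximate power: {power:.2f}")
```

With power well below the conventional 80%, a non-significant subgroup result like the Asian comparison in Table 4 says little about whether a true effect exists, which is why a priori stratification with adequate per-stratum sample sizes is preferable when feasible.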

Threats to external validity, including pretest sensitization and the interaction of testing and the intervention, have historically not been given due attention in the field of intervention research ( Kim, 2010 ; Moore & Moore, 2011 ). This realization has resulted in a relatively recent move towards effectiveness studies and away from strictly controlled efficacy trials and has fed the rapidly developing fields of dissemination, implementation, and translational research ( Glasgow et al., 2006 ; Glasgow et al., 2003 ; Green & Glasgow, 2006 ). Although increasingly acknowledged as important, only a handful of recent studies in the fields of health psychology, behavioral medicine, and public health have directly examined the impact of threats to external validity on research outcomes ( Donovan, Wood, Frayjo, Black, & Surette, 2012 ; Kim, 2010 ; Rubel et al., 2011 ; Spence, Burgess, Rodgers, & Murray, 2009 ). Thus, this study provides a valuable contribution to the literature.

The present study was not conducted as a true Solomon four-group design; therefore, our ability to determine the extent of the effect of pretest sensitization is diminished. The contributions of differences in the treatment of the three cohorts, and of the passage of time, to the pattern of results obtained cannot be directly assessed. Despite these limitations, our study serves as a powerful illustration of the potential effect of these two threats to external validity.

Failure to acknowledge the influence of pretest sensitization or selection may lead researchers to make misguided and inaccurate conclusions about an intervention’s effectiveness. In the present study, these factors led to false positive results. Given the goal of population-wide dissemination of evidence based interventions, it becomes important to closely examine our criteria for what is “evidence based”. Researchers should increase their attention to issues of external validity when making decisions regarding intervention research design.

  • Bastani R, Maxwell A, Bradford C. A tumor registry as a tool for recruiting a multi-ethnic sample of women at high risk for breast cancer. Journal of Registry Management. 1996; 23 :74–78. [ Google Scholar ]
  • Bastani R, Maxwell AE, Bradford C, Das IP, Yan KX. Tailored risk notification for women with a family history of breast cancer. Prev Med. 1999; 29 (5):355–364. [ PubMed ] [ Google Scholar ]
  • Bonell CP, Hargreaves J, Cousens S, Ross D, Hayes R, Petticrew M, et al. Alternatives to randomisation in the evaluation of public health interventions: design challenges and solutions. J Epidemiol Community Health. 2011; 65 (7):582–587. [ PubMed ] [ Google Scholar ]
  • Cousens S, Hargreaves J, Bonell C, Armstrong B, Thomas J, Kirkwood BR, et al. Alternatives to randomisation in the evaluation of public-health interventions: statistical analysis and causal inference. J Epidemiol Community Health. 2011; 65 (7):576–581. [ PubMed ] [ Google Scholar ]
  • Donovan E, Wood M, Frayjo K, Black RA, Surette DA. A randomized, controlled trial to test the efficacy of an online, parent-based intervention for reducing the risks associated with college-student alcohol use. Addict Behav. 2012; 37 (1):25–35. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Egger M, Smith GD. Meta-Analysis. Potentials and promise. BMJ. 1997; 315 (7119):1371–1374. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Glasgow RE, Green LW, Klesges LM, Abrams DB, Fisher EB, Goldstein MG, et al. External validity: we need to do more. Ann Behav Med. 2006; 31 (2):105–108. [ PubMed ] [ Google Scholar ]
  • Glasgow RE, Lichtenstein E, Marcus AC. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. Am J Public Health. 2003; 93 (8):1261–1267. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Green LW. From research to “best practices” in other settings and populations. Am J Health Behav. 2001; 25 (3):165–178. [ PubMed ] [ Google Scholar ]
  • Green LW, Glasgow RE. Evaluating the relevance, generalization, and applicability of research: issues in external validation and translation methodology. Eval Health Prof. 2006; 29 (1):126–153. [ PubMed ] [ Google Scholar ]
  • Higgins JP, Thompson SG. Quantifying heterogeneity in a meta-analysis. Stat Med. 2002; 21 (11):1539–1558. [ PubMed ] [ Google Scholar ]
  • Howard GS, Maxwell SE, Fleming KJ. The proof of the pudding: an illustration of the relative strengths of null hypothesis, meta-analysis, and Bayesian analysis. Psychol Methods. 2000; 5 (3):315–332. [ PubMed ] [ Google Scholar ]
  • Kim ES, Willson VL. Evaluating pretest effects in pre-post studies. Educational and Psychological Measurement. 2010; 70 (5):744–759. [ Google Scholar ]
  • Lana RE. Pretest sensitization. In: Rosenthal R, Rosnow RL, editors. Artifacts in behavioral research: Robert Rosenthal and Ralph L. Rosnow’s classic books. New York, NY: Oxford University Press; 2009. [ Google Scholar ]
  • Moore L, Moore GF. Public health evaluation: which designs work, for whom and under what circumstances? J Epidemiol Community Health. 2011; 65 (7):596–597. [ PubMed ] [ Google Scholar ]
  • Pocock SJ, Assmann SE, Enos LE, Kasten LE. Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems. Stat Med. 2002; 21 (19):2917–2930. [ PubMed ] [ Google Scholar ]
  • Rubel SK, Miller JW, Stephens RL, Xu Y, Scholl LE, Holden EW, et al. Testing the effects of a decision aid for prostate cancer screening. J Health Commun. 2011; 15 (3):307–321. [ PubMed ] [ Google Scholar ]
  • Senn S, Julious S. Measurement in clinical trials: a neglected issue for statisticians? Stat Med. 2009; 28 (26):3189–3209. [ PubMed ] [ Google Scholar ]
  • Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin; 2002. [ Google Scholar ]
  • Solomon RL. An extension of control group design. Psychol Bull. 1949; 46 (2):137–150. [ PubMed ] [ Google Scholar ]
  • Spence JC, Burgess J, Rodgers W, Murray T. Effect of pretesting on intentions and behaviour: a pedometer and walking intervention. Psychol Health. 2009; 24 (7):777–789. [ PubMed ] [ Google Scholar ]
  • Wang R, Lagakos SW, Ware JH, Hunter DJ, Drazen JM. Statistics in medicine--reporting of subgroup analyses in clinical trials. N Engl J Med. 2007; 357 (21):2189–2194. [ PubMed ] [ Google Scholar ]



