Table 3. Screening Rates by Cohort

| Cohort | Group | Baseline % | Follow-Up % | Change % | P-Value |
|---|---|---|---|---|---|
| Cohort 1 (mostly white, pretest) | I (N=382) | 55.0 | 65.2 | 10.2 | |
| | C (N=371) | 54.9 | 57.7 | 2.5 | .05 |
| Cohort 2 (mostly white, no pretest) | I (N=370) | — | 59.0 | — | |
| | C (N=366) | — | 58.0 | — | .84 |
| Cohort 3 (non-white, pretest) | I (N=493) | 45.0 | 50.1 | 5.1 | |
| | C (N=502) | 47.2 | 47.0 | -0.2 | .001 |
The main outcome of interest in all three experiments/cohorts was whether participants obtained a screening mammogram between study enrollment and the 12-month follow-up survey. Table 3 displays screening rates at baseline and follow-up by intervention versus control condition for all three cohorts. For Cohorts 1 and 3, no differences were observed in baseline mammography rates between the intervention and control groups. To assess intervention effectiveness, change scores calculated separately for the intervention and control groups were directly compared, within each cohort, using the Mann-Whitney U test. First, two indicator variables were created for each woman to note whether she had a mammogram in the 12 months preceding baseline and in the 12 months between baseline and follow-up. For each variable, a “0” indicated no mammogram and a “1” indicated receipt of a mammogram. A change score was then created for each woman by subtracting the baseline indicator from the follow-up indicator. Significantly greater increases in mammography rates were observed in the intervention groups than in the control groups for both Cohorts 1 and 3, indicating a significant intervention effect. Logistic regression analyses, controlling for baseline mammography rates and covariates, yielded identical results.
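The change-score procedure described above can be sketched in a few lines of Python. This is a minimal illustration with invented toy data, not the study's actual records, and the helper names (`change_scores`, `mann_whitney_u`) are our own:

```python
# Each woman has two 0/1 indicators: mammogram in the 12 months before
# baseline, and mammogram between baseline and follow-up. The change score
# is follow-up minus baseline (-1, 0, or +1), and the distributions of
# change scores in the intervention and control groups are compared with a
# Mann-Whitney U test (computed here by brute force over all pairs).

def change_scores(baseline, followup):
    """Follow-up indicator minus baseline indicator for each participant."""
    return [f - b for b, f in zip(baseline, followup)]

def mann_whitney_u(x, y):
    """Brute-force Mann-Whitney U statistic for sample x versus sample y.
    Ties contribute 0.5, as in the standard definition."""
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Toy example: 6 intervention and 6 control participants (made up).
i_change = change_scores([0, 0, 1, 0, 1, 0], [1, 1, 1, 0, 1, 1])  # mostly +1
c_change = change_scores([0, 1, 0, 0, 1, 0], [0, 1, 0, 1, 1, 0])  # mostly 0

u = mann_whitney_u(i_change, c_change)
print(u)  # U statistic; refer it to the null distribution for a p-value
```

In practice one would use a library routine (e.g., `scipy.stats.mannwhitneyu`) that also supplies the p-value; the brute-force version above is shown only to make the definition concrete.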
In Cohort 2, direct comparison of post-test rates between the intervention and control groups yielded non-significant results in chi-square analyses. To complete the picture, a post-test comparison was also conducted in Cohort 1; a significant intervention versus control group difference at post-test was observed for Cohort 1 (p < .05).
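The post-test comparison described above amounts to a Pearson chi-square test on a 2×2 table of condition (intervention vs. control) by screening status. A minimal sketch follows, using hypothetical counts chosen only to roughly match the Cohort 2 rates reported in Table 3 (about 59% of 370 and 58% of 366); the actual cell counts were not reported:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (df = 1, no continuity correction)
    for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts roughly matching the reported Cohort 2 rates:
# intervention 218/370 screened (~59%), control 212/366 screened (~58%).
chi2 = chi_square_2x2(218, 152, 212, 154)

# For df = 1, the upper-tail p-value of the chi-square distribution
# equals erfc(sqrt(chi2 / 2)).
p = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 3), round(p, 2))  # small statistic, clearly non-significant
```

With these illustrative counts the statistic is far below any conventional significance threshold, consistent with the non-significant result reported for Cohort 2.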
The pattern of results obtained revealed significant intervention effects in Cohorts 1 and 3, but not in Cohort 2, suggesting that the intervention was effective only in the presence of a pretest. This illustrates the classic external validity threat of “the interaction of testing and X” described by Campbell and Stanley in 1966, in which the pretest may prompt participants to be more receptive to the intervention. Also, a comparison of the control group rates for Cohorts 1 and 3 suggests that the pretest alone resulted in a slight increase in screening among the predominantly white women in Cohort 1 but not among the minority women in Cohort 3. Furthermore, the intervention effect appeared somewhat greater among white compared to ethnic minority women, providing support for an “interaction of selection and the intervention,” in which the demographic characteristics of the participants may have influenced the study outcome. Additional evidence that participant ethnicity was an important factor influencing the effectiveness of the intervention is provided in Table 4, which displays outcomes separately for the three ethnic minority groups within Cohort 3. No intervention effect appears to be present among African Americans or Latinos (less than a 3-percentage-point difference in mammography rates between intervention and control groups). However, the effect of the intervention among Asians is substantially larger than among the other ethnic groups (a 9-percentage-point advantage for the intervention versus the control condition), although this effect is not statistically significant due to the small size of the group (n = 251). Therefore, in the presence of a pretest, there appears to be an unambiguous intervention effect in the white cohort; in the non-white cohort, the effect is smaller and less clearly visible. That is, the results depended upon the design utilized and the particular ethnic group examined.
Table 4. Screening Rates by Ethnicity for Cohort 3

| Ethnic Group | Group | Baseline % | Follow-Up % | Change % | P-Value |
|---|---|---|---|---|---|
| African American | I (N=151) | 44.4 | 50.0 | 5.6 | |
| | C (N=156) | 46.7 | 50.0 | 3.3 | .50 |
| Latino | I (N=205) | 46.5 | 46.1 | -0.4 | |
| | C (N=219) | 48.5 | 50.0 | 1.5 | .73 |
| Asian | I (N=141) | 45.7 | 52.8 | 7.1 | |
| | C (N=128) | 46.4 | 44.6 | -1.8 | .10 |
The present study demonstrates the effects of two often overlooked threats to external validity in randomized trials: pretest sensitization and participant selection. Examination of the intervention effect in any of the three cohorts in isolation would have led to inaccurate conclusions. The randomized pretest-posttest control group design employed in Cohorts 1 and 3 would lead us to conclude that our intervention was effective among white as well as minority participants. We would likely feel justified in encouraging wide adoption of this “evidence based” intervention in community practice. However, examination of the results of our intervention in all three cohorts collectively provides a much different picture. We discover in Cohort 2 that when the intervention was delivered to a sample of white women at high risk for breast cancer (almost identical to the Cohort 1 sample) without administering a pretest, screening rates did not increase, suggesting that our intervention was effective only when implemented following a pretest. This is a classic illustration of the interaction of the pretest and the intervention. Given that most intervention trials utilize a two-group pretest-posttest randomized design, the frequency with which pretest sensitization occurs is largely unknown. It is conceivable that researchers often erroneously conclude that an intervention is effective when, in fact, the observed outcome improvements would occur only in the context of a pretest. This may, in part, explain the failure of many interventions found to be effective in randomized controlled trials to be successful when implemented in “real life” settings without a pretest.
Our results also provide support for the presence of an interaction of selection and the intervention, such that the intervention was not uniformly effective across all ethnic groups. The effect of our intervention was more pronounced among white than among ethnic minority women, suggesting that the intervention effect was influenced by characteristics of the sample. Upon taking a closer look at the effect of the intervention among ethnic minority women, we found that the effect was present only for Asian women. These results illustrate the importance of stratified and exploratory analyses to assess not only whether an intervention is effective but also for whom it has an effect.
Stratified sampling (e.g., oversampling ethnic minority populations) is only one method of enhancing the ability to examine “for whom did an intervention work.” Often stratification is not feasible, and therefore one may want to consider statistical methods to explore these issues (e.g., subgroup, moderator, or responder analyses). One advantage of utilizing stratification is that one decides a priori, based on theoretical or data-based assumptions, which factors are anticipated to be related to the effect of the intervention, and an attempt is made to ensure sufficient sample sizes within each subgroup for analyses. Statistical methods of examining “for whom” an intervention is effective may be an acceptable alternative. However, statistical methods have their limits, particularly if the resultant study sample is very homogeneous or heterogeneous, small in size, or when analyses are based primarily on post-hoc observations (Pocock, Assmann, Enos, & Kasten, 2002; Senn & Julious, 2009; Wang, Lagakos, Ware, Hunter, & Drazen, 2007). Statistical methods of control also rely heavily on the quality of the measures implemented and the assumption that all of the important concepts have been assessed. Meta-analysis is another method of reducing the biases that may occur when interpreting results of individual studies (Egger & Smith, 1997). However, the strengths of meta-analysis and data-synthesis techniques are diminished with greater heterogeneity of the existing literature (Higgins & Thompson, 2002; Howard, Maxwell, & Fleming, 2000). In addition, meta-analysis represents only a long-term solution, since such analyses can be utilized only after multiple studies using similar methods have been published within a particular area of research.
Threats to external validity, including the interaction of testing and the intervention (i.e., pretest sensitization) and the interaction of selection and the intervention, have historically not been given due attention in the field of intervention research (Kim, 2010; Moore & Moore, 2011). This realization has resulted in a relatively recent move towards effectiveness studies and away from strictly controlled efficacy trials, and has fed the rapidly developing fields of dissemination, implementation, and translational research (Glasgow et al., 2006; Glasgow et al., 2003; Green & Glasgow, 2006). Although increasingly acknowledged as important, only a handful of recent studies in the fields of health psychology, behavioral medicine, and public health have directly examined the impact of threats to external validity on research outcomes (Donovan, Wood, Frayjo, Black, & Surette, 2012; Kim, 2010; Rubel et al., 2011; Spence, Burgess, Rodgers, & Murray, 2009). Thus, this study provides a valuable contribution to the literature.
The present study was not conducted as a true Solomon four-group design; therefore, our ability to determine the extent of the effect of pretest sensitization is diminished. The contributions of differences in the treatment of the three cohorts, and of the effect of time, to the pattern of results obtained cannot be directly assessed. Despite these limitations, our study serves as a powerful illustration of the potential effect of these two threats to external validity.
Failure to acknowledge the influence of pretest sensitization or selection may lead researchers to draw misguided and inaccurate conclusions about an intervention’s effectiveness. In the present study, these factors led to false positive results. Given the goal of population-wide dissemination of evidence-based interventions, it becomes important to closely examine our criteria for what is “evidence based.” Researchers should pay greater attention to issues of external validity when making decisions regarding intervention research design.