
September 2012 | Vol. 13, No. 8

Evaluation Designs for Assessing Practice Models

By Anita P. Barbee, M.S.S.W., Ph.D., Kent School of Social Work, University of Louisville

When nonprofit agencies, counties, and States embark on implementing a practice model, it is important to plan the evaluation of its effectiveness from the very beginning. Indeed, one of the team's first objectives should be deciding on an evaluation design, and it is imperative that the organization choose the most rigorous design it can accommodate. This brief article describes a few rigorous evaluation designs that could be, or have been, used to assess child welfare practice models.

  1. Randomized Controlled Trial or Experimental Design. This design randomly assigns individual workers within nonprofit agencies, counties, or Tribes to one of two conditions: an experimental condition, in which the randomly chosen workers adopt the new practice model, and a control condition, in which the remaining workers practice as usual. The two groups are then compared on safety, permanency, and well-being outcomes related to the model (see the first sketch following this list).

    Very seldom can an experimental test of a practice model be carried out, even though it is the gold standard for assessing the efficacy of any intervention. Few organizations are structured in a way that allows individuals to be randomly assigned to conditions, kept from talking to people in the other condition, or kept from feeling resentful about having to learn new skills or about being excluded from learning them.
     
  2. Non-Equivalent Group Design. In this design, a set of similar agencies, counties, or Tribes that choose to participate in learning and executing a new practice model is divided into a treatment group and a comparison group. The outcomes of the treatment group are compared to the outcomes of the comparison group both before and after practice model implementation. In addition, measuring the characteristics of both groups can help control for confounds (variables other than the intervention that could account for differences in outcomes between the groups).

    This design was used in the evaluation of Alternative Response in Ohio (Loman, Filonow, & Siegel, 2010): counties that adopted Alternative Response showed more improvement from before to after implementation, and better outcomes, than counties that had not yet adopted it. The design is stronger still if the comparison group is a wait-list control group that receives the intervention after the treatment group, and it is almost as strong as an experiment if measures are taken before and after each treatment condition for both groups (the second sketch following this list illustrates the basic before/after contrast). Such a design is being carried out in the New Hampshire practice model evaluation supported by the Northeast and Caribbean Child Welfare Implementation Center and the National Resource Center for Organizational Improvement.
     
  3. Comparison of High Versus Low Adherence Groups. In this evaluation design, researchers measure worker adherence to the practice model, usually through observation or case reviews. In either method, the researcher develops a measurement tool that assesses the degree to which a worker's practice behaviors align with the practice model. The design then compares workers who practice with high fidelity, based on a cutoff score or percentage of adherence (the high adherence group), to workers who practice with low fidelity (the low adherence group). Differences in outcomes between the two groups are attributed to the new practice, although poorer ability or motivation in the low adherence group must be assessed and ruled out (see the third sketch following this list).

    The high vs. low adherence design is not ideal, but it addresses the reality that randomized controlled trials and non-equivalent group designs are difficult to mount in many State-administered systems. Moreover, if administrators are not fully committed to measuring and ensuring high fidelity, outcome comparisons become meaningless because staff adherence to the model is either unknown or low.
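To make the logic of these designs concrete, the three sketches below simulate each comparison in Python. They are minimal illustrations, not code from any of the evaluations cited here; every worker identifier, score, and cutoff is invented. First, the random-assignment comparison of design 1:

```python
# Design 1 (illustrative only): randomly assign workers to an
# experimental condition (new practice model) or a control condition
# (practice as usual), then compare mean outcomes between the groups.
import random
import statistics

random.seed(42)  # reproducible illustration

workers = [f"worker_{i}" for i in range(100)]
random.shuffle(workers)
experimental = set(workers[:50])  # adopt the new practice model
control = set(workers[50:])       # practice as usual

# Hypothetical per-worker outcome scores (e.g., a permanency rate for
# each worker's caseload); a real study would draw these from case data.
outcome = {w: random.gauss(0.70 if w in experimental else 0.65, 0.05)
           for w in workers}

print(f"experimental mean: {statistics.mean(outcome[w] for w in experimental):.3f}")
print(f"control mean:      {statistics.mean(outcome[w] for w in control):.3f}")
```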
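Next, the non-equivalent group comparison of design 2. One common way to formalize the before/after contrast between the groups is a difference-in-differences estimate, sketched here with invented group-level rates:

```python
# Design 2 (illustrative only): compare pre/post change for the group
# that adopted the model against pre/post change for the comparison
# group; the gap between the two changes is the difference-in-
# differences estimate of the model's effect.
pre_post = {
    "treatment":  {"pre": 0.62, "post": 0.74},  # adopted the model
    "comparison": {"pre": 0.61, "post": 0.65},  # not yet adopted
}

change = {group: v["post"] - v["pre"] for group, v in pre_post.items()}
did_estimate = change["treatment"] - change["comparison"]

print(f"treatment change:          {change['treatment']:+.2f}")
print(f"comparison change:         {change['comparison']:+.2f}")
print(f"difference-in-differences: {did_estimate:+.2f}")
```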
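Finally, the high vs. low adherence comparison of design 3, splitting workers at a hypothetical fidelity cutoff:

```python
# Design 3 (illustrative only): score each worker's fidelity to the
# practice model (e.g., from case reviews), split the workers at a
# cutoff, and compare mean outcomes across the two groups.
import statistics

FIDELITY_CUTOFF = 0.80  # hypothetical adherence threshold

# (fidelity score, outcome score) pairs per worker, all invented
workers = [(0.92, 0.78), (0.85, 0.74), (0.88, 0.80),
           (0.55, 0.61), (0.63, 0.64), (0.71, 0.66)]

high = [out for fid, out in workers if fid >= FIDELITY_CUTOFF]
low = [out for fid, out in workers if fid < FIDELITY_CUTOFF]

print(f"high-adherence mean outcome: {statistics.mean(high):.2f}")
print(f"low-adherence mean outcome:  {statistics.mean(low):.2f}")
# Before attributing the gap to the model, assess and rule out group
# differences in worker ability or motivation.
```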

In both Kentucky and Washington State, evaluators were able to use the high vs. low adherence design, and both evaluations found that when workers adhered to the Solution-Based Casework practice model, outcomes for families and children were more positive than when workers did not (Antle, Sullivan, Barbee, & Christensen, 2010; Courtney, 2011). A follow-up study of 4,550 case reviews conducted during the State's Continuous Quality Improvement process showed that workers who practiced with high fidelity to the practice model achieved all of the Federal Child and Family Services Reviews outcomes in those cases. This provides some support for the intervention but also raises questions about employee selection (Antle, Christensen, van Zyl, & Barbee, 2012).

One innovation we have developed, which can be used either during or after installation of a practice model, is to link the evaluation of training, coaching, and supervision to the measurement of casework quality through quality assurance case reviews and to organizational and client outcomes. That way, as new workers are added to the system and the practice model is embedded in the organization over time, a continuous, connected evaluation ensures fidelity and the achievement of positive outcomes (Antle, Barbee, & van Zyl, 2008; Antle, Barbee, Sullivan, & Christensen, 2010; Antle, Christensen, Barbee, & Martin, 2008; Antle et al., 2010; van Zyl, Antle, & Barbee, 2010).
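As a rough illustration of that linkage, the hypothetical sketch below joins per-worker training and coaching records to quality assurance case-review fidelity scores and an outcome measure, so that all three can be monitored together as workers are added over time. Every field and value is invented.

```python
# Hypothetical sketch: join training/coaching records, QA case-review
# fidelity scores, and a client outcome measure per worker so the three
# can be tracked together over time. All data are invented.
from collections import defaultdict

training = {
    "worker_1": {"trained": True, "coaching_hours": 12},
    "worker_2": {"trained": True, "coaching_hours": 3},
}
case_reviews = [("worker_1", 0.88), ("worker_2", 0.62), ("worker_1", 0.91)]
outcomes = {"worker_1": 0.79, "worker_2": 0.64}

fidelity_scores = defaultdict(list)
for worker, score in case_reviews:
    fidelity_scores[worker].append(score)

print("worker    coaching_hours  mean_fidelity  outcome")
for worker, record in sorted(training.items()):
    scores = fidelity_scores[worker]
    print(f"{worker:<9} {record['coaching_hours']:>14}  "
          f"{sum(scores) / len(scores):>13.2f}  {outcomes[worker]:>7.2f}")
```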