From Addressing Barriers to Learning,
Vol. 3 (1), Winter 1998

Accountability:
Is it Becoming a Mantra?

Accountability should not simply be a mantra. It is an invaluable facet of effective practice, but it is just one facet, and it only makes sense when the other facets are properly planned and implemented.

How effective is the intervention?
Do you have data to support that approach?
Where's your proof?

The questions are so logical and simple to ask, and they can be so devastating in their impact. The problem is that such questions imply that relevant data are easy to gather -- and thus that, if data aren't available, the intervention must be ineffective or those in charge irresponsible. Usually ignored by the questioners are the many complexities associated with valid and ethical evaluation of major mental health and psychosocial problems.

Every mental health practitioner is aware of the importance of having data on results. All interveners want to be accountable for their actions and outcomes. But it is complicated.

Fundamental dilemmas stem from the limited validity and focus of available measures and the tendency for those demanding accountability to hold inappropriate expectations of rapid improvement even though youngsters and their families are experiencing severe and pervasive problems. Most widely sanctioned evaluation instruments are quite fallible. Moreover, they are designed to measure results that require a lengthy course of intervention, thereby giving short shrift to immediate benefits (benchmarks) that are essential precursors of longer-range improvements. Ironically, those demanding accountability tend not to take responsibility for the negative consequences that formal assessment has on some clients. Accountability pressures increasingly require the gathering of a significant amount of data during the first session with a client; many practitioners note that this practice interferes with building positive relationships and contributes to what is already too high an intervention dropout rate.

What are practitioners and program leaders to do?
Well, not surprisingly, they often look for assistance. Evaluation, accountability, and quality improvement are among the topics most frequently requested for technical assistance and continuing education. As a result, the number of publications and technical assistance resources in the area has increased at an exponential rate. And there are endless lists of measures (many of which have not been appropriately validated). Unfortunately, the volume of materials and other resources is not an indication that fundamental evaluation concerns have been effectively addressed. The complications remain unresolved and the status quo remains unsatisfactory; all that any of us can do at this point is develop aids, guidelines, and standards for practice that strive for appropriate accountability while doing the least harm to youngsters and their families.

As an aid to those involved with mental health in schools, our intent here is to support evaluative efforts by highlighting a broad range of accountability indicators and outlining ways in which data related to such indicators currently can be gathered. In doing so, we differentiate three areas for accountability (i.e., accountability to society, to an institution such as schooling, and to youngsters and their families).

Accountability to Whom?
In a seminal article on the evaluation of therapeutic outcomes, Strupp and Hadley (1977) stress how different the expectations of society and its institutions often are from those of individual clients. Thus, it is imperative to understand accountability from the perspective of the various parties with special interests in the results of mental health and psychosocial interventions. For our purposes here, the focus is on (a) the society in general and the institution of schooling in particular and (b) those specific youngsters and their families who are the direct focus of intervention efforts.

Accountability to Society and to the Institution of Schooling
Society looks at the following types of general indicators to evaluate whether efforts related to psychosocial and mental health concerns are paying appropriate dividends:

  • Increases in youth employment (ages 16-19)

  • Reductions in
      • student mobility
      • youth pregnancy
      • sexually transmitted diseases
      • child abuse/neglect
      • youth arrest/citation
      • youth probation violations
      • youth emergency room use for mental health and psychosocial related events
      • foster care placements
      • homeless youth
      • youth suicide rates
      • youth death rates

In addition, those responsible for schools are required to demonstrate effective fulfillment of their specific mission -- which is to educate the young in ways that meet society's needs. The primary indicators currently demanded by social policy are those that reflect academic achievement at a standard competitive with other major countries. Thus, the emphasis is on increasing:

  • at all grades
      • achievement test scores
      • grades
      • other indicators of academic progress (e.g., analyses of student work)
  • at the high school level
      • number graduating (with a related reduction in the number dropping out)
      • number taking SATs
      • number continuing with post-secondary education

Because many youngsters are experiencing barriers to learning and performing at school, programs and services to address such barriers are increasingly essential to the ability of schools to accomplish their mission. Some major indicators for accountability related to these enabling or learning support programs are:

  • Reductions in
      • unexcused absences
      • tardies
      • suspensions/expulsions
      • referrals for misbehavior
      • referrals for learning problems
  • Increases in
      • attendance
      • cooperation & work habits
      • fluency in English as a Second Language
  • Reduction in the numbers designated as Learning Disabled or Emotionally Disturbed

Data for Accountability to Society and the Institution of Schooling
Data related to most of the above indicators are available from the records of school sites, school districts, and city/county agencies. Some schools also are involved in administering the Youth Risk Behavior Surveillance System (sponsored by the Centers for Disease Control and Prevention), which contains relevant indicators for use in monitoring changes over time. (Many communities and child advocacy groups are gathering local and statewide data on child well-being and publishing them as "Report Cards.") If data are not available, then efforts are needed to ensure relevant indicators are gathered and made accessible. And appropriate steps should be taken to ensure that data can be disaggregated with respect to specific subgroups.
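
To illustrate what such disaggregation involves, here is a minimal sketch in Python, assuming indicator records are kept in a simple tabular file; the file name and column names are hypothetical stand-ins, not a prescribed format.

```python
# Minimal sketch: disaggregating a school indicator by subgroup.
# The CSV and its columns ("year", "subgroup", "suspensions") are
# hypothetical stand-ins for whatever records a district keeps.
import pandas as pd

records = pd.read_csv("district_indicators.csv")

# Overall trend: total suspensions per year.
overall = records.groupby("year")["suspensions"].sum()

# Disaggregated view: the same indicator broken out by subgroup,
# so change over time can be monitored for each group separately.
by_subgroup = (records.groupby(["year", "subgroup"])["suspensions"]
                      .sum()
                      .unstack("subgroup"))

print(overall)
print(by_subgroup)
```
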
Accountability to Specific Youngsters and Families
Those who work in school districts to provide programs and services related to psychosocial and mental health concerns also are accountable to the specific individuals they help. Such accountability certainly can be seen as encompassing the indicators listed above. However, for individuals who must deal with major barriers, many of the above realistically are only good indicators of progress after a lengthy period of multifaceted, comprehensive, integrated intervention. More immediate accountability indicators are needed to demonstrate progress related to objectives that are the current and direct focus of psychosocial and mental health interventions (e.g., reductions in symptoms; enhanced motivation and psychological and physical well-being). Because data on such specific objectives are not readily available, the problem of generating relevant data arises -- as do some serious dilemmas. Efforts to answer the following questions lead to an appreciation of the many problems and issues.

What are the right indicators?
Endless arguments arise over indicators when they are discussed in highly specific and concrete terms. At a more abstract level, there is considerable agreement around three general categories: (1) client satisfaction (the youngster; the family), (2) reduction in the youngster's symptoms/problem behaviors, and (3) increases in positive functioning (the youngster; the family).

How can appropriate specific and concrete indicators be identified for particular clients?
The dilemmas that arise here reflect the problem of "Who is the client?" -- the youngster? the family? a teacher who made the referral? Additional dilemmas arise because the various involved parties often have different perspectives regarding what problems should be addressed. (And, of course, the intervener may have yet another perspective.) A reasonable compromise is to gather evaluative data related to (1) the specific symptoms and behavior problems that led to the referral, (2) any objectives that the client wants help in achieving, and (3) specific objectives that the intervener believes are warranted and that the client consents to add.

How should the deficiencies associated with existing measures be accounted for?
Although some measures are better than others and some are designated the best that exist, "best" should not be equated with "good" or "good enough." All instruments we currently rely on have limited reliability and validity; also quite limited are the normative data for various subgroups. These limitations (1) call for using formal instruments only when they are necessary, (2) require full disclosure of limitations when findings are reported, and (3) warrant extensive efforts to look for disconfirming evidence whenever findings suggest significant pathology.

How can the negative impact of gathering data be minimized?
All evaluation has the potential to produce major negative consequences. The ethical obligation is to maximize benefits and minimize costs to clients. Putting aside the financial costs, it is clear that use of any formal measure can increase a client's distress and produce psychological reactance. The high dropout rate among clients is likely, in part, a reaction to too much formal assessment during the first encounters with an intervener. Accountability requirements that mandate administration of formal measures before counseling is initiated may well be contributing to the low rate of youngsters who stay in counseling long enough to reap significant benefits. From the perspective of sound standards for practice, (1) no formal measures should be administered until the intervener judges that the relationship with the client is strong enough to mediate any distress, and (2) measures should be personalized to assess only the specific and concrete indicators relevant to a particular client.

Measures Relevant for Accountability to Specific Youngsters and Families

Below is a sample of promising instruments. Unless otherwise noted, each measure cited is reviewed in Evaluating the Outcome of Children's Mental Health Services: A Guide for the Use of Available Child and Family Outcome Measures (1995) -- prepared by T.P. Cross & E. McDonald for the Technical Assistance Center for the Evaluation of Children's Mental Health Services.2

It is essential that interveners review and choose measures that minimize negative impact on clients. Proper personalization of assessment in the best interests of the client may even call for not using a measure in its entirety or in the way the developer prescribes. We recognize that this violates standardization of administration and makes interpretation more difficult, but just as empirically supported therapeutic strategies must be adapted to ensure a good fit with a client, so must assessment practices. In both instances, empirical support for prevailing practices is not so strong as to warrant rigid implementation. Also of value are data from functional assessments (increasingly being done when students are referred for behavior problems). Finally, some interveners use projective procedures and selected items from other measures (e.g., sentence completion, drawings and related stories, Children's Depression Inventory) as a stimulus for discussion with clients. Client responses early and near the end of the period of intervention may be useful as supplementary evaluation data.

(1) Client Satisfaction (youngster; family)
Client Satisfaction Questionnaire (CSQ -- Larsen, et al. -- Portland State U. version)
Youth Satisfaction Questionnaire (YSQ -- Portland State U.)
Vanderbilt Satisfaction Scales -- parents/caregivers and/or adolescent self-report

(2) Reduction in Youngster's Symptoms/Problem Behaviors
Child Behavior Checklist (CBCL -- Achenbach & colleagues)
There are versions to be completed by parents/caregivers and teachers, a youth self-report version, and a direct observation form.
Child Assessment Schedule (CAS -- Hodges) -- self-reports from child and/or parents/caregivers

(3) Increases in Positive Functioning (youngster; family)
Child and Adolescent Functional Assessment Scale (CAFAS -- Hodges) -- intervener rating
Preschool and Early Childhood Functional Assessment (Hodges) -- intervener rating
Quality of Well-Being Scale3 (QWB) -- client self-report
Family Environment Scale (Moos) -- family self-report
Family Empowerment Scale (Portland State U.) -- family self-report


2Other instruments are reviewed in the guidebook; those included here seem most useful for practitioners concerned with mental health in schools. The guidebook is available from the TA Center for the Evaluation of Children's Mental Health Services at Judge Baker Children's Center, 295 Longwood Ave., Boston, MA 02115; (617) 232-8390.

3Reviewed in W.H. Hargreaves, M. Shumway, T. Hu, & B. Cuffel (1998). Cost-Outcome Methods for Mental Health. San Diego: Academic Press.

Sampling of Indicators with Respect to Different Accountability Demands
As should be evident from the preceding discussion, it can be extremely costly and time consuming to be accountable to all parties with interests in the productivity of an intervention (see also Figure 1). In most situations, the reality is that only a sample of data can be gathered (see Figure 2).

With respect to individual clients, the data sample should begin with assessment that has direct and immediate relevance to the specific objectives an intervener and client have agreed to pursue. Then, in response to accountability demands and in keeping with ethical and feasible practice, a subset of standardized items can be administered to stratified samples of clients. The particular subsets of items chosen should reflect matters of greatest concern to those demanding accountability. If the pool of items is large, then different subsets of items can be administered over time and later combined to provide a full picture of outcomes.
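
Here is a minimal sketch, in Python, of the rotation just described -- stratified samples of clients and rotating subsets of a standardized item pool; the item pool, strata, and sizes are hypothetical illustrations.

```python
# Minimal sketch of the sampling strategy described above:
# administer rotating subsets of a standardized item pool to a
# stratified sample of clients, so that full coverage of the pool
# accumulates over time without burdening any one client with
# every item. All names and sizes here are hypothetical.
import random

ITEM_POOL = [f"item_{i}" for i in range(1, 31)]  # a 30-item standardized pool
SUBSET_SIZE = 10                                 # items per administration

def stratified_sample(clients_by_stratum, per_stratum):
    """Draw an equal number of clients from each stratum (e.g., grade level)."""
    sample = []
    for clients in clients_by_stratum.values():
        sample.extend(random.sample(clients, min(per_stratum, len(clients))))
    return sample

def item_subset(administration_number):
    """Rotate through the pool so successive administrations cover all items."""
    start = (administration_number * SUBSET_SIZE) % len(ITEM_POOL)
    return [ITEM_POOL[(start + i) % len(ITEM_POOL)] for i in range(SUBSET_SIZE)]

clients = {"elementary": ["c1", "c2", "c3"], "middle": ["c4", "c5"], "high": ["c6", "c7"]}
for admin in range(3):  # three administrations together cover the whole pool
    print(admin, stratified_sample(clients, 2), item_subset(admin))
```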

With respect to societal and institutional accountability, the data sample initially consists of whatever can be readily gathered on a regular basis. Subsequently, again reflecting matters of greatest concern to those demanding accountability, step-by-step strategies can be developed to establish systems for amassing regular findings related to key variables and specific population subgroups.

Clearly, sampling requires considerable planning and careful implementation. A systematic evaluation plan must be developed, and there must be appropriate budgeting for its implementation. Many programs will require specific consultation in developing an appropriate sampling strategy.

Standards for Comparison
Whatever data are collected will be imperfect and only rarely will be easily interpreted. For accountability to be rational, there must be a reasonable set of standards for comparison. In asking how good an intervention is, the question must be answered in terms of "Compared to what?"

When it comes to mental health in the schools, the best comparisons are (a) data on the previous results of intervention efforts with comparable students and their families, (b) data on similar students/families at a school who have not yet been served (e.g., appropriate waiting-list samples), or (c) data from a very similar school that does not have the programs being evaluated. The first approach calls for gathering "baseline" data before or in the early stages of an intervention's development. The latter approaches call for being able to gather the same data from nonserved groups. Again, systematic planning and appropriate budgeting are central considerations.
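
As a concrete illustration of the second comparison strategy, here is a minimal sketch contrasting pre/post change for served students with change over the same period for a waiting-list group; the scores and group sizes are invented for illustration only.

```python
# Minimal sketch: compare pre/post change for served students against
# change over the same interval for a waiting-list comparison group.
# All scores here are hypothetical illustrations, not real data.
from statistics import mean

served_pre = [12, 15, 14, 18, 11]    # e.g., symptom-checklist scores at intake
served_post = [8, 10, 9, 12, 9]      # scores after the intervention period
waitlist_pre = [13, 14, 16, 12]
waitlist_post = [12, 14, 15, 12]     # same interval, not yet served

served_change = mean(post - pre for pre, post in zip(served_pre, served_post))
waitlist_change = mean(post - pre for pre, post in zip(waitlist_pre, waitlist_post))

# A more negative difference favors the intervention (symptoms dropped
# more for served students than for the waiting-list group).
print(served_change - waitlist_change)
```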

Finding out if interventions are any good is a necessity. But in doing so, it is wise to recognize that evaluation is not simply a technical process. Evaluation involves decisions about what, how, and when to measure, and these decisions are based in great part on values and beliefs. As a result, limited knowledge, bias, vested interests, and ethical issues constantly influence evaluation processes and the decisions made with respect to accountability.

