Discussion: Using a Logic Model to Focus Interventions and Achieve Desired Outcomes

In social work practice and in program development, it is possible to make faulty assumptions about what clients need and what outcomes social work activities will produce. Consider the following:

A team of social workers meets to discuss their services to low-income young mothers. One social worker states that what the young mothers need most is information about community resources. She proposes that the social workers’ activities consist of making referrals to public assistance programs for income support, food stamps, medical insurance, employment services, and educational resources. However, another team member points out that most clients are referred to their program from the public welfare office and health care programs, which suggests that the clients already know about these common resources and have been able to access them.

How might the team explore what problems bring the clients to their agency? What might the team learn from client assessments? How can the team verify the desired outcomes of their services? Developing a logic model will help the team see a logical connection between problems, needs, intervention activities, and corresponding outcomes. This series of logical connections leads to formulating a theory of change, that is, a theory about how our work leads to the outcomes for clients.

To prepare for this Assignment, imagine that you are part of a work group charged with creating a logic model and generating a theory of change. Select a practitioner-level intervention whose connections you are interested in analyzing, and consider how a logic model might be applied to that practice.

ASSIGNMENT (1-Page Paper)


Post a logic model and theory of change for a practitioner-level intervention. Describe the types of problems, the client needs, and the underlying causes of problems and unmet needs. Identify the short- and long-term outcomes that you think would represent an improved condition. Then describe interventions that would lead to a change in the presenting conditions. Be sure to search for and cite resources that inform your views.


Week 7: Developing a Logic Model Outline Handout

Complete the tables below to develop both a practice-level logic model and a program-level logic model to address the needs of Helen in the Petrakis case history.

Practice-Level Logic Model Outline

Problems | Needs | Underlying Causes | Intervention Activities | Short-Term Outcomes | Long-Term Outcomes


Program-Level Logic Model Outline

Problems | Needs | Underlying Causes | Intervention Activities | Short-Term Outcomes | Long-Term Outcomes


© 2014 Laureate Education, Inc.

Excerpts from Measuring Program Outcomes: A Practical Approach
© 1996 United Way of America

Introduction to Outcome Measurement

If yours is like most human service agencies or youth- and family-serving organizations, you regularly
monitor and report on how much money you receive, how many staff and volunteers you have, and what
they do in your programs. You know how many individuals participate in your programs, how many hours
you spend serving them, and how many brochures or classes or counseling sessions you produce. In
other words, you document program inputs, activities, and outputs.

Inputs include resources dedicated to or consumed by the program. Examples are money, staff and staff
time, volunteers and volunteer time, facilities, equipment, and supplies. For instance, inputs for a parent
education class include the hours of staff time spent designing and delivering the program. Inputs also
include constraints on the program, such as laws, regulations, and requirements for receipt of funding.

Activities are what the program does with the inputs to fulfill its mission. Activities include the strategies,
techniques, and types of treatment that comprise the program’s service methodology. For instance,
sheltering and feeding homeless families are program activities, as are training and counseling homeless
adults to help them prepare for and find jobs.

Outputs are the direct products of program activities and usually are measured in terms of the volume of
work accomplished–for example, the numbers of classes taught, counseling sessions conducted,
educational materials distributed, and participants served. Outputs have little inherent value in
themselves. They are important because they are intended to lead to a desired benefit for participants or
target populations.

If given enough resources, managers can control output levels. In a parent education class, for example,
the number of classes held and the number of parents served are outputs. With enough staff and
supplies, the program could double its output of classes and participants.

If yours is like most human service organizations, you do not consistently track what happens to
participants after they receive your services. You cannot report, for example, that 55 percent of your
participants used more appropriate approaches to conflict management after your youth development
program conducted sessions on that skill, or that your public awareness program was followed by a 20
percent increase in the number of low-income parents getting their children immunized. In other words,
you do not have much information on your program’s outcomes.

Outcomes are benefits or changes for individuals or populations during or after participating in program
activities. They are influenced by a program’s outputs. Outcomes may relate to behavior, skills,
knowledge, attitudes, values, condition, or other attributes. They are what participants know, think, or can
do; how they behave; or what their condition is that is different following the program.

For example, in a program to counsel families on financial management, outputs–what the service
produces–include the number of financial planning sessions and the number of families seen. The
desired outcomes–the changes sought in participants’ behavior or status–can include their developing
and living within a budget, making monthly additions to a savings account, and having increased financial
stability.
In another example, outputs of a neighborhood clean-up campaign can be the number of organizing
meetings held and the number of weekends dedicated to the clean-up effort. Outcomes–benefits to the
target population–might include reduced exposure to safety hazards and increased feelings of
neighborhood pride. The program outcome model depicts the relationship between inputs, activities,
outputs, and outcomes.
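The chain the model depicts can be sketched as a simple ordered structure. The example below uses the neighborhood clean-up campaign; the element names are illustrative assumptions, not part of any actual program's model:

```python
# Illustrative sketch of the program outcome model as an ordered mapping,
# using the neighborhood clean-up example; all entries are hypothetical.
outcome_model = {
    "inputs": ["staff time", "volunteer time", "supplies"],
    "activities": ["hold organizing meetings", "run weekend clean-ups"],
    "outputs": ["number of organizing meetings held",
                "number of clean-up weekends"],
    "outcomes": ["reduced exposure to safety hazards",
                 "increased feelings of neighborhood pride"],
}

# The model reads as a chain: each component feeds the next.
components = list(outcome_model)
for earlier, later in zip(components, components[1:]):
    print(f"{earlier} -> {later}")
```

Writing the model down this way makes the direction of the logic explicit: inputs lead to activities, activities to outputs, and outputs to outcomes.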

Note: Outcomes sometimes are confused with outcome indicators, specific items of data that are tracked to measure how well a
program is achieving an outcome, and with outcome targets, which are objectives for a program’s level of achievement.

For example, in a youth development program that creates internship opportunities for high school youth, an outcome might be that
participants develop expanded views of their career options. An indicator of how well the program is succeeding on this outcome
could be the number and percent of participants who list more careers of interest to them at the end of the program than they did at
the beginning of the program. A target might be that 40 percent of participants list at least two more careers after completing the
program than they did when they started it.
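The arithmetic behind an indicator and a target can be sketched in a few lines. The participant figures below are hypothetical, invented purely to illustrate the youth internship example:

```python
# Hypothetical participant records for the youth internship example:
# careers of interest listed at program start and end (numbers invented).
participants = [
    {"careers_before": 1, "careers_after": 4},
    {"careers_before": 2, "careers_after": 3},
    {"careers_before": 3, "careers_after": 3},
    {"careers_before": 0, "careers_after": 2},
    {"careers_before": 2, "careers_after": 5},
]

# Indicator: number and percent of participants who list more careers
# at the end of the program than at the beginning.
gained = [p for p in participants
          if p["careers_after"] > p["careers_before"]]
indicator_pct = 100 * len(gained) / len(participants)

# Target: at least 40 percent of participants list two or more
# additional careers after completing the program.
met = [p for p in participants
       if p["careers_after"] - p["careers_before"] >= 2]
target_met = (100 * len(met) / len(participants)) >= 40

print(f"indicator: {len(gained)} of {len(participants)} "
      f"({indicator_pct:.0f}%); target met: {target_met}")
```

The same outcome thus yields two distinct numbers: the indicator reports how the program is doing, while the target states the level of achievement the program has committed to.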

Program Outcome Model

Inputs: resources dedicated to or consumed by the program, such as staff and staff time, volunteers and
volunteer time, and equipment and supplies, as well as constraints on the program, such as funders’
requirements.

Activities: what the program does with the inputs to fulfill its mission, such as feeding and sheltering
homeless families, providing job training, educating the public about signs of child abuse, counseling
pregnant women, and creating mentoring relationships for youth.

Outputs: the direct products of program activities, such as the number of classes taught, number of
counseling sessions conducted, number of educational materials distributed, number of hours of service
delivered, and number of participants served.

Outcomes: benefits for participants during and after program activities, such as new knowledge,
increased skills, changed attitudes or values, modified behavior, improved condition, and altered status.

Why Measure Outcomes?

In growing numbers, service providers, governments, other funders, and the public are calling for clearer
evidence that the resources they expend actually produce benefits for people. Consumers of services and
volunteers who provide services want to know that programs to which they devote their time really make a
difference. That is, they want better accountability for the use of resources. One clear and compelling
answer to the question of “why measure outcomes?” is to see if programs really make a difference in the
lives of people.

Although improved accountability has been a major force behind the move to outcome measurement,
there is an even more important reason: to help programs improve services. Outcome measurement
provides a learning loop that feeds information back into programs on how well they are doing. It offers
findings they can use to adapt, improve, and become more effective.

This dividend doesn’t take years to occur. It often starts appearing early in the process of setting up an
outcome measurement system. Just the process of focusing on outcomes–on why the program is doing
what it’s doing and how participants will be better off–gives program managers and staff a clearer picture
of the purpose of their efforts. That clarification alone frequently leads to more focused and productive
service delivery.

Down the road, being able to demonstrate that their efforts are making a difference for people pays
important dividends for programs. It can, for example, help programs:

• Recruit and retain talented staff
• Enlist and motivate able volunteers
• Attract new participants
• Engage collaborators
• Garner support for innovative efforts
• Win designation as a model or demonstration site
• Retain or increase funding
• Gain favorable public recognition

Results of outcome measurement show not only where services are being effective for participants, but
also where outcomes are not as expected. Program managers can use outcome data to:

• Strengthen existing services
• Target effective services for expansion
• Identify staff and volunteer training needs
• Develop and justify budgets
• Prepare long-range plans
• Focus board members’ attention on programmatic issues

To increase its internal efficiency, a program needs to track its inputs and outputs. To assess compliance
with service delivery standards, a program needs to monitor activities and outputs. But to improve its
effectiveness in helping participants, to assure potential participants and funders that its programs
produce results, and to show the general public that it produces benefits that merit support, an agency
needs to measure its outcomes.

These and other benefits of outcome measurement are not just theoretical. Scores of human service
providers across the country attest to the difference it has made for their staff, their volunteers, their
decision makers, their financial situation, their reputation, and, most important, for the public they serve.

Eight Steps to Success

Measuring Program Outcomes provides a step-by-step approach to developing a system for measuring
program outcomes and using the results. The approach, based on methods implemented successfully by
agencies across the country, is presented in eight steps. Although the steps are presented sequentially,
this is actually a dynamic process with a good deal of interplay among them.
Example Outcomes and Outcome Indicators for Various Programs
These are illustrative examples only. Programs need to identify their own outcomes and indicators,
matched to and based on their own experiences and missions and the input of their staff, volunteers,
participants, and others.

Smoking cessation
Outcome: Participants stop smoking.
• Number and percent of participants who report that they have quit smoking by the end of the course
• Number and percent of participants who have not relapsed six months after program completion

Information and referral program
Outcome: Callers access services to which they are referred or about which they are given information.
• Number and percent of community agencies that report an increase in new participants who came to
their agency as a result of a call to the information and referral hotline
• Number and percent of community agencies that indicate these referrals are appropriate

Tutorial program for 6th graders
Outcome: Students’ academic performance improves.
• Number and percent of participants who earn better grades in the grading period following completion
of the program than in the grading period immediately preceding enrollment in the program

English as a second language instruction
Outcome: Participants become proficient in English.
• Number and percent of participants who demonstrate an increase in ability to read, write, and speak
English by the end of the course

Counseling for parents identified as at risk for child abuse or neglect
Outcome: Risk factors decrease; no confirmed incidents of child abuse or neglect.
• Number and percent of participating families for whom Child Protective Service records report no
confirmed child abuse or neglect during the 12 months following program completion

Employee assistance program
Outcome: Employees with drug and/or alcohol problems are rehabilitated and do not lose their jobs.
• Number and percent of program participants who are gainfully employed at the same company
6 months after intake

In-home services for seniors
Outcome: The home environment is healthy, clean, and safe; participants stay in their own home and
are not referred to a nursing home.
• Number and percent of participants whose home environment is rated clean and safe by a trained
observer
• Number of local nursing homes that report that applications from younger and healthier citizens are
declining (indicating that persons who in the past would have been referred to a nursing home now
stay at home longer)

Prenatal care
Outcome: Pregnant women follow the advice of the nutritionist.
• Number and percent of women who take recommended vitamin supplements and consume
recommended amounts of calcium

Shelter and counseling for runaway youth
Outcome: Family is reunified whenever possible; otherwise, youths are in stable alternative housing.
• Number and percent of youth who return home
• Number and percent of youth placed in alternative living arrangements who are still in that
arrangement 6 months later unless they have been reunified

Camping
Outcome: Children expand skills in areas of interest to them.
• Number and percent of campers who identify two or more skills they have learned at camp

Family planning for teen mothers
Outcome: Teen mothers have no second pregnancies until they have completed high school and have
the personal, family, and financial resources to support a second child.
• Number and percent of teen mothers who comply with family planning visits
• Number and percent of teen mothers using a recommended form of birth control
• Number and percent of teen mothers who do not have repeat pregnancies prior to graduation
• Number and percent of teen mothers who, at the time of their next pregnancy, are high school
graduates, are married, and do not need public assistance to provide for their children

Glossary of Selected Outcome Measurement Terms

Inputs are resources a program uses to achieve program objectives. Examples are staff, volunteers,
facilities, equipment, curricula, and money. A program uses inputs to support activities.

Activities are what a program does with its inputs–the services it provides–to fulfill its mission. Examples
are sheltering homeless families, educating the public about signs of child abuse, and providing adult
mentors for youth. Program activities result in outputs.

Outputs are products of a program’s activities, such as the number of meals provided, classes taught,
brochures distributed, or participants served. A program’s outputs should produce desired outcomes for
the program’s participants.

Outcomes are benefits for participants during or after their involvement with a program. Outcomes may
relate to knowledge, skills, attitudes, values, behavior, condition, or status. Examples of outcomes include
greater knowledge of nutritional needs, improved reading skills, more effective responses to conflict,
getting a job, and having greater financial stability.

For a particular program, there can be various “levels” of outcomes, with initial outcomes leading to
longer-term ones. For example, a youth in a mentoring program who receives one-to-one encouragement
to improve academic performance may attend school more regularly, which can lead to getting better
grades, which can lead to graduating.

Outcome indicators are the specific items of information that track a program’s success on outcomes.
They describe observable, measurable characteristics or changes that represent achievement of an
outcome. For example, a program whose desired outcome is that participants pursue a healthy lifestyle
could define “healthy lifestyle” as not smoking; maintaining a recommended weight, blood pressure, and
cholesterol level; getting at least two hours of exercise each week; and wearing seat belts consistently.
The number and percent of program participants who demonstrate these behaviors then is an indicator of
how well the program is doing with respect to the outcome.

Outcome targets are numerical objectives for a program’s level of achievement on its outcomes. After a
program has had experience with measuring outcomes, it can use its findings to set targets for the
number and percent of participants expected to achieve desired outcomes in the next reporting period. It
also can set targets for the amount of change it expects participants to experience.

Benchmarks are performance data that are used for comparative purposes. A program can use its own
data as a baseline benchmark against which to compare future performance. It also can use data from
another program as a benchmark. In the latter case, the other program often is chosen because it is
exemplary and its data are used as a target to strive for, rather than as a baseline.
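The two uses of benchmarks described above can be made concrete with a small sketch; all figures below are invented for illustration, not drawn from any real program:

```python
# Hypothetical quit-smoking rates (percent of participants), invented
# purely to illustrate the two uses of benchmarks described above.
baseline_pct = 32.0    # this program's own prior-period performance
exemplary_pct = 55.0   # an exemplary peer program, used as a target
current_pct = 41.0     # this reporting period

# Compared against its own baseline, the program shows improvement...
improved_over_baseline = current_pct > baseline_pct

# ...while the exemplary program's figure marks how far there is to go.
gap_to_target = exemplary_pct - current_pct

print(f"improved over baseline: {improved_over_baseline}; "
      f"gap to exemplary target: {gap_to_target:.1f} points")
```

The distinction matters: the baseline benchmark answers "are we doing better than before?", while the exemplary benchmark answers "how close are we to the best practice we aspire to?"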


Administration in Social Work, 33:439–449, 2009
Copyright © Taylor & Francis Group, LLC
ISSN: 0364-3107 print/1544-4376 online
DOI: 10.1080/03643100903173040


Standardizing Practice at a Victim Services Organization: A Case Analysis
Illustrating the Role of Evaluation

M. Larsen et al.

Institut für Rechtsmedizin, Universitätsklinikum Hamburg-Eppendorf, Germany
Safe Horizon, New York, New York, USA

This paper provides an example of how an internal evaluation
department at a midsize victim services organization led key
activities in achieving strategic organizational goals around
unifying service delivery and standardizing practice. Using the
methods of logic model development and naturalistic observation
of services, evaluation staff guided the clarification of program
expertise and outcomes, and assessed the necessary resources for
standardizing practice.

KEYWORDS program evaluation, standardized practice, victim
services, logic models, observation

There is little question that there is a growing demand for program evaluation data at nonprofit
organizations, stemming from government, foundations, and other funding sources that want to know the
impact of the programs they are supporting and that require demonstrations of effectiveness (Botcheva,
White, & Huffman, 2002; Carman, 2007; Newcomer, Hatry, & Wholey, 2004). This focus on accountability
to funders is also an opportunity for organizations to learn what services work best through evidence
collection for outcome measurement (Botcheva et al., 2002; Buckmaster, 1999). An organization’s ability
to use this evidence and make strategic management decisions that are evidence-based or informed is
essential in an increasingly competitive environment for funding (Menefee, 1997; Neuman, 2003;
Proehl, 2001).

Address correspondence to Shelly Botuck, Safe Horizon, 2 Lafayette Street, 3rd Floor,
New York, NY 10007, USA. E-mail: sbotuck@safehorizon.org

Despite the increased focus on evaluation from funders, limited resources make it difficult for nonprofit
organizations to carry out evaluation (Hoefer, 2000). A study by Carman (2007) found that very few
organizations have the discretionary funds necessary to employ internal evaluation staff members. This is
in part because demands for evaluation can often seem like distractions from service provision, especially
when funding for services is limited (Kopczynski & Pritchard, 2004; Neuman, 2003). Poorly developed
information systems and high staff turnover at many social service organizations also present barriers to
implementing evaluation that demonstrates program improvement over time, both in terms of data
collection and institutional memory (Kopczynski & Pritchard, 2004). As a result, organizations focus on
counting products or services provided through the activities of the organization (e.g., number of
counseling sessions, number of trainings conducted) in an attempt to meet funder demands (Carman,
2007). This emphasis on outputs can shift the focus from achieving the mission of the organization to
counting services, and may take attention away from case documentation that could be used to monitor
practice and assess client outcomes. Despite all of these barriers, evaluation is still the key to
understanding the effects of programs and services. Thus, the challenge lies in making evaluation useful
to organizations, because without an appreciation of its value and worth, program evaluation will not be
efficacious (Chelimsky, 1994).

Using Safe Horizon’s community and criminal justice programs (CCJP)
as an example,1 this paper provides a case analysis illustrating the role of
evaluation in furthering the implementation of our organization’s strategic
plan. It focuses on two key activities, logic modeling and assessing program
practice, and highlights the ways that these activities assisted Safe Horizon
in standardizing service delivery.


Founded in 1978, Safe Horizon’s mission is to provide support, prevent
violence, and promote justice for victims of crime and abuse, their families,
and communities. Safe Horizon is New York City’s leading victim assistance
organization delivering services to victims of domestic violence, sexual
assault, child abuse, stalking, human trafficking, and other crimes through
programs in the family and criminal courts, police precincts, child advocacy
centers, schools, and other locations. Safe Horizon also operates domestic
violence shelters; New York City’s 24-hour domestic violence, rape, and
sexual assault hotlines; drop-in centers and emergency shelters for homeless and street-involved youth;
case management services; and specialized
mental health programs. Victims’ program involvement may last minutes or
years. Safe Horizon’s primary service obligation is to provide victims of
crime and abuse with the resources and tools needed to maximize their
personal safety and reduce their risk of further harm, whatever the presenting
victimization or service setting.

Safe Horizon’s leadership has long recognized the importance of internal research and evaluation.
As Whyte (1989) noted, when knowledgeable stakeholders conduct research, they can report on practices
without the distortion caused by the presence of an outside observer. However, external funds obtained to
answer macro social science and criminal justice questions dictated most of Safe Horizon’s research and
evaluation activities. As a result, these activities rarely informed day-to-day direct practice or service
delivery. Additionally, it was difficult to agree on measurable outcomes for victims of violence. This was
due in part to the context of traditional victim services programs, which are often designed to prevent a
negative event from occurring (reabuse), and where the approach often holds that “the survivor is not
responsible for preventing, and is indeed often unable to prevent, this negative event from occurring
regardless of her actions” (Sullivan & Alexy, 2001, p. 1).

Thus, as a first step in establishing evaluation that would directly inform practice, while acknowledging
the challenges of establishing outcomes, program evaluation focused on victim satisfaction surveys. This
was helpful in improving practice and began collaboration between evaluation and program staff. It also
built a foundation of trust and understanding that would become important in engaging programs in
thinking about outcomes beyond victim satisfaction.

Over the course of three decades, Safe Horizon grew into a midsize organization with the capacity to
serve a wide range of victims in dispersed settings, and each program determined its own method for
assessing victim needs. As a result, the organization’s service delivery and documentation practices were
decentralized and varied. In 2003, this was addressed in the organization’s strategic plan with the goal of
unifying service delivery to ensure coordinated and high-quality services.

Standardized practice is the creation of uniformity in the definitions, training, staff roles, and procedures
for common practices within a discipline or organization, which is “intended to promote the effectiveness
of practice, reduce variability in implementing best practice, [and] increase the predictability of practice
behaviors” (Rosen & Proctor, 2003, p. 1). Using our safety assessment and risk management policy
(Safe Horizon, 2007) to standardize practice, Safe Horizon began its first steps towards unifying service
delivery and creating a continuum of care across programs. This policy places a “victim’s needs, wishes,
resources, and capacities at the center of client work,” and thereby sets a “standard for a dynamic and
collaborative process to address the complex challenges that victims of crime or abuse face” (Tax,
Vigeant, & Botuck, 2008, p. 6). The policy provides a framework that acknowledges change in a victim’s
risk over time, while also integrating both the staff’s knowledge and the victim’s perspective into the
safety planning process. Its implementation requires attention to standardization because the policy
emphasizes “a standard of care that will be upheld across the organization,” while still recognizing that
“specific aspects of implementation will depend on the program and the services offered by that program”
(Safe Horizon, 2007, p. 2).

To prepare to implement the policy in a way that would unify service
delivery across programs, evaluation staff engaged CCJP in a number of key
activities, the first of which was the development of logic models. These
were intended as blueprints defining the expertise, activities, and goals of
each program, clarifying how programs work together, and setting up a
framework for monitoring program practice. To identify the necessary
resources for implementing standardized practice, evaluation staff assessed
program practice to determine to what extent the skills and practices
outlined in the policy were already taking place.


McLaughlin and Jordan (2004) have described logic models as “the basis for a convincing story of the
program’s expected performance, telling stakeholders and others the problem the program focuses on
and how it is uniquely identified to address it” (p. 8) through a visual representation of a program’s
resources, activities, outputs, and a range of outcomes. “A logic model provides a blueprint that delineates
all the elements of the program that need to be documented in order to fully understand the program”
(Conrad, Randolph, Kirby, & Bebout, 1999, p. 20) and represents how a specific set of resources and
activities will bring about intended outcomes. Logic models are useful tools for pinpointing inconsistencies
or redundancies, as well as for determining whether activities are still relevant to program goals. Conrad
et al. (1999) also noted the usefulness of logic models for bringing the perspectives of various program
stakeholders to consensus, which can serve to establish clear and measurable expectations for a
program and a common understanding of staff roles and functions across an organization (McLaughlin &
Jordan, 2004).

In order to integrate the service delivery model into the organizational culture and everyday decision
making, the logic model development process aimed to ensure buy-in at all levels. Evaluation staff
(namely, the authors) met with all levels of program management. Prior to these meetings with CCJP
leaders and site supervisors, evaluation staff reviewed current funding reports and objectives and
investigated reporting and documentation mechanisms in order to gain an initial understanding of
resources and program activities. This served as preparation for building an overall logic model with
CCJP leaders to define the vision for this cluster of programs.

Initial meetings with CCJP leaders included discussions about resources (e.g., funding, staff expertise,
external partners, documentation systems) and activities, but primarily centered upon the expectations
and vision for this cluster of programs. This focus on vision was only possible given the mutual trust and
understanding previously built between evaluation and program staff. Due to previous negative
experiences in tying program success to the actions of outside systems or the actions of the offender
(e.g., receiving an order of protection, successful prosecution of the offender, placement in a domestic
violence shelter, desistance of violent behavior), CCJP leaders voiced a general reluctance to define
victim outcomes. Given this reluctance and the challenges inherent in establishing and operationalizing
outcomes at social service organizations (Neuman, 2003), extra time was devoted to discussing
meaningful program outcomes that accurately assess whether the program is having its intended effect.
Over the course of these discussions, consensus on appropriate outcomes was achieved through
continual grounding in the policy, which was centered upon the organization’s guiding principles and the
commitment to “support and promote our client’s self-determination, dignity, and empowerment in a
compassionate, non-judgmental environment” (Tax et al., 2008, p. 14). With this grounding, evaluation
staff and CCJP leaders developed victim outcomes that were not dependent upon outside actors, but
measured program success through individual victim change. These outcomes, along with quality
assurance of standardized practices, have the potential to inform program practice through their
measurement.

After developing a draft based on these discussions, evaluation staff led
CCJP leaders through the refinement and vetting of the CCJP logic model
during a daylong off-site work retreat. In this focused setting, the group
walked step-by-step through the logic model, critiquing and offering
suggestions for revision. The end result was an overall logic model of CCJP
resources and services with victim outcomes that CCJP leaders expected
would result from a victim’s involvement with the CCJP cluster.

The next task in creating a blueprint for unified service delivery was
the development of a logic model for each of the four main programs in this
cluster to clarify each program’s expertise and to define the roles the pro-
grams should play in a unified service delivery model. Separate discussions
were held with site supervisors from each of the programs (these programs
have five sites, one in each borough of New York City), walking through
the overall CCJP logic model and breaking down the aspects specific to
their program. Document review and discussions with site supervisors
revealed that programs performed similar activities, but that these activities
were conducted slightly differently in each program. For example, information

444 M. Larsen et al.

provided in a police program focused on police processes, while informa-
tion provided in a criminal court program focused on court processes. The
expected outcome of information provision (that the victim will be able to
strategize and make informed decisions about their situation) remained the
same across the CCJP, but the differences in program expertise were clear.
Evaluation staff incorporated these commonalities and differences into the
initial logic model drafts for each of the programs, totaling four logic models.

Evaluation staff presented these drafts back to CCJP leaders for vetting,
walking step-by-step through each program logic model and considering
the following questions: Were there gaps in services that needed to be
addressed in order to better fit the vision of providing a continuum of care
to victims through integrated expert programs? Which services currently
offered should change? For example, while all site supervisors indicated
performing some type of community outreach, CCJP leaders did not feel
that this activity was within the appropriate scope of activities for the court
programs. They agreed that community outreach seemed beyond the goals
of court programs, which aim to serve those already involved in the court
system and which receive no funding for outreach activities. CCJP
leaders felt that community outreach should be concentrated in the police
and community-based programs, which play an important role in ensuring
community members are aware of the services offered at Safe Horizon.

Another round of revisions resulted in four individual program logic
models that represented the vision for service delivery for the CCJP cluster.
To continue fostering a sense of ownership of these logic models, CCJP
leadership presented the models to site supervisors, gathering feedback
while remaining on hand to answer questions about the strategic decisions
made in standardizing services. By
the end of the logic-model development process, clear and measurable
expectations for programs were established, as was a common understanding
of staff roles and function across the CCJP.


To assess the extent to which the skills outlined in the new policy were
already occurring in day-to-day program practice, observation of service
delivery and its documentation across Safe Horizon’s point-of-entry
(gateway) programs was conducted over a two-month period. In the
absence of documentation that would clearly reflect current practice,
prudent use of naturalistic observation—where behavior is observed in its
natural environment and is recorded in a manner that is as unobtrusive as
possible (Angrosino, 2007)—can provide a representative sample of service
delivery.
Standardizing Practice at a Victim Services Organization 445

Based on designated performance indicators of the new risk and safety
policy, evaluation staff developed an observation tool (see Appendix for a
detailed explanation) to collect information that could: (a) describe current
practice, (b) identify differences in practice across programs, (c) examine
how practices apply to different types of victim interactions, and (d) inform
decision making about future staff training.

Over a three-week period, four observers, including evaluation staff and
interns, were trained by one of the authors to assess client interactions
against a common standard and to match the trainer’s observations across all
sections of the tool. Training necessarily included a common understanding
and definition of service delivery (e.g., referral, linkage, supportive
counseling, crisis intervention), as well as how to remain unobtrusive during
observations and how to keep appropriate boundaries with victims and staff. A
90% level of inter-rater agreement was established between each of the four
observers and the trainer.
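Percent agreement of this kind is a simple item-by-item comparison. The sketch below is illustrative only; the article does not describe the authors’ exact calculation, and the code labels are hypothetical.

```python
def percent_agreement(observer_codes, trainer_codes):
    """Share of items on which two raters recorded the same code."""
    if len(observer_codes) != len(trainer_codes):
        raise ValueError("Raters must code the same set of items")
    matches = sum(o == t for o, t in zip(observer_codes, trainer_codes))
    return matches / len(observer_codes)

# Hypothetical codes for 10 observation-tool items.
observer = ["referral", "linkage", "none", "crisis", "none",
            "referral", "counseling", "none", "linkage", "crisis"]
trainer  = ["referral", "linkage", "none", "crisis", "referral",
            "referral", "counseling", "none", "linkage", "crisis"]

print(percent_agreement(observer, trainer))  # 0.9, i.e., 90% agreement
```

In this sketch, agreement below the 90% threshold would signal a need for retraining before observation begins.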

All of the observations were scheduled in advance. Every effort was
made to ensure that the service delivery was representative of typical
sessions and workloads and did not underestimate the frequency or inten-
sity of service delivery. Observers refrained from inferring anything about
service delivery, gathering information only from directly observed staff
comments, actions, or responses to a victim. Observations were always
conducted by one observer at a time. Upon arrival at the site, the site super-
visor would introduce the observer to the staff and explain what he or she
would be doing. To gain the consent of the clients before observing a case
management interaction or counseling session, the case manager would
introduce the observer to the client and explain the process, emphasizing
that the observer was there to observe service provision only.

The data from the observations were entered into an SPSS database,
and frequencies were calculated. Twenty program sites were observed for
approximately 208 hours, totaling 213 victim interactions (162 telephone
and 51 face-to-face interactions).
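The mode frequencies reported here are simple tabulations. A minimal sketch follows; SPSS was the tool actually used, and the records below are reconstructed from the reported totals rather than the original data.

```python
from collections import Counter

# Hypothetical interaction records: one mode label per observed interaction,
# built to match the reported totals (162 telephone, 51 face-to-face).
interactions = ["telephone"] * 162 + ["face-to-face"] * 51

freq = Counter(interactions)
total = sum(freq.values())
for mode, n in freq.items():
    print(f"{mode}: {n} ({n / total:.1%})")
```

Any statistics package produces the same counts; the point is only that frequencies over categorical observation data were the unit of analysis.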

The analysis of these observations revealed that expected practices
were not occurring at the frequency anticipated. Victim safety concerns
were documented, and observers noted that the need for assistance was
complex and ongoing (Vigeant, Tax, Larsen, & Botuck, 2008). Additionally,
even within the same program at different delivery sites, service provision
often had wide variability in both practice and documentation. As a result,
clients with identical presenting needs might be offered different services
depending on which program site they happened to walk into.


Evaluation activities, which included the development of logic models and
the assessment of current practice, identified gaps between the organization’s


vision for unified service delivery, as articulated in its strategic goals, and
current practice. This pointed to additional resource needs that were not
anticipated in the original planning. The findings also revealed practice real-
ities that included considerable variability in service delivery and documen-
tation, lower than expected frequency of specific activities, and complex
client need. These findings pointed to a need for changes in existing imple-
mentation plans.

The development of logic models served to bring staff with a range of
education, experience, and expertise closer to consensus around program
practices, services offered, and victim outcomes. Walking through the
models with CCJP leaders and site supervisors necessarily focused the dis-
cussion around variation in program practice. It was not unusual to find a
range of perspectives on program functioning, a lack of shared
information across sites, and a variety of documentation systems. This process
brought to the forefront the current resources, expertise, and abilities of
individual programs. The substantial variation in practice revealed the need
for in-depth clarification of roles within each program before implementing
standardized practice, a step that had not been adequately accounted for in
existing plans.

The observation of services confirmed variations in practice alluded to
during logic model discussions. While program observation required
significant time and resources, it yielded information that clearly
indicated the need for ongoing training and support in the critical
skill areas required to implement the risk manage-
ment policy according to the intended standard (e.g., addressing barriers,
allowing clients to determine risk factors, building risk management on
client’s current protective actions, tailoring plans to fit clients).

Providing this level of support to staff will require increased partner-
ship among different departments within the organization and will include:
the creation of appropriate training curricula and materials; the develop-
ment of staff trainers; the establishment of accountability monitoring mecha-
nisms (quality assurance indicators); and the creation of a closer exchange
between programs and information technology so that electronic documen-
tation systems reflect (or enhance) staff workflow.

Another implication of the variability in practice concerns the role of
supervisors. The findings reinforce the importance of providing
ongoing training and supports for supervisors so that they are able to effec-
tively carry out their roles. Wilkins (2003) has described the deleterious
effects on line staff when frontline supervisors are unable to provide guid-
ance and assistance in problem solving. This is particularly true in organizations
where there are limited resources for staff development and increased
needs for services. Moreover, the competence and value of frontline super-
visors are vital to achieving the goals of service organizations (Burchard,
Gordon, & Pine, 1990; Haas & Robinson, 1998) and in protecting direct


service staff from burnout and vicarious trauma (Baird & Jenkins, 2003;
Pearlman & Saakvitne, 1995).

In conclusion, evaluation activities helped determine the resources and
first steps in implementing strategic organizational goals around unifying
service delivery and implementing standardized practice. Logic model
development fostered discussion of program expertise and promoted a
shared understanding of standardized practices. These activities, which
served to focus service delivery, were successful in furthering the imple-
mentation of our organization’s strategic plan. Naturalistic observations
served to create a picture of service delivery and helped identify the
resources necessary for standardizing practices. Although internal evaluators
could present a potential bias, we believe that using internal evaluators
familiar with the programs and staff actually facilitated the evaluation activi-
ties. The foundation of mutual trust and understanding between evaluation
and program is key to overcoming barriers and establishing meaningful


1. CCJP consists of four distinct programs, each operating a site in each of the five boroughs of
New York City. Each program within CCJP partners with, is regulated by, and may share space with
criminal justice and community systems and structures that vary considerably from borough to borough.


Angrosino, M. V. (2007). Naturalistic observation. Walnut Creek, CA: Left Coast Press.

Baird, S., & Jenkins, S. R. (2003). Vicarious traumatization, secondary traumatic
stress and burnout in sexual assault and domestic violence agency staff.
Violence and Victims, 18(1), 71–86.

Botcheva, L., White, C. R., & Huffman, L. C. (2002). Learning culture and outcomes
measurement practices in community agencies. American Journal of Evaluation,
23(4), 421–434.

Buckmaster, N. (1999). Associations between outcome measurement, accountability,
and learning for non-profit organisations. International Journal of Public Sector
Management, 12(2), 186–197.

Burchard, S. N., Gordon, L. R., & Pine, J. (1990). Manager competence, program
normalization and client satisfaction in group homes. Education and Training
in Mental Retardation, 25(3), 277–285.

Carman, J. G. (2007). Evaluation practice among community-based organizations:
Research into the reality. American Journal of Evaluation, 28(1), 60–75.

Chelimsky, E. (1994). Making evaluation units effective. In J. Wholey, H.
Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation
(pp. 489–509). San Francisco: Jossey-Bass.


Conrad, K. J., Randolph, F. L., Kirby, M. W. J., & Bebout, R. R. (1999). Creating
and using logic models: Four perspectives. Alcoholism Treatment Quarterly,
17(1), 17–31.

Haas, P. J., & Robinson, M. G. (1998). The view of nonprofit executives on
education nonprofit managers. Nonprofit Management and Leadership, 8(4),

Hoefer, R. (2000). Accountability in action? Program evaluation in nonprofit human
service agencies. Nonprofit Management and Leadership, 11(2), 167–177.

Kopczynski, M. E., & Pritchard, K. (2004). The use of evaluation by nonprofit
organizations. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Hand-
book of practical program evaluation (2nd ed., pp. 649–669). San Francisco:
Jossey-Bass.
McLaughlin, J. A., & Jordan, G. B. (2004). Using logic models. In J. S. Wholey, H. P.
Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation
(2nd ed., pp. 7–32). San Francisco: Jossey-Bass.

Menefee, D. (1997). Strategic administration of nonprofit human service organiza-
tions: A model for executive success in turbulent times. Administration in
Social Work, 21(2), 1–19.

Neuman, K. M. (2003). Developing a comprehensive outcomes management
program: A ten step process. Administration in Social Work, 27(1), 5–23.

Newcomer, K. E., Hatry, H. P., & Wholey, J. S. (2004). Meeting the need for
practical evaluation approaches: An introduction. In J. S. Wholey, H. P.
Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation
(2nd ed., pp. xxxiii–xliv). San Francisco: Jossey-Bass.

Pearlman, L. A., & Saakvitne, K. W. (1995). Treating therapists with vicarious
traumatization and secondary traumatic stress disorders. In C. R. Figley (Ed.),
Compassion fatigue: Coping with secondary traumatic stress disorder in those
who treat the traumatized (pp. 150–177). Bristol, PA: Brunner/Mazel.

Proehl, R. A. (2001). Why is change necessary? Organizational change in human
services (pp. 1–10). Thousand Oaks, CA: Sage Publications.

Rosen, A., & Proctor, E. K. (2003). Practice guidelines and the challenge of effective
practice. In A. Rosen & E. K. Proctor (Eds.), Developing practice guidelines for
social work intervention: Issues, methods, and research agenda (pp. 1–16).
New York: Columbia University Press.

Safe Horizon. (2007). Policy on client safety assessment and risk management.
New York: Author.

Sullivan, C., & Alexy, C. (2001). Evaluating the outcomes of domestic violence
service programs: Some practical considerations and strategies. Retrieved on
August 6, 2008 from VAWnet, http://new.vawnet.org/Assoc_Files_VAWnet/
AR_evaldv .

Tax, C., Vigeant, M., & Botuck, S. (2008). Integrating safety assessment with risk
management: A dynamic victim centered approach [White paper]. Retrieved on
July 28, 2008 from Safe Horizon http://www.safehorizon.org.

Vigeant, M., Tax, C., Larsen, M., & Botuck, S. (2008, February). Rethinking risk and
safety with victims of crime and abuse. Poster session presented at the
American Psychological Association’s summit on violence and abuse in
relationships: Connecting agendas and forging new directions, Bethesda, MD.


Whyte, W. F. (1989). Advancing scientific knowledge through participatory action
research. Sociological Forum, 4(3), 367–385.

Wilkins, E. (2003). Building a committed and effective workforce through strengthen-
ing skills of frontline managers. Retrieved on July 31, 2008 from Aon Consulting
Worldwide. http://www.ecustomerserviceworld.com


Observation Tool Assessment Areas

Greeting: Assessed whether staff greeted clients warmly and introduced
themselves as employees of Safe Horizon. (1 item)

Identifying Information: Captured what identifying information (e.g.,
name, date of birth, social security number, address, phone number) was
obtained and whether the information was confirmed in accordance with
current training. (12 items)

Interaction Measures: Captured specific information about the interac-
tion that might impact the service delivery, including the location of the
interaction; people present during the interaction; whether the interaction
was a scheduled appointment; whether it took place on the phone or was
face-to-face; the time the interaction began and ended; and the number of
staff-initiated interruptions. Another set of interaction measures included the
observer’s assessment as to the level of assistance the observer perceived
the client to be seeking, and whether the client was in distress. Notation
regarding whether the staff person behaved in a professional manner was
also included in this section. (10 items)

Victimization Assessment: Captured whether staff assessed client
victimization and, if so, to what extent. This included whether staff assessed
for victimization type, whether the client was a primary or secondary victim,
information about the offender, recent incident details, duration, scope,
system involvement, and prior victimizations. (2 items)

Safety Assessment: Examined whether staff performed a safety assess-
ment by capturing initial safety assessment, as well as assessments of
current safety concerns, client protective actions, client resources and
stressors, and client coping skills. (9 items)

General Skills: Explored whether staff utilized specific skills during
client interactions, such as crisis intervention skills, general assessment
skills, and engagement skills. (20 items)

Services: Examined which services staff provided to the client, and
whether services were provided via exploration, information, referral,
advocacy, and/or linkage. (46 items)


Logic Models

Karen A. Randolph

A logic model is a diagram of the relationship between a need that a
program is designed to address and the actions to be taken to address the
need and achieve program outcomes. It provides a concise, one-page pic-
ture of program operations from beginning to end. The diagram is made
up of a series of boxes that represent each of the program’s components:
inputs or resources, activities, outputs, and outcomes. The diagram shows how these
components are connected or linked to one another for the purpose of achieving
program goals. Figure 31.1 provides an example of the framework for a basic logic model.

The program connections illustrate the logic of how program operations will result in
client change (McLaughlin & Jordan, 1999). The connections show the “causal” relation-
ships between each of the program components and thus are referred to as a series of “if-
then” sequences of changes leading to the intended outcomes for the target client group
(Chinman, Imm, & Wandersman, 2004). The if-then statements represent a program’s
theory of change underlying an intervention. As such, logic models provide a framework
that guides the evaluation process by laying out important relationships that need to be
tested to demonstrate program results (Watson, 2000).
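The component chain described above can be expressed directly as a small data structure. The sketch below is an illustration, not part of the chapter; the item names echo the chapter’s media-campaign example.

```python
# A minimal representation of the four basic logic model components,
# ordered from inputs to outcomes. One item per component for simplicity.
logic_model = {
    "inputs": ["funding"],
    "activities": ["develop and initiate media campaign"],
    "outputs": ["number of stations adopting the campaign"],
    "outcomes": ["increased awareness of positive parenting"],
}

# Walk adjacent components to render the model as the if-then statements
# that make up a program's theory of change.
stages = list(logic_model)
statements = [
    f"IF {logic_model[earlier][0]} THEN {logic_model[later][0]}"
    for earlier, later in zip(stages, stages[1:])
]
for s in statements:
    print(s)
```

Each printed statement is one testable link in the theory of change, which is exactly what an evaluation sets out to examine.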

Logic models come from the field of program evaluation. The idea emerged in
response to the recognition among program evaluators of the need to systematize
the program evaluation process (McLaughlin & Jordan, 2004). Since then, logic models
have become increasingly popular among program managers for program planning and
for monitoring program performance. With a growing emphasis on accountability and out-
come measurement, logic models make explicit the entire change process, the assump-
tions that underlie this process, and the pathways to reaching outcomes. Researchers have
begun to use logic models for intervention research planning (e.g., Brown, Hawkins,
Arthur, Briney, & Abbott, 2007).

The following sections provide a description of the components of a basic logic model
and how these components are linked together, its relationship to a program’s theory of

Figure 31.1 Logic Model: [Inputs] → [Activities] → [Outputs] → [Outcomes]

AUTHOR’S NOTE: The author wishes to acknowledge Dr. Tony Tripodi for his thoughtful comments
on a draft of this chapter.



change, and its uses and benefits. The steps for creating a logic model as well as the chal-
lenges of the logic modeling process will be presented. The chapter concludes with an
example of how a logic model was used to enhance program outcomes for a family liter-
acy program.

Components of a Logic Model

Typically, a logic model has four components: inputs or resources, activities, outputs, and
outcomes. Outcomes can be further classified into short-term outcomes, intermediate
outcomes, and long-term outcomes based on the length of time it takes to reach these
outcomes (McLaughlin & Jordan, 2004). The components make up the connection
between the planned work and the intended results (W. K. Kellogg Foundation, 2004).
The planned work includes the resources (the inputs) needed to implement the program
as well as how the resources will be used (the activities). The intended results include the
outputs and outcomes that occur as a consequence of the planned work. Figure 31.2
expands on the model illustrated in Figure 31.1 by adding examples of each component.
This particular logic model, adapted from Frechtling (2007), provides an illustration of
the components of an intervention designed to prevent substance abuse and other prob-
lem behaviors among a population of youth. The intervention is targeted toward improv-
ing parenting skills, based on the assumption that positive parenting leads to prosocial
behaviors among youth (Bahr, Hoffman, & Yang, 2005). The following section provides
definitions and examples of each logic model component, using this illustration.

Resources, sometimes referred to as inputs, include the human, financial, organizational,
and community assets that are available to a program to achieve its objectives (W. K.
Kellogg Foundation, 2004). Resources are used to support and facilitate the program
activities. They are usually categorized in terms of funding resources or in-kind contribu-
tions (Frechtling, 2007).

Some resources, such as laws, regulations, and funding requirements, are external to
the agency (United Way of America, 1996). Other resources, such as staff and money, are
easier to quantify than others (e.g., community awareness of the program; Mertinko,
Novotney, Baker, & Lange, 2000). As Frechtling (2007) notes, it is important to clearly and
thoroughly identify the available resources during the logic modeling process because this
information defines the scope and parameters of the program. Also, this information is
critical for others who may be interested in replicating the program. The logic model in
Figure 31.2 includes funding as one of its resources.

Activities represent a program’s service methodology, showing how a program intends to
use the resources described previously to carry out its work. Activities are also referred
to as action steps (McLaughlin & Jordan, 2004). They are the highly specific tasks that
program staff engage in on a daily basis to provide services to clients (Mertinko
et al., 2000). They include all aspects of program implementation: the processes, tools,
events, technology, and program actions. The activities form the foundation for facil-
itating intended client changes or reaching outcomes (W. K. Kellogg Foundation, 2004).
Some examples are establishing community councils, providing professional develop-
ment training, or initiating a media campaign (Frechtling, 2007). Other examples are

Figure 31.2 Example of a Logic Model With Components, Two Types of Connections, and a Feedback Loop

Inputs: Funds
Activities: Develop and initiate media campaign; Develop and distribute fact sheets
Outputs: Number of stations adopting the campaign; Number of fact sheets distributed
Outcomes (Short Term): Increased awareness of positive parenting; Increased enrollment in parenting classes
Outcomes (Intermediate): Increased positive parenting
Outcomes (Long Term): Reduced youth substance abuse
Feedback loop: from the short-term outcome “Increased awareness of positive parenting” back to the activity “Develop and initiate media campaign”

providing shelter for homeless families, educating the public about signs of child abuse,
or providing adult mentors for youth (United Way of America, 1996). Two activities,
“Develop and initiate media campaign” and “Develop and distribute fact sheets,” are
included in the logic model in Figure 31.2. Activities lead to or produce the program out-
puts, described in the following section.

The planned work (resources and activities) brings about a program’s desired results,
including outputs and outcomes (W. K. Kellogg Foundation, 2004). Outputs, also referred
to as units of service, are the immediate results of program activities in the form of types,
levels, and targets of services to be delivered by the program (McLaughlin & Jordan,
1999). They are tangible products, events, or services. They provide the documentation
that activities have been implemented and, as such, indicate if a program was delivered to
the intended audience at the intended dose (W. K. Kellogg Foundation, 2004). Outputs
are typically described in terms of the size and/or scope of the services and products pro-
duced by the program and thus are expressed numerically (Frechtling, 2007). Examples of
program outputs include the number of classes taught, meetings held, or materials pro-
duced and distributed; program participation rates and demography; or hours of each
type of service provided (W. K. Kellogg Foundation, 2004). Other examples are the
number of meals provided, classes taught, brochures distributed, or participants served
(Frechtling, 2007). While outputs have little inherent value in themselves, they provide
the link between a program’s activities and a program’s outcomes (United Way of
America, 1996). The logic model in Figure 31.2 includes the number of stations adopting
the media campaign and the number of fact sheets distributed as two outputs for the pre-
vention program.


Outcomes are the specific changes experienced by the program’s clients or target group as
a consequence of participating in the program. Outcomes occur as a result of the program
activities and outputs. These changes may be in behaviors, attitudes, skill level, status, or
level of functioning (W. K. Kellogg Foundation, 2004). Examples include increased knowl-
edge of nutritional needs, improved reading skills, more effective responses to conflict,
and finding employment (United Way of America, 1996). Outcomes are indicators of a
program’s level of success.

McLaughlin and Jordan (2004) make the point that some programs have multiple,
sequential outcome structures in the form of short-term outcomes, intermediate out-
comes, and long-term outcomes. In these cases, each type of outcome is linked tempo-
rally. Short-term outcomes are client changes or benefits that are most immediately
associated with the program’s outputs. They are usually realized by clients within 1 to
3 years of program completion. Short-term outcomes are linked to accomplishing inter-
mediate outcomes. Intermediate outcomes are generally attainable in 4 to 6 years. Long-
term outcomes are also referred to as program impacts or program goals. They occur as a
result of the intermediate outcomes, usually within 7 to 10 years. In this format, long-
term outcomes or goals are directed at macro-level change and target organizations, com-
munities, or systems (W. K. Kellogg Foundation, 2004).

As an example, a sequential outcome structure with short-term, intermediate, and
long-term outcomes for the prevention program is displayed in Figure 31.2. As a result of
hearing the public service announcements about positive parenting (the activity), parents
enroll in parenting programs to learn new parenting skills (the short-term outcome).
Then they apply these newly learned skills with their children (the intermediate out-
come), which leads to a reduction in substance abuse among youth (the long-term impact
or goal the parenting program was designed to achieve).

Outcomes are often confused with outputs in logic models because their correct clas-
sification depends on the context within which they are being included. A good example
of this potential confusion, provided in the United Way of America manual (1996, p. 19),
is as follows. The number of clients served is an output when it is meant to describe the
volume of work accomplished. In this case, it does not relate directly to client changes or
benefits. However, the number of clients served is considered to be an outcome when the
program’s intention is to encourage clients to seek services, such as alcohol treatment.
What is important to remember is that outcomes describe intended client changes or
benefits as a result of participating in the program, while outputs document products or
services produced as a result of activities.

Links or Connections Between Components

A critical part of a logic model is the connections or links between the components. The
connections illustrate the relationships between the components and the process by
which change is hypothesized to occur among program participants. This is referred to as
the program theory (Frechtling, 2007). It is the connections illustrating the program’s
theory of change that make the logic model complicated. Specifying the connections is
one of the more difficult aspects of developing a logic model because the process requires
predicting the process by which client change is expected to occur as a result of program
participation (Frechtling, 2007).


Frechtling (2007) describes two types of connections in a logic model: connections
that link items within each component and connections that illustrate the program’s
theory of change. The first type, items within a component, is connected by a straight line.
This line shows that the items make up a particular component. As an example, in Figure 31.2,
two activities, “Develop and initiate media campaign” and “Develop and distribute fact
sheets,” are linked together with a straight line because they represent the items within the
activities component. Similarly, two outputs, “Number of stations adopting the cam-
paign” and “Number of fact sheets distributed,” are connected as two items within the
outputs component.

The second type of connection shows how the components interact with or relate to
each other to reach expected outcomes (Frechtling, 2007). In essence, this is the program’s
theory of change. Thus, instead of straight lines, arrows are used to show the direction of
influence. Frechtling (2007) clarifies that “these directional connections are not just a
kind of glue anchoring the otherwise floating boxes. Rather they portray the changes that
are expected to occur after a previous activity has taken place, and as a result of it” (p. 33).
She points out that the primary purpose of the evaluation is to determine the nature of
the relationships between components (i.e., whether the predictions are correct). A logic
model that illustrates a fully developed theory of change includes links between every
item in each component. In other words, every item in every component must be con-
nected to at least one item in a subsequent component. This is illustrated in Figure 31.2,
which shows that each of the two items within the activities component is linked to an
item within the output component.
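This completeness rule (every item links onward to at least one item in a later component) can be checked mechanically. The sketch below is illustrative, uses item names paraphrased from the Figure 31.2 example, and for simplicity checks links only into the immediately following component.

```python
# Items grouped by component, ordered from inputs to outcomes.
components = [
    ("inputs", ["funds"]),
    ("activities", ["media campaign", "fact sheets"]),
    ("outputs", ["stations adopting campaign", "fact sheets distributed"]),
    ("outcomes", ["increased awareness", "increased enrollment"]),
]

# Directional connections (arrows) between items, as (source, target) pairs.
links = {
    ("funds", "media campaign"),
    ("funds", "fact sheets"),
    ("media campaign", "stations adopting campaign"),
    ("fact sheets", "fact sheets distributed"),
    ("stations adopting campaign", "increased awareness"),
    ("fact sheets distributed", "increased enrollment"),
}

def fully_connected(components, links):
    """True if every item outside the final component links onward."""
    for (_, items), later in zip(components, components[1:]):
        later_items = set(later[1])
        for item in items:
            if not any((item, target) in links for target in later_items):
                return False
    return True

print(fully_connected(components, links))  # True for this model
```

Dropping any one arrow leaves a “floating box,” and the check fails, mirroring the gap a reviewer would flag when vetting a draft logic model.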

Figure 31.2 provides an example of the predicted relationships between the compo-
nents. This is the program theory about how the target group is expected to change. The
input or resource, funding, is connected to the two activities, “Develop and initiate media
campaign” and “Develop and distribute fact sheets.” Simply put, this part of Figure 31.2
shows that funding will be used to support the development and initiation of PSA cam-
paigns and the distribution of fact sheets.

The sequencing of the connections between components also shows that these steps
occur over a period of time. While this may seem obvious and relatively inconsequential,
specifying an accurate sequence has time-based implications, particularly when short-
term, intermediate, and long-term outcomes are proposed as a part of the theory of
change (Frechtling, 2007). Recall that the short-term outcomes lead to achieving the
intermediate outcomes, and the intermediate outcomes lead to achieving long-term out-
comes. Thus, the belief or underlying assumption is that short-term outcomes mediate
(or come between) relationships between activities and intermediate outcomes, and
intermediate outcomes mediate relations between short-term and long-term outcomes.

Relatedly, logic models sometimes display feedback loops. Feedback loops show how the
information gained from implementing one item can be used to refine and improve other
items (Frechtling, 2007). For instance, in Figure 31.2, the feedback loop from the short-term
outcome, “Increased awareness of positive parenting,” back to the activity, “Develop
and initiate media campaign,” indicates that the findings for “Increased awareness of
positive parenting” are used to improve the PSA campaigns in the next program cycle.
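In the same spirit, a feedback loop can be recorded as an extra arrow running backward from an outcome to the activity its findings refine. The sketch below is purely illustrative, with hypothetical outcome and activity names; it simply lists which activities the feedback loops flag for revision once findings for an outcome are available.

```python
# Illustrative sketch: feedback loops as backward arrows from an outcome to
# the activity its findings refine in the next program cycle (hypothetical names).

FEEDBACK_LINKS = [
    ("increased awareness of positive parenting", "develop and initiate media campaign"),
]

def activities_to_refine(feedback_links, findings):
    """Given evaluation findings keyed by outcome, list the activities that
    the feedback loops say should be revised in the next cycle."""
    return [activity for outcome, activity in feedback_links if outcome in findings]

findings = {"increased awareness of positive parenting": "awareness rose modestly"}
print(activities_to_refine(FEEDBACK_LINKS, findings))
# -> ['develop and initiate media campaign']
```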

Contextual Factors

Logic models describe programs that exist and are affected by contextual factors in the
larger environment. Contextual factors are those important features of the environment


in which the project or intervention takes place. They include the social, cultural, and
political aspects of the environment (Frechtling, 2007). They are typically not under the
program’s control yet are likely to influence the program either positively or negatively
(McLaughlin & Jordan, 2004). Thus, it is critical to identify relevant contextual factors
and to consider their potential impact on the program. McLaughlin and Jordan (1999)
point out that understanding and articulating contextual factors contributes to an
understanding of the foundation upon which performance expectations are established.
Moreover, this knowledge helps to establish the parameters for explaining program
results and developing program improvement strategies that are likely to be more
meaningful and thus more successful because the information is more complete. Finally,
contextual factors clarify situations under which the program results might be expected to
generalize and the issues that might affect replication (Frechtling, 2007).

Harrell, Burt, Hatry, Rossman, and Roth (1996) identify two types of contextual
factors, antecedent and mediating, as outside factors that could influence the program’s
design, implementation, and results. Antecedent factors are those that exist prior to
program implementation, such as characteristics of the client target population or
community characteristics such as geographical and economic conditions. Mediating factors
are the environmental influences that emerge as the program unfolds, such as new laws
and policies, a change in economic conditions, or the startup of other new programs
providing similar services (McLaughlin & Jordan, 2004).

Logic Models and a Program’s Theory of Change

Logic models provide an illustration of the components of a program’s theory and how
those components are linked together. Program theory is defined as “a plausible and sensible
model of how a program is supposed to work” (Bickman, 1987, p. 5). Program
theory incorporates “program resources, program activities, and intended program outcomes,
and specifies a chain of causal assumptions linking resources, activities, intermediate
outcomes, and ultimate goals” (Wholey, 1987, p. 78). Program theory explicates the
assumptions about how the program components link together from program start to
goal attainment to realize the program’s intended outcomes (Frechtling, 2007). Thus, it is
often referred to as a program’s theory of change. Frechtling (2007) suggests that both
previous research and knowledge gained from practice experience are useful in developing
a theory of change.

Relationship to Logic Models
A logic model provides an illustration of a program’s theory of change. It is a useful tool
for describing program theory because it shows the connections or if-then relationships
between program components. In other words, moving from left to right from one
component to the next, logic models provide a diagram of the rationale or reasoning
underlying the theory of change. If-then statements connect the program’s components to form
the theory of change (W. K. Kellogg Foundation, 2004). For example, certain resources or
inputs are needed to carry out a program’s activities. The first if-then statement links
resources to activities and is stated, “If you have access to these resources, then you can use
them to accomplish your planned activities” (W. K. Kellogg Foundation, 2004, p. 3). Each


component in a logic model is linked to the other components using if-then statements to
show a program’s chain of reasoning about how client change is predicted to occur. The
idea is that “if the right resources are transformed into the right activities for the right
people, then these will lead to the results the program was designed to achieve”
(McLaughlin & Jordan, 2004, p. 11). It is important to define the components of an inter-
vention and make the connections between them explicit (Frechtling, 2007).
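This chain of if-then reasoning can be generated mechanically from an ordered list of component clauses. In the sketch below, the clauses loosely paraphrase the Kellogg-style statements and are illustrative only, not the chapter's own wording.

```python
# Illustrative sketch: building a program's if-then chain of reasoning from
# its ordered logic model components (clauses are hypothetical paraphrases).

clauses = [
    "you have access to these resources",
    "you can use them to accomplish your planned activities",
    "you will deliver the intended outputs",
    "participants will achieve the short-term outcomes",
    "participants will achieve the intermediate outcomes",
    "participants will achieve the long-term outcomes",
]

def if_then_chain(clauses):
    """Pair each clause with the next to form the chain of if-then statements."""
    return [f"If {a}, then {b}." for a, b in zip(clauses, clauses[1:])]

for statement in if_then_chain(clauses):
    print(statement)
```

Pairing each clause with its successor makes the left-to-right logic explicit: six components yield five if-then links, one for each arrow between adjacent components.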

Program Theory and Evaluation Planning
Chen and Rossi (1983) were among the first to suggest a program theory-driven
approach to evaluation. A program’s theory of change has significant utility in developing
and implementing a program evaluation because the theory provides a framework
for determining the evaluation questions (Rossi, Lipsey, & Freeman, 2004). As such, a
logic model that illustrates a program’s theory of change provides a map to inform the
development of relevant evaluation questions at each phase of the evaluation. Rossi
et al. (2004) explain how a program theory-based logic model enhances the development
of evaluation questions. First, the process of articulating the logic of the
program’s change process through the development of the logic model prompts discussion
of relevant and meaningful evaluation questions. Second, these questions then lead
to articulating expectations for program performance and inform the identification of
criteria to measure that performance. Third, obtaining input from key stakeholders
about the theory of change as it is displayed in the logic model increases the likelihood
that the set of questions will be comprehensive and that critical issues will not be
overlooked. To clarify, most agree that this is a team effort that should include the program
development and program evaluation staff at a minimum, as well as other stakeholders
both internal and external to the program as they are available (Dwyer & Makin, 1997;
Frechtling, 2007; McLaughlin & Jordan, 2004). The diversity of perspectives and skill sets
among the team members (e.g., program developers vs. program evaluators) enhances
the depth of understanding of how the program will work, as diagrammed by the logic
model (Frechtling, 2007). As Dwyer and Makin (1997) state, the team approach to
developing a theory-based logic model promotes “greater stakeholder involvement, the
opportunity for open negotiation of program objectives, greater commitment to the
final conceptualization of the program, a shared vision, and increased likelihood to
accept and utilize the evaluation results” (p. 423).

Uses of Logic Models

Logic models have many uses. They help to integrate the entire program’s planning and
implementation process from beginning to end, including the evaluation process (Dwyer
& Makin, 1997). They can be used at all of a program’s stages to enhance its success
(Frechtling, 2007; W. K. Kellogg Foundation, 2004). For instance, at the program design
and planning stage, going through the process of developing logic models helps to clarify
the purpose of the program, the development of program strategies, the resources that are
necessary to attaining outcomes, and the identification of possible barriers to
the program’s success. Also, identifying program components such as activities and
outcomes prior to program implementation provides an opportunity to ensure that
program outcomes inform program activities, rather than the other way around (Dwyer
& Makin, 1997).


During the program implementation phase, a logic model provides the basis for the
development of a management plan to guide program monitoring activities and to
improve program processes as issues arise. In other words, it helps in identifying and
highlighting the key program processes to be tracked to ensure a program’s effectiveness
(United Way of America, 1996).

Most important, a logic model facilitates evaluation planning by providing the evaluation
framework for shaping the evaluation across all stages of a project. Intended outcomes
and the process for measuring these outcomes are displayed in a logic model
(Watson, 2000), as well as key points at which evaluation activities should take place
across the life of the program (McLaughlin & Jordan, 2004). Logic models support both
formative and summative evaluations (Frechtling, 2007). They can be used in conducting
summative evaluations to determine what has been accomplished and, importantly, the
process by which these accomplishments have been achieved (Frechtling, 2007). Logic
models can also support formative evaluations by organizing evaluation activities, including
the measurement of key variables or performance indicators (McLaughlin & Jordan,
2004). From this information, evaluation questions, relevant indicators, and data collection
strategies can be developed. The following section expands on using the logic model
to develop evaluation questions.

The logic model provides a framework for developing evaluation questions about
program context, program efforts, and program effectiveness (Frechtling, 2007;
Mertinko et al., 2000). Together, these three sets of questions help to explicate the
program’s theory of change by describing the assumptions about the relationships
between a program’s operations and its predicted outcomes (Rossi et al., 2004).
Context questions explore program capacity and relationships external to the program
and help to identify and understand the impact of confounding factors or external
influences. Program effort and effectiveness questions correspond to particular
components in the logic model and thus explore program processes toward achieving
program outcomes. Questions about effort address the planned work of the program
and come from the input and activities sections of the evaluation model. They address
program implementation issues such as the services that were provided and to whom.
These questions focus on what happened and why. Effectiveness or outcome questions
address program results as described in the output and outcomes sections of the logic
model. From the questions, indicators and data collection strategies can then be developed.
Guidelines for using logic models to develop evaluation questions, indicators,
and data collection strategies are provided in the Logic Model Development Guide
(W. K. Kellogg Foundation, 2004).

In addition to supporting program efforts, a logic model is a useful communication
tool (McLaughlin & Jordan, 2004). For instance, developing a logic model provides the
opportunity for key stakeholders to discuss and reach a common understanding, including
underlying assumptions, about how the program operates and the resources needed
to achieve program processes and outcomes. In fact, some suggest that the logic model
development process is actually a form of strategic planning because it requires participants
to articulate a program’s vision, the rationale for the program, and the program
processes and procedures (Watson, 2000). This also promotes stakeholder involvement in
program planning and consensus building on the program’s design and operations.
Moreover, a logic model can be used to explain program procedures and share a
comprehensive yet concise picture of the program to community partners, funders, and others
outside of the agency (McLaughlin & Jordan, 2004).


Steps for Creating Logic Models

McLaughlin and Jordan (2004) describe a five-stage process for developing logic models.
The first stage is to gather extensive baseline information from multiple sources about the
nature of the problem or need and about alternative solutions. The W. K. Kellogg
Foundation (2004) also suggests collecting information about community needs and
assets. This information can then be used to both define the problem (the second stage of
developing a logic model) and identify the program elements in the form of logic model
components (the third stage of logic model development). Possible information sources
include existing program documentation, interviews with key stakeholders internal and
external to the program, strategic plans, annual performance plans, previous program
evaluations, and relevant legislation and regulations. It is also important to review the
literature about factors related to the problem and to determine the strategies others have
used in attempting to address it. This type of information provides supportive evidence
that informs the approach to addressing the problem.

The information collected in the first stage is then used to define the problem, the
contextual factors that relate to the problem, and thus the need for the program. The
program should be conceptualized based on what is uncovered about the nature and
extent of the problem, as well as the factors that are correlated with or cause the problem.
It is also important at this stage to develop a clear idea of the impact of the problem
across micro, mezzo, and macro domains. The focus of the program is then to
address the “causal” factors to solve the problem. In addition, McLaughlin and Jordan
(2004, p. 17) recommend identifying the environmental factors that are likely to affect
the program, as well as how these conditions might affect program outcomes.
Understanding the relationship between the program and relevant environmental factors
contributes to framing its parameters.

During the third stage, the elements or components of the logic model are identified,
based on the findings that emerged in the second stage. McLaughlin and Jordan (2004)
recommend starting out by categorizing each piece of information as a resource or input,
activity, output, short-term outcome, intermediate outcome, long-term outcome, or
contextual factor. While some suggest that the order in which the components are identified
is inconsequential to developing an effective logic model, most recommend beginning
this process by identifying long-term outcomes and working backward (United Way of
America, 1996; W. K. Kellogg Foundation, 2004).

The logic model is drawn in the fourth stage. Figure 31.2 provides an example of a
typical logic model. This diagram includes columns of boxes representing the items for each
component (i.e., inputs, activities, outputs, and short-term, intermediate, and long-term
outcomes). Text is provided in each box to describe the item. The connections between
the items within a component are shown with straight lines. The links or connections
between components are shown with one-way directional arrows. Program components
may or may not have one-to-one relationships with one another. In fact, it is likely that
components in one group (e.g., inputs) will have multiple connections to components in
another group (e.g., activities). For example, in Figure 31.2, we show that the funding
resource leads to two activities, “Develop and initiate media campaign” and “Develop and
distribute fact sheets.” Finally, because activities can be described at many levels of detail,
McLaughlin and Jordan (2004) suggest simplifying the model by grouping activities that
lead to the same outcome. They also recommend including no more than five to seven
activity groupings in one logic model.
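The simplification step can be sketched as grouping activities by the outcome they lead to, with a check against the recommended ceiling. The activity and outcome names below are hypothetical and only loosely echo the media-campaign example.

```python
# Illustrative sketch: grouping activities that lead to the same outcome, then
# checking the recommended ceiling on activity groupings in one logic model.
# Activity and outcome names are hypothetical.

from collections import defaultdict

ACTIVITY_OUTCOMES = {
    "develop media campaign": "increased awareness",
    "initiate media campaign": "increased awareness",
    "develop fact sheets": "increased awareness",
    "distribute fact sheets": "increased awareness",
    "recruit parent mentors": "improved parenting practices",
    "run parenting workshops": "improved parenting practices",
}

def group_activities(activity_outcomes, max_groups=7):
    """Group activities by shared outcome; flag models with too many groupings."""
    groups = defaultdict(list)
    for activity, outcome in activity_outcomes.items():
        groups[outcome].append(activity)
    if len(groups) > max_groups:
        raise ValueError(f"{len(groups)} activity groupings exceed the "
                         f"recommended maximum of {max_groups}")
    return dict(groups)

for outcome, activities in group_activities(ACTIVITY_OUTCOMES).items():
    print(f"{outcome}: {activities}")
```

Here six separate activities collapse into two groupings, each keyed by the outcome it serves, which keeps the drawn model within the five-to-seven range the chapter recommends.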


Stage 5 focuses on verifying the logic model by getting input from all key stakeholders.
McLaughlin and Jordan (2004) recommend applying the if-then statements presented by
United Way of America (1996) in developing hypotheses to check the logic model in the
following manner:

given observations of key contextual factors, if resources, then program activities; if
program activities, then outputs for targeted customer groups; if outputs change
behavior, first short term, then intermediate outcomes occur. If intermediate out-
comes occur, then longer-term outcomes lead to the problem being solved. (p. 24)

They also recommend answering the following questions as a part of the verification
process (pp. 24-25):

1. Is the level of detail sufficient to create understanding of the elements and their
interrelationships?

2. Is the program logic complete? That is, are all the key elements accounted for?

3. Is the program logic theoretically sound? Do all the elements fit together logically?
Are there other plausible pathways to achieving the program outcomes?

4. Have all the relevant external contextual factors been identified and their potential
influences described?

Challenges in Developing Logic Models

Frechtling (2007) describes three sets of challenges in developing and using logic models,
including (a) accurately portraying the basic features of the logic model, (b) determining
the appropriate level of detail in the model, and (c) having realistic expectations about
what logic models can and cannot contribute to program processes. These challenges are
reviewed in more detail in the following section.

Portraying the Logic Model’s Basic Features Accurately
The basic features of a logic model must be clearly understood in order for the logic
model to be useful. In particular, logic model developers often encounter difficulty in four
areas: confusing terms, substituting specific measures for more general outcomes, assuming
unidirectionality, and failing to specify a timeframe for program processes (Frechtling,
2007; McLaughlin & Jordan, 2004).

One issue in developing the logic model is accurately differentiating between an activity
or output and an outcome. Frequently, activities and outputs are confused with outcomes
(Frechtling, 2007). They can be distinguished by remembering that activities are steps or
actions taken in pursuit of producing the output and thus achieving the outcome. Outputs
are products that come as a result of completing activities. They are typically expressed
numerically (e.g., the number of training sessions held). Outputs provide the documentation
that activities have occurred. They also link activities to outcomes. Outcomes are
statements about participant change as a result of experiencing the intervention.
Outcomes describe how participants will be different after they finish the program.

Another issue in portraying the basic features of logic models accurately is not confus-
ing outcomes with the instruments used to measure whether the outcomes were achieved.

CHAPTER 31 • LOGIC MODELS 557

For example, the outcome may be decreased depression, as measured by an instrument
assessing a participant’s level of depression (Center for Epidemiological Studies-Depression
Scale; Radloff, 1977). Some may confuse the outcome (i.e., decreased depression)
with the instrument (i.e., Center for Epidemiological Studies-Depression Scale) that
was used to determine whether the outcome was met. To minimize the potential for this
confusion, Frechtling (2007) recommends developing the outcome first and then identifying
the appropriate instrument for determining that the outcome has been reached.

A third issue in logic model development is avoiding the assumption that the logic
model and, by implication, the theory of change that the logic model portrays move in a
unidirectional progression from left to right (Frechtling, 2007; McLaughlin & Jordan,
2004). While the visual display may compel users to think about logic models in this way,
logic models and the programs they represent are much more dynamic, with feedback
loops and interactions among components. The feedback loop is illustrated in Figure 31.2,
showing that the experiences and information generated from reaching short-term
outcomes are used to refine and, it is hoped, improve the activities in the next program cycle
that are expected to lead to these outcomes. Also, assuming uniform directionality can
reinforce the belief that the inputs drive the project, rather than the attainment of the
outcomes. This underscores the importance of starting with the development of outcomes
when putting together a logic model.

The final issue is including a timeframe for carrying out the processes depicted in the
logic model. The lack of a timeframe results in an incomplete theory of change as well as
problematic expectations about when outcomes will be reached (Frechtling, 2007).
Whether outcomes are expected too soon or not soon enough, key stakeholders may
assume that the theory of change was not accurate. Developing accurate predictions of
when outcomes will be reached is often difficult, especially with new projects in which
very little is known about program processes and so forth. In this case, as more clarity
emerges about the amount of time it will take to complete activities, timeframes should
be revisited and modified to reflect the new information.

Determining the Appropriate Level of Detail
A second set of challenges is to determine how much detail to include in the logic model.
The underlying dilemma is the level of complexity. Models that are too complex, with too
much detail, are time-consuming to develop and difficult to interpret. Thus, they are
likely to be cumbersome to use. Models that lack enough information may depict an
incomplete theory of change by leaving out important information. For instance, if activities
are combined into particular groups, it is possible that important links between specific
activities, outputs, and outcomes will not be represented. This increases the
possibility of making faulty assumptions about program operations and how these operations
lead to positive participant outcomes.

Realistic Expectations
The final set of challenges in using logic models is not expecting more from logic models
than what they are intended to provide. Frechtling (2007, p. 92) notes that some may
inaccurately view the logic model as a “cure-all” and believe that, just by its mere existence,
the logic model will ensure the success of the program and the evaluation. Of course, the
efficacy of a logic model depends on the quality of its design and components. A logic model
cannot overcome these types of problems. Frechtling identifies four common issues
here. First, sometimes new programs are such that applying the theory of change and a


representative logic model is premature. This is the case for programs in which a priori
expectations about relationships between activities and outcomes do not exist.

A second risk in this area is failing to consider alternative theories of change.
Alternative explanations and competing hypotheses should be explored. Focusing on only
one theory of change may result in not recognizing and including important factors that
fall outside of the theory’s domain. Ignoring these competing factors may result in the
failure of the logic model and the program.

Third and relatedly, it is critical to acknowledge the influence of contextual factors that
are likely to affect the program. Interventions always exist and function within a larger
environment. Contextual factors influence the success or failure of these interventions.
For instance, one contextual factor that might affect outcomes of the program diagrammed
in Figure 31.2 is the diversity of the target group. As Frechtling (2007) observes, this diversity
may include language differences among subgroups, which need to be accounted for
in developing program materials.

Finally, logic models cannot fully compensate for the rigor of experimental design
when testing the impact of interventions on outcomes (Frechtling, 2007). The logic
model explicates the critical components of a program and the processes that lead to
desired outcomes (the program theory of change). The implementation of the model
provides a test of the accuracy of the theory. However, validation of the logic model is not
as rigorous a proof as what is established through study designs employing experimental
or quasi-experimental methodologies. Causality cannot be determined through logic
models. When possible, an evaluation can be strengthened by combining the advantages
of logic modeling with experimental design.

Logic Modeling in Practice: Building
Blocks Family Literacy Program

The following provides an example of logic modeling in practice. The example describes the
use of a program logic model in developing, implementing, and evaluating the Building
Blocks family literacy program and how client exit data were then used to revise the model in
a way that more explicitly illustrated the program’s pathways to achieving intended outcomes
(i.e., feedback loop; Unrau, 2001, p. 355). The original program outcomes were to increase
(a) children’s literacy skills and (b) parents’ abilities to assist their children in developing
literacy skills. The sample included 89 families who participated in the 4-week program during
its initial year of operation. The following describes the process by which the logic model was
developed and how the client outcome data were used to fine-tune the logic model.

The family literacy program’s logic model was created at a one-day workshop facilitated
by the evaluator. Twenty key stakeholders representing various constituencies,
including program staff (i.e., steering committee members, administration, and literacy
workers), representatives from other programs (i.e., public school teachers, child welfare
workers, and workers and clients from other literacy programs), and other interested citizens,
participated in the workshop (Unrau, 2001, p. 354). A consensus decision-making process
was used to reach an agreement on all aspects of the process, including the program purpose,
the program objectives, and the program activities.

During the workshop, stakeholders created five products that defined the program
parameters and informed the focus of the evaluation. These products included an organizational
chart, the beliefs and assumptions of stakeholders about client service delivery,
the questions for the evaluation, the program’s goals and objectives, and the program


activities. The program goals, objectives, and activities were then used to develop the
original logic model.

One of the evaluation methods used to assess client outcomes was to conduct semi-structured
phone interviews with the parents after families completed the program.
Random selection procedures were used to identify a subset (n = 35, or 40%) from the
list of all parents to participate in the interviews. Random selection procedures were used
to ensure that the experiences of the interviewees represented those of all clients served
during the evaluation time period. Relative to the two program outcomes, respondents
were asked to provide examples of any observed changes in both their children’s literacy
skills (Outcome 1) and their ability to assist their children in developing literacy skills
(Outcome 2; Unrau, 2001, p. 357). The constant comparison method was used to analyze
the data (Patton, 2002). In this method, meaningful units of text are assigned to similar
categories to identify common themes.
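The random selection step can be sketched with a standard sampling routine. The family identifiers and the seed below are invented for illustration; only the sample size (35 of 89, about 40%) comes from the example.

```python
# Illustrative sketch of the interviewee selection step: randomly drawing
# a subset (n = 35, about 40%) of the 89 participating families.
# Identifiers and the seed are invented for illustration.

import random

families = [f"family_{i:02d}" for i in range(1, 90)]  # the 89 families served

rng = random.Random(2001)                  # fixed seed so the draw is reproducible
interviewees = rng.sample(families, k=35)  # simple random sample, no replacement

print(len(interviewees))  # 35 distinct families
```

Sampling without replacement is what gives every family an equal chance of selection, which is the property the evaluators relied on to treat the interviewees' experiences as representative.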

What emerged from the parent interviews was more detailed information about how
the two intended outcomes were achieved. Parent experiences in the program suggested
four additional processes that link to reaching the two final outcomes. This information
was added to the original logic model to more fully develop the pathways to improving
children’s literacy skills through the family literacy program. These additional outcomes
were actually steps toward meeting the two originally intended outcomes and thus were
identified as intermediate outcomes and necessary steps toward achieving the originally
stated long-term outcomes. Figure 31.3 provides a diagram of the revised logic model. The
shaded boxes represent the components of the original logic model. The other components
were added as a result of the parent exit interview data.

[Figure 31.3 displays columns of boxes for inputs, activities, short-term outcomes, intermediate outcomes, and long-term outcomes; the boxes include “Improve child’s literacy skills” and “Increase parent’s own literacy skills.”]

Figure 31.3 Example of a Revised Program Logic Model for a Family Literacy Program

SOURCE: Unrau (2001). Copyright November 21, 2007 by Elsevier Limited. Reprinted with permission.

NOTE: The shaded boxes represent the logic model’s original components. The other boxes were added as a result of feedback from clients
after program completion.


While the parent interview data were useful in revising the program logic about client
change, it is important to interpret this process within the appropriate context. This part
of the evaluation does not provide evidence that the program caused client change (Rossi
et al., 2004). This can only be determined through the use of experimental methods with
random assignment. Nonetheless, these parent data contribute to developing a more fully
developed model for understanding how family literacy programs work to improve outcomes
for children. Experimental methods can then be used to test the revised model for
the purpose of establishing the causal pathways to the intended outcomes.


Summary

The purpose of this chapter was to introduce the reader to logic models and to the logic
modeling process. Logic models present an illustration of the components of a program
(inputs, activities, outputs, and outcomes) and how these components connect with one
another to facilitate participant change (program theory). They are tools to assist key
stakeholders in program planning, program implementation and monitoring, and especially
program evaluation. They can also be used as communication tools in explaining
program processes to key stakeholders external to the program. Creating a logic model is
a time-consuming process with a number of potential challenges. Nonetheless, a well-developed
and thoughtful logic model is likely to ensure a program’s success in reaching
its intended outcomes.


References

Bahr, S., Hoffman, J., & Yang, X. (2005). Parental and peer influence on the risks of adolescent drug
use. Journal of Primary Prevention, 26, 529-551.

Bickman, L. (1987). The function of program theory. In L. Bickman (Ed.), New directions in evaluation:
Vol. 33. Using program theory in evaluation (pp. 1-16). San Francisco: Jossey-Bass.

Brown, E. C., Hawkins, J. D., Arthur, M. W., Briney, J. S., & Abbott, R. D. (2007). Effects of
Communities That Care on prevention services systems: Findings from the Community Youth
Development study at 1.5 years. Prevention Science, 8, 180-191.

Chen, H.-T., & Rossi, P. H. (1983). Evaluating with sense: The theory-driven approach. Evaluation
Review, 7, 283-302.

Chinman, M., Imm, P., & Wandersman, A. (2004). Getting to outcomes 2004. Santa Monica, CA:
RAND Corporation.

Dwyer, J. J. M., & Makin, S. (1997). Using a program logic model that focuses on performance
measurement to develop a program. Canadian Journal of Public Health, 88, 421-425.

Frechtling, J. A. (2007). Logic modeling methods in program evaluation. San Francisco: Jossey-Bass.
Harrell, A., Burt, M., Hatry, H., Rossman, S., & Roth, J. (1996). Evaluation strategies for human
services programs: A guide for policy makers and providers. Washington, DC: The Urban Institute.

McLaughlin, J. A., & Jordan, G. B. (1999). Logic models: A tool for telling your program’s performance
story. Evaluation and Program Planning, 22, 65-72.

McLaughlin, J. A., & Jordan, G. B. (2004). Using logic models. In J. S. Wholey, H. P. Hatry, & K. E.
Newcomer (Eds.), Handbook of program evaluatiOn (pp. 7- 32). San Francisco: )ossey- Bass.

Mertinko, E., Novotney, L. C., Baker, T. K., & Lange, J. (2000). Evalual’ing your program: A beginner’s
self-evaluation workbook for mentoring programs. Potomac, MD: Information Technology

CHAPTER 31 • Logic Models 561


http://www.wkkf.org
Web site from the W. K. Kellogg Foundation containing useful templates and exercises for developing a logic model for a research project.

http://www.unitedway.org/Outcomes/Resources/MPO/index.cfm
Web site from the United Way's Outcome Measurement Resource Network, demonstrating the use of logic models in clarifying and communicating outcomes.

http://www.cdc.gov/eval/resources.htm#logic%20model
Web site from the Centers for Disease Control and Prevention's Evaluation Working Group, containing logic model resources.

1. Define the term logic model.

2. Describe the difference between program activities, program outputs, and program outcomes.

3. Discuss the purpose of including lines with arrows in logic models.

4. Discuss the relationship between a program’s theory of change and its logic model.

5. Describe the uses of logic models.
