Evaluation Plan With Theory, Measurement, and Metaevaluation
Program Evaluation Plan
The main purpose of assessment in education is to help students attain their full
potential in academic and co-curricular activities. This is achieved by assessing students’
educational needs, acquisition of skills, and learning progress (Best & Kahn, 2016). This
paper seeks to evaluate the student assessment program for Orange County Public Schools,
Florida. Evaluation of the program aims to produce innovative and positive solutions for
improving learning, teaching practices, and the teaching environment.
History and Purpose of the Program
Historically, student assessment programs are developed to improve students’ learning.
The student assessment program for Orange County was designed to measure the success and
performance of public school students from the county against Florida assessment standards.
It further assesses their performance against national and international benchmarks. Critical
areas of assessment include language arts, mathematics, and end-of-course assessments in
Geometry and Algebra. The Florida Standards Assessments are administered to students in
upper primary grades to measure their educational gains and progress. They are also designed
to assess students’ performance against national and international standards. This assessment
provides data for determining how Orange County public school students are performing in a
variety of subjects at different grades and ages. This helps the stakeholders to benchmark
student performance against students from other states and countries. The critical areas of
focus include mathematics and science, computer and information literacy, and language
literacy.
Program Justification
The student assessment program for Orange County is of great value in the wake of
declining performance in public schools. This evaluation will help educators and key
stakeholders to assess the effectiveness of teaching and learning practices in county public
schools. The school system will use the assessment information to carry out a thorough
review of learning methods and practices in county public schools. The evaluation is also
necessary for determining the availability of resources, a contributing factor to poor
performance in county public schools. This will inform policymakers on better ways of
improving performance in public schools across the county.
Advantages of Evaluating the Program Now
Evaluating the program now will help to fix its weaknesses early enough to improve its
effectiveness, thereby saving costs and reducing wasted resources. Second, the evaluation will
reveal the successes of the assessment program and communicate its impact to key
stakeholders. This is essential for boosting morale as well as attracting and securing necessary
support (Russ-Eft & Preskill, 2011). Third, evaluation of the program will help to determine
its viability and whether it should be continued or halted altogether.
Constraints of Conducting Program Evaluation
Program evaluators must be wary of political, time, budgetary, and data limitations.
The availability of data affects the timeline of the program. This can be addressed by
involving evaluators in early program planning to gather the data needed to determine the
baseline for critical indicators (Bamberger et al., 2004). Data availability and accessibility are
also improved through methods for continuous collection of accurate, high-quality data from
the onset of the program. Politics is another key barrier to quality evaluation of the student
assessment program. It comes in the form of county or community politics as well as
stakeholders’ and clients’ perspectives. For example, stakeholder groups and clients may
oppose negative evaluation feedback and demand the publication of favorable outcomes.
To address this concern, evaluators must first accept that it is not possible to have an
evaluation that is free of political influence.
Segment 2: Assignment 2
Different Methods of Evaluating School Programs
School projects are tailored to address the educational and social-emotional needs of
the students. Different methods may be applied in this undertaking depending on factors such
as the magnitude of the project (Rossi et al., 2004). For this reason, the evaluation process
needs to be planned so that every step is accounted for and the desired results are attained.
- Assessing the needs
Within the school community, different needs are prevalent among the students. To
undertake this assessment, a survey needs to be conducted to ascertain those needs. The
evaluators engaged in the process use the gathered information to determine whether the
project can address the prevailing needs of the school community.
- Assessment of the program design
Every project must be implemented through a set logic or model. Evaluating the design
entails looking at the different processes planned for implementing the project and whether
they can achieve the project’s set targets. Evaluation of the project logic also helps in
identifying challenges that may not have been foreseen at its inception (Rossi et al., 2004).
- Measurement of project outcomes
There is always a set target that needs to be achieved through each project. This is
achieved through detailed planning and the provision of resources that work towards the
success of the project. The evaluation approach works to ascertain whether there are
underlying changes that might affect the outcomes and whether these changes are positive or
negative (Shadish et al., 1991).
The Role of Evaluators in the Evaluation of School Programs
A successful evaluation process needs to be conducted by individuals with a full
understanding of the project as well as of the appropriate evaluation approaches. This includes
internal and/or external evaluators. Their main role is to evaluate the school program in order
to establish the program’s worth (Mertens & Wilson, 2012).
A Program Evaluation Plan for Education
- Setting explicit goals
The goals of the evaluation process must be clearly stipulated. This defines what the
evaluator seeks to identify in the course of the evaluation (Bamberger, 2000). The process may
entail developing questions for project developers and seeking an understanding of the
intended purpose of the project (Shadish et al., 1991).
- Measurement of the program impact
The impact of the educational program has to be measured. Among the outcomes of the
project, there are the desired results and the actual outcomes that the project achieves. The
evaluator considers the projected outcomes and the projected implementation process to
ascertain whether the two have the potential to produce similar results.
- Working in the field settings
The settings in which the project is to be implemented also need to be evaluated. This
may take into consideration the available physical structures and resources that are intended
for use in the process. It further takes into account the workforce required to implement the
program (Bamberger, 2000).
- Analysis of the data collected
After collecting the information required for the evaluation process, an analysis is
then conducted. In this process, the evaluating team compares the findings from the
evaluation with the plans set out by the educational institution or department (Mertens &
Wilson, 2012).
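As a minimal illustration of this comparison step (the indicator names and figures below are hypothetical placeholders, not data from the Orange County program), the evaluating team might tabulate each planned target against the corresponding finding and report the gap:

```python
# Illustrative sketch only: comparing measured program outcomes against the
# targets set out in the institution's plans. All indicator names and figures
# are hypothetical placeholders.

planned_targets = {
    "math_proficiency_rate": 0.75,     # share of students meeting the standard
    "reading_proficiency_rate": 0.80,
    "course_completion_rate": 0.90,
}

evaluation_findings = {
    "math_proficiency_rate": 0.68,
    "reading_proficiency_rate": 0.82,
    "course_completion_rate": 0.87,
}

# Report the gap between what was planned and what the evaluation found.
for indicator, target in planned_targets.items():
    observed = evaluation_findings.get(indicator)
    if observed is None:
        print(f"{indicator}: no finding collected")
        continue
    gap = observed - target
    status = "met" if gap >= 0 else "not met"
    print(f"{indicator}: target {target:.0%}, observed {observed:.0%}, "
          f"gap {gap:+.0%} ({status})")
```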
Segment 3: Assignment 3
Evaluation is the process of determining a program’s worth and significance by
systematically measuring the program’s goals, effectiveness, efficiency, and results in order
to implement changes that ensure its success (Berriet-Solliec et al., 2014). There are three
main types of evaluation: front-end evaluation, process (formative) evaluation, and
summative evaluation. Front-end evaluation is the process of obtaining and synthesizing
information on participants’ needs, knowledge, interests, and motivations. The aim is to
facilitate the development and modification of the goals and expected results of the program
or project. Process evaluation seeks to establish how an individual or the program is
progressing towards the desired goal(s) (Rondinelli, 1976). It also ensures that there is
feedback on the progress of the program. Summative evaluation determines whether the
program has achieved its intended goals and outcomes and whether the personnel fulfilled
their roles in meeting the set goals (Rondinelli, 1976).
Goals of Evaluation
- To establish and assess the effects of programs
- To improve the desired goal(s) or outcome of various programs
- To establish and measure the effectiveness and efficiency of an organization’s
programs or systems (Hopkins et al., 2016)
Rationale for the Design and Sampling Technique, Assessment Methods, and Timeline
The design is simple, easy to use, adaptable, clear, and cost-efficient. The sampling
technique is purposive sampling, gathering data from one school per zone. The assessment
methods seek to establish whether the program achieves the desired outcomes of
effectiveness, efficiency, and productivity. The timeline chosen is sufficient for conducting
the program evaluation and capturing the results.
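As a minimal sketch of the purposive sampling described above, assuming one school is deliberately selected from each zone (the zone names, school names, and selection criterion are hypothetical placeholders):

```python
# Illustrative sketch of the purposive sampling step: one school is deliberately
# selected from each zone rather than drawn at random. Zone names, school names,
# and the selection criterion are hypothetical.

schools_by_zone = {
    "East Zone": ["School A", "School B", "School C"],
    "West Zone": ["School D", "School E"],
    "North Zone": ["School F", "School G", "School H"],
}

def select_school(candidates):
    """Purposive selection: apply an explicit, documented criterion instead of
    random sampling (here, simply the first listed school, as a stand-in for a
    criterion such as largest enrolment)."""
    return candidates[0]

sample = {zone: select_school(schools) for zone, schools in schools_by_zone.items()}
for zone, school in sample.items():
    print(f"{zone}: data will be gathered from {school}")
```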
Cultural and Political Factors That Affect Evaluation
According to Nguyen et al. (2015), cultural influences can significantly determine the
success or failure of a program or of project management. Culture can affect various aspects
of the program, such as goal alignment, commitment, and trust levels, as well as worker
orientation. Cultural influences include the diversity of personnel backgrounds as well as
differences in behavior and in groups’ expectations of the program. Cultural diversity can be
a source of human conflict resulting from communication challenges. These cultural
problems can significantly affect the success or failure of the project or program.
Analysis and Rationale of Program Costs
Teachers’ output and performance are crucial to the success of a school. They play a
significant role in implementing teaching and other school programs to ensure that objectives
are met. Therefore, an evaluation of teachers’ performance is very significant, and a budget of
US $200,000 is sufficient to meet the costs of facilitating the evaluation team. With a budget
that meets all the costs, the evaluation of the program can be conducted successfully. The
feedback on teacher performance is essential to the improvement of school program goals.
The government and parents are very interested in students’ overall performance. The
exercise of monitoring and evaluating students by an external party is essential in assessing
and establishing the various inputs, factors, and processes that determine students’
performance. The budget allocation for this task is US $300,000, which covers essential
resources and travel expenses for the evaluation team and is sufficient to achieve the goals of
the evaluation. The teacher turnover ratio is significant and indicative of specific challenges
facing teachers. Evaluation of teacher turnover requires a budgetary allocation of US
$100,000. This enables the evaluation team to focus over time on establishing the factors
driving turnover.
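A brief tally of these allocations (the dollar figures are the estimates given above; the line-item labels are paraphrased for illustration) shows a total evaluation budget of US $600,000:

```python
# Rolling up the three evaluation budget lines described above. The dollar
# figures come from the estimates in this section; labels are paraphrased.

budget_lines = {
    "Teacher performance evaluation": 200_000,
    "External monitoring of student performance": 300_000,
    "Teacher turnover evaluation": 100_000,
}

total = sum(budget_lines.values())
for item, cost in budget_lines.items():
    print(f"{item}: US ${cost:,}")
print(f"Total evaluation budget: US ${total:,}")  # US $600,000
```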
Segment 4: Assignment 4 – Plans for Collecting and Reporting Data
Three Data Sources
People
As a source of data, the people include the recipients and administrators of the new
educational program. People are a vital source of data as they offer information regarding the
need for the educational program, the program’s implementation, its efficacy and
effectiveness, as well as its outcomes. They will be able to do this by responding to questions,
taking skill and knowledge tests, and volunteering testimony and comments (Olson, 2014).
Observations
Direct observation is the second main source of evaluative information. Direct
observation of the recipients of the program is important since it is not dependent on the
ability and willingness of the people to provide information. In essence, observations could
offer information with regard to actual circumstances and situations, which are useful in
understanding the program and its effectiveness (Kroll & Moynihan, 2018).
Documents/Records
The third main source of evaluative data comprises records/documents. Various
documents such as databases maintained by the institution, test scores and grades, and other
paper records would be reviewed. They would provide crucial information regarding the
performance and effectiveness of the program.
Open-Ended Questions
Seven open-ended questions would be used to determine how well the educational
program has met its goals and objectives and whether it has attained the desired outcomes. On
the whole, they will help establish whether or not the program was effective. They will also
help identify areas of the program that could be improved (Donnelly et al., 2016).
Issues in Collection, Compilation, Interpretation, and Reporting of Data
The main issues that can be encountered in the collection of data are gathering too
little data and collecting unrepresentative data. This problem may arise because of inadequate
time to collect data from the entire target population, which comprises the recipients and
administrators of the educational program. Misreporting of the findings is an issue that may
be encountered when reporting the data. The researchers may also exaggerate the findings,
which is a significant issue (Olson, 2014). The third issue that may be encountered is
misinterpretation of the data. The research results might be misinterpreted because vital
variables might be omitted (Donnelly et al., 2016).
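As an illustrative check for the unrepresentative-data concern raised above (the group labels, shares, and counts are hypothetical placeholders), the collected sample can be compared against the expected composition of the target population to flag under-covered groups:

```python
# Illustrative representativeness check. Group names, population shares, and
# response counts are hypothetical placeholders.

population_shares = {       # expected composition of the target population
    "program recipients": 0.80,
    "program administrators": 0.20,
}

sample_counts = {           # respondents actually reached during collection
    "program recipients": 150,
    "program administrators": 10,
}

total_responses = sum(sample_counts.values())
for group, expected_share in population_shares.items():
    observed_share = sample_counts.get(group, 0) / total_responses
    # Flag any group whose observed share falls below half its expected share.
    flag = "UNDER-REPRESENTED" if observed_share < 0.5 * expected_share else "ok"
    print(f"{group}: expected {expected_share:.0%}, "
          f"observed {observed_share:.0%} ({flag})")
```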
Establishing and Maintaining Communication with Stakeholders
Communicate Frequently
In order to establish and maintain effective communication with the program
stakeholders during the evaluation, it is recommended that regular communication be
established. Regular updates about the failures and successes of the educational program will
be helpful in managing expectations and building stakeholder support for the program.
Meet Regularly and Provide Periodic Reports
Second, it is recommended that the stakeholders be invited to a routine series of
meetings. During these meetings, the stakeholders would be briefed about the program
evaluation, including which components of the program are going to be evaluated. The
meetings would be held face to face.
Soliciting Input on Evaluation
Third, it is recommended that effective communication with the stakeholders also be
established and maintained by soliciting input from the program stakeholders regarding the
evaluation design and by sharing the evaluation findings on an ongoing basis. The
stakeholders should also be given regular feedback regarding the evaluation process (Kroll &
Moynihan, 2018).
Reporting Data from the Evaluation
Holding a Meeting to Discuss the Data
One method that can be used to report the data from the evaluation is to hold a
meeting with the program recipients and administrators. The rationale for using this method is
that it will allow all the stakeholders to participate and be informed about the evaluative data.
Publication on the Institution’s Website
The data from the evaluation can also be reported by publishing it on the institution’s
official website. This way, stakeholders who are not able to attend the face-to-face meeting
can find information about the evaluation and the data gathered from it on the website.
Publication in Booklets and the Institution’s Newsletters
Furthermore, it is recommended that data from the evaluation be published in
booklets and newsletters and then handed out to the stakeholders. This will allow them to
read about the program evaluation in their own time and learn more about the findings of the
evaluation.
Part B
Supporting Theory
There are several ways of including supporting theory in the report, particularly as
the theory relates to the object of the evaluation. First, a supporting theory can be included
by using an existing theory in educational program evaluations. In other words, the evaluator
can employ a theory that has already been developed by other scholars to evaluate the
educational program. Using this method, the evaluator will adopt all the steps of evaluation
outlined in the program theory in order to evaluate the program effectively (Munter, Cobb &
Shekell, 2015). When the evaluation design is informed by a clear supporting program
theory, it helps other people to see the rationale for the educational program and its
significance. During the evaluation, a supporting program theory will also make it easier for
the evaluator to select the most suitable results or outcomes to measure (Munter, Cobb &
Shekell, 2015).
The other way to include supporting theory is for the evaluator to develop a
conceptual framework. The rationale for developing and using a conceptual framework is
that it allows the evaluator to elucidate concepts and propose relationships among the
concepts in the evaluation of the educational program, and it provides a context for
interpreting the findings of the program evaluation. It is also useful in explaining
observations (Munter, Cobb & Shekell, 2015).
Standard and Alternative Forms of Measurement
The impact of standard and alternative forms of measurement on the target of the
program evaluation is that the evaluator would be able to obtain a variety of crucial data from
the different forms of measurement used. Standard and alternative forms of measurement
provide different sets of evaluative information since they adopt different approaches to
measuring the outcomes of the program (Slavin & Lake, 2014). In essence, they provide
evaluation results from dissimilar perspectives, which allows the evaluator to examine the
program from different angles (Slavin & Lake, 2014). The results from the two forms of
measurement could both be used to determine the strengths, weaknesses, and effectiveness of
the program. Using this information, the evaluator would be able to establish how the
educational program can be improved and the shortcomings or problem areas that need to be
addressed.
Reasons for Conducting a Metaevaluation
Meta-evaluation is understood as an evaluation approach that entails evaluating the
evaluations themselves. It also evaluates the evaluators, albeit indirectly. Therefore, the
reasons for conducting a meta-evaluation are, first, to evaluate the evaluations and, second, to
evaluate the evaluators (Stufflebeam, 2013). It is conducted for the purpose of evaluating the
assessment methodology systematically and thoroughly in terms of timeliness, credibility,
importance, scope, relevance, cost, utility, rigor, and the pervasiveness of dissemination. The
third reason for carrying out a meta-evaluation is to remove any potential problems that may
arise from the evaluation (Scriven, 2015). In addition, a meta-evaluation represents a
scientific and ethical obligation when other people’s welfare is involved.
One method for carrying out a meta-evaluation is for the evaluator to apply it to his or
her own work. The second is for an independent evaluator to evaluate the work. Even though
an evaluator can use a meta-evaluation to evaluate his or her own work, the results are likely
to be unreliable. As such, it is important for the evaluator to employ an independent evaluator
to carry out a meta-evaluation of the evaluations (Patton, 2014). A meta-evaluation can also
be conducted formatively and used to inform ongoing evaluations, or it can be conducted
summatively to report on the weaknesses and strengths of the previous evaluation.
Recommended Roles and Responsibilities
The meta-evaluation of the educational program would be conducted by independent
evaluators. They would:
- Identify a set of recognized evaluation standards or a checklist for use in evaluating
the evaluation methodology, for instance the American Evaluation Association’s
Program Evaluation Standards (Scriven, 2015). Patton (2014) suggested that meta-
evaluators should ask the following questions: Was the evaluation conducted
properly? Did the evaluation meet professional principles and standards? Is it worth
using?
- Conduct a formal and systematic evaluation of the evaluation methodology that was
used to evaluate the educational program so as to report its weaknesses and strengths
- Conduct interviews with the evaluation team members, recipients, and administrators
of the program to establish their expectations and the purpose of the educational
program
- Check the actions of the evaluation team against accepted practice and evaluation
standards
- Conduct an expert review of the evaluation
- Participate in weekly conferences or meetings with members of the evaluation team
in order to provide feedback to the team
- Communicate the findings of the meta-evaluation, including the weaknesses and
strengths of the evaluation, to the relevant stakeholders such as the evaluation team
and the administrators of the institution
- Identify opportunities for improvement and suggest how the evaluation could be
improved
References
Bamberger, M., Rugh, J., Church, M., & Fort, L. (2004). Shoestring evaluation: Designing
impact evaluations under budget, time and data constraints. American Journal of
Evaluation, 25(5).
Berriet-Solliec, M., Labarthe, P., & Laurent, C. (2014). Goals of evaluation and types of
evidence.