Engage Stakeholders:
Who should be involved?
Students, teachers, university administrators, and school division
administrators.
How might they be engaged?
University administrators will be primarily engaged through collaboration
during the evaluation. They will also be the primary recipients of evaluation
reports. The primary method of engagement for students and teachers will be
through an online survey. School division administrators are an important
stakeholder group for the placement program, but their involvement in the
evaluation will generally be limited to receiving summary reports of results.
Focus the Evaluation:
What are you going to evaluate?
The new intern placement program at the University of Saskatchewan will be
evaluated. Intern placement matches Teacher Candidates entering their last year
of a Bachelor of Education degree with Cooperating Teachers in the Saskatchewan
education system for a four-month internship. The new program will automate the
matching process through an online survey (hereafter referred to as the
placement survey) and a quantitative matching algorithm. More information is available
in the logic model.
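
To make the quantitative matching step concrete, the sketch below shows one
plausible approach: score each candidate-teacher pair by the similarity of
their numeric survey responses and assign matches greedily. This is an
illustration only; the program's actual algorithm is not reproduced here, and
the identifiers and data are hypothetical.

    # Illustrative sketch only: the real matching algorithm lives in the
    # placement program, not here. Each person is represented by numeric
    # (e.g., Likert-scale) responses to the placement survey.

    def compatibility(candidate, teacher):
        """Higher is better: negated sum of absolute response differences."""
        return -sum(abs(c - t) for c, t in zip(candidate, teacher))

    def greedy_match(candidates, teachers):
        """Assign each candidate the best still-available teacher."""
        available = dict(teachers)  # teacher id -> survey responses
        matches = {}
        for cid, c_resp in candidates.items():
            best = max(available, key=lambda t: compatibility(c_resp, available[t]))
            matches[cid] = best
            del available[best]
        return matches

    # Hypothetical data: ids mapped to responses on three survey items.
    candidates = {"tc1": [5, 2, 4], "tc2": [1, 4, 3]}
    teachers = {"ct1": [4, 2, 5], "ct2": [2, 4, 2], "ct3": [3, 3, 3]}
    print(greedy_match(candidates, teachers))  # {'tc1': 'ct1', 'tc2': 'ct2'}

A production algorithm would presumably weight survey items and guard against
poor forced matches; the greedy assignment above is only the simplest baseline.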
What is the purpose of the evaluation and what questions
will it answer?
The placement program represents a new approach to matching students with
cooperating teachers. As such, many elements of the program are being attempted
for the first time. The evaluation will assess the success of the new intern
matching system. Three components of the program will be evaluated. First, the
content of the placement survey will be examined. Second, the success of the
web-based delivery method will be assessed. Finally, the success of the
matching algorithm will be measured. The results of the evaluation will be
used to improve the system for next year’s deployment.
Who will use the evaluation?
University administrators: This group will be the primary audience for the
evaluation. They will use the evaluation results to improve the placement program
for the next year.
School division administrators: Evaluation results can be used to reassure
school division administrators that this is a successful program. If directors,
superintendents, and principals support the system, more teachers may be
encouraged to join the program in subsequent years.
Teachers: Successful internship placements are more likely to increase
repeat-teacher participation. Additionally, positive word of mouth from
participating teachers may increase overall teacher participation.
Students: Students should be reassured that their voice is being heard during
the placement program. Additionally, the evaluation results can be shared with
new students entering the program to reduce the uncertainty and stress inherent
in an internship placement.
What information is needed to answer the evaluation
questions?
Placement survey content: This will be evaluated by analyzing the placement survey
responses through a combination of statistical methods including exploratory
factor analysis. This evaluation question will require access to the results
database.
Web-based delivery: This question will require feedback from people who
actually used the system during the program. This feedback will be obtained
through a short online survey (hereafter referred to as the delivery survey).
Matching success: An online survey will be used to assess the success of the
internship placement (hereafter referred to as the matching survey).
Additionally, focus groups and interviews with students and teachers may be
included. The matching survey should be kept separate from the delivery survey
because of the long gap between filling out the delivery survey, shortly after
matching, and completing the internship in December.
When is the evaluation needed?
The placement program runs in two stages during the year. Initial student and
teacher data collection happens in March and April, and internship matches are
evaluated in June. The internship runs from September to December.
This evaluation cannot be completed in full until early 2015, after students
have finished the internship placement in December of 2014. The results will be
used to improve the program for its 2015 launch in March.
What evaluation design will be used?
This will be primarily a process evaluation. The question assessment and the delivery
survey fall firmly within the bounds of a process evaluation. The matching
survey and the interview process lie in the grey area between a process
evaluation and a summative evaluation. The actual alignment of those evaluation
components will be determined by the focus of the questions in the survey and interviews.
Summative evaluation will be more useful in future years when progress towards
long-term goals becomes a greater focus. This evaluation design is envisioned
as being part of the yearly deployment cycle for the placement program. As
such, a needs assessment would likely be useful in the future. However, it is
outside the scope of the current evaluation plan.
Collect the Information:
What sources of information will be used?
Existing information: The database of survey responses will be used for the
statistical analysis.
People: Students and teachers who participated in the placement program will
comprise the primary source of information for this evaluation.
What data collection methods and instrumentation will be
used?
Surveys will form the primary data collection method. There will be two surveys
and both will be delivered online. The infrastructure for deployment of the
surveys is already available via the placement.usask.ca website. The surveys
will need to be written, but can make use of some open-ended questions used
during pilot tests of the placement survey questions. The surveys will be
promoted to everyone who took part in the placement program, although
participation in the follow-up surveys will be completely voluntary.
Existing data will be used to evaluate the placement survey questions. I
currently have access to the database of placement survey responses.
Furthermore, a program has already been written to convert database results to
a format readable by SPSS for statistical analysis. Data from all of the
participants in the placement program will be used during this analysis.
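
The existing converter is not reproduced in this plan. As a rough sketch of the
general approach, and assuming a SQLite database with a hypothetical responses
table, the survey responses could be dumped to a CSV file that SPSS can import:

    # Sketch of a database-to-SPSS export. The real converter, schema, and
    # table and file names are assumptions, not the program's actual code.
    import csv
    import sqlite3

    def export_responses(db_path, out_path):
        """Dump placement survey responses to a CSV file for import into SPSS."""
        conn = sqlite3.connect(db_path)
        cursor = conn.execute("SELECT * FROM responses")  # hypothetical table
        header = [col[0] for col in cursor.description]
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(header)   # column names become SPSS variable names
            writer.writerows(cursor)  # one row per respondent
        conn.close()

    export_responses("placement.db", "responses.csv")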
Focus groups and interviews may take place following the internship. Access to
students for interviews and focus groups should be relatively easy because
students are on campus; access to teachers would require additional planning.
The purpose of the interviews is twofold: first, to get feedback on the success
(or failure) of the internship matching process; second, to generate
suggestions for future improvements.
This method would be best employed after the survey results have been collated,
allowing deeper inquiry into the trends and factors that emerge from those
results. Participants for interviews and focus groups will be chosen from
teachers and students who completed the placement program.
What is the timeline for data collection?
The delivery survey should be deployed shortly after the initial placement data
has been collected and internship matches have been calculated. This will
likely be in June or July of 2014.
Statistical analysis of placement survey questions can happen any time after the
initial data has been collected. This will happen in July or August of 2014,
after internship matches have been calculated.
The matching survey should be deployed at the end of the internship in December
of 2014. This coincides with the end of the yearly cycle for the placement
program.
If needed, interviews and focus groups would happen after the results of the
matching-success survey have been collated, in January and February of 2015.
Analyze and Interpret:
How will the data be analyzed and interpreted?
The primary analysis will be statistical in nature. Survey results will be
assessed to examine trends and deviations among respondents. Placement question
assessment will use a variety of statistical techniques, primarily exploratory
factor analysis.
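
As a sketch of what that analysis might look like, the following uses pandas
and the open-source factor_analyzer package as a stand-in for the planned SPSS
workflow; the file name, number of factors, and rotation are all assumptions:

    # Exploratory factor analysis sketch. The planned analysis uses SPSS;
    # this stand-in assumes responses.csv holds only numeric item responses,
    # one row per respondent and one column per placement survey item.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    responses = pd.read_csv("responses.csv")

    fa = FactorAnalyzer(n_factors=3, rotation="varimax")  # assumed settings
    fa.fit(responses)

    # Loadings show how strongly each survey item relates to each latent
    # factor; items that load weakly everywhere are candidates for revision.
    loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
    print(loadings.round(2))
    print("Proportional variance:", fa.get_factor_variance()[1].round(2))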
Open-ended questions will be assessed using qualitative techniques to extract themes
and factors. These themes can be used to direct the interviews and focus
groups.
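
Manual coding will drive the qualitative analysis, but a simple first pass can
help surface candidate themes. As a sketch (the responses and stop-word list
below are invented), frequent terms in the open-ended answers could be tallied
to suggest topics worth probing in the interviews:

    # First-pass aid only, not a substitute for manual thematic analysis:
    # tally frequent terms in open-ended responses to suggest candidate themes.
    import re
    from collections import Counter

    responses = [  # hypothetical open-ended answers
        "My cooperating teacher was a great match for my teaching style.",
        "The survey questions felt repetitive, but the match worked well.",
    ]

    STOPWORDS = {"the", "a", "my", "for", "was", "but", "felt", "well"}

    counts = Counter(
        word
        for text in responses
        for word in re.findall(r"[a-z']+", text.lower())
        if word not in STOPWORDS
    )
    print(counts.most_common(5))  # [('match', 2), ('cooperating', 1), ...]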
The evaluator will be responsible for all stages of the evaluation analysis.
Additional help may be requested to conduct and process interviews and focus
groups.
What are the limitations?
The goal of the evaluation is to improve the placement program for its second
year of operation.
As this is a new evaluation of a new program, there are many limitations. First
among them is a lack of personnel. The evaluator (me) is also the
primary software developer and collaborating content developer for the main
program. Keeping the evaluation components distinct from the main program
components will be a challenge, as will splitting time between the two. The
benefit of this tight integration is full access to program data and
infrastructure for survey development and deployment.
Because the placement program is in its first year, the yearly deployment cycle
is very fluid. Additional features are being added regularly and timelines
often shift. Planned evaluation stages may become obsolete before they have a
chance to be implemented.
Survey participation will be voluntary. The first survey will be deployed in
the summer and the second shortly before Christmas, times when participants are
likely to be away or preoccupied. This timing could result in low response
rates.
This evaluation process is intended to become part of the yearly schedule for
the placement program. Therefore, the timing for deployment of evaluation
components and analysis of results will be constrained by regular program
operations. Additionally, since the placement program is expected to change and
evolve from year to year, the evaluation components will likewise need to
evolve.
Use the Information:
How will the evaluation be communicated and shared?
The primary recipient of the evaluation results will be university
administrators. The results will be used to aid decision making about changes
in implementation for the following year. The results will be communicated
through reports and meetings.
Teachers and school division administrators will be informed of results at
teachers' conventions and at conferences such as the National Congress on Rural
Education.
One of the goals of the placement program is to grow the available teacher pool
until it exceeds the demand from students within the College of Education.
Disseminating the successes of the program directly to educators is an
essential component of this process.
Some of this data will be used during information sessions for new students who
are preparing for the internship.
Summary data reports will be made available through the placement.usask.ca
website. It is anticipated that these will be primarily used by students
entering or leaving the program, and secondarily by teachers entering or
leaving the program.
Manage the Evaluation:
Human subject protection.
Although evaluation surveys will be accessed through the placement.usask.ca
website, they will be deployed in a separate section from the placement survey.
No login will be required and no user data will be collected for these surveys.
The question analysis will be conducted on existing data, which contains
identifying demographic information. This information will be removed from the
analysis, and a random number will be assigned to each participant to preserve
anonymity.
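
A minimal sketch of that anonymization step, assuming the data is loaded with
pandas and that the identifying columns are named as below (both assumptions):

    # Sketch of the planned anonymization: remove identifying demographic
    # columns and assign each participant a unique random ID instead.
    # Column and file names are hypothetical.
    import random
    import pandas as pd

    def anonymize(df, identifying_columns):
        """Drop identifying columns and give each row a unique random ID."""
        ids = random.sample(range(100000, 1000000), len(df))
        out = df.drop(columns=identifying_columns)
        out.insert(0, "participant_id", ids)
        return out

    data = pd.read_csv("responses.csv")
    anon = anonymize(data, identifying_columns=["name", "email"])
    anon.to_csv("responses_anonymous.csv", index=False)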
Timeline.
The delivery survey and the question analysis will take place in early summer,
after the first stage of the placement program has been completed. This will
likely be in June and July of 2014.
The matching survey will be deployed in December of 2014 at the end of the
internship.
Interviews and focus groups would be conducted as needed in January and
February of 2015.
Responsibilities.
The evaluator will be responsible for the deployment of instruments and the
collection of all data with the following exceptions:
Survey questions will be written in consultation with university administrators
to ensure that appropriate target areas are addressed.
Additional help may be requested for interviews and focus groups. This will
depend on the number of interviews and focus groups that are required.
Budget.
The primary cost associated with this evaluation is time: time to create and
deploy the surveys, to collect and summarize the results, to conduct
statistical analyses, and to disseminate the results.
The only major equipment cost associated with this evaluation is a computer for
development of content and analysis of results. Proprietary software
requirements include an office suite (Microsoft Office) for report composition
and a statistical package (SPSS) for analysis.
Additional costs for interviews may include a support person to assist with
running interviews and collating results, and a recording device to use during
interviews.
A general pool for miscellaneous expenses will also be included.
Standards:
Utility.
Every effort will be made to ensure that the data from the evaluation is used
effectively. This will be aided by involving university administrators in the
planning process to better align the surveys with the evaluation questions.
Feasibility.
As the infrastructure for most of the evaluation components already exists, the
proposed evaluation should be feasible. The primary investment will be the
evaluator's time; since the evaluation is intended to be an integral part of
the program's yearly deployment cycle, that time should be accommodated within
regular program operations.
Propriety.
This program represents a new direction for the College of Education. As such,
it is imperative to know whether the program is working. New programs can see
substantial changes in their first few years as unforeseen deficiencies are
addressed. This
evaluation represents a genuine attempt to make the placement program better.
Accuracy.
The evaluation is designed to address specific areas of the program. A targeted
evaluation component will be developed to address each of the three evaluation
questions. Statistical methods pertinent to the evaluation components will be
applied along with general summary statistics.