Saturday, March 22, 2014

ECUR 809 Assignment 5 - Revised Survey

Please note: this post is the second post for this assignment. My original survey and design considerations are found in the previous post.

I piloted this survey with four Jr./Sr. High Language Arts teachers in my school division, and of course, I tested the survey myself. Although the responses matched up quite well with the type of information I was hoping to gather, I did make a few improvements:

Things that I liked about the survey and the tool:
1. Google Apps made it simple to build the survey, providing a variety of question types and easy-to-use design features.
2. Participants did not experience any problems accessing, using or submitting the survey to me.
3. The data was compiled automatically in a Google spreadsheet. Although I have to manipulate it to make sense of it, it is great to start with the responses already in a spreadsheet (see the sketch after this list), and there is also graphing capability built in.
4. The organization/order of my questions made sense. On my own, I would not have included so many questions about the participants, but while reading the materials for this module, I realized the benefit of these questions in truly interpreting the results.
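
On the "manipulate it" point: once the responses sheet is downloaded as a CSV, a few lines of pandas can do most of the summarizing. This is only a rough sketch; the file name and the question/column name below are made up, not taken from my actual survey:

```python
# Rough sketch: "responses.csv" is the responses sheet downloaded from
# Google Sheets; the column name below is a made-up example question.
import pandas as pd

responses = pd.read_csv("responses.csv")

# Tally a 1-5 scale question (3 was defined as "no change").
scale_col = "How has your students' engagement changed?"
print(responses[scale_col].value_counts().sort_index())
print(f"Mean rating: {responses[scale_col].mean():.2f}")
```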

Things I improved after the pilot:
1. For the scale questions, I simplified the descriptors of the scale, as they were too wordy.
2. Google Forms allows you to include help text for each question, which appears just below the question in a slightly lighter font. In the end, I removed this from several questions in order to clean up my survey. Rather than repeating the instruction "Please consider a 3 as no change" as help text in each question, I moved it up to the beginning of the section that contains the scale questions. This makes the survey look less cluttered.
3. After reading the responses from my participants, I realized that an important question was missing at the end of the survey: whether or not they would teach using this model in the future. I added this question, along with a follow-up question (depending on how they answer) providing space for them to explain or comment on their response; this space is actually the only optional "question" in the survey.
4. I simplified or clarified several of the questions, not because I received feedback that they were confusing, but because, as I tested the survey myself, some of the wording just felt a little "clunky".

Here is a link to my revised survey.

My final thoughts on the survey tool - Google Forms:
As for using Google Forms - I'm still on the fence as to whether it's really a great survey tool. For most purposes as a teacher, it would allow me to survey my students quickly and easily. However, for a more formal survey, there are some layout issues I would like more control over. A simple example: if there is a long list of checkboxes, I would prefer to arrange them over two or three columns so the question doesn't look so long. I haven't been able to figure this out in Google Forms.

Friday, March 21, 2014

ECUR 809: Assignment 5 - Survey Draft

This week's assignment is to create and test a survey, using a variety of question types. I chose to create my form in Google Apps because I am quite involved in the roll-out of this system in my school division. However, I'm not sure whether this is the method I would use for gathering data in an actual program evaluation. I haven't used Google Forms very much, and I wanted to test out this aspect of Google Apps more fully.

Here is a link to my form.

I used a variety of question types (multiple choice, numeric, scale, checkbox, text, and paragraph text) in my survey, so I am curious to see how these work in the spreadsheet that gathers the data for Google Forms.
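
One wrinkle I expect: when a checkbox question allows multiple selections, the spreadsheet stores all of a respondent's choices in a single comma-separated cell, so tallying the options takes an extra step. Here is a hedged pandas sketch; the column name is a made-up example, not one of my real questions:

```python
import pandas as pd

responses = pd.read_csv("responses.csv")  # downloaded responses sheet

# Made-up checkbox question: each cell holds every option a respondent
# ticked, comma-separated, e.g. "Novels, Short stories".
col = "Which text types do you use in Reader's Workshop?"
counts = (
    responses[col]
    .dropna()
    .str.split(", ")  # split the multi-select cell into a list
    .explode()        # one row per selected option
    .value_counts()
)
print(counts)
# Note: this simple split breaks if an option itself contains a comma.
```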

As I haven't created many research surveys in the past, I referred quite heavily to the article "Formatting a Paper-based Survey Questionnaire: Best Practices" by Elizabeth Fanning for guidelines in creating my first draft. Prior to writing the questions, I worked my way through the design considerations identified by the author, noting my thoughts and plans here.

My survey is currently being tested by four teachers, who have agreed to respond by lunchtime tomorrow. After reviewing their answers, I will fine-tune my survey and post a revised version.

Saturday, March 8, 2014

ECUR 809 Assignment 4 - Logic Model

Assignment 4 in this course was to develop a logic model for the program we are considering evaluating. For this assignment, I continued to work on the Reader's Workshop program being implemented in my school division.

This assignment is available as an image below; however, a PDF version is also attached for easier reading.


ECUR 809 - Assignment 3 - Evaluation Plan

The third assignment in this course is to create an evaluation assessment for a program we are interested in evaluating. This evaluation assessment is intended to determine the feasibility and direction of our evaluation. To complete my assignment, I used a template provided by the University of Wisconsin-Extension, which is accessible, along with many other excellent program evaluation resources, here.

The program I am working with is a Reader's Workshop program currently being implemented in my school division to support our literacy focus, with the end goal of developing a love of reading in our students.

Assignment 3 - Karen Fox

I'd love to hear your feedback on the assessment, as well as any comments you have about similar programs, assessments and evaluations you have been involved in regarding Reader's Workshops.

Saturday, February 8, 2014

ECUR 809 Assignment 2: Selecting an Evaluation Method

For our second assignment, we were asked to select an appropriate method to evaluate a program designed to prevent diabetes in high-risk groups by providing free access to exercise programs for pregnant Aboriginal women. Participation in the program would ideally prevent cases of gestational diabetes mellitus (GDM) and type 2 diabetes in the women, in turn helping to prevent occurrences of type 2 diabetes in their children (Klomp, Dyck, & Sheppard, 2003).


I feel that the Outcomes-Based Evaluation method fits this program, though I realize that this choice probably also speaks to my own personal philosophy about what makes a program successful. The Outcomes-Based Evaluation method, as described on managementhelp.org, focuses on the impact that the program has on its users.


To make my decision, I envisioned the program using a Program Logic Model, as explained on the University of Missouri's website, reflecting on who is involved in the program and the desired short-, medium-, and long-term outcomes. This led me to the Outcomes-Based Evaluation method, which is especially appropriate for the non-profit sector (Plantz, Greenway, & Hendricks, 2006). A review of the guiding questions on managementhelp.org also directed me to this method. Although there are many things that could be assessed, including how the classes were delivered and the qualifications of the instructors, the thing that truly matters is whether or not the program helps to reduce the occurrence of diabetes.


Managementhelp.org provides a brief overview of the steps involved in this type of evaluation, and based on the information provided in the case study, I feel that it would be possible to work through these steps. I would begin by identifying the outcomes to be measured (I would consider repeat participation over multiple weeks, as well as instances of GDM), then move on to identifying observable measures (attendance lists and diagnosed cases of GDM). Information about the target group is clearly available, as described in the case study, and the outcomes could be assessed using a telephone survey. Participant contact information is on hand, as participants were contacted each week with a reminder to attend, so a follow-up telephone survey or interview should be easy to arrange.

The biggest challenge with this method would be tracking the desired long-term outcomes. Short-term outcomes, such as participation in the program, could be tracked easily. Medium-term outcomes, such as a decreased occurrence of GDM, could be tracked with telephone surveys. The decreased occurrence of type 2 diabetes in the children of the participants, however, would take years to track. Success in the short- and medium-term goals would likely be enough to evaluate the program in the meantime, as studies have shown that these outcomes do affect the occurrence of type 2 diabetes in children. This program evaluation would also likely drive further engagement by program participants, leaders, and employees, as it is the method that most clearly highlights the good that comes from this initiative.
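
To make the measurement step concrete for myself, here is a small, entirely hypothetical sketch of how the short- and medium-term measures could be tallied from attendance lists and follow-up surveys. The records and fields are invented for illustration; nothing here comes from the actual case study:

```python
# Invented participant records, for illustration only.
participants = [
    {"id": 1, "weeks_attended": 8, "developed_gdm": False},
    {"id": 2, "weeks_attended": 2, "developed_gdm": True},
    {"id": 3, "weeks_attended": 6, "developed_gdm": False},
]

# Short-term outcome: repeat participation over multiple weeks.
repeat = sum(p["weeks_attended"] > 1 for p in participants)
print(f"Repeat participation: {repeat}/{len(participants)}")

# Medium-term outcome: GDM incidence among participants, which would be
# compared against a baseline rate for the target population.
gdm_rate = sum(p["developed_gdm"] for p in participants) / len(participants)
print(f"GDM incidence: {gdm_rate:.0%}")
```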


References
Candidate Outcome Indicators: Health Risk Reduction Program. (n.d.). The Urban Institute. Retrieved February 6, 2014, from http://www.urban.org/center/met/projects/upload/Health_Risk_Reduc.pdf

Klomp, H., Dyck, R., & Sheppard, S. (2003). Description and evaluation of a prenatal exercise program for urban Aboriginal women. Canadian Journal of Diabetes, 27, 231–238.


McNamara, C. (n.d.). Basic Guide to Program Evaluation (Including Outcomes Evaluation). Retrieved February 6, 2014, from http://managementhelp.org/evaluation/program-evaluation-guide.htm#anchor184773

Plantz, M. C., Greenway, M. T., & Hendricks, M. (2006, March 3). Outcome management: Showing results in the nonprofit sector. Outcome Measurement Resource Network. Retrieved February 6, 2014, from https://www.nationalserviceresources.gov/files/legacy/filemanager/download/ProgramMgmt/Outcome_Measurement_Showing_Results_Nonprofit_Sector.pdf

Program Development & Planning: Program Logic Model. (n.d.). Retrieved February 6, 2014, from http://extension.missouri.edu/staff/programdev/plm/




Saturday, February 1, 2014

ECUR 809 Assignment 1: Evaluating a Program Evaluation

As an introduction to the field of Program Evaluation, the first assignment in my current course, ECUR 809, is to review a completed evaluation.
The Calgary AfterSchool program is a project put in place by the City of Calgary and UpStart (The United Way). This city-wide version of the program was launched in September 2009, and the evaluation period ended in June 2013. The AfterSchool program delivers afterschool programming for children and youth across the city, free of charge, in order to promote positive child and youth development. The program claims to be the only one of its kind in Canada.
The final program evaluation was published in October of 2013, although there were several interim evaluations along the way. I reviewed the interim evaluation from 2011, mid-way through the project.
The evaluation is based on the program's two goals, identified in the final evaluation as "(i) to increase the participation of Calgary's children and youth in high-quality after-school programming by increasing the quantity and accessibility of high-quality recreational and developmental programming throughout the city, and (ii) to improve participants' social and emotional development and school engagement." Both the interim and final evaluations concluded that the program met both of these goals and was a success.
Due to the large size of the program, data collection was challenging throughout the study. Pre- and post-questionnaires were administered to participants. The results were collected, and for statistical reasons that I don't fully understand, the evaluators then used only the locations that were able to collect data from all participants when analyzing the results.
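
If I understand it correctly, that selection is a kind of complete-case filtering: only sites with a full set of matched pre- and post-scores are analyzed. Here is a hedged sketch of the idea only (the file and column names are invented; this is certainly not the evaluators' actual procedure or code):

```python
import pandas as pd

# Invented structure: one row per participant, with a site and both scores.
df = pd.read_csv("afterschool_scores.csv")

# Keep only locations where every participant has both a pre- and post-score.
complete = df.groupby("location").filter(
    lambda site: site[["pre_score", "post_score"]].notna().all().all()
)
print(f"{complete['location'].nunique()} of {df['location'].nunique()} locations kept")
```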
Two things stand out for me in these documents. First, the results, which are available in graphical format, are separated into categories for children and youth, and then further into "overall" and those "with poor pre-test scores". These divisions allow the reader to see how the program impacts each group differently, and they highlight the results that matter most: the children and youth deemed to be at-risk, with poor pre-test scores. I agree with this separation in the results, because many youth likely just transferred into the AfterSchool program from another paid program. To know whether the programs have a real impact, the customer would need to see results for children who are not receiving these services elsewhere. This does raise two follow-up questions, which I would investigate if running this type of program: What percentage of the participants are included in the at-risk results? Are the City's funds going to the children and youth who really need them?
The second thing that jumped out at me was that the surveys were not administered to any students in grades one to three, because the surveys required reading and writing skills. As a result, the report does not reflect how well the program serves these children. Although I understand the complications of administering surveys to such young participants, another method - perhaps interviews, anecdotal evidence from program instructors, or parent surveys - could have been used to provide a more complete and accurate picture of the program's effectiveness. Without this information, how can those who manage the program truly know whether it is worthwhile to deliver programs for the youngest participants?
____________________________________________

Note: I wrote my original post in Google Docs. Using the Research Tool, I included proper APA footnotes, although I have no idea yet whether or how that needs to be done in a blog. In any case, the footnotes did not copy over to Blogger, nor can I find a way to add them in. Here's a link to my Google Doc that includes the footnotes.