The purpose of this study was to field test an instrument incorporating a retrospective pretest to determine whether it could reliably be used as an evaluation tool for a professional development conference. Professional development programs at the national, state, regional, and local levels are as diverse as the teachers attending them. However, as these programs conclude and teachers return to the classroom, administrators may be left wondering what effect the programs had on their teachers: Did the teachers like the program? Implementing program evaluations to measure change using a traditional pretest-posttest model, however, can be difficult to plan and execute (Lynch, 2002; Martineau, 2004).
The use of the retrospective pretest to evaluate program outcomes is making its way into the professional development spotlight.
Recognizing that traditional pretests are sometimes difficult or impossible to administer and citing exemplar studies conducted by Deutsch and Collins (1951), Sears, Maccoby, and Levin (1957), and Walk (1956), Campbell and Stanley (1963) advocated the retrospective pretest as an alternative technique to measure individuals’ pre-intervention behavior.
Since its inception, the retrospective pretest has been incorporated in a variety of designs.
Building on the research from the 1950s that incorporated the retrospective pretest, Howard, Ralph, Gulanick, Maxwell, and Gerber (1979) prescribed the tool as a remedy for response shift bias. Subsequent research across a wide variety of measures (for a full review, see Nimon & Allen, 2007) has indicated that retrospective pretests provide a more accurate measure of pre-intervention behavior. The purpose of this study was to field test an instrument incorporating a retrospective pretest to determine whether it could be used reliably as an evaluation tool for a professional development conference. The setting was an annual summer conference that the workforce education department of a public university hosted for a segment of educators employed in its state.
At the end of each session, participants were asked to complete the session evaluation instrument designed by the authors for the study (see Figure 1). Coefficient alpha was used to evaluate the reliability of the scale and subscale scores resulting from the instrument. The evaluation instrument was administered after each of the conference's 75 sessions, yielding over 1,200 responses to the survey. The validity of any instrument must be established through multiple administrations across multiple interventions and situations. To determine whether the difference between the prior and post knowledge survey responses was driven by participants' desire to appear favorable to the presenters, a review of qualitative feedback was conducted. Table 2 summarizes the descriptive statistics for the reaction scores, retrospective pretest scores, and posttest scores. Results indicate that the data produced by the instrument designed for this study were reliable. While further revisions to the instrument and its encompassing methodology should consider how the resulting data can be validated, it is also important to consider validity issues within the context of the intervention. While the retrospective pretest has been described as a useful but imperfect tool (Lamb, 2005), it uniquely provides a technique for garnering pre-intervention data that might not otherwise be feasible to collect. In the case of this study, employing a retrospective pretest in conjunction with a posttest provided conference stakeholders with information relating levels of learning to groups of participants and to presentation content (Figures 2 and 3 provide example reports generated from the survey data). Because the instrument was designed to be content neutral, its application extends across disciplines. Retrospective analysis is an underutilized assessment tool that can serve as a practical and appropriate evaluation technique for assessing the learning and performance improvements gained during professional development.
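To illustrate the reliability analysis described above, the following is a minimal sketch of one common way to compute coefficient alpha from a respondents-by-items matrix. The item counts, the sample data, and the use of Python with NumPy are assumptions for illustration only, not a reproduction of the study's analysis.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a respondents x items matrix of (sub)scale item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the (sub)scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 5 respondents answering a 3-item reaction subscale on a 1-5 scale.
responses = np.array([
    [4, 5, 4],
    [3, 3, 4],
    [5, 5, 5],
    [2, 3, 2],
    [4, 4, 5],
])
print(round(cronbach_alpha(responses), 3))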
The ability of professional development program evaluators to quantitatively measure learning and performance improvement is constrained by limited time and resources. Kim Nimon is a Research Assistant Professor in the College of Education and Human Development at Southern Methodist University.
Based on a prominent evaluation taxonomy, the instrument provides a practical, low-cost approach to evaluating the quality of professional development interventions across a wide variety of disciplines.


Such programs may take the form of a week-long statewide conference or a 45-minute after-school session. One of the most common techniques for measuring change is the traditional pretest-posttest model.
Not only must program evaluators gain stakeholders’ support to obtain reliable measures of change (Martineau, 2004), but they must also respond to the challenges associated with garnering repeated measures when participants arrive late or leave early and developing instruments that are sufficiently sensitive to detect small program outcomes (Lynch, 2002). In essence, a retrospective pretest is distinguished from the traditional pretest by its relationship to the intervention (or program).
Allowing individuals to report their pre- and post-intervention levels of functioning using the knowledge they gained from the intervention mitigates the shift in internal measurement standards (response shift bias) that can occur in traditional pretest-posttest designs. The instrument includes not only questions typically associated with measuring participants' reactions, but also a set of questions to gauge whether and how much learning occurred. Four hundred six secondary educators attended the conference; of those attending, 7 were pre-service teachers, 3 were administrators, 24 did not specify their role, and the remainder identified themselves as teachers. It should be noted that this is the first instrument of this nature used for professional development conferences of this scale. In Kirkpatrick and Kirkpatrick's (2006) evaluation model, the second level of evaluation builds on the first by determining how much knowledge was acquired as a consequence of the training. Descriptive statistics and weighted means (Hedges & Olkin, 1985) were used to compare the standardized mean differences (d) across conference sessions. The open-ended comment section allowed participants to explain the difference between their prior and post responses. Table 2 also includes descriptive statistics for the effect sizes generated by comparing the retrospective pretest scores to the posttest scores.
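The article reports weighted means but does not reproduce the weighting computation. The sketch below shows a sample-size-weighted mean of per-session effect sizes, consistent with the weighting by session attendance described later in the results; the session counts, the d values, and the use of Python with NumPy are hypothetical.

import numpy as np

# Hypothetical per-session summaries: each tuple is (participants attending, session effect size d).
sessions = [(25, 1.10), (12, 0.85), (40, 1.40), (8, 0.60)]

ns = np.array([n for n, _ in sessions], dtype=float)
ds = np.array([d for _, d in sessions])

# Mean d across sessions, weighted by the number of participants in each session.
weighted_mean_d = np.sum(ns * ds) / np.sum(ns)
print(round(weighted_mean_d, 3))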
However, because the data were based on participants' memory of their pre-intervention behavior, the validity of the results may be in question. For example, in this study the data were collected at a professional development conference. As defined by Campbell and Stanley (1963), the retrospective pretest uniquely serves to curb the rival hypotheses of history, selective mortality, and shifts in initial selection. This information served not only to measure the quality of the professional development conference described, but also to provide pertinent data for improving the quality of future conferences. Just as the retrospective pretest technique has been used successfully in medical, training, organizational development, and educational interventions (Nimon & Allen, 2007), the instrument described herein can be used across a wide variety of professional development interventions. However, the authors wish to note that this technique is not a replacement for traditional pretest-posttest techniques. We believe that this cost-effective technique provides another valuable tool for professional development evaluators and yields reliable information for program development administrators at any level.
Allen is an Associate Professor in the College of Education at the University of North Texas.
Conferences and after-school programs are often the preferred means of ongoing learning for experienced professionals.
The practical consequence of these challenges is that many programs do not benefit from a formal evaluation process, leaving administrators with little information regarding program effectiveness. Although many professional development specialists may be unaware of these techniques, the strategy of ascertaining participants' retrospective accounts of their knowledge, skills, or attitudes prior to an intervention is not new. That is, a retrospective pretest is a pretest administered post-intervention, asking individuals to recall their behavior prior to the intervention. Because the evaluation was administered post-intervention, participants could apply program knowledge in forming self-reports of their pre-intervention behavior. In most cases, when participants do not have sufficient knowledge to gauge their pre-intervention behavior, they tend to overestimate their level of functioning. Citing data suggesting that traditional pretests underestimate the impact of interventions, Lamb and Tschillard (2005) asserted that the retrospective pretest is just as useful as the traditional pretest for determining program impact in the absence of response shift bias, and even more useful when subjects' understanding of their level of functioning changes as a consequence of the intervention.


Incorporating two levels of Kirkpatrick and Kirkpatrick's (2006) evaluation model (the two levels appropriate for this application), the instrument solicits level 1 (reaction) and level 2 (learning) evaluation data. On average, participants attended 10 professional development sessions over the course of the 3-day conference. For each session, paired-samples t tests were employed to determine whether there was a statistically significant difference between participants' retrospective pretest and posttest scores. Although most participants recorded no clear reasons for the difference between their prior and post knowledge ratings, those who did respond indicated overwhelmingly that they had learned new knowledge and skills.
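As a concrete illustration of the per-session analysis, the sketch below runs a paired-samples t test on one session's retrospective pretest and posttest ratings. The ratings are hypothetical, and the use of Python with SciPy is an assumption; the study's actual data and software are not reported here.

import numpy as np
from scipy import stats

# Hypothetical responses for one session: each participant's retrospective
# pretest and posttest knowledge ratings on the same 1-5 scale.
retro_pre = np.array([2, 3, 2, 3, 2, 4, 3, 2])
post      = np.array([4, 4, 3, 5, 4, 4, 4, 3])

# Paired-samples t test: is the mean (post - retrospective pre) difference nonzero?
t_stat, p_value = stats.ttest_rel(post, retro_pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")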
Weighted by the number of participants attending each of the 75 sessions, the average reaction rating across all conference sessions was 4.262, and the average retrospective pretest score was 2.810. Impression management was therefore unlikely to threaten the validity of the results, because participants were not in a situation in which it was important to please the presenter or their boss. The retrospective pretest is an evaluation technique best utilized when the ability to independently assess learning and performance improvement gains is limited by time and resources. Answering these questions requires that such programs be evaluated at multiple levels (Kirkpatrick & Kirkpatrick, 2006). However, by administering a retrospective pretest, practitioners were able to verify the pre-intervention equivalence of their experimental and control groups and to curb threats to validity that would have been associated with a posttest-only design. In traditional pretest-posttest designs, this effect has a negative influence on program outcome measures. Similarly, Martineau (2004) argued that the retrospective pretest correlates more highly with objective measures of change than do self-report gains based on traditional pretest ratings.
The instrument was designed to be administered across all of the conference sessions, thereby providing a practical, low-cost, and useful evaluation tool (see Figure 1). To determine the practical significance of the measured changes in learning, d was calculated as defined by Dunlap, Cortina, Vaslow, and Burke (1996). Further, even if the instrument measured a perceived change in learning rather than an actual change in learning, the measurement is still meaningful, because adopting an implicit theory of change is an important step in the process of transferring learning to on-the-job behavior.
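The article cites Dunlap, Cortina, Vaslow, and Burke (1996) for the effect-size computation but does not reproduce their formula here. The sketch below assumes the repeated-measures correction commonly attributed to that paper, d = t * sqrt(2 * (1 - r) / n), where t is the paired-samples t statistic, r the correlation between the two sets of scores, and n the number of pairs; the data and the use of Python with SciPy are hypothetical.

import numpy as np
from scipy import stats

def dunlap_d(pre: np.ndarray, post: np.ndarray) -> float:
    """Effect size for a repeated-measures design, computed from the paired t
    statistic and the pre/post correlation: d = t * sqrt(2 * (1 - r) / n)."""
    n = len(pre)
    t_stat, _ = stats.ttest_rel(post, pre)
    r, _ = stats.pearsonr(pre, post)
    return t_stat * np.sqrt(2 * (1 - r) / n)

# Hypothetical single-session ratings on the shared 1-5 knowledge scale.
retro_pre = np.array([2, 3, 2, 3, 2, 4, 3, 2], dtype=float)
post      = np.array([4, 4, 3, 5, 4, 4, 4, 3], dtype=float)
print(round(dunlap_d(retro_pre, post), 2))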
When participants' pre-intervention behavior is measured retrospectively, they generally provide more conservative estimates than they provide prior to the intervention.
As such, the study also served to measure participants' reactions to each conference session as well as changes in learning. First, participants were asked to retrospectively identify their level of knowledge prior to attending the session.
Especially in the presence of late arrivers and early leavers, the instrument is useful because it can be administered at the conclusion of a program, in concert with a traditional posttest. Figure 1 (excerpt) includes the open-ended item "Areas of strength: Specifically, what did you find effective in the professional development experience?"


