
Evaluating the Impact of Staff Development

It has been estimated that we spend approximately 32 hours attending staff development each year, at a cost of $1,229 per employee1. With all that time spent developing ourselves, you'd think we would have an abundance of talented, well-developed employees. In reality, we have no idea whether we do or not. But it's not because we haven't tried to find out. Recall the last time you attended a staff development event. Remember filling out a satisfaction survey at the end of the session? I'm sure you do. I'm equally sure you let the facilitators know how well you liked the food, the rooms, and the speaker, and what you might be interested in for future staff development events. While this information provides wonderful feedback about audience satisfaction, it tells us nothing about the learning that took place. More importantly, it says nothing about whether the learning was applied or made a difference in practice.

So why do we keep doing the same thing over and over again? Why do we keep gathering satisfaction data instead of digging into the actual results of our staff development work? The most commonly reported reason is lack of time and resources2. Why? Because satisfaction surveys are easy and real evaluation is seen as hard. Oftentimes, we don't know how to do it or even where to begin. So what can help?

First of all, we need to remember that evaluation is a process of judging the worth or value of our work3. The word itself tells us this: eVALUation. In order to measure the value of our staff development work, we need to understand what we were trying to achieve. This can be done by asking three simple questions: 1) What issue or problem prompted the need for this staff development event? 2) What are we trying to accomplish by offering this event? 3) What changes do we hope to see, both short term and long term? Answering these three questions allows us to design a basic evaluation process that answers the fundamental question: What was the value of this event?

Let's start with the first question: What issue or problem prompted this event? For staff development to make a difference, it needs to address a particular organizational issue or problem that needs resolving. Without a sound purpose, it's a waste of everyone's time and resources. Look at the evidence that identified the issue in the first place, and think about using that same evidence as a measure of impact.

Now the second question: What are we trying to accomplish by offering this event? This ties back to the previous question about the reason for the event. Obviously, we're trying to resolve the original problem or issue that sparked the need for the staff development. So, once again, consider the evidence that identified the need and use similar sources as a measure of success.

And now the third question: What changes do we hope to see, both short term and long term? You guessed it! Look at the original performance issue that sparked the need and think about what kind of results would satisfy short-term needs. Then think about long-term, lasting results.

Let's look at an example from an educational setting. Imagine faculty members continually complain that they have iPads for teaching but no idea how to use them. After sending around a short survey asking about their iPad comfort and usage, you discover that 75% reported low confidence and an aversion to using iPads in the classroom, while 25% reported moderate to high comfort along with slight to moderate use. Now imagine you provide training through a couple of workshops and a three-month tech-buddy program. Afterwards, you're curious whether it worked.

Using our three guiding questions, we can design an evaluation process. First, what issue or problem prompted the need for this staff development event? According to our survey, 75% of faculty did not use iPads in the classroom due to discomfort with the technology. Second, what were we trying to accomplish by offering this event? Increasing the faculty members' comfort and confidence with iPads. Third, what changes were we hoping to see, both short-term and long-term? Given that some faculty will probably hesitate to join in, our short-term goal could be that 50% of all faculty feel confident using iPads within the first year, cutting the 75% discomfort rate to 50%. The next year, we'll pair our new iPad users with the remaining faculty who still feel uncomfortable and aim for 75% faculty comfort. The measurement can be as simple as re-administering the original survey at the end of each year to watch for improved comfort and usage.
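If you track survey responses in a script or spreadsheet export, the year-end comparison can be automated in a few lines. Below is a minimal sketch in Python; the response data, the 1-to-5 comfort scale, and the percent_comfortable helper are hypothetical illustrations, not part of the original example.

    # Minimal sketch of the re-administered survey comparison described above.
    # The data values and the 1-5 scale cutoff are hypothetical; a real survey
    # would supply its own response scale and coding.

    def percent_comfortable(responses, threshold=3):
        """Share of respondents rating their iPad comfort at or above the
        threshold (here, a 1-5 scale where 3 = moderate comfort)."""
        comfortable = sum(1 for r in responses if r >= threshold)
        return 100 * comfortable / len(responses)

    # Hypothetical baseline and year-one results (1 = very low ... 5 = very high)
    baseline = [1, 2, 2, 1, 3, 2, 1, 4, 2, 1, 5, 2]
    year_one = [3, 4, 2, 3, 3, 2, 4, 4, 2, 3, 5, 2]

    before = percent_comfortable(baseline)
    after = percent_comfortable(year_one)

    print(f"Comfortable at baseline: {before:.0f}%")      # 25%
    print(f"Comfortable after year one: {after:.0f}%")    # 67%
    print(f"Short-term goal (50%) met: {after >= 50}")    # True

Running the same calculation on each year's survey makes the short-term and long-term targets (50%, then 75%) directly checkable against the original evidence.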

Did you notice how evaluation is tied to how we design our training? We need to give time for change to occur and adjust the training based on our results. You're probably thinking the example above is too easy. But oftentimes, measuring staff development results is not rocket science. The key is to have evidence that a need for training exists, then determine the short-term and long-term desired results, design and implement the training, and finally measure the change against your original evidence. Doing so allows us to move beyond satisfaction surveys and discover the real value of our staff development work.

For information on earning your M.A. in Educational Leadership degree, contact an enrollment counselor at 877-308-9954.

About the Author

Dr. Sue Hines is the Director of the Center for Excellence in Learning and Teaching and Associate Professor in the Doctor of Education in Leadership Program, and teaches in the M.A. in Educational Leadership Program at Saint Mary's University's Schools of Graduate and Professional Programs. She has over 30 years of combined teaching experience at the undergraduate, graduate, and doctoral levels. Her career has primarily included college teaching, faculty development, program development, program accreditation, and academic research, with a focus on faculty development program evaluation.

References:

1. Association for Talent Development. (2015). 2015 State of the Industry Report. Retrieved August 23, 2016, from https://www.td.org/Staff-Resources/State-Of-The-Industry-Report

2. Hines, S. (2011). How mature teaching and learning centers evaluate their services. In J. Miller & J. Groccia (Eds.), To Improve the Academy, Vol. 30 (pp. 277-289). San Francisco, CA: Jossey-Bass.

3. Guskey, T. R. (2000). Evaluating Professional Development. Thousand Oaks, CA: Corwin Press.