Long Term Planning
I'm writing up some "Did we meet our 2014 goals?" blog posts for the OpenHatch blog and struggling to come up with a better way to measure success. Our problems last year:
A) Getting people to fill out our surveys. We got 41 responses from 26 events reaching hundreds of people. I'm told that this is not much worse than normal, but it makes our surveys pretty much useless! I'd love to increase participation. We tried the Software Carpentry sticky note method at a few places which gave us lots of informal feedback but no quantitative feedback.
B) Our surveys did not necessarily measure anything meaningful. We didn't repeat any questions from the sign-up forms so we don't have any "before and after" comparisons to make -- something I'm now regretting! Also, while student ratings of individual activities are great for improving those activities, they don't capture the overall effect of the workshop on our attendees.
Ideas for Improvement
re: A) Not enough responses
- Use BridgeTroll to remind people to submit feedback surveys. This may modestly increase the # of responses, as it's likely that a couple of schools simply forgot to send out the surveys in a timely manner.
- Incorporate filling out the surveys/giving feedback into the event itself, either throughout the day or at the end via something like a "checkout station".
- Offer stickers for everyone who fills out a survey (but we like to give these out anyway!) or perhaps do a penguin t-shirt raffle per event?
re: B) Not enough useful responses
- Incorporate at least one question into the sign-up form that is re-asked during the follow up survey.
- Shift focus away from rating activities.
- Add "learning goals" to the workshop. Defining goals/expectations with the help of mentors could be a valuable part of attendees' experiences, while also giving us something concrete and important to measure -- that is, how well students met their self-defined goals for participation in open source.
What other ideas do people have? Any thoughts about my ideas, or about my definitions of the problem?
I'm thinking back to my daughter's college orientation a couple of years ago. There was a parent lecture where everyone was given one of those clickers that had 4 or 5 numbers. The speaker would ask questions and the software would autocalculate everyone's response.
I'm not advocating clickers, but maybe there is some software app (oppia? maybe) that could be used to ask questions and have people respond during the session, displaying the responses on an instructor dashboard. To make it more interactive/interesting, display a visualization of the results.
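A minimal sketch of the idea, in Python: collect multiple-choice answers during a session, tally them, and render a text bar chart the instructor could glance at. The function names and the sample answers here are hypothetical, just to illustrate the shape of such a tool.

```python
from collections import Counter

def tally_responses(responses):
    """Count answers to a single poll question (e.g. clicker choices 1-5)."""
    return Counter(responses)

def text_bar_chart(counts, width=20):
    """Render the tally as simple horizontal bars for an instructor dashboard."""
    total = sum(counts.values()) or 1
    lines = []
    for choice in sorted(counts):
        n = counts[choice]
        bar = "#" * round(width * n / total)
        lines.append(f"{choice}: {bar} {n} ({100 * n / total:.0f}%)")
    return "\n".join(lines)

# Hypothetical example: 10 attendees answer one multiple-choice question
answers = [1, 2, 2, 3, 2, 1, 4, 2, 3, 2]
print(text_bar_chart(tally_responses(answers)))
```

A real tool would collect answers over the network (a web form, an IRC bot, etc.), but the tally-and-display loop would look much like this.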
On a somewhat related note, there's a new pacman game to teach vim https://github.com/jmoon018/PacVim
Maybe if the survey content was an adventure game or something it would appeal more to the OSCTC attendees. (Just brainstorming as I type.)
"Measuring Success" sounded interesting so I clicked, but the original post could be titled "Why Won't People Fill Out My Awesome Surveys?"
Sadly, I do think the response rate you got is fairly typical.
Are there other ways to measure success?
- number of events
- number of messages on mailing lists
- number of active IRC people
- number of replies on Twitter
- number of accounts
- number of projects listed
These seem a little easier to measure.
But maybe that's not what you're looking for... I'm not so involved in the event stuff. I guess I'm more thinking of OpenHatch in general.
@pdurbin - I'm with you on wanting to give up on traditional surveys. I like @willingc's ideas about integrating the feedback into the workshop itself. Now I'm imagining some sort of ridiculous measurement image - like Howard Dean's donation bat from 2004, or, like, Sufjan the Penguin making progress across the page - that shows everyone what percentage of the class has completed an activity.
The above sounds like quite the effort to implement though. In the meantime, sticky notes have been fairly successful - if we could make it as simple and easy to give feedback via BridgeTroll as to write the sticky note, that would be great, since it would let us associate feedback with attendees as well as potentially with specific activities.
The more I think about the idea of recording learning goals, the more I like it. Why should we be the only ones benefiting from attendees' input and self-reflection?
Shauna, you wrote:
We got 41 responses from 26 events reaching hundreds of people. I'm told that this is not much worse than normal, but it makes our surveys pretty much useless! I'd love to increase participation. We tried the Software Carpentry sticky note method at a few places which gave us lots of informal feedback but no quantitative feedback.
I want to clarify that in my understanding, reaching 41 people out of hundreds could be totally fine. It depends on whether we got a random sample or not.
To that end, I think we should focus our energy on finding a random sample and shaking them down for answers. Even a small random sample could be suitable, so long as we are confident in the randomness.
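To make the proposal concrete, here's a small Python sketch of drawing that sample with the standard library. The attendee list here is made up; in practice it would come from the sign-up data.

```python
import random

def survey_sample(attendees, k, seed=None):
    """Draw a simple random sample of k attendees to follow up with.

    Persistently surveying a small random sample gives less biased
    results than a large self-selected one.
    """
    rng = random.Random(seed)  # seed makes the draw reproducible
    return rng.sample(attendees, k)

# Hypothetical sign-up list of 200 attendees across all events
attendees = [f"attendee_{i}" for i in range(1, 201)]
sample = survey_sample(attendees, k=20, seed=42)
print(len(sample), "people to follow up with")
```

The important part is that every attendee has an equal chance of being picked, which is what `random.sample` guarantees; the follow-up effort then goes into getting answers from those 20 people specifically.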
There's no way this was a random sample. That said, the proposal to actually try to get a small random sample is a good one.
@Shauna, can you post a link to the survey in this thread, so we can see the questions?