Monday, September 23, 2013

Ratings: trying to encourage students to express the usefulness of resources

One aspect of my system that hasn't taken off well has been the rating system - students are uploading things, but there are fewer ratings in the system than resources - which means there is an average of less than one rating per resource. From where I'm sitting, there are two possibilities for this - one is that the students aren't interested in rating resources, and the other is that the rating tool just isn't obvious, or is hard to use. The design of the tools is a big part of what my research is looking at, and affordance theory is the most powerful way of thinking about these issues. Sharon Oviatt explained (with reference to Gibson) that affordances "establish behavioural attunements that transparently but powerfully prime the likelihood of acting on objects in specific ways". Each tool needs to be designed in a way that pushes the user towards the desired behaviours. Designers can't control how people use the tools they create, but they can design in such a way that the desired behaviours are the ones users are most likely to perform. In this case, I want students to be collaboratively discovering and sharing the best learning resources, so I need a rating tool that encourages constructive use.

I need to eliminate the second possibility before I can really start pondering the first and gathering information from students about it in interviews. I have replaced the rating tool to make it clearer how it works. The original looked like this - the rating tool was a simple "Rate" link at the bottom of the post:

When you clicked on the "Rate" link, it popped up this:


And it would calculate your rating based on which adjectives you chose about the post. The idea was that it would encourage considered, focused feedback, as opposed to a generic "Like". I suspect the lack of usage was due to the obscurity of the "Rate" button/link rather than the tool being confusing. My stats showed that in a week where there were 2,000 downloads of the resources, only two ratings were done on the resources.
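To make the idea concrete, the adjective-based scoring could be sketched like this - a minimal illustration only, where the adjective list and weights are hypothetical, not the ones the real tool uses:

```python
# Sketch of an adjective-based rating calculation.
# The adjectives and their weights here are hypothetical examples.
ADJECTIVE_WEIGHTS = {
    "clear": 2,
    "useful": 2,
    "interesting": 1,
    "confusing": -2,
    "irrelevant": -1,
}

def rating_from_adjectives(chosen):
    """Average the weights of the chosen adjectives; no choice means no rating."""
    if not chosen:
        return None
    return sum(ADJECTIVE_WEIGHTS[a] for a in chosen) / len(chosen)
```

So choosing "clear" and "useful" would give `rating_from_adjectives(["clear", "useful"])` = 2.0, while mixed adjectives pull the score towards the middle - richer information than a single "Like", but more work for the student.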

So clearly, a redesign was in order. I did two things: increased the prominence of the rating tool, and simplified the process. The simplification involved separating out the scoring from the adjectives. Rather than just having a rate button, there is now a thumbs up/thumbs down directly on the page:

Hitting the thumbs up or thumbs down button opens up the rest of the rating tool, and adds colour to the thumb you hit (green for thumbs up, red for thumbs down):
So the new tool is:

  • easier to see: a bigger button, and it's much clearer what it does
  • immediate feedback on action: the colouring is hopefully satisfying to users, and will encourage them to give feedback on all items; prior to giving feedback, the button looks a little empty
  • simpler relationship between action and outcome: thumbs up gives a positive rating, thumbs down gives a negative rating
The downsides are that the comment tool is no longer available without giving the item a rating, and that the user can't close the rating box once they rate an item - I probably need to add a close button on the rating panel.
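The new interaction can be modelled roughly as follows - a sketch of the state logic only, with hypothetical names (the real tool lives in the site's front end):

```python
# Sketch of the thumbs up/down widget's state logic (hypothetical names).
class RatingWidget:
    def __init__(self):
        self.vote = None          # no rating until the user acts
        self.panel_open = False   # adjective/comment panel hidden at first

    def press(self, thumb):
        """Record the vote, colour the pressed thumb, reveal the rest of the tool."""
        assert thumb in ("up", "down")
        self.vote = 1 if thumb == "up" else -1
        self.panel_open = True    # opens the panel; note there is no way to close it yet

w = RatingWidget()
w.press("up")   # vote becomes 1, panel opens
```

This makes the current downside visible in the model: `panel_open` is only ever set to `True`, which is exactly why a close button is needed.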

It will be interesting to see how the students take to this change - whether an improved interface actually results in changed behaviour and increased use of the tool.


Oviatt, S. (2009). Designing Interfaces that Stimulate Ideational Super-fluency. New Knowledge Environments, 1(1).
