I need to eliminate the second possibility before I can really start pondering the first and gathering information from students about it in interviews. I have replaced the rating tool to make it clearer how it works. The original looked like this - the rating tool was a simple "Rate" link at the bottom of the post:
When you clicked on the "Rate" button, it popped up this:
It would then calculate your rating based on which adjectives you chose to describe the post. The idea was to encourage considered, focused feedback, as opposed to a generic "Like". I suspect the lack of usage was due to the obscurity of the "Rate" button/link rather than the tool being confusing: my stats showed that in a week with 2,000 downloads of the resources, there were only 2 ratings.
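The post doesn't show how the adjectives were turned into a score, but as a rough sketch, one plausible mapping looks like the following (the adjective list and weights here are my own invention, not the plugin's actual values):

```typescript
// Hypothetical adjective weights; the real tool's adjectives
// and scoring are not given in the post.
const ADJECTIVE_WEIGHTS: Record<string, number> = {
  clear: 1,
  useful: 1,
  engaging: 1,
  confusing: -1,
  outdated: -1,
};

// Average the weights of the adjectives the student ticked,
// giving a score in [-1, 1]; no selection means no rating.
function rateFromAdjectives(chosen: string[]): number | null {
  if (chosen.length === 0) return null;
  const total = chosen.reduce(
    (sum, adj) => sum + (ADJECTIVE_WEIGHTS[adj] ?? 0),
    0,
  );
  return total / chosen.length;
}
```

Under this sketch, ticking "clear" and "confusing" together would cancel out to a neutral score, which hints at why a purely adjective-driven rating can feel indirect to students.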
So clearly, a redesign was in order. I did two things: increased the prominence of the rating tool, and simplified the process. The simplification involved separating the scoring from the adjectives. Rather than just a "Rate" button, there is now a thumbs up/thumbs down directly on the page:
Hitting the thumbs up or thumbs down button opens up the rest of the rating tool, and adds colour to the thumb you hit (green for thumbs up, red for thumbs down):
So the new tool is:
- easier to see: a bigger, more prominent button whose function is obvious at a glance
- immediate feedback on action: the colouring is hopefully satisfying to the users, and will encourage them to feed back on all items. Prior to giving feedback, the button looks a little empty
- simpler relationship between action and outcome: thumbs up gives a positive rating, thumbs down gives a negative one
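In the new design the thumb sets the sign and provides the visual feedback, while the adjectives refine the score. A minimal sketch of that interaction (the function and state names are my own, not the plugin's):

```typescript
type Thumb = "up" | "down";

interface RatingState {
  thumb: Thumb;
  colour: string;      // visual feedback on the pressed thumb
  panelOpen: boolean;  // whether the adjective panel is shown
}

// Pressing a thumb colours it (green for up, red for down)
// and opens the rest of the rating tool.
function pressThumb(thumb: Thumb): RatingState {
  return {
    thumb,
    colour: thumb === "up" ? "green" : "red",
    panelOpen: true,
  };
}

// Thumbs up yields a positive rating, thumbs down a negative one.
function signedRating(thumb: Thumb, magnitude: number): number {
  return thumb === "up" ? magnitude : -magnitude;
}
```

The design choice this captures is that the student commits to a direction first (one obvious tap), and only then sees the more detailed adjective options, rather than being asked for considered feedback up front.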
It will be interesting to see how the students take to this change - whether an improved interface actually results in changed behaviour and increased use of the tool.

