Sunday, August 10, 2014

Thoughts about my result

This is where statistics gets hazy for me. There are a lot of tests one can perform on a set of data, and the real skill is in knowing which one is the right test. I can comfortably say that usage of my rating tool for the March reading is significantly different from normal usage (the Feb 2013 and Feb 2014 levels of usage are indistinguishable). This holds under both Normal and Poisson distributions (and since these are hit counts, Poisson is the better distribution to use). However, when I run a Student's t-test comparing the January-July results for 2013 and 2014, the result isn't significant.
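For anyone curious, here's a rough sketch of the two comparisons in plain Python, with made-up hit counts standing in for my real data. For two Poisson counts, the difference divided by the square root of the sum is approximately a standard normal under the null hypothesis of equal rates, and Welch's t-statistic covers the Jan-Jul comparison:

```python
import math

# Made-up hit counts standing in for my real data.
march_2013, march_2014 = 70, 120           # March readings (assumed)
hits_2013 = [60, 65, 70, 55, 50, 62, 58]   # Jan-Jul 2013 monthly hits (assumed)
hits_2014 = [58, 63, 120, 60, 52, 61, 57]  # Jan-Jul 2014 monthly hits (assumed)

# Poisson comparison: under H0 of equal rates, (k1 - k2)/sqrt(k1 + k2)
# is approximately standard normal, so |z| > 1.96 means p < 0.05.
z = (march_2014 - march_2013) / math.sqrt(march_2014 + march_2013)

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # Sample variance (n - 1 denominator).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Welch's t-test on the monthly totals (doesn't assume equal variances).
se = math.sqrt(var(hits_2013) / len(hits_2013) + var(hits_2014) / len(hits_2014))
t = (mean(hits_2014) - mean(hits_2013)) / se

print(f"Poisson z = {z:.2f}  (|z| > 1.96 -> significant spike)")
print(f"Welch's t = {t:.2f}  (small |t| -> no overall difference)")
```

With numbers like these you get exactly the pattern I'm seeing: the single-month spike is clearly significant, but one big month drowned in six ordinary ones doesn't shift the year-on-year comparison.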

So basically, while my intervention caused a detectable spike in usage, overall it hasn't made any difference. I'll need to talk with my supervisor about how to look at the data to firm up what I can say about it. I think looking at which cohorts of students were involved might also help. There may be a difference in who was involved in each spike: if the 2013 spike was four cohorts of students and the 2014 spike was only the incoming first years, then the number of hits per student is dramatically improved. I need to get back into my data and do some more work.
