My previous post brought up some interesting (to me) issues around how people relate to software. By this I mean whether they perceive a software tool to be just there, a kind of immutable fact of the world, with good bits and bad bits but still something that simply exists; or whether they perceive it to be a dynamic, mutable creation of human endeavour that can be changed at will. I think most people, if they stop to think about it, understand that it really is the latter - but do they behave differently when they're using it on a day-to-day basis? When a problem crops up, do they think about the person who created the problem, and what that person did wrong to make the software behave that way, or do they treat it as a deterministic, mechanical thing they need to work around?
I'm thinking out loud here - I don't know the answers. At this stage, I'm not even sure what I'd put into a journal article search to find the research on the subject, though I'm sure it has been studied by someone, somewhere. But it might be pertinent to my research - perhaps there was a gap in my understanding of my interview subjects. Perhaps I was expecting them to think like software developers rather than like ordinary users. If so, it may be that I framed my questions incorrectly, and didn't understand why the answers I was getting didn't give me the information I was interested in.
A quick straw poll of my PhD study group backs this up - I just asked what their first reactions to a software problem would be, and they answered in terms of seeking help and finding workarounds. There was no mention of any thoughts about a person or a motivation behind the software; it was just a fact that needed to be attended to.
So I think I will need to search the literature to find out what has been written about the differences between how tool-makers and tool-users relate to their tools, and then try to figure out how that has affected my interview outcomes.
Thursday, July 2, 2015
Interview thoughts
I have (finally) finished coding my interviews in NVivo. It's a nifty system, but painfully slow, even on my quad-core i7 iMac. The outcomes were interesting.
I got a lot of feedback on the use of Facebook (as I mentioned previously). Interestingly, many of my interviewees were not big Facebook users, and a couple didn't use it for anything except their studies. One had created the Facebook account specifically to join these groups. I think this was partly an artifact of my sampling process - many of those I interviewed were those who had posted a lot of resources into my system, so they were the ones who hadn't instinctively gravitated towards Facebook as a first choice. There were mixed feelings about Facebook - some were very positive about what was happening there, seeing a lot of valuable resource sharing going on, and others annoyed by the amount of noise that was unrelated to their study needs. Several expressed disappointment about the quality of the engagement - that students were far too exam-focused and weren't interested in useful medical knowledge that was related to the curriculum but not assessable.
Regarding the use of my tools, the major complaint I got was that they were too fine-grained. Linking the resource sharing to Learning Objectives meant that it was hard to get an overview of the resources that were available, or to see new ones as they were posted (Facebook, on the other hand, notifies you when there is a new post in a group that you belong to). A couple mentioned that it might be better to link the resources to the week, rather than to one of the 20 objectives in the week. There were a lot of other complaints, but they were largely related to the curriculum, not the system itself. I was a bit disappointed by the lack of usable suggestions for improvements, but in retrospect that's not surprising; the students hadn't been thinking as hard about my system as I had, and to them it was just another feature of the system, something that floated out there in the ether rather than a concrete set of tools that could be modified. As a software developer, this is a hard perspective for me to understand. I naturally look at each tool in my possession with a critical eye, and almost instinctively note the ways in which it could be better. I'm probably the kind of customer software companies hate, as I'm regularly contacting them to tell them how they could do it better. There is probably a whole field of research out there on how people relate to software and other tools.
In the first interview, I realized that a major motivator for students to participate in my interviews was that they wanted to get feedback through to the Faculty. I proceeded to tell each of them that I would be collating feedback and giving it to the Faculty. There ended up being a lot of feedback, some quite blistering, about the quality of the course. Each student said that overall, the experience was good, but each found significant problems with the course and the culture it promoted among students. My next step is to sort this feedback into useful groups and pass it to the head of the program.
The interviews have turned out to be extremely useful, and I probably should have conducted them a year earlier - they led to some real insights into what is happening and why it's happening, and what steps I should take next in my research.