Sunday, December 1, 2013

Engaging teachers with my research

One of the key challenges of my research is that in the course I'm performing my research on, my main role is not that of a teacher; it's that of a technologist. I'm a part of the team that develops the software that the students use, and that my social resource sharing tools are built into. This means that I don't have any direct influence over the students - I can't rock up to my lecture and say "hey you guys, post some resources, and interact through these tools". I can't make using the tools a requirement for a pass mark in the course.

I've tried to turn this into a virtue - it makes my design research purer, in a sense. If the students start using the tools, it will be because of the value of the tools themselves, not other pressures on them to conform. But it would be nice to have some engagement with teachers - if, say, some of them pointed students towards these tools when asked how they could share curriculum materials. The staff running the course have decided that they'd rather not engage with these tools - they asked me specifically to ensure that the content in the social networking tools would be hidden from staff view, so that they wouldn't then get pressure to improve the curriculum based on the student resources. This was a bit disappointing, but understandable. What it means, though, is that staff who would have been interested in these tools, and would have encouraged students to use them, don't even know they exist. As a result they are probably suggesting the students set up their own wikis, Facebook pages, or other tools, to share learning resources. I've come across a few of these - one I found at a very nascent stage, and suggested the use of my sharing tool, but they have since dropped off the radar; another went as far as finding project funding to build a wiki, but then I started working with them and added features to my system to meet their needs (they have also dropped off the radar, now interested in other projects).

I'm hoping I'm not rediscovering the well-understood model of teacher-driven social learning - that's been done to death. It works, but it's not new and interesting. I still want the students to spontaneously start using the tools built into their virtual learning environment, but I don't know if I'm barking up the wrong tree or if I just haven't yet found the right bits of interface design to lower the barriers to entry enough that they start flooding in.

Sunday, November 24, 2013

Social networks and the zombie apocalypse

Today I read a short story called Feature Development for Social Networking, by Ben Rosenbaum, which tells two parallel stories of the use and development of social networking features during the early stages of a zombie apocalypse. It was fascinating, as it was a very accurate portrayal of how these things actually go down in software feature development - the arguments about how a feature will be implemented (who can use the feature; precedents to justify certain aspects of the feature; what the feature can be integrated with; arguments about the approval process), and the rush to get the feature in at the right time are all very reminiscent of my experiences as a software developer.

The story also very elegantly shows the way a social networking site straddles the virtual and physical. Social network sites don't exist in a separate "cyberspace" - they are attached to many points in the real world, and effectively act as information shortcuts between those points. Most of those points in the real world are people, but more and more social networking sites link to places and objects, as we like* venues and organizations, and check in to locations. The social networking site is a bridge between different parts of the real world, more so than many other online experiences. The story reflects this: the feature being developed is done so in response to a real-world event - the zombie apocalypse - and the people reacting to the zombie apocalypse through the social networking tools are using it to communicate these real-world events and interactions they've had.

As a last note, when I was trying to dig up the story (I read it earlier today) to link to on this blog, I found Zombie Friends, a zombie-themed social network, and Zombie Passions, a zombie-themed dating site. O brave new world, that has such people in't!

* I've italicized these words to indicate I'm referring to the social networking meaning of them; we're past the point where wrapping quotes around them makes sense, as they're no longer a novelty; in some places, the social networking version is the primary meaning of these terms.

Saturday, November 9, 2013

New avenues

I'm sure you, the frequent reader of this blog, are now well aware that I'm not getting the wagonloads of data that I'd been hoping for, which is leading to some disappointment on my part. This hasn't escaped my supervisor's notice, and we've started discussing a Plan B - how I can turn what I'm doing into a viable PhD even if the usage never takes off.

The first part will be to finally sit down and conduct a series of interviews with students - this has been part of the plan from the start, but I've been waiting for the right time - I was hoping to catch the first wave of usage, and interview students just as usage was starting to seriously take off. But it's looking like a sensible move to interview a group of students sooner rather than later, so I'm going to start work on that early in the new year, with the aim of interviewing 8 students in Q1 2014.

The next thing I'll do differently was suggested by my supervisor, and it seems like a nifty idea. The plan is to start doing some research into what other projects are out there doing similar things. Other people are bound to have had similar ideas, and by comparing across a range of projects I'll have some interesting things to talk about for my thesis, but I might also find some important insights that I've missed, and that might help my project. I'll be searching for similar projects, and I guess categorizing them in a range of dimensions of similarity to my project:
  • social networks for learning
  • embedded social networks in LMS
  • custom-built social tools
  • linking student interaction to learning objectives
  • university student participants
  • medical elearning
  • resource sharing (vs. discussion, etc.)
  • self starting, voluntary network (as opposed to required and assessable)
  • low/no staff participation
I'll search for these projects through the literature, through web searching, and if necessary through word of mouth. Once I've found a few (maybe half a dozen or a dozen), and gathered as much information as possible, I can interview the creators of the systems to discuss their ideas and how successful they were. To what did they attribute the success or failure of their network? What pedagogical and design ideas led to the creation of the network? What did they have to change?

This search should result in some interesting ideas. At worst, it's an interesting small chapter in my thesis, and my main research plan will come good and I'll have some solid data to look at. At best, it will be the core of my thesis, with my DBR work being one of a number of examples that I use, and one I have particular insight into.

Sunday, October 20, 2013

PhD productivity


I went to a seminar this week given by Dorian Peters and Rafael Calvo. They're doing a lot of interesting work on Positive Computing - the deliberate design of computer interfaces and applications to improve people's lives. But what really struck me was when they talked about Self-Determination Theory (Deci & Ryan), which found that our perceptions of our competence, autonomy, and relatedness independently predict the variability in our well-being.

There are obviously other factors, but this is, I think, a big discovery for me. In a PhD, our sense of competence and relatedness are nearly always very low - the lonely, confusing path of the PhD student has been written about many times. In my work life, my autonomy and sense of perceived competence are under direct attack as well. My well-being then directly affects my productivity, in my work life, study life, and personal life. I've found myself in a funk both at work and while studying recently, and having something explanatory to point at is, I think, extremely helpful. It means I can dissect my emotional state more thoroughly, and hopefully get past it.

I've talked about "The Rut" previously, and I've hit it again this year. I can cope with PhD uncertainty and doubt when other parts of my life are going well, but when my professional life is in dramatic flux as well, it's hard to get anything done at all. The Tweet above asserts that dealing with this is a skill, and that makes sense. I think it's time to start working on this skill, and finding the tools I need to help me with it.

Sunday, October 6, 2013

Design Based Research and Agile Methodology


I find the parallels between Agile software development methodologies and Design Based Research quite interesting. I come from a software development background, and in a sense Agile methodologies are a formalization of the approaches software developers naturally tend to drift towards in the absence of formal project methodology. The statement in the Agile Manifesto says it well:

Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan

Agile methodologies are designed to be more flexible and responsive than traditional software development methodologies, and have come to the forefront since the early 1990s as one of the best ways to develop software - the software is built iteratively, in close consultation and collaboration with the end user, and without a defined intended outcome (apart from "build the software the user needs") stated at the start - the software evolves as it is built.

Design Based Research also emerged as a major force in educational research in the early 1990s. It is a response to the traditional research approach of hypothesis testing by conducting experiments with as many variables controlled as possible. The real world of education is messy - people act differently in different contexts, and consequently an experimental result found in the lab doesn't necessarily correspond to an outcome in the real world. One of the ideas that absolutely blew my mind when I first started learning about educational research was the Hawthorne Effect - that people behave differently when they are being observed by experimenters. Design Based Research takes the approach of testing a learning intervention in a real-world setting (a classroom or course), not just to see what effect it has, but to iteratively improve the intervention until it is successfully helping learners learn, and to develop a local theory about why it helps them learn.

 I went into my PhD with the idea that I would develop in a reasonably Agile way, and was delighted to discover DBR, and how compatible the two approaches seemed. The following table is from my Thesis Proposal:


What I've found in practice is that both DBR and Agile development are methodologies designed for groups of people. As a PhD student, I'm largely doing my research by myself, so while DBR still makes sense (though gathering and analyzing the large quantities of data involved is a bit overwhelming), Agile seems like overkill. I'll still write about the parallels between the two approaches in my thesis, and suggest that perhaps Agile methodologies might actually provide some formal structure around the DBR process (which is rather ill-defined and non-prescriptive in the literature as far as I have read), but the extent to which I've actually been able to adopt Agile methodologies and demonstrate their effectiveness is hampered by the fact that there is only one of me, so the need for processes to manage the "team" is limited.

Monday, September 23, 2013

Ratings: trying to encourage students to express the usefulness of resources

One aspect of my system that hasn't taken off well has been the rating system - students are uploading things, but there are fewer ratings in the system than resources - which means there is an average of less than one rating per resource. From where I'm sitting, there are two possibilities for this - one is that the students aren't interested in rating resources, and the other is that the rating tool just isn't obvious, or is hard to use. The design of the tools is a big part of what my research is looking at, and affordance theory is the most powerful way of thinking about these issues. Sharon Oviatt explained (with reference to Gibson) that affordances "establish behavioural attunements that transparently but powerfully prime the likelihood of acting on objects in specific ways". Each tool needs to be designed in a way that pushes the user towards the desired behaviours. Designers can't control how people use the tools they create, but they can design in such a way that the desired behaviours are the ones that users are more likely to perform. In this case, I want students to be collaboratively discovering and sharing the best learning resources, so I need a rating tool that encourages constructive use.

I need to eliminate the second possibility before I can really start pondering the first, and gathering information from students about it in interviews. I have replaced the rating tool, to make it clearer how it works. The original looked like this - the rating tool was a simple "Rate" link at the bottom of the post:

 When you clicked on the "Rate" button, it popped up this:


And it would calculate your rating based on which adjectives you chose about the post. The idea was that it would encourage considered, focused feedback, as opposed to a generic "Like". I suspect the lack of usage was due to the obscurity of the "Rate" button/link rather than the tool being confusing. My stats showed that in a week where there were 2,000 downloads of the resources, there were only 2 ratings of those resources.
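The adjective-to-rating idea can be sketched roughly like this - note that the adjectives and weights below are invented for illustration, not the real list from my tool:

```python
# Hypothetical sketch of adjective-based rating: each adjective the student
# ticks carries a weight, and the post's rating is the sum of the chosen
# weights. The adjectives and weights here are made up for illustration.
ADJECTIVE_WEIGHTS = {
    "clear": 2,
    "comprehensive": 2,
    "useful": 3,
    "confusing": -2,
    "outdated": -3,
}

def score_rating(chosen_adjectives):
    """Convert a set of ticked adjectives into a single numeric rating."""
    return sum(ADJECTIVE_WEIGHTS.get(adj, 0) for adj in chosen_adjectives)

print(score_rating({"clear", "useful"}))        # → 5
print(score_rating({"confusing", "outdated"}))  # → -5
```

The appeal of this design was that the score falls out of considered judgements about the resource, rather than a bare number or a generic "Like".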

So clearly, a redesign was in order. I did two things: increased the prominence of the rating tool, and simplified the process. The simplification involved separating out the scoring from the adjectives. Rather than just having a rate button, there is now a thumbs up/thumbs down directly on the page:

Hitting the thumbs up or thumbs down button opens up the rest of the rating tool, and adds colour to the thumb you hit (green for thumbs up, red for thumbs down):
So the new tool is:

  • Easier to see: the button is bigger, and it's much clearer what it does
  • Immediate feedback on action: the colouring is hopefully satisfying to users, and should encourage them to give feedback on all items. Prior to giving feedback, the button looks a little empty
  • Simpler relationship between action and outcome: thumbs up gives a positive rating, thumbs down gives a negative rating
One downside is that the comment tool is no longer available without giving the item a rating. Another is that the user can't close the rating box once they've rated an item - I probably need to add a close button to the rating panel.
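The new interaction flow can be sketched as simple state logic - the class and method names here are my own invention for this post, not the real implementation:

```python
# A minimal sketch of the new rating flow's state (names invented for
# illustration): a thumb press records the direction and opens the rest of
# the panel, the comment tool is only reachable once a rating exists, and
# close() stands in for the close button I still need to add.
class RatingPanel:
    def __init__(self):
        self.direction = None   # "up" or "down" once a thumb is pressed
        self.open = False       # whether the rest of the rating tool shows
        self.comment = None

    def press_thumb(self, direction):
        self.direction = direction  # colours the pressed thumb in the UI
        self.open = True            # reveals the rest of the rating tool

    def add_comment(self, text):
        # The comment tool is gated behind giving a rating.
        if not self.open:
            raise RuntimeError("comment tool only available after rating")
        self.comment = text

    def close(self):
        self.open = False

panel = RatingPanel()
panel.press_thumb("up")
panel.add_comment("Great summary of the lecture")
panel.close()
```

Laying it out like this makes the gating trade-off explicit: a comment can only ever exist alongside a rating, which is exactly the downside noted above.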

It will be interesting to see how the students take to this change - whether an improved interface actually results in changed behaviour and increased use of the tool.


Oviatt, S. (2009). Designing Interfaces that Stimulate Ideational Super-fluency. New Knowledge Environments, 1(1).

Thursday, September 5, 2013

Studying my work

My research project is in the same area as my day to day work. In the daytime I'm a software development manager, and one of the bigger applications we've developed is an eLearning system. By night I'm a PhD student who is studying eLearning (but rarely fights crime). My research topic involves adding functions to the big eLearning app I manage during the day. This is both good and bad, in a number of ways.

Convenience sampling

Not many PhD students have a group of 1100 students to test their theory on, but the eLearning app we've developed is used by that many students, which means that's my sample space. I've asked the students to opt in, so I actually end up with about 700 students in my study, but that's still a lot. My fellow PhD students have to do a lot of negotiation to get a class of 30 students to study. This is definitely on the good side for me.

Access to real source code

My position means I have access to, and permission to modify the application. I don't have to create a new application and set up an artificial situation in which students use the app - I have direct access into the app these students are using on a daily basis, and permission to deploy the changes I make into the live system. This means I'm really testing my theory in a real world setting, which will give me confidence in my findings.

Prevention of unexpected changes of direction

If I were developing in a system that was also being developed by other folks, I would need to worry about whether they might make radical changes to the system that break, or render irrelevant, my functionality. But since I'm involved in all the conversations around changes to the applications, I can ensure that I'm prepared for any changes, so I can adapt my code appropriately.

Keeping focus

Were my PhD in a completely unrelated field, I'd be thinking about entirely unrelated things by day and night. With my current situation, things that I do in my studies benefit my performance at work, and vice versa.

Overdosing

There's a lot of potential for just getting too much exposure to this system, and either getting bored, or being just too close to it to be able to step back and see the big picture. I don't think that's happened, but it's definitely something to be wary of.

Conflict of Interest

Then there is the issue that I am probably extra defensive about the system I'm building. If the University decided to shut down this application and replace it with a third party tool, my PhD would essentially be rendered irrelevant, and I'd have to start over again. So in discussion about the future of the system I am definitely biased, and probably can't make an unbiased decision about what is in the organization's best interests. But I was like that before my PhD anyway - the applications we build are like our babies, and we always want to protect them.

Can't quit my job

Leaving my current job would endanger my PhD work - it would be very hard to get the kind of access to the system that I have now, if I weren't working where I am. I have strongly considered quitting my job a few times over the last year, and one of the things that has prevented me was the difficulties this would pose for my study. Even if I didn't burn any bridges, it would be against normal policy for the university to grant me the kind of access I have now if I weren't a staff member. This causes quite a bit of personal tension.

Sunday, August 25, 2013

Cleaning the CAM database

I'm now pulling nice big piles of data into my CAM database, but I'm finding it rather messy. I'm parsing log files, and there are over 100 distinct URL patterns I need to handle. This obviously brings in a lot of places I can make mistakes - mis-parsing URLs, overly broad matching of a URL pattern, missing variants of a particular URL pattern. So I'm now working on cleaning that all up. The main items I'm looking at are called PBLs and TAs, and each has a clearly defined URL pattern.

It's slow going - I'm gradually fixing up and improving those 100 patterns, and finding all sorts of ways they can be wrong. I'm also finding "errors" in the data - which appear to be incorrect URLs formed by users playing with things in the address bar of their browser, or occasionally from javascript incorrectly accessing unusual URLs. I'm not sure what to do with these events; it's probably the case that I should just delete those URLs from my analysis database, as they will just make analysis harder, with no real benefit.

I'm interacting directly with the database here, using the SQL console. It's the quickest and most trustworthy way to get at the data; but I'm starting to think having some other visualization tools might come in handy at this point to confirm that I'm getting the patterns I expect.
Here's some of what I've done:

Phase 1: add a "pattern" tag to events, to allow mapping of parsing patterns to items in the database. This will let me check that each pattern is catching the right items, and that the code isn't catching and misinterpreting any false positive matches
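Phase 1 boils down to attaching a tag to every rule, so each parsed event records which pattern claimed it. A rough sketch of the idea - the example patterns, tags, and URL shapes below are invented, not my real rules:

```python
import re

# Sketch of pattern tagging (Phase 1), with invented example rules: each
# URL-matching rule carries a tag, the tag is stored on the event, and any
# URL no rule claims is a candidate "error" event to inspect or delete.
PATTERNS = [
    ("pattern1", re.compile(r"/pbl/(\d+)$")),      # hypothetical PBL URL rule
    ("pattern2", re.compile(r"/ta/(\d+\.\d+)$")),  # hypothetical TA URL rule
]

def tag_event(url):
    """Return (tag, captured item id) for the first matching rule, else None."""
    for tag, regex in PATTERNS:
        match = regex.search(url)
        if match:
            return tag, match.group(1)
    return None  # unmatched: likely a hand-mangled or javascript-formed URL

print(tag_event("/course/pbl/4635"))  # → ('pattern1', '4635')
print(tag_event("/junk?x=1"))         # → None
```

With the tag stored on each event, a query like `select distinct followedlink from event where tags='pattern60';` can then confirm that each rule is catching the right URLs and nothing else.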

Phase 2: Look through every single pattern and ensure that it catches URLs that will be parsed correctly by its parsing rule. Use commands like:

select distinct followedlink from event where tags='pattern60';

This will take a long time to do.

Phase 3: Look at items, ensure that they are being mapped correctly. Initially, I was getting PBLs with values like "4635" and TAs with values like "4.04" - I had mis-mapped items. Use commands like:

select item.title, item.itemid, count(*) from item, eventitem where type='TA' and item.itemid=eventitem.itemid and not item.title ~ '^[0-9]+$' group by item.title, item.itemid;

to find incorrectly mapped TAs, and

select item.title, item.itemid, count(*) from item, eventitem where type='pbl' and item.itemid=eventitem.itemid group by item.title, item.itemid order by count;

to find all the PBLs and work out which ones are OK.