One thing I find interesting is the idea of applying methods from econometrics and labour economics to measure productivity in software development. At the moment it seems like many of us (myself included) base our opinions of "what works" on our own perspectives and on anecdotal examples (which, as I've said before, don't really count as evidence) - although that claim is itself an anecdote which may or may not be true.
It would be nice if we could come up with some good analysis techniques to give real support to our claims about what works and what doesn't - and better yet, when something works and when it doesn't.
It turns out, though, that measuring productivity in software development is quite hard. All sorts of problems arise that make analyzing data and testing hypotheses rather difficult. This article is the first in a series on some of the problems I've thought of, and I'd be more than happy to hear what you think about these issues.
The first problem is that of unobservability (also known as latent variables): some variables can't be measured because we can't see them directly the way we can with observed variables. An example of this is ability: it is common knowledge (I think) that there are varying degrees of ability when it comes to developing software. But can we give somebody a stamp saying, "this person has ability X"? Sure, we can use some indirect measures like lines of code produced per hour or bugs closed per day or some junk like that, but these are simply proxy measures that result from ability; they are not ability itself. Compare this with observed variables like years of experience, language/methodology used, or team size: we can directly give these variables a value in some specified unit, so they are observed.
These types of variables are problematic because they are difficult to hold fixed. Since we can't observe them, we often end up with omitted variable bias: an unobserved variable is correlated with one of our observed variables and also affects the outcome, so when we see an increase in productivity we credit the observed variable when the unobserved one is really doing the work.
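To make that concrete, here is a minimal sketch in Python. Everything in it is invented for illustration - the variable names, coefficients, and noise levels are assumptions, not real data. Ability is the unobserved variable, years of experience is observed but correlated with ability, and a naive regression of productivity on experience alone overstates how much experience matters:

```python
# Hypothetical simulation of omitted variable bias. All names, coefficients,
# and noise levels are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

ability = rng.normal(0, 1, n)                        # unobserved (latent)
experience = 5 + 2 * ability + rng.normal(0, 1, n)   # observed, correlated with ability

# True data-generating process: experience is "worth" 0.5, ability 2.0.
productivity = 3 + 0.5 * experience + 2.0 * ability + rng.normal(0, 1, n)

# Naive regression of productivity on experience alone (ability omitted).
X = np.column_stack([np.ones(n), experience])
beta, *_ = np.linalg.lstsq(X, productivity, rcond=None)
print(beta[1])  # roughly 1.3, far above the true 0.5
```

The regression attributes ability's contribution to experience because the two move together - exactly the omitted variable bias described above.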
I've thought of a few unobserved variables in software development; feel free to add any more that you think of:
1) Ability (the obvious one). I've already talked about this one to death, so I won't go into much more detail here.
2) Team Chemistry. You can throw a bunch of people into a room who are individually extremely good at what they do, but that doesn't mean they'll get a lot done - if they all sit there bickering over testing frameworks or variable names or other things like that, not much will get finished. Likewise you can put together a group of people who may not be super geniuses, but if they work well together you still end up with some good results. This is an important factor in a team's productivity, and you can't really stamp a number on it.
3) Productivity itself. All this time I've been talking about measuring the effects of various factors on productivity, but we don't actually have a way to directly measure productivity. You can see the indirect effects of productivity: milestones get reached sooner, bugs get fixed faster, fewer bugs get introduced in the first place, etc. But we don't have a measure that lets us say something like, "the combination of development methodology X with N programmers of E experience and ... gives us P units of productivity." (There's a toy illustration of this below.)
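Here is a hypothetical sketch of point 3 - the proxies, their weights, and noise levels are all invented assumptions. Several noisy metrics each reflect an underlying latent productivity, and even a composite of them only approximates the thing we actually care about:

```python
# Sketch: productivity as a latent factor behind noisy proxy metrics.
# The proxies, loadings, and noise levels are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
productivity = rng.normal(0, 1, n)  # latent: we never get to observe this

# Each proxy reflects the latent factor plus its own noise.
bugs_closed  = 1.0 * productivity + rng.normal(0, 1.0, n)
loc_per_hour = 0.5 * productivity + rng.normal(0, 1.5, n)  # weak, noisy proxy
milestones   = 0.8 * productivity + rng.normal(0, 0.8, n)

def standardize(x):
    return (x - x.mean()) / x.std()

# A composite of standardized proxies tracks the latent factor better than
# any single proxy does, but it still isn't productivity itself.
composite = (standardize(bugs_closed) + standardize(loc_per_hour)
             + standardize(milestones)) / 3

for name, proxy in [("bugs_closed", bugs_closed), ("loc_per_hour", loc_per_hour),
                    ("milestones", milestones), ("composite", composite)]:
    print(f"{name}: corr with latent = {np.corrcoef(proxy, productivity)[0, 1]:.2f}")
```

The composite correlates more strongly with the latent factor than any individual metric, but the gap never closes completely - which is the whole problem with treating proxy measures as if they were productivity.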
I thought about including the complexity of the project, but I'm not sure it affects actual productivity. It can affect some of the metrics used to measure productivity, but if you view productivity as how much someone gets done per unit of time, then I don't think complexity makes a difference the way that, say, having two monitors does. Then again, maybe I'm completely off base and need to go to bed soon, so this point is up for debate.
All these unobserved factors make it rather difficult to do real quantitative analysis of software development. One possible solution is to use experiments to analyze the various factors, but those come with their own set of issues, which I will discuss in future posts.
Mar 2, 2011