Sunday, February 22, 2009

Productivity as information network

There are some great descriptions of how to achieve productivity around the web. A culled list of things which the more enlightened models seem to agree on looks like this:

  1. Measure productivity in market value change, often derived from user value.
  2. Maintain a special interest in anything you did which has negative market value so you can remove it.
  3. Avoid anything which reduces your efficiency towards creating value in the future.

This list can be described with other words:

  1. Test the production output before attaching it to the product.
  2. Handle all kinds of defects consciously, from bugs to bad ideas and everything in between. Lean calls this "stop the line" quality.
  3. Never sacrifice product integrity to increase productivity.

To achieve positive productivity and develop market value there are several techniques in the production toolbox which are probably more or less familiar. There are lots of bad or outdated techniques which have been tested and found to be bad, such as working overtime, measuring productivity in code or features, and developing towards a rigid specification. These bad ideas are not that interesting, but it happens that even a modern team falls into their traps.

The interesting things happen on the good idea side.

  • Team size limits

Team size is around 7 members. Whether this is because our brains are good at chunking the function of each team member, because biological networks get saturated with communication if nodes receive information from more than 7 sources, because of some other reason such as creating a temporary family, or because this is about the number at which you can find all relevant disciplines for creating user value, I don't know. The good thing is that the reasons do not really matter.
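One way to make the saturation argument concrete (this is my own illustration, not taken from any of the models above) is to count the pairwise communication channels in a team: they grow roughly with the square of team size, so a few extra members add a disproportionate communication load.

```python
# Number of distinct person-to-person links in a team of n members:
# n * (n - 1) / 2. The function name is my own, for illustration only.

def channels(n: int) -> int:
    """Pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

for n in (3, 5, 7, 9, 12):
    print(f"{n:2d} members -> {channels(n):3d} channels")
```

At 7 members there are already 21 channels to maintain; at 12 there are 66, which hints at why groups past this size tend to splinter or filter each other out.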

  • Responsibility distribution

The team, not a boss, determines how to create value within given constraints. Scrum does this partially by defining goals which the team converts into tasks aimed at realizing the value of a time box. Lean appears to approach this through something I recall as "on demand production": pull scheduling the development so it happens when the market demands it.
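A minimal sketch of the pull idea, under my own simplifying assumptions (the class and method names are invented for illustration): work is only started when downstream demand asks for it, never pushed ahead of demand.

```python
from collections import deque

class PullQueue:
    """Toy pull-scheduling sketch: the backlog holds ordered ideas,
    and nothing becomes committed work until something pulls it."""

    def __init__(self, backlog):
        self.backlog = deque(backlog)

    def pull(self):
        """Start the next item only when demand asks for it; None if empty."""
        return self.backlog.popleft() if self.backlog else None
```

For example, `PullQueue(["feature A", "feature B"]).pull()` yields `"feature A"`; nothing else is started until the next pull arrives.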

  • Understanding of User Needs

The whole team understands what the product aims to be and how it aims to meet a demand. This often shows up as a transparent product vision and in the methods used for testing user value, where the whole team is involved in understanding the needs and problems of real users.

The whole business as a biological network

I will now make a very poorly grounded statement:
- The whole business is a biological network with nodes made out of humans.

This network exchanges information about what needs to be done to satisfy a market, with the goal of making a profit in money or sometimes in a less direct unit such as "cred" or brand awareness.

Biological networks are relatively well documented on a scientific level. I am not well informed on the subject but there are some properties of biological networks which even I understand and which may be interesting when optimizing information flow towards meeting market value.

  • Information degrades every time it passes a node.

The network specializes towards dealing with certain types of information. A common specialization is to improve short range communication at the cost of gimping long range communication. We cannot, for example, read the emotional state of a source node at some distance very well, and emotions carry a lot of information.

  • The cost of relaying information across the network affects the amount of information relayed.

If I understand this correctly, we humans have learned that communicating information across several nodes is close to impossible. We end up either limiting the message to something insignificantly small and useless, or saturating the network with the protocols needed for the information to survive its journey through the network. By the way, such measures do not work very often. When implemented they provide a structure for social mobility, and people create links when communication is needed.

  • Node failure is random

A node which gets swarmed with information will fail to operate properly. This causes some hesitation towards accepting information among hub nodes. The practical effect is the same as above: the network limits itself to avoid non-random node failure.
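The degradation property above can be sketched as a toy model (my own illustration, with an invented retention parameter): if each node passes on only a fraction of what it received, the surviving information falls off exponentially with distance.

```python
# Toy model of information degrading per relay hop: each node
# retains and passes on only a fraction of the incoming message.
# The 0.6 retention figure is an arbitrary assumption for illustration.

def fidelity_after(hops: int, retention: float = 0.6) -> float:
    """Fraction of the original message surviving after `hops` relays."""
    return retention ** hops

for hops in range(5):
    print(f"{hops} hops: {fidelity_after(hops):.3f} of the message survives")
```

Two hops already leaves only about a third of the message intact under this assumption, which is why "at most 1 node distant" keeps reappearing below.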

It looks like the good ideas of productivity are natural adaptations towards optimizing the biological network.

  • The product vision is integrated with the team.
  • Defects are seen by the whole team.
  • The actual user value is measured directly by every team member.
  • The decisions on how to react on feedback are determined by the team.
  • Relevant information for the team is at most 1 node distant.
  • Constraints are negotiated between the team and the client.
  • End users deliver feedback directly to the team.

The interesting part here is that I don't see many other organizational structures having taken this approach towards optimization. They probably have a slower Darwinian evolution cycle, while software development processes often mutate several times per year. Lots of large organizations appear to try strong-arming their biological networks into working as digital networks, and actually go ahead with implementing communication protocols rather than reorganizing the network to fit naturally with our human senses. (Hello, email rules and conventions.)

The infrastructure problem

If your user value is heavily dependent on large infrastructure, such as a nuclear plant and a power grid, I would not dare guess whether this type of optimization scales up. It would appear reasonable to think that you can scale the production of such a large system efficiently by optimizing the information network to fit the properties of the biological network which builds these kinds of things. You will probably get a few layers of abstraction between the development team and the actual user value, defects and future velocity.

This little post should have had some nice images with it. Maybe I'll update it someday if it still makes sense in the future.


  1. Very interesting post, I'm glad to see your thoughts translated into text. There's definitely room for increased contextualization to make your ideas easier to interpret correctly, but it's a good start.

    I have two comments I'd like to make, I think you'll recognize my thinking here from our many prior discussions:

    1. The problem of interpreting data on user behavior

    Too many industries that work with user testing, analysis of user behavior etc do this without properly understanding or employing working methodologies. There is ample knowledge to be found in many research oriented fields.

    Since it's my own background, I'm a bit biased when I say that sociology and social psychology have pretty much directly translatable methods. Looking at what we're doing ourselves in our workplace I'd say it's a poorly maintained Grounded Theory methodology.

    I find it very difficult to discuss this problem, however, since the general experience in research methodology is so low within engineer driven contexts. There is a great distance to go for a person trained in - albeit arguably - objective theories revolving around logic and numbers, when he or she needs to understand users.

    I recognize this problem from my time as a student within the philosophical faculty, when trying to communicate with students from the technological faculty. Many, many times the tech students failed to understand what the hell we were doing. "How can it be so difficult to get knowledge on what XXX people think about YYY"; "What can you possibly learn about reading a text that takes so much time? It's just a text, how difficult can it be to interpret it? I can read, it's right there, just read it!" are examples of exclamations regarding the problems of securing validity and reliability of qualitative data; and the many aspects of discourse theory, respectively.

    It's eerie how these very same sentiments now echo from colleagues, regarding the same divide between sciences. "How can this be so difficult, I've seen it done a thousand times!", "Just do the simple solution, it can't require that many iterations" are opinions shared regarding new features.

    In this context I find it a bit hard to see where to start. How can one even discuss the problems of observer induced behavior changes when there isn't even a basic understanding of terms like validity and reliability?

    2. Guessing is seductive

    I'll go out on a limb here and illustrate this problem by referring to the human brain's intrinsic desire to identify patterns. Even in chunks of random data we do this, by calling it names like "EAS", "DES", etc. Hm... I'll stop myself here before transgressing. Suffice it to say (I hope) that we really really don't like it when we can't see patterns.

    Random acts of violence are a lot scarier than acts based on ideology, no matter how twisted. Religion foregoes science by inducing patterns in things like thunder storms, before we managed to invent the field of meteorology. Theories like "The Ways of God Are Inscrutable", "Kismet", etc are invented to guard against unpredictability in everyday life.

    Now, transferred to our field, I find a good analogy in Kurt Vonnegut's autobiography "A man without a country" which I read a while ago (well, it isn't really his biography but probably as close as he'll ever get to writing one). In one of the chapters he writes about "the guessers". Bit of a long quote here, but I'd rather relay the information as written by the great man himself than dilute it with my own interpretation:

    "Persuasive guessing has been at the core of leadership for so long, for all of human experience so far, that it is wholly unsurprising that most of the leaders of this planet, in spite of all the information that is suddenly ours, want the guessing to go on. It is now their turn to guess and guess and be listened to. Some of the loudest, most proudly ignorant guessing in the world is going on in Washington today. Our leaders are sick of all the solid information that has been dumped on humanity by research and scholarship and investigative reporting. They think that the whole country is sick of it, and they could be right. It isn't the gold standard that they want to put us back on. They want something even more basic. They want to put us back on the snake-oil standard."


    "Do you remember those doctors a few months back who got together and announced that it was a simple, clear medical fact that we could not survive even a moderate attack by hydrogen bombs? /.../ What was the response in Washington? They guessed otherwise. What good is an education? The boisterous guessers are still in charge - the haters of information. /.../ Please don't do that. But if you make use of the vast fund of knowledge now available to educated persons, you are going to be lonesome as hell. The guessers outnumber you - and now I have to guess - about ten to one."

    This is made even more problematic when guessers join up with liars. Or rather, it's mostly the same kind of people. Guessing is done downwards in the hierarchy, and lying upwards. Guess what should be done next, and lie when the results aren't really as predicted - or more often when there isn't enough competence in place to interpret the results.

    This structure has gotten us far. Not without pain, but still, we exist. It's been a close call a couple of times, and we have extinguished quite a few people by guessing and lying. Hitler was quite good at this. We're starting to see the extent of the guessing and lying now, as the climate is screaming its state louder than politicians can try to silence the science community by employing their own pseudo scientists; and as the ownership of information is breaking up. Bloggers and their like are starting to make it really difficult to control the flow of information.

    Ok, close to transgression again. Getting back on track, I'll add the final ingredient to the mix: The Path of Least Resistance. All networks move towards optimizing the energy needed to relay information. This is true also for the human brain.

    Within the game design community this is formulated as "Games that rely partly on luck are more casual friendly than those focused exclusively on skill". As long as we feel that we are learning something, getting better, we're happy. This is often used to design a good user progression in a game, by presenting the player with ever more difficult challenges - but never too difficult based on the previously possessed skill set. Our brain is wired to tell us that we're getting better when we get lucky throwing dice three times in a row. It's also wired to tell us that it's just bad luck when we roll crappy dice. We want to feel like we're learning, like we're navigating new patterns. We like that.

    Ok, over to the conclusion based on these assumptions of mine: Our Desire For Patterns; Guessers and Liars and their ability to have gotten us at least this far; and finally User progression and The Path of Least Resistance.

    These assumptions paint, at least to me, a clear picture of a core problem: it is extremely easy to guess when confronted with a problem too complex to solve in a convenient way. That guess is also easily and erroneously transformed into something that should be used as a guide to decide what to do.

    Religious people have done this for ages. Here we can see clear examples of how guesses are often injected with subjective wishes. "Do as God wants and you go to heaven... and he really wants you to give us money. Or he'll be angry. 'Cause he's really ... vengeful." Or in our field: "We should do a product that people like... and they really like feature XXX. Implement that or we won't get any money. 'Cause we really need to get more features in."

    I believe that one way to minimize the impact of Guessers and Liars is to create a peer to peer based network rather than a vertical power hierarchy. If no one feels that it's his responsibility to make The Hard Decisions, no one will induce guesswork to solve problems without actually solving anything. Well, it might happen less often at least.

    Another way is to make sure that everyone involved in the process has grasped the same patterns. This can be done either by including only morons, so that no patterns are grasped and everyone guesses. This seems to work well for some corporations. Just like gambling produces enough winners to fool a great deal of people into believing it's something to put your hopes on. Or, it can be done by including multi-disciplinary people with a true interest in understanding the entire problem, who go forward as one unit, with all problems found. This means exchanging reports with workshops, and roles based on competence with roles based on responsibility. Scrum does this to a certain extent, but it's a bit too vague on the details to ensure this happens.

    There is a lot more that can be said on this subject, but I feel that I'm starting to reach territories I haven't really spent that much time in with my thinking hat on. So, I'll stop here and look forward to your thoughts on this.

    /Peter, who'll do a mirror post of this on his own blog as soon as he gets his virtual hosts up and working again.

  2. In short, I read your comment as specifically aimed at the problem of measuring productivity in the first place. I believe a lot of the "best practice", as I have encountered it through a mix of doing it myself and reading about how others have done it, aims to handle the problem by directly connecting the whole team as a measurement tool. This is not a fail-safe method, but it is better than everything else I have tried myself so far. And it's probably what is reasonably available in short order to most dev teams.

    Taking the longer way around, the discussion splits in two as I see it, which is interesting.

    The first split is my own perspective. How much of this notion has a religious type of relationship and how much of it is real?

    The whole idea that you can look at productivity towards market value as a biological network is a guess, or maybe a bit better, a "hunch". Practically, I believe this idea could be religiously implemented as a tool which checks a development process for theoretical errors.

    Check 1: Does anyone 2 or more nodes away from the team have any opinion about the product which we might consider adjusting towards?

    If yes then expect failure.

    Solution: practically ignore such information, or integrate the source into the team.

    Check 2: Does the team worry about what information more distanced nodes have access to?

    If yes then expect lowered productivity.

    Solution: Disempower distanced nodes, or move these nodes closer to the team. (Moving the team closer to the nodes might be more of what happens.)
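    The two checks above could be encoded literally as a small routine; this is a tongue-in-cheek sketch, and the function name, argument names and warning strings are my own inventions for illustration.

```python
# `opinion_distances` holds, for each stakeholder with an opinion the
# team might adjust towards, their network distance from the team in nodes.

def check_process(opinion_distances, team_worries_about_distant_nodes):
    """Run Check 1 and Check 2 against a development process."""
    warnings = []
    # Check 1: opinions from 2 or more nodes away predict failure.
    if any(d >= 2 for d in opinion_distances):
        warnings.append("expect failure: ignore the source or integrate it into the team")
    # Check 2: worrying about distant nodes predicts lowered productivity.
    if team_worries_about_distant_nodes:
        warnings.append("expect lowered productivity: disempower or move the distant nodes closer")
    return warnings
```

    A process where every opinionated source sits 1 node away and the team is unconcerned about distant nodes passes with no warnings.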

    An interesting question relates to how to handle relevant information generated 1 node away from the team. As I mentioned, best practice seems to handle this by making these distanced but acceptable sources do two things.

    1: Set constraints on the team: budget, goals and probably a few other administrative types of constraints. This comes from the administrative side of external information sources.

    2: Inform the team of production value. This is done by the users or user representatives. Not by administration.

    Some products have multiple kinds of users; some are "end users" and some are "back-end users", and both should be considered users with a demand to be met. However, the back-end users are not the ones who can help you understand your productivity; they can instead help with more specific information about defects in the product.

    Guessing, according to this primitive network theory, is a gamble for social mobility. If you happen to be responsible for a successful guess you might update your position in the network, moving closer to the nodes you want to be close to.

    However, as you already said, gambling is a fool's strategy towards profit.

    I believe there might be one trick to deploy to bridge the gap between philosophers and technologists in our particular case, and that returns the discussion to the matter of art. Even if the work is done with a technical tool, the responsibility of the whole team is to produce a work of art.

    The audio engineer is contributing artistically to the neoclassical rock album, not by writing the melody but by bringing the melody to the psychoacoustic sensory system of the audience. Audio engineers know this; it is integrated with the culture around the art form of music. The same should apply to interactive software. The code has no other purpose than to bring the work of art to the sensory input of the end users.

    I realize the journey is quite long, but it should be quite an interesting trip for everyone involved to get a better understanding of artistic expression. ^^

    The second split is how to handle guessing when it directly influences decision making.

    I think this one requires conflict. It is not obvious when guesses happen. I sometimes find myself guessing, mostly by elimination of other alternatives, and maybe even more often out of cultural setting. The cultural setting also comes from constraints which limit the ability to break free from guessing except by inducing violent conflict.

    I see this as the "magic circle" which limits the available moves to a set of tools which are the verbs of a strange game. The game spirals farther and farther from reality every turn it plays out. The winning conditions become a meaningless paradox outside the game.

    Synchronizing these winning conditions with a productivity reality is an interesting problem.

  3. I do not have any theoretical ground to stand on but my experience from software development agrees with you.

    - Team size limits
    I cannot comment upon the actual limit, but when groups grow I have observed that I, and others in the team, stop listening to what others are saying and doing. Since I cannot take in what everyone is doing, I react by filtering out some or all of the others.

    - Responsibility distribution
    Tricky one. Often the team will tend to push the responsibility for decisions upwards. If the team has all the competences needed, this should be something to work against. Otherwise it might be a signal that the team lacks the competence needed.

    - Understanding of User Needs
    Crucial, and lacking in many projects. And it does not matter how good you are at doing stuff when you do the wrong things.


    Meh, I cannot compete with your walls of text. Nice reading anyhow!

  4. This one is interesting: "Otherwise it might be a signal that the team lacks the competence needed."

    What are the odds that the competence which is really needed is more reliably useful towards creating user value outside or "above" the team?

    I might argue the opposite: that the higher levels of the hierarchy are more likely to be even more distanced from the end user, and hence have even poorer access to information useful for developing user value.