Monday, October 1, 2012

Ubuntu 12.04 on macbook pro

I'm a long time linux user; I installed my first distro in 1994. It was slackware, required a bunch of floppies, and was a fairly painful process. Over the years, I migrated from slackware to redhat and finally settled on Ubuntu as my "daily driver". Every computer in our house (a server, three laptops, and two netbooks) is currently running a flavor of Ubuntu, so I decided to give setting up my work machine (a macbook pro) with Ubuntu a try. The big reason I like Ubuntu is that they are doing a good job of making an operating system (OS) that finds a great balance between "you can do anything you want" and "I just want it to work". Ubuntu has thousands of free software packages that are tested and verified to work and that install with the click of a button. Windows and OSX are so far behind in this regard as to not even be contenders. While a little more technical knowledge is necessary to get Ubuntu up and running, the richness of free software typically brings so many rewards that it vastly outweighs the negatives.

I downloaded ubuntu 12.04 LTS, burned the cd, then against all warnings and my better judgement, started the install on a train ride home (about an hour and twenty minutes). Note, the installer tells you to be "plugged in", but like a kid with a new toy on Christmas day, I just couldn't wait. The install didn't quite finish, so I closed the lid, a little disappointed, thinking I was certainly in for an adventure the next morning (perhaps even having to reload my entire machine). The next morning, while waiting for the train, I opened the lid of my mac and unbelievably, the install continued where it left off! One thing I will note is that I did have a wireless internet connection by tethering with my cell phone. Also, for non computer geeks, I would not recommend trying this technique at home... recovering a machine from a failed OS install can be a horribly frustrating activity and is not for the faint of heart. My recommendation is, if the installer tells you to do it while plugged in, DO IT! That having been said, kudos to the installer team, I was frankly amazed.

After the install finished, I poked around a little and while the display seems a little less smooth, my machine seems to be functioning well. I will note a couple of gotchas I'm working through right now:

Two finger scroll doesn't work out of the box.

Going to mouse preferences and selecting "two finger scroll" enabled this. I'm not sure why this isn't the default; maybe windows trackpads don't do this out of the box and it was thought it would be confusing?

The trackpad registers touches unexpectedly

This is a beef I've had with Ubuntu for a while on other laptops: the trackpad doesn't seem to disable quickly enough when I'm typing, and occasionally my mouse pointer will find its way to the top of my screen. In perusing the documentation, it appears there are some workarounds to make this less of a problem... I'll continue to investigate.

Gwibber seems broken when hooking to facebook

When hooking Gwibber to my facebook account, it simply doesn't work. There is a bug reported and confirmed right now, and I can only assume it will be fixed shortly. Frankly, as this capability didn't exist on my Mac (without installing more software), I'm not going to mark it as a serious problem, but I hope to get this working soon.

In my first hour of use (I'm writing this post in ubuntu on my mac on the train right now), these are the only minor issues I've noted. For any self-respecting linux user who really likes Mac hardware, I'm going to highly recommend this as the time to make the switch.

More to follow...

Saturday, September 29, 2012

Developer's Creed

So borrowing an idea from the NCO Creed, I thought I'd do one for software developers: Discipline will be my watchword; the software I create will never be perfect, but I will perform my job to the best of my abilities and remember that every letter I type should have purpose.
Ego is the enemy of good code; I will remember that "I am not my code" and never fail to admit mistakes and fix them in a humble manner.
Value is provided not through MORE code, but less. My software will be succinct.
Energy overshadows ability, I will bring my "A" game and not give up because it's "too hard".
Love for my users is always in my heart; they will break my code, I will fix it.
Openness wins over closedness, I will share my tips and secrets and together we'll all make the world a better place.
Patience is paramount, sometimes the answer isn't the first, second, or fiftieth idea.
Empathy for the newbie and the uninformed wins, you never know when that intern's stupid idea will bear fruit, don't be an asshat when it's suggested.
Respect the matrix, the network effect is a power for GOOD and BAD, know how to wield it responsibly.

Wednesday, July 25, 2012

Effectively communicating software requirements - PART 1

The most important factor that contributes to a successful software project is ensuring that the development staff has a good understanding of the requirements. Incorrect requirements almost guarantee incorrect software behavior. Most often, however, the requirements aren't clearly incorrect, but rather they are vague, ambiguous, or make incorrect assumptions about the best way to apply technology to solve the problem.

The original title of this post was "Writing Good Software Requirements", but I realized after writing the previous paragraph (irony noted) that writing good requirements is actually part of the problem. Having written software my entire adult life and much of my childhood, I've been most effective writing software for myself. This software has always done exactly what it was supposed to do (technical bugs notwithstanding) and the reason is because I had a visceral and complete understanding of the problem I was trying to solve and what factors would directly influence my impression of success.

It follows then, that the number one goal is to effectively communicate the problem that the software is meant to solve. By far, the most effective way to do this is to make the developer have the problem you want solved. Need a better solution for authoring blog posts offline? Tell a developer that this is the only way they can do it and I guarantee you'll get a decent solution. Yes, it will likely need some visual tweaks and you'll need to make some changes to accommodate users who are NOT developers, but the essence of the problem will be solved very quickly.

This is obviously not a way to do things in the real world, but thinking about the problem leads us to some basic rules to follow when gathering and communicating requirements:

Step 1 - State the problem clearly

A common misstep, especially when the person assembling the requirements is technical, is to skip outlining exactly what the problem actually is. Folks will jump immediately to a solution and assume everybody understands why we need it. For example, let's suppose we are creating a system to enable users to write blog posts. For the author, one problem with online editors (like blogspot) is that it's not possible to author posts offline. One version of the requirement is to say something like "enable users to run a copy of the blogging platform on their local machine".

There are probably a thousand business analysts who have looked at that statement and thought "that's PERFECT, what could possibly be wrong with that as a requirement?". Well, there are a number of problems, but first and foremost, it makes a rather large assumption about the mechanism that should be used to do the offline editing. This limits the developer's ability to use the technological assets at their disposal to solve the problem.

The better way to communicate the requirement is to start with a statement of the problem. "Users cannot author blog posts when their computer is not connected to the internet". This initial statement gets the developer thinking about the REAL problem, and not imagining other possible reasons that someone might want to run a copy of the blogging platform locally. More importantly, it also enables the developer to start creatively thinking about the problem in terms of the end user instead of as merely a technical problem to be implemented. This has the added benefit of forcing (one would hope) the developer to start taking ownership of both the problem and its solution, and helps avoid a situation where one implements a really crappy solution to the problem "because that's what they asked for".

Step 2 - Clearly express non-functional requirements

Once the problem has been clearly articulated, the next step is to clearly communicate what other factors will influence the success of a solution. These factors might be "users should be able to author offline posts on their iPhone" and "users shouldn't need to download any additional software to use the solution". These requirements are what architects would refer to as "non-functional requirements" or NFRs. The problem with these statements is that, again, the developer is left to their own devices as to WHY these requirements even exist, which leads us to:

Step 3 - Clearly express the reason for the non-functional requirements

A super way to communicate these is to explain (from a business or user perspective) WHY it is important for these requirements to be met. The iPhone requirement might state something like "20% of our customers have iPhones and we believe we can capture this market and increase our market share by this amount if we have this capability". For the second NFR, it could be something like "90% of our users are non-technical and we believe they would not be willing to install additional software for this capability".

If you are a person responsible for communicating requirements to developers, starting with these three steps will guarantee that you're starting on a solid base and will enable you to supercharge your development team. Realistically, using these simple steps as the cornerstones of your requirements gathering process will yield immediate positive results. If your developers are complaining about "bad requirements", or you feel that your development staff is unable to produce the desired results, take a look at what you're communicating to them and make sure these things are being accomplished.

In a subsequent post, I'll outline some more specific pitfalls when expressing requirements (especially in written form) and give some helpful tips on how to avoid them.

Friday, July 13, 2012

equals and hashCode for dummies (again)

In java, writing equals and hashCode methods is a perennial problem. Newbies and experts alike are confounded when things go haywire, and troubleshooting the problems can be extremely difficult. Common things a poor hashCode/equals implementation will cause are intermittent problems and/or mysterious out of memory errors. To help clear things up, I'll give a 10,000-foot overview of how the core java libraries use the hashCode/equals methods.

Essentially, the hashcode/equals methods are used when storing things in Hashes (HashMap, Hashtable, HashSet). Things often go wrong when folks start using custom objects as keys for these classes. For example, let's say we have a person class that looks something like this:

            public class Person {
                public Person(String orgName) {
                    this.name = orgName;
                }
                private String name;
                public String getName() {
                    return name;
                }
                public void setName(String newName) {
                    this.name = newName;
                }
            }
        

This seems pretty straightforward, no surprises, can't get any simpler than this, right?

Now let's say we add some people to a hashset that represents a club that people can belong to:

            HashSet<Person> myClub = new HashSet<Person>();
            Person jsmith = new Person("Joe Smith");
            Person sjones = new Person("Sarah Jones");
            myClub.add(jsmith);
            myClub.add(sjones);

        

The contract of a set says that it will guarantee uniqueness... to test this, we can try and add a duplicate Sarah Jones:

            Person sjones2 = new Person("Sarah Jones");
            assertTrue(myClub.add(sjones2));

        

This tells us that we can add two Person objects with the same name. In our strange use case, we don't want that; we want only one "Sarah Jones" in the club. To accomplish this, we need to write an equals method to let the system know that our people are unique by name. So we override the equals method by adding the following to the Person class.

            @Override
            public boolean equals(Object o) {
                if (o == null || o.getClass() != this.getClass()) return false;
                Person other = (Person)o;
                return other.getName().equals(this.getName());
            }
        

Now, if we try to add our second Sarah Jones, it should fail right? Well, it turns out it only "might" fail, but it also "might" work (sometimes) and this is where things get wonky (especially for newbies). Consider the following:

            HashSet<Person> myClub = new HashSet<Person>();
            Person sjones = new Person("Sarah Jones");
            Person sjones2 = new Person("Sarah Jones");
            assertEquals(sjones, sjones2);

            // Now for the guts
            myClub.add(sjones);
            assertTrue(myClub.contains(sjones));

            // Random failures after here
            assertFalse(myClub.add(sjones2));
            assertTrue(myClub.contains(sjones2));


        

The above code will randomly fail the third and fourth assertions for no apparent reason. Why? It has to do with how the Hash* implementations work in java. For a really nice, more in-depth write-up, go here, but I'll just give a quick overview.

The Hash* java implementations store things in buckets, typically with a linked list (or array) in each bucket. For example, a trivial implementation takes 10 buckets and, when storing something, takes the hashcode of the object, does a mod-10 operation on it, and then adds the object to the linked list of that bucket (see the sketch after the list below). The problem with our implementation is that we didn't override the hashCode function, and the default implementation in java derives the result from the object's identity (historically, its memory location). So, for example, our code above would work if the following happens:

  • new hashSet is created
  • sjones gets created and stored at memory location 4
  • sjones2 gets created and stored at memory location 14
  • java compares sjones and sjones2, sees they are equal and returns true
  • java adds sjones to myClub by grabbing the mod-10 of the sjones hashcode (which would be 4) and putting it in the list on bucket 4 of myClub
  • java verifies that sjones is in myClub by grabbing the hashCode, seeing sjones should be in bucket 4, then walking down the list calling the equals method on each object. There's only one and sjones.getName() in fact is equal to sjones.getName() so it can be found
  • java then tries to add sjones2 by doing a mod-10 of the sjones2 hashcode, seeing it should ALSO go in bucket 4, then walking the list to see if sjones2 is already there. It sees that an equal object is already there (because sjones.getName() is equal to sjones2.getName()). Due to the nature of the HashSet contract, java will return false to indicate that sjones2 was NOT added because it was already there.
  • Next, java will again look up the hashcode of sjones2, mod-10, look in the bucket, and verify that sjones2 is, in fact, in the set... even though it previously said it was not added (which is totally correct and makes sense if you understand how things are supposed to work)
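To make the bucket arithmetic concrete, here's a toy sketch of the lookup described above. This is illustrative only; real implementations resize, use power-of-two bucket counts, and mix the bits rather than doing a plain mod-10:

            // Toy bucket selection, NOT the real HashMap/HashSet internals.
            int bucketCount = 10;
            // Mask off the sign bit so the index is never negative, then mod
            // by the bucket count (old Hashtable code did something similar).
            int bucket = (sjones.hashCode() & 0x7fffffff) % bucketCount;
            // The entry lands in the linked list for that bucket; lookups
            // recompute the bucket the same way and walk the list with equals().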

If the above results are surprising, you'll want to take a break, but for those of you who are following along, now it gets more interesting. Let's run through the above scenario, but instead of sjones2 getting created at memory location 14, let's say it gets created at memory location 17. This means that NOW sjones2 will get added to the set and you'll end up with duplicates. Worse yet, if you call it enough, you could end up with 10 copies of "equal" Person objects in the Set. How will this happen?

Well, we see that in order to determine which "bucket" to look for an Object, java will use the hashcode. If two Objects that are "equal" have different hashCodes, java will look in the wrong bucket and NOT find it. Because the default implementation uses memory locations as the hashCode, it's often the case that things will "sometimes maybe" work, but other times completely fail.

In short, if you're having wonky behaviour with complex types in Sets or the keys of Maps, carefully verify that all things that can be "equal" will "always" have the same hashCode. Note, things with the same hashCode DON'T need to be equal, but that's a completely different discussion.
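For completeness, here's the piece our broken Person class was missing: a hashCode override that hashes on exactly the same field equals() compares, so two "equal" people always land in the same bucket. A minimal sketch to add to the Person class:

            @Override
            public int hashCode() {
                // Must be derived from the same field(s) equals() uses,
                // otherwise equal objects can land in different buckets.
                return name == null ? 0 : name.hashCode();
            }

With this in place, the assertions above stop failing randomly: add(sjones2) reliably returns false and contains(sjones2) reliably returns true.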

Friday, June 22, 2012

Fixing Hibernate, DB2, H2, and Boolean Issues With a User Type

Edit: It was pointed out that this was a problem with H2, NOT Derby... not sure how I missed that, I've updated (Thanks Nick)

Our team recently started using H2 for local development, with our production database being DB2. One nice thing is that the SQL dialects for these are nearly identical and you don't need to change the dialect to get things to mostly work. We did, however, hit a snag with our boolean fields. When using the db2 driver and defining the column as char(1), boolean values get stored as "1" and "0", but when using the h2 jdbc driver (with the DB2 hibernate dialect), the values were getting translated to "true" and "false", which was breaking because obviously you cannot store four characters in a one character field.

After googling the problem, it turns out a lot of people have run into this and there aren't any obvious solutions floating around. The best bet we could find using hibernate 3.5 was to create a custom user type (based on the hibernate-delivered YesNoType) and annotate all our booleans to tell hibernate to use this type to map these fields. The user type we ended up with looks like the following:

package org.shamalamading.dong;

public class OneZeroType extends org.hibernate.type.CharBooleanType {

    // The single character persisted for Boolean.TRUE
    protected final java.lang.String getTrueString() {
        return "1";
    }

    // The single character persisted for Boolean.FALSE
    protected final java.lang.String getFalseString() {
        return "0";
    }

    // The name used to reference this type in mappings
    public java.lang.String getName() {
        return "OneZeroType";
    }
}

We then modified our Hibernate entities to use this type as follows:

package org.shamalamading.dong;
@Entity
@Table(name = "REPORT_TBL")
public class Report implements Serializable {
    @Column(name = "ACTIVE_FLAG", columnDefinition="char(1)")
    @Type(type="org.shamalamading.dong.OneZeroType")
    private Boolean active;
    public Boolean getActive() {
        return this.active;
    }
    public void setActive(Boolean newValue) {
        this.active = newValue;
    }

}


Now when persisting to the database, hibernate will write "1" for true and "0" for false. This helps when using H2 and db2 together because of the differences in how the drivers natively handle booleans. On an additional note, I think it's interesting that in DB2 (or database) land, it seems pretty common for "0" or 0 to represent "false", while in other corners of computing (unix exit codes, for example), 0 represents success and everything else represents failure.

Thursday, May 31, 2012

How to juice your java performance

Warning: This is a preoptimization

Following up on my previous post about equals and hashcode, I thought I'd point out how to redesign the class in question for better performance. If you have a situation where you create groups of immutable objects but need to pass them around in hashsets and/or use them for lookups, a way to increase performance is to change from something like the following:

package blog.mainguy;

import org.apache.commons.lang.builder.EqualsBuilder;
import org.apache.commons.lang.builder.HashCodeBuilder;

public class LookupEntity {
    private String name;

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    @Override
    public boolean equals(Object other) {
        if (other != null && other instanceof LookupEntity) {
            return new EqualsBuilder().append(this.getName(), ((LookupEntity) other).getName()).isEquals();
        } else {
            return false;
        }
    }

    @Override
    public int hashCode() {
        return new HashCodeBuilder(17, 31).append(this.getName()).toHashCode();
    }
}

to something like
package blog.mainguy;

public class LookupEntity {
    private String name;
    private int hashCode;

    // Private no-arg constructor; the public constructor below
    // is the only intended way to build one of these.
    private LookupEntity() {
    }

    public LookupEntity(String name) {
        this.name = name == null ? "" : name;
        // The object is immutable, so compute the hash once and cache it.
        hashCode = this.name.hashCode();
    }

    public String getName() {
        return name;
    }

    @Override
    public boolean equals(Object other) {
        if (other instanceof LookupEntity) {
            return this.name.equals(((LookupEntity) other).getName());
        }
        return false;
    }

    @Override
    public int hashCode() {
        return hashCode;
    }
}

Note, I have not profiled this; it is based on my perception and understanding of how java works. It is also (as I noted) a pre-optimization that I personally wouldn't necessarily start with (though in many cases I might).
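For a rough sense of where the cached field pays off, here's a hedged usage sketch: a lookup-heavy loop where every contains() call needs the probe's hash. Keep in mind that String already caches its own hash internally, so the win on a single String field is modest; the technique matters more when equals/hashCode combine several fields or when builder overhead is in play.

Set<LookupEntity> index = new HashSet<LookupEntity>();
index.add(new LookupEntity("mike"));

LookupEntity probe = new LookupEntity("mike");
for (int i = 0; i < 1000000; i++) {
    // Each contains() call asks probe for its hashCode; with the cached
    // field this is a single int read instead of rebuilding the hash
    // on every pass through the loop.
    boolean found = index.contains(probe);
}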

Use computers for repeatable tasks, use humans for creative tasks

Pundits often refer to the notion that the most effective developers are 10 or more times as efficient as the least effective. I think this is probably an understatement, but I also think the reason is often oversimplified. Many of the "more effective" developers seem to like to pat themselves on the back and chalk it up to being smarter or some other nonsense. While arguably this is true, I think it is more that the most effective developers HATE doing the same thing twice.

I think this is an important detail because computers are REALLY good at doing the EXACT same thing millions of times per second. People are not very good at this... in fact, it's arguable that it is impossible for a person to do the same thing exactly the same way twice in a row.

The truly effective developers are the ones who use computers to do the repeatable tasks and use their unique human curiosity and creativity to find new things for the computers to do. More importantly, the best software development shops realize this and learn how to foster behaviors and environments that reinforce this division. In GREAT software shops, spending 1 hour per week filling out manual reports and copy/pasting info from one spreadsheet to another will NOT happen and will sound like a really stupid idea.

Wednesday, May 16, 2012

How to shoot yourself in the foot with HashCodeBuilder

The setup

HashCodeBuilder is a nice tool provided in apache commons to help build the infamously difficult hashCode and equals methods in java classes so that they work properly. While I appreciate the tool, versions prior to 2.5 had a subtle api design flaw that is likely to bite you if you aren't careful. To illustrate how subtle the flaw is, take a look at the following code:

package blog.mainguy;

import org.apache.commons.lang.builder.EqualsBuilder;
import org.apache.commons.lang.builder.HashCodeBuilder;

public class LookupEntity {
    private String name;

    public void setName(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    @Override
    public boolean equals(Object other) {
        if (other != null && other instanceof LookupEntity) {
            return new EqualsBuilder().append(this.getName(), ((LookupEntity) other).getName()).isEquals();
        } else {
            return false;
        }
    }

    @Override
    public int hashCode() {
        return new HashCodeBuilder(17,31).append(this.getName()).hashCode();
    }
}

The first surprise

It's probably impossible to see the bug, but I assure you with older versions, the following test will (often) fail.

import blog.mainguy.LookupEntity;
import org.junit.Test;

import java.util.HashSet;
import java.util.Set;

import static org.junit.Assert.assertTrue;

public class LookupEntityTest {
    @Test
    public void lookupTest() {
        Set mySet = new HashSet();
        LookupEntity one = new LookupEntity();
        one.setName("mike");
        mySet.add(one);
        assertTrue(mySet.contains(one));

    }

}

The flaw is that HashCodeBuilder didn't use the hashCode() method to return the built hashcode; that value comes from a method called toHashCode(). We were effectively getting the hash code of the builder instance instead of the hashCode of the "built" entity. This was fixed in 2.5 so that hashCode() now calls toHashCode(), but it illustrates how a simple decision (such as naming a method in a manner too similar to another) can cause subtle bugs.
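For reference, the fix on the calling side is a one-word change: ask the builder for the built value via toHashCode() rather than inheriting Object.hashCode() from the builder instance.

@Override
public int hashCode() {
    // toHashCode() returns the hash of the appended values;
    // hashCode() on pre-2.5 builders returned the builder's own identity hash.
    return new HashCodeBuilder(17, 31).append(this.getName()).toHashCode();
}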

Surprise Number Two

To show how interesting this problem is, the following tests will (often) pass:

import blog.mainguy.LookupEntity;
import org.junit.Before;
import org.junit.Test;

import java.util.HashSet;
import java.util.Set;

import static junit.framework.Assert.assertEquals;
import static junit.framework.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class LookupEntityTest {
    Set mySet = new HashSet();
    LookupEntity one = new LookupEntity();

    @Before
    public void setup() {
        one.setName("mike");
        mySet.add(one);
    }

    @Test
    public void lookupTest() {
        assertFalse(mySet.contains(one));

    }

    @Test
    public void countTest() {
        mySet.add(one);
        assertEquals(2, mySet.size());

    }

}

What?! It appears that the "Set" is broken! The fact is, however, this is exactly why hashcode and equals not agreeing is a big problem. The method java uses when storing things in Hashes is to divide the internal storage into buckets and use the hash to determine which bucket to put something into... it then adds the thing to a list in that bucket to store it. Then, when it's time to see if the set contains something, it looks up the hash of the thing, goes to that bucket, and traverses the list to see if it is there. In our "broken" examples, we are returning the hashCode of a new instance of HashCodeBuilder... every time. This means it stands a strong chance of returning a hashCode that will not match the next time we call it.

Yes, one more surprise

For example, the following test will also (often) pass.

import blog.mainguy.LookupEntity;
import org.junit.Before;
import org.junit.Test;

import java.util.HashSet;
import java.util.Set;

import static junit.framework.Assert.assertEquals;
import static junit.framework.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

public class LookupEntityTest {
    Set mySet = new HashSet();
    LookupEntity one = new LookupEntity();

    @Before
    public void setup() {
        one.setName("mike");
        mySet.add(one);
    }

    @Test
    public void lookupTest() {
        assertFalse(mySet.contains(one));

    }

    @Test
    public void countTest() {
        mySet.add(one);
        assertEquals(2, mySet.size());


    }

    @Test
    public void hashCodeTest() {
        assertFalse(one.hashCode() == one.hashCode());
    }


}

Now we (hopefully) can understand why we're going to have a problem. If I call hashCode twice on the same object and get different results, when the java runtime attempts to see if the object is in the hash backend, it will be looking in the wrong bucket. More insidiously, it will keep adding things to random buckets and the count will continue to grow even though you cannot find things you just put in.

So to recap

First: Designing easy to use APIs that are logical and correct is difficult. Part of the problem with this design was the ease with which someone could use it incorrectly. Never forget #AdamsRule: "When trying to design something completely foolproof, don't underestimate the ingenuity of complete fools." The fix in the later versions of the commons api overcomes this problem very well, I think.
Second: if you're using commons-lang, upgrade to (at least) 2.5 and avoid this particular problem.
Third: If you're NOT using commons-lang and rolling your own hashcode/equals methods, be careful and make sure you understand what you're getting yourself into.

Tuesday, May 8, 2012

Clean code is an important quality, but not the MOST important

Clean Code is a book by Uncle Bob Martin that speaks to principles in software development that lead to high quality, legible software. It is a great book and every developer should read it once (or twice). The book outlines important tools and techniques, but these are not the end, they are a means. I've noticed a lot of folks treat clean code as the "secret sauce" that makes great software. Unfortunately, they take it a little too far; by being too focused on their secret sauce, they forget that it's worthless if the customer starves to death waiting for the secret sauce to be "perfect".

At one point early in the book, Uncle Bob uses the metaphor of physicians and hand washing. In particular, he makes the point that it would be unprofessional, and possibly illegal, for a physician to not wash his hands before surgery, even if the patient (the customer) asked for it. I like this metaphor and I think he makes a good point, but I feel he narrows the perspective too much. I respect his position, and it makes complete sense when talking about elective surgery in a controlled environment with a sterile operating room, but it doesn't apply in all situations. In my experience, most software development isn't like elective surgery, but more like treating wounds in a combat zone under direct fire. In this sort of environment, hand washing (clean code) is important, but not always as important as stopping the bleeding and/or saving the patient's life. It's often more important to get the patient out of harm's way and stop the bleeding than it is to avoid introducing the risk of infection at some future point. Sometimes you might need to sacrifice not only hand washing, but even a leg or two, in order to save the patient's life, and you've still done the right thing.

The more important factor beyond keeping things clean is understanding and delivering value to the customer in a manner (i.e. cleanliness) that is appropriate for the environment. I've been on a few projects where there is 100% test coverage, the code is a beautiful work of art, but the business customer ultimately cancels the project because they cannot afford to operate without the business value the code was supposed to deliver. In our combat zone metaphor, some practitioners are so busy washing their hands, they've forgotten that the goal is to save lives, not prevent infection. Clean code is an important goal, just like preventing infection is an important goal, but not THE most important goal.

Developers, clean code is important, it is downright essential in the long run, but it is not the primary goal of software development. Learn the principles and techniques to keep your code clean, but don't forget that your customer needs the software to do something for them and no matter how clean your code might be, if it doesn't provide real business value in a timely manner, the customer isn't getting what they need. In other words, don't let your patients die from blood loss while you're busy washing your hands.

Wednesday, April 25, 2012

Java programmers: Code to the interface, even if the interface is a class

After spending a considerable amount of time trying to figure out how to refactor some particularly hairly (hairy + gnarly) data access code, I thought I'd share some insight into a popular misconception about what coding to the interface actually means.

Let's say we're writing a data access layer and we have something called the UserDao. A simple implementation might be something like:

public class User {
    public int id;
    public String name;
}

public class UserDao {
    public boolean save(User toBeSaved) {
        // persistence logic lives here
        return true;
    }
}

I'm going to dodge the issue of the User class not having getters and setters (and thus not following the javabean spec) and talk about the UserDao interface. Yes, you heard me, the UserDao class effectively is an interface in this example. Sit down and just think about that for a minute; once you've done that, move to the next paragraph.

A great (GREAT) many java developers might not get to this paragraph because they'll immediately start reacting to the idea that the UserDao isn't a java interface, it's a java class. That is an implementation detail, and I'm here to tell you that by fixating on it you have already started down a path of increased complexity. Why? Because most java developers will, instead of just using the above two classes, add another layer of indirection.

public interface UserDao {
    public boolean save(User toBeSaved);
}

and change the concrete class to implement this interface:

public class UserDaoImpl implements UserDao {
    public boolean save(User toBeSaved) {
        // persistence logic lives here
        return true;
    }
}

Which in my experience is of no value in a large percentage of use cases (let's call it 90% of the time). This is a mistake! I know that "best practices" from just about every source you'll find say this is a good idea, but I'm here to tell you that you are taking on debt and you should CAREFULLY weigh the cumulative cost of that debt. The biggest problem is that in non-trivial systems, this adds unnecessary complexity to the design and makes things more difficult to decipher. There are other problems, but my biggest issue is not just the added complexity, it's the knee jerk non-thought that goes into adding the complexity for no good reason. Imagine if you have 90 DAOs and 90 interfaces, and every change to an interface requires a change in two places.

But Mike! people will say, what if my current implementation uses hibernate and I want to switch to ibatis? Fine, I'd answer, change the implementation of the save and get methods in your simple UserDao to use the other library. An example would be to use composition in the Dao to hook to the particular implementation you need (for example, using spring autowired beans).

public class UserDao {
    @Autowired
    private HibernateSession hibernateSession;
    public boolean save(User toBeSaved) {
        return hibernateSession.save(toBeSaved);
    }
}
and when we decide to use ibatis
public class UserDao {
    @Autowired
    private IbatisSession ibatisSession;
    public boolean save(User toBeSaved) {
        return ibatisSession.save(toBeSaved);
    }
}

I realize it's not really that simple (I don't know ibatis well enough, sorry), but my point is that the class in this example is GOOD ENOUGH as the interface. My rejection of "automatically use a java interface" is because there are good reasons to USE an interface, but this example is NOT one of them.

So when is a good time to use a java interface? The time to use interfaces is when you have multiple things that need a shared interface (set of operations) but don't necessarily have the same concrete class backing them. This design detail is java's way of handling multiple inheritance. In the context of most J2EE apps, DAOs are not a good use of the concept; a better example would be something like getting audit information for multiple entities:

public interface Auditable {
    public String getAuditString();
}

public class User implements Auditable {
    public int id;
    public String name;
    public String getAuditString() {
        return "User " + id + " with name " + name;
    }
}

public class Account implements Auditable {
    public int id;
    public String accountNumber;
    public String getAuditString() {
        return "Account " + id + " with account number " + accountNumber;
    }
}



public class AuditDao {
    public void audit(Auditable toBeAudited) {
        System.out.println("performing operation on: " + toBeAudited.getAuditString());
    }
}

public class UserDao {
    @Autowired
    private HibernateSession hibernateSession;
    @Autowired
    private AuditDao auditDao;
    public boolean save(User toBeSaved) {
        auditDao.audit(toBeSaved);
        return hibernateSession.save(toBeSaved);
    }
}

public class AccountDao {
    @Autowired
    private HibernateSession hibernateSession;
    @Autowired
    private AuditDao auditDao;
    public boolean save(Account toBeSaved) {
        auditDao.audit(toBeSaved);
        return hibernateSession.save(toBeSaved);
    }
}

I realize there are better ways to implement this particular variation, but my point is that the Auditable interface requires implementation by completely different classes at runtime. Hiding things behind interfaces should only be done when necessary and when it provides realistic, known value in the present or the real future. Switching implementations can often be done in other ways when you spend time thinking about your design. Java interfaces are for enabling multiple concrete classes to share the same interface, NOT necessarily for simply defining the interface of a concrete class. With good design, a class will hide its inner details, and an extra interface is just extra complexity.

Wednesday, April 18, 2012

ya ain't gonna need it until ya need it

Yesterday I posted a somewhat snarky comment about how You don't need layers until you need them, which may have seemed like a nonsensical thing to say. Today I started to write an example of how to refactor an anemic data model with lots'a layers into a lean and mean persistence machine... but stumbled into a perfect example of what I was trying to say. In essence, I was trying to repeat the idea that "Ya Ain't Gonna Need It", but with emphasis on the fact that... yes, you may KNOW you're going to eventually need it, but building infrastructure before you need it accumulates overhead that you must pay for, even if you don't get the benefit.

My Example (Snippet of pom file)

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <properties>
        <scala.version>2.7.7</scala.version>
        <spring.version>3.1.1.RELEASE</spring.version>
    </properties>
    <groupId>tstMaven</groupId>
    <artifactId>tstMaven</artifactId>
    <version>1.0</version>
    <dependencies>
        <dependency>
            <groupId>rome</groupId>
            <artifactId>rome</artifactId>
            <version>0.9</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-compiler</artifactId>
            <version>${scala.version}</version>
            <scope>compile</scope>
        </dependency>

        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-core</artifactId>
            <version>4.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>3.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-orm</artifactId>
            <version>3.1.1.RELEASE</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>3.1.1.RELEASE</version>
        </dependency>
    </dependencies>
</project>

What's wrong?

First, take a look at the scala version number. I've prematurely assumed I'm going to have multiple things that depend on the scala version and moved it to a property. Don't do this. Why? Because you don't need it. :) More importantly, if everyone follows this standard, they'll end up doing more work: one step to put the version property at the top of the file and another to put the placeholder down in the dependencies section. Even more importantly, a person wanting to know which things have multiple dependencies on a version number will have no immediate cue as to which things have intentionally identical version numbers. The important theme is to try and communicate INTENT to subsequent developers.
Next, you'll note my spring config has the exact opposite problem: I've got three dependencies that all SHOULD move in lockstep, yet the version number is defined independently in each.
A lot of tech folks will immediately say "just make everything use a property, that way it's all done the same way". I would agree there is some value in standardizing on "how to define the version", but I think more value is lost in adopting this lowest common denominator mentality. In short, by only externalizing the version number when it's necessary, you add a clear signal to the next person looking at the project about which versions multiple dependencies depend on.

The refactored version

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <properties>
        <spring.version>3.1.1.RELEASE</spring.version>
    </properties>
    <groupId>tstMaven</groupId>
    <artifactId>tstMaven</artifactId>
    <version>1.0</version>
    <dependencies>
        <dependency>
            <groupId>rome</groupId>
            <artifactId>rome</artifactId>
            <version>0.9</version>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-compiler</artifactId>
            <version>2.7.7</version>
            <scope>compile</scope>
        </dependency>
        <dependency>
            <groupId>org.hibernate</groupId>
            <artifactId>hibernate-core</artifactId>
            <version>4.1.2</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-core</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-orm</artifactId>
            <version>${spring.version}</version>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-context</artifactId>
            <version>${spring.version}</version>
        </dependency>
    </dependencies>
</project>
This version eliminates the problem of knowing which things must travel in lockstep and which things can version independently. While this requires more thinking when building the pom file, and is certainly a trivial example of a larger problem, I think it illustrates what YAGNI really means. Unnecessary baggage should be thought of as equipment you're putting in your backpack for a 1000 mile hike... sure, it might be nice to carry 20 lbs of first aid equipment, "just in case", but remember you've got to carry all that stuff every step for the next 1000 miles.

Tuesday, April 17, 2012

All java architects read this

I have a couple of quick notes for any aspiring java architects. Please read them carefully and think about them.

Adding layers is BAD

In general, you don't need extra layers until you need them. At that point, add a new layer (but only in the necessary places). Creating "standard" layers just adds complexity, makes maintenance more expensive, and ultimately fosters copy/paste coding and discourages developers from thinking about what they're doing. An example of a good time to add a layer is when you need to hide complicated operations behind a facade because low level database transaction management is being done in the same place as the code that determines which screen should be displayed next (a sketch of this follows below). Too many developers heard "layers add flexibility/scaleability/whatever" and started adding layers to every situation that has an arbitrary division of responsibility... I've worked on systems where adding a table column to be displayed in a CRUD application required changing upwards of 10 different classes... this is a maintenance nightmare.
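To make the facade example concrete, here's a minimal sketch of the one situation described above where a new layer earns its keep. All the names here (TransactionManager, OrderDao, Order, OrderFacade) are made up for illustration:

// All types below are hypothetical stand-ins for your real plumbing.
interface TransactionManager { void begin(); void commit(); void rollback(); }
interface OrderDao { void save(Order order); }
class Order { }

public class OrderFacade {
    private final TransactionManager txManager;
    private final OrderDao orderDao;

    public OrderFacade(TransactionManager txManager, OrderDao orderDao) {
        this.txManager = txManager;
        this.orderDao = orderDao;
    }

    // Screen/navigation code calls this one method and never sees
    // the transaction plumbing hidden behind it.
    public void placeOrder(Order order) {
        txManager.begin();
        try {
            orderDao.save(order);
            txManager.commit();
        } catch (RuntimeException e) {
            txManager.rollback();
            throw e;
        }
    }
}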

Interfaces have a special purpose, you don't need them for everything

Not every class needs an interface... they should be reserved for situations where an interface is useful and not just another unnecessary ceremony that developers mindlessly follow "because that's the way we do it here". A good example of an interface would be something like Nameable or Labelable. These are often contexts that systems need when rendering information ('cause toString() often won't cut it); a short sketch follows below. The key point is that there will be many classes (at least more than one) implementing the same interface in the system at the same time. If you're tempted to hide an implementation behind an interface with the idea that the implementation might change in the future... just use a concrete class and change the dag gone implementation when you need to change it. Don't force every update every developer makes for the next 6 years to be TWICE as much effort...
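Here's a hedged sketch of what I mean. The class names are illustrative, but notice that several otherwise-unrelated classes share one rendering concern at the same time:

public interface Nameable {
    String getDisplayName();
}

class Customer implements Nameable {
    private String firstName, lastName;
    public String getDisplayName() { return firstName + " " + lastName; }
}

class Invoice implements Nameable {
    private int number;
    public String getDisplayName() { return "Invoice #" + number; }
}

class Renderer {
    // Rendering code can display any Nameable without knowing the concrete class.
    void render(Nameable item) {
        System.out.println(item.getDisplayName());
    }
}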

Beware of one size fits all solutions

Don't build the titanic when all you need is a rowboat. I've seen monster frameworks grow because of one tiny edge case. Instead of treating the single edge case as an outlier and walling off that portion of code from everything else, many architects make the mistake of trying to accommodate the edge case in EVERY part of the system. In addition to extra layers and extra interfaces, I've seen systems that generate javascript web service code, soap stubs, extra java classes to handle serialization, and any number of other overcomplicated plumbing... just because one or two calls in the system needed to be remote.

The short version is, don't overcomplicate your solutions and don't start adding code you don't need ahead of time. You'll be carrying that code on your back for every step you take and need to make sure you don't burn out hauling a bunch of unnecessary baggage.

Tuesday, April 10, 2012

The Notoriously Tricky "Step 0"

I recently posted about "Git for Dummies" and noticed that someone commented that they had followed my instructions and hit a strange permission issue. Wanting to verify everything was correct, I did a quick check and, sure enough, I was having the same problem. Upon investigation, I discovered that I had posted a repository url which required a public/private key pair, and folks who hadn't already set this up on their machines would get an error. The post was supposed to have been written for a new user, who typically wouldn't have done this. As a long time github user, I did this one time setup ages ago and had totally forgotten about it.

This is a common occurrence, so common that I give it a name... I call it "Step 0". It is a variation of the curse of (prior) knowledge and can be frustrating both for people trying to learn something new and for people trying to explain something to a newbie. In essence, "Step 0" is the prior knowledge necessary to perform a task that is so fundamental that it is routinely forgotten until a novice attempts the task for the first time.

As with many things, there is no sure-fire way to avoid this, but being aware of the situation can certainly reduce the amount of frustration when it happens. I was about to amend my previous blog post to add the necessary detail, but decided instead to change the url to the "read only" version. Instead of editing the post to add the 5 steps necessary to set up and use the ssh version, I changed it to use the anonymous version that most newbies should start with.

I chose the simpler solution over trying to explain "setting up a github account" and "setting up ssh public/private key pairs" because I think those topics are different, and trying to include too much detail can be just as detrimental as not having enough. If someone follows the instructions and they don't work, they can ask a question.

Monday, April 2, 2012

Exploiting the Cloud for Personal Productivity

I'm currently doing an evaluation of Drools to illustrate the differences and similarities between it and JRules. Due to time constraints, most of this is being done on a train commuting between Chicago and my home. This means I'm doing most of my work at 60mph over a 3g connection. This also means that my network is constantly dropping, slowing down, and otherwise misbehaving.

So to evaluate drools, I needed to download the 300ish mb zip file and set it up. Originally I started downloading to my laptop, but realized this was just going to take longer than I really wanted. So, I fired up an ubuntu EC2 AMI, typed "wget http://download.jboss.org/drools/release/5.3.0.Final/guvnor-distribution-5.3.0.Final.zip", and was ready to roll in 5 minutes.

An additional benefit to this is that I can point people to the ip address of the EC2 instance and they can actually start to use the product without requiring me or my laptop to be present. An even bigger benefit is that I can tune this image, get it really good, then snapshot it and resell the image (or ability to create the image) as a starting point to multiple clients for their own rule based projects.

Thursday, March 29, 2012

Always Pull Branches When Merging

A question came up at work today that I've had to stop and remind myself about numerous times, and I'm sure other people have to think about on occasion: when merging changes from a branch back to an origin, should I pull them from the branch into the origin, or push them from the branch to the origin? The difference in CVS would be... do I check out the origin and merge the deltas of the branch into my working copy, or check out the branch and somehow push those changes onto the trunk?

In short, always pull changes into a clean local copy that represents where the destination of the merge should be. It really doesn't matter which Revision Control System you use, this will almost always be the safest way to merge (assuming it supports a local copy).

I say this for one very important reason: you MUST be able to resolve conflicts before committing changes to the central repository, and pushing doesn't necessarily allow you to do that unless the tool supports something like Multi Version Concurrency Control on a single revision (and I'm not aware of any systems that do this right now).

Tuesday, March 27, 2012

Confessions of a Morning Person

Hello, my name is Mike and I'm a Morning Person.

During a brief stop at the gas station this morning, I said hello to another guy entering the gas station as I was leaving. Not sure what he was doing up at that hour, but the serene look on his face was a perfect reflection of how I was feeling.

As a person who routinely wakes up at 4-4:30 am, I love the morning. Don't get me wrong, I like to sleep... and need about 8 hours to feel rested, but I would rather go to bed early to not miss the morning than stay up late and sleep in until 8:00am. On my drive to the train station, I started to explore what I liked and a variety of memories sprang to mind:

As a young child, waking up 30 minutes early with my brother to get dressed, then get back into bed so that when Mom and Dad came to wake us up, we were already prepared (occasionally sneaking out to get breakfast too). Relishing the look of surprise when they realized we were ready to go.

Waking up as a teenager, my Dad rustling me out of bed to go hunting... barely able to think (as a teenager I was more of a "late morning" person). Still half awake, muddling around, realizing my Dad had already been up quite some time making some hot soup/coffee and getting our gear ready. Hearing the sound of the frozen grass and snow crunching under my boots as we made our way to our stand. Listening to the sounds of the nocturnal animals winding down and the daytime animals starting to move about. Watching the transition from black to grey, occasionally being blessed with a fiery red or orange display in the sky.

As a young man at Fort Jackson, being woken up by a Drill Sergeant and mustered down into formation for PT. Doing pushups and sit-ups in the wet grass until we couldn't do them any more, then running and singing loudly. Living the slogan: "we do more by 8:00am than most people do all day". Feeling proud and alive and pushing each other to go further and faster and not letting anyone fall behind.

In my 20s, sitting patiently at stand-to in a variety of fighting positions, watching the darkness slowly turn into a murky grey. Tense, watching every movement, but also feeling a sense of marvel at the transition.

Later, driving through the fog from Karlsruhe to Mannheim (and back) on weekends to visit friends. Speeding down the autobahn at a rate that was totally unsafe for my car, feeling alive and free.

At specific times, being woken up by my wife, her telling me "I think it's time", and rushing to the hospital anticipating the birth of a child. Feeling that the time was near that I was going to be a father. For later kids, loading them up half asleep into the car/minivan/van and herding them into the waiting room... For even later kids, waking them and telling THEM "It's time, we've got to head to the hospital, watch the little guys".

As a 30 something adult, cruising through my subdivision on the way to work, marveling at all the dark houses and the folks still sleeping. Knowing I had already been up for an hour and they would probably sleep another hour or two. Feeling a sense of camaraderie with the other cars on the road and wondering where the heck they were going at this hour.

Occasionally, as a middle aged(?) man, getting fired up about going for an early morning run... jogging through the cool mist, listening for the moment that the robins begin to sing their silly song and hoping this would be another day that I see the orange and red fire streak across the sky. Feeling quiet satisfaction in the knowledge that I had put in 3 miles before most people rolled over to hit the snooze button.

This morning, sharing that entire experience with the dude at the gas station with one knowing look and a nod of the head. Sitting in my car at the train station and mulling about with the other 6:05am train folks at the Woodstock Metra station. Feeling like a member of an elite club.

I guess with those sorts of experiences rattling around in my head and the sense of belonging, pride, and accomplishment, it's no wonder I'm a morning person... Who wouldn't want to feel that way?

Monday, March 26, 2012

Lotus Notes, A Lesson in Poor User Experience

As a longtime Outlook user, I've had the exciting experience of learning how to use Lotus Notes. When I first started using it, I was shocked at how "clunky" the user interface seemed, but wrote it off as my long familiarity with outlook. Recently, however, I started to realize that I was being too generous. Notes is full of user experience decisions bad enough that I feel I need to point a couple out, lest any young engineers try to design user interfaces based on them.

These things are actually a combination of how Notes works and pretty poor administration of the tool. There are some settings where your personal preferences are reset when you log out of the tool, and others where they are saved. As two quick examples, I give you the following:

Yes,No,Cancel

By default, when sending email, Notes prompts with a dialog asking whether to save the message to "sent items", offering Yes, No, and Cancel buttons.

By itself, it is not a HUGE problem: we're not only giving you the option to save or not save in "sent items", but also to cancel what you were trying to do.

The problem is that this is a pointless dialog and reflects something that most people answer 1 time in their entire life and never want to think about again. Asking them for EVERY single email is a waste of time and mental energy. There should be a checkbox that indicates "Save my response" and then the user should never see this dialog again. If they should change their mind, they should go to their preferences, dig around, find the setting, then change it. This is a subtle case of overengineering the user interface for edge cases instead of optimizing the routine flow.

Buttons, Buttons, and more Buttons!

This next example is so funny, I actually laughed out loud at work.

If you start writing an email and try to close the message without saving it, you're prompted with the following dialog:

I give the engineer who thought this was a good idea high marks for completeness (they missed "discard & save", which would be only slightly less confusing than this dialog), but they seriously missed the mark on simplicity and usability. Every time this happens, I have to stop and read every nondescript, equal-sized button for a cue about which one I want.

In this example, closing the message without sending or saving really should elicit ONLY the following question: "Would you like to save your message before closing?" This dialog should have two buttons, "Yes" and "No". Any other buttons, prompts, or information is too much. Period. I MIGHT be convinced of a cancel button if it were less prominent or located away from the other two, but that is iffy. This is the mentality of the first screenshot taken to larger proportions.

These well-intentioned dialogs are examples of trying to accommodate edge cases by detuning the "normal" flow, which ends up creating a system that supports neither as well as it could. In the big picture, putting a question in front of users that they answer the exact same way 99% of the time is often just as bad as performing destructive actions without warning them at all. In the case of closing an email without saving it, I've done it, I've lost emails... Well, not in Gmail, because it autosaves and I NEVER have to worry about this problem; I just go to my drafts folder to look for anything I haven't sent yet.

Friday, March 23, 2012

The VB Model versus the Delphi Model

As an older-school client/server developer, I've used both VB and Delphi for a few projects in the past. Having seen both tools, they have a difference of approach that I think is significant. The VB model is "90% of what you want to do is easy and the remaining 10% is impossible", whereas the Delphi model is "80% of what you want to do is easy and the remaining 20% is more difficult".

In the VB model, because it was such a closed ecosystem, when I couldn't do something, I'd have to go buy components from a third party or write a C library. There were multiple times when using VB I needed to write a special component in C or Delphi because VB just couldn't handle it. In addition, I was often forced to purchase third party components because nothing was available in the open source space.

In the Delphi model, with its richer open source ecosystem, when I couldn't do something, I'd first look and see if there was an open source component that did what I wanted. At first this was relatively rare, but as time progressed it got to the point where 90% of the time, someone had already written what I needed. For the remaining 10%, I had the option of either #1 writing it myself or #2 purchasing it.

In short, VB was easier to market because it "did more out of the box", and people making tool decisions would tend to think it was the best choice. After using it for a while and getting "locked in" to multiple proprietary third party components, it would often be obvious (to me) that Delphi would have been the better choice, but difficult to justify without stepping away from the problem. This is something to consider when buying very proprietary tools from very proprietary companies: the hidden cost of making up for missing capabilities can often vastly outweigh the "safety" of choosing a tool that does more of what you want "out of the box".

Disruptive changes fuel innovation, innovation creates disruptive changes

As a developer who works extensively in both ruby and java, I'm amazed at the turmoil in the ruby/rails space relative to java. In the last few years, ruby and rails have had numerous massively headache-inspiring incompatible changes that cause developers who used to know what they're doing to realize they're doing it the "old fashioned" way.

A particularly entertaining example from recent history is the handling of scopes in active record. Not only has this changed from Rails 2.x to Rails 3.0, it's getting further changed in Rails 3.1 (in an incompatible way). I will agree that these are good changes, but they are certainly a consequence of not thinking things through the first time, and they cause problems for newbies and old hands alike. The drift looks roughly like the sketch below.
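This is from memory, so take the exact syntax with a grain of salt; only one form would appear in a real model, matched to its Rails version:

# Rails 2.x
class Post < ActiveRecord::Base
  named_scope :published, :conditions => { :published => true }
end

# Rails 3.0 -- named_scope is gone and the conditions hash gives way to where()
class Post < ActiveRecord::Base
  scope :published, where(:published => true)
end

# Rails 3.1+ direction -- wrap the body in a lambda so it is evaluated lazily
class Post < ActiveRecord::Base
  scope :published, lambda { where(:published => true) }
end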

Compared with java-based projects (anyone still remember moving from hibernate 2.x to hibernate 3.0?), this rate of change is mind numbing and can be pretty frustrating. But compared with the amount of innovation in java, ruby and rails are so far ahead of the game for high-productivity/startup type applications that java isn't really even a competitor.

I think the disruptive change in the ruby/rails space is actually part of what fuels the innovation, so while it's pretty shocking and can be frustrating, I think it can also be a competitive advantage. Moreover, the innovation fuels change, which fuels more innovation... it's a self sustaining effect. In general, the more chaotic and random things are, the more opportunities there are for new game-changing innovation. The more game-changing innovation happens, the more chaotic and random things are...

Tuesday, March 20, 2012

Being too early devalues your time

Arriving at every event unnecessarily early is a waste of time. If it normally takes 15 minutes to drive somewhere, some people will leave 30-45 minutes early. As another example, I meet people who arrive at the train station 15-20 minutes before the train almost every day for no apparent good reason.

My personal opinion is that showing up at the appointed time is optimal. Simple math can show that showing up early devalues your time. For this example, let's suppose I get paid $10 per hour to do a job. If I agree to work 1 hour per day at this rate, but I show up at work 15 minutes early every day, my effective hourly rate just dropped from $10/hr to $8/hr.

My agreement with my boss was to work 5 hours and receive $50, which amounts to an hourly rate of $10 per hour. What I actually DID, however, was donate an extra 1.25 hours (15 minutes per day over 5 days), working 6.25 hours for the same $50, or $8 per hour.
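For the skeptical, the arithmetic in ruby:

pay    = 50.0     # dollars for the week
agreed = 5 * 1.0  # five one-hour days
actual = 5 * 1.25 # each day quietly includes 15 unpaid minutes

puts pay / agreed # => 10.0 dollars per hour
puts pay / actual # => 8.0 dollars per hour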

I will point out that the word "unnecessarily" is key, because depending on the negative impact/cost of being late, it might be a good idea to allow for extra time. For example, showing up an extra hour early for a non-refundable flight to India might warrant spending some extra time waiting around "just in case". Leaving 15 minutes early to catch a train is probably not warranted, especially if there's another train leaving within 15 minutes of the one you're trying to be early for.

Monday, March 19, 2012

Why some people think messaging is more scalable

I've often been around (or in the middle of) debates about how message-oriented middleware is more scalable than web services. The problem with this debate is that it is a false dichotomy. There is no reason you cannot build asynchronous http services where the response is simply "Yep, I got it". In practice, these services are just as scalable and flexible as their queue-based brothers and are typically nowhere near as complex. A sketch of what I mean follows.
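Here's a minimal sketch using sinatra and an in-process queue (the queue and process_order are stand-ins for whatever actually does the work):

require 'sinatra'
require 'thread'
require 'json'

WORK_QUEUE = Queue.new # stand-in for your real work queue

def process_order(order)
  # real work goes here; stubbed for the sketch
  puts "processing #{order.inspect}"
end

# a worker thread drains the queue independently of any HTTP request
Thread.new do
  while (order = WORK_QUEUE.pop)
    process_order(order) # the caller never waits on this
  end
end

post '/orders' do
  WORK_QUEUE << JSON.parse(request.body.read)
  status 202 # "Yep, I got it" -- accepted, not yet processed
  '{"accepted":true}'
end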

Some of the reason this propaganda gets started is that non-technical folks need to be told a reason why a particular technology is more appropriate. Folks will often use "hot button" phrases like "it's more scalable" instead of trying to explain in nitty-gritty detail what the real reason is.

Additionally, making asynchronous web services is truly a bit more challenging. The APIs for JMS foster the notion that the call hands off the message and returns immediately. HTTP libraries typically espouse the idea that you care about the response and therefore tend to be a bit more RMI-like. Nothing forces you to use them that way, though, as the client sketch below shows.
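On the client side, you can treat an HTTP POST as fire-and-forget with plain net/http (the endpoint is made up):

require 'net/http'
require 'uri'

uri = URI.parse('http://example.com/orders') # made-up endpoint
request = Net::HTTP::Post.new(uri.path, 'Content-Type' => 'application/json')
request.body = '{"sku":"A-1"}'

response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }
# all we care about is the ack, much like a JMS send()
raise 'enqueue failed' unless response.code == '202'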

A final and perhaps not least important reason is that when someone says "JMS", everyone else hears "asynchronous". When someone says "HTTP", most people assume "synchronous". Using technologies in a common manner is a good way to foster effective communication. Innovation is good, but having a shared context and terminology makes communication much simpler. Put another way, sometimes a clever solution is much worse than a simple one, especially when trying to communicate the idea to someone else.

Both JMS and HTTP can be used to create scalable solutions. When deciding on JMS, don't put TOO much emphasis on scalability; focus on other aspects like durability or manageability. Almost any technology can be made scalable with a little thought. You just have to decide whether the cost of thinking through an alternative is worth more or less than the cost of the knee-jerk solution.

Thursday, March 8, 2012

Run your enterprise like a startup

I've worked in a variety of companies and I've noticed an interesting phenomenon -- the capability of individual programmers in a company seems to be inversely proportional to the size of the company. Tech startups with 3 folks always seem to have superstars, even though it's a huge drain on their budget, while IT shops with 1000 people seem to have 10 people doing everything.

The irony in this situation is that a startup has the least amount of money to spend on programmers, but requires hiring only the best and needs to spend a disproportionate amount on payroll. On the other hand, a company flush with cash that could easily hire only the best and brightest inevitably hires "everybody else". This means that particularly large shops end up with a handful of superstars (just by sheer luck of the draw) who do the majority of the work (and then burn out and leave) and a bunch of "also ran" folks who are really just padding their resumes and draining your cash flow.

A visionary IT leader at a large company would break software delivery down into a cluster of startup-like groups with very large degrees of autonomy. Forget about the mythical efficiencies of "enterprise architecture initiatives" and simply hold teams' feet to the fire to deliver real solutions on aggressive timelines. Use the incubator model to foster competition within the organization: two insanely great teams working furiously on the same solution seems inefficient on the surface, but at the end of the day/week/month, you're more likely to have TWO possible solutions to choose from. If you have one mediocre-to-crappy team of 50 slogging along and delivering nothing, you may be saving payroll money in the short term, but you will bleed to death waiting for solutions that may never appear.

Thursday, March 1, 2012

Aggressive control freaks make great programmers

After reading "Give it five minutes", I saw an interesting pattern. Of the folks I know, the good/great programmers are all pretty aggressive. In addition, they are also pretty controlling. Moreover, when reading the comments on that blog post, I was struck by the number of folks who could identify with the "hothead".

As a person who historically fit into that personality type, I wonder why this is. It seems to me that the reason has to do with the way people interact with computers when programming. The very idea of being 'in charge' of the computer and making it do anything you want would seem to appeal to this sort of personality. In addition, the current market and the rate of change handsomely reward people who aggressively pursue this end. Very successful programmers are the ones who can do this most effectively.

The obvious downside is that this creates a situation where negative behaviour (in human interaction) is actually rewarded. Without conscious effort, many programming types forget when they are talking to humans and can be overbearing and aggressive without even realizing it is happening. After all, if you spend 8-12 hours per day bending a computer to your will, it understandably takes time to "turn it off" and re-connect with humans.

More importantly, I think this personality type has a self-limiting nature to it. While I know many great programmers who fit in this category, many of them top out in fairly low, though highly technical, roles. This is understandable to me because software is largely written for humans. If the person commanding the computer cannot relate to the people the computer is supposed to serve, the odds are low that the computer will be told to do the correct thing.

So the next time you hear a stupid idea, think about it for five minutes before you start tearing it apart. Better yet, ask questions for five minutes and maybe try to understand why the other person thinks it's a good idea.

Tuesday, February 28, 2012

Using Mongoid, MongoHQ, and Heroku

I recently tried to set up mongoid with a free mongohq account on heroku... This info is accurate as of 28 Feb, 2012.

For the impatient (I just want to make it work)

mongoid.yml

production:
  uri: <%= ENV['MONGOLAB_URI'] %>

Gemfile

gem 'bson', '1.3.1'
gem 'bson_ext', '1.3.1'
gem 'mongoid', '2.0.2'
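
Once those are in place, a quick smoke test from the heroku console (the Ping model is just a throwaway example) can confirm the connection:

# any trivial Mongoid document will do for a connectivity check
class Ping
  include Mongoid::Document
  field :note, :type => String
end

Ping.create(:note => 'hello mongolab')
Ping.count # => 1 if the MONGOLAB_URI connection is working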

For those who want more detail (I have a similar problem, how do I troubleshoot?)

To determine heroku environment variables, do this:

$ heroku config

and the results should contain a line like what follows:
...
MONGOLAB_URI        => mongodb://heroku_app928349384:lkfjgoierheourhgoeurhgoeuh@ds031087.mongolab.com:31087/heroku_appapp928349384
...

The important thing to note is what the left-hand side says. Around the internet (like here on SO), folks will incorrectly name the ENV variable as MONGOHQ_URI. Update your mongoid.yml to reference MONGOLAB_URI, as shown above.

Additionally, it appears that different versions of mongoid only work with certain versions of mongodb. I started by digging around on google to find versions that seemed to be compatible.

Stack trace with the wrong URI (e.g. a mongoid.yml referencing MONGOHQ_URI, which is unset, so the driver falls back to localhost):

production:
  uri: <%= ENV['MONGOHQ_URI'] %>

Will yield something like

2012-02-28T12:49:46+00:00 app[web.1]: /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.1/lib/mongo/connection.rb:518:in `connect': Failed to connect to a master node at localhost:27017 (Mongo::ConnectionFailure)
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.1/lib/mongo/connection.rb:656:in `setup'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.1/lib/mongo/connection.rb:101:in `initialize'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.1/lib/mongo/connection.rb:152:in `new'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/mongo-1.3.1/lib/mongo/connection.rb:152:in `from_uri'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/mongoid-2.0.2/lib/mongoid/config/database.rb:86:in `master'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/mongoid-2.0.2/lib/mongoid/config/database.rb:19:in `configure'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/mongoid-2.0.2/lib/mongoid/config.rb:114:in `from_hash'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/mongoid-2.0.2/lib/mongoid/config.rb:342:in `configure_databases'
2012-02-28T12:49:46+00:00 app[web.1]:  from (eval):2:in `from_hash'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/mongoid-2.0.2/lib/mongoid/railtie.rb:64:in `block in <class:Railtie>'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:25:in `instance_exec'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:25:in `run'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:50:in `block in run_initializers'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:49:in `run_initializers'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:49:in `each'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/application.rb:134:in `initialize!'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/application.rb:77:in `method_missing'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/config/environment.rb:5:in `<top (required)>'
2012-02-28T12:49:46+00:00 app[web.1]:  from <internal:lib/rubygems/custom_require>:29:in `require'
2012-02-28T12:49:46+00:00 app[web.1]:  from <internal:lib/rubygems/custom_require>:29:in `require'
2012-02-28T12:49:46+00:00 app[web.1]:  from config.ru:3:in `block (3 levels) in <main>'
2012-02-28T12:49:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:23:in `eval'
2012-02-28T12:49:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:23:in `block (3 levels) in <main>'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `instance_eval'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `initialize'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:63:in `new'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:63:in `map'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `initialize'
2012-02-28T12:49:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:18:in `block (2 levels) in <main>'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `instance_eval'
2012-02-28T12:49:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:11:in `block in <main>'
2012-02-28T12:49:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:11:in `new'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `instance_eval'
2012-02-28T12:49:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `initialize'
2012-02-28T12:49:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:1:in `new'
2012-02-28T12:49:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:1:in `<main>'
2012-02-28T12:49:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/rack/adapter/loader.rb:36:in `eval'
2012-02-28T12:49:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/rack/adapter/loader.rb:36:in `load'
2012-02-28T12:49:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/thin/controllers/controller.rb:175:in `load_rackup_config'
2012-02-28T12:49:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/thin/controllers/controller.rb:65:in `start'
2012-02-28T12:49:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/thin/runner.rb:177:in `run_command'
2012-02-28T12:49:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/thin/runner.
2012-02-28T12:49:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/bin/thin:6:in `<top (required)>'
2012-02-28T12:49:46+00:00 app[web.1]:  from /usr/ruby1.9.2/bin/thin:19:in `load'
2012-02-28T12:49:46+00:00 app[web.1]:  from /usr/ruby1.9.2/bin/thin:19:in `<main>'
2012-02-28T12:49:47+00:00 heroku[web.1]: Process exited with status 1
2012-02-28T12:49:47+00:00 heroku[web.1]: State changed from starting to crashed

Stack trace with wrong version of mongo/mongoid/bson:

Starting process with command `thin -p 4644 -e production -R /home/heroku_rack/heroku.ru start`
2012-02-27T22:50:46+00:00 app[web.1]: /app/app/models/expression.rb:2:in `<class:Expression>': uninitialized constant Expression::Mongoid (NameError)
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/app/models/expression.rb:1:in `<top (required)>'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/engine.rb:138:in `block (2 levels) in eager_load!'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/engine.rb:137:in `each'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/engine.rb:137:in `block in eager_load!'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/engine.rb:135:in `eager_load!'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/engine.rb:135:in `each'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/application.rb:108:in `eager_load!'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/application/finisher.rb:41:in `block in <module:Finisher>'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:25:in `instance_exec'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:25:in `run'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:50:in `block in run_initializers'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:49:in `each'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/initializable.rb:49:in `run_initializers'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/application.rb:134:in `initialize!'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/railties-3.0.8/lib/rails/application.rb:77:in `method_missing'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/config/environment.rb:5:in `<top (required)>'
2012-02-27T22:50:46+00:00 app[web.1]:  from <internal:lib/rubygems/custom_require>:29:in `require'
2012-02-27T22:50:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:23:in `eval'
2012-02-27T22:50:46+00:00 app[web.1]:  from <internal:lib/rubygems/custom_require>:29:in `require'
2012-02-27T22:50:46+00:00 app[web.1]:  from config.ru:3:in `block (3 levels) in <main>'
2012-02-27T22:50:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:23:in `block (3 levels) in <main>'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `instance_eval'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `initialize'
2012-02-27T22:50:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:18:in `block (2 levels) in <main>'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `initialize'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:63:in `new'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:63:in `map'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `instance_eval'
2012-02-27T22:50:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:11:in `new'
2012-02-27T22:50:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:11:in `block in <main>'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `instance_eval'
2012-02-27T22:50:46+00:00 app[web.1]:  from /app/.bundle/gems/ruby/1.9.1/gems/rack-1.2.5/lib/rack/builder.rb:46:in `initialize'
2012-02-27T22:50:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:1:in `new'
2012-02-27T22:50:46+00:00 app[web.1]:  from /home/heroku_rack/heroku.ru:1:in `<main>'
2012-02-27T22:50:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/rack/adapter/loader.rb:36:in `eval'
2012-02-27T22:50:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/thin/controllers/controller.rb:175:in `load_rackup_config'
2012-02-27T22:50:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/rack/adapter/loader.rb:36:in `load'
2012-02-27T22:50:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/thin/controllers/controller.rb:65:in `start'
2012-02-27T22:50:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/thin/runner.rb:177:in `run_command'
2012-02-27T22:50:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/bin/thin:6:in `<top (required)>'
2012-02-27T22:50:46+00:00 app[web.1]:  from /usr/ruby1.9.2/lib/ruby/gems/1.9.1/gems/thin-1.2.6/lib/thin/runner.rb:143:in `run!'
2012-02-27T22:50:46+00:00 app[web.1]:  from /usr/ruby1.9.2/bin/thin:19:in `load'
2012-02-27T22:50:46+00:00 app[web.1]:  from /usr/ruby1.9.2/bin/thin:19:in `<main>'
2012-02-27T22:50:47+00:00 heroku[web.1]: Process exited with status 1
2012-02-27T22:50:47+00:00 heroku[web.1]: State changed from starting to crashed
2012-02-27T22:50:48+00:00 heroku[router]: Error H10 (App crashed) -> POST newsfilter.heroku.com/_heroku/console dyno= queue= wait= service= status=503 bytes=

I apologize in advance for the poor formatting on this post...

Friday, February 24, 2012

Governance Gone Wild

I've written in the past about change control processes and how they relate to agility. Taking things to the next level, there's a bigger issue that needs to be addressed.

There are essentially two radically divided camps on what value IT governance provides for enterprise software: #1 the business camp, which is tired of hearing excuses about why they are losing millions of dollars of business because IT cannot deliver a solution in a timely manner, and #2 the IT camp, which is tired of the business camp requesting crazy stuff and causing millions of dollars of unnecessary support costs. Camp #1 thinks governance is an excuse for IT/process wonks to slow things down or drag their feet instead of delivering solutions. Camp #2 thinks governance is a way to rein in the business folks who believe that unicorns and pixie dust cause fully configured computers and network infrastructure to magically appear overnight.

I hope it's obvious that both camps are spectacularly wrong. While they both have arguably good reasons for what they're trying to do, I can attest to the fact that it's really (REALLY) hard to see that in the middle of a heated discussion between the two. Governance has taken on a negative connotation for many because the ivory tower thinkers at many consulting companies treat it like a one-size-fits-all religion in which some ultimate authority is in sole possession of knowledge about what "the right thing to do" actually is. In reality, creating an agile, lean, and effective IT governance process is not about finding the right enlightened group with unique insight, but about communicating vision and verifying that everybody is making decisions in line with that vision.

I feel one problem is the word "governance" itself. While wikipedia has a pretty good definition, to most folks it implies active control. I contend the real goal is inculcating corporate goals into the culture and building transparency into the decision making process. Instead, organizations often relegate the decision making process to "czars", "councils", or "review boards" with the implication that they are somehow more knowledgeable about what needs to happen. This is a surefire way to slow things down, create an overly bureaucratic system, and ultimately get nothing done. It is better to let people do what they think is necessary to further the corporate goals, but develop a system to detect when things are going the wrong way as well as to correct and avoid inappropriate activities.

A small example of such a problem is the typical software change review process. Often, the "controlling" model ends up with a goal of restricting changes or making them difficult, with the desired effect of limiting the number of "bad" changes or "mistakes" someone might make. This can mean restricting access to the SCM tool and/or requiring multiple approvers for a change; it might also mean only certain people can contribute to certain areas, with all the administrative access problems this entails. To make matters worse, the approvers/reviewers in this model are often the most senior folks in the organization, or people who don't have the working knowledge to understand what the change actually does. An example would be a "change review board" of managers/VPs that is supposed to review java code changes. In my experience, there is typically no way they have the time to actually comprehend what the change is. This demotivates contributors, because no matter how hard or fast they work, they can never be better or faster than the reviewer(s). It also implies that the reviewers are somehow smarter or have more knowledge about what is actually supposed to happen. The more obvious problem is that while the process limits "bad" changes, it also limits "good" changes.

A better way to handle this is to make the review process a little more democratic. While there might still need to be a senior person to resolve conflicts, developers should be able to review and approve each other's solutions... this could even be done "after the fact": developers contribute code and move on, letting the review process "just happen" without their involvement (unless there is a question). It also puts responsibility for changes in the hands of the people who SHOULD BE the most informed about the goals of the change. Put another way, if the VP of Operations (or architecture) has a better understanding of the business value of a change than the developer... the system is broken.

The ultimate problem with command and control style governance is that it often turns into a "not my problem" situation. That is, knowledge of "what is right" becomes reserved to a small group of people, and they become gatekeepers whose objective ultimately becomes restricting changes instead of facilitating proper change. People then lose sight of their personal contribution to the success of the organization, and it becomes a downward spiral of ineffectiveness.

Wednesday, February 22, 2012

Architectural Antipattern: "Through The Looking Glass"

An anti-pattern is a commonly occurring pattern that is ineffective.

There is an architectural anti-pattern I like to call "Through the Looking Glass". This is when a system is broken into pieces or layers, but the pieces are cut perpendicular to the direction that the team and the business require. A common example is when a set of screens used to access a database system is split into "front end" and "back end" components.

This "front end" and "back end" split ONLY makes sense if the back end can identify thier required interfaces before the front end is designed. If the "back end" requirements are being driven by the "front end" screens and/or systems, then this split is either #1 wrong or, at a minimum #2 premature. Most often, new systems need a front end to identify APIs that need to be used... it's more economical to build the system in such a manner that a front end change can quickly be migrated to the "back-end" (like rails) and enable using functioning software to be delivered early. The prudent thing is to then refactor and cut the system into pieces when necessary (or ideally just before necessary :).

I personally believe this problem is primarily caused by two things: over specialization and over engineering. Over specialization can be recognized when, for example, database administrators have no idea how the users will use or interact with the data. This can be combated by mixing DBAs into project teams as contributing members, not just people who sit around and wait for change requests to generate tables. Over engineering can be identified by looking for decorations inside the system design that are "for future growth" or other vague non-functional requirements. This can be solved by repeatedly asking the question "Are we actually gonna need this?"

Monday, February 20, 2012

The Best Practices Myth

I've found the term "best practices" to typically be a magical incantation people invoke to enlist a fictitious third-party expert in support of their position. When most people say "best practices dictate", they're really just saying "because I said so", and I meet them with extreme skepticism.

This is not to say that the idea of "best practices" isn't alluring. Wouldn't it be great if we could simply do things the "best" way and not have to think about what we're doing? In fact, sometimes it might be valid; for example, a "best practice" in java is to follow a naming convention: the first letter of class names should be upper case. I think this is reasonable, but frankly I'm currently hard pressed to explain WHY.

If you're going to recommend best practices, I feel you have an obligation to be able to explain the NEED for the "best practice" as well as its value. If the NEED is that someone needs to control someone else, be suspicious. If the need is that many people must be able to attack a similar problem in a similar manner, then it might have some merit. That also means that "best practices" are situational and need to be identified with the appropriate context. For example, following a java naming convention in python would not be extremely valuable.

Friday, February 10, 2012

people are not interchangeable cogs, they are more like the wind

Great organisations understand that people are not interchangeable cogs, but the wind in their sails.

I've struggled to come up with a metaphor that helps explain why command and control processes are so failure prone. In a command and control process, someone (or worse yet, a committee) decides "how we're going to do things" and then builds a process to support this idea. Truly degenerate cases take it a step further, buying tools first and then building the process around the tools. These struggling organisations then spend years trying to find the right people, or to mold people to fit into their process.

These organisations have fallen into the trap of thinking that people are just pieces of a giant machine, and that once they state their process, they simply need to find the correct pieces and hook them together. The problem with this perspective is that people are immensely variable. Finding two identical people is impossible (even identical twins are different in many ways), so the metaphor is fatally flawed. Worse yet, many of these processes try to LIMIT the variability between people to make things "simpler", but in doing so limit the positive energy any individual can bring to the table.

A better metaphor is to think of the people in your organisation as the wind, and the job of leaders as building processes that can harness and direct this power. This requires a big change of thought from the "interchangeable parts" perspective. Wind is intensely variable but carries tremendous energy... sailors and shipbuilders recognise this, and instead of trying to fix the wind so it always blows in their favor, they build, plan, and execute knowing that the direction and force of the wind are outside of their control.

Joe Wilner has an interesting blog post about why making this adjustment and treating people like the wind has positive impacts on the individual. The positive effects of this attitude change are much larger than any one individual, though. When you're looking around at your processes and wondering why "those stupid people" are not doing things correctly, perhaps it's time to re-evaluate and look at the situation with a fresh perspective.

Tuesday, January 24, 2012

The difference between Black Swans, Flying Green Monkeys, and Unicorns

What is the difference between a Black Swan, a Flying Green Monkey, and a Unicorn?

First off, a Black Swan is an unexpected event that is rationalized to be obvious in retrospect. It's based on the theory elaborated in a book by Nassim Nicholas Taleb. The most important thing setting Black Swans apart from Flying Green Monkeys and Unicorns is that Black Swans actually exist, whereas FGMs and Unicorns reasonably cannot.

The term Flying Green Monkey refers to a pessimist's fantasy about a highly improbable event or situation that might happen. Often these are paranoid delusions based on partial or incorrect facts and a misjudgment of the actual probability that something may happen. They are related to Black Swans in that they live in the land of chaos and the unexpected (Extremistan, in Taleb's terms), but they are distinctly different because they are seen as negative catastrophic events that can be planned for. To use a traditional statistical model, these are things so far out on the end of the bell curve that they are reasonably impossible (like a 50ft tall person... or a "Wizard of Oz" style flying green monkey).

Unicorns, on the other hand, are the optimist's fantasy and the flip side of a Flying Green Monkey. Unicorns are a fabrication based on highly improbable events or situations that have a positive outcome. Good examples of Unicorns are the "inevitable payout" founders see when they begin working on a startup, or the gambler who "knows" he's going to hit it big on the next roll of the dice.

The biggest difference between these three things is in how people react to and perceive them. Black Swans can only be seen in retrospect and are highly sensitive to your point of view and initial conditions. They may SEEM just as impossible as a FGM or a Unicorn from a particular perspective, but to observers in a different location (in time or space) they seem obvious or even trivial. More importantly, they are never predicted ahead of time. You cannot plan for Black Swans; you can only react to them after the fact.

FGMs and Unicorns, on the other hand, are people's reactions to chaos and their attempt to craft a belief system that will accommodate the chaotic. Engineers often run around hunting Flying Green Monkeys to ensure their systems are robust. Serial entrepreneurs are always tracking Unicorns and are CERTAIN this new set of tracks must lead to a veritable DEN of them.

To be successful, I think it is prudent not to focus too much attention ahead of time on finding Unicorns and avoiding Flying Green Monkeys, but to be able to identify Black Swans and exploit them quickly and effectively.