Thursday, March 29, 2012

Always Pull Branches When Merging

A question came up at work today that I've had to stop and think about numerous times, and I'm sure other people have to think about on occasion: when merging changes from a branch back to its origin, should I pull them from the branch into the origin, or push them from the branch to the origin? In CVS terms: do I check out the origin and merge the deltas of the branch into that working copy, or check out the branch and somehow push those changes onto the trunk?

In short, always pull changes into a clean local copy that represents the destination of the merge. It really doesn't matter which revision control system you use; this is almost always the safest way to merge (assuming the system supports a local copy).

I say this for one very important reason: you MUST be able to resolve conflicts before committing changes to the central repository, and pushing doesn't necessarily let you do that unless the tool supports something like Multi Version Concurrency Control on a single revision (and I'm not aware of any systems that do this right now).
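In git terms, the same principle can be sketched as a command sequence. The helper below is purely hypothetical and only builds the list of commands (it runs nothing); it assumes a destination branch named "master":

```ruby
# A hypothetical sketch of the "pull into a clean destination" flow,
# expressed in git terms. merge_commands only builds the command list;
# it does not execute anything.
def merge_commands(branch, destination = "master")
  [
    "git checkout #{destination}",    # start from the merge destination
    "git pull origin #{destination}", # make the local copy clean and current
    "git merge #{branch}"             # conflicts surface here, locally,
  ]                                   # before anything reaches the server
end

puts merge_commands("feature-x")
```

The key point is that the merge (and any conflict resolution) happens in the local working copy, before anything is pushed back to the shared repository.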

Tuesday, March 27, 2012

Confessions of a Morning Person

Hello, my name is Mike and I'm a Morning Person.

During a brief stop at the gas station this morning, I said hello to another guy entering the gas station as I was leaving. Not sure what he was doing up at that hour, but the serene look on his face was a perfect reflection of how I was feeling.

As a person who routinely wakes up at 4-4:30 am, I love the morning. Don't get me wrong, I like to sleep... and need about 8 hours to feel rested, but I would rather go to bed early to not miss the morning than stay up late and sleep in until 8:00am. On my drive to the train station, I started to explore what I liked, and a variety of memories sprang to mind:

As a young child, waking up 30 minutes early with my brother to get dressed, then getting back into bed so that when Mom and Dad came to wake us up, we were already prepared (occasionally sneaking out to get breakfast too). Relishing the look of surprise when they realized we were ready to go.

Waking up as a teenager, my Dad rustling me out of bed to go hunting... barely able to think (as a teenager I was more of a "late morning" person). Still half awake, muddling around, realizing my Dad had already been up quite some time making some hot soup/coffee and getting our gear ready. Hearing the sound of the frozen grass and snow crunching under my boots as we made our way to our stand. Listening to the sounds of the nocturnal animals winding down and the daytime animals starting to move about. Watching the transition from black to grey, occasionally being blessed with a fiery red or orange display in the sky.

As a young man at Fort Jackson, being woken up by a Drill Sergeant and mustered down into formation for PT. Doing pushups and situps in the wet grass until we couldn't do them any more, then running and singing loudly. Living the slogan: "we do more by 8:00am than most people do all day". Feeling proud and alive, pushing each other to go further and faster, and not letting anyone fall behind.

In my 20s, sitting patiently at stand-to in a variety of fighting positions, watching the darkness slowly turn into a murky grey. Tense, watching every movement, but also feeling a sense of marvel at the transition.

Later, driving through the fog from Karlsruhe to Mannheim (and back) on weekends to visit friends. Speeding down the autobahn at a rate that was totally unsafe for my car, feeling alive and free.

At specific times, being woken up by my wife telling me "I think it's time" and rushing to the hospital anticipating the birth of a child. Feeling that the time was near and that I was going to be a father. For later kids, loading them up half asleep into the car/minivan/van and herding them into the waiting room... For even later kids, waking them and telling THEM: "It's time, we've got to head to the hospital; watch the little guys".

As a 30-something adult, cruising through my subdivision on the way to work, marveling at all the dark houses and the folks still sleeping. Knowing I had already been up for an hour and they would probably sleep another hour or two. Feeling a sense of camaraderie with the other cars on the road and wondering where the heck they were going at this hour.

Occasionally, as a middle aged(?) man, getting fired up about going for an early morning run... jogging through the cool mist, listening for the moment that the robins begin to sing their silly song and hoping this would be another day that I see the orange and red fire streak across the sky. Feeling quiet satisfaction in the knowledge that I had put in 3 miles before most people rolled over to hit the snooze button.

This morning, sharing that entire experience with the dude at the gas station with one knowing look and a nod of the head. Sitting in my car at the train station and milling about with the other 6:05am train folks at the Woodstock Metra station. Feeling like a member of an elite club.

I guess with those sorts of experiences rattling around in my head and the sense of belonging, pride, and accomplishment, it's no wonder I'm a morning person... Who wouldn't want to feel that way?

Monday, March 26, 2012

Lotus Notes, A Lesson in Poor User Experience

As a longtime Outlook user, I've had the exciting experience of learning how to use Lotus Notes. When I first started using it, I was shocked at how "clunky" the user interface seemed, but wrote it off as my long familiarity with Outlook. Recently, however, I started to realize that I was being too generous. Notes is full of user-experience decisions bad enough that I feel I need to point a couple out, lest any young engineers try to design user interfaces based on them.

These things are actually a combination of how Notes works and pretty poor administration of the tool. Some settings reset your personal preferences when you log out; others are saved. As two quick examples, I give you the following:


By default, when sending email, Notes prompts with the following dialog:

By itself, this is not a HUGE problem: it not only gives you the option to save or not save in "sent items", but also to cancel what you were trying to do.

The problem is that this is a pointless dialog: it asks something that most people answer once in their entire life and never want to think about again. Asking on EVERY single email is a waste of time and mental energy. There should be a checkbox that says "Save my response", after which the user never sees this dialog again. If they change their mind, they can go to their preferences, dig around, find the setting, and change it. This is a subtle case of overengineering the user interface for edge cases instead of optimizing the routine flow.

Buttons, Buttons, and more Buttons!

This next example is so funny, I actually laughed out loud at work.

If you start writing an email and try to close the message without saving it, you're prompted with the following dialog:

I give the engineer who thought this was a good idea high marks for completeness (they only missed "Discard & Save", which would be just slightly less confusing than this dialog), but they seriously missed the mark on simplicity and usability. Every time this happens I have to stop and read every nondescript, equal-sized button for a cue about which one I want.

In this example, closing the message without sending or saving really should elicit ONLY the following question: "Would you like to save your message before closing?" This dialog should have two buttons, "Yes" and "No". Any other buttons, prompts, or information is too much. Period. I MIGHT be convinced of a Cancel button if it were less prominent or located away from the other two, but even that is iffy. It is the mentality of the first screenshot taken to larger proportions.

These well-intentioned dialogs are examples of trying to accommodate edge cases by detuning the "normal" flow, which ends up creating a system that supports neither as well as it could. In the big picture, putting a question in front of a user that they answer the same way 99% of the time is often just as bad as performing a destructive action without warning them. In the case of closing an email without saving it, I've done it, and I've lost emails... Well, not in Gmail, because it autosaves and I NEVER have to worry about this problem; I just go to my drafts folder to look for anything I haven't sent yet.

Friday, March 23, 2012

The VB Model versus the Delphi Model

As an older-school client/server developer, I've used both VB and Delphi for a few projects in the past. Having used both tools, I think they have a significant difference of approach. The VB model is "90% of what you want to do is easy and the remaining 10% is impossible", whereas the Delphi model is "80% of what you want to do is easy and the remaining 20% is more difficult".

In the VB model, because it was such a closed ecosystem, when I couldn't do something, I'd have to go buy components from a third party or write a C library. There were multiple times when using VB I needed to write a special component in C or Delphi because VB just couldn't handle it. In addition, I was often forced to purchase third party components because nothing was available in the open source space.

In the Delphi model, with its richer open source ecosystem, when I couldn't do something, I'd first look to see whether there was an open source component that did what I wanted. At first this was relatively rare, but as time progressed it got to the point where 90% of the time, someone had already written what I needed. For the remaining 10%, I had the option of either writing it myself or purchasing it.

In short, VB was easier to market because it "did more out of the box" and people making tool decisions would tend to think this was the best choice. After using it for a while and getting "locked in" to multiple proprietary third party components, it would often be obvious (to me) that Delphi would have been a better choice, but difficult to justify without stepping away from the problem. This is something to consider when buying things from companies and tools that are very proprietary. The hidden cost of making up for missing capabilities can often vastly outweigh the "safety" of choosing a tool that does more of what you want "out of the box".

Disruptive changes fuel innovation, innovation creates disruptive changes

As a developer who works extensively in both Ruby and Java, I'm amazed at the turmoil in the Ruby/Rails space relative to Java. In the last few years, Ruby and Rails have had numerous massively headache-inspiring incompatible changes that cause developers who used to know what they were doing to realize they're doing it the "old fashioned" way.

A particularly entertaining example from recent history is the handling of scopes in ActiveRecord. Not only did this change from Rails 2.x to Rails 3.0, it's changing again, incompatibly, in Rails 3.1. I will agree that these are good changes, but they are certainly an effect of not taking the time to think things through the first time, and they cause problems for newbies and old hats alike.
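The flavor of the change can be sketched without Rails at all: ActiveRecord moved from hash-of-conditions declarations toward chainable, lazily evaluated relations. Below is a toy, plain-Ruby imitation of that chainable style (TinyRelation and everything in it is made up for illustration; real ActiveRecord relations are far richer):

```ruby
# A toy, self-contained imitation of chainable, lazily evaluated scopes.
class TinyRelation
  def initialize(rows, filters = [])
    @rows, @filters = rows, filters
  end

  # Each where returns a NEW relation; nothing is evaluated yet.
  def where(&predicate)
    TinyRelation.new(@rows, @filters + [predicate])
  end

  # Evaluation is deferred until the results are actually asked for.
  def to_a
    @filters.reduce(@rows) { |rows, f| rows.select(&f) }
  end
end

people = TinyRelation.new([
  { name: "Ann", age: 34 },
  { name: "Bob", age: 19 },
  { name: "Cat", age: 42 },
])

adults_named_a = people.where { |p| p[:age] >= 21 }
                       .where { |p| p[:name].start_with?("A") }
puts adults_named_a.to_a.inspect   # only Ann survives both filters
```

The deferred evaluation is exactly what makes scopes composable, and also exactly the kind of semantic shift that breaks code written against the older, eager style.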

Compared with Java-based projects (anyone still remember moving from Hibernate 2.x to Hibernate 3.0?), this rate of change is mind-numbing and can be pretty frustrating. But compared with the amount of innovation in Java, Ruby and Rails are so far ahead of the game for high-productivity/startup-type applications that Java isn't really even a competitor.

I think the disruptive change in the Ruby/Rails space is part of what fuels the innovation, so while it's pretty shocking and can be frustrating, I think it can also be a competitive advantage. Moreover, the innovation fuels change, which fuels more innovation... a self-sustaining effect. In general, the more chaotic and random things are, the more opportunities there are for game-changing innovation; and the more game-changing innovation happens, the more chaotic and random things become...

Tuesday, March 20, 2012

Being too early devalues your time

Arriving at every event unnecessarily early is a waste of time. If it normally takes 15 minutes to drive somewhere, some people will leave 30-45 minutes early. As another example, I meet people who arrive at the train station 15-20 minutes before the train almost every day, for no apparent reason.

My personal opinion is that showing up at the appointed time is optimal, and some simple math shows that showing up early devalues your time. Suppose I get paid $10 per hour to do a job. If I agree to work 1 hour per day at this rate but show up 15 minutes early every day, my hourly rate just dropped from $10/hr to $8/hr.

My agreement with my boss was to work 5 hours and receive $50, which amounts to an hourly rate of $10. What I actually DID, however, was spend 6.25 hours for that same $50, donating an extra 1.25 hours (15 minutes per day over 5 days) and dropping my effective rate to $8/hr.
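The arithmetic above can be checked in a few lines (the numbers are the ones from the example, nothing more):

```ruby
# Agree to 1 paid hour per day at $10/hr, but show up 15 minutes early.
agreed_hours_per_day = 1.0
early_hours_per_day  = 0.25
rate                 = 10.0   # dollars per agreed hour
days                 = 5

pay          = rate * agreed_hours_per_day * days                   # $50.00
actual_hours = (agreed_hours_per_day + early_hours_per_day) * days  # 6.25
effective    = pay / actual_hours

puts effective   # 8.0 -- the $10/hr job quietly became an $8/hr job
```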

I will point out that the word "unnecessarily" is key, because depending on the negative impact/cost of being late, it might be a good idea to allow for extra time. For example, showing up an extra hour early for a non-refundable flight to India might be warranted "just in case". Leaving 15 minutes early to catch a train is probably not, especially if there's another train within 15 minutes of the one you're trying to catch.

Monday, March 19, 2012

Why some people think messaging is more scalable

I've often been around (or in the middle of) debates about whether message oriented middleware is more scalable than web services. The problem with this debate is that it is a false dichotomy. There is no reason you cannot build asynchronous HTTP services where the response is simply "Yep, I got it". In practice, these services are just as scalable and flexible as their queue-based brothers and typically not nearly as complex.

Some of the reason this propaganda gets started is that non-technical folks need to be told a reason why a particular technology is more appropriate. Folks will often reach for "hot button" phrases like "it's more scalable" instead of trying to explain in nitty-gritty detail what the real reason is.

Additionally, making asynchronous web services truly is a bit more challenging. The JMS APIs foster the notion that a send is fire-and-forget: the call hands off the message and returns immediately. HTTP libraries typically espouse the idea that you care about the response and therefore tend to be a bit more RMI-like.
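The "Yep, I got it" pattern needs no middleware at all: accept the request, enqueue the work, acknowledge immediately, and let a background worker do the slow part. A minimal Ruby sketch (handle_request is a made-up name; a real service would sit behind an actual HTTP server):

```ruby
# Queue and Thread are core classes; no middleware required.
JOBS = Queue.new
DONE = Queue.new

# Hypothetical HTTP handler: acknowledge right away, defer the real work.
def handle_request(payload)
  JOBS << payload
  { status: 202, body: "Yep, I got it" }   # async ack, much like a JMS send
end

# A background worker drains the queue independently of request handling.
worker = Thread.new do
  while (job = JOBS.pop)        # a nil job is our shutdown signal
    DONE << job.upcase          # stand-in for the slow processing
  end
end

ack = handle_request("hello")
puts ack[:body]                 # the caller is already free; work happens later
JOBS << nil                     # shut the worker down for this sketch
worker.join
```

The caller gets its acknowledgement before the work is done, which is the essential property people are actually after when they say "messaging is more scalable".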

A final and perhaps not least important reason is that when someone says "JMS", everyone else hears "Asynchronous". When someone says "HTTP", most people assume "Synchronous". Using technologies in a common manner is a good way to foster effective communication. Innovation is good, but having a shared context and terminology makes communication much simpler. Put another way, sometimes a clever solution is much worse than a simple one, especially when trying to communicate the idea to someone else.

Both JMS and HTTP can be used to create scalable solutions. When deciding on JMS, don't put TOO much emphasis on scalability; focus on other aspects like durability or manageability. Almost any technology can be made scalable with a little thought. You just have to decide whether the cost of thinking through an alternative is worth more or less than the cost of the knee-jerk solution.

Thursday, March 8, 2012

Run your enterprise like a startup

I've worked in a variety of companies and I've noticed an interesting phenomenon: the capabilities of individual programmers seem to be inversely proportional to the size of the company. Tech startups with 3 folks always seem to have superstars, even though it's a huge drain on their budget, but IT shops with 1000 people always seem to have 10 people doing everything.

The irony in this situation is that a startup has the least amount of money to spend on programmers, but requires hiring only the best and needs to spend a disproportionate amount on payroll. On the other hand, a company flush with cash that could easily hire only the best and brightest, inevitably hires "everybody else". This means that particularly large shops end up with a handful of superstars (just by sheer luck of the draw) who do the majority of the work (and then burnout and leave) and a bunch of "also ran" folks who are really just padding their resume and being a drain on your cash flow.

A visionary IT leader at a large company would break software delivery down into a cluster of startup-like groups with very large degrees of autonomy. Forget about the mythical efficiencies of "enterprise architecture initiatives" and simply hold teams' feet to the fire to deliver real solutions on aggressive timelines. Use the incubator model to foster competition within the organization: two insanely great teams working furiously on the same solution seems inefficient on the surface, but at the end of the day/week/month, you're then more likely to have TWO possible solutions to choose from. If you have one mediocre-to-crappy team of 50 slogging along and delivering nothing, you may be saving payroll money in the short term, but you will bleed to death waiting for solutions that may never appear.

Thursday, March 1, 2012

Aggressive control freaks make great programmers

After reading Give it five minutes, I noticed an interesting pattern. Of the folks I know, the good and great programmers are all pretty aggressive. They are also pretty controlling. Moreover, when reading the comments on that blog post, I was struck by the number of folks who could identify with the "hothead".

As a person who historically fit into that personality type, I wonder why this is. It seems to me that the reason has to do with the way people interact with computers when programming. The very idea of being 'in charge' of the computer and making it do anything you want would seem to appeal to this sort of personality. In addition, the current market and the rate of change handsomely reward people who aggressively pursue this end. Very successful programmers are the ones who can do this most effectively.

The obvious downside is that this creates a situation where negative behavior (in human interaction) is actually rewarded. Without conscious effort, many programming types forget when they are talking to humans and can be overbearing and aggressive without even realizing it is happening. After all, if you spend 8-12 hours per day bending a computer to your will, it understandably takes time to "turn it off" and re-connect with humans.

More importantly, I think this personality type has a self-limiting nature to it. While I know many great programmers who fit in this category, many of them top out in fairly low, though highly technical, roles. This is understandable to me because software is largely written for humans. If the person commanding the computer cannot relate to the people the computer is supposed to serve, the odds are low that the computer will be told to do the correct thing.

So the next stupid idea you hear, think about it for five minutes before you start tearing it apart. Better yet, ask questions for five minutes and maybe try to understand why the other person thinks it's a good idea.