Saturday, March 27, 2010

NoSQL versus RDBMS

I read with interest an article about the mentality behind NoSQL: nosql-vs-rdbms-let-the-flames-begin.html. From my perspective, this debate isn't really a debate, and I've written about similar things before.

If you stop for a moment and think, it seems obvious that giving up a relational model and giving up ACID compliance reduces overhead. It should also be obvious that you are giving up things that many people for many years have thought are really important.

If you're the person selecting an underlying database implementation, your decision-making process needs to take these trade-offs into account. Here's a quick list of things to consider:


  • How important is speed? (how long can we wait for one operation to finish)

  • How important is scalability? (how many operations must we be able to do at one time)

  • How much money can I spend? (Can I spend $200,000 on a honking server?)

  • Who is going to support this system? (A bunch of developers, a bunch of DBAs, or both?)

  • How big are my transactions? (Am I updating 100 unrelated things in one transaction?)

  • How important are transactions? (if update "A" fails, do I really have to roll back update "B")

  • Do I need to replicate my data? (does Hong Kong need the SAME data as the US)

  • Do I need to shard my data? (should Hong Kong orders stay in Hong Kong and never make it back to the US?)

  • How much data are we talking about? (managing a 1 GB database is quite a bit different than managing a 1 TB one)

  • How often does the data change? (is this a transactional system, append-only, or read-only)

  • How much of the data changes? (the weather changes every day, the text of a book written 100 years ago doesn't change very often)




While this list isn't comprehensive, it illustrates the sorts of things people should be thinking about BEFORE even trying to pick which sort of solution is appropriate. What I find is that many people try to make significant architectural and highly technical decisions without having thought about what they're really trying to do and what is really important for that particular solution.

Thursday, March 25, 2010

The Psychology of user interfaces

I've spent a LOT of time studying user interfaces, and it was all largely a mystery until I met up with a few folks who were versed in the "science" of cognitive psychology. I put science in quotes because, as a self-appointed armchair mathematician, it is only with much resistance that I can admit that any field of psychology is, in fact, a scientific discipline.

After quite a few conversations with folks who are versed in this field, as well as many hours of listening to lectures on introductory psychology (and more to come) from MIT http://ocw.mit.edu/OcwWeb/Brain-and-Cognitive-Sciences/index.htm, I now see with blinding clarity how much disarray the state of user interface design is in.

Problem number one is that most, if not all, people who design user interfaces have an opinion about "what the user will think," and frankly I've now gotten to the point where that actually IRRITATES me. I think this is largely because the transference of "me" to "the user" doesn't take into account the idea that developers/managers/salespeople are not always the target users of a system.

This means that when "me" stands in for "the user," I develop UIs that are easy to develop (if I'm a developer), easy to manage the development of (if I'm a manager), and easy to sell (if I'm a salesperson). This leaves those other folks... you know... the ones who are probably paying the bills by purchasing your product... at the mercy of a bunch of people with wildly different objectives, none of which are necessarily in line with their own.

Tuesday, March 23, 2010

Technical Leadership

In my new position, I've spent a large amount of time helping morph my development team's process into something a little more effective than what existed when I showed up. What we used to do was have a bunch of developers furiously coding on what they thought was important without actually getting feedback from the business. This led us to an interesting situation where we were spending hundreds of hours developing things that nobody really wanted (sound familiar?).

We've since spent quite a bit of time honing our skills and process to make sure we're working on things that the business (the folks who pay us, at least) thinks are important. This work has been rewarding, but not very technical in nature. I've finally gotten some time to play around with really technical stuff again, and now I remember why I got into this business.

The fortunate thing is that I'm finally realizing how EXTREMELY rare it is to find folks who can be effective both at managing development teams and at keeping abreast of the technical issues. In addition, I'm finding exactly how bad many techies are at figuring out which things are important to their business partners (or users). More importantly, I'm seeing how a HIGHLY technical manager can actually be a very serious liability if they do not have the ability to realize that their output is measured by their team's output, not their own personal accomplishments.

Sunday, March 14, 2010

Are you a really smart web developer?

If so, take this into consideration and let me know if you think you're still really smart.

The cult of being busy

I read this post and it resonated with me: The Cult of Being Busy. Honestly, I've riffed on similar things in the past; one example is good meetings gone bad.

As a person who has been that harried "line out my cube door" person who thought he needed to be the center of the known universe, I've evolved. Frankly, from my perspective, if everybody on my team needs to ask me personally what to do for every operation they perform, the team will not succeed. Nowadays, I feel the people I choose to work with must have a certain level of professional competence, and I expect them to act appropriately in the absence of specific direction from me.

My obligation in this regard is to respect their decisions and give them the freedom to make mistakes and grow as professionals. This also means that if they really screw up, I need a way to help them avoid that problem in the future.

What I see all too often is that most folks think that having their day booked with 10-16 hours of pointless and ineffective meetings somehow makes them look and feel more important than the slacker who bails out after 8 hours of solid (effective) work. From my perspective there is nothing more inaccurate, but I'm at a loss as to how one can change the general consensus.

I'd rather have a person on my team who gets 8 hours of work done in 4 hours than a person who does 4 hours of work in 16 hours.

Saturday, March 13, 2010

Google Testing Blog: Clean Code Talks - Unit Testing


What are unit tests?

Here's a great, but kinda technical, definition: StandardDefinitionOfUnitTest

But here's a more general example and an attempt to illustrate "why code is so crappy" and some of the difficulties in translating requirements and designs into "high quality" software.

Let's say we have a screen to enter credit card information. Let's further suppose that one of the requirements is to "Validate the credit card information". Let's then say that "Validate the credit card information" means specifically: verify that the card number has the right number of digits for the card type. For this example, assume just two card types:

  • Master Card: 16 digits

  • Visa: 13 or 16 digits

The software written to perform this validation will have a number of "code units" that would likely translate into functions. One of these functions would likely be "validate length of card number".

Note: right now we're saying that "function"=="code unit" to simplify things.

Let's walk through what we might do to validate JUST Visa and Mastercard length. We'll write a function that will take two inputs (card type, and card number) and verify the card number has the proper number of digits. This function should return an error message if the card is not the right length or an empty string if it IS valid.
Let's say that the developer writes the following code (this is pseudo code):


validateCardLength (cardType, cardNumber)
    if cardType equals "MC" and cardNumber.length != 16 return "Master Card must be 16 digits"
    if cardType equals "VISA" and (cardNumber.length != 16 and cardNumber.length != 13) return "Visa must be 13 or 16 digits"
    return ""


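For concreteness, here is a minimal sketch of that pseudocode as real code. I'm using Java purely for illustration; the class name is made up, but the method name, the card-type codes, and the error messages come straight from the pseudocode above.

public class CardValidator {

    // Returns an error message if the card number is the wrong length for
    // the given card type, or an empty string if the length is valid.
    public static String validateCardLength(String cardType, String cardNumber) {
        if (cardType.equals("MC") && cardNumber.length() != 16) {
            return "Master Card must be 16 digits";
        }
        if (cardType.equals("VISA") && cardNumber.length() != 16 && cardNumber.length() != 13) {
            return "Visa must be 13 or 16 digits";
        }
        return "";
    }
}
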
How many test cases should we write? What should they be?

Let's explore!

The trivial solution is to write 1 test case... something like this:


test
validateCardLength("MC","1234123412341234") equals ""


This is a "happy path" test and it "kinda" verifies that the code does what we want in the optimal case. The problem is that it doesn't cover the second and third lines of the function. This test will never verify if you can even enter a valid visa or not. So to get the complete happy path for this VERY simple function we'd need:


test
validateCardLength("MC","1234123412341234") equals ""
validateCardLength("VISA","1234123412341234") equals ""
validateCardLength("VISA","1234123412341") equals ""


This is good; we know you can at least enter cards with the RIGHT number of digits, but kinda the whole point of this function is to verify you can't enter cards with the WRONG number of digits. Let's add to our tests then:


test
validateCardLength("MC","1234123412341234") equals ""
validateCardLength("VISA","1234123412341234") equals ""
validateCardLength("VISA","1234123412341") equals ""

validateCardLength("MC","123412341234123") equals "Master Card must be 16 digits"
validateCardLength("VISA","123412341234123") equals "Visa must be 13 or 16 digits"



So, for this simple thing, the minimum number of tests most people would agree are necessary is about 5. This is actually mathematically implied by cyclomatic complexity, but that is beyond the scope of this post. When developers talk about "coverage", they are referring to how many of these branches they have covered with tests. Our first example would typically imply 20% coverage and the last example would imply 100% coverage.
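
To make the coverage idea concrete, here is roughly what those five cases could look like as a JUnit test against the hypothetical CardValidator sketch from earlier. The framework choice and test names are my own; the inputs and expected messages come straight from the pseudocode tests above.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class CardValidatorTest {

    @Test
    public void validLengthsReturnNoError() {
        assertEquals("", CardValidator.validateCardLength("MC", "1234123412341234"));
        assertEquals("", CardValidator.validateCardLength("VISA", "1234123412341234"));
        assertEquals("", CardValidator.validateCardLength("VISA", "1234123412341"));
    }

    @Test
    public void invalidLengthsReturnErrors() {
        assertEquals("Master Card must be 16 digits",
                CardValidator.validateCardLength("MC", "123412341234123"));
        assertEquals("Visa must be 13 or 16 digits",
                CardValidator.validateCardLength("VISA", "123412341234123"));
    }
}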

Now my question is, are we done with our unit testing for this function?

I would argue that yes, we're done testing the "card length" function, but there are a bunch of "unanswered" questions that a typical developer (or architect) still has to answer in the larger scope. What immediately comes to mind is "what happens if you pass an invalid card type?"
Right now, for any unrecognized card type, the validateCardLength function happily ALWAYS reports that the card length is A-OK.
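
For example, a test like this (added to the hypothetical CardValidatorTest above; the "AMEX" code and the 4-digit number are my own illustration) passes even though nothing was really validated:

    @Test
    public void unknownCardTypeSlipsThrough() {
        // "AMEX" doesn't match either "if", so the function silently reports
        // a 4-digit "card number" as having a valid length.
        assertEquals("", CardValidator.validateCardLength("AMEX", "1234"));
    }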

Just to further illustrate how unit testing doesn't catch everything, here are some other sources of errors that these tests will likely miss:


  • What happens if I pass a 100 character string as a card number? (whoops, the "length" function can only handle strings up to 50 chars)

  • What happens if the system that passes card type uses the code "M2" or "MC" to BOTH mean Master Card? (requirements didn't specify the exact codes)

  • What happens if an intermediate component encrypts the card number before sending it to the function? (does the encrypted string have the same length as the source string?)

  • What happens if some intermediate code automatically trims the card number to 16 characters? (this is a fun one which will lead to lotsa finger pointing; see the sketch after this list)
    For non-developers, or those who just don't get it: this is a problem because if the input string is trimmed to 16 characters, then when a user enters 17 characters, "validateCardLength" will return "" because it only ever sees the string ALREADY trimmed to 16 characters (so the length == 16). This is particularly entertaining because it is a common theme in the software development world to do "helpful" little things like this and totally screw things up in unexpected ways.


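To make that last bullet concrete, here is a hypothetical "helpful" upstream step (entirely my own illustration, again using the CardValidator sketch) that makes the validation pass on bad input:

    @Test
    public void trimmedInputHidesAnInvalidCardNumber() {
        String rawInput = "12341234123412345";          // 17 digits typed by the user
        String cardNumber = rawInput.substring(0, 16);  // "helpfully" trimmed upstream

        // validateCardLength never sees the 17th digit, so it says the card is fine.
        assertEquals("", CardValidator.validateCardLength("MC", cardNumber));
    }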

So for this mindlessly trivial piece of software, there are a number of things that need to be tested, as well as a number of ways that 100% test coverage will totally miss problems. For those of you who fly on airplanes, think of how complicated avionics software must be and let me know when you stop shivering...

Friday, March 12, 2010

Those lazy developers....

Right now I hear rumblings that the "crappy" quality of our software is a direct reflection of our developers not taking the time to "properly" read the requirements. I quote those two words for two very important reasons.

Number One: ALL software will be crappy... we're in the dark ages of software development. Accept it: if you think you've written perfect software, you probably didn't stick around long enough to have a hundred (thousand/million) customers. I've never met anyone (sane) who actually believes they wrote perfect software. Most of the time, software is clunky and stupid halfway through the project. The answer? Make your projects so small that having them turn clunky and stupid halfway through is no big deal, because everybody knows you'll just fix it in the next iteration (i.e. next month). The only exception to this rule is in wonderland, where people just proclaim victory even when the facts of failure abound.

Number Two: If I write a book that nobody can understand, it isn't that my audience is stupid and lazy; it means I'm a crappy author who doesn't understand my audience. If developers don't "properly" understand documents, you've probably written a "crappy" document. I've heard variations of this complaint from multiple directions... developers think non-technical users are dumb because they don't understand the "back button" behavior of a Web 2.0 AJAX application, and help desk personnel think developers are dumb because they don't tell users exactly what went wrong in the system.

The fact is, it's easy to sling mud at a third party to absolve yourself from responsibility for mistakes that have been made, but it's much harder to come up with solutions to get things done.

Wednesday, March 10, 2010

Were you talking about ME in your blog?

This is now the second time in a year or so that someone has personally confronted me (this time not quite directly, but kinda) about something I wrote in my blog. This time it was in response to a post about services.

A friend of someone who apparently thinks they know the unnamed developer in the post sent me a personal email telling me to stop lambasting said individual on my blog. Additionally, I got a few comments on my blog from people who apparently used to work in my organization about how dysfunctional we are and how it's all because of a single person who's a jerk.

First, my post was my perspective, and perhaps it was perceived that I was being too hard on a teammate, but I certainly wasn't suggesting that I had no blame. (It takes two people to have an argument.)

Second, while that post was spawned by a specific situation, my perspective was formed after working with many people and running into similar communication problems. Just so we're clear, a lot of what's in my blog is not necessarily 100% fact-for-fact true. Often I change names and situations in order to protect innocent bystanders. Sometimes it's effective; sometimes folks see right through it.

Third, it's MY blog, guys. Ranting about a developer who doesn't seem to "get it" doesn't mean that developer doesn't get it. It means that, from my perspective, they don't get it.

All that having been said, I sincerely appreciate the time folks took to comment. Even if you're calling me an ignorant jackass, at least you're communicating and taking the time to think about my perspective.

Monday, March 8, 2010

gambler's fallacy

A while back, I was sitting at my desk when a fellow programmer wandered by and chuckled to himself. He turned to me and asked the following question: "If I walk up to pay for something at the store and need 69 cents, what are the odds that I will reach in my pocket and pull out exactly 69 cents?"

I thought for a second and said "Probably like one in a gadgillion..."

Of course, being an esteemed mathematician, he smugly answered, "WRONG, it is precisely 1 in 100".

After precisely 7 picoseconds I asked a couple of somewhat random questions:

#1 How many coins can you fit in your pocket?
#2 Is it possible to get 69 cents using just quarters?
#3 Do you always empty your pockets as soon as you have more than 100 cents?

He shut me down by saying "you're falling victim to the gambler's fallacy".

Let me do a quick explanation. The gambler's fallacy is basically the way people irrationally think the probability of a future event depends on past events... even when it doesn't. For example: if I flip a coin once and it lands on heads, I had a roughly 50-50 chance that it would land on heads. If I flip it again, the probability that it will land on heads is still the same.
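
If you want to convince yourself, here is a tiny simulation sketch (mine, not the mathematician's) showing that a run of heads doesn't change the odds of the next flip:

import java.util.Random;

public class GamblersFallacyDemo {
    public static void main(String[] args) {
        Random rng = new Random();
        int streaks = 0;
        int headsAfterStreak = 0;
        for (int i = 0; i < 1000000; i++) {
            boolean first = rng.nextBoolean();
            boolean second = rng.nextBoolean();
            boolean third = rng.nextBoolean();
            boolean next = rng.nextBoolean();
            // Only look at trials where the first three flips were all heads.
            if (first && second && third) {
                streaks++;
                if (next) {
                    headsAfterStreak++;
                }
            }
        }
        // Prints something very close to 0.5 -- the past flips don't matter.
        System.out.println((double) headsAfterStreak / streaks);
    }
}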

Interestingly enough, after a few hours, he came back and said "you know what, I think you're right, it probably is at least like one in a million". He went on to give some other questions that need to be answered in order to get an accurate probability.

#1 Do you collect antique coins?
#2 Do you have foreign currency in your pocket?
#3 Does your pocket have a hole in it?
#4 Do you HAVE pockets?

So, the next time someone is too stupid to understand simple things like the gambler's fallacy, make sure you aren't too smart to forget that the world is a really untidy and complicated place.

Sunday, March 7, 2010

The Peter Principle was wrong-ish

Ever get thrust into a position that you aren't prepared for? If you think so, it's probably not true...

What I mean is, most folks who are aware of their shortcomings are usually operating within a zone of competence (partly to mostly competent). Folks who "move up" and feel they are completely ready for it are either:

#1 Just trying to get a leg up and don't give a damn if they do a good job or not
#2 Completely unaware of their incompetence, generally a nuisance, but fairly harmless
#3 A huge roadblock stumbling about and generally, unknowingly, making a mess of everything they touch

If you are #1, you know you are a #1 and probably don't care, so I believe I cannot help you, therefore I won't say anything more.

If you are #2, you likely aren't reading my blog because you rarely put effort into thinking about work while AT work, so the odds that you'd ever try to think about work OUTSIDE of work are pretty low.

If you are #3, listen carefully... it's your fault. If everybody around you is an idiot and "it's not MY fault" is your standard disclaimer, you are probably screwing things up royally, and you need to carefully look at what you are doing and start making some changes.

Saturday, March 6, 2010

Groovy, Eclipse, and Karmic Koala

OK, Eclipse is a widely used development tool and is pretty good, but you know.... it has really stoopid dependency problems that are only getting worse as more and more people build upon it.

For example, GRECLIPSE-498 really horks me up. I realize that the Debian/Ubuntu packaging system is partly to blame, but a major selling point (to me) of the Equinox platform is that OSGi is supposed to make things better. This bug is an example of how that goal is not really being met. If I have to re-download the entire Eclipse platform every time I want to use a new plugin, I might as well just statically compile everything into one big ball of mud and be done with it.

In fact, it might be an indicator that the goal is not even worth pursuing. If the only way to determine which components I rely on is for the developer of the component to explicitly specify exactly which specific versions of which dependent components are needed, we are doomed to failure.

I'm not sure how we make this better, but re-downloading the entire platform every point release is dumb. I understand that using "non-release" revisions puts me out on the bleeding edge, but there is no way for me (or any other developer) to easily fix this problem when trying to simply USE the components.

Maven, you're on my radar too... Managing interdependent components is hard, but we need to make sure we don't make things even harder than they were to begin with.