
Showing posts from 2014

Why software estimates change and inflate

As a software developer (Architect?) I find myself in a constant battle with clients and project managers over changing estimates and their inaccuracy. I've lost count of the number of times I've given an estimate and then had to revise it, to the dismay of folks who assumed seemingly subtle changes would move the estimate down or allow it to remain the same, only to see it creep up. There are a number of variables that increase estimation risk, and I'll briefly touch on the major ones. Major factors are: Changing requirements/assumptions - a fundamental truth is that any change incurs overhead. The act of changing the design to remove a requirement is work too... remember, even simplifying the design is work. Removing a requirement mandates revalidating the design against the new requirements (even if they're simplified). Changing the team structure - an experienced dev is much more effective than a newbie. Moreover, a person well versed in a parti...

Easily changing java versions in OSX

In OSX, they've frankly done a pretty good job of enabling multiple versions of java at the same time, but just as frankly it's somewhat obscure and inconvenient to manage multiple versions. It's not mind-bogglingly difficult, but for us oldsters who are lazy, I created a convenient way to switch versions, inspired by (though nowhere near as awesome as) rvm for ruby. Download all of the versions of java you want to use from the appropriate location: java 1.6, java 7, or java 8 (you need at least one to start with). Add the following lines to ~/.bash_profile: jvm() { export JAVA_HOME=`/usr/libexec/java_home -v "$1"`; java -version; } Either source .bash_profile by typing ". ~/.bash_profile" or simply close your terminal and relaunch. At this point you can change versions of java by typing: jvm 1.6* or jvm 1.7* Yes, there's more to it, refer to java_home for more version matching options, and it could be way more awesome, but this...

Why OSX sucks and you should use ubuntu instead

OK, I confess, I use OSX almost exclusively and have for a number of years now. I DO have a number of Ubuntu machines, VMs, and servers in my stable, but my go-to device is a macbook pro (actually two of them, one for work, one for fun). I love the hardware, but the OS, and specifically its lack of a software package management tool, has a level of suckyness that irritates me. Now, don't get me wrong, OSX suckyness is nothing compared to windows, but it seems to be frozen in 2004 and is not moving forward at a pace I think is acceptable considering the huge advances Ubuntu has made in a very short time frame. At the same time, the UI for OSX is awesomely polished and user friendly, but there are some major pain points I can't seem to get past. My points: Ubuntu, being a Debian variant, has an awesome software package management system. More importantly, just about anything you could ever want is ALREADY THERE in some shape or form. OSX has homebrew and macports...

It's not NoSQL versus RDBMS, it's ACID + foreign keys versus eventual consistency

The Background: Coming from a diverse background and having dealt with a number of distributed systems, I routinely find myself in a situation where I need to explain why foreign keys managed by an ACID-compliant RDBMS (no matter how expensive or awesome) lead to a scalability problem that can be extremely cost-prohibitive to solve. I also want to clarify an important point before I begin: scalability doesn't equate to a binary yes or no answer; scalability should always be expressed as a cost per unit of scale, and I'll illustrate why. Let's use a simplified model of a common web architecture. In this model, work is divided between application servers (computation) and database servers (storage). If we assume that a foreign key requires validation at the storage level, no matter how scalable our application layer is, we're going to run into a storage scaling problem. Note: Oracle RAC is this model... at the end of the day, no matter how many RAC nodes yo...
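(To make "cost per unit of scale" concrete, a purely hypothetical illustration with made-up numbers: if each additional commodity application server buys another 1,000 requests/sec for $5k, the application tier scales at a flat ~$5 per request/sec no matter how large it grows. But if every write must also validate a foreign key against one shared storage tier, doubling storage throughput may mean trading a $50k box for a $250k box, and then for a $1M cluster; the cost per request/sec climbs as you grow instead of staying flat, which is exactly the cost-prohibitive curve described above.)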

Things to remember about information security

As more businesses look to cloud application providers for solutions, the need for developers to understand secure coding practices is becoming much more important. Gone are the days when a developer would write an application that only ran in a secure environment; now applications can be moved to locations where previously well-managed security gaps are exposed to the internet at large. Developers now more than ever need to understand basic security principles and follow practices that keep their applications and data safe from attackers. To make things more secure, a developer needs to first understand and believe the following statements: you don't know how to do it properly; nothing is completely secure; obscurity doesn't equal security; security is a continuum. You don't know how to do it properly: if I had a nickel for every developer who thought they invented the newest, greatest, cleverest encryption/hashing routine, I'd be a millio...
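To make the first point concrete, a minimal sketch (my own illustration, not from the post) of leaning on a vetted JDK primitive, PBKDF2, instead of inventing a "clever" hashing routine:

    import java.security.SecureRandom;
    import java.util.Base64;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.PBEKeySpec;

    public class PasswordHashing {
        // Hash a password with the JDK's PBKDF2 instead of a homegrown scheme.
        public static String hash(char[] password) throws Exception {
            byte[] salt = new byte[16];
            new SecureRandom().nextBytes(salt); // random per-password salt
            PBEKeySpec spec = new PBEKeySpec(password, salt, 100_000, 256);
            byte[] key = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
            return Base64.getEncoder().encodeToString(salt) + ":"
                    + Base64.getEncoder().encodeToString(key);
        }
    }

The point isn't this particular algorithm; it's that the primitives are already written, reviewed, and attacked by people who do know how to do it properly.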

Avoid hibernate anemia and reduce code bloat

One of my beefs with Hibernate as an ORM is that it encourages anemic domain models that have no operations and are simply data structures. This, coupled with java's verbosity, tends to make code unmaintainable (when used by third-party systems) as well as cause developers to focus on THINGS instead of ACTIONS. For example, take the following class that represents a way to illustrate part of a flight booking at an airline:

    public class Flight {
        public Date start;
        public Date finish;
        public long getDuration() {
            return finish.getTime() - start.getTime();
        }
    }

This is the core "business" requirement for a use case in this model in terse java. From an OO perspective, start and finish are attributes, and getDuration is an operation (that we happen to believe is mathematically derived from the first two fields). Of course, due to training and years of "best practices" brainwashing, most folks will immediately and mindlessly follow the java...
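For contrast, a sketch of where that reflex usually leads (my reconstruction of the pattern being criticized, not the post's actual continuation): the same two fields buried in boilerplate, with the one real operation stranded in a separate "service" class, leaving the entity anemic.

    import java.util.Date;

    // The anemic version: a pure data structure with no behavior.
    public class Flight {
        private Date start;
        private Date finish;
        public Date getStart() { return start; }
        public void setStart(Date start) { this.start = start; }
        public Date getFinish() { return finish; }
        public void setFinish(Date finish) { this.finish = finish; }
    }

    // The business logic ends up far away from the data it operates on.
    class FlightService {
        public long getDuration(Flight flight) {
            return flight.getFinish().getTime() - flight.getStart().getTime();
        }
    }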

Success, Failure, and Tradition

A quick post about my reality: success is attainable through failure, and tradition is a crutch. I'm amazed at how many people think they can envision some favorable outcome and attain it on the first try, flawlessly, with no course adjustments. This is as silly as imagining that a child can learn to walk simply by possessing the desire and some careful instruction. From my observation, anything even moderately complex and less automatic than breathing requires trial and error, and the most important skill to learn is how to accept failure with open eyes and learn from it. In fact, I'd suggest that a critical part of learning something new is attaining an understanding of what failures led to WHY it's done a certain way and not one of a million other ways. Sometimes the answer is buried in history, and in fact there might not even be a great reason for doing it that way anymore. One warning sign I use as my gauge of when this may have happened is when discussing...

We are not our code

For many people (myself included), creating software is difficult, rewarding, and enjoyable. There is a feeling of pleasure in the act of creating something that is ultimately useful for someone... whether for commerce, pleasure, or even the mundane (writing software to keep a deicer working properly... cool... yeah, that's a pun). It's important to keep things in the proper context, though: you are not your code, I am not mine, we are not our code; to be effective, we must decouple our egos from our code and embrace Egoless Programming. Software should not be an extension of ourselves, because software is much more transient than even we are; it's supposed to be, and that's what makes it cool. Technology changes, the state of the art moves forward, patterns evolve, die, and reemerge in a new form. In a way, deconstructing what's been done before and reforming it into something new is where the value of solid software development comes from, not from building the "u...

Through The Looking Glass Architecture Antipattern

An anti-pattern is a commonly recurring software design pattern that is ineffective or counterproductive. One architectural anti-pattern I've seen a number of times is something I'll call the "Through the Looking Glass" pattern. It's named so because APIs and internal representations are often reflected and defined by their consumers. The core of this pattern is that the software components are split in a way that inappropriately couples multiple components. An example is software that is split between a "front end" and a "back end" that both use the same interfaces and/or value objects to do their work. This causes a situation where almost every back-end change requires a front-end change and vice versa. As particularly painful examples, think of using GWT (in general) or trying to use SOAP APIs through javascript. There are a number of other ways this pattern can show up... most often it can be a team structure...
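A minimal sketch of the coupling, with hypothetical names of my own: both tiers compile against the same value object, so any back-end change to it ripples straight through the looking glass into the front end.

    // One value object shared verbatim by both tiers (hypothetical example).
    public class AccountDto {
        public String id;
        public String taxRegionCode; // a back-end concern the UI now depends on
    }

    // The back end hands out its internal representation directly...
    interface AccountBackend {
        AccountDto load(String id);
    }

    // ...and the front end renders that same object, so renaming or retyping
    // taxRegionCode forces simultaneous changes on both sides of the mirror.
    class AccountScreen {
        void render(AccountDto account) {
            System.out.println(account.id + " / " + account.taxRegionCode);
        }
    }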

Women in technology

When I stop and look around at work, I notice something that bothers me. Why is the ratio of women to men so low in technology? I've read quite a few posts from women who have lamented the locker room mentality of many technology shops, and having been guilty of similar (often cringeworthy) behavior, I can certainly understand why this could be a significant detractor... but it seems like this can't be the sole reason. Being intimately familiar only with the North American technology workplace, I also wonder if this is true globally. I seem to recall, on a recent visit to a Chinese shop, there being a higher number of women in the workplace. I recall a conversation with my daughter from last night that immediately revealed my own bias. Her car was having problems with the cooling system and I asked her if she had fixed it yet (my son and I had both taken a look and just shrugged). After her reply in the affirmative, I asked if she took it to a mechanic... and she stated "N...

Testing Love and Polyamorous TDD

The rigor and quality of testing in the current software development world leaves a lot to be desired. Additionally, it feels like we are living in the dark ages, with mysterious edicts about the "right" way to test being delivered by an anointed few vocal prophets, and little or no effort given to educating the general populace about why it is "right"; the effort is spent evangelizing instead. I use the religious metaphor because it seems to me that a very large amount of the rhetoric is intended to sway people to follow a particular set of ceremonies without doing a good job of explaining the underpinnings and why those ceremonies have value. I read with interest a post by David Heinemeier Hansson titled TDD is dead. Long live testing that pretty much sums up my opinion of the current state of affairs in this regard: a number of zealots proclaiming TDD to be the "one true way", but not a lot of evidence that this is actually true. Yes, Test Driven...

The brokenness of java in the cloud

In the cloud, it's important to be able to have computers talk to each other and invoke commands on each other. This is called Remote Method Invocation (RMI). By language, I have the following recap: Python works, Ruby works, Javascript works, Java... makes you work. Java RMI is a godawful mess that should be killed now. RMI is a very simple thing until you let an argument of architects come up with "the best solution", and then it becomes a convoluted mess of edge cases. At its simplest, RMI involves serializing and deserializing some input parameters, running some logic, then doing the same with some output parameters... gosh, that almost sounds like HTTP to me (hint, it really IS). I don't know exactly where things went wrong with java RMI, but there are registries, incantations, and special glyphs you need to paint on your door on alternate Tuesdays to make it work correctly. In python, ruby, and javascript, I can do RMI 15 different w...
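To make the "registries and incantations" point concrete, here is roughly the minimum ceremony stock java.rmi demands (a sketch built from the standard API, not code from the post):

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;
    import java.rmi.server.UnicastRemoteObject;

    // Every remote interface must extend Remote, and every method
    // must declare RemoteException.
    interface Greeter extends Remote {
        String greet(String name) throws RemoteException;
    }

    public class Server implements Greeter {
        public String greet(String name) { return "Hello, " + name; }

        public static void main(String[] args) throws Exception {
            Greeter stub = (Greeter) UnicastRemoteObject.exportObject(new Server(), 0);
            Registry registry = LocateRegistry.createRegistry(1099); // the registry dance
            registry.bind("Greeter", stub);
            // A client still needs LocateRegistry.getRegistry(host), a lookup(),
            // and a cast before it can call one method; compare a single HTTP POST.
        }
    }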

Continuous integration versus delayed integration

A vigorous area of debate in the development and architecture community exists around the value of Continuous Integration. For context, when a software development team reaches a certain amount of concurrent work involving multiple teams making changes in the same codebase, there are two main ways to handle it: each individual stream of work takes a copy of the code and works on its own, or each team coordinates all work streams within the same codebase. In my experience, the former method seems like a great idea, especially when folks start worrying about "what happens when project A impacts project B's timeline?". Everybody just works on their own feature and, at the end of each team's work stream, merges all the individual teams' efforts back together in a shared location. The great advantage here is that if Feature A is pulled from the delivery, Feature B can be deployed independently. What is frequently overlooked is the fact that...