Tuesday, November 25, 2014

Easily changing java versions in OSX

In OSX, they've frankly done a pretty good job of enabling multiple versions of Java to coexist, but just as frankly, managing those versions is somewhat obscure and inconvenient. It's not mind-bogglingly difficult, but for us oldsters who are lazy, I created a convenient way to switch versions, inspired by (though nowhere near as awesome as) rvm for ruby.

  1. Download all of the versions of Java you want to use from the appropriate location: Java 1.6, Java 7, or Java 8 (you need at least one to start with).
  2. Add the following lines to ~/.bash_profile:

     jvm() {
       export JAVA_HOME=`/usr/libexec/java_home -v "$1"`
       java -version
     }

  3. Either source .bash_profile by typing ". ~/.bash_profile" or simply close your terminal and relaunch it.

At this point, you can change versions of Java by typing:

jvm 1.6*
or
jvm 1.7*

Yes, there's more to it (refer to java_home for more version matching options), and it could be way more awesome, but this should be a quick help for those who just need a simple way to switch JDKs when troubleshooting or testing JVM version issues and want to do it in an automated fashion. Note that this also works with fix pack and minor versions; just refer to the version pattern matching of the '-v' option of java_home to see how to use it.
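
For example, to see exactly what the '-v' pattern can match on your machine, the capital '-V' flag lists every installed JDK; you can then switch to a specific fix pack level (assuming, of course, that you actually have that particular JDK installed):

    # list the JDKs java_home knows about, with their full version strings
    /usr/libexec/java_home -V

    # switch to an exact fix pack version
    jvm 1.8.0_25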

edit - I originally had an alias pointing to a function until a gracious commenter pointedly asked why I did it that way. Not having an answer, I eliminated the alias. This shows the strength of my convictions about the "right way" to do things...

Thursday, September 25, 2014

Why OSX sucks and you should use Ubuntu instead

OK, I confess, I use OSX almost exclusively and have for a number of years now. I DO have a number of Ubuntu machines, VMs, and servers in my stable, but my go-to device is a MacBook Pro (actually two of them, one for work, one for fun). I love the hardware, but the OS, and specifically its lack of a software package management tool, has a level of suckyness that irritates me.

Now, don't get me wrong, OSX suckyness is nothing compared to Windows, but OSX seems to be frozen in 2004 and is not moving forward at a pace I think is acceptable, considering the huge advances Ubuntu has made in a very short time frame. In the same vein, the UI for OSX is awesomely polished and user-friendly, but there are some major pain points I can't seem to get past.

My Points

Ubuntu, being a Debian variant, has an awesome software package management system. More importantly, just about anything you could ever want is ALREADY THERE in some shape or form. OSX has homebrew and macports...which both suck and are just plain confusing. Why in the world there is a need to recompile on a platform as tightly controlled as OSX, when Ubuntu can deploy binary packages, is a complete mystery to me.
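
To make that concrete, here's roughly what installing the same tool looks like on each side (the package is just an example, and your mileage may vary):

    # Ubuntu: one command, prebuilt binary package from the official repos
    sudo apt-get install imagemagick

    # OSX: first pick and install a third-party manager, then install the package,
    # which (depending on the formula/port) may kick off a source build
    brew install imagemagick
    # ...or with macports:
    sudo port install ImageMagick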

This having been said

Apple is a hardware and user experience company, not a software company. Your hardware awesomely rawks, keep it up. Your software is pretty darn good, but you need to partner with Canonical and/or an open source company to get a decent package management solution (or just fork Debian...or just partner with Canonical). Your development tools are horrific. Please contact a professional developer who also does open source, not a sycophantic Apple fanboi, to help fix the problem.

Monday, August 18, 2014

It's not NoSQL versus RDBMS, it's ACID + foreign keys versus eventual consistency

The Background

Coming from a diverse background and having dealt with a number of distributed systems, I routinely find myself in a situation where I need to explain why foreign keys managed by an ACID-compliant RDBMS (no matter how expensive or awesome) lead to a scalability problem that can be extremely cost prohibitive to solve. I also want to clarify an important point before I begin: scalability isn't a binary yes or no answer; it should always be expressed as a cost per unit of scale, and I'll illustrate why.

Let's use a simplified model of a common web architecture.

In this model, work is divided between application servers (computation) and database servers (storage). If we assume that a foreign key requires validation at the storage level, then no matter how scalable our application layer is, we're going to run into a storage scaling problem. Note: Oracle RAC is this model...at the end of the day, no matter how many RAC nodes you add, you're generally only scaling computation power, not storage.

To circumvent this problem, the logical step is to also distribute the storage. In this case, the model changes slightly and it begins to look something like this.

In this model, the one used by distributed database solutions (including high end ACID-compliant databases such as Oracle RAC, Exadata, or IBM pureScale), information storage is distributed among nodes responsible for storage, and the nodes don't share a disk. In the database scaling community, this is a "shared nothing" architecture. To illustrate this a little further, most distributed databases in a shared nothing architecture work in one of two ways; for each piece of data they either:

  • Hash the key and use that hash to look up the node with the data
  • Use master nodes to maintain the node-to-data association

So, problem solved, right? In theory, especially if I'm using a very fast/efficient hashing method, this should scale very well by simply adding more nodes at the appropriate layer.
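
As a toy sketch of the first approach (the hashing scheme and the key name are made up for illustration; md5 here is the OSX command, it's md5sum on Linux):

    # route a key to one of N storage nodes purely by hashing it
    node_for_key() {
      local key="$1" num_nodes="$2"
      # hash the key, keep the first 8 hex digits, and mod by the node count
      local h=$(printf '%s' "$key" | md5 | cut -c1-8)
      echo $(( 16#$h % num_nodes ))
    }

    node_for_key "customer:42" 4    # always lands on the same node, no central lookup needed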

The Problem

The problem has to do with foreign keys, ACID compliance, and the overhead they incur. Ironically, this overhead actually has a potentially serious negative impact on scalability. Moreover, our reliance on this model and its level of abstraction often blinds us to bottlenecks and leads to mysterious phantom slowdowns and inconsistent performance.

Let's first recap a couple of things (a more detailed background can be found here for those who care to read further).

  • A foreign key is a reference from one table to a key in another table that MUST exist for an update or insert to be successful (it's a little more complicated than that, but we'll keep it simple)
  • ACID compliance refers to a set of rules about what a transaction means, but in our context, it means that for update A, I must look up information B

Here's the rub: even with a perfectly partitioned shared nothing architecture, if we need to maintain ACID compliance with foreign keys, we run into a particular problem. If the key for update A is on one node and the key for information B is on a different node... we require a lookup across nodes of the cluster. The only way to avoid this problem... is to drop the foreign key and/or relax your ACID compliance. It's true that perfect forward knowledge might allow us to design the data storage in such a way that this is never a problem, but reality is otherwise.
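
As a toy way to picture that extra hop (directories stand in for nodes and files for rows; the table and key names are made up):

    # two "nodes" modeled as directories, with the referenced row on node1
    mkdir -p node0 node1
    touch node1/customer_42

    # inserting an order on node0 with a foreign key to customer 42 forces a
    # check against node1 before the write can be accepted
    if [ -e node1/customer_42 ]; then
      touch node0/order_1001    # FK satisfied, the insert proceeds
    else
      echo "FK violation: no such customer" >&2
    fi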

So, at the end of the day, when folks are throwing their hats into the ring about how NoSQL is better than RDBMS, they're really saying they want to use databases that are either:

  • ACID compliant and they'll eschew foreign keys
  • Not ACID compliant

And I think we can see that, from a scalability perspective, there are very good reasons to do this.