Enterprise Scalability and Architecture
I've recently spent a lot of time explaining scalability and architecture to people in our organization, largely without success. I blame hardware vendors for this: they shove new technologies at unsuspecting infrastructure professionals without any high-level explanation of the real advantages.
As my first example, I will talk about logical partitioning. Listen, folks: this is not a scalability feature! Don't listen to the salesman; he doesn't know what the hell he is talking about. I will admit I can imagine some scenarios where LPARs might help you scale a collection of applications, but do NOT try to tell me that adding a new LPAR is somehow scaling my application. And do not tell me I should buy a machine that is 20x more expensive because it's more "scalable".
The problem with machines that share disks, buses, caches, or other resources is that something must manage who is allowed to use each resource at any given time. Letting two processes modify the same memory location at the same time is dangerous, so access has to be coordinated. That coordination is not free, and as you add more processors sharing a resource, the overhead typically grows faster than linearly. Put another way, the system's throughput scales less than linearly.
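To make that coordination cost concrete, here is a minimal sketch in Go (my own illustration, not anything from a vendor): every worker increments a single mutex-guarded counter, which stands in for any shared resource. The worker counts and iteration count are made up for the example. Run it and watch ops-per-second flatten or fall as workers are added.

```go
// Minimal sketch: how contention on one shared resource erodes
// throughput as workers are added. All numbers are illustrative.
package main

import (
	"fmt"
	"sync"
	"time"
)

const opsPerWorker = 1_000_000

// run launches n workers that all increment one mutex-guarded
// counter, then reports the elapsed wall-clock time.
func run(n int) time.Duration {
	var mu sync.Mutex
	var counter int64
	var wg sync.WaitGroup

	start := time.Now()
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < opsPerWorker; j++ {
				mu.Lock() // every worker serializes here
				counter++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	for _, n := range []int{1, 2, 4, 8} {
		elapsed := run(n)
		total := float64(n) * opsPerWorker
		fmt.Printf("%d workers: %v (%.0f ops/sec)\n",
			n, elapsed, total/elapsed.Seconds())
	}
}
```

On most multi-core machines the per-operation cost rises with the worker count, because every increment serializes on the same lock. That is less-than-linear scaling in miniature, and it is exactly what shared disks, buses, and caches do to you at the hardware level.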
No matter what manner of wizardry your hardware vendor has baked into his machines, if you are sharing anything, you are going to incur overhead, and that overhead will grow as you add nodes that contend with each other. Any salesman who tells you otherwise is simply wrong.
The number one, most sure-fire, super-hot way to scale an application is to partition the data and/or the work and do the processing on independent machines. This is the holy grail of scalability: a can't-lose proposition. It is hard to do, and it requires thought, creativity, and a good deal of elbow grease. The payoff, however, is huge, and it is the ONLY sure-fire path to scalability.
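Here is a minimal sketch of that idea in Go, with hypothetical names throughout: each key is hashed to one of several workers, and each worker owns its data outright, so nothing is shared and nothing contends. The shard count, channel sizes, and sample keys are all invented for the example; the same routing logic applies whether the "shards" are goroutines or independent machines.

```go
// Minimal sketch: partition work by key across independent workers
// so no state is ever shared. All names here are illustrative.
package main

import (
	"fmt"
	"hash/fnv"
	"sync"
)

const numShards = 4

// shardFor picks a worker for a key. The same key always lands on
// the same shard, so no two workers ever touch the same data.
func shardFor(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32()) % numShards
}

func main() {
	// One channel and one private map per shard: the only
	// coordination point is routing, never data access.
	inputs := make([]chan string, numShards)
	var wg sync.WaitGroup

	for i := 0; i < numShards; i++ {
		inputs[i] = make(chan string, 64)
		wg.Add(1)
		go func(id int, in <-chan string) {
			defer wg.Done()
			counts := make(map[string]int) // owned exclusively by this worker
			for key := range in {
				counts[key]++
			}
			fmt.Printf("shard %d saw %d distinct keys\n", id, len(counts))
		}(i, inputs[i])
	}

	for _, key := range []string{"alice", "bob", "carol", "alice", "dave"} {
		inputs[shardFor(key)] <- key
	}
	for _, ch := range inputs {
		close(ch)
	}
	wg.Wait()
}
```

Because each partition owns its data exclusively, adding a shard adds capacity without adding contention. That is why this approach scales where shared-everything hardware does not.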