Thursday, December 8, 2016

Five simple steps to select the ultimate software development tool

After years of keeping it a secret, I'm finally going to let folks in on how to select software development tools. The great thing about my process is that it applies to programming languages, frameworks, design patterns, and many other development aspects (even 'non-software-development' things).

Here's the process:

  1. Does your current tool support your business goals? (things like: speed to market, cost, uptime, available programmers of the appropriate caliber)
  2. If yes, why are you reading this? You already have a tool, get to work.
  3. Can you modify your existing tool (keeping in mind that the modification needs to account for the business costs of modifying the tool) so that #1 applies?
  4. If yes, get to work modifying it.
  5. 90% of the time, I've found you don't need this step, but if you do...look for tools that more closely align with #1, try them, and go through the process again (a tongue-in-cheek sketch of the loop, in code, follows).
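
Since I can't resist, here's what the loop above might look like in code. This is purely a tongue-in-cheek sketch: the predicates and the dictionary shape are made up, and the "business goals" would obviously be your own (speed to market, cost, uptime, hiring pool...).

    # A tongue-in-cheek sketch of the selection loop above; the predicates and the
    # dictionary shape are placeholders, not a real tool-evaluation framework.
    def supports_goals(tool):            # step 1: does it support your business goals?
        return tool.get("meets_goals", False)

    def can_be_modified(tool):           # step 3: can you modify it (cost included)?
        return tool.get("modifiable", False)

    def select_tool(tool, candidates):
        while not supports_goals(tool):
            if can_be_modified(tool):    # step 4: get to work modifying it
                tool["meets_goals"] = True
            elif candidates:             # step 5: rarely needed -- try the next candidate
                tool = candidates.pop(0)
            else:
                raise RuntimeError("Nothing fits; maybe revisit the business goals.")
        return tool                      # step 2: you already have a tool, get to work

    print(select_tool({"name": "current tool", "modifiable": True}, []))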

There you have it! Five simple steps to the ultimate software development tool.

Wednesday, November 16, 2016

News media bias and how to read through it

This morning I saw a Facebook post from a fairly liberal friend about how he watched something on Fox News and couldn't believe people thought this stuff was real. It's interesting to me, because of course it's "real", but it's only telling the parts that its audience wants to hear. So to that end, I thought I'd share some tips on how to detect bias and separate "opinion" from "fact". I'm not a journalist, but this skill is very helpful in "real" life too. All people can have a variety of opinions, some deeply held, some changing all the time, but some people confuse opinion with fact and it is a major reason for conflict.

Opinions are judgements (hopefully) based on facts; facts are verifiable

For a little more detail, here's a pretty good assessment. I won't go into detail, but a key thing to consider is "how can I disprove this?". If it is possible to disprove something, then it's probably an opinion. If two different people can observe different "facts" in the same situation, it's still an opinion. If no matter how hard you try, you can't disprove something, it's probably a fact.

So now, for my illustration of the differences (using recent events as an example). Recently, there were riots in Greece. This is verifiable from multiple sources and there are multiple pieces of photographic evidence of it. Generally, you COULD hold the opinion that all this evidence is faked by some huge conspiracy that is seeking to misinform you (and some people hold this to be true...unbelievably), but it would make your world so small, because the only thing you could really trust is your own opinion based on your personal life experience and perception of the world. If you instead trust that other people can report facts to you reliably, you simply need to strip their opinions away and you can get to some kernel of "fact".

Let's look at how this has been reported in the news: Fox News has the headline "Greek police use tear gas, stun grenades to quell anti-Obama protesters in Athens". This is a really catchy and emotional statement that seems to imply Fox News has intimate knowledge of the protesters' intent. While an interesting opinion, there are many manufactured pieces of information presented in a manner that implies the people reporting the news are somehow omniscient and can know "why" the rioters are protesting. Nowhere does it explain how they know that communists are protesting a visit by President Obama; it's just floated as a fact (but is really an opinion). At the very end of the article, there is a quote which reads "American imperialism has not changed," Lafazanis said Tuesday. "The U.S. presidents and administrations have played — and still play — a leading part in the bailout-linked plundering of our country ... and their interventions are drowning our part of the world in blood and creating refugee waves.", which seems to imply that Panagiotis Lafazanis, a populist politician in Greece, is very angry with United States foreign policy. Makes sense, I respect his opinion and understand why he might feel that way.

Now, another source, CNN, proclaims: Arrests in Athens as protest turns violent during Obama's visit to Greece. This seems to only state verifiable facts... President Obama is in Greece, people rioted and were arrested. Furthermore, in reading the article there are no statements that aren't attributed to a source about the rioters' INTENT or theories about their THOUGHTS, simply clearly written statements about "what happened" and "who did what".

Finally, we have The Telegraph and their headline is: Greece crisis: a second day of riots in Athens (they're a few hours ahead of the US, so they're reporting day 2). They have a bit of colorful opinion, because they're stating the opinion that Greece is in a crisis...not sure how they can possibly know whether that is or isn't true...in fact, knowing whether something is a crisis or not is inherently an opinion, so I'd have to ignore that judgement. What's interesting about this (very brief) article is that we get the nugget of information that the protesters were a group of communists who had a legal permit to protest and were attacked by a mob of counter-protesters. Funny that this information is somehow lost to CNN and Fox News, but that's mostly just an observation on "what is important" to different news agencies.

After analysis, my opinion is that it's reasonable that pro-capitalist (or plain ol' anarchist) folks attacked a bunch of people they disagree with. It's difficult for me to understand how Fox News reached the conclusion that anyone was protesting for or against President Obama, but it's stoutly proclaimed in such a manner that an uninformed reader might believe it to be a fact. Again, to read through the bias, it takes a bit of mental training to ask yourself "how do they know that?" and "can I prove or disprove it?". If the answer to those two questions is either "I don't know", or "they must have information I don't have that they didn't share", or "I don't really care, I want to believe it and it fits within my worldview so I won't challenge it", you're probably reading about folks' opinions, not facts.

P.S. Japan Times has an even better article...better meaning it has more facts than any of the other three and almost zero opinion...

Saturday, November 5, 2016

The end of election 2016 is near! (Thank God)

I'm not really a political guy. In my lifetime, I've voted for: Bush, Clinton, Nader, McCain, and Obama, so it should be clear I'm all over the board and have no political affiliation (if you're keeping score, that's 2 Republicans, 2 Democrats, and a Green Party candidate). In previous elections I was largely indifferent to the primaries and really only started researching a month or so before the election. This cycle, while I still (largely) ignored the primaries, I did a little bit of research, as the VERY broad Republican field and the interesting wild card (Bernie Sanders) on the Democratic ticket were unavoidable (thanks Twitter and Facebook! :).

As we near election day, I thought I'd share my observations as a person who "doesn't have a horse in the race". What I mean by this is the following:

  • I don't care about political ideologies... Political Parties such as: Republican, Democrat, Green, Constitution, Natural Law, Pirate (yeah, it's a thing, look it up) all are just marketing ploys. If you're a staunch "Party person" you probably can stop reading because your unconscious bias is just going to be a problem...I know this, it's OK...you've made your decision and I'm OK with it.
  • I don't care about media hype, hyperbole, ad-hominem attacks, and the 30 other ways folks are unconsciously manipulated. I prefer to challenge myself and my perspective about things to try and get to an understanding of what candidates' "real" agendas are. If you're a particular fan of a candidate and have dreams about how wonderful they are, you may want to stop reading. The fact is, largely based on what I believe to be fairly strong scientific evidence, most candidates are egocentric narcissists and many are probably psychopaths. It's OK, because you need that to do the job...The only alternatives to these are #1 an almost superhuman ability to rise above the fray (I think our current President falls into this category), or #2 an almost subhuman sense of self in that you don't even care what people say because it isn't "YOU" they're talking about, it's "the party" (I'd put Bush Jr into this category).
  • I think most US media outlets have a general bias, so I spend a lot of time on alternate outlets. I DO still read/watch Fox News and NPR (my conservative and liberal outlets), but also follow CNN (slight liberal bias), Al-Jazeera (somewhat conservative, if a bit focused on the Middle East), Deutsche Welle (slight conservative bias for US topics, slight liberal for German), and the BBC (seems to be all over the board, but fairly liberal...).

My assessment of the two main candidates this year follows:

  • Donald Trump - He's running because he wants to 'Make Trump Great Again'...He gives zero shits about the "working man" or "America" except as a source of labor in the first case and a legal system for him to exploit others in the latter case. He's not really a great businessman but a GREAT marketer (he's probably better than PT Barnum IMHO). A vote for Trump is going to be good for America as long as he can actually do things to keep his ego afloat...anything that challenges that (like Congress being in a deadlock and him being unable to actually DO anything) will cause him to lash out. He's going to be a great president for SNL and Comedy Central, but I think in general he will be unable to get ANYTHING done. As he has been unable to articulate any concrete plans except "build a wall", we MAY see a jobs boost in the skilled trades and manual labor market as we hire millions of people to build this wall (assuming he can actually make that happen), but in the end I personally think it's a boondoggle. Perhaps, if we're lucky, that will translate into a better appreciation for this underserved area of the market and he'll be able to create infrastructure advances. As this will likely also serve his needs, he might be "good for America" and actually "make America greater than it already is", only time will tell.
  • Hillary Clinton - She's running because she believes she wants to improve the general state of affairs (pun intended) in the United States. As an egghead and political animal, she is very qualified to actually "get things done", and is willing to wheel, deal, negotiate, strong-arm, lie, cheat, and any other political maneuvering necessary to achieve her goals. Her goals seem genuinely well intended, but we all know what the road to hell is paved with. I think personally, she will be able to make effective, but only incremental, changes to things and in general we'll see her able to both articulate plans, but also report on progress. Many of these reports might just be spin or outright fabrication, but in four years I think we'll likely be only a small increment ahead of where we are right now.

Given those two statements, the question really is: Do I take a huge gamble and possibly live through another era of "Bush Jr" where the president is an embarrassment and just grunt through it...with the potential for an upside...or do I take the safe bet and have another "Clinton" presidency? It's a tough call this year, and I might just be inclined to throw my hat in the ring with another independent (perhaps Gary Johnson) knowing that they're unlikely to get 270 electoral votes, and even if they did, they'd likely be ineffective, but at least they'd be more of an idealist instead of being so ego driven.

For all the folks that think Trump is going to start WW III, it's doubtful to me. For the folks that think Clinton is somehow "unfit" because of all her scandals and other sketchy stuff...you're just wrong and worrying about the wrong stuff. They're both qualified --enough-ish--, make your choice and look forward to 2 years with election drama taking a sideline to cat photos in your Facebook feed. I, for one, cannot wait...

That is all...

Tuesday, September 13, 2016

My perspective on female programmers

Setting aside the obviously sexist tone, one thing I've noticed (aside from the dearth of female programmers) is that, at least on my team, the female programmers seem MUCH better at asking good questions. While I realize the sample size is too small to be even remotely scientific, I have a couple of observations that seem to hold true.

  • Female programmers seem to do "just enough" research before asking questions and can frame the question in a way that makes reaching the crux of the question very easy.
    • Too often with male programmers (especially junior ones), I get questions that amount to "I'm stuck and I don't know what to do" ... with no background or research ... so I end up playing a game of "did you google the error message?" "what is it you're actually trying to do in the big picture?" or "what have you already looked at?".
    • Worse yet, with many male programmers, they will have spent a month rewriting an entire subsystem in a new programming language (or framework)...before getting stuck on something, and THEN reach out for an answer. In this situation, when I dig back to "why are we rewriting this?" there are often uncomfortable pauses as the best answer usually ends up being a variation of "I thought it would be better" without any clarification of what "better" actually means.
  • Female programmers seem inclined to actually ASK questions instead of giving ego-driven general proclamations about their opinion on how it "should" be. Too often with male programmers I hear things like "this code is all crap, we should rewrite it" or "we should reimplement this with (a new language/a new tool)". Any challenge to this assertion and I feel like the crusty old gunslinger that every new kid on the block has to test their mettle against in a shootout.

In short, it seems like female programmers (please feminists, don't hurt me!) are more adept at "figuring out what the real question is" and "respecting the way things are". That's not to say there aren't good innovations coming from my female team members; some of the best real improvements we've seen in our system are from these folks rolling up their sleeves and coming up with novel ways to solve problems. The difference is that they seem to "fix problems" or "make improvements" based on what we're ultimately trying to do instead of their opinion of "how the world SHOULD work".

I'd be inclined to think it's a reflection of culture, but it would have to be something specific to "females in software development", as the folks I can think of who best reflect this are from WILDLY different places and grew up (as far as I can tell) in very different ways. Perhaps this is why so many females tend to drop out of tech and take on ancillary positions like project manager and analyst...maybe those roles better reward this sort of behavior, versus the "wild west" culture of software development that seems to reward the opposite. From my perspective, we need to better foster this collaborative mentality, versus the historic "dude in a cave writing millions of lines of code in a vacuum".

For a quick guide see asking smart questions, an essential read for hackers and open source developers.

Monday, June 27, 2016

Honor, Dignity, and Victim Culture

I won't bore you with details, but I will point to an interesting article on a rise in microaggressions and the concept of a "Victim Culture". To recap quickly, there are three primary types of cultures discussed: #1 Honor Culture - where you generally have a personal code of honor and use force to correct even the slightest of offenses, #2 Dignity Culture - where you ignore insignificant (that is, legally allowed) slights and rely on dialogue and third parties to dispute offenses according to a unified, documented written code, and #3 Victim Culture - where you use the perception of victimization to resolve disputes.

In Honor cultures the strong are rewarded for being strong or oppressive and there is nothing to check their power. This leads to adaptations in behavior that favor the idea that "the strongest make the rules". Moreover, the rules tend to be arbitrary and (inherently) serve those who are already in positions of power (or, in the case of revolution, whoever has superior weaponry). This, to me, is clearly not an appropriate way to exist as a society. In such a society, warlords and gangs will run rampant as each leader seeks to continue to amass enough personal power to maintain their position.

On the other hand, Victim Cultures do the exact opposite, but end up devolving into the same scenario on its ear. The adaptations become "I'm the most victimized" and in this culture a downward spiral of "who's the bigger victim" exists as each player, in their attempt to amass power, points out infractions against their person. To me, Victim Cultures and Honor Cultures both have the problem that the source of "truth" is subjective and can be easily manipulated to favor those on either extreme. In this culture, individuals only become powerful by expressing their "victimhood" to third parties. This leads to gossip and demagoguery being the way to nullify perceived violations of your person.

At the top of the triangle of these cultures is "Dignity Culture". What is important about this model is that actors are automatically imbued with power (and, for example...human dignity) and their actions can only take away from it..."proper" actions are assumed and "improper" actions are punished according to a SHARED set of values. This is the foundation of (not just) the American system of government and a cornerstone of my personal belief system. The important difference between the three is that this is the only one of them that assumes equality and codifies what is wrong action and what are appropriate consequences. This means that collectively and socially we agree to "universal" terms and rules and apply them unemotionally in all situations, versus allowing either victims or oppressors to make their own rules that are highly situational and based often on emotional responses rather than a "rule of law".

This having been said, I've been thinking about this a bit and, in particular, I think about a video I watched a while back with our current President speaking to the owner of a gun shop about gun control. In this clip, it appears evident that spreading Fear, Uncertainty, and Doubt (FUD) about the current president's position on gun control has clouded an otherwise rational person with what appears to be an inaccurate assessment of affairs. His descent into victimhood is clear from his assessment that the current administration is hell bent on punishing the "good law abiding people" while not trying to keep guns out of the hands of the "bad guys". This video (to me) is a clear example of how pundits can spin situations and tell a compelling (if untrue) tale of doom that incites otherwise rational people to attribute motives that don't exist to otherwise innocent and well intentioned attempts at improving the overall state of the human condition.

My point is: Before condemning someone who doesn't share your viewpoint, step back and apply Hanlon's Razor... That is, "Never attribute to malice that which is adequately explained by carelessness". It is much easier to feel that another party is wrong and attribute their actions to motives that don't exist (but fit your worldview or predispositions) than it is to step back, really look at their actions, and chalk apparent "malicious intent" up to simple "lack of skill".

Monday, April 25, 2016

Headless Raspberry Pi 3 install for OS X for non-noobs

Having purchased a Raspberry Pi 3 a few weeks ago, I was quite confused by almost every install reference mentioning "plug in HDMI monitor and USB keyboard" as a step. While I've found references on how to do a headless install, it seems that many of the instructions come from a background of "you've already installed and run the graphical installer". As a person coming from an Arduino/Linux server background, I really don't need X11 for my use case and just want a powerful microcontroller that I can set up via ssh (well, USB would be better, I still don't understand why you can't do this using the USB connection as a tty...but that's a different discussion). What follows are the steps I used...NOTE: if you use the wrong disk number you will destroy potentially important information on your machine; use at your own risk and only do this if you understand what this means, otherwise you will likely end up with an unusable machine or at a minimum lose information.

First, download the Raspbian Lite image.

Next, plug your SD card into your Mac.

run

df

and you should see an entry that corresponds to your SD card. My output had an entry similar to this (other output omitted)

/dev/disk2s1 129022 55730 73292 44% 0 0 100% /Volumes/mysdcard

Unmount the sd card:

sudo diskutil unmount /dev/disk2s1

Copy the image to the RAW device (this means /dev/rdisk2 instead of /dev/disk2s1...the disk number will quite likely be different on your machine)...

sudo dd if=2016-03-18-raspbian-jessie-lite.img of=/dev/rdisk2 bs=1m

Note: I'm not sure about the whole "block size" thing, but this is what I used (bs just controls how much dd reads and writes per operation, so it mainly affects speed).

This will run for a few minutes with no feedback, you can hit ctrl-T in your terminal to get a status output. Once this command has completed, you can eject the disk.

sudo diskutil eject /dev/rdisk2

Now plug the SD card into your Pi, power it up (via USB), and look for the device (plugged into Ethernet) on your network. Assuming you've found the device's IP address (mine was at 192.168.1.105), you can then ssh into the machine with:

ssh pi@192.168.1.105

Using 'raspberry' as the password

At this point you should have a functional Pi image and can continue with your configuration...My first step was to resize the root partition using raspi-config (as I have a 32GB card).

Hopefully these instructions will help 'slightly more advanced' users wade through the "Noob" clutter available on the internet.

Friday, April 15, 2016

Do you test your IT operations and business processes?

The software industry expends a lot of energy making sure software is tested. From unit testing, to system and performance testing, to manual "poke it with a stick testing", almost no software team "doesn't do it" or fails to see the need. Ironically though, many places don't routinely test their IT operations and business processes. This is ironic because if those things are broken or brittle, it generally has a MUCH larger negative impact on a company than buggy software.

To clarify, I've worked with many companies that have "backups" and "disaster recovery plans", but they never TEST to see if either of these can actually lead to a recovery in the expected timeframe. A well known (for me at least) scenario in the IT field (related to operations) is this (a sketch of an automated restore check follows the list):

  1. "Yes we do backups"
  2. Server fails, all data is gone
  3. Build new server (this works)
  4. Restore data that was previously backed up
  5. Realize backups actually were written in a way that isn't recoverable, the backups we thought were being performed have actually never worked, someone "forgot" to enable backups for that particular server...(the list goes on and on...)
  6. Weep
  7. Go out of business
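
To make that concrete, here is a minimal sketch of what "test the backup by actually restoring it" could look like when run on a schedule. It assumes a PostgreSQL-style gzipped SQL dump; the backup path, the database names, and the "orders" table are hypothetical placeholders for whatever data actually matters to your business.

    # Hypothetical restore check: restore the latest backup into a throwaway database and
    # verify the restored data is usable, not just that a backup file exists.
    import subprocess
    import sys

    BACKUP = "/backups/latest.sql.gz"     # placeholder path
    SCRATCH_DB = "restore_test"           # throwaway database, never production

    def restore_and_verify():
        subprocess.run(["createdb", SCRATCH_DB], check=True)
        try:
            subprocess.run(f"gunzip -c {BACKUP} | psql {SCRATCH_DB}", shell=True, check=True)
            out = subprocess.run(
                ["psql", "-tA", SCRATCH_DB, "-c", "SELECT count(*) FROM orders"],
                check=True, capture_output=True, text=True)
            assert int(out.stdout.strip()) > 0, "restore 'worked' but the orders table is empty"
        finally:
            subprocess.run(["dropdb", SCRATCH_DB], check=True)

    if __name__ == "__main__":
        try:
            restore_and_verify()
            print("backup restore check passed")
        except Exception as exc:
            print(f"BACKUP RESTORE CHECK FAILED: {exc}", file=sys.stderr)
            sys.exit(1)   # page a human long before step 2 of the list above ever happens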

Stretching outside the technical realm, there's another area that confounds me with its lack of testing maturity: "testing your process". Most places I've encountered are, at best, able to define and presumably follow a process of some sort, but generally unable to understand or define what happens when the process fails. As an example, many places have "human steps" in their process, but never test "what happens if the human forgets, has incorrect assumptions about what that step means, or is just plain lazy and lies about performing a step?". In general, there is too much reliance on an individual's sense of responsibility being the safeguard that the process will perform adequately.

As a very common example...if we have a software delivery process and a step is "update the API documentation", how many organizations will actually QA the process to understand how to detect and/or ensure that this step is done? More importantly, how many teams will have someone test "making a change without updating the documentation properly" to ensure that this is detected? My general answer is "a vanishingly small number".
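
As a sketch of what that QA step could look like, here is a hypothetical CI check that fails a build when API code changes without a corresponding documentation change. The directory names (src/api/, docs/api/) and the base branch are assumptions; the point is that "update the API documentation" becomes something the process itself verifies, and you can test the check by deliberately submitting a change that skips the docs to confirm it gets caught.

    # Hypothetical CI gate: fail when API source changes but the API docs directory doesn't.
    import subprocess
    import sys

    def changed_files(base="origin/main"):
        out = subprocess.run(["git", "diff", "--name-only", f"{base}...HEAD"],
                             capture_output=True, text=True, check=True)
        return [line for line in out.stdout.splitlines() if line.strip()]

    def main():
        files = changed_files()
        api_changed = any(f.startswith("src/api/") for f in files)
        docs_changed = any(f.startswith("docs/api/") for f in files)
        if api_changed and not docs_changed:
            print("API code changed but docs/api/ was not updated; fix the docs or explain why.")
            sys.exit(1)

    if __name__ == "__main__":
        main()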

Most people (in my experience), when quizzed about issues such as this, will throw out statements like "well, we pay our people well and they are 'good' people, so we don't have to worry about it". To me, this is a silly and fragile position to take: many professions have very highly paid people who are extremely reliable (as people go) and still have checks, double checks, and controls to ensure that the "things we said we were going to do" actually "got done". While I think the security industry is the dimension of tech that has made the most progress in "defining" these sorts of controls, I still see that (even in that industry) most companies don't take the additional step to validate or test that the process itself is adequate and that failure is detected in an appropriate manner, at an appropriate level, and at an appropriate time.

Tuesday, April 5, 2016

Let it crash

"Let it crash" is a watchword in the Erlang world that has a pretty specific meaning in that context, but can be seriously misapplied if taken out of context.

In the Erlang world, a "crash" is the termination of an actor in its specific context. In a well designed actor system, the actors have very specific jobs and if they cannot complete that job they are free to fail immediately. This is a bit of a problem for folks working in the JVM world, as "crash" can be overloaded to mean things that change the semantics of a transactional system.

Real world(ish) example: Suppose you have a distributed system that accepts a message, writes it to a data store, then hands a new message off to three other components. Suppose further that the transactional semantics of the system are such that the job isn't "done" until all four operations have completed and are either #1 permanently successful, or #2 permanently failed.

The important detail here is that when performing a transfer, we want the balances of both accounts to be updated as a single transaction; we cannot be in a state where the money has left one account but has not arrived at the other account. To do this requires the concept of a distributed transaction, but without using an "out of the box" distributed transaction coordinator. To clarify, we will assume that the components described are exposed via web services and don't have access to each other's underlying transaction management system.

So, to design this, the trivial implementation (let's call it the synchronous model) is as follows:
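
The original diagram isn't reproduced here, but a minimal sketch of the synchronous model (with made-up names standing in for the remote web-service calls) looks something like this: the requestor calls both account updaters in turn, blocks for the answers, and compensates the one that already succeeded if the other fails.

    # Minimal sketch of the synchronous model: block on both updates, roll back the
    # completed half if the other half fails. The classes stand in for remote services.
    class AccountUpdater:
        def __init__(self, name, balance=100):
            self.name, self.balance = name, balance

        def apply(self, amount):          # may raise if the service or account is unavailable
            self.balance += amount
            return {"account": self.name, "amount": amount}

        def rollback(self, receipt):
            self.balance -= receipt["amount"]

    def transfer(source, destination, amount):
        debit = source.apply(-amount)     # first nested transaction
        try:
            destination.apply(amount)     # second nested transaction
        except Exception:
            source.rollback(debit)        # compensate the half that already happened
            raise                         # the client sees a single, permanent failure
        return "success"

    print(transfer(AccountUpdater("A"), AccountUpdater("B"), 25))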

In this model, we need to handle a situation where if EITHER of the nested transactions fails, the requestor can roll back the OTHER transaction and report back to the client that the entire transaction has failed. I won't dig into the details, but this is fairly complicated and we'll leave those details alone. The important detail here is that the entire transaction is synchronous and blocking. This means that the client and the requestor must hang around (typically in memory) waiting for the other components to report "success" or "failure" before reporting anything to the client. Moreover, it means that a failure of any component ultimately is a "permanent" failure (from the client's perspective), and it's up to the client to retry the transaction if it's important. Some failures might genuinely be permanent (one or the other account may not, and may never, exist), while other failures (connectivity to one or the other account updater) may only be transient and/or short lived.

In many ways, this simplifies things as it delegates responsibility for managing success or failure to the leftmost component. That having been said, there is still potential for things to go wrong if, for example, the first updater succeeds, but then the requestor dies and is unable to roll back the first transaction.

When put that way, it's obvious (I hope), that there needs to be some intermediate management that determines if there are any "partial" transactions if the request processor dies and can immediately rollback partial transactions should a failure occur. As an example, here is what this might look like.
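
Again, the diagram isn't reproduced here, but the gist can be sketched as a supervisor that scans for in-flight transfers the (now dead) requestor left behind and rolls back the partial work; the in-memory dictionary is a stand-in for whatever store records in-flight transfers, and the callbacks are placeholders.

    # Hypothetical supervisor: find transfers that debited but never credited (the requestor
    # died in between) and roll them back, then report "transaction failed" to the client.
    in_flight = {}   # transfer_id -> {"debited": bool, "credited": bool}

    def supervise(rollback_debit, notify_client):
        for transfer_id, state in list(in_flight.items()):
            if state["debited"] and not state["credited"]:
                rollback_debit(transfer_id)                  # undo the half-finished work
                del in_flight[transfer_id]
                notify_client(transfer_id, "transaction failed")

    in_flight["t-42"] = {"debited": True, "credited": False}   # a requestor died mid-transfer
    supervise(lambda tid: None, lambda tid, msg: print(tid, msg))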

We're still dodging some internal transaction management housekeeping, but the important detail is that between the point where the client lost track of the requestor (because it died), and the final "transaction failed" from the supervisor, the client has no idea what the state of the transaction is...it genuinely could be that the transaction succeeded, but the connectivity between the transfer requestor and the client simply failed.

So the problems in this model are twofold: #1 it's "mostly" synchronous (though the Request supervisor -> client messaging clearly isn't), and #2 it assumes that it's "OK" for the transfer requestor to simply fail should an intermittent failure in part of the system cause a partial update to have happened. Obviously this may or may not be acceptable depending on the business rules at play, but it is certainly a common model...i.e. you aren't sure if the transaction worked because, as the client...your network went down, so you get an out of band email from the Request supervisor at your bank confirming that it, in fact, did fail.

While this is a good approach, it does tend to tie up more resources in highly concurrent systems, doesn't deal with failure very well (you only have a few hard coded strategies you can use), and when you scale to dozens or hundreds of components, the chances of a single failure become so large that you are unlikely to EVER succeed.

So what's the alternative?

The detail here (which enables more flexibility) is to assume things will intermittently fail, and design the transaction semantics into the application protocol. This allows you to have "less robust" individual components, but adds the complexity of transaction management to the entire system. An example of how this might work:
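
A minimal sketch of that idea (with made-up names, and a plain dictionary standing in for a durable Transfer Store) might look like this: the transfer is recorded before any work starts, and the Transfer Supervisor drives it from state to state, so a crash anywhere leaves enough information to resume or roll back later.

    # Sketch of the asynchronous model: persist the transfer first, then advance it through
    # explicit states. A real Transfer Store would be a database, not a dict.
    TRANSFER_STORE = {}   # transfer_id -> {"state", "from", "to", "amount"}

    def request_transfer(transfer_id, source, destination, amount):
        # Persisting intent up front is what makes the transfer durable across crashes.
        TRANSFER_STORE[transfer_id] = {
            "state": "requested", "from": source, "to": destination, "amount": amount}

    def supervise(transfer_id, debit, credit):
        record = TRANSFER_STORE[transfer_id]
        if record["state"] == "requested":
            debit(record["from"], record["amount"], transfer_id)
            record["state"] = "debited"
        if record["state"] == "debited":
            credit(record["to"], record["amount"], transfer_id)
            record["state"] = "completed"   # the client can be told out of band

    request_transfer("t-1", "alice", "bob", 25)
    supervise("t-1", debit=lambda acct, amt, tid: None, credit=lambda acct, amt, tid: None)
    print(TRANSFER_STORE["t-1"]["state"])   # -> completed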

The important details here are: #1 transaction details become persistent in the Transfer Store, #2 the Transfer Supervisor takes on the responsibility for the semantics of how the transaction strategy is managed, #3 the transaction gains the capability to be durable across transient failures with "most" components in the system, and #4 each independent component only needs to be available for smaller amounts of time. In general these are all desirable qualities, but...

The devil is in the details

Some of the negative side effects of this approach are that: #1 as the designer of the system, you are now explicitly responsible for the details of the transactional behavior, and #2 if the system is to be robust across component failures, the operations must be idempotent (repeating an operation has the same effect as performing it once). As an example of how this might be more robust, let's look at how we might implement a behavior that is durable across transient failures:
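
A minimal sketch of that pairing, retries in the supervisor plus idempotent handling in the Account Requestor, might look like the following; the retry policy and the names are illustrative only.

    # Sketch of retry + idempotency: the supervisor retries transient failures, and the
    # requestor deduplicates on the transfer id so a redelivered message can't apply twice.
    import time

    class AccountRequestor:
        def __init__(self, balance=100):
            self.balance = balance
            self.applied = set()             # transfer ids already processed

        def apply(self, transfer_id, amount):
            if transfer_id in self.applied:  # duplicate delivery: acknowledge, don't re-apply
                return "already-applied"
            self.applied.add(transfer_id)
            self.balance += amount
            return "applied"

    def send_with_retry(call, retries=5, delay=0.5):
        for _ in range(retries):
            try:
                return call()
            except ConnectionError:          # treat as transient: wait and try again
                time.sleep(delay)
        raise RuntimeError("permanent failure: gave up after retries")

    requestor = AccountRequestor()
    print(send_with_retry(lambda: requestor.apply("t-1", -25)))   # applied
    print(send_with_retry(lambda: requestor.apply("t-1", -25)))   # already-applied (safe retry)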

In this model, the transfer supervisor implements a simple retry strategy when the account requestor is unavailable. While this potentially makes the system more robust and accounts for failure with much more flexibility, it's obvious that the Account Requestor (or the store) needs to be able to discern that it might sometimes receive duplicates and be able to handle that gracefully. Additionally, it becomes more important to know the difference between something that mutates state and something that simply acknowledges that it is in agreement with your perspective on the state of the system.

More importantly, the latter approach now means we must take into account the difference between a "permanent failure" and a "transient failure", and this is often not a trivial task...i.e. is transferring between a nonexistent account and a real account a transient problem or not? If you think that's a trivial answer, think about a situation where there is yet another async process that creates and destroys accounts. Is it acceptable to retry in 1 minute (in case the account is in the process of being created when you initially try the transfer)?

In conclusion, while distributing transactions into smaller pieces adds great power, it also comes with great responsibility. This approach to designing systems is the genesis of the "let it crash" mantra bandied about by Scala and Erlang acolytes. "Let it crash" doesn't necessarily mean "relax your transaction semantics" or "don't handle errors", it means you can delegate responsibility for recovery from failure out of a synchronous process and deal with it in more robust and novel ways.

Monday, April 4, 2016

Problems in the internet of things

Having worked with connected vehicles for a number of years now, I've noticed some things that newcomers always seem to “get wrong”. Having worked through the “plain ol’ Internet” (POI) boom, I see parallels between the mistakes made during that period and similar mistakes in the current ongoing boom. I’ll outline the biggest few:

Failing to recognize mobility

In the POI, people used a client/server paradigm to design applications for the web. Additionally, the protocol generally chosen was one designed for document management, not application connectivity, and it took almost a decade before general purpose alternatives arose that were better suited for the types of interactions desired. Moreover, the general interaction design tended to try and replicate a desktop experience instead of designing for the new platform. The analogous IOT mistake is forgetting that the device “might not be where you think it is” or that it may have even “fallen off the network”. Without good tracking of these events, diagnosing problems with devices is a nightmare (did my Chevy land in the levee, or is the battery just dead?).

Failing to design for a headless device

In the POI, folks failed to account for the fact that the client and the server were connected by a somewhat unreliable network with varying latency. This was remedied (after years of pain) by giving the user feedback and perhaps giving them advice (hit refresh, if that doesn’t work call 1888-hit-it-again)… With headless devices, there is no “refresh button”. Often clever engineers will put logic in for retrying, but in my observation they forget that without giving data to a user or management service (or building updatable AI into the device for managing connectivity), the rules are often too primitive or brute force to be effective. A great one I’ve seen a number of times is a progressive fallback retry strategy that ends up with the device waiting so long (or going offline) that it’s nearly impossible to account for losses.
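
One way to avoid that failure mode is to cap the backoff and report every attempt to something a human (or a management service) can see. A minimal sketch, with made-up numbers and a hypothetical report_to_management hook:

    # Capped, jittered backoff: progressive, but never so long that the device effectively
    # goes dark, and every attempt is reported so the rules can be tuned later.
    import random
    import time

    MAX_BACKOFF_SECONDS = 300    # made-up cap: never wait more than five minutes

    def reconnect(try_connect, report_to_management):
        delay, attempt = 1, 0
        while not try_connect():
            attempt += 1
            report_to_management({"event": "retry", "attempt": attempt, "next_delay": delay})
            time.sleep(delay + random.uniform(0, 1))         # jitter avoids synchronized retries
            delay = min(delay * 2, MAX_BACKOFF_SECONDS)      # progressive, but capped
        report_to_management({"event": "connected", "attempts": attempt})

    reconnect(lambda: True, print)    # trivially "connects" on the first try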

Failing to manage embedded problems

In the original “mainframe” days, resources were fairly well managed as they were scarce and/or expensive. As we transitioned to the POI days, the equation on the costs of these things changed dramatically (memory became cheap, client storage disappeared [for a while], power was ubiquitous and distributed). In the IOT, power can become scarce and must be carefully managed (a new problem, though yes, I know embedded folks ‘get it’), memory is a decision that can be balanced, as can storage. There are, however, a multitude of other “embedded system” problems that are now being introduced to a larger group of engineers. Historically there hasn’t been a large overlap between people who deal with network protocols, backend systems, and embedded devices in uncontrolled environments. There are many systems that have “parts” of those problems, but not very many where ALL of them must now be solved for. For example, perhaps a warehouse management system works with embedded devices talking to backends, but it’s NOT mobile, and it’s generally a controlled environment.

Security

This is the big hairy and scary gorilla in the room. At the inception of the POI, security was a very secondary concern because historically it was handled in the server room and by tightly controlling the desktop. The POI opened this up such that the client was inherently insecure and observable. This led to many mistakes by folks who were used to being able to control both sides of the equation and didn't realize that this mental model is dangerous in a highly distributed world. In the IOT world, the bigger problem is that our ways of thinking about security don’t necessarily account for the fact that when devices are moving about, they encounter many network situations that just don’t happen with a web browser or mobile phone. Depending on what sorts of sensors and capabilities the devices are designed for, the number of ways that things can go wrong is multiplied many times over (versus the relatively simple problems with the POI).

This is just a short list, but hopefully gives pause to folks designing connected devices to “think about the things they might be thinking about incorrectly” when designing IOT solutions.

Thursday, March 17, 2016

Painting while blindfolded

In a discussion yesterday about how to know if our code was thoroughly tested, one of my tech leads mentioned "Unit testing without a code coverage tool is like painting a wall while blindfolded". I think it's a very apt metaphor, and something that developers should take to heart.

In most modern programming languages, there are tools to both automate unit level tests as well as determine how much of your code was actually executed as part of a test run. To run unit tests without validating what has and hasn't been covered is unprofessional. Imagine if you hired someone to paint your house and the first step is to blindfold themselves...what is your confidence that they will do a good job?
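
For the Python folks, a minimal sketch of what this looks like in practice, assuming the third-party coverage package (pip install coverage) is available; most languages have an equivalent pairing of a test runner and a coverage tool.

    # Run the tests under coverage measurement, then report which lines never executed.
    import unittest
    import coverage

    def classify(n):
        if n < 0:
            return "negative"        # this branch is never exercised by the test below
        return "non-negative"

    class ClassifyTests(unittest.TestCase):
        def test_positive(self):
            self.assertEqual(classify(1), "non-negative")

    if __name__ == "__main__":
        cov = coverage.Coverage()
        cov.start()
        unittest.main(exit=False)        # run the tests...
        cov.stop()
        cov.report(show_missing=True)    # ...then take the blindfold off

The specific tool doesn't matter; the point is that the report tells you, line by line, what your tests never touched.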

What amazes me is that I've heard a number of developers repeat this phrase "My professor told me that 100% code coverage is practically impossible"...which seems to lead them to the inevitable conclusion that "so therefore I won't write ANY tests". Fine, 100% is tremendously difficult (especially if your code is poorly thought out or a "bad design (TM)"), but 100% is NOT practically impossible...anyone who says this is either: #1 a hack, or #2 unprofessional. On that note, I will point out that typically CS/CE professors are quite often not professional developers or engineers, they're more often academics and/or scientists.

P.S. Professors, please stop saying this to your CS/CE students...

Wednesday, March 16, 2016

The craft of software development

There has been a recent spike (well, relatively ancient in software terms...) in use of the term Software Craftsmanship. This is the notion that developing software bears more resemblance to a craft or art than to a scientific pursuit. I find this very accurate, and it's a good explanation for why guilds and other artisanal concepts creep into software development circles.

I've noticed an interesting contrast between folks who "grew up" developing software to solve problems and people who learned computer science or computer engineering at a university. While the latter generally have a sounder grounding in the theoretical underpinnings of "how computers work", the former have a better grasp of "how software delivers value". Put another way, computer scientists and computer engineers seem to focus on the nuts and bolts of how computers work instead of how to get them to do new and valuable things.

This is not a BAD thing, but losing sight of (or not focusing on) "how what I'm building adds value" and instead focusing on "how the computer does stuff" tends to lead to what I'll term "science experiments". Using a different metaphor, a craftsman finds tools that are "good enough" for the job and spends a little effort over time constantly refining them, whereas a scientist will spend a LOT of effort over time finding or building the "perfect tool" for the job and little effort actually doing "the job".

Put another way, if I want to build something out of wood, I could spend days/weeks/months studying the various characteristics of different kinds of wood, or I could build a cabinet out of whatever lumber I can get my hands on and refine my techniques as I progress through more projects with different kinds of wood. I find the latter to be a more useful approach, unless my value proposition is that I want to be the foremost expert on "all things wooden". To illustrate the difference I ask the question: "Do you want your cabinets built by someone who has built many progressively better cabinets with a variety of materials, or by someone who has never actually built anything, but can explain the shear strength of cedar versus maple?".

While I realize this is an oversimplification (well maybe not as much as we'd like to admit), where I find a particular weakness in the industry right now is in developing the "craft". Too often, industrialized software development ignores a particular aspect of craft, that is, the continuous improvement of your tools and process and instead focuses on a particular "perfect technology" without refining the art. I've seen too many developers focus on finding the "perfect tool" instead of learning how to work with the ones they already have...the process of learning new tools and techniques better serves "building useful things" when it is constant and evolutionary instead of intermittent and revolutionary.

Wednesday, January 27, 2016

Process versus flexibility

I heard an interesting statement (just minutes ago) that really struck a chord with me: "we need to have a balance between process and flexibility". It both resonated with me and sent a shiver up (and down) my spine. A common thread I see is that folks who like "process" tend to be "inflexible" and people who espouse "flexibility" do so at the expense of "good process". I disagree violently and think this is a false dichotomy. At the end of the day, "a crappy process" or "an ad hoc process" is still a "process", and having either of those processes is no more or less flexible than having a vigorous and dynamic process that supports your business objectives.

The problem as I see it is that "process" has gotten a bad rap because many egghead know-nothing consultants have sold horribly convoluted and untenable processes to folks, and when said customers see their effectiveness crash, they blame the "process". Having lived through this, I've seen many ad-hoc and not well thought out processes be VERY effective, and watched (multiple times) well meaning, but misguided, academic "one size fits all" processes wreak havoc on a system that, instead of being improved, was utterly destroyed by a new...but ineffective...process.

My stance is this: If your business model needs flexibility, your process should support flexibility in places where it makes sense...but it should discourage flexibility in places where it's working against your goals. In the "process" camp, many people fall victim to the idea that "I've designed the 'ultimate' process...if your business needs don't fit into it, you're a stupid head". On the flip side, in the "process is evil" camp, there's the notion that "process is for morons, I have smart people, I can just change things at a moment's notice and they'll just do whatever they think is right and magically everything will work out". Obviously both of these are wrong (put this way), but at the end of the day, not using process oriented thinking will lead to conflict.

Listen, an "ad-hoc, do what you want" process is still a process. I would contend that an entire company set up where everybody just does what they feel like doing at that moment regardless of business objectives...will fail quickly...unless coincidentally everyone's goals happen to be aligned and they have perfect knowledge of what each and every other person is doing. Yet again on the flip-side...process for process sake is just dumb and generally only rewards management consultants and people who really want to move papers from point A to point B.

In conclusion, the important thing to remember is that your process should support things you want to happen, and discourage things you don't want to happen. If the risk/reward within the process is equal or upside down, you will spend time fighting the process instead of letting it support your objectives. Put another way, the goal is to make the process YOUR tool, instead of being just a tool abiding by the process.

Tuesday, January 26, 2016

Navigating The Internet of Things

Having worked a number of years with connected devices, I thought I'd briefly share some observations and pitfalls that folks just arriving to the field should heed.

The network isn't always there

Many folks arriving on the scene of connected devices come from a background where their applications ran in the datacenter and connectivity was a user's problem. That is, if someone tries to use your site, can't, then tries google and it doesn't work...they assume the problem is on THEIR end. While this isn't universal, it's much more likely than someone who tries to turn off the lights in their house or start their car and it doesn't work. While wireless providers do a very good job of connecting when devices are reachable, there are so many variables that impact coverage for a wireless device that it is almost impossible in most cases for a user to determine what is causing their "thing" to not work. This is especially aggravated when the device is mobile...that is, a Nest connected via WIFI is generally "working" once it initially establishes a wireless connection, but a connected vehicle moving at highway speeds will very often lose connectivity for periods of time (never mind parking in "dead" zones like underground parking garages or dead zones in rural areas). While a consumer device normally has an indicator of signal strength, many vehicle manufacturers still don't provide this functionality.

Manage battery life

Designing a device to be "always on" with a high speed data connection is a recipe for designing a device that isn't really useful. Having just purchased a Samsung Gear S, the need to charge every day is a bit of a drag. Yes, I could disable all connectivity and stretch the battery life to days, but then...it's just a watch. Don't get me wrong, I love the connectedness of my Gear, but it's been sitting dead on my desk for a week now because I keep forgetting to charge it. Power management is a "big deal": don't drain your user's battery unnecessarily, and give them options to help manage the tradeoffs between "battery life" and "connectivity". For folks used to writing desktop and server applications, the idea of managing power is a completely new thing that embedded folks might understand...but surrendering to the embedded notion of conserving every last milliamp hour at the expense of usability is not going to cut it.

Open wins

Too many folks in the IOT space are hoping to win market share by using proprietary APIs and locking them down to keep competition at bay; this is a short sighted position. Apple realized this early on and Google forced the issue by creating an open platform. While Google itself may not directly be reaping the benefits of proprietary licensing schemes, they have succeeded in fragmenting the market and creating new opportunities for revenue that sticking with a proprietary network and stack would never allow (remember RIM?).