Saturday, July 9, 2022

feature switch it

note, this post was sitting in drafts for a few years and suddenly has become top of mind again

in my post about git branching problems, I neglected to expand on the "real" problem.

Namely: delaying integration is bad(tm)

The History

Waaaay back in the olden days, when folks used tools like RCS, CVS, and SVN...branching was...well, downright difficult. Or rather, reintegrating branches and/or cherry picking single commits without making things super complicated was very difficult. In those days, many folks adopted "trunk based development"...which meant everybody worked on the trunk, and if you had conflicting changes you had to deal with them...RIGHT NOW. Moreover, it made things like long running divergent branches a wicked problem when trying to bring them back into the trunk, so most people just "didn't do it".

Then Linus Torvalds Changed Things

Well, in fairness, distributed VCS tools had been around a while, but he sorta made "sharing divergent ideas and development paths" easier, or at least more cost effective. This was great for the Linux kernel team, and many folks relatively quickly adopted his new version control tool named git. git is a great tool, don't get me wrong, but I feel many of the problems it solved are not necessarily problems that a well run/organized software delivery organization has.

Gitflow Killed Git For Me

Some time back in the early 2010's, Vincent Driessen came up with gitflow which, in a very specific context...made a lot of sense. However, I feel the value of the concept has been victimized by how effectively it gained mindshare without that context. At the end of the day, for web applications and continuous delivery models...gitflow is "very often harmful" (Vincent actually added an addendum to that effect a few years ago). For packaged software running for different clients with different featuresets/chipsets/architectures...I could be convinced of its value (as I can imagine it, but I don't work in that kinda space so I don't have real experience to draw from).

But Back to My Point

At the end of the day, maintaining multiple codebases is HARD...sorry, I mean REALLY HARD, and not a problem you want to take on if you don't have to. For folks trying to do continuous delivery of software as a service or other API addressable solutions (think MACH architectures as an example), why would you even think about doing this? "Maybe" I could imagine a team that has client specific SaaS solutions that they want to maintain "kinda" parity with, but other than that, most of the clients I work with are actually hampered by trying to solve a problem they don't have by overcomplicating their branching strategy.

"There is only the next release"

I say this because that's generally what a branch should represent...which is to say "only the mainline matters". If you're branching every feature, every project, every release...you're creating an absolute maintenance and discoverability nightmare. If there is incomplete work that shouldn't be released, you have a "software design problem", not a "branching strategy problem", and you should seriously start to consider "how can I design this new feature so that the code can go into the next release...even if it's not 100% complete...without breaking existing functionality?"

Why I bring up feature switching

One approach to solving the previously mentioned problem is to include feature switches. These enable you to switch new or existing functionality on or off (ideally at runtime) based on an external configuration. This means if the "add to cart and get a reminder email if the cart isn't checked out in 20 minutes" feature isn't ready for production at the next release, you design up front for a switch that says "enableReminderAfter20Minutes", is "off" by default, and can be switched "on" in any environment at any time. Not only does this help fix the branching problem, but it also fixes the "proliferation of environments" problem that happens when you start arguing that "the branch with the reminder needs to be in QA tomorrow" but "the branch without it needs to be in QA for 1 hour" to verify a potential problem in production. Instead of spinning up two environments, you just switch the feature off for 1 hour, verify what you need to verify, then switch it back on. Of course, you can always have another environment set up for long running breakfix activities or other things of that nature...but the environments all run the same code, just with different configurations. Put another way, eject these problems from the source code control context and into the environment/configuration management context. For further reading, I would suggest "Gitflow considered harmful" as well as "Feature Toggles".
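To make the idea concrete, here's a minimal sketch of a runtime feature switch backed by environment configuration. All names here are illustrative (the post's "enableReminderAfter20Minutes" switch is modeled as an environment variable); real deployments typically use a config service or feature-flag product instead.

```python
import os

def is_enabled(flag: str, default: bool = False) -> bool:
    """Read a feature switch from environment configuration at runtime."""
    raw = os.environ.get(flag)
    if raw is None:
        return default  # switches default to "off" until explicitly enabled
    return raw.strip().lower() in ("1", "true", "on", "yes")

def checkout_reminder_flow(cart_id: str) -> str:
    # The incomplete "reminder email" feature ships in the mainline,
    # but stays dark until the switch is flipped in a given environment.
    if is_enabled("ENABLE_REMINDER_AFTER_20_MINUTES"):
        return f"scheduled reminder email for cart {cart_id}"
    return f"no reminder scheduled for cart {cart_id}"
```

Flipping the switch is an environment/configuration change, not a code change: set `ENABLE_REMINDER_AFTER_20_MINUTES=on` in QA to test the feature, leave it unset in production until it's ready.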

Friday, March 18, 2022

Experiential Commerce

I was building myself a diagram and stumbled across something in my library that I had totally forgotten about. At the time I had called it the "commerce triad" and realized there was a term in there that I think has a lot of relevance now. The term was "experiential commerce" and at its root means roughly "meeting the customer where they are and enabling them to interact and transact with your brand while they're doing something else". I thought I'd share a rough sketch of what it means before I yet again forget about it.


Wednesday, February 23, 2022

On Software Technical Debt

What is technical debt?

Some examples of technical debt are:

  • Code that will fail in certain scenarios in a way that is undesirable.
  • Code that doesn't perform as well as desired, or (like above) performs well for some percentage of use cases while other degenerate cases don't work as well as desired.
  • Software written in a way that requires undesirable manual workarounds or processes.
  • Software that is difficult to read, is undocumented, or otherwise makes maintenance more difficult than necessary.

Why does it happen?

In my experience, good developers incur technical debt in order to achieve some other business objective that has higher value than the debt incurred. For example...if I spend 5 minutes to write some hacky/difficult to read code that fails 1 out of 100 times, but that enables my business to reap thousands of dollars of revenue, I might be inclined to "just do it". Of course, in my oh so humble opinion, I have the tools (experience, value mindset...whatever) to make these decisions and/or have an objective discussion around "should I spend 3 weeks writing the perfect solution...or spend 5 minutes writing a hacky workaround that I will have to rewrite 10 times to get right?"

In addition to that reason, there is the "it's new, I'm inexperienced with the problem space, I don't know the programming language...fill in the blank" reason that can cause it too. As an example, I know little to nothing about the Go programming language...but there is a specific thing I need to do (with some minor customization) that is relatively easy in Go. So I can implement this in 10 minutes, not realizing that I will overspend in the future with days or weeks of "something went wrong that I don't 100% understand".

How can you avoid it?

The way many, many folks avoid this problem is to slow down and only use tools they are very familiar with. As an example, many shops will "standardize" on a language/framework/SaaS tool in an effort to economise on the number of ways things can go wrong, but then they lose the opportunity to use a language/framework/tool that solves a particular problem in a much more economical way. This is done under the mantle of "standards" and "best practices", but it is usually only a crutch for newcomers to get familiar with the patterns already in use.

When is it good?

Going back to "Why does it happen?", it's good when you can reason about the leverage you gain from incurring the debt. Thinking in financial terms, many folks are totally OK taking on a mortgage for a place to live now, versus living in a cardboard box for 20 years so they can purchase their very own home at that point. By the same token, taking on some debt now to solve an immediate problem is almost always a good idea. The rub is, however, that you need to know the "interest rate" on that debt. Using the home metaphor, getting a mortgage now on a house at a 50% APR might not be a good idea (unless you're flipping and can make that back), and the same applies for code. If you're spending all your time servicing your debt, it's time to pay it down...if you're gaining leverage from your debt, you can pay it down if you want...but your money might be better spent elsewhere.
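The "interest rate" framing can be made into back-of-envelope arithmetic. This is a sketch with entirely made-up numbers: the hack costs extra rework hours every release (the "interest"), while the clean rewrite is a one-time cost (the "principal"), and you can compute the break-even point where the hack has cost as much as doing it right.

```python
# Hypothetical costs, in developer hours.
hack_hours = 1            # time to ship the quick workaround
rewrite_hours = 40        # time to do it "right" up front
interest_per_release = 3  # extra rework hours the hack costs each release

# Count releases until the cumulative cost of the hack reaches
# the one-time cost of the rewrite.
releases = 0
total_hack_cost = hack_hours
while total_hack_cost + interest_per_release <= rewrite_hours:
    releases += 1
    total_hack_cost += interest_per_release
```

With these numbers the hack pays for itself for 13 releases; past that point you're paying more in interest than the rewrite would have cost, which is the signal to pay the debt down.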

Thursday, February 10, 2022

B2B, B2C, B2B2C oh my!

Online Commerce

In the olden days, digital commerce had a "dividing line"...businesses that sell to other businesses (B2B) and businesses that sell directly to consumers (B2C). In the last decade or so, another model has emerged: businesses that a) sell to both and b) partner with other companies for portions of the solution.

B2B - generally, B2B solutions are geared around selling large quantities (or very specialized/custom products) wholesale to customers who either a) use the products in the course of doing their business (think a robot supplier for an automaker) or b) resell the products to consumers in a retail setting (think a manufacturer who makes soap and sells it to retailers).

Often what happens is that folks don't appreciate the nuance of what B2B actually means. For example, if a manufacturer sells a multi-million dollar tool to another manufacturer/service provider (think robots, earth moving equipment, airplane avionics testing gear), these are generally relatively rare sales that tend to have a long lead time.

However, there is another large group of B2B companies that sell high volume commodity items to consumers (think soap, toothpaste, dogfood...generally consumer packaged goods). These brands have an interesting opportunity to sell both to retail outlets and directly to customers. Conversely, retailers (think big box retail, gas stations, drug stores) also have an opportunity to sell products they don't necessarily stock in their stores and have "someone else" fulfill the product (think drop shipping/outsourced fulfillment models).

B2C companies/brands, on the other hand, sell to and target end consumers of their product. Sometimes they actually manufacture the product, but generally they focus on merchandising, marketing, and fulfillment, not manufacturing. These companies are the "customers" for pure B2B companies and historically have owned the consumer relationship. This is largely (IMHO) an artifact of the historical division of labor: wholesalers sold bulk products they manufacture (build a great brand/product) while retailers created an environment to foster sales to consumers (build a great storefront and place for consumers to discover products).

More commonly now, there is a new model that is actually "both"'s called B2B2C (business to business to consumer). What's really interesting about this is its expansion of scope beyond a single transaction and the endorsement that "it takes a village" to support a customer. What's more interesting is that when you talk to folks about what B2B2C means and what strategies apply, there is almost universal confusion.

This confusion arises because depending on your core business, what B2B2C looks like can be very different. Some examples:

  • I am a manufacturer of toilet paper, I run advertisements, have billboards, and sell pallets or trailers of products. Typically one order is for thousands, if not millions of units of product. I actually have never interacted directly with a customer...except for the time we accidentally produced TP with poison ivy in it...OMG that was difficult. I now want to sell directly to my end customers and provide them with reliable delivery options and customizable products. I've partnered with a large retail chain to actually ship product from the store nearest my customer to supply reliable delivery, but also partner with a company that takes a graphic that a customer supplies and prints it on toilet paper. I own the customer relationship and collect all the money and handle customer service, but I rely on two different partners for two different ways a customer wants to interact with me.
  • I'm a large retailer, I want an endless shelf of product, but my brick and mortar stores/distribution centers have limited capacity. Because of this, I partner with many manufacturers to get their products on my digital shelf. I rely on them to fulfill products that aren't stocked in my store, but I can also sell their products in my retail digital storefront and fulfill from my (often cheaper) wholesale product inventory sitting on my shelves.

As you can see, these two archetypal businesses have very different focuses, but both arguably sit within the B2B2C ecosystem. The important development is "who owns the customer relationship". Historically, retailers owned this relationship and would rely on wholesalers to supply product, but more and more the lines get blurred because there is a very low barrier to entry.

My core point is that B2B2C is not a monolithic "way of doing business" but more a philosophical understanding that there are many players in a particular retail transaction, and an acknowledgement that it is often wise to let each player play to the strengths they have in a particular part of the customer shopping/purchase experience, rather than try to do "everything".

Thursday, December 9, 2021

A Brief(ish) Explainer on Headless Architecture

What does this even mean?

It can mean a couple of things. From a technical perspective, the term originated from systems that had no display attached. So, for example, when setting up a data center, it might be necessary to have thousands of servers, and having a monitor on every server led to a lot of redundant displays. Specifically, your database/web server really never needed a display because its sole purpose was to service calls from the network. This also potentially applies in the modern virtualized server world to provisioning new VMs with a specific operating system image. The term "headless" in this situation indicates the server (virtual or not) has no display or keyboard attached and the only way to connect to it is via a network interface.

An alternate definition, and where much noise is currently being made, is around a platform (like a content management system) that "traditionally" would serve up web content, but instead only serves up data that some other system renders. So, to clarify: if you use a traditional CMS to create a web page, the system that you edit the web page in actually sends the web page to your customers. In a headless world, the system you edit the web page content on is different from the one that actually serves the page.

So what?

The first definition is so entrenched in data center operations and architecture that it's rarely even used any more...everyone does it this way. The second definition has become more relevant because of the multitude of ways someone might want to reuse content across many touchpoints.

For example

Say you're a brand that wants to display your product and some images to your customers. You build a web page and publish it to the interwebz. A couple of years go by and you're updating your images/descriptions and you want to start running ads on your favorite social network. So you upload the images to the ad platform, copy/paste the descriptions, and publish there. Then, let's say you have a mobile app you want to put in the hands of your hire a team to build the mobile app, copy/paste the images and descriptions, and publish the app(s) [iOS and Android, right?].

Now, let's say you want to update the images or product descriptions. In this relatively simple scenario, you now need to update in 4 different places. In a headless architecture, you would update the description and image in your "headless" CMS, and all your apps simply fetch the data from the CMS (or the CMS pushes the data to the channels...there are multiple ways to facilitate this).
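The single-source-of-truth idea can be sketched in a few lines. This is a toy in-memory stand-in for a headless CMS (all names are hypothetical, and a real system would fetch over HTTP or receive pushes): one content record, multiple channels rendering it, and one update that every channel picks up.

```python
# A toy "headless CMS": one shared content store, many rendering channels.
cms = {
    "product-123": {
        "title": "Fancy Widget",
        "description": "A widget, but fancy.",
        "image_url": "https://example.com/widget.png",
    }
}

def render_web(content_id: str) -> str:
    """The web channel renders the shared content as HTML."""
    c = cms[content_id]
    return f"<h1>{c['title']}</h1><p>{c['description']}</p>"

def render_mobile(content_id: str) -> dict:
    """The mobile channel gets raw data and renders it natively."""
    c = cms[content_id]
    return {"heading": c["title"], "body": c["description"], "image": c["image_url"]}

# Update the description once, in the CMS...
cms["product-123"]["description"] = "A widget, but even fancier."
# ...and every channel sees the new copy on its next fetch. No copy/paste.
```

The design choice being illustrated: the channels own their presentation, but none of them own the content itself.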

On the surface, it might seem this architecture is obvious, but in reality many folks are still in the copy/paste world. Additionally, shifting to a headless approach generally mandates a shift in ownership of the content. In the olden days, if the "web team" owned the content and digital assets, someone from the "mobile team" or the "marketing team" would need to go find the content and assets. To make headless work, the content and assets need to be viewed as a shared resource that is managed independent of the various channels.

Wednesday, December 8, 2021

Platform Mobility is the "next big thing"

Designing for change

I often get into discussions with architects that turn a little bit into "platform/language" shootouts. Moreover, this can leak into business meetings where folks start to sound like elementary school students bragging about how "my dad can beat up your dad". The reality, however, is that the lifetime of a platform's relevance is roughly around 5 years...business (especially digital business) evolves and changes so rapidly that the cost to switch becomes an overarching theme when thinking about the "big picture".

How I think about software platforms...I'll call it the "mike system":

  1. How hard is it to get onboard?
  2. How hard will it be to get offboard in 3-5 years?
  3. Everything else (functionality, scalability, performance...)

Why is onboarding ease important? Well, because if it takes 3 years to set up, you'll be on to your next platform before you can realize value from this one.

Why is offboarding ease important? Same thing...if it takes 3 years to migrate off your current platform you'll not be able to reduce the negative impact of your legacy platform.

What about everything else? Well, truth be told, for any solution in the "general purpose" category of ecommerce, content management, or generic integrations, there is already a large and ever growing number of tools/platforms that can get the job done quite well.

So, unless you're building an avionics system (in which case you should probably build it yourself) or some sort of life critical system (same thing), go find a commodity product that meets your cost/benefit goals and stop thinking it's "strategic".

Friday, March 12, 2021

social network censorship

OK, I want to break something down at this point. Social networks, web applications, newspapers, and other media outlets are not "the government".

Why do I make that statement? Because I keep seeing people crying "wolf" about how facebook is violating their free speech rights. This is 100% untrue (right now) and opens up a thorny debate that has been around since people were dialing up bulletin boards on 1200 baud modems in the 1980s. Here's the root of the problem/question:

if somebody posts illegal content, who can be sued/go to jail for it?

In the olden days (before Section 230) it was (for the most part) "anybody and everybody involved". This means, if you uploaded...IDK, kiddie porn, copyrighted materials (music, books), or legit conspiracies to overthrow the government (all examples everyone seems to use)...the creator of the content, the place they uploaded it to, the phone company, anyone who downloaded it, and you name it could be sued/jailed.

Thus, back in the olden days they created a law that gave intermediaries who are simply "platform providers" some legal protection from liability in the case illegal content was put on their platform. This solves, however, only HALF the problem...sure, myspace can't necessarily be sued for hosting illegal music uploads, but now the music industry isn't protected from folks pirating their product. So the "other half" protects folks who are effectively being ripped off (think pirated music/movies) by giving content providers the right to moderate (by taking down illegal content) materials uploaded, without the risk of the person uploading suing them for "getting rid of their content".

The conundrum around the current situation, however, is..."who gets to decide how content should be moderated?". Right now that's in the hands of the platform provider (facebook, google, whomever), and the problem is, if they decide a bunch of Antifa or Proud Boys posts are in violation of their own terms, they have the right to remove the content, ban the user, or...really do anything they want (including nothing).

So the problem becomes this: at this point facebook could take every "pro biden" post down (well, there are logistical problems, but that's a different issue) and, other than the poster fuming about it (unless they were banned), nobody would know. The upside is that there are market forces at work because facebook makes money from advertising...they shill "pro MAGA" materials to the proud boys and "BLM" material to BLM supporters, so they need to keep some of that material to pay the bills. (How can you shill MAGA hats and #BLM t-shirts if you block all their posts?)

At the end of the day, there is, I think, an emerging awareness that "the system" as we know it around these digital content and social platforms has some pretty serious flaws, and I suspect in the next few years they will start to be regulated a little more closely. I don't think Section 230 will necessarily be rolled back, but there will definitely need to be some adjustments in order to both maintain a free and open internet and hold companies that profit from divisive and objectionable content posted by third parties accountable for fostering a potentially toxic environment.
