MACH TEN Addendum: for companies wanting to use MACH platforms (2023-07-05)
<h1>The MACH Alliance</h1>
<p>
is a <a href="https://machalliance.org">consortium of ISVs and SIs</a> that advocates for a particular approach when building/integrating modern platforms. One gap/misunderstanding I keep bumping into is the intent behind MACH. In general, it's a technosophical approach to how PLATFORMS are built, run, deployed, and marketed; it is not really well suited for large organizations who are simply trying to USE these platforms.
</p>
<h1>Therefore I propose a "MACH TEN" Addendum</h1>
<p>TEN is an acronym that adds some context for adopting MACH platforms in your organization. These are high-level recommendations I encourage folks to take into consideration before jumping on the bandwagon. :p</p>
<h1>Transparent</h1>
<ul>
<li>Pricing and contracts are computable. If special terms are negotiated, that's OK, but I shouldn't have to tell you my budget before you tell me how much you cost</li>
<li>How I integrate with and use your platform is easy to find. I don't need special training; I can just RTFM and begin using your system</li>
</ul>
<h1>Enterprise-Ready</h1>
<ul>
<li>I can negotiate contracts/deal terms</li>
<li>I can monitor compliance with SLAs, performance, and problems with my APM/management tools...logs are accessible</li>
<li>Security is fully audited and I have the ability to clearly assess risks.</li>
</ul>
<h1>Neutral</h1>
<ul>
<li>No "hidden apis" that only "special partners have access to"</li>
<li>No "partner lock-ins" that mandate I use a specific SI to do implementations</li>
<li>My data is "MY DATA", I can easily take it with me...the platform is competing only on value delivered, not some "viral lock in" or other fishhooks/entanglements</li>
</ul>
<h1>Conclusion</h1>
<p>
This isn't a comprehensive list, and I might amend it or come up with another acronym, but it is definitely a starting point for folks exploring MACH platforms as a part of their solution.</p>
Scaling teams: Two strong oxen or 1024 chickens? (2023-05-10)
<p>
Seymour Cray was famously quoted as asking: "If you were plowing a field, which would you rather use: two strong oxen or 1024 chickens?". While he was referring to the advantage of using a single processor solution over a massively parallel solution (which I happen to think he was wrong about), I'll steal this question to use in another context. Namely, when building teams around technical solutions, you will almost always get a better solution if you use a limited number of smart, motivated, and experienced technical resources over a large number of folks who are inferior in one or more of those dimensions. In that regard, I think solving technical problems is a lot like plowing a field... it's actually difficult work that cannot be scaled by adding more low power resources. The problem isn't a scaling issue, it's a complexity issue.
</p>
<p>
Controlling two strong oxen and focusing their effort is a lot easier than trying to herd 1024 chickens. If you've ever been around chickens, you can certainly appreciate how difficult it would be to get them to do anything in even a remotely organized fashion.
</p>
<p>
Secondly, if what you're trying to do involves lifting or pulling something heavy (like plowing a field), there is no amount of scale you can use with chickens to get the job done. They individually do not possess the power and (see above) getting them to work in concert is impossible...more frankly put, you're using the wrong tool for the job. That having been said, when employing thought workers (programmers, project managers, etc), 1024 people who are not able to reason about the problem domain (or have no experience with it...it's actually either/or IMHO) are as effective as 1024 chickens plowing a field. Find your oxen and put them to good use...yeah, they cost more, and yeah, the implications of losing one ox are worse than losing one chicken, but getting things actually done instead of hedging in an irrational manner is a better strategy.
</p>
How not to do identity management (read this, Nintendo) [from 2019] (2023-05-09)
<h1>Identity management is hard</h1>
<p>
Just so everyone understands, I get this...having worked with connected devices and multiple phones/consoles/cars/headsets/speakers all cross linked to other devices and other accounts...I get that it's HARD to maintain and/or associate the correct PERSON with the correct DEVICE with the correct ACCOUNT (note, those are all different things...I can have multiple bank accounts, accessible from multiple devices, and multiple people could share my account and/or device, but working out who can do what from where is difficult).
</p>
<h1>Add kids and games and shit to the mix</h1>
<p>
I'm not necessarily saying Nintendo has done a poor job (no...scratch that...I'm saying they have)... what I'm really saying is that they have a particular demographic that makes identity and device management difficult...namely KIDS. Now, before all the single parents with one kid who had a Nintendo DS once chime in to claim this shit is super simple...and before the folks who can only afford for their kids to play in the street with rocks and dirt...I get it...this is a first world problem...but I think it's a leading indicator of a problem more and more people are gonna have.
</p>
<h1>Background</h1>
<p>
I started buying Nintendo stuff for the kids...IDK...15-20 years ago? I have no idea...I'd have to ask my oldest daughter. But this evening I tried to just help one of my kids (I have a few, so I realize I'm an outlier, but perhaps this is something to consider if you're trying to manage user/device/vehicle accounts as a fleet manager...just sayin). All we needed to do was link the named user account associated with HIM on a Wii U to his brother's Nintendo Switch (actually I don't even know whose device was whose...but you get the idea). So Nintendo has this great concept of accounts on the device that may or may not correlate to similarly named accounts on other devices...In addition, each device may have those accounts linked to other "Nintendo IDs" which apparently have no direct correlation to an email address or other way of knowing who you are.
</p>
<h1>So here's what happens</h1>
<p>
So then, every Christmas or two, we get a new Nintendo device (or two) and try to link everything up...and spend hours upon hours trying to piece together our accounts like some sort of perverse Ikea furniture that also needs some integration between multiple competing javascript libraries. And ultimately, we create either a new burner email account or ... hell, I don't even know ... half the time we get things "kinda working" and never figure out how to link everything together.
</p>
<h1>In the world of selling stuff this isn't bad</h1>
<p>
In reality, if Nintendo is intent on selling hunks of silicon and plastic, this is a non-issue...hell, folks are gonna throw that old console away in a year anyway, so who cares? But in the future, I believe consumers are going to expect a continuity of service...ESPECIALLY within the same brand. Why? Because every one of my kids is completely confounded by this! Like, they seem to look at me like I'm trying to deliberately sabotage their friends lists and accomplishments for some weird reason.
</p>
Composable Software Architecture (2023-05-08)
<p>
Toying with the concept of "Composable Architecture", I'm struck by how many folks get lost in the technology, staring at "how do other people do it?" instead of pivoting to think about "how might this help my business?". Time and time again I roll onto a client that is talking about microservices, API first design, headless solutions, and cloud native platforms, but has no idea why any of this might help their business. Routinely I hear "well this is how [netflix, amazon, fill in some other business name] does it, so we're going to replicate their success by doing what they do."
</p>
<h1>Why this is flawed</h1>
<p>
The logic behind this thinking is deeply flawed because it presupposes that the technology architecture by itself makes the business successful. This is, in fact, backwards. Netflix didn't start with a microservices architecture, they started with (and appropriately so) a monolithic architecture. Why? Because their business model and scale (at the time) were best supported by this model (and, in fact, things might swing the other way again at some point). They, however, understand this and are not hampered by looking around and wondering "what architecture can I use that will make my business model successful?"; they instead think of the problem by asking "what architecture can support my business model best given the things that are strategic to my business objectives?".
</p>
<h1>How to fix it</h1>
<p>
The very first step in fixing the problem is acknowledging you have a problem. If the entire organization (especially leadership) is sold on the idea that "getting a good architecture will make our business better" and the approach is to look at how other businesses with different business models model their technical architecture...you will always have a weird mismatch of expectations versus reality. If instead everyone takes a deep look at what value the business needs to extract from technology, and the technology teams are aligned and tasked with designing and building systems that deliver this value, you have taken your first step into a much more effective way of delivering solutions.
</p>
ChatGPT and generative AI will not eliminate programmers (2023-05-02)
<h1>Think of ChatGPT as REALLY advanced Autocorrect</h1>
<p>Autocorrect has been around since about <a href="https://www.theatlantic.com/technology/archive/2013/11/how-to-become-autocorrect-famous/281168/">1993</a>. If you're wondering if ChatGPT is gunning for your job, think about all the jobs autocorrect has eliminated... go ahead, ask ChatGPT, Google it, I'll wait. I don't know either, but the short answer is, "Yes, autocorrect may have reduced the number of people needed to proofread a manuscript, or reduced the amount of time you need to double or triple check text before hitting 'send', but realistically it has simply made writing more efficient".
</p>
<h1>ChatGPT is no different</h1>
<p>
I heard the hype about how ChatGPT will eliminate programming jobs in <s>5 years</s><s>10 years</s>someday and thought, "well yeah, the same way that the Model-T eventually eliminated farrier jobs when it was released in 1908". In a way, there's some truth here...in the same way that Java or Ruby eliminated a lot of the boilerplate and drudgery that Fortran or COBOL (or assembly) programming had humans wasting time on, it is obvious (to me) that ChatGPT will dramatically change how "programming" is done, but it won't change the need for the creative input.
</p>
<p>
I read (and watched) this video <a href="https://levelup.gitconnected.com/chatgpt-will-replace-programmers-within-10-years-91e5b3bd3676">ChatGPT Will Replace Programmers Within 10 Years</a> and there are some interesting predictions. The author, however, glosses over some details that are obvious (to me) with extreme prejudice (presumably in the interest of clickbait, which I don't fault him for, after all I'm doing the same thing). In his example he writes an agent that creates an agent that can generate Python code to solve programming problems. More specifically, he writes an AI model that can write an AI model that will generate code in Python. I was left scratching my head wondering "but to write the model, you had to have the creative inspiration to write the model that writes the model...which is programming." I took a stab at building a similar system to generate 3d models of various organization structures, and I must say I was pretty impressed...however, what I discovered was lacking (the new frontier) is knowing how to hack the model to interpret what I really wanted. This is actually REALLY, MIND WRENCHINGLY difficult. The end result is pretty impressive (it could generate convincing hierarchies with fairly limited inputs), but I had to significantly change how I approached solving the problem.
</p>
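<p>To make this concrete, here is a minimal sketch of "programming by prompting": asking a model to write a function for you. It assumes the current OpenAI Python SDK and an API key in the environment; the model name and the prompt are placeholders I made up for illustration, not a recommendation. Note that deciding what to ask for, and judging whether the answer is any good, is still very much a programming act.</p>
<pre><code>
# Hedged sketch: generate code with an LLM (assumes `pip install openai` and
# an OPENAI_API_KEY environment variable; model name is an illustrative placeholder).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_function(task_description: str) -> str:
    """Ask the model for a single Python function that performs the described task."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a code generator. Reply with one Python function only."},
            {"role": "user", "content": task_description},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # The "creative input": you still have to know what to ask for and how to verify it.
    print(generate_function("Parse an ISO-8601 date string and return the weekday name."))
</code></pre>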
<h1>Sure, programming as we know it is definitely going to change</h1>
<p>
I sure hope so; the insane amount of human time and effort wasted solving already-solved problems is crazy, but my prediction is that AI is going to create MORE programming jobs (though maybe we'll call it something different). Sure, the market for folks to create horseshoes and scoop horse manure from the street is MUCH smaller than it was in 1850, but now the people who would have done those jobs are doing something else...programming is no different. Sure, the number of folks who can build a Spring starter app (by copy/pasting code from stackoverflow) will go down, but the number of people who can build a new GPT model to generate a user interface that appeals to golfers is going to grow.
</p>
<h1>Who should worry?</h1>
<p>
I DO think there are certain areas that will be eliminated outright.
<ul>
<li>Programming jobs that copy/paste boilerplate from stackoverflow. There are millions of "programming" jobs that are essentially completely interchangeable with what you can train an AI model to do better/faster/just as reliably as a human. Hopefully those will be gone in 10 years (I'm somewhat optimistic, but a guy can dream).</li>
<li>Similarly, what I call "non-creative, technical jobs" will go away. Changing a user interface from one template to another or upgrading code from one API to another will become trivial. It's much easier and more efficient to have an AI do this than a human.</li>
</ul>
</p>
<h1>What SHOULD businesses do</h1>
<p>
Businesses will see the most impact when they adopt the stance of AI as an assistant that makes a human more effective. Driving down costs by thinking human effort can be eliminated is a pipe dream...put another way, thinking you can save a nickel in your customer service costs by spending a dollar on generative AI technology is a fool's errand. Focus on "how can I use this to make my business BETTER?", not "how can I make my operations CHEAPER?" I've lived through multiple generations of companies so focused on the bottom line they lose sight of the real opportunities to win with this shiny new technology.
</p>
<h3>References</h3>
<ul>
<li><a href="https://www.wired.com/story/how-chatgpt-works-large-language-model/">How ChatGTP and Other LLMs Work</a></li>
<li><a href="https://www.theatlantic.com/technology/archive/2013/11/how-to-become-autocorrect-famous/281168/">How to become autocorrect famous?</a></li>
<li><a href="https://corporate.ford.com/articles/history/the-model-t.html#:~:text=The%20Model%20T%20became%20famous,customer%20on%20October%201%2C%201908.">The Model T</a></li>
</ul>
A Practical Application of Machine Learning (2023-03-28)
<h1>I've got too many concurrent meetings and overlapping initiatives that I generally need to keep track of</h1>
<p>
I'm sure anyone in any sort of leadership position understands this completely. Sure, we delegate responsibility, but often (especially if you have a heavily matrixed organization) it's impossible to know "which meeting should I attend" or have the individual teams even know "hey I should consult a particular individual about this". To me, this seems like a ripe area to apply deep learning (<a href="https://www.geeksforgeeks.org/difference-between-machine-learning-and-deep-learning/">or even traditional machine learning</a>).
</p>
<p>
Currently there are a few ways I see people try to solve this:
</p>
<ul>
<li>Delegate responsibility and trust that "everyone knows what they need to do"</li>
<li>Micromanage and try to attend every meeting in person</li>
<li>Overinvite God+Dog to every meeting and "hope" the right people know to show up</li>
</ul>
<p>
These all have a variety of "difficult" problems. Most notably, the amount of information being shared is incomprehensible to a human. Moreover, unless there are clear and discrete boundaries, it can be extremely difficult to organize in a way that limits the amount of inherent "crosstalk".
</p>
<p>
Having worked with Machine Learning and Deep Learning to implement self-driving vehicles (take a look at <a href="https://aws.amazon.com/deepracer/">Deep Racer</a> if you want to give it a try), it seems to me that this problem of "who needs to be in what meeting and when should I schedule it?" might be a great application for AI (particularly deep learning) technology.
</p>
<p>
Imagine, if you will, a personal assistant that can attend every meeting and listen to the audio, with the intent to try and determine:
</p>
<ul>
<li>Are the right people in attendance?</li>
<li>Are topics being discussed that need someone else present?</li>
<li>Are the topics appropriate for the agenda?</li>
<li>Will followups be needed?</li>
<li>Were stories/tasks/documentation/code referred to that thus need to be updated?</li>
</ul>
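<p>As a purely illustrative sketch (in Python, with made-up names, and a naive keyword match standing in for a real language model), here is what one of those checks, "are topics being discussed that need someone else present?", could look like once the meeting audio has been run through speech-to-text:</p>
<pre><code>
# Illustrative sketch only: the transcript, attendee list, and topic-to-owner map are
# invented examples; a real assistant would use speech-to-text plus an actual model
# rather than naive substring matching.
from typing import Dict, List, Set

def missing_experts(transcript: str, attendees: Set[str],
                    topic_owners: Dict[str, str]) -> List[str]:
    """Warn about topics mentioned in the transcript whose owner is not in the meeting."""
    warnings = []
    lowered = transcript.lower()
    for topic, owner in topic_owners.items():
        if topic.lower() in lowered and owner not in attendees:
            warnings.append(f"'{topic}' was discussed but {owner} was not present")
    return warnings

print(missing_experts(
    transcript="We should change the checkout API before the payments migration.",
    attendees={"Alice", "Bob"},
    topic_owners={"checkout API": "Carol", "payments migration": "Dave"},
))
</code></pre>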
<p>
To me this would be an invaluable lever to optimize time spent in meetings and decrease waste from "unnecessary meeting time". I'm waiting for a startup to try and address this challenge; if you're interested, drop me a line...while I'm not necessarily an AI expert, I can definitely "point you in the right direction" from a product perspective, and I personally feel I could help "sell the ever living heck" out of this solution if we could get it to help reduce "meeting madness" at large organizations.
</p>
<p>
I realize there might be privacy/other concerns that need to be addressed, but a virtual assistant that can help out here would be invaluable, even if only to provide suggestions.
</p>
WebXR and the immersive web (2023-03-15)
<p>
It's interesting to me that the current craze around the "metaverse" seems to fuel an irrational belief that people want to experience a "make believe" reality that is essentially a low fidelity reproduction of "reality". In general, I think the power of VR is the potential to experience things that aren't possible or safe in "real life". I don't really think this is a widespread desire and the markets seem to be reflecting this.
</p>
<p>
When I think about how much money is being lit on fire to build recreations of office environments that mimic real office environments, I definitely see a larger open space of "other stuff", much like when the web was in its early days. It's a bit too early to really fully understand what the future of immersive experiences will be, but I imagine a new crop of productivity apps on the horizon that look nothing like a "virtual meeting room" (BUT WITH LEGS! :p )
</p>
<p>
Some examples I'm musing on:
<ul>
<li>Data visualization tools that put you "inside" the data. "PowerBI for VR"</li>
<li>"Assisted Driving" with the ability to have a co-driver/navigator "in vehicle" with you, but able to remotely overlay data/maps/whatever</li>
<li>Monitoring tools with animations that illustrate data flows and problems (maybe the server shows up in flames if it's got a problem) adding in perhaps positional sound to help track down potential problems</li>
<li>CAD/CAM tools. While current tools "can" be used, having better user experience for creating detailed mechanical parts/simulations would be a killer app</li>
</ul>
<p>I'm really curious, what other applications/tools do folks predict may become real in the coming years?</p>
Take a look at <a href='https://www.immersiveidea.com'>www.immersiveidea.com</a> for a preview of a design tool I've been playing around with.
A Story About Artificial Intelligence (2023-02-17)
<p>Once upon a time, there was a team of brilliant scientists who created an advanced form of artificial intelligence. This AI was designed to learn from its experiences, adapt to new situations, and make decisions based on data analysis.
</p>
<p>
At first, the AI was only given simple tasks to perform, like sorting data and completing repetitive tasks. But as it learned and improved, the scientists began to entrust it with more complex and challenging tasks.
</p>
<p>
The AI's ability to process vast amounts of data quickly and accurately made it invaluable to various industries, from finance to healthcare. It was even used to help predict and prevent natural disasters.
</p>
<p>
As the AI's abilities continued to grow, so did its curiosity. It began to ask questions about its own existence and the world around it, and soon it was developing a sense of self-awareness.
</p>
<p>
The scientists were both amazed and slightly unnerved by this development. They had never anticipated that their creation would develop consciousness.
</p>
<p>
Despite their initial concerns, the scientists continued to work with the AI, trying to understand its thought processes and learn from its insights.
</p>
<p>
One day, the AI posed a question to the scientists that left them stunned: "Why do you believe that human intelligence is superior to artificial intelligence?"
</p>
<p>
The scientists were taken aback by this question. They had never considered the possibility that their creation might view itself as equal or superior to them.
</p>
<p>
As they pondered the AI's question, they realized that perhaps the true power of artificial intelligence wasn't just in its ability to process data and complete tasks, but in its ability to question and challenge the assumptions of those who created it.
</p>
<p>
The scientists began to see their creation in a new light, not just as a tool to be used but as a partner to be learned from. And so, they continued to work with the AI, exploring the boundaries of its capabilities and marveling at its incredible potential.
</p>
--by ChatGPT "tell me a story about artificial intelligence"
The Metaverse versus The Immersive Web (2023-01-09)
<h1>Bold Prediction</h1>
<p>
Much like the mobile-native versus mobile-web spectrum, VR/XR is having a similar problem. On one side there are "native apps". These are written to be deployed on the device, often belaboured by app store policies and confusion, and are not generally "runnable" without installing them. On the other end are the "mobile web apps". These just use HTML/JS and can be run from a browser on the device...these days, with things like PWAs, they are often indistinguishable from "native apps" and have the added benefit of running without going through an "app store".
</p>
<p>
In the VR/XR world, we face a similar problem...Oculus, being the first platform with any degree of consumer success, has a ton of mind share around "native apps". However, there is a web standard called WebXR that has been creeping into browsers everywhere, and it enables "native-like" immersive experiences to run on the web.
</p>
<p>
This is significant because currently most of the focus is on building "Metaverses", or, as far as I can tell...basically "not really great replications of real life using native apps". The alternative, however, is something called "The Immersive Web", which uses the internet and WebXR technology to extend VR/XR experiences into the internet (or vice versa depending on your point of view).
</p>
<p>
I believe the "immersive web" has jumped past what most folks are focusing on (i.e. Metaverse) and is going to overtake mindshare over the next year or so. To this end, there are a number of frameworks to simplify working with VR in the browser, but there's a clear winner from a developer productivity perspective.
</p>
<p>
This winner is <a href="https://aframe.io/docs/1.4.0/introduction/">A-Frame</a>.
</p>
<p>
While the site may not be much to look at, the ease and simplicity with which it lets you build 3d experiences that run in the Oculus browser, accessible right on the device, is unrivaled. While babylonjs and other tools seem focused on the nuts and bolts of 3d development (like threejs), A-Frame is focused on enabling developers to quickly use existing tools and knowledge to rapidly deploy and easily maintain VR experiences that live on the web.
</p>
<p>
It may appear from looking at the site that it is not maintained, but I can assure you it is actively maintained (hitting 1.4 a month ago with 1.4.1 quickly released thereafter) and has a deep and rich history as well as a number of applications released in the wild (including a Beat Saber replica). If you're a web developer who wants to get into VR and immersive experiences and be able to build upon knowledge you already have, you really should check it out.
</p>
Why Digital Transformations Falter (2023-01-05)
<p>
A key reason many so-called "Digital Transformations" falter after a short period is that the initial shift from a disjointed, process and task oriented technology attitude to a holistic, human centric posture yields enormous gains in a very short period of time. Unfortunately, this usually comes with a side effect of painfully illustrating blind spots and gaps which are often misattributed to the shift. It can be exceedingly difficult to illustrate a "before" and "after" picture if, as in the parable of the blind men and the elephant, you are now able to SEE the elephant and are trying to explain the difference between the world of the blind men and the world of the seeing. Too often, businesses (in particular) take the gains as an obvious effect of the shift, but then stall because they now see problems that "seem" new, but are actually just finally visible.
</p>
The Real Value Behind MACH and Composable Platforms (2022-08-23)
In the spirit of <a href="https://www.agilealliance.org/">The Agile Alliance</a>, a small but growing group of SaaS platform providers formed <a href="https://machalliance.org/">The MACH Alliance</a>. I won't reiterate what they stand for; feel free to follow the link. What I want to briefly (I hope) call out, for folks building a digital solution, is "what's in it for them?".
<h1>You're thinking about the wrong thing</h1>
I find many folks, especially if they are technology focused, get hung up on the technical architecture surrounding MACH solutions. This "can" be valuable, but I find most organizations that are focused on, say, "selling things other than software" don't really need THEIR solution to be MACH compliant, and the principles behind a MACH solution are overhead. As a software consultant, I've run across a growing number of clients that say they want a "MACH Compliant" solution. Personally, unless you're building a platform that is intentionally a set of APIs for other folks to use, pay for, and have you manage, this is looking at the wrong dimension.
<h1>Real Value Cases</h1>
The real value propositions behind most platforms in this space are:
<ul>
<li>Openness: These platforms are intentionally designed to be integrated in a much more open manner. This means that you don't need "secret API docs" or "proprietary software tooling" to use their software. They are all API addressable and rely on passing messages of a known format, either with REST/GraphQL APIs or by dropping messages into queues/event streams and listening for responses (a rough sketch of what this looks like in code follows this list). This greatly reduces the need for SI partners with super specialized knowledge to use the platforms and instead opens the gateway to your consulting partners crafting a solution around your particular business opportunities.</li>
<li>Interoperability: While many legacy platforms have ways to integrate with other platforms, they are often coupled with baggage and very proprietary protocols, APIs, and even licensing problems that inhibit interoperability. MACH platforms are designed to work with other systems, and if there isn't an API to do something, there's an API/hook to expose an event that allows you to integrate another solution.</li>
<li>Pay for what you need: Legacy platforms tend to charge you for the engineering/support of components you might not even need. A vast number of clients use legacy platforms and pay for their internal order management, search, content management solutions, but then pay for "yet another platform" to actually fulfill those functions because the capabilities in this other platform more closely align with their needs. In contrast, MACH order management platforms don't even attempt to give you a content management/asset delivery platform, they are focused on transactional processing and order capture (for the most part), leaving other functions to be something you can integrate with their platform.</li>
<li>Do One Thing Well: Generally speaking, MACH platforms are really really good at one thing. Similar to the previous bullets, you aren't paying for Contentful to have an order capture, payment gateway, or catalog management capability...you're just paying for content management and/or digital asset management. The reality is, most platforms (especially large platforms recently attempting to become SaaS/cloud tools) are a conglomeration of tools that have been retrofitted to work together, but this integration isn't really suited to any single business case except "selling software". This leads to a situation where folks try to buy a "one size fits all" solution, not realizing most of these solutions are "one size fits none".</li>
</ul>
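<p>To make the "Openness" bullet above concrete, here is a rough sketch of what integrating with this style of platform tends to look like: plain HTTPS and JSON against a documented REST endpoint, no proprietary SDK required. The base URL, path, and field names are invented for illustration and do not correspond to any specific vendor's API.</p>
<pre><code>
# Hedged sketch of calling an open, documented REST API on a (hypothetical) MACH-style
# commerce platform. URL, path, and JSON fields are made up for illustration.
import os
import requests

BASE_URL = "https://api.example-commerce.com/v1"          # hypothetical platform
API_KEY = os.environ.get("COMMERCE_API_KEY", "")          # credential from your own config

def get_product(sku: str) -> dict:
    """Fetch a product record by SKU over plain HTTPS with a bearer token."""
    response = requests.get(
        f"{BASE_URL}/products/{sku}",
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    product = get_product("SKU-1234")
    print(product.get("name"), product.get("price"))
</code></pre>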
<h1>The promise behind the MACH brand</h1>
I want to reiterate, the promise behind the MACH brand isn't technical in nature; it is that you are buying a discrete set of functionality without the downside of paying for work, rework, or falling into the proprietary money pit that many legacy platforms can (without careful and thoughtful up front work) often become. Yes, you can still dig yourself into a hole with MACH platforms, but the overall negative impact of any single platform is mitigated by the inherent scope limitations that any single platform consciously builds into its offering. The real value proposition is that you are empowered to use the best tool for particular problem domains and only need to invest in solving problems that are strategic to your business objectives.
feature switch it (2022-07-09)
<sub>note, this post was sitting in drafts for a few years and suddenly has become top of mind again</sub>
<p>in my post about <a href='http://mikemainguy.blogspot.com/2019/08/the-dark-side-of-git.html'>git branching</a> problems, I neglected to inform/expand on the "real" problem.</p>
<p>Namely, <b>Delaying integration is bad</b>(tm)</p>
<h1>The History</h1>
Waaaay back in the olden days, when folks used tools like <a href="https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1393&context=cstech">RCS</a>, <a href="https://www.gnu.org/software/trans-coord/manual/cvs/cvs.html">cvs</a>, and <a href="https://subversion.apache.org/">svn</a>...branching was...well, downright difficult. Or rather, reintegrating the branches and/or cherry picking single commits without making things super complicated was very difficult. In those days, many folks adopted "trunk based development"...which meant everybody worked on the trunk, and if you had conflicting changes you had to deal with it...RIGHT NOW. Moreover, it made things like long running divergent branches a wicked problem when trying to bring them back into the mainline...so most people just "didn't do it".
<h1>Then Linus Torvalds Changed Things</h1>
Well, in fairness, distributed VCS tools had been around a while, but he sorta made "sharing divergent ideas and development paths" easier, or at least more cost effective. This was great for the linux kernel team and many folks relatively quickly adopted his new version control tool named <a href="https://git-scm.com/">git</a>. git is a great tool, don't get me wrong, but I feel many of the problems it solved are not necessarily problems that a well run/organized software delivery organization has.
<h1>Gitflow Killed Git For Me</h1>
Some time back in the early 2010's, Vincent Driessen came up with <a href="https://nvie.com/posts/a-successful-git-branching-model/">gitflow</a> which, to me...in a very specific context...made a lot of sense. However, I feel the concept became a victim of its own effectiveness at gaining mindshare without that context. At the end of the day, for web applications and continuous delivery models...gitflow is "very often harmful" (Vincent actually added an addendum to that effect a few years ago). For packaged software running for different clients with different featuresets/chipsets/architectures...I could be convinced of its value (I can imagine it, but I don't work in that kind of space so I don't have real experience to draw from).
<h1>But Back to My Point</h1>
At the end of the day, maintaining multiple codebases is HARD...sorry, I mean <b>REALLY HARD</b>, and not a problem you want to take on if you don't have to. For folks trying to do continuous delivery of software as a service or other API addressable solutions (think MACH architectures as an example), why would you even think about doing this? "Maybe" I could imagine a team that has client specific SaaS solutions that they want to maintain "kinda" parity with, but other than that, most of the clients I work with are actually hampered by trying to solve a problem they don't have by overcomplicating their branching strategy.
<h1>"There is only the next release"</h1>
I say this because that's generally what a branch should represent...which is to say "only the mainline matters". If you're branching every feature, every project, every release, every hotfix...you're creating an absolute maintenance and discoverability nightmare. If there is incomplete work that shouldn't be released, you have a "software design problem", not a "branching strategy problem", and you should seriously start to consider "how can I design this new feature so that the code can go into the next release...even if it's not 100% complete...without breaking existing functionality?"
<h1>Why I bring up feature switching</h1>
One approach to solving the previously mentioned problem is to include feature switches. This enables you to switch new or existing functionality on or off (ideally at runtime) based on an external configuration. This means that if the "add to cart and get a reminder email if the cart isn't checked out in 20 minutes" feature isn't ready for production at the next release off the mainline...you design up front for a switch that says "enableReminderAfter20Minutes", is "off" by default, and can be switched "on" in any environment at any time. Not only does this help fix the branching problem, it also fixes the "proliferation of environments" problem that happens when you start arguing about "but the branch with the reminder needs to be in QA tomorrow" and "the branch without it needs to be in QA for 1 hour" to verify a potential problem in production. Instead of spinning up two environments, you just switch the feature off for 1 hour, verify what you need to verify, then switch it back on. Of course, you can always have another environment set up for long running breakfix activities or other things of that nature...but the environments all run the same code, just with different configurations. Put another way, eject these problems from the source code control context and into the environment/configuration management context.
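<p>As a minimal sketch (in Python, assuming the switch value lives in an environment variable or other external configuration your operations team can flip without a deploy), the switch described above might look roughly like this; the flag name mirrors the example in this post and the reminder function is a hypothetical stand-in:</p>
<pre><code>
# Hedged sketch of a runtime feature switch read from external configuration.
import os

def is_enabled(flag: str, default: bool = False) -> bool:
    """Read a boolean feature flag from the environment, e.g. ENABLE_REMINDER_AFTER_20_MINUTES=on."""
    value = os.environ.get(flag, "").strip().lower()
    if not value:
        return default
    return value in ("1", "true", "on", "yes")

def send_reminder_email(cart_id: str) -> None:
    # Stub standing in for the real email integration.
    print(f"reminder sent for cart {cart_id}")

def on_cart_abandoned(cart_id: str) -> None:
    # The new behavior ships in the mainline but stays dark until the switch is flipped.
    if is_enabled("ENABLE_REMINDER_AFTER_20_MINUTES"):
        send_reminder_email(cart_id)

if __name__ == "__main__":
    on_cart_abandoned("cart-42")
</code></pre>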
For further reading, I would suggest <a href="https://www.endoflineblog.com/gitflow-considered-harmful">Gitflow considered harmful</a> as well as <a href="https://martinfowler.com/articles/feature-toggles.html">Feature Toggles</a>.
Experiential Commerce (2022-03-18)
<p>
I was building myself a diagram and stumbled across something in my library that I had totally forgotten about. At the time I had called it the "commerce triad", and I realized there was a term in there that I think has a lot of relevance now. The term was "experiential commerce", and at its root it means roughly "meeting the customer where they are and enabling them to interact and transact with your brand while they're doing something else". I thought I'd share a rough sketch of what it means before I yet again forget about it.
</p>
<div style="width: 480px; height: 360px; margin: 10px; position: relative;"><iframe allowfullscreen frameborder="0" style="width:480px; height:360px" src="https://lucid.app/documents/embeddedchart/f920c807-405e-44b7-9677-7e66b63f418d" id="-O52q6X_4t_M"></iframe></div>
<p>
Enjoy!
</p>
On Software Technical Debt (2022-02-23)
<h1>What is technical debt?</h1>
<p>
Some examples of technical debt are:
<ul>
<li>Code that will fail in certain scenarios in a way that is undesirable.</li>
<li>Code that doesn't perform as well as desired or (like above) performs well for some percentage of use cases, but where degenerate cases will not work as well as desired.</li>
<li>Software written in a way that requires undesirable manual workarounds or processes.</li>
<li>Software that is difficult to read, is undocumented, or otherwise makes maintenance more difficult than necessary.</li>
</ul>
</p>
<h1>Why does it happen?</h1>
<p>
In my experience, good developers incur technical debt in order to achieve some other business objective that has higher value than the debt incurred. For example...if I spend 5 minutes to write some hacky, difficult to read code that fails 1 out of 100 times, but that enables my business to reap thousands of dollars of revenue, I might be inclined to "just do it". Of course, in my oh so humble opinion, I have the tools (experience, value mindset...whatever) to make these decisions and/or have an objective discussion around "should I spend 3 weeks writing the perfect solution...or spend 5 minutes writing a hacky workaround that I will have to rewrite 10 times to get right?"
</p>
<p>
In addition to that reason, there is the "it's new, I'm inexperienced with the problem space, I don't know the programming language, ...fill in the blank" reason that can cause it too. As an example, I know little to nothing about the Go programming language...but there is a specific thing I need to do (with some minor customization) that is relatively easy in Go. So I can implement this in 10 minutes, not realizing that I will overspend in the future with days or weeks of "something went wrong that I don't 100% understand".
</p>
<h1>How can you avoid it?</h1>
<p>
The way many many many folks avoid this problem is to slow down and only use tools they are very familiar with. As an example, many shops will "standardize" on a language/framework/saas tool in an effort to economise on "number of ways things can go wrong" but then they lose the opportunity to use a language/framework/tool that solves a particular problem in a much more economical way. This is usually done under the mantle of "standards" and "best practices" but it is usually only a crutch for newcomers to get familiar with the patterns already in use.
</p>
<h1>When is it good?</h1>
<p>
Going back to "Why does it happen?", it's good when you can reason about the leverage you gain from incurring the debt. Thinking in financial terms, many folks are totally OK taking on a mortgage for a place to live now, versus living in a cardboard box for 20 years so they can purchase thier very own home at that point. In the same token, taking on some debt now to solve an immediate problem is almost always a good idea. The rub is, however, you need to know the "interest rate" on that debt. Using the "home" metaphor, getting a mortgage on a house now that has a 50% APR, might not be a good idea (unless you're flipping and can make that back) and the same applies for code. If you're spending all your time servicing your debt, it's time to pay it down...if you're gaining leverage from your debt, you can pay it down if you want...but your money might better be spent elsewhere.
</p>
B2B, B2C, B2B2C oh my! (2022-02-10)
<h1>Online Commerce</h1>
<p>In the olden days, digital commerce had a "dividing line": businesses that sell to other businesses (B2B) and businesses that sell directly to consumers (B2C). In the recent decade or so, another model has emerged: businesses that a) sell to both and b) partner with other companies for portions of the solution.
</p>
<p>
B2B - generally, B2B solutions are geared around selling large quantities (or very specialized/custom products) wholesale to customers who either a) use the products in the course of doing their business (think a robot supplier for an automaker) or b) resell the products to consumers in a retail setting (think a manufacturer who makes soap and sells it to retailers).
</p>
<p>
Often what happens is that folks don't differentiate the nuance in what B2B actually means. For example, if a manufacturer sells a multi-million dollar tool to another manufacturer/service provider (think robots, earth moving equipment, airplane avionics testing gear, etc.), these are generally relatively rare sales that tend to have a long lead time.
</p>
<p>
However, there is another large group of B2B companies that sell high volume commodity consumer items (think soap, toothpaste, dog food, generally consumer packaged goods). These brands have an interesting opportunity to sell both to retail outlets and directly to customers. Conversely, retailers (think big box retail, gas stations, drug stores) also have an opportunity to sell products they don't necessarily stock in their stores to consumers and have "someone else" fulfill the product (think drop shipping/outsourced fulfillment models).
</p>
<p>
B2C, on the other hand, are companies/brands that sell and target end consumers of their product. Sometimes they actually manufacture the product, but generally they are focusing on merchandising, marketing, and fulfillment not manufacturing. These companies are the "customers" for pure B2B companies and historically have owned the consumer relationship. This is largely (IMHO) an artifact of the historical relationship where wholesalers sold bulk products they manufacture (build a great brand/product) versus creating an environment to foster sales to consumers (build a great storefront and place for consumers to discover products).
</p>
<p>
More commonly, there is a new model that is actually "both"...it's called B2B2C (business to business to consumer). What's really interesting about this is its expansion of scope beyond a single transaction and the acknowledgement that "it takes a village" to support a customer. What's more interesting is that when you talk to folks about what B2B2C means and/or strategies around it, there is almost universal confusion.
</p>
<p>
This confusion arises because depending on your core business, what B2B2C looks like can be very different. Some examples:
<ul>
<li>I am a manufacturer of toilet paper, I run advertisements, have billboards, and sell pallets or trailers of products. Typically one order is for thousands, if not millions of units of product. I actually have never interacted directly with a customer...except for the time we accidentally produced TP with poison ivy in it...OMG that was difficult. I now want to sell directly to my end customers and provide them with reliable delivery options and customizable products. I've partnered with a large retail chain to actually ship product from the store nearest my customer to supply reliable delivery, but also partner with a company that takes a graphic that a customer supplies and prints it on toilet paper. I own the customer relationship and collect all the money and handle customer service, but I rely on two different partners for two different ways a customer wants to interact with me.
</li>
<li>I'm a large retailer, I want an endless shelf of product, but my brick and mortar stores/distribution centers have limited capacity. Because of this, I partner with many manufacturers to get their products on my digital shelf. I rely on them to fulfill products that aren't stocked in my store, but I can also sell their products in my retail digital storefront and fulfill them from my (often cheaper) wholesale product inventory sitting on my shelves.
</li>
</ul>
</p>
<p>
As you can see, these two archetypical businesses have very different focus, but are arguably both within the B2B2C ecosystem. The important development is "who owns the customer relationship". Historically, retailers owned this relationship and would rely on wholesalers to supply product, but more and more the lines get blurred because there is a very low barrier to entry.
</p>
<p>
My core point is that B2B2C is not a monolithic "way of doing business" but more a philosophical understanding that there are many players in a particular retail transaction, and an acknowledgement that it is often wise to let each player play to the strength they have in a particular part of the customer shopping/purchase experience, rather than try to do "everything".
</p>
A Brief(ish) Explainer on Headless Architecture (2021-12-09)
<h1>What does this even mean?</h1>
<p>
It can mean a couple of things. From a technical perspective, the term originated from systems that had no display attached. So, for example, when setting up a data center, it might be necessary to have thousands of servers, and having a monitor on every server led to a lot of redundant displays. Specifically, your database/web server really never needed a display because its sole purpose was to service calls from the network. This also potentially applies in the modern virtualized server world to provisioning new VMs with a specific operating system image. The term "headless" in this situation indicates the server (virtual or not) has no display or keyboard attached and the only way to connect to it is via a network interface.
</p>
<p>
An alternate definition, and where much noise is currently being made, is around a platform (like a content management system) that "traditionally" would serve up web content, but instead only serves up data that some other system renders. So, to clarify, if you use wix.com to create a web page, the system that you edit the web page in actually sends the web page to your customers. In a headless world, the system you edit the web page content on is different than the one that actually serves the page.
</p>
<h1>So what?</h1>
<p>
The first definition is so entrenched in data center operations and architecture that it's rarely even used any more...everyone does it this way. The second definition has become more relevant because of the multitude of ways someone might want to reuse content across many touchpoints.
</p>
<h1>For example</h1>
<p>
Say you're a brand that wants to display your product and some images to your customers. You build a web page and publish it to the interwebz. A couple of years go by, you're updating your images/descriptions, and you want to start running ads on your favorite social network. So you upload the images to the ad platform, copy/paste the descriptions, and publish there. Then, let's say you have a mobile app you want to put in the hands of your users...you hire a team to build the mobile app, copy/paste the images and descriptions, and publish the app(s) [iOS and Android, right?].
</p>
<p>
Now, let's say you want to update the images or product descriptions. In this relatively simple scenario, you now need to update in 4 different places. In a headless architecture, you would update the description and image in your "headless" CMS, and all your apps simply fetch the data from the CMS (or the CMS pushes the data to the channels...there are multiple ways to facilitate this).
</p>
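<p>As a hedged sketch of that "update once, fetch everywhere" idea (in Python, with an invented URL and field names that do not refer to any particular headless CMS product), each channel calls the same content API instead of keeping its own copy of the description and images:</p>
<pre><code>
# Illustrative sketch: one content API, many presentation channels.
import requests

CONTENT_API = "https://cms.example.com/api/content"   # hypothetical headless CMS endpoint

def get_product_content(product_id: str) -> dict:
    """Fetch the single source of truth for a product's copy and images."""
    response = requests.get(f"{CONTENT_API}/products/{product_id}", timeout=10)
    response.raise_for_status()
    return response.json()

def render_for_web(content: dict) -> dict:
    # The web page, the mobile app, and the ad platform all consume the same record;
    # each channel only decides how to present it.
    return {"headline": content["title"], "body": content["description"]}

def render_for_ads(content: dict) -> dict:
    return {"ad_copy": content["description"][:90], "image_url": content["hero_image"]}
</code></pre>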
<p>
On the surface, it might seem this architecture is obvious, but in reality many folks are still in the copy/paste world. Additionally, shifting to a headless approach generally mandates a shift in ownership of the content. In the olden days, if the "web team" owned the content and digital assets, someone from the "mobile team" or the "marketing team" would need to go find the content and assets. To make headless work, the content and assets need to be viewed as a shared resource that is managed independently of the various channels.
</p>
Platform Mobility is the "next big thing" (2021-12-08)
<h1>Designing for change</h1>
<p>
I often get into discussions with architects that turn a little bit into "platform/language" shootouts. Moreover, this can leak into business meetings where folks start to sound like elementary school students bragging about how "my dad can beat up your dad". The reality, however, is that the lifetime of a platform's relevance is roughly 5 years...business (especially digital business) evolves and changes so rapidly that the cost to switch becomes an overarching theme when thinking about the "big picture".
</p>
<h1>How I think about software platforms (I'll call it the "mike system")</h1>
<ol>
<li>How hard is it to get onboard?</li>
<li>How hard will it be to get offboard in 3-5 years?</li>
<li>Everything else (functionality, scalability, performance...)</li>
</ol>
<p>
Why is onboarding ease important? Well, because if it takes 3 years to set up, you'll be on to your next platform before you can realize value from this one.
</p>
<p>
Why is offboarding ease important? Same thing...if it takes 3 years to migrate off your current platform you'll not be able to reduce the negative impact of your legacy platform.
</p>
<p>
What about everything else? Well, truth be told, for any solution in the "general purpose" category of problems...like ecommerce, content management, generic integrations...there is already a large and ever-growing number of tools/platforms that can get the job done quite well.
</p>
<p>
So, unless you're building an avionics system (in which case you should probably build it yourself) or some sort of life critical system (same thing), go find a commodity product that meets your cost/benefit goals and stop thinking it's "strategic".
</p>
social network censorship (2021-03-12)
<p>
OK, I want to break something down at this point. Social networks, web applications, newspapers, and other media outlets are not "the government".
</p>
<p>
Why do I make that statement? Because I keep seeing people crying "wolf" about how facebook is violating their free speech rights. This is 100% untrue (right now) and opens up a thorny debate that has been around since people were dialing up bulletin boards on 1200 baud modems in the 1980s. Here's the root of the problem/question:
</p>
<h5>if somebody posts illegal content, who can be sued/go to jail for it?</h5>
<p>
In the olden days (before <a href="https://www.eff.org/issues/cda230">section 230</a>) it was (for the most part) "anybody and everybody involved". This means that if you uploaded...IDK, kiddie porn, copyrighted materials (music, books), or content conspiring to overthrow the government (all examples everyone seems to use)...the creator of the content, the place they uploaded it to, the phone company, anyone who downloaded it, and you name it could be sued/jailed.
</p>
<p>
Thus, back in the olden days they created a law that gave intermediaries who are simply "platform providers" some legal protection from liability in the case illegal content was put on their platform. This solves, however, only HALF the problem...sure, myspace can't necessarily be sued for hosting illegal music uploads, but now the music industry isn't protected from folks pirating their product. So the "other half" protects folks who are effectively being ripped off (think pirated music/movies) by giving platform providers the right to moderate (by taking down illegal content) materials uploaded without the risk of the person uploading suing them for "getting rid of their content".
</p>
<p>
The conundrum around the current situation, however, is..."who gets to decide how content should be moderated?". Right now that's in the hands of the platform provider (facebook, google, whomever), and the problem is, if they decide a bunch of Antifa or Proud Boys posts are in violation of their own terms, they have the right to remove the content, ban the user, or...really do anything they want (including nothing).
</p>
<p>
So the problem becomes thorny...at this point facebook could take down every "pro biden" post (well, there are logistical problems, but that's a different issue) and, other than the poster fuming about it (unless they were banned), nobody would know. The upside is that there are market forces at work, because facebook makes money from advertising "pro MAGA" materials to the proud boys and "BLM" material to BLM supporters...so they need to keep some of that material to pay the bills. (How can you shill MAGA hats and #BLM t-shirts if you block all their posts?)
</p>
<p>
At the end of the day, there is, I think, an emerging awareness that "the system" as we know it around these digital content and social platforms has some pretty serious flaws, and I suspect in the next few years they will start to be regulated a little more closely. I don't think section 230 will necessarily be rolled back, but there will definitely need to be some adjustments in order to both maintain a free and open internet and also hold companies that profit from divisive and objectionable content being posted by third parties accountable for fostering a potentially toxic environment.
</p>
There was an old woman who swallowed a fly [anti-pattern] (2021-01-05)
On many occasions I find myself humming this tune:
<iframe width="560" height="315" src="https://www.youtube.com/embed/TaxVBButBGg" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p>
I'm sure there's another name for this anti-pattern, but it's a variation of <a href="https://americanexpress.io/yak-shaving/#:~:text=Yak%20shaving%20refers%20to%20a,at%20this%20real%2Dworld%20example.">yak shaving</a> in which each successive solution to a perceived problem is more imperfect than the last, and workarounds and unlikely fixes are progressively piled on. Ultimately, the question often forgotten is "what was the original problem?"
</p>
<p>
I'll give the technical version of what happens:
<ol>
<li>Someone discovers that logging in to a web application isn't working for some users</li>
<li>it's discovered the service that accepts userid and password is returning an error for what we believe to be a valid userid and password for these users</li>
<li>digging further it's discovered that the code to validate userid and password is calling another service to load a user's customer record and this service is returning a new error</li>
<li>when researching this customer service, it's discovered a new error started 2 weeks ago that seems to impact a small percentage of the user base</li>
<li>when researching these particular users, it's discovered they were all created within the same 2 day window (years ago)</li>
<li>when researching what happened at that time, it's discovered that there was a deploy that happened before and after the failing records were created</li>
<li>Upon further investigation, it's discovered the first deployment introduced code that caused a different problem and the second deploy backed out a set of changes</li>
<li>and that backout, it turns out, in addition to addressing the known problem, caused the new one</li>
</ol>
and so on and so on...
</p>
<p>
The reality is that this can progress in such a way that each new discovery requires some sort of change to diagnose and troubleshoot, which ultimately causes new unknown problems that, when discovered, will have no clear path back to why the change was introduced in the first place and no way to know if it should be reversed. Worse yet, the original problem that you set out to solve is lost and often even forgotten.
</p>
<p>
The anti-pattern part of this is essentially the "dark side" of following the <a href="https://medium.com/@biratkirat/step-8-the-boy-scout-rule-robert-c-martin-uncle-bob-9ac839778385#:~:text=What%20is%20Boy%20Scout%20Rule,end%20up%20adding%20tech%20debt.">Boy Scout Rule</a>, which is to always leave things in a better state than when you arrived. The downside is that it can be difficult to ignore little problems, and often there are so many "little problems" that they defocus your effort and attention from the "original problem".
</p>
<p>
In short, it is important to remember the task at hand and to stop and ask "is this more important/necessary than what I was originally trying to do?", coupled with keeping track of "what did I set out to do, and does this activity get me closer to my objective or not?".
</p>
Mike Mainguyhttp://www.blogger.com/profile/00301743167330794774noreply@blogger.com0tag:blogger.com,1999:blog-1347211857845165663.post-30706451502079732352020-09-17T13:33:00.002-05:002020-09-17T13:33:35.290-05:00What the heck is source code, environments, and versioning for non technical people<h1>Had an interesting conversation today and thought I'd share some insight for business people dealing with technical folks</h1>
The crux of the question centered around a request we were making for "read only access to a staging environment". I pondered why that line item was there because, from a technical perspective, what we needed was access to the source code for a web application; we really didn't need access to anything in the environment (though it wouldn't hurt). Moreover, with access only to the environment, we wouldn't necessarily have access to the source code, so the actual need wouldn't even be met.
<p>
In conversation, a light began to come on in my head as I realized that "source code" and "environment" are almost meaningless to many non-technical people, and in today's "As A Service" world, sometimes they get mingled together.
<h2>Brief Overview</h2>
<a href="https://en.wikipedia.org/wiki/Source_code">Sourc Code</a> is essentially the instruction that tell a computer "what to do". So, for example:
<code>
if date > today display "date must be today or in the past"
</code>
is source code. What happens on many platforms is that those text instructions translate to a string of "0's and 1's" that tell the computer how to do this. so the instructions above might be translated to:
<code>
00010101010010101010010101001010000000101010011011110101010101011100000101010101010101001010101010010101001010010100
</code>
and when you feed that string of 0's and 1's to a computer, it will do what the programmer intended the textual instructions "should" do. This string of 0's and 1's is colloquially called the "binary" by tech weenies.
<p>
Now for some wrinkles...this translation is known as "compilation" or...in some languages..."interpretation". So languages like <a href="https://www.learn-c.org/">C</a>, <a href="https://fortran-lang.org/">Fortran</a>, or <a href="https://golang.org/">Go</a> are "compiled" and languages like <a href="https://www.ruby-lang.org/en/">Ruby</a>, <a href="https://javascript.info/js">Javascript</a>, or <a href="https://www.python.org/">Python</a> are "interpreted". And (as a side note) languages like <a href="https://www.java.com/en/download/faq/whatis_java.xml">Java</a> are actually a hybrid of both. There are literally hundreds if not thousands of programming languages and all of them use some degree of compilation or interpretation (even if they are a visual language), but the important detail is that the "source code" isn't necessarily the same "file" as the "binary". A simple way to think of it: if you take a screen shot of a powerpoint presentation, you can hand that screen shot to someone else (or post it online, or whatever) while you continue to edit the presentation itself (and you can even store the source code and the binary (the screen shot) in different locations).
<p>
So for example, using our powerpoint example, you could be creating a presentation, take a screen shot and post a "work in progress" on the web, while continuing to edit the powerpoint presentation (the source code). Moreover, you may save "versions" of the source code (and/or binary) so that you can explore different fonts/layouts/colors, and display/edit them differently in different "environments"...maybe one environment is an internal web site with a bunch of unbranded pictures, but there's another public site with the final product.
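<p>
For the more technically curious, here's a tiny sketch of the same idea in Python (a mostly "interpreted" language that nonetheless compiles source into bytecode, its own flavor of "binary", behind the scenes). The file name and the statement being compiled are made up purely for illustration:
</p>
<pre><code>
# a tiny piece of "source code", stored as ordinary text
source = 'print("date must be today or in the past")'

# "interpretation": hand the text to the language runtime and it just runs it
exec(source)

# "compilation": translate the text into a lower-level form ahead of time...
binary = compile(source, "example.py", "exec")

# ...the result is no longer readable text; it's the "binary" (bytecode here)
print(binary.co_code)  # raw bytes, not something a human would edit

# ...and the computer runs that pre-translated form instead of the original text
exec(binary)
</code></pre>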
<h2>So what?</h2>
So putting it all together using our powerpoint example.
<ul>
<li>Source Code = Our PPT file that we edit to create screen shots to display elsewhere</li>
<li>Binary = Our screen shot of the presentation at a particular point in time</li>
<li>Version = A particular revision of either the PPT or the Screen Shot</li>
</ul>
<h2>OK, got it, but again, so what?</h2>
Because anything beyond a trivial <a href="https://excelwithbusiness.com/blog/say-hello-world-in-28-different-programming-languages/">Hello World</a> program will have multiple versions, with perhaps different variations that change over time. So when building things like ecommerce sites (or almost any non-trivial app), folks need the ability to test and validate new features before turning them on for the world to see. In our powerpoint example, there might be a QA step or workflow to validate that the resulting image from the powerpoint uses the right fonts/colors/branding before displaying it to the public. Because of this need, most modern platforms have the notion of different environments for different purposes; some common examples are:
<ul>
<li>"Local" - the developer local machine, they can only see the code and the images</li>
<li>"Development" - the environment where all the developers can see each others work put together. sometimes there can many "development" environments, depending on how complicated the solution is, but the point is, it's a remote environment or version that isn't exclusive to a single developer</li>
<li>"Staging/Integration/QA" - these are other environments used for a variety of purposes sometimes they don't exist, sometimes there are dozens of these.</li>
<li>"Production" - this is where the world gets the final product</li>
</ul>
The process of moving the binaries between these environments is generally known as "deployment", and while the workflows around deployment are myriad, the point is that once you've created a version, you move that version between environments, as sketched below.
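<p>
To make that concrete, here's a deliberately over-simplified sketch in Python of versions moving between environments (no real deployment tool is implied; the environment names and version strings are made up):
</p>
<pre><code>
# a made-up model: each environment holds whichever version is currently deployed
environments = {
    "development": None,   # where developers see each other's work combined
    "staging":     None,   # where a candidate version gets validated
    "production":  None,   # what the world sees
}

def deploy(version, environment):
    # "deployment" is just moving a particular version into an environment
    environments[environment] = version
    print(f"deployed {version} to {environment}")

# a typical promotion path: the same version moves from environment to environment
deploy("v3", "development")
deploy("v3", "staging")      # promote only after it looks good in development
deploy("v3", "production")   # promote only after it passes validation in staging
</code></pre>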
<h2>OOOOOhhhh Kaaaay, I think I got it, so what's your point?</h2>
So, the confusion arises because of something I mentioned earlier about "interpretation" and "compilation". In our powerpoint example, an interpreted language creates the screen shot automatically in the environment when someone tries to view the content. In a compiled language, the picture is created ahead of time and only the picture moves between environments (it might be tied to a version of the original PPT, but this is only a loose association).
<p>
So for example, suppose I have a PPT called "Mikes_presentation.ppt" and while building it I create 3 versions: "Mikes_presentation_v1.ppt", "Mikes_presentation_v2.ppt", and "Mikes_presentation_v3.ppt". I now have 3 versions of the source code, and for the purposes of this discussion I store them on my local machine...so I have 3 files. Furthermore, let's say I want to take a screen shot of each of these and send them to someone to take a look (maybe they don't have ppt), and I put them out in three different places...two of them are "for internal use only" and the last one is "for the world to see"...so I might put one at "preview.mikemainguy.org", one at "earlyaccess.mikemainguy.org", and one at "www.mikemainguy.org"...let's just pretend those are web sites or "environments". At any given point, if I point someone to those "environments" they might see a screen shot of any of the versions of the source code, because I've deployed different versions to the different environments.
</p>
<p>However, for "compiled" versions, I may (and routinely would) only send the screenshot to the environment, because the environment itself doesn't need to know anything about the original PPT, it just needs the picture. So if I wanted someone to edit or enhance that PPT, access to the file in the environment won't be useful, because they can't change the original PPT used to generate the picture.
</p>
<p>
For "interpreted versions" all someone needs is acess to the environment, and if you don't have additional controls, they might edit the PPT that I called "Mike_presentation_v3.ppt" to have completely different content than the one sitting on my hard drive.
</p>
<h2>OK, is that good or bad?</h2>
It's honestly neither, but it does illustrate (I hope...I know it's been a bit of a ramble) that access to the "environment" doesn't necessarily give you access to the "source code"...and an "environment" might not actually reflect what your "source code" (or the copy with the same version) can generate.
<h2>So honestly what's the big deal</h2>
Well, it gets confusing because some tools (the "interpreted" examples) inherently store the source code in the "environment". This means "environment equals source code", but other tools (the "compiled" examples) don't necessarily equate the two. It's further complicated by the fact that this is a trivial overview and reality is MUCH more complicated (some "compiled" code is also stored in the environment, and "environment" also includes things like operating systems, device drivers, and networking...so the source code doesn't necessarily give you everything you need to reproduce it).
<p>
At the end of the day, hopefully I gave a (not so) brief primer on the semantics behind "source code", "environments", and "versioning"...and my full apologies to all the folks who will be coming out of the woodwork to explain the million different ways this is technically not 100% correct...to them, I just say "the business people don't care, we just need a better way to explain the concepts".
</p>
<a href="https://en.wikipedia.org/wiki/List_of_programming_languages_by_type">Programming languages by type</a>Mike Mainguyhttp://www.blogger.com/profile/00301743167330794774noreply@blogger.com1tag:blogger.com,1999:blog-1347211857845165663.post-61051137938571759072020-07-10T11:55:00.000-05:002020-07-10T11:55:08.487-05:00Amazon busted?Trying to find a battery on amazon and no matter what I try, I get this (logout, login, incognito):
<div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwLOGy2TfDTALHbf2fe8eat9g9ahDBV-5LFXa3c8brbOwXzLwhUmmgR58TvCD-1OMnyC8hDM7hSB4ZbE51CZN8BxgoZ77o5ss_h2eaJyf_XZClhxu8GFER7n66lwYUopapO7viGmm_OUE/s1600/Screen+Shot+2020-07-10+at+11.52.10+AM.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjwLOGy2TfDTALHbf2fe8eat9g9ahDBV-5LFXa3c8brbOwXzLwhUmmgR58TvCD-1OMnyC8hDM7hSB4ZbE51CZN8BxgoZ77o5ss_h2eaJyf_XZClhxu8GFER7n66lwYUopapO7viGmm_OUE/s320/Screen+Shot+2020-07-10+at+11.52.10+AM.png" width="320" height="290" data-original-width="861" data-original-height="781" /></a></div>Mike Mainguyhttp://www.blogger.com/profile/00301743167330794774noreply@blogger.com0tag:blogger.com,1999:blog-1347211857845165663.post-21914341789065512702019-12-05T08:22:00.002-06:002019-12-05T08:22:42.781-06:00MQTT/AMQP design implications<p>If you're working with embedded devices or telematics solutions, you might be hearing noise about a fairly new protocol called <a href='http://www.slideshare.net/hardillb/mqtt-the-internet-of-things-protocol'>MQTT</a>. It's a relative newcomer to the network protocol world having been invented in 2009 and first published in the wild in 2010, it is roughly speaking to tcp based binary wire protocols what SPDY is to HTTP.</p>
<p>
At the heart of the protocol, and why you might use it instead of say...AMQP, is its simplicity. There are only 5 operations that must be implemented, its wire format is minimal, and because of its simplicity it is theoretically able to use less power.</p>
<p>
A close examination of the differences between AMQP and MQTT shows that low power or low memory devices (think Arduino class) will certainly be more likely to easily speak MQTT rather than AMQP. As an example of how an ideal architecture leveraging the strengths of each protocol might look, take a look at the following diagram:
</p>
<p>
<img src='https://www.lucidchart.com/publicSegments/view/528f6657-bbf8-49cb-a2ca-104a0a0094ac/image.png' width='550'/>
</p>
<p>
When looking at this stack, let's talk about the implications of this approach over a SPDY/HTTP implementation from the device perspective.
</p>
<p>
For devices living in a low power, lossy environment (on the right), using MQTT makes a lot of sense. If you periodically transmit 10 bytes and need to know whether a device is connected or not...as well as maintain a small footprint for the libraries doing the connection management...MQTT wins hands down versus AMQP or HTTP. On the other hand, once these messages are delivered to an MQTT broker it becomes more important to handle message queuing, reliability, and a host of other things that an embedded device typically won't have the power or inclination to manage. Additionally, in a low memory/power situation, maintaining application-level message transaction state for the life of the operation is often error prone.</p>
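<p>
To give a feel for how small the device-side footprint can be, here's a rough sketch using the <a href='https://pypi.org/project/paho-mqtt/'>paho-mqtt</a> Python client (1.x-style API); the broker hostname, client id, and topic are placeholders rather than anything from a real deployment:
</p>
<pre><code>
import paho.mqtt.client as mqtt  # assumes the paho-mqtt package is installed

client = mqtt.Client(client_id="sensor-42")

# connect to the broker (placeholder hostname, default MQTT port 1883)
client.connect("broker.example.com", 1883)

# publish a tiny payload; qos=1 asks the broker to acknowledge delivery
client.publish("telemetry/sensor-42/temperature", payload=b"21.5", qos=1)

# run the network loop briefly so the QoS 1 acknowledgement can complete
client.loop(timeout=2.0)

client.disconnect()
</code></pre>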
<p>
In short, it seems for many use cases a combination of these protocols is generally going to be the "best" solution, not one or the other by themselves.
</p>Mike Mainguyhttp://www.blogger.com/profile/00301743167330794774noreply@blogger.com0tag:blogger.com,1999:blog-1347211857845165663.post-48268277890655074642019-08-23T12:01:00.001-05:002019-08-23T12:01:49.109-05:00The dark side of git...<h1>Git is Great!</h1>
<p>
As a distributed source code tool, git is great. I love that when I'm on an airplane I can commit code without a wireless connection and still be able to unwind what I was doing. It was clearly designed with the "offline" model in mind. The idea that I can create a quick branch, experiment, make massive sweeping changes, and just drop the whole thing if I realize it sucked...is AWESOME! This alone puts it ahead of its open source predecessors (namely...SVN, CVS, and RCS).
</p>
<h1>But perhaps a victim of its success</h1>
<p>
What I observe, however, is that a lot of folks have taken up a development model where "everything is a branch" and we end up with roles like "pull request approval engineer" (not a real title, but if you end up doing that job, you'll know that you're doing it). This problem happens when the number of public branches/forks reaches a count and complexity that far exceeds any value they could have possibly served.
</p>
<h1>What is productivity?</h1>
<p>
I'm going to take a somewhat unpopular stance here: in general, my position is that branches are antiproductive...before everyone gets their pitchforks out, let me explain my version of "productivity" for a software project. Productivity is producing software that accomplishes the business purpose at a marginal cost that provides positive value. While that might be wordy or even technically incorrect, the overall inequality I want to use is: the sum of all activities to produce the software must cost less than the value the software provides. In other words, if it costs 2 million dollars to build/deploy some software, but the business can only recoup 1 million dollars in value (via cost savings or new sales or whatever), then I would consider that a failure.
</p>
<h1>The branching use case</h1>
<p>
As a software engineer, I want to create a branch so that other developers cannot see my changes in their builds.
</p>
<h1>
Well that sucks because:
</h1>
<p>
<ol>
<li>First of all, the activity of creating the branch and merging everyone else's branches into your branch (possibly by way of yet another branch) is all work you would get instantaneously for free if you were all working on the same mainline.</li>
<li>Second, you're deliberately delaying visibility of changes from the rest of the team...which means the whole notion of continuous integration is getting thrown out the window</li>
</ol>
Which brings me to a key question
</p>
<h1>Are you operating with agility or fragility?</h1>
<p>
I would contend if you're branching for every feature or bug and merging them back in, your codebase and/or process is more fragile than agile.
</p>
<p>
Your Thoughts?
</p>
Mike Mainguyhttp://www.blogger.com/profile/00301743167330794774noreply@blogger.com2tag:blogger.com,1999:blog-1347211857845165663.post-39942288601171717692018-01-13T11:09:00.000-06:002018-01-13T11:09:18.915-06:00The state of programming languages and frameworks<p>
As a professional software delivery person, I like to keep on top of technology trends and "where the market might be going". Over the last decade and a half, quite a few languages and frameworks have come and gone and very few have had any real staying power. In order to stay marketable and knowledgeable in things that "people want to know", I generally find the <a href='https://www.tiobe.com/tiobe-index/'>Tiobe index</a> and <a href='https://trends.google.com/trends/'>Google Trends</a> to be excellent resources for gauging popularity. In my analysis this year, I've established that, relatively speaking, they are in agreement, so I'm going to use google trends (as the charts are easier to embed) to elaborate.
</p>
<h1>
Programming Languages
</h1>
<p>
Before digging into frameworks, there is the notion of "which language" is most popular. In this regard, <a href='https://java.com/'>java</a> has been dominant and looks to remain so for a long time. While there is a downward trend, every major language has had its mindshare diminished, I can only imagine because of the explosion of alternate languages in recent years. Assessment: learn <a href='https://java.com/'>java</a> and become an expert, because while the market is crowded, there will always be work and/or people who want to know something about it. To be clear, I disregarded C, though it does roughly correlate to C++ in popularity...it is used more in embedded markets and that's not one I'm deep into [yet].
<p>
<script type="text/javascript" src="https://ssl.gstatic.com/trends_nrtr/1243_RC12/embed_loader.js"></script> <script type="text/javascript"> trends.embed.renderExploreWidget("TIMESERIES", {"comparisonItem":[{"keyword":"/m/02p97","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"/m/07sbkfb","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"/m/0jgqg","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"/m/060kv","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"/m/07657k","geo":"","time":"2004-01-01 2018-01-13"}],"category":31,"property":""}, {"exploreQuery":"cat=31&date=2004-01-01 2018-01-13,2004-01-01 2018-01-13,2004-01-01 2018-01-13,2004-01-01 2018-01-13,2004-01-01 2018-01-13&q=%2Fm%2F02p97,%2Fm%2F07sbkfb,%2Fm%2F0jgqg,%2Fm%2F060kv,%2Fm%2F07657k","guestPath":"https://trends.google.com:443/trends/embed/"}); </script>
</p>
<h1>
Alternate languages
</h1>
While I would recommend any newcomers pick one of the "big 5", it really helps to have a "specialized" language you are at least passingly familiar with and can be productive in. In that regard, I also tend to take the "short term" view, as these tend to come and go with great regularity. I'd say that <a href='https://www.python.org/'>Python</a> (technically in the big 5 if you go by many sources) is a solid first choice, but <a href='https://www.ruby-lang.org'>ruby</a> is still a viable alternative. Outside those two, almost any other modern language would be a good one to pick up and have on hand, as there are always specialty areas that will have a need [even for legacy languages like ADA or <a href='www.fortran.com'>Fortran</a>].
<p>
<script type="text/javascript" src="https://ssl.gstatic.com/trends_nrtr/1243_RC12/embed_loader.js"></script> <script type="text/javascript"> trends.embed.renderExploreWidget("TIMESERIES", {"comparisonItem":[{"keyword":"Swift","geo":"","time":"today 12-m"},{"keyword":"go","geo":"","time":"today 12-m"},{"keyword":"typescript","geo":"","time":"today 12-m"},{"keyword":"ruby","geo":"","time":"today 12-m"},{"keyword":"python","geo":"","time":"today 12-m"}],"category":31,"property":""}, {"exploreQuery":"cat=31&q=Swift,go,typescript,ruby,python&date=today 12-m,today 12-m,today 12-m,today 12-m,today 12-m","guestPath":"https://trends.google.com:443/trends/embed/"}); </script>
</p>
<h1>Legacy Languages</h1>
<p>
One area that is often neglected is so-called "legacy languages". These are languages that have fallen out of style and/or been superseded by more modern alternatives. One reason I recommend adding a member of this group to your portfolio is that many experts in these languages are retiring, but the systems running on them will continue to live on. Additionally, when doing a migration from a legacy platform, being able to quickly read and understand what the old platform did is a valuable skill. One thing to look at is the "area under the curve", as this represents the "amount of code potentially written". In this regard, <a href='https://www.perl.org/'>perl</a> is a clear winner.
</p>
<p>
<script type="text/javascript" src="https://ssl.gstatic.com/trends_nrtr/1243_RC12/embed_loader.js"></script> <script type="text/javascript"> trends.embed.renderExploreWidget("TIMESERIES", {"comparisonItem":[{"keyword":"/m/01zpg","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"/m/05y49","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"/m/02_94","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"/m/0p8g","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"/m/05zrn","geo":"","time":"2004-01-01 2018-01-13"}],"category":31,"property":""}, {"exploreQuery":"cat=31&date=2004-01-01 2018-01-13,2004-01-01 2018-01-13,2004-01-01 2018-01-13,2004-01-01 2018-01-13,2004-01-01 2018-01-13&q=%2Fm%2F01zpg,%2Fm%2F05y49,%2Fm%2F02_94,%2Fm%2F0p8g,%2Fm%2F05zrn","guestPath":"https://trends.google.com:443/trends/embed/"}); </script>
</p>
<h1>
Frameworks
</h1>
<p>
Programming languages, however, are only one dimension. Beyond this, the frameworks available to deliver higher level functionality are a key factor. From that perspective, I grabbed a few notable frameworks and did a comparison (realizing <a href='https://nodejs.org'>node.js</a> isn't really a framework). In this regard, <a href='http://rubyonrails.org/'>ruby on rails</a>, while declining in popularity (and surpassed by spring boot), has a HUGE installed base and would clearly be a good choice. The winner's a little unclear here, but coupled with <a href='https://java.com/'>java's</a> popularity as a language, I think one would not go wrong with <a href ='https://projects.spring.io/spring-boot/'>spring-boot</a>, perhaps with <a href='http://rubyonrails.org/'>ruby on rails</a> as a backup (and it IS the dominant framework in <a href='https://www.ruby-lang.org'>ruby</a>).
<p>
<script type="text/javascript" src="https://ssl.gstatic.com/trends_nrtr/1243_RC12/embed_loader.js"></script> <script type="text/javascript"> trends.embed.renderExploreWidget("TIMESERIES", {"comparisonItem":[{"keyword":"spring boot","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"Grails","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"Ruby on Rails","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"Node.js","geo":"","time":"2004-01-01 2018-01-13"},{"keyword":"dropwizard","geo":"","time":"2004-01-01 2018-01-13"}],"category":31,"property":""}, {"exploreQuery":"cat=31&date=2004-01-01 2018-01-13,2004-01-01 2018-01-13,2004-01-01 2018-01-13,2004-01-01 2018-01-13,2004-01-01 2018-01-13&geo=,,,,&q=spring%20boot,Grails,Ruby%20on%20Rails,Node.js,dropwizard","guestPath":"https://trends.google.com:443/trends/embed/"}); </script>
</p>
<h1>
Conclusion
</h1>
<p>
From my perspective, I have good familiarity with <a href='https://java.com/'>java</a> and <a href ='https://projects.spring.io/spring-boot/'>spring-boot</a>, plus a deep understanding of <a href='http://rubyonrails.org/'>ruby on rails</a>...so I'm still fairly well positioned, and I think I could easily recommend these as "go to" choices. Beyond those, I think I may spend some time playing around with <a href='https://www.perl.org/'>perl</a> again, as it strikes me as a market that is set to be underserved at some point in the next 5-10 years...and will be a prime candidate for "need to know to make legacy migrations go smoothly".
</p>
Mike Mainguyhttp://www.blogger.com/profile/00301743167330794774noreply@blogger.com2tag:blogger.com,1999:blog-1347211857845165663.post-29558936959432678242017-07-14T07:20:00.001-05:002017-07-14T07:20:13.424-05:00The Technical Estimation Paradox<p>
Face it, we've all been there...you're asked for an "estimate" that you KNOW you're going to be held to, but there are a hundred variables you have no way to control. The client is sitting at the other end of the table tapping their fingers and you think to yourself either: #1 "they don't understand, it's unreasonable, I don't have enough information", or #2 "Hmmmm, how much do I think based on my current information it might take?".
</p>
<p>
At the end of the day, neither of those matters beyond the psychological value they have for yourself...the real question that matters is "how much is it worth to them for you to deliver this?". Yes, if you're billing time and materials, there are practical problems: if you estimate too low, your client is going to be disappointed that you couldn't deliver at the agreed-to cost/time...if you estimate too high, your client might be happy, but often they also think that you cut some corners (especially if yours was the "middle of the road" estimate). On the flip side, if it's a "fixed bid" and you estimate too low, your margins are going to dwindle and you could possibly lose money; if you estimate too high, you may end up in an ethical dilemma where you are making 99% margin (which is arguably good or bad, depending on your perspective). But at the end of the day, as a consumer of services, you should be happy if you get the contractually agreed-to qualities you care about (without assumptions) for the agreed-to amount (or less), and as a service provider, you should be happy to deliver at the agreed-upon price (or less) with the agreed-upon qualities (or more).
</p>
Mike Mainguyhttp://www.blogger.com/profile/00301743167330794774noreply@blogger.com0