I have evaluated many vendors offering solutions for different areas of business need. (There seems to be an interesting correlation between the pace of business transformation and the pace of evaluating new B2B SaaS solutions to fill various business operations needs.) Over time, I’ve come to place a lot of importance on aspects that are often ignored when validating vendors; specifically, the factors around their long-term ability to deliver and to remain agile by today’s expectations.
I’m not talking about just the third-party products that we integrate to deliver our customer experience (e.g. a loyalty management product). I’m talking about every B2B service or product that we use to keep our operations running smoothly and to keep up with the demands of the emerging digital organization (as it goes through its own transformation): for example, store operations & execution, labor operations, crew training & certification, staffing & scheduling, inventory management, supply chain distribution and logistics, workplace engagement, etc.
Just like consumer-facing businesses, incumbent B2B vendors are equally at risk of being disrupted by digitally native startups or the big technology platform players. The criteria and lens we are accustomed to using when evaluating and selecting vendors and their solutions for our business operations are no longer sufficient to predict whether they can sustainably meet the dynamic demands created by our own digital transformation, as the pressure to improve business operations processes keeps increasing across the board.
For example, common practices in vendor selection look at product capabilities as compared to business functional requirements, cost/pricing, services & support, financial & organizational viability, product position in the market, etc. However, product capabilities and market position are static snapshots reflecting how the product (and its vendor) performs at the time of assessment. Instead, we need to assess and predict a B2B vendor’s evolutionary fitness for weathering market disruption and meeting our digital transformation needs, by evaluating its product engineering practices. In today’s world, this proves to be an important viability predictor.
Over the course of this year, I used the same principles we apply in digital transformation and in DevOps, and the same engineering KPIs we use to measure our own progress, to evaluate a variety of vendors across several B2B market domains on their respective product engineering practices. Some of the evaluations were for the purpose of selecting new solutions to address gaps or needs in business operations, while others were proactive, ongoing “fitness” assessments based on our growth needs.
The results were quite telling.
Sometimes, it felt like a mini live-play of disruption: watching young, small startups catch up on functionality against established players in a matter of a few months, thanks to their ability to push new updates many times a week (or day) and to “co-evolve” experiences with end users directly by sending them “what-if” features and collecting instant feedback for the next iteration.
Other times, I could see where each established player was in its own transformation journey along the whole spectrum: some stuck in their past success, unaware of the risk and pressure coming from platform players; some aware of their own challenges and of competition in the market, but without a true understanding of the nature of the new “agile” game and of what needed to be transformed at their core.
To be clear, startups are not automatically better at engineering new products. I’ve observed a common misconception among startups and long-time incumbents alike when they showcase aspects of their solutions such as architecture, scalability, and reliability. Being in the cloud is by no means proof of an organization’s agility or of its engineering maturity for scaling its product.
For big and small players alike, it’s easy to choose one of the cloud platform providers to build out an initial solution. There are so many out-of-the-box, platform-provided capabilities to leverage right away that almost anyone can have “something” built quickly these days. The question is: is this truly a “product” (albeit an MVP) ready to be iterated quickly in the field with real users? How quickly can the vendor roll out new functionality and scale up in a consistently reliable manner? When the only answer I got back to the question “what’s your approach to scaling your solution?” was “the underlying Azure platform will scale for us,” it was clear there was no deep architecture or engineering thought behind the solution (possibly due to a lack of experience with, or understanding of, product/solution-level architecture and engineering needs for scaling).
To effectively support large business customers and their increasingly more rapid transformation needs, B2B SaaS providers must understand what it means to scale the product, the experience, and the engineering team itself, sustainably.
There are a few areas of probing I rely on for clues into the maturity level of a B2B provider’s product engineering practices:
1. Trunk-Based Development. Many vendors were surprised when I probed about their code management, branching, and merging approach. Based on my experience over the course of this year, it’s amazing how directly and accurately this correlates with a vendor’s product release agility. For a sizeable engineering team, the vendors who are not doing trunk-based development are almost always the ones that can only release once every 4-6 months or longer. They are also more likely to run into last-minute release stability issues, resulting in further delays.
2. Test Strategy and Approach. The level of automation (and where and how automation is done) and the sophistication of the testing methodology directly reflect an organization’s engineering maturity. I’ve been presented with plenty of vendor talk about surface-level automated tests (e.g. using UI drivers) and boilerplate statements on unit tests, whereas those that truly get it right demonstrate working test executions and outcomes, with increasingly sophisticated tests layered along the entire DevOps tool chain. Testing in their universe is at the core of engineering and tooling.
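As a rough illustration of what “layered” testing looks like in practice, here is a minimal sketch: many fast unit tests on pure business logic at the base, with fewer, coarser tests exercising the wiring above them. The names (`apply_discount`, `PriceService`) are hypothetical and not taken from any real vendor product.

```python
# Hypothetical example: names are illustrative, not from any real product.

def apply_discount(price: float, pct: float) -> float:
    """Pure business logic: cheap to test exhaustively at the unit level."""
    if not 0 <= pct <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - pct / 100), 2)


class PriceService:
    """Thin service layer; covered by fewer, coarser integration tests."""

    def __init__(self, catalog: dict):
        self.catalog = catalog

    def quote(self, sku: str, pct: float) -> float:
        return apply_discount(self.catalog[sku], pct)


# Base of the pyramid: many fast unit tests, no external dependencies.
assert apply_discount(100.0, 10) == 90.0
assert apply_discount(19.99, 0) == 19.99

# Higher layer: an integration test exercising the service wiring.
svc = PriceService({"SKU-1": 50.0})
assert svc.quote("SKU-1", 20) == 40.0
```

A UI-driver test (the “surface-level” kind) would sit above both of these layers and be the slowest and scarcest, not the only kind.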
These two areas, along with DevOps, enable the engineering team to keep their product in an “always deployable” state as much as possible, which is the foundation for iterating and evolving the product in the field quickly. (I refrained from using DevOps as a primary source of probing: I found that CI/CD and DevOps, just like “agile”, have been talked about so much and so universally that it’s hard to differentiate one vendor from another by description and discussion alone in a limited amount of time.)
3. Demonstrated commitment to product observability as part of the foundational solution elements, e.g. telemetry implementation and tooling. Understanding the importance of observability, and consequently having the ability to proactively prevent production issues or troubleshoot them quickly, improves production availability and reliability. This provides the foundational confidence for pushing frequent software updates (i.e. product iterations).
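A minimal sketch of what baked-in telemetry can look like, using only the Python standard library; the event names and fields here are hypothetical, and a real product would typically use a dedicated SDK such as OpenTelemetry:

```python
# Hypothetical telemetry sketch: structured JSON events from an
# instrumented operation. Event names and fields are invented.
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("telemetry")


def emit(event: str, **fields) -> dict:
    """Emit one structured telemetry event as a JSON line."""
    record = {"event": event, "ts": time.time(), **fields}
    log.info(json.dumps(record))
    return record


def sync_inventory(store_id: str) -> None:
    """Instrumented operation: success and failure both leave a trace."""
    start = time.perf_counter()
    try:
        # ... the actual inventory sync work would happen here ...
        emit("inventory.sync.ok", store=store_id,
             duration_ms=round((time.perf_counter() - start) * 1000, 2))
    except Exception as exc:
        emit("inventory.sync.error", store=store_id, error=str(exc))
        raise


sync_inventory("store-42")
```

The point is not the mechanics but the habit: every meaningful operation emits structured, queryable signals by design, rather than relying on ad hoc log lines after an incident.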
4. Demonstrated thoughtfulness in the selection, definition, and implementation of KPIs that measure user experience performance. One of the manifestations of great product engineering is the data-driven feedback loop on how the product (through its features and functionalities) is performing in users’ hands. It’s easy to simply track service response times, or even just various page, screen, or UI element load times as reported by the underlying frameworks; it’s much more work to figure out meaningful user-centric performance metrics, implement them, and then optimize user experience accordingly. Engineering teams that are not close to user experience feedback tend to optimize the raw performance of certain areas of code in isolation; those numbers often look acceptable or even good, whereas in real use, one can pull out a watch and count off many seconds or even minutes before the product is loaded or usable.
This sets the foundation for an engineering-enabled, data-driven, test-and-learn product evolution mindset.
5. Approach to product evolution. “Tell me what you want and I will build it for you.” I hear this quite often from vendors eager to satisfy prospective customers. End users often describe a feature request based on their prior usage context or experience, or simply on what they thought they wanted. Blindly building out a product’s feature set based on end-user requests can become a detriment to the provider and the product itself. I once asked what the rationale was behind supporting Windows mobile devices. The answer wasn’t that there was any user base on Windows devices; rather, the vendor had posed the question “should we support Windows devices?” to customers, and the customers said “sure, why not.” The consequence was that the team significantly diluted precious engineering resources that could have been used to make the experience better for the real user base.
6. Product thought leadership, recognizing and prioritizing those capabilities that are quickly forming the new ways of working in the digital economy and digital ecosystem, e.g. self-serviceability, APIs and rich interoperability features, visibility and clarity of data usage and compliance, etc.
7. Engineering leadership and culture. Engineering leaders must be close to the code and to team operations on the ground, and the engineering team must be close to the customer and to field performance and feedback. While I’ve seen very successful distributed engineering team setups, I’ve also seen products and teams paralyzed by layers of lost translation, which directly affected the agility and quality of product engineering.
It’s also interesting to pick up clues of engineering excellence from the team’s attitude towards technology selection and use in the product. In the engineering community, there are always plenty of passionate debates around languages, libraries, components, tools, and platforms of choice. At the end of the day, no matter how “superior” a particular technology may be, bad engineering can make the resulting product perform many times worse in every respect than a less flashy technology in the hands of a team with better engineering expertise and confidence.
Finally, just like interviewing candidates for a potential hire, conclusions or predictions drawn from interview sessions can sometimes be misleading. To minimize such risks, companies often conduct a test phase with short-listed providers to evaluate the products (and vendors) through simulated or real-use scenarios. When you can afford such an approach, I find that capturing and tracking DORA’s Software Delivery and Operational (SDO) performance metrics provides invaluable engineering performance indicators during this phase. A promising startup that struggles to implement and report on these KPIs is a warning sign that it may not be able to scale its product and team successfully.
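DORA’s four SDO metrics (deployment frequency, lead time for changes, change failure rate, and time to restore service) are simple enough to track from deployment records during such a test phase. Here is a minimal sketch; the records below are invented sample data:

```python
# Minimal sketch of computing DORA's four SDO metrics from deployment
# records captured during a vendor test phase. Sample data is invented.
from datetime import datetime, timedelta

# (deployed_at, lead time from commit to deploy, caused a failure?, time to restore)
deployments = [
    (datetime(2020, 6, 1), timedelta(hours=6),  False, None),
    (datetime(2020, 6, 3), timedelta(hours=20), True,  timedelta(hours=2)),
    (datetime(2020, 6, 8), timedelta(hours=4),  False, None),
    (datetime(2020, 6, 9), timedelta(hours=9),  False, None),
]

window_days = 30

deploy_frequency = len(deployments) / window_days            # deploys per day
lead_times = sorted(d[1] for d in deployments)
median_lead_time = lead_times[len(lead_times) // 2]
change_failure_rate = sum(1 for d in deployments if d[2]) / len(deployments)
restore_times = [d[3] for d in deployments if d[3] is not None]
mttr = sum(restore_times, timedelta()) / len(restore_times)

print(f"deployment frequency: {deploy_frequency:.2f}/day")   # 0.13/day
print(f"median lead time:     {median_lead_time}")           # 9:00:00
print(f"change failure rate:  {change_failure_rate:.0%}")    # 25%
print(f"time to restore:      {mttr}")                       # 2:00:00
```

A vendor that cannot produce even this much data about its own delivery pipeline is telling you something about its engineering practice before the test phase ends.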
To summarize, when comparing vendors and their solutions, consider trade-offs towards better product engineering practice rather than more complete functional capability coverage:
- Rethink requirements on product capabilities and differentiate what’s core or foundational from what’s easily evolvable. A vendor with strong product engineering practices can quickly catch up on missing functionality, and will provide a better experience and more innovation of new capabilities in the long run.
- Elevate the importance of “digital ecosystem” capabilities, as a product increasingly interacts with other systems. Open, transparent, and interoperable (or collaborative) product characteristics such as data access APIs, self-serviceability, programmability, and event streaming/messaging/notifications reflect a B2B product team’s inherent understanding of where the world is heading rather than where it has been, and consequently a better-prepared product and vendor.
- Don’t mistake the use of a cloud platform alone for being modern and agile.
- Reliability, resilience, observability, and insight into how a product is functioning are prerequisites to get right, and to get very solid, in order to enable the product to evolve quickly.
- If the engineering team is not able to keep their code in an “always deployable” state, they are likely not going to meet our “agile” business needs, regardless of how complete their current functional capabilities are.