In the modern world, no other class of software has divided opinion and evoked as broad a range of emotions as AI. But has the hype lived up to expectations? Has disillusionment set in? Did you bet on something that failed to match your customers' expectations? Do you struggle to explain the tangible benefits and RoI that your AI-powered product offers to your customers? Some would say this is a trough of disillusionment, but this is not only about the hype cycle; it is also about how fundamentally different AI is from other classes of software, and why it requires fresh perspectives and solutions to avoid disillusionment.
Accountability, transparency, compliance, and auditing have been key drivers of software adoption. Users and businesses therefore expect software (or technology) solutions to be reliable and predictable.
Users' and businesses' expectations of software are typified by a comment recently made by a football pundit: "Technology never lies!" During a conversation with the CDO of an emerging healthcare company, he made a telling comment that shows how ingrained this expectation is: he said that nowadays, whenever their BI/analytics projects fail to meet expectations, they first tend to question and look closely at their data, their data-gathering processes, and their data quality in general.
The fundamental issue with machine learning and AI-first software is that AI may lie. Unlike other classes of software, AI software will err, and it may err at seemingly simple tasks; it will evolve and get better as it learns. But can and will customers be patient enough to ride through this period, and what does it mean for the positioning of the product? How should product managers, sales, and strategy teams align themselves to make sure that customers do not become disillusioned and that expectation management is done right?
Explainable AI (XAI) is an emerging field in AI that seeks to make machine learning and AI models provide various levels of explainability as they predict, recommend, and classify. This paper details various techniques that are currently employed, to varying degrees, to bring this transparency.
XAI offers a toolbox of enablers for product teams to deliver transparent, explainable, and hence more trustworthy AI predictions, recommendations, and classifications.
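One widely used entry in that toolbox is Shapley-value attribution, a game-theoretic method that splits a prediction among the input features. The sketch below is illustrative only: the feature names, weights, and baseline are our own toy assumptions, not any production model. It computes exact Shapley values for a small scoring function by averaging each feature's marginal contribution over every ordering of the features:

```python
from itertools import permutations

# Toy campaign "model" with hypothetical features (illustrative only).
def model(features):
    return (2.0 * features["spend"]
            + 1.5 * features["clicks"]
            - 0.5 * features["recency"])

# Baseline ("feature absent") values -- an assumption of this sketch.
BASELINE = {"spend": 0.0, "clicks": 0.0, "recency": 0.0}

def shapley_values(model, instance, baseline):
    """Exact Shapley attributions: average each feature's marginal
    contribution over all orderings in which features are revealed."""
    names = list(instance)
    contrib = {n: 0.0 for n in names}
    orderings = list(permutations(names))
    for order in orderings:
        current = dict(baseline)
        prev = model(current)
        for name in order:
            current[name] = instance[name]   # reveal this feature
            now = model(current)
            contrib[name] += now - prev      # its marginal contribution
            prev = now
    return {n: contrib[n] / len(orderings) for n in names}

x = {"spend": 10.0, "clicks": 4.0, "recency": 2.0}
phi = shapley_values(model, x, BASELINE)
# By construction, the attributions sum to model(x) - model(BASELINE).
```

Exact enumeration is exponential in the number of features, so practical libraries approximate these values by sampling; the point here is only that each prediction can be decomposed into per-feature contributions a customer can inspect.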
Recently, a product manager at a leading MarTech SaaS company had his proposal to productize his explainable AI PoC for paid media campaign performance prediction shot down due to the hesitancy of his higher-ups. They felt it was too risky to open Pandora's box.
The black-box nature of AI leaves room for a certain degree of deniability and provides an escape mechanism, especially for those who are bringing AI into their otherwise well-established products. For AI-first startups though, overlooking explainability could prove fatal especially as customers seek to put the outputs of the AI models in context.
Customer success teams, sales teams, and product managers will reap the benefits of such a move towards greater transparency in both the short and the long run, so long as they can fend off perception issues by objectively proving the value proposition and RoI of the product through well-defined performance metrics.
Providing transparency is not easy, and the resulting perception issues can feel daunting to overcome. However, knowing that there are organizations that have not only navigated this minefield but have also used explainable AI and metrics-driven performance tracking of their products to their advantage should allay any fears of causing perception issues.
A couple of our customers have grown over 200% year-over-year in the last couple of years by combining mature, battle-ready AI models with explainable AI approaches. These companies have been able to acquire and retain several well-known Fortune 1000 companies. They have been careful to choose the right customers, craft their messaging, and back it up with a product-led, transparent approach to problem-solving.
So, you can rest assured that explainable AI, and the transparency it brings, will help you get better at acquiring and retaining customers for your AI-first product.