Making AI More Accessible

Why UX Is a Vital Part of the Success of AI

According to McKinsey, a lack of understanding of and strategy for AI is still the greatest barrier to its adoption in most organizations. People still often think about AI as science fiction, not as a tool that they can use to drive growth. Other organizations fear that leveraging their data more effectively would require data practices likely to turn consumers off.

This lack of general understanding and accessibility holds the field back and hurts many businesses that could benefit from a more strategic deployment of AI. It’s a shame, because AI can help businesses better meet evolving customer demands, especially in the areas that make brands competitive in even the most difficult industries: better customization and more personalized, efficient service.

So how can professionals working in AI make it more accessible to the general, non-technical population? We have to prioritize a positive user experience, or UX, from the outset. UX may not usually be top of mind for AI experts, but if we want AI to keep gaining adoption and utility, it has to rank higher on the list.

Human-Centred AI

No AI can operate totally independently; eventually, a person will be providing inputs or interacting with outputs. Perhaps, then, the most important way to make AI more accessible to the average person is to design AI with them in mind from the very start. Effective AI has both human-centric interfaces and functionality. To this end, we have to create models that solve the right problems by giving practical, autonomous solutions that users can easily put into action.

From a model design perspective, this means that:

(a) the AI solves a real problem faced by users
(b) the AI solves that problem in a practical way that does not conflict with other priorities
(c) the AI integrates easily into other operations
(d) the AI interface and outputs are simple to use and understand

In other words, AI can’t be a one-size-fits-all solution. It needs to be customized to align with the needs and priorities of the end-user. No one wants to devote time to a tool that ultimately delivers little ROI for the effort required to understand it.

We instead have to be driven by the question, “How will this help my end user?” All new functions and tools should be designed in response to actual business needs rather than our own ideas for disrupting their field. This is why data scientists today must also be business scientists: unless we fully understand the business context and the problems facing the end user, we can’t design AI that will fully address their needs. Human-centric AI is ultimately preferable to even the most innovative models that are too complex for the end user.

At my company Evo, for example, we created an Excel add-in that allows users to calculate forecasts and deliver recommendations inside an Excel spreadsheet rather than on our custom dashboard if they prefer. Why? Because our clients were already using Excel. They were comfortable in Excel. The Excel tool may not be as “cool” or outwardly innovative as the dashboards we custom-built using Shiny, but this interface matched client needs better. We can adapt our AI to fit the end-users’ current reality without compromising what matters (i.e. delivering automated, accurate pricing and supply chain recommendations). Ultimately, AI must solve clients’ problems, not create new ones. That means we adapt to their processes (such as integrating with Excel instead of always requiring a brand new tool) and provide clean, simple interfaces with no “fluff” functionality for any tools we design ourselves.
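
To make that concrete, here is a minimal sketch of what an Excel-first workflow can look like. It is not Evo’s production add-in: the workbook layout and the `forecast_demand` stand-in model are assumptions for illustration, and the only point is that the recommendations land in the spreadsheet the client already uses.

```python
# Minimal sketch of an Excel-first workflow, not Evo's production add-in.
# Assumes a workbook with a "sales" sheet holding a numeric 'units' column;
# forecast_demand() is a hypothetical stand-in for the real forecasting model.
# Requires pandas plus openpyxl for .xlsx files.
import pandas as pd


def forecast_demand(history, horizon=4):
    """Stand-in model: a simple moving-average forecast of the last 8 periods."""
    level = history.tail(8).mean()
    index = range(len(history) + 1, len(history) + horizon + 1)
    return pd.Series([level] * horizon, index=index, name="forecast_units")


def add_forecast_sheet(path):
    """Read sales history from a workbook and write recommendations back to it."""
    sales = pd.read_excel(path, sheet_name="sales")
    forecast = forecast_demand(sales["units"])
    # Append the recommendations as a new sheet so users never leave Excel.
    with pd.ExcelWriter(path, mode="a", if_sheet_exists="replace") as writer:
        forecast.to_frame().to_excel(writer, sheet_name="forecast")


if __name__ == "__main__":
    add_forecast_sheet("sales_history.xlsx")
```

The toy moving-average model is beside the point; the delivery mechanism is what meets users where they already work.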

AI aims to improve our lives; that’s impossible without human-centric design.

By putting human-centred values in AI, we can bring about a new renaissance of thinking and learning.

Marc Tessier-Lavigne, President of Stanford University

Setting Realistic Expectations

Once you have designed the right AI tool, you still must set realistic expectations for its performance, or you risk alienating end users who feel they aren’t getting what was promised. People often expect AI to be simultaneously smarter and dumber than it really is: they have high expectations for AI transformation while still trusting their own gut over the AI’s recommendations. This conflict leads to inevitable disappointment. When AI doesn’t live up to its loftiest promises, the user feels like something has broken.

As a result, over-promising feeds negative UX. You may assume that it is better to under-promise and then surprise the user with exceeded expectations. Unfortunately, this can also make AI less accessible. People are often only willing to invest in learning a new technology if it can significantly improve on the old methods. By setting low expectations, you may inadvertently motivate people to use the technology incorrectly — or not use the AI at all. These negative sentiments carry over into the UX, making your AI seem like a waste.

The only solution is honesty. Set realistic expectations for AI and meet them. This should include the promise that your AI will improve results over time, as it learns from doing. When people understand exactly what AI can do for them, they are much more likely to feel empowered to try it for themselves and trust the AI to give dependable results.

UX of AI Comes Down to Results

Of course, design and expectations are only part of what makes AI more accessible to non-technical users. No matter how much we focus on creating a friendly user interface or on keeping expectations in line with the actual purpose and utility of the AI, the UX will be negative if the AI doesn’t deliver what it promises. Ultimately, what matters most is whether your AI works.

Design is not just what it looks like and feels like. Design is how it works.

Steve Jobs

Much of the user experience of AI comes down to results. If a company switches to AI forecasting and gets worse results than it could get without the AI, it is obviously going to resist the change. Positive results encourage the expansion of AI use, but it may take only a single negative experience with your AI to discourage use long-term. Even unrelated AI technologies can sour a person’s relationship with whole categories of AI, especially when it comes to business.

We need to invest in getting AI right before turning it over to end-users. Your model doesn’t need to be perfect. After all, that’s an impossible dream. You do, however, need to be sure that you have created a robust algorithm that can deliver accurate insights now and learn to do better as it goes forward.

Your model should be equipped for self-learning that adapts to disruptions and only gets more accurate over time. This kind of AI not only avoids model drift; it also becomes inherently more accessible. AI that stays just as useful during a crisis delivers far more value for the end user, and that does more for positive UX than any other functionality.
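
In practice, “self-learning that adapts to disruptions” means, at minimum, a feedback loop that tracks live forecast error and refits the model when accuracy degrades. The sketch below illustrates that pattern only; `train_model`, the error window, and the tolerance threshold are assumptions for the example, not Evo’s actual pipeline.

```python
# Illustrative drift-monitoring loop, not Evo's actual pipeline: compare recent
# forecast error against a baseline and retrain when it degrades past tolerance.
from collections import deque

import numpy as np


def train_model(history):
    """Hypothetical stand-in model: predict the trailing mean of recent demand."""
    level = float(np.mean(history[-8:]))
    return lambda: level


def run_with_drift_check(stream, warmup=12, window=4, tolerance=1.5):
    history = list(stream[:warmup])
    model = train_model(history)
    baseline_error = None
    recent_errors = deque(maxlen=window)

    for actual in stream[warmup:]:
        error = abs(model() - actual)  # live forecast error for this period
        recent_errors.append(error)
        history.append(actual)

        if len(recent_errors) == window:
            current = float(np.mean(recent_errors))
            if baseline_error is None:
                baseline_error = current  # first full window sets the baseline
            elif current > tolerance * baseline_error:
                # Error has drifted: refit on the latest data and reset tracking.
                model = train_model(history)
                baseline_error = None
                recent_errors.clear()
    return model


if __name__ == "__main__":
    # Simulated demand with a level shift halfway through, to trigger a refit.
    rng = np.random.default_rng(0)
    demand = list(np.r_[rng.normal(100, 5, 30), rng.normal(140, 5, 30)])
    run_with_drift_check(demand)
```

The details are crude, but the UX consequence is the point: a model that notices a demand shift and refits keeps its recommendations useful during exactly the moments when users are most tempted to abandon it.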


Making AI Accessibility and UX Your Priority

The good news for those of us who work in AI is that better UX aligns with our own goals. AI should be human-centric for the sake of UX, and that same demand ensures our models have real utility. Because good UX requires setting the right expectations, data scientists and engineers get an opportunity to honestly assess the strengths and limitations of their models. Finally, the UX of AI depends on the model working well, something everyone involved already agrees is a priority.

It’s no surprise that UX and data visualization experts are in high demand right now. People who work in AI have discovered that even the best model is useless unless end users can leverage both the technology itself and its recommendations easily. Even during a worldwide economic crisis, the UX of AI is something smart AI companies and departments are investing in. Here at Evo, we’re currently hiring data visualization developers. UX has become an investment in the future that is vital for anyone who plans to grow the impact of AI.

AI has the potential to revolutionize the economy in the next decade, but only if companies continue to adopt and use AI in more areas of their businesses. That’s only possible if everyone working in AI makes a commitment to making AI as accessible as possible to the general public. It’s time to demystify AI and embrace it as a tool to improve everyone’s lives — not just those of us who understand the deeper technology.

***

Thanks to Kaitlin Goodrich for her contribution to this article.

About the author

Elena Marocco joined Evo as a data scientist in 2016 after a very successful internship. A cum laude graduate in Mathematics from the University of Turin, she defended an MSc thesis proposing an innovative solution for fashion inventory management.

She is excited about the world of probability and statistics and, more generally, about discovering useful maths that can have a significant impact through real-life applications.
