Data Science Can’t Innovate Without Transparency

Misinformation and lies are killing innovation in AI: here’s how we can save it

This month, I’ve spent a lot of time at business schools teaching people from non-technical backgrounds about data science, analytics and AI. These are smart, well-informed executives, so I’ve been surprised by some of their misunderstandings of what is currently possible with data science, as well as some of their fears about the technology.

This misinformation? It’s coming from us: data scientists.

In our zeal to advocate the possibilities of AI, we’ve lost sight of reality — to the detriment of growth.

Most notable was a CEO who passed over a cutting-edge, proven AI solution in favor of a competitor touting quantum analytics, even though, by the most optimistic estimates, quantum computing won't be a viable commercial solution for almost another decade. Quantum AI may be fascinating and full of promise, but claiming your company is currently using it to get better results than others in the field is simply a lie. Full stop.

As a data scientist, you may want to shrug it off. So what if one company lies?

But it’s not just one company and not just one lie.

Hype and exaggerated claims are crowding out true innovators in the field. If we aren’t careful, data science liars will kill innovation in AI and set data science back years.

A thin line between hype and lies

For AI to have a maximum positive impact, its use must be appropriate and widespread. People have to trust that solutions can do what they claim without encoded biases, inaccurate assumptions or other structural problems negatively impacting outcomes. This societal trust is wearing thin.

In fact, only about a third of people believe you can rely on AI outcomes. After years of excessive hype, people are rightfully sceptical of whether AI and even analytics can deliver.

Data scientists cannot be fully blamed for this. Optimistic claims make their way into marketing and media, and then are inflated and broadcast further. Yet some data scientists deliberately puff up and even blatantly misrepresent what their technology can do. They capitalize on their willingness to lie to outpace more honest operators, all while increasing mistrust of AI: a deadly cycle.

A lie can travel halfway around the world while the truth is still putting on its shoes

Negative public perception of AI is growing. That pressure hampers investment and fuels demand for overly restrictive legislation. Fewer companies are willing to risk investing in AI, missing out on the tangible benefits of more efficient data use. Complex problems become harder to solve.

It’s not just that bad AI harms the reputation of all AI, however. Fake AI claims limit the growth of the true innovators in the field. The AI companies willing to make the biggest untrue claims often capture the most clients and investment, yet they feel no need to advance the technology.

When money flows to the liars, innovation slows. Research takes time and money: when it is scarce, progress screeches to a halt.

Towards transparent solutions

Sadly, we can’t eliminate liars entirely from data science. There will always be opportunists in every field. We can, however, make it easier for the public to understand when AI does what it claims. We do this through transparency.

We need to make it easier for people to understand what AI technology can realistically accomplish, so that expectations are set honestly and can be met or exceeded. We also need to give users more control over their data and a greater say in directing the outcomes of AI. Finally, we need to be able to explain why an AI makes any given recommendation, minimizing the black box of analytics.
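To make that last point concrete, here is a minimal sketch in Python of one common way to open the black box: permutation importance, which reports how much each input drives a model's predictions. The model, dataset and feature names below are invented for illustration, not drawn from any real product.

```python
# A minimal sketch of "minimizing the black box": reporting which inputs
# drive a model's predictions. Assumes scikit-learn is installed; the
# data and feature names are synthetic stand-ins, purely illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a client dataset (hypothetical feature names).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = ["tenure", "spend", "visits", "returns", "support_calls"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does held-out accuracy drop when each
# feature is shuffled? A simple, model-agnostic transparency report.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
    print(f"{name:>14}: {mean:.3f} +/- {std:.3f}")
```

A report like this doesn't make a model trustworthy by itself, but it gives users and auditors something verifiable to check a vendor's claims against.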

Transparency is a critical value for data science but hard to enforce at the individual company level. Anyone can claim transparency as much as they can claim effective technology. The problem in both cases: few people understand the technology well enough to properly assess claims.

Save legislation; save AI

The solution: effective AI legislation. The role of governments is to protect individual rights and fix market failures: here, a lack of trust.

A bottom-up approach to AI legislation that focuses primarily on bridging that trust gap will go a long way towards encouraging innovation in AI and data science generally. When both parties are well-intentioned and want to make the world better and more efficient, all stakeholders benefit from a third-party authority that can differentiate between verifiable and unverifiable claims.

The pharmaceutical industry is proof that this approach works. When you buy a pill, you may not understand the chemistry behind it, but you know there is a robust regulatory framework ensuring that it is safe and that it works. That framework also helps the companies that genuinely innovate maintain the healthy margins they need to invest in R&D and build upon those innovations.

The people who are buying AI have a similar knowledge gap and need the same third-party reassurance.

It’s time to save AI from dishonest actors and develop an innovative legislative framework to rebuild trust.

Building trust in AI

AI legislation is currently being finalized in the EU, while the US and many other governments are debating their own models. The UN is even crafting its own legal instruments and recommendations. Anyone hoping to avoid legislation altogether is going to be sorely disappointed.

As data scientists, we have a responsibility to be actively involved in developing legislation that solves the real problems of trust in AI while still encouraging innovation.

I’ve been taking every opportunity I can to speak to the stakeholders creating these regulatory frameworks, encouraging a bottom-up, data-driven approach focused on the true market failure: trust. But I can’t do it alone. If you value AI innovation, it’s time to get involved and help build AI laws that block the bad actors and allow the rest of us to innovate.

About the author

Fabrizio Fantini is the brain behind Evo. His 2009 PhD in Applied Mathematics, proving how simple algorithms can outperform even the most expensive commercial airline pricing software, is the basis for the core scientific research behind our solutions. He holds an MBA from Harvard Business School and has previously worked for 10 years at McKinsey & Company.

He is thrilled to help clients create value and loves creating powerful but simple-to-use solutions. His ideal software has no user manual but enables users to stand on the shoulders of giants.
