I was about to bail on a tiresome marketing whitepaper about AI when chapter 4, Visions of the Future, caught my eye. An excited data scientist proclaimed that "the future is AI for AI." His vision is to employ AI that decides which AI to use. 

Techno-hype like this does our community a disservice. AI for AI is a cool alliteration, but it's evolutionary, not revolutionary. An example of AI for AI is AutoML, which is like Grammarly for data scientists. Sure, spell-checkers can help make you a better writer, but they can't turn you into Ernest Hemingway, and AI for AI doesn't magically make your algorithms more effective.

Indeed, promoting a magic potion for your magic potion is harmful--we're already drowning in AI clutter. For example, one study found that while 90% of companies spent more on AI last year, just 24% use it effectively, a share that is down from the previous year (3, 4). So we're spending more on AI but getting less value from it.

Buying boxes to store the junk you already don’t use won’t make you happier.

Instead, simplify and declutter your data science home. Google Chief Decision Scientist Cassie Kozyrkov compares data science to cooking: "you can cook with a microwave without knowing how to build one." Her point: focus on using AI effectively rather than inventing more or better algorithms.

Employ tools, policies, and procedures that make AI more accessible. Firms are shifting headcount from data science to data science operations teams that help deploy models. Explainable AI "puts algorithms behind a button" (5) to make them easier to understand and use. Model operationalization, or ModelOps, is like Uber Eats for data science. It delivers models into production safely and securely.

AI for your AI won't make you more effective—tidying up the data science you already have will.

Don’t make more algorithmic clutter. Use the AI you already have more effectively.

BIAS DISCLOSURE and FOOTNOTES

(1) Public service announcement! I'm a fan of MIT Technology Review, which is why I bit on this paper. But a little research (and by research, I mean Google) told me that MIT Technology Review Insights is "a custom publishing division of MIT Technology Review." That's industry jargon for "we write cleverly disguised marketing collateral that leverages the MIT brand name." So when you see the term "custom publishing" or "sponsored content," squint a little harder for the bias. It's usually hidden in plain sight.

(2) Personal hypocrisy disclosure: I work for a commercial software company, and we pay for third-party-sourced material, too. So let me beat you to the punch: I know, and I'm sorry. And if you're aggrieved over something that came from my company, TIBCO, let's critique it together! I've done it before. As in this article, I criticize the idea, not the messenger. But lazy marketing does the community a disservice, so let's fix it when David Meerman Scott's Gobbledygook Manifesto gets violated.

(3) Be Decision-Driven, Not Data-Driven, based on research by Bart de Langhe, a behavioral scientist and marketing professor at ESADE, Universitat Ramon Llull, and Stefano Puntoni, a professor of marketing at the Rotterdam School of Management, Erasmus University, and director of the Psychology of AI lab at the Erasmus Centre for Data Analytics.

(4) Ibid., and research by NewVantage Partners LLC, led by Thomas H. Davenport and Randy Bean.

(5) Where AI Belongs, featuring Michelle Lacy, head of data science at Bayer Crop Science.



