How You Can Tell If An AI Startup Is Bogus

Written by Zest AI CTO Jay Budzik. Zest’s ZAML software uses machine learning technology to help lenders make more effective credit decisions safely, fairly and transparently. Founded by former Google CIO Douglas Merrill and backed by Matrix Partners, Lightspeed, Upfront, Flybridge and Baidu, Zest works with finance companies worldwide to help more people access fair and transparent credit.

It’s been a year since MMC Ventures published its startling finding that 40 percent of AI startups showed no material use of AI in their tech stacks. (The study covered Europe but, hey, it could be anywhere.) As an AI company CTO, I can tell you the buzz can be deafening.

Proving that AI is real (not just something that comes off as real) is a challenge I’ve discussed a lot recently with customers, partners and, especially, investors. The outline of what a true AI company looks like is still forming, and I think what Matt Bornstein and Martin Casado at Andreessen Horowitz have written here about AI companies will turn out to be quite prescient.


If you’re an investor, customer, or partner sitting across the table from an AI company founder or CEO, here are the questions I would ask their team to check if they’re legit. Given that AI comes in lots of flavors, for specificity’s sake, we’re defining AI here as machine learning.

What data sets did you use to train and evaluate your AI?

General-purpose AI is still the stuff of science fiction. Today’s technology works best when applied to a series of narrow and specific problems that the machine can learn to solve by processing large data sets of historical indicators and outcomes. You can tell how good your AI is at solving the problem by holding out some of the data to test its accuracy. AI company leaders should be able to describe what specific problem their AI is solving, how accurate it is and how this accuracy leads to a business outcome.
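To make the holdout idea concrete, here’s a minimal sketch in Python using scikit-learn. The synthetic data and generic model are stand-ins for whatever a vendor has actually built:

```python
# A minimal sketch of holdout evaluation (synthetic data, generic model).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical indicators and outcomes (e.g., loans).
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)

# Hold out 20 percent of the data; the model never sees it during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Performance on the held-out set is the honest estimate of real-world skill.
print("holdout AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```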

AI companies need data, the more the better. Data comes in many forms, but it’s easiest to think in terms of rows and columns. The rows correspond to each observation of an outcome (e.g., did the loan go bad or get repaid?). The columns are the inputs: what was known before the outcome was observed (e.g., monthly income at time of application).
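Here’s a toy version of that rows-and-columns framing; the column names and numbers are invented purely for illustration:

```python
# Each row is one loan; the feature columns are what was known at
# application time; the last column is the outcome the model predicts.
import pandas as pd

loans = pd.DataFrame(
    {
        "monthly_income": [4200, 3100, 5800, 2600],   # known at application
        "credit_utilization": [0.35, 0.82, 0.12, 0.67],
        "months_at_job": [26, 7, 48, 3],
        "defaulted": [0, 1, 0, 1],                    # observed outcome
    }
)

features = loans.drop(columns="defaulted")  # the inputs (columns)
outcome = loans["defaulted"]                # the label (one per row)
```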

An AI company should be able to tell you about its data in vivid detail. The company should be able to convey what the AI is trying to predict, what data was used to train the AI and how it evaluated its AI’s effectiveness. How often does the AI get updated? What plans do they have to incorporate new data to make it better? If a company has good answers to data questions, it’s much more likely to be legit.

What is a human doing now that your AI should be doing?

If the team across the table is serious about AI, what you are asking has been a burning question for them from the start. You want to hear them talk through the specific application of their AI. Depending on how it’s deployed and what it’s doing, AI can address any one of thousands of potential tasks. You want to be wary of teams that lack specific focus and anything that sounds too good to be true. Do they claim you’ll be able to replace vast swaths of workers? Are they pitching AI as a magic bullet that can solve any problem?

When a company has really worked through the process of applying AI to a specific problem, it knows how accurate the results are, when it tends to succeed and fail, and where it has data and process gaps. The company knows enough to see that AI is a tool that does what computers and advanced mathematics do well while freeing humans to do what they do better.

The company should have a clear picture of what people will need to do that the AI can’t, and how the AI will fit into a business process that involves people. The change management required to apply the AI to a business problem should be described so you know what customers need to do to get the benefits. People who have wrangled with AI should be coherent, thoughtful and humble. They will have stories of what went wrong and how they corrected it. Be wary of claims that AI doesn’t need to be carefully monitored.

Has the AI been used to drive consistent business outcomes and solve a real problem for multiple customers?

It’s easy to underestimate how hard it is to take an idea that works great in the lab and make it work in the real world. AI usually doesn’t perform as expected when it’s moved into a working environment, and making it really work can be a long and expensive journey. Only 20 percent of AI projects ever make it out of the lab, according to a recent Gartner estimate. In my work, I hear stories from giant companies that have spent multiple years trying to get their AI projects into production.

It’s important to learn some specifics about how the AI works in practice. Ask how many customers have used it, how long it’s been in production and what business results it generated. How long does it take to get up and running on average? How does the AI compare to historical measures of the same business outcome or task? How does it compare to less complex alternatives such as rules, decision trees or linear models?
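That baseline question is easy to sketch in code. Here’s one way to check whether a complex model actually beats simpler, more transparent methods on the same held-out data, again with synthetic data and generic models standing in for a vendor’s stack:

```python
# Compare an ML model against simpler baselines on the same holdout set.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=10_000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

candidates = {
    "linear model": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=4),
    "ML model": GradientBoostingClassifier(),
}
for name, clf in candidates.items():
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: holdout AUC = {auc:.3f}")  # is the lift worth the complexity?
```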

There’s a lot of AI out there that looks cool. The hard part is transforming a promising method that works on a handful of examples, or a specific, limited data set, into something that works in the real world without constant and expensive tweaking and maintenance. Data science is hard, and creating AI that produces consistent business results requires investment in highly skilled people, great tools and process discipline, including comprehensive monitoring. Just remember that what looks good in a demo might not turn out the same way when it’s applied to a real problem: Ask questions to get evidence that the AI really works.

How much time went into constructing the AI, how much field testing has it been put through and who has examined it and rendered an opinion on it?

You, of course, want to hear about how many PhDs a company has on staff and how much money went into developing the AI. Those are good metrics, although they can’t tell the whole story. The goal is to see that the company spent adequate time and care gaming out issues in the lab and then testing and refining in the field. Ideally, you will hear about years of development along with deployments with different types of customers, so you can be assured their AI is adaptable and proven.

Regulation of AI is only going to increase. It will require that models go through a careful validation and governance process, like the one we see today in financial services. AI models need thorough validation to ensure they are applied responsibly. In medical research, the Food & Drug Administration has already approved some AI-enabled processes, while in areas of finance, regulators have signed off on AI models in audits. When a proper validation process is followed, AI can pass muster for deployment and is poised for broad adoption, even in regulated industries. What validation practices does the company have?

How easy is it to understand your AI’s decisions or recommendations?

The early results of AI were so promising that the industry rushed ahead without building transparency tools to vet decisions and processes. That doesn’t matter so much if your AI is suggesting posts to click on or choosing a lip gloss color. But for federally regulated decisions, like lending or driving, the government requires detailed documentation of every step of the model-building process and expects businesses to justify each AI-based decision. In many situations, companies will be held liable for biased decisions or poor outcomes whether or not a regulator vetted the construction process. Ask the AI company to show you how it explains AI-based decisions to customers and regulators.
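As a deliberately simplified illustration of what such an explanation can look like: for a linear model, each feature’s contribution to an applicant’s score can be ranked into “reason codes.” Real explainability tooling, such as Shapley-value methods for nonlinear models, is far more involved; this sketch only shows the idea, and the feature names are invented:

```python
# Toy "reason codes" for one applicant: a feature's contribution is its
# coefficient times how far the applicant deviates from the average.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
y = (X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=1000) > 0).astype(int)
names = ["monthly_income", "credit_utilization", "months_at_job"]

model = LogisticRegression().fit(X, y)

applicant = X[0]
contrib = model.coef_[0] * (applicant - X.mean(axis=0))
for name, c in sorted(zip(names, contrib), key=lambda t: t[1]):
    print(f"{name}: {c:+.2f}")  # most negative first = top reasons to decline
```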

What kind of bias does the AI have and how is it mitigated?

Good AI companies should have a clear idea of how they make their AI fair, because bias is inherent in every data set. We know that the data sets we use to train our models contain gender and racial bias, and that many leave out significant demographic segments that have been historically underserved. Building a more inclusive AI has led us to search for more data. The people on the team matter, too. A good team of data scientists knows its blind spots and values diversity. Some 40 percent of Zest’s technical team members are women or belong to other groups underrepresented in computer science. Diversity leads to better outcomes.

Handling unintended bias, where benign intentions end up producing unfair results, circles back to transparency. Since AI can find unseen correlations among seemingly unrelated pieces of information, inputs may look unbiased while the results are not. The ethical AI company will have a comprehensive and actionable strategy to measure and mitigate bias so that its AI is used fairly and inclusively. Ask to see it.
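One common (if simplified) bias check is the adverse impact ratio, which compares approval rates across demographic groups; a ratio below roughly 0.8 is a traditional red flag under the “four-fifths rule.” A sketch with toy decisions, standing in for real model output:

```python
# Adverse impact ratio on toy model decisions; a full fairness program
# measures much more, but this is the basic arithmetic.
import numpy as np

approved = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])  # model decisions (toy)
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = approved[group == "A"].mean()  # reference group approval rate
rate_b = approved[group == "B"].mean()  # protected group approval rate
print(f"adverse impact ratio: {rate_b / rate_a:.2f}")  # review if below ~0.8
```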

Everyone wants to have a successful business and make money. Using AI to further your goals shouldn’t be difficult; you just have to ask the right questions to ensure that your partner in AI is ethical and has the discipline to put its AI into production consistently. Real AI companies can tell you all about this journey.

Illustration: Li-Anne Dias.
