Companies are in a heated race to showcase how their AI innovations help businesses achieve significant gains in processing data more quickly and accurately. But, as with all things, jumping on the bandwagon without a plan is rarely a wise move.
As AI tools have developed across a variety of industries over the years, their use has uncovered lessons for future innovators. Here are just a few of them.
Sudden pivots are wise
Though it seems like a lifetime ago, Microsoft released the Tay chatbot just eight years ago. It instantly became a playful conversational tool with cute little quirks. However, people quickly moved from having fun conversations with it to more controversial use cases. They realized they could train Tay, and soon taught it to be racist, sexist, and hateful.
This unanticipated outcome led to two important lessons. First, Microsoft reacted quickly and removed Tay from public use. Tay’s removal did not necessarily reflect a failure but, rather, a calculated risk within the innovation funnel. Second, just as “Privacy by design” methods taught us, the Tay incident reinforced the need for AI tools to incorporate “Ethics by design” models. Thanks in part to Tay, most AI tools now incorporate ethical guardrails. Take the risks, bring your innovations to market, but ensure they are built from the start with processes to detect and prevent misuse.
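To make that concrete, here is a minimal sketch of what an “Ethics by design” guardrail can look like in code: screen both the user’s input and the model’s output before anything is shown. The block list and the generate_reply stub are illustrative placeholders, not a description of how Tay or any particular product works.

```python
# Minimal "Ethics by design" sketch: screen inputs and outputs
# before anything reaches the user. The block list and the
# generate_reply() stub are hypothetical placeholders.

BLOCKED_TERMS = {"example_slur", "example_threat"}  # real systems use maintained lexicons plus classifiers

def is_unsafe(text: str) -> bool:
    """Crude keyword screen; production systems layer trained classifiers on top."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

def generate_reply(prompt: str) -> str:
    """Stand-in for the underlying chatbot model."""
    return f"Echoing: {prompt}"

def guarded_reply(prompt: str) -> str:
    if is_unsafe(prompt):
        return "Sorry, I can't engage with that request."
    reply = generate_reply(prompt)
    if is_unsafe(reply):  # check the model's own output, too
        return "Sorry, I can't respond to that."
    return reply

print(guarded_reply("Tell me about rivers"))
```

The key design choice is that the check runs on both sides of the model, so misuse is caught whether it arrives in the prompt or emerges in the response.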
Minimum viable standards are relative
Remember when restroom hand dryers with sensors first came out? They worked great for many people, but it soon became apparent that they were unreliable for people with darker skin tones. Developers hadn’t tested the product on people with darker skin. Across the board, we’ve seen that AI tools are often biased towards pale, male faces because other groups are not represented in sufficient numbers in AI training data. As a result, we now have higher minimum standards for training datasets, and we ensure they include people reflecting a wide range of demographics, especially in social and market research.
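As a sketch of what such a minimum standard can look like in practice, here is a simple audit that flags underrepresented groups in a training dataset before any model is trained. The field name, the records, and the 10% floor are illustrative assumptions, not a real benchmark.

```python
from collections import Counter

# Hypothetical training records with a demographic label.
records = (
    [{"skin_tone": "light"} for _ in range(11)]
    + [{"skin_tone": "dark"}]  # only ~8% of the sample
)

MIN_SHARE = 0.10  # illustrative floor: each group should be at least 10% of the data

def underrepresented_groups(records, field):
    """Return groups whose share of the dataset falls below MIN_SHARE."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {g: n / total for g, n in counts.items() if n / total < MIN_SHARE}

flagged = underrepresented_groups(records, "skin_tone")
print("Underrepresented:", flagged)  # {'dark': 0.083...} -> collect more data before training
```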
Our standards have improved over time, but they also differ by use case. In the research industry, for example, if you need to code questionnaire verbatims to understand which color of bar soap people prefer, 85% accuracy is suitable for the job. Pushing that 85% to 95% won’t change the outcome of the research, but it will take longer and cost more. On the other hand, if you need to understand the efficacy of different mental health therapies, achieving 99% accuracy via automated coding enhanced with manual coding is the better way to go. Life-and-death situations necessitate higher accuracy. Standards are relative.
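A toy simulation shows why. Assuming a hypothetical “true” preference split of 60% blue soap to 40% green, randomly mislabeling verbatims at 85% versus 95% accuracy produces the same winner; the extra accuracy buys nothing for this question.

```python
import random

random.seed(0)

# Illustrative simulation: 1,000 verbatims with an assumed "true" split of
# 60% blue vs 40% green. Each verbatim is miscoded with probability
# (1 - accuracy); we then check whether the winning answer changes.
def winner(accuracy: float) -> str:
    truth = ["blue"] * 600 + ["green"] * 400
    coded = [
        t if random.random() < accuracy else ("green" if t == "blue" else "blue")
        for t in truth
    ]
    return max(set(coded), key=coded.count)

print(winner(0.85))  # 'blue' -- same conclusion
print(winner(0.95))  # 'blue' -- higher accuracy, identical outcome
```

With a clear margin between the answers, symmetric coding errors shrink the gap but don’t flip it; only when margins are thin, or the stakes are high, does the extra accuracy start to matter.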
Ensure people retain final oversight
If you ask several competing AI image generation tools to create an image of fish in a river, and they all show sushi and maki rolls swimming upstream, that consensus doesn’t make the image valid. In fact, a person would know after seeing just one image that the result was invalid.
This is exactly why people are necessary to confirm the validity and accuracy of AI tools. For example, during the development of our qualitative coding tool, Ascribe, we compared the results generated by the AI tool with results generated by expert human coders. It takes time to continually generate results across industries and topics, and then test those results against human coding. But that ongoing process is time well spent to ensure that the quality of AI results is comparable to or better than human results.
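As a sketch of how such a comparison can be scored, here is one common approach: compute raw agreement between the AI and a human coder, then Cohen’s kappa, which corrects that agreement for what chance alone would produce. The codes and verbatim counts below are hypothetical illustrations, not Ascribe’s actual validation pipeline.

```python
from collections import Counter

# Hypothetical codes assigned to the same ten verbatims by the AI tool
# and by an expert human coder. Labels and data are illustrative only.
ai    = ["price", "scent", "price", "size", "scent", "price", "size", "scent", "price", "scent"]
human = ["price", "scent", "price", "scent", "scent", "price", "size", "scent", "price", "price"]

n = len(ai)
observed = sum(a == h for a, h in zip(ai, human)) / n  # raw agreement

# Cohen's kappa corrects raw agreement for agreement expected by chance.
ai_counts, human_counts = Counter(ai), Counter(human)
expected = sum(ai_counts[c] * human_counts[c] for c in set(ai) | set(human)) / n**2
kappa = (observed - expected) / (1 - expected)

print(f"Raw agreement: {observed:.0%}, Cohen's kappa: {kappa:.2f}")
```

Run regularly across industries and topics, a score like this gives a simple, repeatable way to confirm the AI is holding its own against expert coders.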
Cautious risk-taking will win the AI race
Perfection is elusive, even in the age of AI. Every product, no matter how advanced, has its limitations. While some flaws, like those seen with Tay, might demand drastic changes, most can be addressed with small tweaks or by discovering the optimal use cases. The secret to successful innovation is a balance of bold ideas, agile adaptation, and the courage to take small, calculated risks. If you’re curious to learn more about our approach to using AI wisely, please get in touch with one of our Ascribe or Artificial Intelligence experts.