
Recognizing Bias in Business AI Use

Apr 20, 2022 Sam Yehya

Bias in AI in business applications can come from several different places.


Let’s face it - now that artificial intelligence has stepped out of the pages of a Robert Heinlein novel and into our SaaS applications and cloud servers, AI is the new black. It’s what everyone is talking about. If most articles are to be believed, it’s impossible to run a modern business without it.

Although we're not quite there yet, there may be a point where that becomes true. The reality, however, is that AI has some shortcomings, even in business use. It’s already clear that it won’t replace humans in the customer experience cycle. Its deficiencies go beyond empathy, however. Bias is also an issue.

Bias is an interesting topic for AI. Why? Because AI can be both the problem and the cure. The challenge is that the AI systems businesses are already using display some bias. What does AI bias in business look like? How do you recognize it within your own organization?

What is Bias in Business AI?

Bias in AI in business applications can come from several different places. While human developers and designers are part of the problem, even the part we generally think of as unbiased - the data that trains the AI - can promote bias, even in its raw form.

Essentially, bias can creep into an AI system in several ways:

 

  • Data collection: This is one of the most common places for AI bias to stem from. Data collected by humans can contain errors, outliers, or even skew (consciously or unconsciously) toward various biases. Whom the data is collected from, how the data is collected, and even how questions are asked to create the data can push the information closer to a specific profile or result.
  • Preprocessing: Before training, data is generally scrubbed to remove problematic information, such as outliers and missing values. Data is often filtered as well - with too much filtering, the information no longer represents the target, and depending on the filtering criteria, selection bias can be added to the data set.
  • Analysis: While not bias in the AI results in itself, analysis of the AI output can introduce a skewed view. This is where report and graph structure and confirmation bias can slip into how AI information is used.
  • Garbage-in-garbage-out: The reality is that the data used to train the AI may already be flawed. For instance, the famous Amazon AI recruiting example - in which the data used to prepare the algorithm was from resumes from predominantly male candidates - caused the AI to rule out those resumes that didn’t look like the “good” ones fed to the system, creating unintentional gender bias thanks to historical issues.
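
As a concrete illustration of the data-collection point, a quick check of how groups are represented in a training set can surface skew before a model ever sees it. This is a minimal sketch using hypothetical resume records with an assumed `gender` field, loosely modeled on the Amazon example above:

```python
from collections import Counter

def representation_skew(records, field):
    """Share of records per value of `field` - a quick check for
    whether one group dominates the training data."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical resume data set: 80 male, 20 female candidates.
resumes = [{"gender": "M"}] * 80 + [{"gender": "F"}] * 20
print(representation_skew(resumes, "gender"))  # {'M': 0.8, 'F': 0.2}
```

A lopsided result like this doesn’t prove the model will be biased, but it flags exactly the kind of historical skew that caused the Amazon system to prefer resumes resembling the dominant group.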

 

When issues like these creep into your AI, your business can be impacted in many ways. In DataRobot’s State of AI Bias report, companies surveyed identified direct negative impacts as a result of AI bias, including:

  • 62% reported lost revenue
  • 61% reported lost customers
  • 43% reported lost employees
  • 35% reported legal fees as the result of a lawsuit or legal action
  • 6% reported damage to their brand reputation or media backlash

 

Spotting Bias in Your Organization’s AI

Most advice around AI bias in business applications centers on preventing it in the first place. At the same time, article after article notes that companies - even some of the world’s largest organizations - are running into issues with the AI systems they have in place.

If you already have artificial intelligence applications in place, you need ways to validate that bias is not impacting the recommendations, predictions, and results those applications are producing.

The first step in recognizing bias in your AI is understanding where bias can come from - several common entry points are outlined above. Examine your data and outputs for issues at each of them.

It's also vital to continually validate your AI's decisions. This may require specialized skills but is crucial as an ongoing effort to prevent skewed results that can harm your business reputation and revenue.
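
One simple, ongoing validation is to compare outcome rates across groups in the AI's decisions - a demographic-parity-style check. The sketch below uses hypothetical (group, approved) decision pairs; the group labels and threshold you would act on are assumptions for illustration:

```python
def approval_rate_gap(decisions):
    """decisions: list of (group, approved) pairs.
    Returns the largest difference in approval rate between any two
    groups, plus the per-group rates - a demographic-parity-style check."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    rates = {g: approved[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions: group A approved 45/50, group B approved 30/50.
decisions = ([("A", True)] * 45 + [("A", False)] * 5 +
             [("B", True)] * 30 + [("B", False)] * 20)
gap, rates = approval_rate_gap(decisions)
print(rates)          # {'A': 0.9, 'B': 0.6}
print(round(gap, 2))  # 0.3
```

Running a check like this on a schedule, and alerting when the gap crosses a threshold you've agreed on, turns "continually validate" into a concrete, repeatable process.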

Similarly, and with the potential causes in mind, validate and revalidate the training data used. Was there bias in your operations before AI that your system may be learning from now? Are you adding information elements to the data that aren't directly relevant to the analysis (names, zip codes, etc.) and could be introducing bias?
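
Stripping those not-directly-relevant elements before training is straightforward. This is a minimal sketch; the field names here are assumptions for illustration, and in practice your team would decide which fields act as proxies for protected attributes:

```python
# Fields like names and zip codes are rarely relevant to the prediction
# itself but can act as proxies for protected attributes.
SENSITIVE_FIELDS = {"name", "zip_code"}  # assumed field names

def strip_sensitive(record):
    """Return a copy of the record without sensitive/proxy fields."""
    return {k: v for k, v in record.items() if k not in SENSITIVE_FIELDS}

applicant = {"name": "J. Doe", "zip_code": "30303",
             "years_experience": 7, "certifications": 3}
print(strip_sensitive(applicant))
# {'years_experience': 7, 'certifications': 3}
```

Note that removing the obvious fields is only a first pass - other fields can still encode the same information indirectly, which is why the revalidation above remains necessary.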

The challenge, of course, is in defining bias versus fairness - a primarily ethical question outside of the scope of this piece. However, it is an essential element in developing your AI and training data and evaluating the AI's performance over time. It can be easy to label something as biased when in reality, it's a fair representation of the data. That's why it's so important to review and tweak your AI's training to truly represent the information you are looking to have it act on.

Conclusion

The key to recognizing bias in your business AI applications is to first identify where bias comes from. Once the sources are defined, existing data sets, collection methods, and training information can be vetted for bias and then regularly tested and vetted to ensure bias is kept at bay.

If you’re looking for help with your data and AI, especially in solutions like Salesforce, contact us at Six Consulting. We’d be happy to help you assess and analyze the data and process being used to better tune and optimize your organization’s AI systems.

