The proliferation of artificial intelligence applications for business has created an interesting challenge. AI and machine learning are available to nearly every organization, from Fortune 50 companies to the mom-and-pop corner business. Not every issue with AI usage has been ironed out, however.
Specifically, bias can still easily make its way into business applications in many forms and from many different sources. Even with an army of data scientists developing training models for an AI business application, bias can find its way into a system. Many companies that have suddenly found themselves with access to AI don't have the resources that large organizations do, and even enterprises are finding themselves on the wrong side of AI bias.
Big or small, companies are responsible for mitigating bias from their AI applications. There is still room for improvement, but there are strategies that every business can use to avoid excessive bias and reduce any that may make it into their applications.
When it comes to avoiding bias in business AI solutions, the problem lies with both the AI and the humans behind it. It's reasonably clear that rooting out bias, and even separating bias from legitimate fact, is something people still struggle with. At our core, humans are pattern-matching machines. So are AI algorithms. They do what we do, but faster and at scale.
Humans are also the ones creating the algorithms and defining the training models. It's easy for bias to slip in through several avenues and go undetected as it is applied thousands or millions of times over. This means that complete avoidance is nearly impossible at this nascent stage of AI adoption. It's a problem that's being worked on, but not one we've yet solved.
In the meantime, the best we can do is mitigate bias through knowledge and vigilance. The first part requires understanding some of the many ways bias slips in. There are systematic ways this can happen, and those should be monitored. But it also takes acknowledging a few of the philosophical and planning decisions that can influence an AI system.
One of the suggested remedies for eliminating bias in a training model is to remove the apparent elements that might create it. For instance, if you are concerned about gender, race, or age bias, you might remove those data points from the training data. However, while removing those explicit indicators may lessen the likelihood of bias, other implicit indicators may still exist. Amazon's experimental hiring algorithm is a common example: gender was removed from the data set, but certain words appeared more often on men's resumes, and gender bias still ended up inserting itself into the solution.
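To illustrate how implicit indicators can survive the removal of an explicit one, here is a minimal, hypothetical sketch in Python. The data is entirely synthetic, and the `captained` field is an invented stand-in for a word choice that happens to track gender; the check simply measures how well each remaining feature predicts the dropped attribute, which flags likely proxies:

```python
from collections import Counter, defaultdict

def proxy_match_rate(rows, feature, protected):
    """How often the most common protected value within each feature
    value recovers the (removed) protected attribute. Values well
    above chance suggest the feature is acting as a proxy."""
    groups = defaultdict(Counter)
    for r in rows:
        groups[r[feature]][r[protected]] += 1
    correct = sum(c.most_common(1)[0][1] for c in groups.values())
    return correct / len(rows)

# Synthetic resumes: "captained" (a word choice) tracks gender closely,
# even though the gender column itself could be dropped from training.
rows = [
    {"gender": "m", "captained": 1, "years_exp": 3},
    {"gender": "m", "captained": 1, "years_exp": 7},
    {"gender": "m", "captained": 0, "years_exp": 5},
    {"gender": "f", "captained": 0, "years_exp": 3},
    {"gender": "f", "captained": 0, "years_exp": 7},
    {"gender": "f", "captained": 0, "years_exp": 5},
]

print(proxy_match_rate(rows, "captained", "gender"))   # well above chance: likely proxy
print(proxy_match_rate(rows, "years_exp", "gender"))   # at chance: not a proxy
```

A screen like this doesn't prove bias, but a feature that reliably reconstructs a removed attribute deserves the same scrutiny the attribute itself would get.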
Limiting bias means moving toward a fair system. But the very concept of "fair" has been debated throughout Western civilization. We may look at the results of an AI prediction and say, "It must be biased because the results don't seem fair," when, in truth, the data has been properly scrubbed and used to train the algorithm. As many of us were taught in grade school, "fair" and "same" are not equivalent, and that lack of an agreed-upon definition can be problematic when working to remove bias from an AI system.
Context matters. That seems like an obvious statement, and in most situations, it is.
However, context can get lost if other elements are prioritized. For example, software engineering principles teach us to reuse algorithms and code as much as is feasible. Entire programming paradigms - like object-oriented languages - are built on this premise.
However, the situation in which one AI is used may not translate to another. It may seem like a good idea to reuse code developed for evaluating large B2B sales opportunities on smaller, B2C-type transactions - after all, both are purchases, right? Logically, however, we know that these are very different transactions and require different analyses to determine the probability of closing.
If some of the core elements of developing an unbiased AI are unclear and open for debate, how can a business at least minimize the bias that might creep into an application? There are several methods that, for now, should be used to monitor these applications for potential bias.
Continuous feedback (human in the loop) - Computers are outstanding at speed and scale but can't solve every problem. Humans, with their capacity for heuristic reasoning, can creatively spot issues and develop solutions. When a human is part of the process, the AI can learn from the human's corrections, creating a constant cycle of refinement and learning.
“Blind taste test” - Just as a blind taste test can determine which brand of food or drink a consumer prefers, a similar concept can be applied to AI. If you’re concerned that a specific data point is introducing bias, train your AI with the data point included, then re-train it with the point excluded, and compare the outcomes of the two models.
Broadly review the data - Reviews of data and its capture and filtering methodology don't need to be done by one person or group. It might be easier for someone outside the data science or development group to spot data or predictions that raise concerns. The more transparent you are with the data, the better your chance of catching bias within its representation.
Vet the training data - Similarly, the data used to train the AI can be reviewed by multiple groups. The stakeholders most familiar with the data may spot issues in the training models that the original developers missed.
Avoiding bias altogether in AI applications is something that even the largest companies and the most knowledgeable AI experts have yet to figure out. However, that doesn't mean businesses can ignore the issue. Reviewing data and applying it with the minimization of bias in mind is both the ethical and the responsible thing to do.
Data, and its use, can be a complex topic, but it’s one that organizations of all sizes can get help with. Six Consulting has worked with many organizations to prepare and vet their data for use in platforms like Salesforce, where AI applications are readily available to businesses, large and small. Contact us to learn more about how we can help you better leverage your data for business success.