Artificial intelligence is increasingly involved in consequential business processes, such as credit scoring and résumé screening to identify ideal candidates. As a result, AI and its results have understandably come under the microscope. The main question that worries executives: is the AI algorithm biased?
Bias can creep in through multiple channels, including sampling practices that ignore large segments of the population and confirmation bias, where a data scientist includes only those data sets that fit their worldview.
Here are some ways data scientists are tackling the problem.
1. Understand the potential for AI bias
Supervised learning, a subset of AI, works by ingesting data. By learning under "supervision," a trained algorithm makes decisions about data sets it has never seen before. According to the "garbage in, garbage out" principle, the quality of an AI decision can only be as good as the data the system captures.
Data scientists must evaluate their data to ensure it is an unbiased representation of the real-world population it stands in for. Diverse data teams also help counter confirmation bias.
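As a minimal sketch of what such an evaluation might look like (the function and group names here are hypothetical, not from any particular tool), one simple check compares each group's share of a training sample against its known share of the wider population:

```python
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """Compare each group's share of the sample to its known
    population share; large gaps flag potential sampling bias."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total
        gaps[group] = round(sample_share - pop_share, 3)
    return gaps

# Hypothetical sample: group B is underrepresented vs. a 50/50 population.
sample = ["A"] * 80 + ["B"] * 20
print(representation_gap(sample, {"A": 0.5, "B": 0.5}))
# {'A': 0.3, 'B': -0.3}
```

A gap near zero for every group suggests the sample mirrors the population on that attribute; a large negative gap, as for group B above, is exactly the kind of underrepresentation the article warns about.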
2. Increase transparency
AI continues to be challenged by the impenetrability of its processes. Deep learning algorithms, for example, use neural networks modeled after the human brain to make decisions. But exactly how they get there remains unclear.
“Part of the move to ‘explainable AI’ is to shed light on how the data is trained and which algorithms are used,” said Jonathon Wright, chief technology evangelist at Keysight Technologies, a testing technology supplier.
While making AI explainable will not completely prevent bias, understanding the root cause of bias is a critical step. Transparency is especially important when enterprises use AI programs from third-party vendors.
3. Institute standards
When deploying AI, organizations must follow a framework that standardizes production while ensuring ethical models, Wright said.
Wright pointed to the European Union’s Artificial Intelligence Act as a game-changer in the effort to rid the technology of bias.
4. Test models before and after implementation
Testing AI and machine learning models is one way to avoid bias before releasing the algorithms into the wild.
Software companies built specifically for this purpose are becoming more common. “It’s where the industry is going now,” Wright said.
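As a hedged illustration of what a pre-release bias test might check (the function names are illustrative, not from any vendor's product), the widely used "four-fifths rule" compares a model's positive-decision rate for each group against that of the most favored group:

```python
def selection_rates(outcomes):
    """outcomes: dict mapping group name -> list of 0/1 model decisions."""
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def disparate_impact_ratio(outcomes, privileged):
    """Ratio of each group's positive-decision rate to the privileged
    group's rate; values below 0.8 are a common red flag (four-fifths rule)."""
    rates = selection_rates(outcomes)
    base = rates[privileged]
    return {group: rate / base for group, rate in rates.items()}

# Hypothetical hiring-model decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1],  # 80% positive rate
    "group_b": [1, 0, 0, 0, 1],  # 40% positive rate
}
ratios = disparate_impact_ratio(decisions, privileged="group_a")
print(ratios)  # group_b's ratio of 0.5 falls below the 0.8 threshold
```

Running such a check both before release and periodically after deployment, as the section's heading suggests, helps catch bias that only emerges once the model sees live data.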
5. Use synthetic data
You want data sets that are representative of the larger population, but “just because you have real-world data doesn’t mean it’s unbiased,” Wright noted.
Indeed, there is a real risk that AI will learn the biases embedded in real-world data. Synthetic data could be one solution, said Harry Keen, CEO and co-founder of Hazy, a startup that creates synthetic data for financial institutions.
Synthetic datasets are statistically representative versions of real datasets and are often used when the original data is bound by privacy considerations.
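To make the idea concrete, here is a deliberately toy sketch (real generators such as Hazy's model joint distributions and privacy guarantees; this version only resamples each column independently, which preserves per-column statistics but discards correlations between columns):

```python
import random

def synthesize(rows, n, seed=0):
    """Draw n synthetic rows whose per-column (marginal) distributions
    mirror the real data by resampling each column independently.
    Caveat: this toy approach ignores cross-column correlations."""
    rng = random.Random(seed)
    columns = list(zip(*rows))  # transpose to column-major order
    synth_cols = [[rng.choice(col) for _ in range(n)] for col in columns]
    return [tuple(row) for row in zip(*synth_cols)]

# Hypothetical (age, credit decision) records.
real = [(25, "approved"), (40, "denied"), (31, "approved"), (52, "denied")]
fake = synthesize(real, n=100)
approval_rate = sum(1 for _, status in fake if status == "approved") / len(fake)
print(approval_rate)  # close to the real data's 0.5, but no row is a real record
```

Because no synthetic row is a copy of a real one, data like this can be shared under privacy constraints; but as Keen notes below, statistical fidelity alone does not guarantee freedom from bias.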
Keen emphasized that using synthetic data to address bias is “an open research topic” and that rebalancing data sets (for example, introducing more women into résumé-vetting models) could introduce a different kind of bias.
Synthetic data sees the most traction with lower-dimensional structured data, Keen said. For more complex data, such as images, “it can be a bit of a game of Whack-a-Mole, where you might dissolve one bias but introduce or amplify others. … Bias in data is a bit of a thorny problem.”
Still, it’s a problem that needs to be addressed, as the technology is growing at an impressive 39.4% annual rate, according to a Zion Market Research study.
About the author
Poornima Apte is a trained engineer turned writer specializing in robotics, AI, IoT, 5G, cybersecurity, and more. Poornima, winner of a reporting award from the South Asian Journalists’ Association, loves learning and writing about new technologies and the people behind them. Her client list includes numerous B2B and B2C outlets commissioning features, profiles, white papers, case studies, infographics, video scripts, and industry reports. Poornima is also a card-carrying member of the Cloud Appreciation Society.