We’re loving Julius for both quantitative and qualitative data analysis, but we have a concern about the biases inherent in AI systems and how those might show up in the results it generates. There are well-documented biases in the algorithms underlying AI (see links below). This may be an even bigger issue for qualitative analyses, where the system is trying to make meaning of propositional statements from interviews or open-response survey items.
Has anyone else been thinking about this issue?
Has anyone noticed biases in AI-generated data analyses?
What strategies can we employ to reduce potential biases in data analyses with AI?
I love, love, love this post. Thank you for sharing! I had not really thought much about the issue until you posted it here. I personally have not noticed any biases with AI in my own use, but I also have not been actively looking for them. I will keep this in mind now and update you if I do notice anything. Have you noticed any?
To answer your question, “What strategies can we employ to reduce potential biases in data analyses with AI?”, I feel like the answer stems from the people who interact with it. For example, the article Bias in AI: What it is, Types, Examples & 6 Ways to Fix it in 2024 makes a great point: “…we don’t expect AI to be completely unbiased any time soon… after all, humans are creating the biased data while humans and human-made algorithms are checking the data to identify and remove biases…”. This really highlights the root of the issue: for AI to truly become unbiased, we need the people who interact with it to remain unbiased themselves. That said, I believe we can also implement ways to address the issue in the algorithms themselves.
For example, I found another article on AI and bias (What Do We Do About the Biases in AI?) that brings up many of the same points as the ones above. However, it also highlights a way to address these biases: counterfactual fairness. What Do We Do About the Biases in AI? defines it as “…a concept in the field of machine learning that addresses the need for fair treatment by models. This is based on the idea that a decision or prediction made by the model should remain unchanged even if the attributes change…”. So by adding in this extra “layer” of monitoring, we can hopefully address bias issues before they become bigger problems.
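To make the idea concrete, here is a minimal sketch of a counterfactual fairness probe in Python. This is not how Julius or either of the cited articles implements fairness checks; the toy dataset, the column names (gender, experience, hired), and the scikit-learn model are all illustrative assumptions. The idea is simply to flip the protected attribute and see whether the model’s predictions change for otherwise identical records.

```python
# Hypothetical counterfactual fairness probe (toy data, illustrative only).
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Toy data: "gender" is the protected attribute we will flip.
df = pd.DataFrame({
    "gender":     [0, 1, 0, 1, 0, 1, 0, 1],
    "experience": [1, 1, 3, 3, 5, 5, 7, 7],
    "hired":      [0, 0, 0, 1, 1, 1, 1, 1],
})

X, y = df[["gender", "experience"]], df["hired"]
model = LogisticRegression().fit(X, y)

# Counterfactual check: flip the protected attribute and compare
# predictions against the originals for otherwise identical rows.
X_flipped = X.copy()
X_flipped["gender"] = 1 - X_flipped["gender"]

changed = (model.predict(X) != model.predict(X_flipped)).mean()
print(f"Share of predictions that change when gender is flipped: {changed:.0%}")
# A large share suggests the model's decisions depend on the protected
# attribute, i.e., it fails this simple counterfactual fairness probe.
```

In practice you would run this kind of check on your real model and data, and a nonzero share of flipped predictions would be a flag to investigate further rather than a definitive verdict, but it shows how that extra monitoring “layer” could work.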