Arduini, S., Beck, T., Mattei, T. (2025). Gender Bias: How Artificial Intelligence Can Reinforce or Reduce Stereotypes. In Knowledge Futures: AI, Technology, and the New Business Paradigm.
Gender Bias: How Artificial Intelligence Can Reinforce or Reduce Stereotypes
Simona Arduini; Tommaso Beck; Tommaso Mattei
2025-01-01
Abstract
This paper examines how implicit cultural biases, gender inequalities, and artificial intelligence (AI) interact to shape contemporary workplace dynamics. Cultural biases—often unconscious and deeply ingrained—influence key processes such as personnel selection, task assignment, and promotion decisions, favoring candidates who mirror dominant norms and marginalizing alternative perspectives. These mechanisms erode the psychological safety of members in multicultural teams and hinder innovation, underscoring the need for bias-aware training and recruitment technologies. On the gender front, wage gaps persist, women remain underrepresented in leadership roles, and those perceived as overly assertive face harsher evaluations. Obstacles such as limited access to high-level mentors, stereotypes about women’s leadership capabilities, and greater domestic care burdens further weaken women’s position in the workplace. The paper also explores the intersectional dimension of discrimination, in which race, age, or disability biases compound gender bias to create multilayered barriers for the most vulnerable groups. The research then addresses AI’s ambivalent role: on one hand, selection algorithms trained on historical data may perpetuate existing prejudices, disadvantaging ethnic minorities, gender-diverse applicants, and other marginalized groups; on the other hand, when ethically designed, built on diverse datasets, and subject to transparent governance and rigorous auditing, AI can deliver more objective and consistent evaluations that reduce the impact of human bias. Exemplary tools include algorithms that detect discriminatory language in job postings and platforms that measure equity in decision-making processes. Following a critical literature review, the study adopts an integrated multiple-case approach combined with content analysis to examine real-world examples and concrete data.
This methodological combination ensures both deep contextual interpretation and systematic rigor. Human oversight, ethical design principles, and continuous monitoring are therefore essential to translate AI’s transformative potential into truly inclusive and equitable organizational practices.


