AI ethics on trial: Scarlett Johansson’s case and the gender bias in technology

Imagine a world where your voice, your image, and your identity can be used without your consent.

For years, we have willingly surrendered our identities and personal data to technology companies that operate without clear ethical guidelines or public accountability. As technology advances, these companies assert ever greater control over our likenesses, our data, and even our sense of agency.

The alleged unauthorised use of Scarlett Johansson’s voice by OpenAI’s ChatGPT isn’t just a legal mishap—it’s a stark reminder of how AI can exploit our identities, challenging the ethical foundation of our technological advancements. Johansson’s legal team highlighted the violation of her personal rights, underscoring the necessity for explicit permission before using an individual’s likeness or voice. This incident points to a broader ethical imperative for transparency and respect for personal autonomy in AI development.

Authors, artists, and other creatives are increasingly clashing with AI companies that often operate under a Silicon Valley ethos of “move fast and break things”, prioritising innovation over consent and ethical considerations. Johansson’s legal action against OpenAI underscores the urgent need for robust legal frameworks to protect individuals from such violations. The unauthorised use of her voice is not only a breach of privacy but also an infringement on her autonomy. This case should prompt policymakers to develop comprehensive regulations to ensure AI technologies are used responsibly and ethically.

The EU AI Act, which regulates high-risk AI applications and general-purpose AI models such as those underlying ChatGPT, exemplifies the kind of stringent regulatory framework needed to address these ethical concerns. The Act mandates transparency, accountability, and ethical usage of AI, setting a global precedent for AI governance.

If tech companies can manipulate a celebrity’s likeness and claim it’s synthetic, what prevents them from doing the same to anyone else? This could impact a small business owner whose promotional videos are used without permission, a parent whose family photos are repurposed for advertisements, a musician whose original compositions are exploited—ultimately, anyone who values their privacy and identity is at risk.

Incidents like these reveal a potentially exploitative relationship between tech companies and creatives. Artists are justifiably concerned that AI companies are selling tools that produce knock-offs of their work without remuneration or credit. David Holz, the founder of the AI image-generation company Midjourney, has admitted to training its tools on the work of living artists still under copyright, without their consent. Such cases highlight how AI companies often exploit creative works without proper acknowledgment or compensation.

Companies must be held accountable, and clear guidelines should be established to prevent the misuse of AI.

Addressing gender biases in AI: A call for diversity

Beyond ethical violations, this controversy sheds light on a deeper issue: the pervasive gender biases in AI technologies. Research by UNESCO reveals that virtual assistants like Siri and Alexa, which predominantly use female voices, reinforce stereotypes of women as subservient and compliant. This is not just a design choice but a reflection of predominantly male development teams, whose unconscious biases shape these technologies. Research from the World Economic Forum has shown that women make up only 22% of AI professionals worldwide, a disparity that contributes to the perpetuation of these biases.

The gender imbalance in the tech industry significantly contributes to these biases. Women constitute a small fraction of AI researchers and developers, leading to technologies that often fail to consider female perspectives adequately. For example, in 2021, the Stanford AI Index reported that women represented only 16% of tenure-track faculty focused on AI globally, reflecting a lack of diversity that perpetuates existing biases and results in products that do not equitably serve all users.

Further examples illustrate the broader impact of gender bias in AI technologies. A review published in PLOS Digital Health highlights that AI-based prediction algorithms often perpetuate biases, leading to disparities in healthcare provision. Algorithms trained on datasets that under-represent diverse populations can produce less accurate predictions for women and minority groups, which is why gender-sensitive approaches are needed when developing healthcare algorithms.

Similarly, an AI recruiting tool used by a global tech company was found to favour male candidates over female ones. The tool, trained on resumes submitted over a decade, favoured resumes containing words more frequently associated with male applicants. This incident underscores the importance of diverse training datasets and bias mitigation techniques in AI systems used for recruitment.

Research by the MIT Media Lab found that facial recognition systems have higher error rates for women, particularly women of colour, than for men. These biases can lead to discriminatory practices in security and law enforcement applications, necessitating stricter regulations and development practices to ensure fairness.

Conclusion: Reforming AI for an ethical future

The rapid advancement of AI technology is inevitable, but the direction it takes is not predetermined. To ensure a future where technology serves all of humanity ethically and equitably, we must focus on two critical areas: legal frameworks and education.

Robust Legal Frameworks

Developing comprehensive legal frameworks is crucial to prevent ethical breaches. These frameworks must mandate explicit consent for the use of personal likenesses and enforce accountability. Tech companies should also commit to greater diversity in their development teams to create AI systems that are fair and inclusive. The EU AI Act provides a promising model for such frameworks, aiming to protect fundamental rights and ensure the ethical use of AI across Europe and beyond.

Ethical Education for Future Leaders

We must invest in educating the next generation to be ethically minded leaders. Initiatives like Teens in AI play a vital role in inspiring young people, particularly women and minorities, and in promoting diversity in the AI field. By fostering an inclusive and ethical environment, we can ensure that future AI development respects and serves everyone equitably.

The time for action is now. By implementing these reforms, we can guide AI development towards a future that honours personal rights, promotes diversity, and upholds ethical standards. Only through concerted efforts in legislation and education can we create a technological landscape that truly benefits all of society. Encouraging a diverse range of voices in AI development is not just about fairness; it is about building technologies that respect and serve everyone. The Scarlett Johansson matter is a critical reminder that ethical considerations must be at the forefront of AI development.


References

UNESCO Report on AI and Gender Bias

World Economic Forum on Gender Diversity in AI

Element AI Report on Gender Diversity in Machine Learning

Telegraph Article on OpenAI and Scarlett Johansson

Sky News Article on OpenAI and Scarlett Johansson

PLOS Digital Health Review on AI Bias in Healthcare

Reuters on AI in Hiring Practices and Gender Bias

MIT Media Lab on Facial Recognition and Gender Bias

Elena Sinel
