AI Bias and Lensa

I shared my experience with ChatGPT the other day. Generative AI is here and we’re going to feel the ripple effects for years to come. I ended that post with a note about unintended consequences.

One of the challenges with these AI models is that their output reflects the data they are trained on. That can result in all sorts of algorithmic bias. A striking example came in a post by Melissa Heikkilä in MIT Technology Review titled "The viral AI avatar app Lensa undressed me—without my consent." TL;DR: as an Asian woman, she received hypersexualized avatars while her male colleagues got to be astronauts and inventors.

Examples like this give a sense of just how much work lies ahead: debiasing training data while still improving the quality of generated output, neither of which is easy.

But, perhaps more importantly, it points to the second- and third-order consequences of adopting transformational technology. There's a lot of good that will be unlocked by these advances.

We just have to be mindful and thoughtful about the unintended consequences.
