June 20, 2023.
Canva recently released an update to its Text to Image app, which allows users to generate images from text prompts. The graphic design company also announced that it had invested heavily in safety precautions to prevent the creation of unsafe content. However, there was one issue Canva didn’t anticipate: racial bias in its safety filters.
Adriele Parker, a DEI thought partner, experienced this firsthand when she attempted to generate images of Black women wearing popular hairstyles. After typing “Black woman with Bantu knots” into the prompt field, she was met with an error message warning that “bantu may result in unsafe or offensive content.” She took to LinkedIn to express her dismay with Canva’s algorithm.
“Tell me your AI team doesn’t have any Black women without telling me your AI team doesn’t have any Black women,” Parker wrote. She also urged Canva to be the change and, if needed, hire a DEI consultant.
The post quickly gained attention, and Canva responded with an apology. However, Parker felt that the apology was too “canned” and failed to address the larger issue. She urged Canva to craft a more thoughtful and intentional response.
A Trust & Safety Product Lead at Canva then chimed in, explaining that while this particular issue had been fixed, safety triggers like this one are important for preventing offensive content from slipping through the cracks. The incident has sparked a much-needed discussion about the need for AI tools to handle race and ethnicity more thoughtfully.