BE-AI is a body of work exploring the intersections of artificial intelligence (AI) and the complexities surrounding the FDA's blood donation policies, revised in 2023. Using tools such as ChatGPT and DALL-E, the series examines how AI mirrors societal biases and amplifies the contradictions within policy frameworks.
The work begins by interrogating AI with questions about blood donation policies. The AI's initial responses emphasized that such policies should prioritize science and safety over prejudice. As the questions grew more nuanced and focused on specific aspects of the revised policies, however, the AI's responses began to align with the language and rhetoric of the policies themselves, revealing its capacity to absorb and replicate systemic biases.
Simultaneously, the series incorporates imagery generated with DALL-E, an early version of the tool known for its difficulty rendering realistic faces and limbs. Prompted to visualize concepts such as "gay men," "LGBTQI+ men," "homosexual," and "non-binary individuals donating blood to save lives," the AI-generated images often revealed both technical flaws and embedded stereotypes. Most outputs depicted white men with exaggerated, insincere expressions, while many rendered subjects with distortions that evoke alienation or monstrosity. These distorted representations echo the "othering" that LGBTQI+ individuals frequently experience in the face of stigma and discrimination.