Microsoft's AI text-to-image generator Copilot Designer appears to be heavily filtering outputs after a Microsoft engineer, Shane Jones, alleged that the company ignored his warnings that the tool randomly creates violent and sexual imagery, CNBC reported.
Jones told CNBC that he repeatedly flagged the alarming content he was seeing while volunteering in red-teaming efforts to test the tool's vulnerabilities. In response, Jones said, Microsoft failed to take the tool down, implement safeguards, or even post disclosures that would change the product's rating to mature in the Android store.
Instead, Microsoft apparently did nothing but refer him to OpenAI, the maker of the DALL-E model that powers Copilot Designer's outputs, and tell him to report the issue there.