Microsoft’s artificial intelligence capabilities have come under scrutiny following allegations that the company’s AI image generator produces offensive and inappropriate content.
According to a CNN report, a Microsoft employee has filed a complaint with the US Federal Trade Commission, warning that the company’s AI image-generation tool, Copilot Designer, has “systemic issues” that allow it to routinely create sexualized and demeaning images of women.
The whistleblower, software engineering lead Shane Jones, accused Microsoft of falsely promoting the product as safe for young users despite its propensity for generating such harmful imagery. This revelation comes as the tech industry grapples with the challenges of responsibly developing and deploying AI image-generation tools.
Earlier, Google faced similar backlash over racially inaccurate images produced by its Gemini AI program, prompting the company to pause the tool’s ability to generate images of people while it worked on improvements. As artificial intelligence increasingly permeates consumer products, ensuring these systems uphold ethical standards remains a critical concern.