Microsoft Plans to Eliminate Face Analysis Tools in Push for
‘Responsible A.I.’
The technology giant will stop offering automated tools that
predict a person’s gender, age and emotional state and will
restrict the use of its facial recognition tool.
By Kashmir Hill
June 21, 2022
For years, activists and academics have been raising concerns that
facial analysis software that claims to be able to identify a
person’s age, gender and emotional state can be biased, unreliable
or invasive — and shouldn’t be sold.
Acknowledging some of those criticisms, Microsoft said on Tuesday
that it planned to remove those features from its artificial
intelligence service for detecting, analyzing and recognizing faces.
The features will stop being available to new users this week and will be phased out for existing users within the year.
The changes are part of a push by Microsoft for tighter controls of
its artificial intelligence products. After a two-year review, a
team at Microsoft has developed a “Responsible AI Standard,” a
27-page document that sets out requirements for A.I. systems to
ensure they are not going to have a harmful impact on society.
[...]
Continues here:
https://www.nytimes.com/2022/06/21/technology/microsoft-facial-recognition.html?