Microsoft has announced that it is retiring the emotion and facial recognition features of Azure Face as part of an update to its Responsible AI Standard.
Microsoft’s internal guidelines for creating AI systems are outlined in the Responsible AI Standard. The company intends for its AI to benefit society and never be misused by bad actors. Until now, the standard had never been disclosed to the general public, but Microsoft decided this change warranted making it available.
Software for emotional and facial recognition has, to put it mildly, generated controversy, and numerous organizations support outlawing the technology. For instance, in an open letter published back in May, Fight for the Future called Zoom’s development of emotion-tracking software “invasive” and “a violation of privacy and human rights.”
Change in policy
Microsoft will update its Azure Face service to comply with its new Responsible AI Standard. First, the company is removing public access to the AI’s emotion detection. Azure Face will also lose the ability to identify a person’s facial characteristics, such as “gender, age, [a] smile, facial hair, hair, and makeup.”
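For illustration only (this is not Microsoft’s code), attributes like these were requested from the Face API’s detect endpoint through a `returnFaceAttributes` query parameter. The sketch below simply builds such a request without sending it; the endpoint, key, and attribute names are placeholders based on the publicly documented API shape, and access to these attributes now depends on Microsoft’s approval process:

```python
# Hypothetical sketch: construct (but do not send) a request to the
# Azure Face "detect" endpoint asking for the attributes Microsoft is
# retiring from public access. Endpoint and key are placeholders.
from urllib.parse import urlencode

RETIRING_ATTRIBUTES = ["emotion", "gender", "age", "smile",
                       "facialHair", "hair", "makeup"]

def build_detect_request(endpoint: str, attributes: list) -> tuple:
    """Return the URL and headers a client would use for a detect call."""
    query = urlencode({
        "returnFaceAttributes": ",".join(attributes),
    })
    url = f"{endpoint}/face/v1.0/detect?{query}"
    headers = {
        "Ocp-Apim-Subscription-Key": "<YOUR_KEY>",  # placeholder, not a real key
        "Content-Type": "application/json",
    }
    return url, headers

url, headers = build_detect_request(
    "https://example.cognitiveservices.azure.com", RETIRING_ATTRIBUTES)
print(url)
```

Under the updated policy, a public caller requesting these attributes would be rejected by the service regardless of how the request is formed.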
The reason for the retirement is that there is still no “clear consensus on the definition of ‘emotions’” among scientists worldwide. Microsoft’s Chief Responsible AI Officer, Natasha Crampton, stated that experts both inside and outside the company have raised concerns, citing “the difficulties in generalizing across use cases, regions, and demographics, as well as the elevated privacy concerns.”
Similar limitations will also apply to Microsoft’s Custom Neural Voice, a strikingly realistic text-to-speech service. It will now be available only to a small group of “managed customers and partners,” meaning those who work closely with Microsoft’s account teams. According to the company, the technology has a lot of potential but could also be used for impersonation. To continue using the service, all current Neural Voice users must fill out an intake form and receive Microsoft’s approval; customers who are not approved by June 30, 2023, will lose access.
Still in the works
That said, Microsoft isn’t completely giving up on its facial recognition technology; the announcement covers only public access. Sarah Bird, Principal Group Project Manager at Azure AI, wrote a post on ethical facial recognition in which she notes that “Microsoft recognizes these capabilities can be valuable when used for a set of controlled accessibility scenarios.” A representative cited Seeing AI, an iOS app that helps blind users identify people and objects around them, as one such example.