Last week Microsoft Corp. said it would stop selling software that guesses a person’s mood by looking at their face.
The reason: the technology could be discriminatory. Computer vision software, which underpins self-driving cars and facial recognition, has long suffered from errors that disproportionately affect women and people of color. Microsoft's decision to halt the system entirely is one way of dealing with the problem.
But there’s another, novel approach that tech firms are exploring: training AI on “synthetic” images to make it less biased.