‘Nerd,’ ‘Nonsmoker,’ ‘Wrongdoer’: How Might A.I. Label You?

Sep 20, 2019 · 3 comments
Katie (Encinitas, CA)
Ironically, I uploaded my image after reading this article and it said I looked like a "newsreader." To Portland Mike's point, even though the algorithm may have been set up to produce more provocative results, it still highlights the digital underbelly: inherent bias baked into A.I. We cannot eliminate it. Awareness is key.
Pamela G. (Seattle, Wa.)
I tried it. It says I'm myopic. I am. I'm thinking the glasses gave it away. I was hoping for something deeper...
mike (portland)
While it's worth highlighting inherent biases in historical data, it should be pointed out that in this particular project they used an object-recognition dataset and algorithm to identify types of people, and disabled the confidence output to intentionally get provocative results. Nobody would ever use this dataset in this manner to get any meaningful data; it's pretty disingenuous and kind of muddies their point.
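To make the mechanism mike describes concrete, here is a minimal sketch of what removing a confidence check does to a classifier's output. The labels, scores, and threshold below are invented for illustration and are not drawn from the project's actual code or data.

```python
# Sketch only: an image classifier normally reports a confidence score with
# each label, and downstream code discards low-confidence guesses. The labels
# and scores here are hypothetical.

from typing import List, Tuple

def filter_predictions(
    predictions: List[Tuple[str, float]],
    min_confidence: float = 0.5,
) -> List[str]:
    """Keep only labels the model is reasonably sure about."""
    return [label for label, score in predictions if score >= min_confidence]

# Hypothetical raw output for one portrait photo.
raw_predictions = [
    ("newsreader", 0.62),
    ("nonsmoker", 0.08),
    ("wrongdoer", 0.03),
]

# With the usual threshold, only the plausible label survives.
print(filter_predictions(raw_predictions))
# ['newsreader']

# With the confidence check effectively disabled (threshold of 0),
# every weak, provocative guess is surfaced as if it were a finding.
print(filter_predictions(raw_predictions, min_confidence=0.0))
# ['newsreader', 'nonsmoker', 'wrongdoer']
```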