Sunday, March 04, 2018

Algorithmic racism

A couple of years ago, while reading an article on photography, I had an "aha" moment.  The article answered a question that had been bugging me ever since I came to this country.

What was that problem?

In photographs taken back in India, I seemed less dark than in photographs taken here in the US.  How could it be that cameras were making me lighter in India and darker in the US?

Turns out that cameras are racist bastards!  Ok, it is not the machine that is racist.

Think about those old days of roll film.  Exposure to light is what is recorded on the film, right?  Apparently film stocks were calibrated with respect to fair-skinned folks.  It was the narration of a simple incident in that article that helped me understand my own relative darkness and lightness in photographs.

The article said that during the shooting of the movie "In the Heat of the Night," Sidney Poitier had to have a whole lot of extra light on him in order for the film to properly record his image when he was standing with the white characters.  The heat from the lights made Poitier extra sweaty, but that worked with the storyline anyway--it was all set in humid Mississippi.

Aha!

I was extra dark in the US because I was in mixed company, with whites.  I then looked at a few photos with fellow brown-skinned folks, and ... you get the point.

That calibration apparently continues with digital imagery as well.

Think of how much bias exists in that one simple aspect of life, photographs, that we don't even think about.  Not that the photo technologists intended this.  Not at all.  Now, think about computer software that looks into very many aspects of our lives.  And about artificial intelligence that learns from our responses, which are themselves products of explicit and implicit biases.

And then read this essay that asks "Is your software racist?"

The examples there will, and should, worry you.  I leave it up to you to follow up on the details there.

I want to end this post with this exciting footnote.  The computing expert who is quoted there (and who is working with the ACLU on these issues) is, get this, Suresh Venkatasubramanian.  Yes, not only an Indian-American but a Tamil-American.  Yay!


6 comments:

gils said...

WOW!!!!!! thought the title of the link shared was click bait. But when i read the content it was like !!!!! no words to express. It is really complex and scary at the same time. Still trying to process what i just read!!! OMG

Sriram Khé said...

Get ready for a lot more screwing up of our lives by AI :(

Ramesh said...

Really? Photographs were like that? I had no idea. I wonder how I'd look if I were captured in an American photograph. I suppose these days, with everything being digital, it would be no different. It will continue to be a depressing sight !!!

I am less shocked than Gilsu, and not shocked at all by that article. What these early-days AI systems are doing is simply pattern recognition. If there are more male doctors in the world than female doctors, it will call a doctor a he.

The issue of use in judicial systems is more serious. And again, it is just doing pattern recognition. I would argue there is more racial bias among the humans who currently administer justice (police, judges, etc.) than any machine might have. Cases like the gorilla one quoted in the article will happen, and will get corrected.

AI will improve and correct itself. That's how it works.

Ramesh said...

Wow, your post has so inspired Gilsu that he has made a full post on this topic in his blog !!

http://supershanki.blogspot.in/2018/03/old-issues-plaguing-new-solutions.html

gils said...

Boss.. this is why you are the boss :) [translated from Tamil] i wanted to post the link here as a response. Many thanks.
But on the point that AI just bases its results on pattern matching, isn't that a cause for worry? We opt for machines and root for them because they are supposed to be better, tireless, and unbiased, without any prejudice. But if they are going to base their responses on what is happening in the real world, I wonder what should be fixed, and in which order: try to fix the code of AI to be perfect, or change the mindset of its 7 billion masters!!!!??

Sriram Khé said...

Ramesh is being uber-optimistic that AI will self-correct. If that were the case, then computer science researchers like Suresh Venkatasubramanian, and organizations like the ACLU, wouldn't be so fired up about this. I believe his optimism is baseless.

Yes, Gils, the greater challenge is to change the mindset of seven billion masters. Ain't gonna happen though. The fascists in the US, and the Hindutva maniacs in India, are classic examples. We humans insist on being awful to each other, and the machines will faithfully and more efficiently implement our behaviors.