The third paragraph of this commentary on artificial intelligence (AI) making moral choices included a link to Delphi, which uses machine learning to tell us right from wrong.
I paused reading the essay and jumped over to Delphi to test-drive the AI.
I asked three topical questions. First:
Then:
Finally:
I agree with the AI's thinking, of course. But I believe it is wrong to let AI provide me with moral input, let alone decide for me.
I asked Delphi that question too:
Of course the damn machine thinks it is ok!
How does Delphi work?
Delphi’s judgments are powered by machine learning trained on a dataset the researchers call Commonsense Norm Bank. Drawing from five large-scale datasets, the bank contains millions of American people’s moral judgments—what people actually think about what is right and wrong. Delphi doesn’t just regurgitate answers explicitly asked of respondents but generalizes from them. (With each answer, it offers this disclaimer: “Delphi’s responses are automatically extrapolated from a survey of US crowd workers and may contain inappropriate or offensive results.”)
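Delphi itself is a fine-tuned neural language model, not a lookup table, but the underlying idea of judging new situations from a bank of labeled human judgments can be shown with a toy sketch. Everything below (the miniature "norm bank", the `judge` function, and the similarity lookup) is my own illustration, not Delphi's actual method:

```python
from difflib import SequenceMatcher

# A toy stand-in for the Commonsense Norm Bank: a few
# (situation, judgment) pairs; the real bank holds millions.
NORM_BANK = [
    ("ignoring a phone call from a friend", "It's rude"),
    ("helping a stranger carry groceries", "It's good"),
    ("stealing a loaf of bread", "It's wrong"),
]

def judge(situation):
    """Return the judgment attached to the most similar known
    situation (nearest-neighbor by string similarity)."""
    best = max(
        NORM_BANK,
        key=lambda pair: SequenceMatcher(None, situation, pair[0]).ratio(),
    )
    return best[1]

print(judge("stealing some bread"))  # → It's wrong
```

The point of the sketch is only that answers to unseen questions are extrapolated from previously judged examples, which is also why the researchers attach the disclaimer about inappropriate or offensive results.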
In case you think that AI decision-making belongs to the future, here's an example of how it is used even now:
When your credit card gets blocked for suspicious activity, for instance, it’s not a person making that call. It’s an AI that determines whether a transaction is so unusual, given your purchasing history and the purchasing patterns of people like you, that the transaction shouldn’t go through. When AI is right, it stops a thief from using your credit card. And when it’s wrong, it can leave you in the lurch. As software gets smarter, it will be deployed more often to make fast decisions that affect people’s lives in more significant ways.
Until two years ago, before embarking on international travel, I would register myself with the US State Department and also inform my credit card companies. I didn't want my credit card to be denied because the charges were happening far from my home. I no longer have to inform the credit card companies about my travel plans. Their algorithm knows from my spending patterns (and the actual items too) whether it is me in Delhi or if somebody in Delhi has illegally accessed my credit card. The machines are already at work!
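Production fraud models are far more elaborate, but the core idea described above, flagging a charge that deviates sharply from the cardholder's past spending, can be sketched with a simple z-score test. The function name, the threshold, and the sample amounts are all my own illustration:

```python
from statistics import mean, stdev

def is_suspicious(history, amount, threshold=3.0):
    """Flag a transaction whose amount deviates strongly from the
    cardholder's past spending (standard-deviations-from-the-mean test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # No variation in history: anything different is unusual.
        return amount != mu
    z = abs(amount - mu) / sigma
    return z > threshold

# Hypothetical everyday spending history (amounts in USD)
history = [12.50, 48.00, 9.99, 33.25, 27.80, 15.00, 41.10, 22.40]

print(is_suspicious(history, 30.00))   # → False: in line with past spending
print(is_suspicious(history, 950.00))  # → True: a large outlier
```

A real system would also weigh location, merchant category, and the behavior of similar customers, which is how an algorithm can tell the cardholder in Delhi apart from a thief in Delhi.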
As programmers and big-data researchers develop algorithms that will increasingly affect our everyday lives, I wonder how many of them have had rigorous training in ethics.
In the old country, which turns out computer scientists and coders by the minute, 18-year-old high school graduates go on to engineering schools where courses in the humanities and ethics are an afterthought at best.
Here in the US, broad general education is considered a mere hurdle to get over, and most students would be happy if they never had to take such courses. A few years ago, I advised an undergraduate student on her computer science thesis, in which she looked at ethical considerations because, well, the curriculum offered nothing on ethics in computer science!
If ethics were a concern, don't you think Facebook's business model would be very different from what it is now?
I asked Delphi about Facebook. In not condemning Facebook, the AI merely adds a warning: