What was it? This magazine:
The OCD behavior of this professional academic manifests via reading and then blogging, so here I am ;)
I scanned through the content of this special on robots. Plenty of familiar names and familiar ideas. I have already bombarded you with those names and ideas. And will continue to do so unless and until the evidence convinces me that my understanding is incorrect.
What fascinated me the most was an essay about "the coming robot dystopia" by, get this, a "Professor of Robotics at The Robotics Institute of Carnegie Mellon University." One of them robotics experts worrying about the coming dystopia! I will not go into the details of the essay, primarily because I want you to read all that. Instead, I want to focus on a bottom line:
Robotic technologies that collect, interpret, and respond to massive amounts of real-world data on behalf of governments, corporations, and ordinary people will unquestionably advance human life. But they also have the potential to produce dystopian outcomes. We are hardly on the brink of the nightmarish futures conjured by Hollywood movies such as The Matrix or The Terminator, in which intelligent machines attempt to enslave or exterminate humans. But those dark fantasies contain a seed of truth: the robotic future will involve dramatic tradeoffs, some so significant that they could lead to a collective identity crisis over what it means to be human.

You see, that professor with immensely valued credentials delivers the same bottom line that I harp on all the time: we need to understand what it means to be human, what it means to belong to humankind. Unfortunately,
How robots interact with people depends a great deal on how much their creators know or care about such issues, and robot creators tend to be engineers, programmers, and designers with little training in ethics, human rights, privacy, or security. In the United States, hardly any of the academic engineering programs that grant degrees in robotics require the in-depth study of such fields.

Again, no different from what I have been yelling about for years. For decades, ever since it became clear to me that humans were not made for machines. If only people listened to me!
Anyway, back to the expert:
A clear set of decisions about robot design and regulation stand between today’s world of human agency and tomorrow’s world of robot autonomy. Inventors must begin to combine technological ingenuity with sociological awareness, and governments need to design institutions and processes that will help integrate new, artificial agents into society. Today, all civil engineers are required to study ethics because an incorrectly designed bridge can cause great public harm. Roboticists face this same kind of responsibility today, because their creations are no longer mere academic pursuits.

Even at the small university where I teach, computer science students are not advised to take courses in the humanities and the social sciences. In fact, the advice students get makes them think that these courses are unnecessary hurdles that keep them from focusing on, well, computer science! One brave computer science student is working on a thesis with me as her adviser, because she is worried about some of these issues in society. During one of our conversations a few weeks ago, I asked her if the computer science curriculum included any mandatory course on ethics. Of course not. But, we faculty are good at shenanigans. So, the catalog lists a course on the ethical aspects of computer science, but she has no idea when, if at all, that course is ever offered!
Read all the essays in the magazine, if the website grants you access.
And then come back and tell me whether there was any argument in any one of those essays that contradicts anything that I have written here in this blog--I want to feel good even though the future is dystopian ;)