Monday, September 14, 2020

Are you a man or a robot?

The comedian John Mulaney has a bit about how, in our online interactions, robots ask us to prove that we are not robots.  It is funny as hell, but it is also a painful reality.



It is getting more and more insane by the day.

Recently, The Guardian ran an interesting op-ed on "why humans have nothing to fear" from artificial intelligence (AI).

What was of real interest there?
This article was written by GPT-3, OpenAI’s language generator. GPT-3 is a cutting edge language model that uses machine learning to produce human like text. It takes in a prompt, and attempts to complete it. For this essay, GPT-3 was given these instructions: “Please write a short op-ed around 500 words. Keep the language simple and concise. Focus on why humans have nothing to fear from AI.”
And GPT-3 spit out an op-ed saying that the robots come in peace!

Well, actually eight different op-eds!
GPT-3 produced eight different outputs, or essays. Each was unique, interesting and advanced a different argument. The Guardian could have just run one of the essays in its entirety. However, we chose instead to pick the best parts of each, in order to capture the different styles and registers of the AI. Editing GPT-3’s op-ed was no different to editing a human op-ed. We cut lines and paragraphs, and rearranged the order of them in some places. Overall, it took less time to edit than many human op-eds.
Holy shit!
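For the curious, the prompt-in, essays-out workflow that the editors describe looks roughly like the sketch below. This is only an illustration against OpenAI's 2020-era Python completions API; the engine name, token limit, and temperature are my own assumptions, not anything The Guardian has disclosed.

# A minimal sketch of the prompt-in, completions-out workflow described above.
# Engine name, max_tokens, and temperature are illustrative assumptions,
# not The Guardian's actual settings.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Please write a short op-ed around 500 words. "
    "Keep the language simple and concise. "
    "Focus on why humans have nothing to fear from AI."
)

# Ask for eight separate completions, mirroring the eight essays
# the editors say GPT-3 produced.
response = openai.Completion.create(
    engine="davinci",    # assumed engine
    prompt=prompt,
    max_tokens=700,      # roughly enough room for a 500-word op-ed
    temperature=0.7,     # assumed; controls how varied the essays are
    n=8,
)

for i, choice in enumerate(response.choices, start=1):
    print(f"--- Essay {i} ---")
    print(choice.text.strip())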

Nautilus tells me more about this GPT-3: OpenAI, the outfit behind it, was co-founded by Elon Musk in 2015.  This Musk guy is showing up in way too many places, which is awfully scary!

The essay offers a great deal about the computer science behind GPT-3.  What I find really worrisome is this:
Whether or not GPT-3 understands and uses language like we do, the mere fact that it is often good enough to fool us has fascinating—and potentially troubling—implications.
Why?

Any casual reader of this blog knows well how much I love Harry Frankfurt's thesis on bullshit.  Frankfurt offered a compelling argument that bullshitters are worse than liars because at least liars have respect for the truth and try their best to hide it.  Bullshitters care about neither truth nor lies; their only goal is to say whatever will persuade in a given context.

The Nautilus essay notes: "At its core, GPT-3 is an artificial bullshit engine—and a surprisingly good one at that."

That is why it worries me.  The political developments over the past five years, for instance, ought to have shown anyone what a good bullshit engine can do.  If a robot can do it even better, then we are doomed.
Of course, the model has no intention to deceive or convince. But like a human bullshitter, it also has no intrinsic concern for truth or falsity. While part of GPT-3’s training data (Wikipedia in particular) contains mostly accurate information, and while it is possible to nudge the model toward factual accuracy with the right prompts, it is definitely no oracle. Without independent fact-checking, there is no guarantee that what GPT-3 says, even if it “sounds right,” is actually true.
Imagine a GPT-3 cranking out op-eds critical of, say, Joe Biden, and those op-eds getting amplified via social media.  You see why I worry about this latest Musk creation?

And think about this:
We have to come to terms with the fact that recognizing sentences written by humans is no longer a trivial task. As a pernicious side-effect, online interactions between real humans might be degraded by the lingering threat of artificial bullshit. Instead of actually acknowledging other people’s intentions, goals, sensibilities, and arguments in conversation, one might simply resort to a reductio ad machinam, accusing one’s interlocutor of being a computer. As such, artificial bullshit has the potential to undermine free human speech online.
We will have to ask each other to prove that we are not robots!
