Thursday, November 20, 2014

In the survival of the fittest, this idiot will be terminated by natural and artificial intelligence!

Sometimes I simply want to stop reading altogether.

I know it sounds strange.  But there are moments when I wonder whether my life would have been a lot better, and more enjoyable, if I didn't care to engage in all this random reading.

Why?

Because the more I read, the more I feel that I don't know a damn thing.  In those moments, knowing that I don't know becomes a burden.  That is when I wonder whether I would have had a happier existence, you know, leading a normal life of doing a job, and then ...

But, I am stuck with who I am.  The good news is that the clock is winding down ;)

The latest realization of my idiocy came from reading this piece at Edge.  It is a conversation with Jaron Lanier, whom I have quoted before in this blog--as recently as, ahem, the last post.  And as if that exposure of my idiocy weren't enough, the commenters, my god!  These are not internet trolls, but people who were invited to respond--one huge all-star lineup.

The conversation is all about artificial intelligence and the algorithms that are beginning to take over our lives.  Lanier provides an example that most of us can relate to:
Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. ...
I'll give you a few examples of what I mean by that. Maybe I'll start with Netflix. The thing about Netflix is that there isn't much on it. There's a paucity of content on it. If you think of any particular movie you might want to see, the chances are it's not available for streaming, that is; that's what I'm talking about. And yet there's this recommendation engine, and the recommendation engine has the effect of serving as a cover to distract you from the fact that there's very little available from it. And yet people accept it as being intelligent, because a lot of what's available is perfectly fine.
... But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
It is a wonderful distinction that Lanier alerts us to--"the recommendation engine is serving to distract you from the fact that there's not much choice anyway."  That is one heck of a scary thought, right?

So, on to the next step in this algorithmic world:
I want to get to an even deeper problem, which is that there's no way to tell where the border is between measurement and manipulation in these systems.
In case you are wondering what Lanier is talking about, he explains:
the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened. That's a pretty clear thing. What's not clear is where the boundary is.
Exactly!  This has been my worry all along, but I had no idea how to articulate it.  I produced nothing but gobbledygook whenever I tried to get my brain to work on it!  Whether it is Netflix or Amazon or Facebook or Google,
All of these things, there's no baseline, so we don't know to what degree they're measurement versus manipulation.
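To make that point concrete for myself, I cooked up a little toy simulation--entirely my own sketch, with made-up items and numbers, not anything from the Edge piece.  A fake "recommender" always pushes whatever is currently most popular, and exposed users' tastes drift slightly toward whatever is pushed.  Hold out a control population that never sees the recommendations, and you can watch measurement and manipulation blur into each other:

```python
import random

random.seed(42)  # reproducible toy run

ITEMS = ["A", "B", "C", "D"]

def organic_pick(prefs):
    """Pick an item in proportion to a user's current preference weights."""
    r = random.random() * sum(prefs.values())
    for item, weight in prefs.items():
        r -= weight
        if r <= 0:
            return item
    return item  # fallback for floating-point leftovers

def simulate(n_users=500, rounds=20, exposed=True):
    """Feedback loop: the 'recommender' pushes the current front-runner,
    and exposed users' tastes drift a little toward whatever is pushed."""
    users = [{item: 1.0 for item in ITEMS} for _ in range(n_users)]
    counts = {item: 1 for item in ITEMS}      # what the system 'measures'
    for _ in range(rounds):
        top = max(counts, key=counts.get)     # recommend the current leader
        for prefs in users:
            if exposed:
                prefs[top] += 0.3             # the manipulation: nudge taste
            counts[organic_pick(prefs)] += 1  # ...then 'measure' the pick
    return counts

print("exposed population:", simulate(exposed=True))
print("control (baseline):", simulate(exposed=False))
```

In the exposed population, one arbitrary item snowballs into a runaway "favorite," while the control group stays roughly uniform.  Without that control group--Lanier's unmanipulated "virgin population"--the runaway counts would look like a genuine measurement of taste.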
Turns out there is more.
If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population, that hasn't been manipulated, then the whole thing is spun out of control and doesn't mean anything anymore. It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into.
 "Mass incompetence."  Hmmm ... it feels like it has already arrived.

The way this big-data, algorithmic world works has tremendous economic consequences.  Consider automatic translation and voice recognition--think Siri, for instance.  The more people use it, the better Siri gets, right?
The thing that we have to notice though is that, because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.
... What this is, is behind the curtain, is literally millions of human translators who have to provide the examples. The thing is, they didn't just provide one corpus once way back. Instead, they're providing a new corpus every day, because the world of references, current events, and slang does change every day. We have to go and scrape examples from literally millions of translators, unbeknownst to them, every single day, to help keep those services working.
The problem here should be clear, but just let me state it explicitly: we're not paying the people who are providing the examples to the corpora—which is the plural of corpus—that we need in order to make AI algorithms work. In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.
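To see why the "magical persona" framing grates on Lanier, it helps to remember how unmagical the underlying dependence is.  Here is a deliberately crude sketch of my own--the phrase pairs and renderings below are invented for illustration, and real systems are statistical rather than dictionary lookups--but the dependence is the same: the "translator" knows only what human translators have already produced, and it goes blind the moment the language moves on:

```python
# A toy example-based "translator": it can only echo phrase pairs that
# human translators have already produced.  (My own illustration; the
# pairs are made up, and real systems are statistical, not lookups --
# but the reliance on human-built corpora is the same.)

corpus_2013 = {
    "good morning": "buenos días",
    "thank you": "gracias",
}

# Slang and current events shift, so the corpus needs a fresh scrape
# of human examples every day.
corpus_2014 = {**corpus_2013, "selfie": "autofoto"}

def translate(phrase, corpus):
    """Return the human-provided translation, or admit there is none."""
    return corpus.get(phrase.lower(), "??? (no human example yet)")

print(translate("Selfie", corpus_2013))  # fails: yesterday's corpus
print(translate("Selfie", corpus_2014))  # works only after new human data
```

Scale that up to millions of phrase pairs refreshed daily, and the "AI" turns out to be a thin wrapper around an endless, uncredited stream of human work.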
This, to me, is where it becomes serious.
Why is this an economic issue?
The usual counterargument to that is that they are being paid in the sense that they too benefit from all the free stuff and reduced-cost stuff that comes out of the system. I don't buy that argument, because you need formal economic benefit to have a civilization, not just informal economic benefit.  
I.e., you can't buy a home with that informal benefit, can you?
In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.
That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," but, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.
And then the commenters come in and critique Lanier.  At the end of it all, I came away wondering what the heck these smart people were talking about, and why I was feeling like such an idiot.  You see why I think I would have been better off not reading that essay in the first place?

Let me leave you with this passage, which I have quoted before:
Apple is building a world in which there is a computer in your every interaction, waking and sleeping. A computer in your pocket. A computer on your body. A computer paying for all your purchases. A computer opening your hotel room door. A computer monitoring your movements as you walk through the mall. A computer watching you sleep. A computer controlling the devices in your home. A computer that tells you where you parked. A computer taking your pulse, telling you how many steps you took, how high you climbed and how many calories you burned—and sharing it all with your friends…. THIS IS THE NEW APPLE ECOSYSTEM. APPLE HAS TURNED OUR WORLD INTO ONE BIG UBIQUITOUS COMPUTER
Have a good day! ;)

3 comments:

Ramesh said...

I wouldn't be as alarmist as you.

There are two elements to the ubiquitous collection of every bit of minute data about us.

The first is the loss of privacy, which bothers me deeply. I might take up your suggestion of running away to your hermitage. I am, however, hopeful that the market will offer privacy products that somewhat mitigate this threat - I already use Startpage routinely instead of Google search.

The second is manipulation, which I am less concerned about. I don't care what Amazon recommends - I will read what I want. It's not going to be that easy to manipulate what I do, given that I am a stubborn, argumentative pedant. They can trawl my data all they want, and I will pointedly ignore everything anybody says and do my own damned thing. I suspect you would do likewise too.

By the way, I like Lee Smolin's comment on the talk best.

Ramesh said...

Where is Anne, by the way? Anne - if you are reading this, by popular demand, you have to comment :):)

Sriram Khé said...

Sure, you can stay in your hermitage and not bother with recommendations and ... but that is merely you and a few others, right? The overwhelming majority behaves otherwise, and that is going to make all the difference.

In this context, I didn't care for Lee Smolin's comments: by projecting this "day-to-day" issue that Lanier writes about against the story of human evolution and the 14-billion-year story of the cosmos itself, Smolin essentially says, "hey, don't sweat this." But that is like saying "don't worry, because we are all going to die anyway" ... By the same argument, Smolin could brush aside discussions of income inequality, wars, Ebola, ....