Saturday, November 22, 2014

"Green Phonies": Practice what you teach? ... continued

This is the first ever post in this blog with the title "Practice what you teach?"--and yet I have added the "continued," in case you hadn't noticed already ;)

Just to let you know, there was another place where I did address this--a decade ago, in September 2004, I authored this piece at Planetizen, in which I wrote about the difficulty of achieving consistency between what we say and what we do, and how to draw the line "between my academic life and personal decisions."

I recalled there:
Katie in the front row (of course!) asked me, "Francesca and I were talking the other day about you, Dr. Khé. How come you don't drive a small car but drive a gas-guzzling Jeep Cherokee instead?"
It has been ten years since then, and I am all the more convinced about the moral of the story:
academic life means a continuous attempt to redraw the line that separates what I teach from how I live.
It does not mean that I drive a small car now.  Or a hybrid, like a Prius.  Because I remain convinced that there is no single litmus test--like whether or not I drive a Prius--for understanding how much I am helping the environment.

In fact, I have even made fun of Hollywood celebrities who flashed their Priuses when they were introduced.  Those celebrities, the very embodiment of material consumption, pretending to help the environment by driving around in Priuses--that was the best joke of all.  But, dammit, people so believe that a consumption hog who drives a Prius is more "environmental" than one like me whose consumption is minimal.

Is it possible at all to help people understand that a steak-eating, California almond-munching, Prius owner is not helping the environment?
According to recent psychological research, these outwardly symbolic displays of green values are, if anything, too powerful. They can fool outside observers into thinking that we're a lot more environmentally conscious than we are. Perhaps worse still, they may lead us to fool ourselves.
Which means that even though I have refused to be fooled, the joke is really on me.  How twisted!  It is all because of the "symbolic significance fallacy":
 The idea, which grows out of a large body of research on cognitive biases and mental shortcuts, is that we tend to focus far too much on outward symbols (like Prius driving) in judging whether people are energy conscious. As a result, these powerful symbols bias us into overrating certain kinds of seemingly green behavior, and underrating other behaviors that may be quite green, but don't seem that way to us at first glance.
No surprise, then, that I am not viewed as left-of-center, environmental, or any of those labels--even though that is really who I am.  The price we pay for being rational in this shallow, superficial world :(
What's the upshot of all this? First of all, Siegrist says the results should make us concerned about what he calls "moral licensing": The idea that doing something that is symbolically green, like driving a Prius, licenses you to do other things in your life that aren't (like driving it huge distances).
The bottom-line then?
as we move into a world full of hybrids, electric vehicles, rooftop solar installations, and much else, we should bear something in mind. Energy use calculations may not be very intuitive or easy to carry out, but the fact remains that there is only one way to evaluate whether someone is actually green: Substance.
Focus on the substance?  Crazy talk!  Focusing on substance calls for people to engage in the hard work of thinking, which is becoming rarer than smog-free days in Beijing!

Friday, November 21, 2014

Life's open secret: Do the right thing!

An old friend called after a couple of months.  Somehow or other, every conversation has at least one piece about how I live my life without a god or a religion to guide me.

I explained that it is really, really simple, the way I see it.  All I have to do is this: make sure that I do the right thing.  The dharma that the Hindu philosophers refer to.

I am sure you agree with me.  If not, you are not doing the right thing ;)

See, even here I can't help but kid around!

Ok, kidding aside: to me, going through life is wonderfully simple without all that baggage of god(s) and religion(s).

Of course, doing the right thing is not really simple.  For one, how do I know that "x" is the right thing to do and not "y"?  It becomes a challenge.  A challenge that forces me to think about options "x" and "y" and conscientiously arrive at a decision.

And even if I have decided that "y" is the right thing to do, well, I still have to do it, right?  The awesome thing for an atheist is that there is no hell or god's fury to worry about if I choose not to do "y."  Yet, as an atheist, I end up trying as much as possible to do the right thing.

It is worth quoting Steven Weinberg, again:
 Living without God isn’t easy. But its very difficulty offers one other consolation—that there is a certain honor, or perhaps just a grim satisfaction, in facing up to our condition without despair and without wishful thinking—with good humor, but without God.  
A wonderful satisfaction, not a grim one, to have thought through and done the right thing.  And then to let the chips fall where they may.

And, yes, to do all that with good humor.  Well, with atrocious humor as well. Like this one that one of the grocery store checkout clerks told me a while ago:
Q: What's red and smells like blue paint?
A: Red paint.
Do the right thing and keep laughing.  That's all there is to it.

Thursday, November 20, 2014

In the survival of the fittest, this idiot will be terminated by natural and artificial intelligence!

Sometimes I simply want to stop reading altogether.

I know it sounds strange.  But there are moments when I wonder if my life would have been a lot better and more enjoyable if I didn't care to engage in random reading.


Because I feel, all the more, that I don't know a damn thing.  During those moments, knowing that I don't know becomes a burden.  Which is when I wonder if I would have had a happier existence, you know, leading a normal life of doing a job, and then ...

But, I am stuck with who I am.  The good news is that the clock is winding down ;)

The latest realization about my idiocy came from reading this piece at Edge.  It is a conversation with Jaron Lanier, whom I have quoted before in this blog--as recently as, ahem, the last post.  As if that level of exposing my idiocy weren't enough, the commenters, my god!  Not some internet trolls, but people who were invited to comment.  It is one huge all-star lineup.

The conversation is all about artificial intelligence, and algorithms beginning to take over our lives.  Lanier provides this example that most of us can relate to:
Since our economy has shifted to what I call a surveillance economy, but let's say an economy where algorithms guide people a lot, we have this very odd situation where you have these algorithms that rely on big data in order to figure out who you should date, who you should sleep with, what music you should listen to, what books you should read, and on and on and on. ...
I'll give you a few examples of what I mean by that. Maybe I'll start with Netflix. The thing about Netflix is that there isn't much on it. There's a paucity of content on it. If you think of any particular movie you might want to see, the chances are it's not available for streaming, that is; that's what I'm talking about. And yet there's this recommendation engine, and the recommendation engine has the effect of serving as a cover to distract you from the fact that there's very little available from it. And yet people accept it as being intelligent, because a lot of what's available is perfectly fine.
... But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.
A wonderful point Lanier is alerting us to: "the recommendation engine is serving to distract you from the fact that there's not much choice anyway."  That is one heck of a scary thought, right?

So, on to the next step in this algorithmic world:
I want to get to an even deeper problem, which is that there's no way to tell where the border is between measurement and manipulation in these systems.
In case you are wondering what Lanier is talking about, he explains:
the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened. That's a pretty clear thing. What's not clear is where the boundary is.
Exactly!  My worry has been this, but I had no idea how to articulate it.  I was all gobbledygook whenever I tried to get my brain to work on this!  Whether it is Netflix or Amazon or Facebook or Google,
All of these things, there's no baseline, so we don't know to what degree they're measurement versus manipulation.
Turns out there is more.
If people are deciding what books to read based on a momentum within the recommendation engine that isn't going back to a virgin population, that hasn't been manipulated, then the whole thing is spun out of control and doesn't mean anything anymore. It's not so much a rise of evil as a rise of nonsense. It's a mass incompetence, as opposed to Skynet from the Terminator movies. That's what this type of AI turns into.
"Mass incompetence."  Hmmm ... it feels like it has already arrived.
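To make Lanier's point concrete for myself, here is a deterministic toy sketch--entirely my own illustration, not anything from his piece, with all names and numbers invented for the purpose.  Two items have identical intrinsic appeal.  Without a recommender, the observed counts measure that appeal correctly.  With a recommender that keeps promoting whichever item currently leads, a tiny accidental head start snowballs, and the final counts measure the manipulation, not the appeal--there is no baseline left.

```python
# Toy model: one unit of "attention" per round is split between two items.
# The items are intrinsically identical (appeal 0.5 each). A fraction
# follow_rate of each round's attention obeys the recommender, which
# always promotes the current leader; the rest follows raw appeal.

def attention_counts(rounds, follow_rate, counts=(1.0, 1.0)):
    """Return cumulative attention for items A and B after `rounds` rounds."""
    a, b = counts
    appeal = (0.5, 0.5)  # the two items are indistinguishable on merit
    for _ in range(rounds):
        # the recommender pushes whichever item is currently more popular
        rec = (1.0, 0.0) if a >= b else (0.0, 1.0)
        a += follow_rate * rec[0] + (1 - follow_rate) * appeal[0]
        b += follow_rate * rec[1] + (1 - follow_rate) * appeal[1]
    return a, b

# No recommender: a clean "virgin population" measurement.
baseline = attention_counts(100, follow_rate=0.0)

# Strong recommender plus a 1% accidental head start for item A.
steered = attention_counts(100, follow_rate=0.8, counts=(1.01, 1.0))

print(baseline)  # the two counts are equal: appeal measured correctly
print(steered)   # item A dominates purely because of its tiny head start
```

The point of the sketch is Lanier's: looking only at the final counts, you cannot tell whether item A is genuinely better or merely got recommended first--the measurement and the manipulation produce the same kind of number.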

The way the big data/algorithmic world works has tremendous economic consequences.  Take automatic translation and voice recognition.  Think Siri, for instance: the more people use it, the better Siri gets, right?
The thing that we have to notice though is that, because of the mythology about AI, the services are presented as though they are these mystical, magical personas. IBM makes a dramatic case that they've created this entity that they call different things at different times—Deep Blue and so forth. The consumer tech companies, we tend to put a face in front of them, like a Cortana or a Siri. The problem with that is that these are not freestanding services.
... What this is, is behind the curtain, is literally millions of human translators who have to provide the examples. The thing is, they didn't just provide one corpus once way back. Instead, they're providing a new corpus every day, because the world of references, current events, and slang does change every day. We have to go and scrape examples from literally millions of translators, unbeknownst to them, every single day, to help keep those services working.
The problem here should be clear, but just let me state it explicitly: we're not paying the people who are providing the examples to the corpora—which is the plural of corpus—that we need in order to make AI algorithms work. In order to create this illusion of a freestanding autonomous artificial intelligent creature, we have to ignore the contributions from all the people whose data we're grabbing in order to make it work. That has a negative economic consequence.
This, to me, is where it becomes serious.
Why is this an economic issue?
The usual counterargument to that is that they are being paid in the sense that they too benefit from all the free stuff and reduced-cost stuff that comes out of the system. I don't buy that argument, because you need formal economic benefit to have a civilization, not just informal economic benefit.  
I.e., you can't buy a home with that informal benefit, can you?
In the history of organized religion, it's often been the case that people have been disempowered precisely to serve what were perceived to be the needs of some deity or another, where in fact what they were doing was supporting an elite class that was the priesthood for that deity.
That looks an awful lot like the new digital economy to me, where you have (natural language) translators and everybody else who contributes to the corpora that allow the data schemes to operate, contributing mostly to the fortunes of whoever runs the top computers. The new elite might say, "Well, but they're helping the AI, it's not us, they're helping the AI." It reminds me of somebody saying, "Oh, build these pyramids, it's in the service of this deity," but, on the ground, it's in the service of an elite. It's an economic effect of the new idea. The effect of the new religious idea of AI is a lot like the economic effect of the old idea, religion.
And then the commenters come in and critique Lanier.  At the end of it all, I came away wondering what the heck these smart people are talking about, and why I am feeling like an idiot.  You see why I think I would have been better off not reading that essay in the first place?

Let me leave you with this, that I have quoted before:
Apple is building a world in which there is a computer in your every interaction, waking and sleeping. A computer in your pocket. A computer on your body. A computer paying for all your purchases. A computer opening your hotel room door. A computer monitoring your movements as you walk though the mall. A computer watching you sleep. A computer controlling the devices in your home. A computer that tells you where you parked. A computer taking your pulse, telling you how many steps you took, how high you climbed and how many calories you burned—and sharing it all with your friends…. THIS IS THE NEW APPLE ECOSYSTEM. APPLE HAS TURNED OUR WORLD INTO ONE BIG UBIQUITOUS COMPUTER
Have a good day! ;)

Wednesday, November 19, 2014

When kids lose their privacy ...

I might have rebelled--well, ok, I did rebel--against traditions as I transitioned into the teenage years.  But, I was a good kid.  Didn't get into trouble at all.

If I were a teenager during the iPhone era--as in now--I would have had wonderful outlets for my teen angsts of a gazillion kinds.  I can imagine a teenage me tweeting pissed-off comments about the principal, the English teacher, the government.  And I would have blogged and tweeted about my leftist feelings. Oh, of course, I would have tweeted about that high school love, too ;)

It is a good thing that I didn't grow up with all those technology gizmos.  Which is why I feel sorry for the teenagers and the youth of today.  So, what is the hassle if they use these, you ask?  Hassles aplenty, my friend!

A few months ago, I got a Facebook friend request from a name that I could not recognize.  But then Facebook said we had mutual friends.  So, I went to the requester's page and, yes, it was easy to recognize the fellow.  I accepted his request, and sent him a message inquiring about the name change--the first and last names were nothing like his "real" name.

Turns out that the fake name was recent, confined strictly to Facebook, and for only one reason: admissions.  He didn't want a web search for his real name to reveal his antics on Facebook.  Worrying about these issues is a growing trend among the young, and for good reason; consider the exclusive private colleges:
Of the 403 undergraduate admissions officers who were polled by telephone over the summer, 35 percent said they had visited an applicant’s social media page — a 9 percentage point increase compared with 2012.
This is atrocious.  What a young person has posted in words or photos should not be anybody's concern when it comes to admissions.  Yet, it is.  Which is also why the smart ones are cleaning up their public presence (using a fake name is an easy way, right?)
only 16 percent of them said they had discovered information online that had hurt a student’s application — compared with 35 percent in 2012.
“Students are more aware that any impression they leave on social media is leaving a digital fingerprint,” said Seppy Basili, Kaplan’s vice president for college admissions. “My hunch is that students are not publicly chronicling their lives through social media in the same way.”
Students are now a step, or more, ahead of the admissions folks.  Good for them.
Mr. Dattagupta said he looked favorably upon applicants who posted positive comments about the college and about themselves. But he said he was troubled by applicants who publicly disparaged his college or any other on social media using offensive language.
“That’s a big turnoff for me,” Mr. Dattagupta said. “I wouldn’t want a student like that here.”
The college, however, doesn’t notify students if their social media posts hurt their applications, Mr. Dattagupta said. “We don’t have a mechanism to let a student know they were not accepted because of that particular tweet,” he said.
There is something seriously creepy about Dattagupta's take.  It is even creepier to think that there are a lot more like Dattagupta out there than I would ever want.

What is youth without youthful indiscretions and exuberance?

Jaron Lanier has talked about how it might get increasingly difficult for the young to erase their past indiscretions.  You can imagine how easy it is going to be to do opposition research and dig up dirt from when a candidate was a mere sixteen years old.  Especially when you think about something like sexting--when kids sext!

The older I get, the more I worry about the ways in which technology is negatively affecting our lives.  I don't think this is merely the effects of age as I look at the horizon.  There is something seriously creepy when high school kids and college youth have to worry about cleaning up their digital tracks; don't you think so too?

Monday, November 17, 2014

What a way to end! Well, nobody's perfect!

Relax, this post ain't about death! ;)

The trigger for this post was simply the ending line of an essay I read in the Economist--one of the few magazines that I love to read, for the content and the writing style alike.  (I even put my money where my mouth is, in this case: I am a subscriber!) Those wonderful writers always have that Economist way with words.  In a magazine that is staunchly pro-individual rights and pro-capitalism, the writers remain anonymous:
Why is it anonymous? Many hands write The Economist, but it speaks with a collective voice. Leaders are discussed, often disputed, each week in meetings that are open to all members of the editorial staff. Journalists often co-operate on articles. And some articles are heavily edited. The main reason for anonymity, however, is a belief that what is written is more important than who writes it. As Geoffrey Crowther, editor from 1938 to 1956, put it, anonymity keeps the editor "not the master but the servant of something far greater than himself. You can call that ancestor-worship if you wish, but it gives to the paper an astonishing momentum of thought and principle."
Kind of ironic, right, that the emphasis is on the whole, the product, with no spotlight on the individual writer?  Almost like one of those leftist collectives ;)

Anyway, the ending sentence that impressed me was in a report on China building a new "silk road" via Kazakhstan. 
There are many ways a train can derail.
How awesome!  You need to read the entire essay in order to understand why it is such a wonderful line.

We might remember many opening sentences: Tolstoy's "all happy families are alike; each unhappy family is unhappy in its own way" in Anna Karenina.  Or Melville's "Call me Ishmael" in Moby-Dick. Or, of course, Dickens' "it was the best of times, it was the worst of times" in A Tale of Two Cities.   But the final lines rarely get the same applause.

My favorite last line is from a movie--an old movie.  No, not from Casablanca, though that one is phenomenal as well.  The one I love, love, as an ending is from Some Like It Hot.  A marvelous final line! ;)