Robots and human interaction

Last year, before I even heard of DI, I resolved to read all 15 of Isaac Asimov's novels set in his Foundation universe this year. Why "before I even heard of DI"? Well, you may already know, but I won't spoil the detective work if you don't. (Hint: scroll down to the list about ¾ of the way down the wiki page.) Why "this year"? I spent the summer and fall studying for an exam I had put off long enough, leaving little time for any outside reading.

I read I, Robot nearly 40 years ago, and The Rest of the Robots some time after that, followed by the Foundation trilogy, Foundation's Edge when it was published in 1982, and Prelude to Foundation when it was published in 1988. I never read any of the Galactic Empire novels, the rest of the Foundation canon, or the "Robot novels," which is why I decided to read them all in the order Asimov laid out the timeline. I do like to re-read books, but I had never re-read any of the robot short stories, even after adding The Complete Robot to my collection in the early 1980s. As I've slept a bit since that first read, I had forgotten much, particularly how Asimov imagined people in the future might view robots.

Asimov is widely recognized as one of the grandmasters of robot science fiction; any geek knows the Three Laws of Robotics, and Asimov is in fact credited with coining the word "robotics." He wrote many of his short stories in the 1940s, when robots were only fiction. I promise not to go into the plots, but without spoiling anything I want to touch on a recurrent theme throughout Asimov's short stories (and at least his first novel; I haven't read the others yet): a pervasive fear and distrust of robots among the people of Earth. Humankind's adventurous element, those who colonized other planets, were not so hampered, but the mother planet's population had an irrational "Frankenstein complex" (so named by the author, though the reason is unknown to most of the characters, for whom Frankenstein is an ancient story in their timelines).
Afraid that the machines would take jobs, harm people (despite the Three Laws), and bring about the moral decline of society, few people (on Earth, that is) accepted or appreciated robots. {Note: the photo is the robot Maria from Fritz Lang's "Metropolis," now in the public domain.}

Robots in 1940s and 1950s pulp fiction and sci-fi films were generally menacing, like Gort in the 1951 classic The Day the Earth Stood Still, reinforcing the Frankenstein complex that Asimov explored. Or they were functional, like one of the most famous robots in science fiction, Robby in Forbidden Planet, or the "Robot" in Lost in Space, who always seemed to be warning Will Robinson of "Danger!" While writing this, I remembered Silent Running, a 1972 film with an environmental message featuring Huey, Dewey and Louie, small, endearing robots with simple missions, not too unlike Wall-E. Yes, robots were bad again in The Terminator, but we can probably point to 1977 as the point at which robots forever took on both a new enduring persona and a new nickname: droids. {Note: the 1931 issue of Astounding was published without copyright.}

Why the sketchy history lesson (here's another, and a very selective BBC "exploration of the evolution of robots in science fiction")? It was Star Wars that inspired Dr. Cynthia Breazeal, author of Designing Sociable Robots, as a ten-year-old girl to later develop interactive robots at MIT. Her TED Talk at December 2010's TEDWomen shows some of the incredible work she has done, and some of the amazing findings on how humans interact with robots. It is very interesting that people trusted the robots more than the alternative resources provided in Dr. Breazeal's experiments. Asimov died in 1992, so he did get to see true robotics become a reality. IBM's Watson recently demonstrated its considerable ability to understand and interact with humans, and is now moving on to the Columbia University Medical Center and the University of Maryland School of Medicine to work on diagnosis and patient interaction.
Imagine the possibilities… With Watson, and with Dr. Breazeal's and others' advances in robotics, I think Asimov would be quite pleased that his fears of human robo-phobia were without… I can't resist… Foundation.


Turning toward science?

According to this article by M. Mitchell Waldrop, the Templeton Foundation (endowment of $2B) seems to be making an adjustment away from religion and toward traditional science:

Towards the end of Templeton's life, says Marsh, he became increasingly concerned that this reaction was getting in the way of the foundation's mission: that the word 'religion' was alienating too many good scientists. This prompted a rethink of the foundation's research programme — a change most clearly seen in the organization's new website, launched last June. Gone were old programme names such as 'science and religion' — or almost any mention of religion at all (See 'Templeton priorities: then and now'). Instead, the foundation has embraced the theme of 'science and the big questions' — an open-ended list that includes topics such as 'Does the Universe have a purpose?'


Step up and solve a difficult social science question

Here are ten of the biggest unsolved social science questions:

1. How can we induce people to look after their health?
2. How do societies create effective and resilient institutions, such as governments?
3. How can humanity increase its collective wisdom?
4. How do we reduce the ‘skill gap’ between black and white people in America?
5. How can we aggregate information possessed by individuals to make the best decisions?
6. How can we understand the human capacity to create and articulate knowledge?
7. Why do so many female workers still earn less than male workers?
8. How and why does the ‘social’ become ‘biological’?
9. How can we be robust against ‘black swans’ — rare events that have extreme consequences?
10. Why do social processes, in particular civil violence, either persist over time or suddenly change?
Related article at Nature.


Affirmative action for conservatives?

I have written several posts holding that we are all blinded by our sacred cows, not simply those of us who are religious. This blindness afflicts almost all of us, at least some of the time. Two of my more recent posts making this argument are titled "Mending Fences" and "Religion: It's almost like falling in love." In arriving at these conclusions, I've relied heavily upon the writings of other thinkers, including those of moral psychologist Jonathan Haidt.

Several years ago, Haidt posited four principles summing up the state of the art in moral psychology:

1. Intuitive primacy (but not dictatorship).
2. Moral thinking is for social doing.
3. Morality is about more than harm and fairness.
4. Morality binds and blinds.

In a recent article at Edge.org, Haidt argued that this fourth principle has proven particularly helpful, and that it can "reveal a rut we've gotten ourselves into and it will show us a way out." You can read Haidt's talk at the annual convention of the Society for Personality and Social Psychology, or listen to his reconstruction of that talk (including slides) here. This talk has been making waves lately, exemplified by John Tierney's New York Times article.

Haidt begins his talk by recognizing that human animals are not simply social, but ultrasocial. How social are we? Imagine someone offered you a brand-new laptop computer with the fastest commercially available processor, but that the computer was broken in such a way that it could never be connected to the Internet. In this day and age of connectivity, that computer would get very little use, if any. According to Haidt, human ultrasociality means that we "live together in very large groups of hundreds or thousands, with a massive division of labor and a willingness to sacrifice for the group." {Image by Jeremy Richards at Dreamstime.com (with permission)}
Very few species are ultrasocial, and most that are achieve it through a breeding trick: all members of the group are first-degree relatives, and they concentrate their breeding efforts in a common queen. Human beings are the only animals that don't use this breeding trick to maintain their ultrasociality.


The evolution of the mechanism for evolution.

I must confess that I have something in common with Creationists: I find it difficult to understand how the earliest and simplest life forms came to exist. Unlike the creationists, however, I am not willing to suggest that the earliest life forms were created as-is by some sort of disembodied sentient Supreme Being. I can't fathom how such a Being could get anything at all done, given that "he" is alleged to be disembodied; some sort of physical neural network, for instance, is a prerequisite for cognition. Further, those who posit that life was created as-is by a supernatural Creator need to explain how that Creator got here in the first place; positing a Creator invites an infinite regress. Who created "God," and God's God, and so on?

Thus, I don't believe in a ghostly Creator, but where does this leave me? How did the earliest life forms emerge from non-life? Though firm answers have not yet been derived from rigorous scientific experimentation, I am intrigued by the ideas put forth by Stuart Kauffman in his 1995 book, At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. Early in his book, Kauffman points out that the simplest free-living cells (called "pleuromona") are highly simplified types of bacteria. They have a cell membrane, genes, RNA, protein-synthesizing machinery, and all the other gear necessary to constitute a form of life. Here's the problem:
