Ray Kurzweil, author of (among other books) The Age of Spiritual Machines, expounds on the promises and pitfalls of the coming expansion of GNR (genetics, nanotech, and robotics) technology, claiming that by 2029 scientists will have effectively modelled the human mind, producing artificial intelligence fully capable of passing a Turing test.
Having just spent upwards of 25 hours in a car driving between Peterborough, Toronto, and Pukaskwa National Park, we passed much of the time listening to a variety of podcasts, including Philosophy Bites, CBC Ideas, and the Long Now Foundation’s Seminars About Long Term Thinking (SALT).
While SALT has hosted a bevy of fascinating and influential guests, including Craig Venter, Jimmy Wales, Francis Fukuyama, and Ray Kurzweil, the talk Daemon: Bot-Mediated Reality, by author and software engineer Daniel Suarez, was one of the most interesting and thought-provoking (mp3 here).
I have a lot of catch-up listening to do with regard to The Long Now Foundation’s excellent Seminars About Long-term Thinking (SALT) lecture and podcast series. I’m a charter member of the Foundation, which gets you a sweet membership card and access to video of their lectures, among other less tangible things, like knowing you’re helping inject some much-needed awareness of long-term thinking and planning into public discourse.
Unfortunately, I think that in the near future, as more and more processes are automated, we will see more screw-ups on this scale. I can’t help but think this one might have been avoidable, though, if the indexing engine had been able to take advantage of semantic data rather than relying on scraping and evaluating natural language.
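To make the distinction concrete, here is a rough sketch of the difference (purely illustrative, not how any real indexing engine works; the article text and metadata fields are made up, though `datePublished` echoes the schema.org vocabulary):

```python
import re

# Scraping approach: a crawler reading free text has to guess which
# date is the publication date -- any year in the prose is a candidate.
article_text = ("The airline filed for bankruptcy in 2002. "
                "Shares fell sharply today.")
candidate_years = re.findall(r"\b(?:19|20)\d{2}\b", article_text)
# The crawler may wrongly treat an old story as current news.

# Semantic approach: the publisher states the date explicitly in
# machine-readable metadata, so there is nothing to guess.
article_metadata = {
    "headline": "Airline files for bankruptcy",
    "datePublished": "2002-12-09",
}
published = article_metadata["datePublished"]
```

With the scraped text, the only year available is one buried in the story body, which is exactly the kind of ambiguity that lets an old article get re-indexed as fresh news; the metadata field removes the guesswork entirely.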
Despite this minor setback, people in the US military kept their cool and did not leap to extreme conclusions like, “maybe we shouldn’t put guns on robots.”
Though these friendly-looking little guys were pulled from operation, there is no indication that the MQ-9 Reaper airborne wardroids (aka bringers of death from above) have been retired.
As usual in the US military, clear heads prevail.
*UPDATE* Apparently this was a bit of an internet hoax, and the guns did not in fact accidentally aim at humans… according to the defense contractors who made the robots, whose credibility, incidentally, I do not doubt for one instant. Anyone wise enough to put guns on semi-autonomous robots is surely to be trusted unquestioningly.