Like most popular-format articles, this one summarizes a lot of disparate ongoing research and gives a particular take on it. It was a good and thought-provoking take, though, I thought. In part the author points out what should be obvious: nowadays one hears constant jabber about "artificial intelligence" that is not merely "deflationary," as the article phrases it at one point, but actively misleading. Meanwhile, we may wonder what is really meant by those two words, and in what forms that is going to manifest in our future. Sometimes people talk about "self-awareness" as a standard for AI, but that's a far bigger issue than mere "intelligence," for which I don't think one needs much perception of the "self" as a "perceiver."
One person quoted in the article, AI researcher Charles Isbell, offered an interesting standard: "'true' AI requires that the computer program or machine exhibit self-governance, surprise, and novelty." When you start to dig into what these terms might mean in different contexts, however, you quickly reach a cliff-edge. Is my K-cup machine self-governing, since it can regulate some of its own functioning and suggest what I should do to take care of it? My car does that too, and what will we think of self-driving cars? Are those glorified Roombas, or do they have something more going on in terms of a decision procedure?
People (including me) will sometimes say things like: well, some other human chose to give it THAT decision procedure, upon which it now depends. But we know that machines can improve their own procedures in response to results (should we call that learning, then?), and it's not clear that we humans give ourselves all, or most, or even the most important of our own decision procedures. So it's interesting to consider. Well, ttfn, SC, more later.

Statistics: Posted by Phoebe — Mon Apr 17, 2017 12:57 pm