Of Jaron Lanier...
To be fair, I only came across mention of this chap this evening. He seems to be highly accomplished, and earns my respect for clearly stating that he has no academic degrees. But on his page about AI, it seems to me that he commits many a cardinal sin.
He starts out with a reasonable thought experiment – the one where each neuron is replaced in turn by an artificial substitute. Unfortunately, whilst talking about AI, he is using a thought experiment about artificial consciousness, but I guess I can let that slide for the moment. No account is taken of the possibility that consciousness may reside not only in the nervous system, but also in the muscles, bones, organs and various fluids of the body. Many chemicals inside us are effective neurotransmitters, and may be responsible for some of the continuity of self which we experience. Additionally, there are reasonable theories about electromagnetic fields making up a part of our consciousness, related to, and interacting with, our neurons, but not intrinsically a part of them. So, sadly, the thought experiment may well fall down at this point.
But let us assume that we can indeed make a software emulation of our consciousness (I know, I haven’t really defined what I mean by that yet), including all the awkward bits which we haven’t fully identified yet. In order for it to be the same consciousness as ours, it would have to be experiencing exactly the same things as us, and, of course, it would have to be perfectly initialized with the same state information and exactly the same dynamics. This is, of course, not possible thanks to the Heisenberg uncertainty principle. But… if we made enough of these simulations and randomized the way they were set up, there is a chance (a very, very small one, but it is there nevertheless) that we may now have a simulation which had exactly the same things going on in it as we have in our own consciousness – provided it continues to experience exactly the same stimuli as we do. Which it doesn’t, and cannot, unless we model the entire universe with exactly the right states in order to interact with it, down to the quantum level (and, at the rate physics is progressing these days, possibly even beyond).
So, sorry, the rest of the rant about rain storms is hugely irrelevant.
In his thought experiment number 2, he discusses the Turing Test. Personally, I have no truck with the Turing Test as having anything to do with intelligence, and less still with consciousness. However, I have not read anything apart from this chap’s writing which suggests the Turing Test has anything to do with consciousness, so that is OK. If a computer (or anything else) exhibits consciousness, personally I would argue it should be ascribed rights equal to any other conscious being. But then I am still a meat eater, so obviously I cannot actually be taking that argument the whole way. On the other hand, I do believe there are different levels of consciousness, which assuages my guilt somewhat.
But one comment he makes is particularly worthy of note – the claim, attributed to machine intelligence, that consciousness emerges from intelligence. I am not convinced that this is a claim made by people working in machine intelligence, but it certainly does appear to be a hope held by those in the field. I would argue that it is probably the other way around, however. Intelligence is the benefit which the huge cost of consciousness can confer upon us, making it worth having in evolutionary terms. And personally, I think that both are emergent properties of communities of sufficiently complex ‘agents’.
Anyway, he goes on to discuss the problem with AI being that if you label a computer program as intelligent, users will adapt to fit it, and that this reduces the chances of the software being developed properly, and also allows people to abdicate responsibility – “oh, sorry, I cannot do anything about it, the computer made the decision”. In some ways I applaud this view – because to some extent it is true. However, the responsibility shirkers never get away with it when they are dealing with me, nor indeed my Mum. But then we don’t have intelligent software yet, so maybe things would change if we did. The thing is, actually intelligent software will adapt to fit the user as much as the user adapts to fit the software. But then, I don’t think the article was ever really about real machine intelligence, was it – see, “Artificial” – says it all.
