Evolution of Androids? (A summarizing paper for futurists, those in fear of robots, and those who merely take an interest in this topic.)

Introductory summary

 Pinocchio, the mysteriously seductive Olimpia in E.T.A. Hoffmann’s The Sandman, the Tin Man in L. Frank Baum’s The Wizard of Oz, the Terminator, Data from Star Trek – they are all machines somehow breathing the very life we possess. Are they imaginative science-fictional dreams, or are they a possibility that may yet be realised? Could they possibly live out a co-evolution alongside us human beings, alongside organic life at all?

 In the following I will summarise what some scientists and philosophers argue and describe. (I will avoid the philosophical questions of whether artificial intelligence could be capable of thinking, could have emotions and consciousness. That is part of epistemology, metaphysics (if one likes it), or philosophy of language.)

 Michio Kaku shows that, in spite of all the technical problems scientists face in developing robots, the idea of a co-existence of humans and thinking machines doesn’t violate the known laws of physics. Yet he doesn’t seem to believe that robots will replace humans in many of their workplaces, or match their originality and creativity.

 Klaus Mainzer argues, with reference to complexity theory and Stephen Wolfram, that the inherent unpredictability of natural processes (even if we should one day determine all the laws of nature), which is describable by computation, is the very condition that allows those processes to be mimicked by technological inventions – provided certain further conditions are met. So it turns out that the idea of a co-evolution of artificial intelligence, or of intelligent computer functions, is no longer an impossibility.

.

Stuff once declared impossible that now is not

 Michio Kaku continues in his Physics of the Impossible to look at the question whether something that is impossible today will still be impossible in a far-away future. His attraction to that question arises from the myriad of errors made in declaring stuff impossible. Aeroplanes, X-rays, radio, the nuclear bomb, black holes, rockets into space – they were all once declared impossible; and, by the way, not by unknowns illiterate in physics but by world-famous physicists themselves: Lord Kelvin, Lord Rutherford, Albert Einstein, Stephen Hawking. Just to name a few who once supposed something to be impossible that was realised after all.

 That prompted Kaku to quote Sir William Osler: “The philosophies of one age have become the absurdities of the next, and the foolishness of yesterday has become the wisdom of tomorrow” (p. xv). On that account Kaku transforms a pronouncement once given by T. H. White in his The Once and Future King: “Anything that is not impossible, is mandatory!” (original by White: “Anything that is not forbidden, is mandatory!”) (p. xv).

 Kaku gives an example. Stephen Hawking tried to prove that time travel was impossible according to the laws of physics. However, not only did he fail to find such a law; it could even be “demonstrated that a law that prevents time travel is beyond our present-day mathematics” (p. xvi).

 Kaku derives from this that, under the condition that the fundamental laws of physics are basically understood, a system of categories is needed to declare stuff impossible, or not. Or, to be more accurate: he proposes three categories of the “impossible”.

  1. Class I impossibilities: To this class belong technologies which are impossible to construct at the current state of technological development. Nevertheless, they do not violate the known laws of physics. That’s the case, e.g., with teleportation, certain forms of telepathy, invisibility, and – well, yes – robots.
  2. Class II impossibilities: To this class belong technologies “that sit at the very edge of our understanding of the physical world. If they are possible at all, they might be realized” in a far-away future (p. xvii) – e.g. time machines, hyperspace travel, and travel through wormholes.
  3. Class III impossibilities: To this class belong technologies “that violate the known laws of physics. (…) If they do turn out to be possible, they would represent a fundamental shift in our understanding of physics” (p. xvii). This is the case with perpetual motion machines and precognition.

.

Michio Kaku checks out the Top-Down Approach and the Bottom-Up Approach

 With robots there are two interrelated problems, writes Michio Kaku in the “Robots” chapter of his Physics of the Impossible (pp. 103-125). Those two problems refer to two basic aspects of, especially, human life: 1. pattern recognition, 2. common sense.

 When it comes to pattern recognition, it is a matter of fact that robots can see and hear better than human beings, but they are not able to understand what they see and hear. So the goal of the Top-Down Approach has been “roughly speaking, (…) to program all the rules of pattern recognition and common sense on a single CD. By inserting this CD into a computer, they believe, the computer would suddenly become self-aware and attain humanlike intelligence” (p. 110). This approach succeeded in developing programs and robots that were able to play chess or checkers, calculate mathematical functions, or navigate a room.

 Yet they soon found themselves in a deadlock. First, those huge machines were only able to move through two-dimensional space along straight lines. The background to this failure is that we human beings see things (e.g. furniture), while robots only see an accumulation of geometric objects. Second, it is neither possible to translate common-sense knowledge into calculations or mathematical formulas, nor to find all the laws of common sense – there are simply too many of them. While human beings learn by interaction, robots only know what their program contains.
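
 To make the brittleness of the top-down idea concrete, here is a deliberately tiny, hypothetical sketch (my own, not an example from Kaku): a “recognizer” that consists of nothing but hand-written rules over a few made-up features. Anything the rule book does not anticipate simply falls through.

    # Hypothetical sketch of top-down pattern recognition: nothing but
    # hand-written rules over a feature dictionary. The object names and
    # features are illustrative, not taken from Kaku.
    RULES = [
        ("table", lambda f: f["legs"] == 4 and f["flat_top"]),
        ("chair", lambda f: f["legs"] == 4 and f["has_backrest"]),
    ]

    def recognise(features):
        for name, rule in RULES:
            if rule(features):
                return name
        return "unknown"  # the rule book has nothing to say about this object

    print(recognise({"legs": 4, "flat_top": True, "has_backrest": False}))  # table
    # A three-legged stool already falls outside every rule:
    print(recognise({"legs": 3, "flat_top": True, "has_backrest": False}))  # unknown

 Every new kind of object, and every small deviation, requires yet another rule – which is exactly the deadlock described above.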

 Weicker describes the same deadlock in Evolutionäre Algorithmen (p. 225): logical deduction in separate systems for perception, planning, and action led to unpredictable behaviour and to the collapse of those machines when faced with small discrepancies. That is why differently constructed machines were needed – machines able to interact with their environment through simulated evolution in artificial neural networks connected to feedback-giving receptors.

 This project is what Kaku calls the Bottom-Up Approach. In accordance with this paradigm, scientists try to develop machines that mimic evolution and the way babies or insects learn – which is to say, they learn by trial and error. But their success has remained within very narrow limits.
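
 To give the bottom-up idea an equally tiny illustration – a generic sketch, not a reconstruction of any system Kaku or Weicker describes – here is a single perceptron that learns the logical AND function purely from an error signal, i.e. by trial and error, instead of from pre-programmed rules.

    # Minimal sketch of bottom-up learning: a single perceptron adjusts its
    # weights from feedback (the error signal) instead of following
    # pre-programmed rules. The task (logical AND) is chosen purely for brevity.
    import random

    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND
    w = [random.uniform(-1, 1), random.uniform(-1, 1)]
    b = random.uniform(-1, 1)
    lr = 0.1

    for epoch in range(100):                     # repeated trial and error
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out                   # feedback from the "environment"
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err

    print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])

 The point is only the learning scheme: the weights are nudged by the error signal until the behaviour fits – the principle behind the neural networks mentioned above, scaled up enormously, of course.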

 He closes his investigation by suggesting that maybe a synthesis of both approaches will lead to a breakthrough.

 Furthermore, Kaku explores whether robots are capable of emotions and consciousness. When it comes to emotions, he says they are possible under the condition that emotions serve several functions and are connected to the neural system. As for whether robots can be conscious, he is tired of the arduous philosophical and theological discussions; instead he proposes simply to try it.

 Finally he concludes:

“The idea of creating thinking machines that are at least as smart as animals and perhaps as smart or smarter than us could become a reality if we can overcome the collapse of Moore’s law [the end of the silicon age] and the common sense problem, perhaps even later in this century. Although the fundamental laws of AI are still being discovered, progress in this area is happening extremely fast and is promising. Given that, I would classify robots and other thinking machines as a Class I impossibility” (p. 125).

 That is optimistic. And this attitude is shared, maybe even more strongly, by those who are fascinated with complexity theory.

.

Computational equivalence as the breakthrough to androids?

 First, let me sum up some basic ideas of complexity theory (cf. Hromkovič: Sieben Wunder der Informatik (pp. 109-110, 143-144, 150, 179, 185-186) and Theoretische Informatik (pp. 206-207); Mainzer: “Komplexe Systeme und Nichtlineare Dynamik in Natur und Gesellschaft”, in: Komplexe Systeme und Nichtlineare Dynamik in Natur und Gesellschaft (pp. 3-29)). Computer science faces two basic problems. First, it can be proven that there are problems which cannot be solved by algorithms at all. (That concerns especially non-trivial semantic problems: what does the program compute? does the program solve the problem? is the program correct?) Which is to say that not everything can be automated. The second basic problem is that there are practically unsolvable problems: problems for which an algorithmic solution would need more energy than the whole universe contains (e.g. the non-linear three-body problem, chaotic dynamics, some cellular automata). There are two ways to deal with this. One way is to prove the non-existence of efficient algorithms for the concrete problems; at present, however, this is mathematically out of reach. Another way is to use randomised algorithms, for it can be proved that under certain circumstances this approach yields a wrong result only once in 10^18 cases (approximately the age of the earth in seconds).
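
 As a concrete illustration of such a randomised algorithm – a standard textbook example chosen by me, not taken from the pages of Hromkovič or Mainzer cited above – consider the Miller-Rabin primality test. For a composite number, each round errs with probability at most 1/4, so 30 independent rounds push the error probability below 4^-30 ≈ 8.7 · 10^-19, i.e. below the one-in-10^18 mentioned above.

    # Miller-Rabin primality test: a randomised algorithm with a provably
    # tiny error probability (< 4**-rounds for composite inputs).
    import random

    def is_probably_prime(n, rounds=30):
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        # write n - 1 as d * 2**r with d odd
        d, r = n - 1, 0
        while d % 2 == 0:
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False        # a is a witness: n is certainly composite
        return True                 # probably prime (error < 4**-rounds)

    print(is_probably_prime(2**61 - 1))   # a Mersenne prime -> True
    print(is_probably_prime(2**61 + 1))   # composite (divisible by 3) -> False

 The trade-off is exactly the one described above: a practically negligible chance of error is accepted in exchange for an answer that is actually computable in reasonable time.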

 What does this have to do with robotics? Stephen Wolfram’s A New Kind of Science seems to have paved the way. His starting point is complexity theory’s insight that there are many everyday problems and systems which cannot be described mathematically. That led him to experiment with cellular automata. They resemble a chessboard of black and white boxes. Those boxes have only one task: to turn black or white. They complete this task according to a certain rule (cf. http://en.wikipedia.org/wiki/Cellular_automaton). Those experiments delivered two results. First, some of the automata turned out to be systems whose behaviour isn’t predictable. To be more precise, Wolfram formulated three categories of cellular automata: automata whose behaviour is always predictable (a fixed pattern), automata which show regular as well as chaotic behaviour (irregularly changing patterns as well as fixed patterns), and automata whose behaviour isn’t predictable at all (irregularly changing patterns). Second, this made it clear that, even though the rules of a system are completely known, the system’s behaviour isn’t predictable (pp. 27-41, 223-250, 737).
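
 Such an elementary cellular automaton is easy to play with. The following sketch (my own minimal version, assuming Wolfram’s standard rule numbering) runs Rule 30, the classic example whose rows look irregular and practically unpredictable even though the updating rule is completely known.

    # A minimal elementary cellular automaton in Wolfram's sense. Each cell
    # reads its left neighbour, itself, and its right neighbour as a 3-bit
    # number and takes the corresponding bit of the rule number as its new
    # colour. Rule 30 produces seemingly irregular rows from a known rule.
    RULE = 30                                  # Wolfram's standard rule numbering
    WIDTH, STEPS = 64, 20

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1                      # start with a single black cell

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in cells))
        cells = [
            (RULE >> (4 * cells[(i - 1) % WIDTH] + 2 * cells[i] + cells[(i + 1) % WIDTH])) & 1
            for i in range(WIDTH)
        ]

 Changing RULE to 250 gives a fixed, fully predictable pattern; Rule 30 does not – the difference between the categories described above, visible on a screen.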

 In a next step Wolfram transfers this computational view to the natural sciences – a step that is justified, according to him, since there is a cellular automaton able to mimic all the other cellular automata and thereby the complexity of all other systems. By this means you find computational equivalence in natural processes as well as in artificial ones. That means there is no hidden regularity behind them that could be grasped by any shortcut of thinking. That’s why natural processes aren’t predictable: we have to follow them step by step until the end, we cannot overtake them. And when they come to an end, we are facing our end, too (pp. 637-691, 715-748).

 The consequence of this, as Wolfram writes, is that this algorithmic unpredictability is the condition of the possibility of human freedom (p. 750). That means, according to Mainzer in Computerphilosophie (pp. 180-185) and Komplexität (pp. 113-116), that human freedom of will and decision is not an illusion, even though it cannot be justified, explained, or clarified by the phenomenon of unpredictability alone. Furthermore, it makes a technical co-evolution of an artificial intelligence that becomes independent possible, provided two conditions are met. First, thinking, emotions, and consciousness are related to neurobiological processes of the brain. Second, the laws of those processes can serve as patterns for the technical system in question. Then such systems would be able to learn and to deal with their environment.

.

Kaku’s scepticism giving hope to those who are willing to work or are in need of it

 Only about low-level accountants, brokers, and tellers is Kaku a little pessimistic, “since their work is semirepetitive and involves keeping track of numbers, a task that computers excel at” (p. 112).

 However, against the optimistic speculations of futurists, Kaku holds that “workers such as sanitation men, construction workers, firemen, police, and so forth, will also have jobs in the future because what they do involves pattern recognition. Every crime, piece of garbage, tool, and fire is different and hence cannot be managed by robots” (p. 112). Apart from this, “the jobs of the future will also include those that require common sense, that is, artistic creativity, originality, acting talent, humor, entertainment, analysis, and leadership. These are precisely the qualities that make us uniquely human and that computers have difficulty duplicating” (p. 113).


4 responses to “Evolution of Androids? (A summarizing paper for futurists, those in fear of robots, and those who merely take an interest in this topic.)”

  1. The notion of robots/automata is as old as civilization – both to supply workers that never tire or disobey and as a means of demonstrating what can be done, i.e. that humans can act like god in creating life. This latter was a very dangerous area to get into while the Christian church still had a lot of power. I don’t think anyone has any problem with robots doing repetitive, boring work, or dangerous stuff, but when they compete with humans, that is where we run into moral and social problems.

    • thanks for your thoughts! 🙂 you’re right about the fear, whereas I take a little shifted stance. let me put it that way: I’d prefer the notion “challenge”, which could become accompanied by fright. for me the world is like “the winds they are a changing”. so, we have to find the right notions and practices in order to solve problems. 🙂 anyway, at the moment we are way too far from anything like constructing androids. but I enjoy the movies. and Wall-E is an option, too… 😀

      • A lot of people are put off science fiction/fantasy as they see it as being too technical, but it is the genre where ideas can be discussed and worked through, so you can make a case for it being the most important genre for us as a species. They also stimulate our imagination, and so help generate all sorts of new ideas, or point out flaws in existing ones. Cartoons work in a similar way – look at what The Simpsons get away with – much of their humour would not be accepted in normal TV series. Context is all!
