AI is an intriguing thing and this is an intriguing video.
Thread: AI - a brave new world
-
06-05-2017, 09:08 PM #1
-
06-05-2017, 11:20 PM #2
Interesting video, and he lays out some interesting things to consider about the use of AI, but I am not sure I buy his argument that machines are getting smarter than humans. The examples he gives are domain-specific, and rely on superior processing speed and computational ability. That is a far cry from human general intelligence.
I read a book by physicist Roger Penrose about 15-20 years ago, in which he argues that no machine created by humans can ever eclipse collective human intelligence. If memory serves, his argument has two parts.
First, human cognition is non-computational (he gives examples of problems humans solve that cannot be solved algorithmically) and cannot be captured by a formal system. He asserts that collective human reasoning is sound, provided that the mathematical axioms and assertions claimed to be "unassailably" true are in fact true (the biggest point of contention). If human reasoning could be captured by a known formal system, you should be able to construct a true mathematical statement (a Gödel sentence) which you know is true, but whose truth you would be unable to see from within that system -- a contradiction. Therefore, human cognition cannot be captured by a mathematical formal system.
Second, the machines we create are, of course, computational. Although in practice a machine's problem-solving computations can be far more complex than humans could ever parse, no step in the process is fundamentally beyond human cognition. Because a machine operates within a formal mathematical system, the same argument applies: knowing the system, we humans should be able to construct a true mathematical statement (a Gödel sentence) that is unprovable to the machine, but whose truth humans can see. Therefore, machines created by humans will always have fundamental limitations that prevent them from exceeding, or even matching, human cognition and intelligence. There are a lot of objections to the argument, but I find it an intriguing one.
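For what it's worth, the Gödelian step can be sketched schematically. This is my paraphrase of the standard diagonal argument, not Penrose's exact formulation:

```latex
% Assume human mathematical reasoning is captured by a formal system $F$
% that we recognise as sound. Goedel's diagonal construction gives a
% sentence $G_F$ that in effect asserts its own unprovability:
\[
  F \vdash \bigl( G_F \leftrightarrow \neg \mathrm{Prov}_F(\ulcorner G_F \urcorner) \bigr)
\]
% If $F$ is consistent, then $F \nvdash G_F$; yet from the soundness of
% $F$ we can see that $G_F$ is true. So there is a truth we can
% recognise that $F$ cannot prove -- contradicting the assumption that
% $F$ captures all of our mathematical reasoning.
```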
Although, who knows? Even if machines ultimately lack the ability to solve problems at the upper end of collective human intelligence, they may still be able to dominate simply through their ability to solve computationally complex problems exceedingly quickly -- and this will be magnified exponentially once quantum computation becomes more advanced.
It takes a big man to cry, but it takes a bigger man to laugh at that man.
-
06-06-2017, 12:17 AM #3
-
06-06-2017, 04:47 AM #4
-
-
06-06-2017, 08:19 AM #5
-
06-06-2017, 09:00 AM #6
Or to put it more simply, a system cannot deliberately create a system more complicated than itself. How can an intellect imagine something beyond itself, when the terms in which it can think and express itself are defined and limited by that very same intellect?
The possibility this doesn't exclude is unintended consequences. If somebody believes they are creating A, but actually creates B; with enough iterations B becomes something greater than A. The issue then becomes whether humans recognise the mistake as being superior and preserve it or destroy it.
There is a thermodynamic argument in relation to this, but I am really bad at trying to do thermodynamics in English.
Screw nature; my body will do what I DAMN WELL tell it to do!
The only dangerous thing about an exercise is the person doing it.
They had the technology to rebuild me. They made me better, stronger, faster......
-
06-06-2017, 09:36 AM #7
http://www.scottaaronson.com/blog/?p=2756
I haven't had time to read this, so I can't say whether I agree or disagree, but he refutes Penrose here, and it looks like an interesting though somewhat long read.
Was friends with Methuselah
-
06-06-2017, 10:01 AM #8
Very long read which I just skimmed part of. It's basically a scientific opinion piece, no more or less valid than Penrose's.
My personal opinion is that AI will be something like the Lamb Shift. To explain the Lamb shift very simply, there are two lines very close together on a spectrum. Theory said there should have been one line and everybody dismissed it as being within the margin of error. Lamb said "hang on a minute, we are scientists, just because it doesn't meet our expectations of what we should see, we shouldn't ignore it, we should investigate it (climate "scientists" please take note of this) and bing badda boom he walked away with a Nobel prize and something named after him.
Something will be unexpected but seemingly innocuous and ignored as irrelevant until somebody has a eureka moment. Until that happens we will just see increases in efficiency (assuming that hardware development doesn't slow down due to evanescent wave effects from small die sizes and the statistical breakdown in electron behaviour with ever smaller currents) because people are tied in to thinking a certain way.
Eureka moments are becoming less and less frequent, as diverging from scientific orthodoxy or groupthink carries an ever greater likelihood of career suicide. For an industry that espouses creativity, it does everything it can to stifle it. In today's climate, the Lamb shift would never have been discovered.
-
-
06-06-2017, 12:10 PM #9
Penrose actually addresses this. In his view, this could only happen if some step in the iteration or evolutionary process is in principle (not simply by virtue of computational complexity) beyond human understanding, or non-computational... which he views as unlikely. Otherwise, it is subject to the same limitation as any other Turing-style machine. However, human brains were subject to an iteration process as well, and he argues that human intelligence cannot be fully captured by a computational process... so, to your point, it seems reasonable that similar forces could operate on human-created machines as well.
I will have to check this out later, but at first glance, it appears that he is addressing consciousness. Penrose (unfortunately) conflates intelligence and consciousness, which I believe is an error. Intelligence does not require consciousness, and therefore tearing down a theory of consciousness doesn't necessarily invalidate the logic underlying artificial intelligence. But it looks interesting...I will take a look later.
Yup, agreed. It will take a qualitative shift in approach rather than a simple quantitative iteration of current orthodoxy. And your latter point is 100% correct: groupthink and conformity are absolutely rewarded in the sciences. At least in the United States, watch what happens to your federal funds if you stray outside the mainstream.
-
06-06-2017, 01:31 PM #10
I agree that there is a great difference between intelligence and consciousness. I think machines will soon (25 years or so, maybe less) be more intelligent than any of us; of course, I also think by then we'll all be cyborgs of one level or another, so the difference in intelligence may be moot. BUT, I don't think machines will ever be conscious beings. The whole "transfer your memories to a machine to live forever" idea is flawed, imho.
-
06-06-2017, 01:39 PM #11
Yup, I think it will be more complex than a simple dichotomy of biological and artificial intelligence, and as you suggest, they will likely be integrated. On consciousness, Searle's Chinese room example is compelling: informational/processing complexity cannot by default give rise to consciousness -- and specifically, consciousness may be a non-functional byproduct of the neurological substrates carrying out ordinary cognition. I have always been quite partial to this view.
-
06-06-2017, 01:44 PM #12
I haven't read the statement from which that was derived, but if he actually stated that it was unlikely (as opposed to your paraphrasing), it demonstrates an intellectual conceit on his part, in that he hasn't pursued the idea to its logical conclusion.
With an iterative process, something either can or cannot happen; it is a binary state. If something can happen, however unlikely, sooner or later it will happen; the only variables are the timescale and the number of iterations before occurrence.
As the number of people in the field increase and the resources invested grow, the number of iterations per unit of time will grow geometrically.
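The "sooner or later" claim is just the mathematics of repeated trials: if an event has any fixed per-iteration probability p > 0, the chance of it happening at least once in n independent iterations is 1 - (1 - p)^n, which approaches 1 as n grows. A minimal sketch (p = 1e-6 is a made-up figure for illustration):

```python
# Probability that an event with per-trial chance p occurs at least
# once within n independent iterations: 1 - (1 - p)^n.
def prob_at_least_once(p: float, n: int) -> float:
    return 1.0 - (1.0 - p) ** n

# Even a one-in-a-million event becomes near-certain with enough iterations.
for n in (10**3, 10**6, 10**9):
    print(f"n = {n:>10}: P(at least once) = {prob_at_least_once(1e-6, n):.6f}")
```

With p = 1e-6, a thousand iterations give roughly a 0.1% chance, a million give about 63%, and a billion make the event all but certain, which is the point about iterations per unit of time growing with investment.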
On a side note, for a geek thread, I'm amazed at the number of views this thread has.
-
-
06-06-2017, 02:16 PM #13
-
06-06-2017, 02:56 PM #14
Bookmarks