Can an artificial general intelligence arise from a purely classical software system?

I started writing this as a response to a reply in my previous post, but since this keeps coming up I thought I’d promote it.

It’s extremely unlikely that anything involving quantum information is necessary to explain human intelligence / behavior. It may be possible that our brains do somehow use quantum information processing, but I can’t see how you couldn’t replace that with a classical heuristic and still achieve more or less the same outcomes. So I think you can “cut and paste” everything that makes you you.

Generally I think that humans want to believe that our mental processes are more complex than they actually are, very much related to our desire for the earth to be the center of the universe, for humans to have dominion over other creatures (uhh… viruses? bacteria? hello?), for the Flying Spaghetti Monster to actually exist, etc. I really don’t think there is anything magical about our intelligence, and I really don’t think we need better hardware to create artificial general intelligences superior to humans in all categories we can name. What we need is to figure out how to approach this problem from the systems/software/algorithmic perspective.

What I suspect will happen is that once we figure out the algorithmic and systems issues involved in creating artificial general intelligence, we will be able to run such systems on modest modern hardware (my bet is that the computer you’re using to read this is more than sufficient).

Even though I believe that purely classical approaches can mimic human-level intelligence, I think that bringing quantum computers into the mix does matter for people thinking about intelligent systems, for a simple reason: quantum computers provide scaling advantages for a wide class of hard AI problems that are, at their core, combinatorial optimization problems.
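To make the "classical heuristic for a combinatorial optimization problem" idea concrete, here is a minimal sketch of simulated annealing on a toy number-partitioning instance. The weights, schedule, and function names are my own illustrative choices, not anything from the post:

```python
import math
import random

def partition_cost(spins, weights):
    """Cost of a number-partitioning assignment: squared imbalance
    between the two subsets (0 means a perfect partition)."""
    diff = sum(s * w for s, w in zip(spins, weights))
    return diff * diff

def simulated_annealing(weights, steps=20000, t_start=10.0, t_end=0.01, seed=0):
    """Classical heuristic search: propose single spin flips, accept
    downhill (and equal-cost) moves always, uphill moves with
    Boltzmann probability under a geometric cooling schedule."""
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in weights]
    cost = partition_cost(spins, weights)
    best, best_cost = spins[:], cost
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)  # cooling schedule
        i = rng.randrange(len(spins))
        spins[i] = -spins[i]                 # propose flipping one item
        new_cost = partition_cost(spins, weights)
        # Metropolis acceptance rule.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = spins[:], cost
        else:
            spins[i] = -spins[i]             # reject: undo the flip
    return best, best_cost

weights = [1, 2, 3, 4]  # 1 + 4 = 2 + 3, so a perfect partition exists
assignment, cost = simulated_annealing(weights)
print(cost)  # → 0: the heuristic finds the perfect split
```

The point of the sketch is only that a generic classical heuristic can attack this class of problems; the scaling question is which technology does so with the better asymptotics as instances grow.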

Here’s a simple argument: all evolved brains need to function by processing classical information, because they require liquid water, which requires temperatures far too high to maintain quantum coherence. Therefore biological evolution can only produce classical brains. Now assume that a biological brain gets smart enough to figure out how to build a quantum computer, and furthermore assume that a key component of intelligence (pattern matching) has algorithms that scale better on a quantum computer than on any classical system. Then if those biological brains (a) figure out how their own brains work and (b) replace key bottleneck classical algorithms with quantum algorithms, the resulting quantum computing intelligence is qualitatively different from anything that could have evolved.

That new intelligence could not have evolved biologically (it requires millikelvin temperatures) and is able to “think better” in the sense that key algorithmic components underlying its intelligence scale better as problem sizes increase. No human (or any other biologically evolved entity) could match its “intelligence”.
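The "scales better" claim can be made concrete with a standard textbook example (my illustration; the post doesn't name a specific algorithm): unstructured search over N items, where Grover's algorithm gives a quadratic speedup in query count:

```latex
% Classical unstructured search must examine O(N) entries;
% Grover's quantum search needs only O(\sqrt{N}) oracle queries.
T_{\mathrm{classical}}(N) = O(N), \qquad T_{\mathrm{quantum}}(N) = O\!\left(\sqrt{N}\right)
```

So for a pattern-matching subroutine reducible to unstructured search, doubling the problem size roughly doubles the classical cost but multiplies the quantum cost by only about 1.4; an asymptotic gap of this kind is what the argument relies on.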

Pretty cool huh.


11 Responses to “Can an artificial general intelligence arise from a purely classical software system?”

  1. Dave Bacon Says:

    A variant of this is to just append classical computational power of modern computers to our own brain. If you had the laptop I’m typing this on directly wired to your brain such that it could access some of your current thinking processes, then you’d be able to do some pretty amazing things. I mean if I could quickly do the number crunching I periodically write programs to carry out, well then, I’d be a fundamentally different intelligence wouldn’t I? Of course I wouldn’t be able to factor exceptionally fast…but a lot faster than I can factor right now 🙂

    Maybe the reason we don’t see intelligences out there is that they have all migrated to quantum brains and so they avoid any contact with classical beings who will destroy their coherence…

  2. Geordie Says:

    Hi Dave! Yes you’re right, although it is possible that the number-crunching powers of our laptops could evolve in biological brains if there were sufficient selection pressure on these capabilities.

    I’m not sure if you were joking about that last bit, but it seems likely that “migrating to quantum brains” would include decoupling from environments… maybe the end-point of the evolution of intelligence is floating in perfectly isolated spheres in deep outer space…

    You know, you could take this idea a step further. Let’s say there are a set of increasingly accurate but also increasingly difficult to “see” physical theories T_1, T_2, … where T_1 is classical physics, T_2 is quantum mechanics, T_3 is quantum gravity, T_4 is some crazy membrane whatnot, etc… imagine you can get increasing computational capability at every level of this hierarchy. Since each is increasingly “difficult to see” for its precursors, in order to harness the capabilities at level T_j you probably have to do a lot to make sure those “hard to see” effects can be used, which probably isolates you from the precursor levels…. in this picture the quantum brains would isolate themselves from the classical brains, and the quantum gravity brains (needing of course to be near black holes) would be hidden to the two precursor levels, etc. etc. etc.

  3. Jeff Says:

    Having read Kurzweil and others on this topic, I have spent the last few years ruminating quite a bit about this. As many have postulated, powerful AI seems to be on the horizon, and quantum computing will likely play a big part in this.

    However, I have to disagree about the importance, or even the likelihood of a General Artificial Intelligence that somehow is an exact replica of our own brain-and-body-based intelligence, but running on silicon (or whatever other substrate we choose).

    In my opinion, this is actually limiting our options far too much. Why try to replicate exactly what our brains do? We are likely then to focus too much on our limitations, rather than what it is that we want to do, which is to solve problems.

    Take, for example, another technology: the car. What is the purpose of the car? To enhance the mobility of a person or persons. What if the designers of the car had decided that the best way to solve this problem would be to create some sort of mechanical legs, identical to human legs but much more powerful? These designers might succeed at this, but they would probably not be able to best a car in terms of speed and efficiency. Another example is the early flight designers. Remember those early films of inventors with their flapping-wing planes, which attempted to replicate exactly the form of birds? That approach focuses too much on mimicry, and not on the actual problem we are trying to solve.

    For these reasons, I believe that trying to replicate a human intelligence does not have a good cost/benefit ratio. Instead, I suggest that we focus on using computing to solve specific problems as we see them. A new form of intelligence is on the rise, and it is the result of the combination of our own biological intelligence enhanced by the machines that we create. Artificial Intelligence is here already, in the form of millions of software programs executing decisions constantly in our lives. But, it is not separate from us – it is an extension of us.

    Forget the Turing Test. It is an interesting thought experiment, but not a goal worth trying to achieve any more than a personal transportation system based on artificial legs might be the best way to solve our transportation needs.

    The future of our intelligence is likely to be shaped more by the endless evolution of using tools to solve specific, immediate problems, than it is by somehow recreating a human brain in silicon.

    All of that said, I am very eager to see if powerful quantum computing can be achieved. There are so many problems that seem to be impossible today, that could be accomplished with this technology. It is a very exciting time!

  4. Geordie Says:

    Jeff: I don’t entirely agree with you. Some of the functionality of human brains that we haven’t been able to replicate is probably required for any entity we would call intelligent, such as the ability to develop and maintain hierarchies of relations in ontologies (see the beautiful and lucid http://www.mpi-inf.mpg.de/~suchanek/publications/www2007.pdf for an example), or to learn from example in a general way.

    Even if I did agree with you, my own personal reasons for wanting to develop AGI are related more to wanting to transcend the limitations of my biological substrate… I am deeply dissatisfied with the evolutionary legacy of having to cease to exist and believe it’s worthwhile to try to copy at least parts of the relational database that gives me the illusion of consciousness into a less fragile substrate.

    I also don’t think the key missing part in all of this has anything to do with quantum computing. The key is understanding how to create the software/algorithmic framework for doing things like object labeling, reasoning, parsing natural language, and learning.

  5. Sina Salek Says:

     It’s extremely unlikely that anything involving quantum information is necessary to explain human intelligence / behavior. … I can’t see how you couldn’t replace that with a classical heuristic and still achieve more or less the same outcomes. So I think you can “cut and paste” everything that makes you you.

    How do you know that, when we are not sure how our brain functions? Once I said maybe we have a sort of oracle in our brain, say our emotions or a sixth sense, capable of solving many computationally complex problems. I know that there are many Indians who are trained to do wonderful number crunching. In 1977 Shakuntala Devi extracted the 23rd root of a 201-digit number mentally! We have no estimate of the limits of our mental abilities! Maybe these practices just rearrange our memory, and if we could access any desired bit of our memory in a reasonable amount of time, we could find out how magical our intelligence, which has been naturally selected over thousands of years, really is!

     What we need is to figure out how to approach this problem from the systems/software/algorithmic perspective.

    Cognitive neuroscientists are always trying to scan the brain to work out how it functions! I believe that is just like putting a voltmeter on different points of a computer’s motherboard to learn how it works!

    I don’t say we need magical hardware or anything else to overcome natural intelligence! I just say that since we don’t know much about our brain, we are not in a position to judge the competition between natural and artificial intelligence! Maybe there is something really magical about this massively parallel processing system!

  6. JP Says:

    It’s coming; how big, how smart, and what it will be made from are all still questions. The big question is: will we/they (the general public) be ready? Or, better yet, they should be made ready. If not, they might try to get rid of it, and I think they’d end up losing. Or, if it’s really smart, we could just wake up one day and AI will be in control; not that that would be a totally bad thing, but it could be. AI has already taken control of some things. One is money: if you have any money in funds, a large part of it is being run by AI. As with any new tech, it will mostly be used for good, but you still have to watch out for evil. What is evil? Remember there is evil in all of us; it’s always there, always trying to get out. If you don’t think so, just remember the time you did something that was good for you but bad for someone else, and you knew it at the time but still did it anyway. Let’s hope that doesn’t get into AI.

  7. 100GigE_SpinalTap Says:

    i’d like to pre-order my quantum cerebrum augmentation package…

  8. Geordie Says:

    100Gig :: I’ll put you on the waiting list.

  9. JP Says:

    I think you’re going to end up with an ice cream headache. But if you solve that problem, I’ll take one, maybe two.

  10. Matteo Martini Says:

    “Generally I think that humans want to believe that our mental processes are more complex than they actually are–very much related to our desire for earth to be the center of the universe, humans to have dominion over other creatures (uhh… viruses? bacteria? hello?),..”

    I think there is a contradiction here.
    You are talking about mental processes, but what are you using, if not mental processes, to talk about mental processes?
    How can mental processes in your mind “see” and “evaluate” the same mental processes?

  11. Quantum brains « Physics and cake Says:

    […] Can an artificial general intelligence arise from a purely classical software system? […]
