
This is a thread that's reflected extremely well in Charles Stross's Accelerando. It took me a couple of readings of the novel to really make the connection between intelligent corporations in the story and what's actually happening in the real world.


Accelerando is one of the strangest books I've ever read.

The 'rapture of the nerds' (the singularity) is a theme worth exploring, but I've never given it much credit. After reading Accelerando I'm not so sure anymore.

It certainly is food for thought. Given the pace of progress in technology over the last 30 years or so, I'm pretty ambivalent about how fast it's going. Sure, there is lots of movement, but I don't see anything that is really mind-blowingly new. Networks existed; the internet is already decades old.

We're able to communicate faster and better, and the cost of a bit has kept on decreasing.

But we are not much closer to for instance real AI than we were 30 years ago (only then we were much more optimistic that we'd have it in 30 years).


>But we are not much closer to for instance real AI than we were 30 years ago

This is very hard to evaluate. It seems unlikely that progress towards strong AI will be linear - and I'm not basing that on Kurzweil or claiming it will be exponential, I just don't see how the notion of making steady progress towards strong AI even makes sense.


That's a good point.

OK, how about this then: if IQ were some kind of measure, would you accept that even lower animals have an IQ of sorts, and that they can solve simple problems on their own?

And that we still can't get a computer to solve anything on its own without very precise instructions?

A first step towards AI would then be to get a computer to be goal-driven enough that it could solve a simple problem without the problem being foreseen (and the solution spelled out) by the programmer.

It could still be years before that would lead to true AI in the sense that it could do something we could not, but it would be a step along the way.
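To make "goal-driven" concrete, here's a toy sketch (the classic water-jug puzzle; everything here is illustrative, not from the thread): the programmer states only the legal moves and the goal test, and a generic breadth-first search discovers the move sequence on its own. Of course, the programmer still foresaw the shape of the problem, which is exactly the gap being discussed.

```python
from collections import deque

CAPS = (3, 5)   # jug capacities in liters
GOAL = 4        # target amount in either jug

def successors(state):
    """All states reachable in one legal move."""
    a, b = state
    yield (CAPS[0], b)              # fill jug A
    yield (a, CAPS[1])              # fill jug B
    yield (0, b)                    # empty jug A
    yield (a, 0)                    # empty jug B
    pour = min(a, CAPS[1] - b)      # pour A into B
    yield (a - pour, b + pour)
    pour = min(b, CAPS[0] - a)      # pour B into A
    yield (a + pour, b - pour)

def solve():
    """Breadth-first search: only the rules and the goal are
    specified; the move sequence is discovered, not programmed."""
    start = (0, 0)
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if GOAL in state:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

print(solve())
```

The search engine is fully generic; swapping in different capacities or a different goal requires no change to `solve` itself.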


I think genetic algorithms surpassed that standard a long time ago.

While the developer has to define the playing field and the rules, the solutions the computer comes up with are far better than a random search through the solution space. Check out http://www.popsci.com/scitech/article/2006-04/john-koza-has-...

Of course that is nowhere near strong AI, but it is still a step in that direction.
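For anyone who hasn't seen one, here is a minimal genetic algorithm sketch (a toy "OneMax" fitness function with truncation selection; the details are illustrative, nothing here is from Koza's work). The developer defines only the fitness rules; the population converges on good solutions far faster than random sampling, which would average about 32 one-bits on a 64-bit string.

```python
import random

def fitness(bits):
    # OneMax: the "rules of the game" are simply to maximize the
    # number of 1-bits. The GA is never told how to achieve this.
    return sum(bits)

def mutate(bits, rate=0.05):
    # Flip each bit independently with a small probability.
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    # Single-point crossover of two parent bitstrings.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(length=64, pop_size=50, generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]   # truncation selection: keep the best half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # well above the ~32 a random bitstring averages
```

The point of the toy: the search pressure comes entirely from the fitness function, not from any solution the programmer wrote down.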

And we are working on simulations of the cortex of simple mammals (read up on Blue Brain).

I am not saying we're close to strong AI, but we are making small steps in the right direction. And if, philosophically speaking, you are a materialist like me (i.e. you believe that consciousness can in principle be explained in purely physical/chemical terms), then it is just a matter of time until it happens.


True, genetic algorithms are definitely a step in the right direction. There is the one famous example of the evolved filter circuit where even the designers of the software couldn't figure out how the damn thing worked, but work it did. Parts were connected in the weirdest ways, and if you took them out the circuit stopped functioning. Apparently the genetic algorithm had, through its feedback loop, discovered some non-trivial parasitic effects of certain components. No sane designer would have done it that way, but that's beside the point, because the filter had fewer parts than any designed directly by humans.

I'm very much a believer in material consciousness.

The Emperor's New Mind was an interesting read, but I found it to be a tremendous buildup to an almost magical reliance on quantum mechanics to provide us with our consciousness.

I don't have much use for magic, so I'll stick with regular physics and chemistry until proven otherwise.

There are too many brain-computer analogies to avoid the strong suggestion that the brain computes. It doesn't do it in a way that we can hook a debugger up to just yet, but the two do seem to have a lot in common. It's more of an architecture/interface problem.

But strong AI probably won't come from being able to interface with the brain, or from a sudden leap of insight into how it all works. I think strong AI will follow from a seed AI, which in turn will follow from some relatively simple breakthrough or insight that we still haven't gotten around to.

The simulation of the cortex of simple mammals is another such intermediate step. Do you have a reference for that?


I wasn't talking about brain computer interfaces, although that is an interesting field. Note to self: try our algorithms on EEG data...

My point regarding materialism was that if you can simulate the physical/chemical processes that happen in a brain, and I am talking about the molecular/quantum level, the result will be indistinguishable from the original.

The reference you asked for: project Blue Brain by IBM http://online.wsj.com/article/SB124751881557234725.html


I meant that to figure out how the brain works, we will have to figure out its architecture first; then we will be able to hook up to it to refine that knowledge.

Thank you for the link, I'll read it today. Hacker News is producing good stuff to read faster than I can keep up with!



