Hacker News

We also have no reason to suspect that it is possible.

The wall that silicon has smashed into should give us (well, not me, really the Singularitarians) pause.



> We also have no reason to suspect that it is possible.

This is simply not true.

First of all, we already know that intelligent algorithms are possible - we're living examples of that (once you put aside the philosophical objections that assume something non-algorithmic is happening in our brains). We also have very good reason to think that we're rather poor implementations of intelligence, given that, efficiency- and design-wise, evolution tends to suck, more or less.

Second, we know that we have a reasonable shot at hitting a point where we have enough computing power to actually simulate a full human brain. Now, you may argue that Moore's law will not take us there (exponential vs. sigmoid, etc.), but the point is, the probability is distinctly non-zero that within 20/50/100 years you or your children will be able to purchase enough computing power to simulate a brain. I'd probably argue that over a 100-year window we're looking at at least 50/50 odds (and IMO, the 50% where we don't have such power available mostly involves Big Trouble: worldwide nuclear war or something like that).
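The Moore's-law arithmetic above can be sketched as a back-of-envelope calculation. All the concrete figures here (the brain-simulation compute budget, today's consumer FLOPS, the doubling time) are illustrative assumptions for the sake of the sketch, not claims from the thread:

```python
# Back-of-envelope: years until consumer hardware reaches an assumed
# brain-simulation compute budget, given sustained exponential growth.
import math

def years_to_reach(target_flops, current_flops, doubling_years):
    """Years for capacity to grow from current_flops to target_flops,
    assuming one doubling every doubling_years (naive Moore's law)."""
    doublings = math.log2(target_flops / current_flops)
    return doublings * doubling_years

# Assumed figures: ~1e16 FLOPS for a brain-scale simulation,
# ~1e13 FLOPS in a consumer machine, doubling every 2 years.
print(round(years_to_reach(1e16, 1e13, 2.0)))  # ~20 years under these assumptions
```

If the growth curve is sigmoid rather than exponential, as the parent concedes it may be, the doubling time stops being constant and this estimate only bounds the optimistic case.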

Of course, without software, such hardware is useless. I'm in full agreement that this is the biggest pinch point, and I think the most uncertainty comes into the picture here: we don't currently have the technology to scan a brain in detail, we don't currently know how the neocortex wires up its functionality, and so on. Maybe we'll have the tech to do direct scans by then, maybe we won't; I'd say there's at least a small chance, maybe a few percent, going up over time. Over a hundred-year window, if we already have the computing power to simulate a brain, I'd say there's a reasonable shot that we could scan one to simulate, but nowhere near 100%.

There's also the possibility that someone comes up with a better algorithm than the one our brain clumsily implements: more compact, more efficient, or in some other way more accessible to us. I'd give that at least a small chance, too (which adds to the brain-scan chance above), given that we already have a vague sense of what such algorithms might look like (see the literature on approximating AIXI, for instance).

Once we've got something that simulates a human brain in software, it's a fairly simple matter to engineer ways to improve on the design: increasing speed, parallelism, connectivity, and so on. There are hundreds of variables to play with there that we can't safely mess around with in our own brains, and it's overwhelmingly likely that some combination of them can at least produce something that beats our intelligence by some not-insignificant factor.

So we've got some non-zero chance (it might be small, but my best estimate still puts it in the single-digit percentage range, using rather pessimistic assumptions) of building something that's maybe 2x as intelligent as we are within the next 100 years. From there, all bets are off: it might be able to further improve on its own design, it might not; but it also might be able to design something better, or create the technological improvements necessary to speed up Moore's law, etc. There's again at least a reasonable chance that it will continue to improve things and set off an "intelligence explosion" that takes it to 10, 100, 1000x our own intelligence, even if it levels off after that. As best I can figure, even a 10x intelligence explosion brings us so far beyond what we know that it might as well be the full Singularity as Kurzweil described it.
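The chained-odds reasoning above (each step must succeed for the whole scenario to happen) can be sketched numerically. Every per-step probability below is an assumption chosen only to illustrate how a single-digit overall estimate falls out:

```python
# Multiply assumed per-step probabilities, treating the steps as
# independent, to get an overall estimate for the scenario.
steps = {
    "enough hardware within 100 years": 0.5,
    "software via brain scan or a better algorithm": 0.2,
    "engineering a ~2x improvement on top of that": 0.5,
}

overall = 1.0
for step, p in steps.items():
    overall *= p

print(f"overall: {overall:.0%}")  # 5% under these assumed step probabilities
```

Treating the steps as independent is itself an assumption; correlated failures (e.g. the "Big Trouble" scenario knocking out several steps at once) would change the arithmetic.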

Are you really saying that you think the probability at any step along the way here is so small that there's no reason to think it's possible? I'd be curious to hear what percentages you would assign to the various possibilities, if so.


I think you're arguing a far tougher point than you need to. I was talking about the theoretical upper limit to intelligence. We don't need to argue that our particular civilisation will actually achieve it (though I agree with your post on that, by the way); it's far easier to argue that hyperintelligence is possible in principle, and that's all that's needed to refute the point.



