Kurzweil's rebuttal to Paul Allen (technologyreview.com)
102 points by ca98am79 on Oct 20, 2011 | 121 comments


I grow tired of Kurzweil's vague arguments against people who disagree with his vague predictions.

What I think Kurzweil doesn't understand is that in any argument about what's going to happen in the future, the onus of proof inevitably lies with the guy saying "This is what's going to happen", not the guy saying "Ehh, maybe not".

I don't know what's going to happen in the future, and I don't pretend to know what's going to happen in the future, but whatever happens either (a) I'll find out eventually or (b) it'll happen after I'm dead anyway. But john_b's point about Kurzweil's lack of a null hypothesis is a good one.

So my question for Kurzweil is this: what will the world look like if you're wrong? What possibilities are your predictions excluding? If I'm still alive in 2060, and I look around at the world around me, under precisely what conditions am I entitled to say "Well whaddya know, looks like Kurzweil was wrong about that Singularity thing after all"?


I agree with your conclusions, but

> the onus of proof inevitably lies with the guy saying "This is what's going to happen", not the guy saying "Ehh, maybe not".

I'd say that the onus lies on the one making conjunctions instead of disjunctions. Often, negative predictions are disjunctions, but this isn't always the case: Compare "in 2100, North America will be inhabited by humans" with "Ehh, maybe not."


Something like that. It's actually hard to know where to divide up the onus of proof when we're talking about predictions.

One thing's for sure, though. Kurzweil's "I'm right until proven otherwise" attitude ain't the way to do it.


>but whatever happens either (a) I'll find out eventually or (b) it'll happen after I'm dead anyway

It's not your main point, but I think you're leaving out an important option: (c) you can choose to be intimately involved in making the future turn out a certain way.

The best reason to think and write about the future is so that we can decide what future we want to create for ourselves. Then, as someone with the ability to write code, you can go out and create those very things.


IMHO, the future cannot be predicted. Period. I'm not talking impossible-as-in-hard. I am talking impossible-as-in-perpetual-motion.

Nice well-behaved linear Newtonian systems can be modeled and predicted. There are systems that are chaotic but that can be modeled in the aggregate very well too, like thermodynamic systems and certain kinds of fluid flow.

Life isn't like any of that. Life is complex, chaotic, computationally irreducible, and full of feedback loops on top of feedback loops. Even worse: predictions often create economic incentives to prove them wrong. Take a position on the stock market and you have created an incentive for your prediction to not come true.

People have always wanted to deny the fundamental unpredictability of history, and have always clung to woo-woo prophecy superstitions toward this end. The ancients had Tarot cards and pig entrails. We have graphs and computer models.


Of course, predicting the future with 100% certainty is impossible. But that doesn't mean that making predictions is a mug's game. Predicting the future, with appropriate levels of uncertainty, is a very sensible thing to do with your time. I, for instance, predict that if I wander down to Cheeseboard in twenty minutes I'll find that they're selling delicious pizza. And I predict that if I eat that pizza, then it won't poison me. These are all useful predictions, which may be wrong, but are useful nonetheless.

It's only when you start sticking inappropriate error bars on your predictions that it becomes a problem. Kurzweil predicts things which are unlikely or perhaps impossible as having probabilities near 100%.


Arguably, attempting to predict the future is the very essence of intelligence. (See: Jeff Hawkins.)


I would say that the ability to predict the future is indeed a large part of what we call intelligence. Note that 'intelligent' is a relative term, however. You are considered 'intelligent' if you are able to predict the behavior of a system at a success-rate significantly higher than the average observer, given similar or equivalent prior knowledge about the system.


Ugh, not this again.

> That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome.

I believe it was on HN that this discussion came up before, but that's a short-sighted way of looking at it. Basically, it doesn't take into account all the interactions of the environment required to turn that "source code" into a person. Sure, the DNA would be sufficient if you were able to accurately simulate cellular actions, protein folding, and physics in general, but we just can't do that yet, and it doesn't look like we'll be able to any time soon.


Exactly. Distributed computing grids still have trouble folding single peptides with reasonable accuracy.

We have the source code, but we don't have the compiler.


Worse, there are about to be seven billion brains on the planet. Not merely one.. and human brains aren't so great in isolation.


Why would it need to be in isolation? Machine vision, voice recognition, speech synthesis, robotics -- there's no obvious reason why if we could build such a brain we couldn't find a way for it to interact with people.


I agree, but the devil is in those details. Look at the variations among actual humans.


> Basically, it doesn't take into account all the interactions of the environment required to turn that "source code" into a person.

This is an extremely common, and completely incorrect criticism as applied to AGI complexity estimates.

Here's the way to think of it: the entirety of the information content required to move from non-intelligence to intelligence has to have been figured out some time between single-celled organisms and humans, because the substrate on which our intelligence is implemented literally didn't exist at that point. Which means that any part of the biological machinery that was in place when single-celled organisms ruled the planet does not count towards the complexity of the "intelligence algorithm" itself - it's irrelevant, accidental complexity, not information content that is required to get from "working computer" to "working intelligent computer".

You'll find that almost all of the cellular actions, protein folding, and physics were already working just fine when the single-celled ickies were evolving, so it's all complexity that we can safely ignore, which means we can start the complexity count with DNA. Apart from (IMO) extremely minor epigenetic contributions, the pure-DNA information estimates should provide extremely hard upper bounds on the difficulty of the problem, estimates that we'll probably blow through quite easily once we know what we're doing - evolution rarely finds the optimal solutions to problems, I see no reason to assume that it stumbled across one in this case...
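
To put a rough number on that upper bound (a back-of-the-envelope sketch: the base-pair count is the usual round figure, and the compression ratio is just an assumption of mine):

  # Back-of-the-envelope upper bound on the genome's "design information".
  # ~3.2e9 base pairs, 2 bits each; the 10x compression factor is only a guess
  # at how much redundancy a lossless compressor could squeeze out.
  base_pairs = 3.2e9
  raw_bytes = base_pairs * 2 / 8          # ~800 MB uncompressed
  compressed_bytes = raw_bytes / 10       # ~80 MB, assumed compressibility
  print(round(raw_bytes / 1e6), "MB raw,", round(compressed_bytes / 1e6), "MB compressed")

Either way it's on the order of megabytes, not the hundreds of trillions of bytes that Allen's framing would suggest.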


Kurzweil makes some good points here. He doesn't address each and every criticism with his view of AI progress, but he does a good job calling out Paul Allen on not doing the homework.


Neither Kurzweil nor Allen have done their homework. This is a disappointingly informal argument if you're looking for hard scientific facts.


Calling out someone for not doing their homework seems somewhere in the DH1-2 range:

http://www.paulgraham.com/disagree.html

There are some good points in Kurzweil's response, but the ones about Paul Allen are definitely not them.

I think his best point was about extrapolating function from individual cells or structures, without needing to understand every single cell or structure individually.


I disagree. When one "addresses" a long-standing, well-thought-out argument with an off-the-cuff snap statement or something only slightly more thought-out, it is perfectly permissible to call them out as not being serious. It's like going into a serious religion debate with "Evil exists, therefore God does not. Ah-ha, you are defeated!", or "God must exist because something must have started it all, so there!" as if in the thousands of years the debate has been raging nobody has ever thought of those things, or addressed them at length, in both directions.

If you're going to debate the singularity here, maybe you can get by with just stating "I don't believe it's possible" without citing any logic (as of this writing, at least two people in this comment thread have done exactly that without defense), but if you're going to debate one of the leaders of the field it would help if you would at least grant your opponent the courtesy of thinking that just maybe, over the course of the decades he's been thinking about this, the obvious objections you thought up in five seconds might have been addressed at some point. You may not think they've been adequately or correctly addressed, but don't pretend they haven't been addressed at all.

Personally I'm not completely sold on the matter for a variety of reasons myself, but the usual logic given for why you should be skeptical about it is terrible. The interesting questions are a great deal more complicated than something that can be dismissed with something that generally boils down to "Look, I just can't imagine the world changing that much, so it won't".


I think your comparison to arguments about religion is apt. Many of the opponents to Kurzweil's ideas remind me of those whose opposition to the possibility of a godless universe amounts to, "I can't imagine it, so I don't believe it."


On the other hand, Kurzweil (at least in his essays and articles) often ignores the question of what a fair null hypothesis is for the possibility of the singularity. I think his gift for creating a compelling vision tends to make people forget that the null hypothesis for a scientific assertion is doubt.

Kurzweil provides both high level general evidence (like improvements in computation) and low level, domain-specific evidence (like the discussion about the pancreas) to support his claims, but none of that justifies the use of the word "law" in "law of accelerating returns". He attempts an analogy with thermodynamic laws and how they are derived from underlying statistical principles, but there are no underlying fundamental principles of human innovation and progress that are in any way comparable to the certainty and universality of physical laws. This, I think, is why a lot of people (myself included) have a hard time taking him seriously. He tries to apply the same kind of formal analysis that works well in science to human beings and the complex, highly non-scientific processes that underlie innovation today. The bottom line is that, until the singularity occurs, human beings will still be needed to build ever more complex and powerful systems, but human beings do not progress at anything close to an exponential rate.


I guess my point was just that calling him out, while strictly correct, is doing nothing to refute whatever statements he may have made on the singularity, nor is it doing anything to promote the idea of the singularity.

Replying to "evil exists, therefore God does not," with, "that argument has been made before," doesn't say anything about whether God does or doesn't exist, and it doesn't do anything to refute the argument.

The point of PG's How To Disagree is that the point of disagreeing is to address the truth or falsehood of a proposition, not the way the proposition is presented.

The DH# levels (at least DH0-2) can be restated something like this:

DH0: I don't like the author, therefore his argument is false.

DH1: The author has a relevant fault or weakness, therefore his argument is false.

DH2: The author didn't present his argument well, therefore his argument is false.

While it's nothing against him personally, the first three paragraphs of Kurzweil's reply read like "the author didn't present his argument well, therefore it is false." He recovers later, sure. He easily reaches 5-6 on the DH scale, but the rebuttal would be stronger without the first three paragraphs, and they certainly aren't the best, or most praiseworthy, part of it.


Allen attacks Kurzweil rather directly in his essay, and the essay title is even "The Singularity Isn't Near". So in that case I think it's fair game for Kurzweil to call him out for not even addressing the arguments he made in "The Singularity Is Near".


What are the big advances in linear programming that have happened since 1988?

As api mentions in the comments here on HN, there are areas of work where progress stopped. For example, passenger jet speed was once thought to keep increasing at a rapid rate, such that LA to Europe flights would take a few hours. Skyscraper height was thought to keep increasing too, with advances in various technologies and engineering methods making it desirable. Both hit realities that significantly slowed down their progress.

I tend to side with Allen on this. While we're bright people, I don't know if I see us able to keep increasing computing power while keeping actual power consumption reasonably low.


Hmmm? Generally agree with the plane flight example, but linear programming seems like a surprisingly bad example to pick. Karmarkar's algorithm has allowed the essential insight of linear programming to be generalized into a programme for optimizing any convex function, subject to convex constraints, over a convex set (see Stanford's EE364).

I assume you are familiar with this as 1988 = date of Fulkerson prize for Karmarkar's work? I guess the point narrowly holds if you're thinking of pure LP rather than general CP. But general CP is really quite a big deal.
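
For anyone who wants to see how routine this has become, here's a minimal sketch using SciPy's stock LP solver (my own toy numbers, and a modern solver rather than Karmarkar's original algorithm); the same interior-point machinery extends to general convex programs:

  # Minimal linear program: minimize c.x subject to A_ub @ x <= b_ub, x >= 0.
  # Modern solvers descend from the interior-point line of work that
  # Karmarkar's algorithm kicked off.
  from scipy.optimize import linprog

  c = [-1, -2]                    # maximize x + 2y by minimizing the negation
  A_ub = [[1, 1], [1, -1]]        # x + y <= 4, x - y <= 2
  b_ub = [4, 2]
  res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
  print(res.x, -res.fun)          # optimal point and objective value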


What passenger jet speed and skyscrapers really hit is economics: demand limits to growth.

Most super-tall skyscrapers are economic disasters. There seems to be a maximum economically rational height to a skyscraper, and it's already been reached. You can build higher, but if you do you're wasting your money.

A human-level or beyond AI would probably be like the Burj Khalifa: an economic disaster. Why build it when screwing and popping out babies is far cheaper and already works? If you want to exceed human intelligence, it would be a lot cheaper to augment human brains with external digital assistants (like what you're using now) or implants than to re-engineer an entirely new embodiment.


>Why build it when screwing and popping out babies is far cheaper and already works?

Why build a word processor when pencil, paper, and a scribe is far cheaper and already works?

It's about scale.


The super-tall skyscrapers are indeed an economic disaster in themselves, but they skyrocket the value of the surrounding land. The Saudi prince who's building the Kingdom Tower owns a considerable amount of land around the building site, so he'll make a lot of money off the land he sells. It kind of reminds me of how Google makes money with advertising on their free products.


Here is a rough transcript of a Long Now Foundation talk by SF author Vernor Vinge (and coiner of the term "singularity") entitled "What If the Singularity Does NOT Happen?". He sees:

  * Scenario 1: A Return to MADness (nuclear war)
  * Scenario 2: The Golden Age (peace and prosperity)
  * Scenario 3: The Wheel of Time (catastrophic natural disaster)
http://www-rohan.sdsu.edu/faculty/vinge/longnow/


The Singularity concept strikes me as a sort of wishful thinking. Technology advancing so fast that we no longer can control or understand it? Yeah, that already happened to my parents' generation with AOL, yet here I am texting this on my iPhone. New generations understand intuitively what the previous generation understood theoretically.

Even so, I fully expect memristors to deliver strong AI.


Remember there are 3 main schools of thought regarding the "Singularity" concept, which are more or less compatible.

Accelerating change. Basically Kurzweil's view. Exponential improvement of machines, which will eventually reach then exceed human intelligence in every single domain, or something like that.

Event horizon. If we ever build something that achieves greater-than-human intelligence, we cannot predict what it will do to the world, because we're just not as smart as that thing.

Intelligence explosion. If we ever build an AI (or something similar) that is more effective at doing AI research than we are, then that AI could build something even more effective, and so on… and foom, you have something that would leave Skynet in the dust, so it'd better be our friend[1]. Note that the first iteration of that thing may not need to be smarter than us: it just has to be able to build something smarter than itself[2] (and of course, the self-improvement cycle must not hit a ceiling too soon).

[1]: https://en.wikipedia.org/wiki/Friendly_AI [2]: https://en.wikipedia.org/wiki/Seed_AI


Also, we should remind ourselves that we live on a planet with limited physical resources. So, assuming that we build something that could exponentially out-smart us, that thing would still need access to physical resources, and what is available on this planet might just not be enough. I see it going 2 ways: the thing/it/whatever we want to call it either manages to expand beyond this planet/solar system before it self-destructs, or it just dies off for lack of available resources.


I agree. Note that it also applies to humanity itself.


> Even so, I fully expect memristors to deliver strong AI.

Why would they?

Is strong AI a function of storage capacity or speed?

An AI running at 1/100th of what a future AI may be capable of is still an AI and I can't see how a mere improvement of a couple of orders of magnitude would do what decades of Moore's law have failed to do so far.

If strong AI were just a matter of speed, then we could theoretically take any of the large clusters available today and run the AI at some small fraction of its normal speed, which would at a minimum validate that it is indeed a strong AI that's been created.

The barrier seems to be more that we don't know how to go about building one from a software perspective than that we wouldn't have the capability to design the hardware.

So how would an advance in hardware suddenly fix that?


> Is strong AI a function of storage capacity or speed?

Yes, absolutely! I actually think the most appropriate benchmark is memory bandwidth, which hasn't been improving as fast as FLOPS or storage capacity. It's not a matter of running a strong AI at 1/100 speed on today's fastest supercomputer. It would be more like 1 billionth or trillionth speed.

The reason for our disappointingly slow progress in AI over the years is that our hardware is still nowhere near powerful enough to usefully implement the same algorithms as the brain, and we likely won't even develop the right algorithms until we have hardware closer to the requirements, so we can test and iterate.


I'd really appreciate it if you could suggest why we couldn't implement algorithms similar to the brain's, ones that possibly require a massive number of fetches & executions simultaneously (guessing this is where memory bandwidth plays in), but have the results show up much slower.

Shouldn't it be possible to have an AI mimicking human brain algorithms at 1/100th the speed, where perhaps a single thought based on learned information takes hours instead of seconds?


You misunderstand. I'm saying we could, but the slowdown wouldn't be 1/100. It would be more like 1/1 billion. At that speed, it would take years to simulate a second of brain time. Not only would that be useless, it would be impossible to know if you'd actually implemented it right without being able to test it in a reasonable timeframe. That's why we'll only be able to develop brain-like AI once our computers are much faster.


Appreciate the reply. Seems I did misunderstand.

I find it hard to agree that, despite the nanosecond latency times and the terabytes of throughput we can wring out of single computing devices (GPUs etc.), we couldn't simulate brain-like AI faster than a billionth of what it should be.

You're probably right though.


> It would be more like 1 billionth or trillionth speed.

That's a possibility.

Now what algorithm do you propose to use?


My money's on Andrew Ng's deep learning research. Deep learning has already had huge success, both in reproducing the measured behavior of neurons in the brain, and in outperforming the state of the art on various machine learning classification tasks.

Here's an overview which references some very impressive results: http://www.youtube.com/watch?v=AY4ajbu_G3k


Memristors are the silicon equivalent of neurons -- a time-dependent function with state. In mammals, the connectivity of millions of neurons enables the emergence of intelligence. I don't see any reason why a silicon-based neuron network of sufficient size isn't capable of the same.

Now, that's entirely different from purposeful design of an AI such that it speaks English and knows what I like for breakfast. I don't think we'll ever know how to sit down and write one in Notepad.

Instead, memristor-based AIs will be evolved using genetic algorithms or other evolutionary approaches. Yeah, maybe that's wishful thinking too, but that's how I see it playing out.


A memristor can be simulated by a couple of formulas and a few variables. (just like neural networks can be simulated).
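
For instance, here's a crude sketch of the standard HP linear ion-drift model (illustrative parameter values, not any real device):

  # Crude simulation of the HP linear ion-drift memristor model:
  #   M(w) = R_on * (w/D) + R_off * (1 - w/D),   dw/dt = mu_v * (R_on / D) * i(t)
  # Parameter values below are illustrative only.
  import math

  R_on, R_off = 100.0, 16e3      # ohms
  D = 10e-9                      # device thickness (m)
  mu_v = 1e-14                   # ion mobility (m^2 / (s V))
  w = 0.5 * D                    # state variable: width of the doped region
  dt = 1e-6

  print("initial memristance:", R_on * (w / D) + R_off * (1 - w / D))
  for step in range(100000):     # 0.1 s, i.e. the positive half-cycle of the drive
      v = 1.0 * math.sin(2 * math.pi * 5 * step * dt)   # 5 Hz sine drive
      M = R_on * (w / D) + R_off * (1 - w / D)          # instantaneous memristance
      i = v / M
      w += mu_v * (R_on / D) * i * dt                   # boundary drift
      w = min(max(w, 0.0), D)                           # clamp to physical bounds
  print("final memristance:", R_on * (w / D) + R_off * (1 - w / D))

The device "remembers" the charge that has passed through it, which is the whole point; but as you say, nothing stops us from running this same update rule in software.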

In spite of our ability to simulate this sort of process for decades, we have not succeeded in building a strong AI; the difference in having a hardware version is one of performance (just like neural-network chips are typically a lot faster than their software-simulated counterparts). There is no real difference in capability here, just a (possibly very large) speedup.

Now, I'm not ruling out that such a speedup will cause us to be able to create things that so far were not possible but I have a hard time convincing myself that this will almost certainly be the case.


I think there's a very real minimum speed limit necessary to keep a highly connected system like your brain, an AI, or the Internet operational. Does anyone seriously expect 'TCP over Carrier Pigeon (RFC1149)' to work in practice at scale? Or that your own wetware would've developed properly if it plodded along forever below 13 Hz?

Given that biological examples of a minimum speed limit for cognition exist, and that the behavior of computational networks at various speeds seems to follow the same pattern (slower = worse), it seems reasonable to assume that a similar lower limit for cognition exists for silicon-based networks.

Therefore, faster devices such as memristors might be just the thing necessary to get our machines thinking, and moreover, that we may never see intelligent behaviors in slower simulated environments.


Note that even hardware speedups need insights to be successfully implemented. Seeing that hardware does improve at a regular pace, it looks like insights do pop up regularly.

Therefore, it isn't such a stretch to think that (i) insights may continue to pop up reliably for "a while", and (ii) not just in the domain of hardware speed-ups.


I would say that strong AI could appear sooner with faster hardware than without it. This is because the earliest algorithms for strong AI are likely to be flawed and imperfect, and these imperfections can be compensated for by faster processing. Think of the earlier chess-playing algorithms. Although Deep Blue was able to defeat Garry Kasparov, its algorithms were relatively primitive compared to modern chess-playing algorithms; they achieved their goals, but did so with more computation than was necessary because they were more brute-force oriented. As the algorithms got more sophisticated, the computational hardware needs decreased.

Faster hardware also allows you to get by without fully understanding how the strong AI algorithms work by using evolutionary methods. You use brute force evolution to evolve algorithms that are even more efficient than their parents. Some people might frown upon using evolution to create AI but that's what was used to create human intelligence so why not use evolution guided by humans to create machine intelligence?
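
To make the "evolution guided by humans" idea concrete, here's a toy sketch (a deliberately trivial fitness function; real evolved AI would need a vastly richer genome and evaluation):

  # Toy genetic algorithm: evolve a bit string toward all-ones.
  # Deliberately trivial; the structure (selection, crossover, mutation) is the point.
  import random

  def fitness(genome):
      return sum(genome)                       # count of 1-bits

  def mutate(genome, rate=0.01):
      return [b ^ (random.random() < rate) for b in genome]

  def crossover(a, b):
      cut = random.randrange(len(a))
      return a[:cut] + b[cut:]

  pop = [[random.randint(0, 1) for _ in range(64)] for _ in range(50)]
  for gen in range(200):
      pop.sort(key=fitness, reverse=True)
      parents = pop[:10]                       # simple truncation selection
      pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
             for _ in range(50)]
  print("best fitness:", fitness(max(pop, key=fitness)))

The faster the hardware, the more generations (and the richer the fitness evaluations) you can afford, which is exactly the point above.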


You make an excellent point about how early imperfections in the algorithms could be fixed as they evolve through feedback, and that this step-process running faster would get us the desired (evolved) algorithms quicker.

But couldn't we accomplish this with distributed processing rather than faster hardware?


I think distributed processing is another form of faster hardware. It is certainly more operations per unit time. Evolutionary algorithms are interesting because they are able to exploit parallel processing better than most. Evolution amounts to a type of heuristic tree search, where once an algorithm has found a solution to a problem, the knowledge is persisted and shared with other algorithms.

Later algorithms can be run on lesser hardware because they don't have to retread the same territory as their ancestors; they only have to search the space that hasn't already been examined. As the saying goes, they search "smarter, not harder".


> Is strong AI a function of storage capacity or speed?

Storage capacity.

The brain is relatively slow. Ignoring microtubules, we're just dumb chemical reactions. But we are a lot of dumb chemical reactions. A human brain is ~ 100 billion neurons with each neuron having 100 to 7000 connections to other neurons.

So, just estimating here, 100 giganeurons * 1000 connections = 100 trillion neuron connections.

To simulate that, we use matrices of floats, one float per connection. So 100 trillion 4-byte floats = 400 terabytes of memory (with lots of hand-wavy assumptions). The fanciest GPUs we have today top out around 3 GB of memory (but they have ~1k cores that run in lockstep), and the largest whole non-proprietary system I've seen tops out at 1 TB of local RAM: http://www.siliconmechanics.com/quotes/212240?confirmation=1...
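
Spelling out that hand-wavy arithmetic (same rough assumptions as above, one float32 weight per connection):

  # Rough memory estimate for a naive one-float-per-synapse brain simulation.
  # All figures are order-of-magnitude assumptions from the numbers above.
  neurons = 100e9                    # ~100 billion neurons
  connections_per_neuron = 1000      # somewhere between 100 and 7000
  bytes_per_connection = 4           # one float32 weight per connection

  synapses = neurons * connections_per_neuron          # ~1e14 connections
  memory_bytes = synapses * bytes_per_connection       # ~4e14 bytes
  print(memory_bytes / 1e12, "TB")                     # ~400 TB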

We can simulate parts (vision and hearing have advanced on neural networks in the past 5 years), but higher level cognition/motivation is still an unknown quantity.


> Technology advancing so fast that we no longer can control or understand it? ... Yeah, that already happened to my parent's generation with AOL

That's not the singularity, else we'd have had several singularities across several generations as certain groups of people fail to grasp the utility or value of printing presses, steam engines or atomic bombs.

The notion of the singularity suggests a point past which it is impossible to predict the future.

Global networking, in its basic form, was entirely predictable to certain technologists for decades before its existence. In fact, the original Spaceship Earth ride at Epcot nailed many pieces of later commonplace technologies, even multiplayer gaming. Hell, I read of a Baha'i priest who predicted the world wide web in the '30s.

Meanwhile, a singularity posits a confluence of technologies, connectedness and social change that renders all events past its arrival entirely impossible to predict.


Strong AI is not a hardware problem. It's not a matter of lack of computational power. It is a software and modeling problem. If you had an AI algorithm and a model, you could still run it on any Turing machine. It would just take a lot longer (perhaps years or decades or more) to compute a single thought on current hardware instead of real time or faster than real time on some super-fast future hardware.

There are people (like Roger Penrose) who argued that intelligence and consciousness are not computational in nature (and hence no algorithm can be conscious). Penrose goes all the way down to quantum mechanical effects in the brain. I have not really followed developments on this and where Penrose's argument currently stands.


Nobody takes him seriously. He is just a Christian apologist who wraps it up with quantum hoo-ha.


What about energy?

It's true that if you look at most areas of technology they are advancing rapidly. Except energy. Energy has stagnated since the 1950s.

I'm on the fence on this issue, but there are many very intelligent and knowledgeable people who are predicting a kind of anti-singularity: in the 21st century, fossil fuel depletion will send us way back, perhaps even de-industrialize most societies.

Is our civilization simply a machine that is transferring the order (low entropy state) in fossil fuels into order within itself (technology and economic complexity), and when those fossil fuels run out will this ordering process cease?

The lack of major breakthroughs in energy in the past 50 years is pretty dramatic. Nuclear looked like an energy panacea once, but it's turned out to be clunky and hard to scale. Solar panels and wind turbines are interesting, but the problem with those is that we basically can't store energy. Energy storage is either super-expensive per kilowatt-hour and not scalable (e.g. Li-Ion batteries) or very inefficient (e.g. water electrolysis to hydrogen).

Without a breakthrough on the order of cheap ultra-capacitors or fusion, I'm afraid we'll be seeing peak everything pretty soon, including technological complexity.

The thing is: all the technologies of the "singularity" are energy consumers. Where are the producers? What is going to power the singularity?

Then there's another area that makes me horribly pessimistic: politics. Most of our societies are degenerating to banana republic levels of corruption. Even if the energy problem is technically solvable, it seems to me that our political systems may be set up to do the absolute worst possible thing in this area: ride the fossil fuel crash into the ground in an orgy of war and despotism.


Note: I made a similar comment a few days ago.

It seems that you might be slightly misinformed about energy. Since the 1950s, there have been a number of advancements in energy production. Some examples are efficient shale oil extraction, tar sands oil extraction, horizontal drilling, deep water drilling, and high Arctic drilling.

As The Futurist discusses in depth[1], annual world oil consumption has been hovering around 32 billion barrels since about 1982. That means oil consumption, at $100/barrel, is $3.2 trillion, or 5% of nominal world GDP.
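
The arithmetic behind those figures, for anyone who wants to poke at the assumptions (the GDP value below is simply the one implied by the article's 5% claim):

  # Rough check of the oil-spend-versus-GDP figure quoted above.
  barrels_per_year = 32e9          # annual world consumption, per the article
  price_per_barrel = 100.0         # USD, assumed
  world_gdp = 65e12                # approx nominal world GDP implied by the 5% figure
  oil_bill = barrels_per_year * price_per_barrel        # ~$3.2 trillion
  print(oil_bill / 1e12, "trillion USD,", round(100 * oil_bill / world_gdp), "% of GDP")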

My takeaway from that is that technology has made the oil supply a non-issue. And it will continue not being a problem for the foreseeable future. It is not a hard resource limit.

Also, check out the the research going into liquid fluoride thorium fusion. It seems to have potential as a way to do cheap, safe, "easy" nuclear fusion.

Personally, I think that the problem is human willpower and chicken-little attitudes. To increase my technology optimism, I read nextbigfuture.com. Nearly daily, I see an article posted there that makes me go, "Holy crap! We can do that now??!"

[1] http://www.singularity2050.com/2011/07/the-end-of-petrotyran...


I'm aware of oil drilling improvements, shale gas, etc.

The problem is that this technical complexity takes energy. To really understand the "peakist" argument, you have to understand the concept of EROEI: Energy Return On Energy Invested.

Peak oil, for example, is not about running out of oil. There could still be tons of oil around. It's about EROEI reaching near 1:1. When it costs the energy content of a barrel of oil to get a barrel of oil, we are out of oil as an energy source. This could occur after only ~5% of the Earth's hydrocarbons have been extracted.
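
A tiny worked example of why the ratio matters more than the amount left in the ground (purely illustrative numbers):

  # Net energy delivered to society per unit of gross energy extracted,
  # for a range of illustrative EROEI values.
  for eroei in (100, 20, 5, 1.5, 1.0):
      net_fraction = 1 - 1 / eroei      # share of gross energy left over after extraction
      print(f"EROEI {eroei:>5}: {net_fraction:.0%} of gross energy is surplus")

At 100:1 essentially everything extracted is surplus; at 1:1 there is no surplus at all, no matter how much oil remains underground.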

EROEI for oil has been steadily declining. The first oil drilled was, of course, the easiest to get. Drop a well and it spurts out. Almost all of that is gone outside a few supergiant fields, and those are or will soon be entering decline. Heavy oil, shale oil, and tar sands require a lot of energy to extract and process... far more than "light sweet crude."

All fossil fuels follow the same pattern. Coal and gas have not peaked yet, but they will... almost certainly in the next 50 years. Oil probably peaked in 2005, though it won't be possible to say for sure except ~25 years in retrospect.

I'm also aware of the thorium cycle, etc., but I am skeptical of the ability of such things to scale fast enough. Keep in mind that to maintain growth new energy technologies will have to scale faster than the fossil fuel EROEI decline curve.

I personally do not believe in the "doomer" scenario. However, I do think this: I think the first half of the 21st century will be dominated economically and technologically by the energy transition. The story of this era will be a mad rush to do everything we can to increase efficiency and build out new resources simply to tread water against the fossil fuel decline.

And of course our political class will make it worse by doing the absolute dumbest possible things: subsidizing oil prices to prevent price signals from working, going to war over dwindling oil reserves, interfering in the market for new alternatives, restricting new alternatives, NIMBYism against renewable and nuclear energy, etc.


I totally get EROEI. In fact, I used to be the canned beans buying type.

But, after years of thought, I realized that the EROEI argument has a fatal flaw. It completely discounts humanity's ability to discover creative efficiency improvements. The EROEI calculations are done with today's technology, and do not include future improvements.

For example, I grew up in Western Pennsylvania, where the oil industry started back in the 1800s. Capped oil wells abounded in the woods, because at some point in the past, it stopped being efficient to drill them with 1800s drilling technology. Today, old high school friends are making bank reopening those wells and even drilling new ones with modern techniques.

The political problem is an interesting one, and I have opinions on it. But I hesitate to air them here because they are unconventional. The haters would probably drop my karma back down to -20.


I am in neither the doomer nor the pollyanna camp on this, as I said.

Keep in mind that EROEI includes not only the energy cost of operating technology but also the energy cost of developing it. Every engineer who worked on that fracking technology? Every car trip they made to/from the office, etc., all has to be included in the energy investment required to get that shale gas out of the ground.


Yup, I totally get EROEI. But it ignores possible efficiency improvements when it makes its future projections. What do they say in the finance world? Something like "past performance is not an indicator of future returns."

What if in 5-10 years those engineers ride to work in self-driving robo-cars that are all electric, using cheap and powerful graphene-based supercapacitors as batteries? They will have time during the commute to daydream about "wild" ways to make the fracking technology itself more efficient. And the all-electric robo-cars are charged up from a cheap, efficient liquid fluoride-thorium reactor generator station. Or maybe they are just daydreaming about an upcoming vacation to outer space while riding in luxury two-stage zeppelins[1]. Oh, and the engineer is healthy and performing at peak mental and physical performance nearly continuously because he/she eats a ketogenic diet[2] centered around high-latitude reindeer herding products[3].

There are a lot of ways the world can be better.

[1]http://www.jpaerospace.com/

[2]http://www.ketogenic-diet-resource.com/

[3]http://en.wikipedia.org/wiki/Lomen_Company


Efficiency actually increases energy use, as per the Jevons paradox, which goes hand in hand with the technological cornucopia argument that the energy issue will be solved by better technology. Unfortunately the EROEI numbers reflect quite the contrary: where once oil bubbled up from the ground under its own pressure, netting 200x EROEI, we're now grinding our way through oil sands which net 5-6x EROEI, or sinking 2-mile-long pipes into the ocean. This is why Kurzweil's argument fails. Technology has a tendency to expand and soak up as much energy as possible; all the salad shooters in the world aren't going to bring back $2 oil, in any form.


Read my comment a few levels up.

The data says that the Jevons paradox is wrong. See the energy usage chart here: http://www.singularity2050.com/2011/07/the-end-of-petrotyran...

Since about 1982, the annual world oil consumption has held at roughly 32 billion barrels despite efficiency improvements in petrol energy usage.

Our technological efficiency improvements are letting us do more with the same amount of energy, not more with more energy as the Jevons paradox predicts.

Think about this simplified example. Our cars get better mpg today. Which means that we have energy left over to use to sink those 2 mile long pipes into the ocean. Because of the efficiency improvements, we have just done more with the same amount of energy.


In fact the hard cap on the amount of energy we can extract has been reached: we would use more if we could, but we can't extract it, so we are running in place. This is often confused with efficiency when in fact it is a peak energy issue. Three billion people in the world are living on under $3 a day, and there's no demand for more energy?

The example is not compelling; 100 years ago there were no cars - are we using more or less energy now with the advent of 'car technology'? The obvious answer is: way more. We are not just taking the net energy of 1910 and 'redistributing' it. Because this is what technologies do, provide an advantage that nature does not. But technology is not free - to develop it, make it, use it, dispose of it, all requires a lot of energy. An insatiable thirst to develop and use thingamajigs is what is causing the problem, not solving it.


Kurzweil's formula is based on recursive intelligence: getting smarter lets you improve the rate at which you get smarter, ad infinitum. Though this can sometimes improve energy technology, intelligence ultimately depends on energy, of which there is a finite usable amount. Though I'm optimistic about new nuclear technologies, I'm forced to agree that absent an unforeseen breakthrough in fusion, "zero-point", etc., Kurzweil's model is agnostic when it comes to extracting useful energy, and therefore could have an expiration date.
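
A toy model of that "expiration date" (entirely my own construction, not Kurzweil's formula; the numbers are arbitrary):

  # Toy model: recursive self-improvement with and without a finite energy budget.
  # All numbers are arbitrary; the point is only that compounding stops when the
  # usable energy runs out.
  growth = 0.5                 # fractional capability gain per improvement cycle
  energy_cap = 1000.0          # total usable energy, arbitrary units
  free_ai, capped_ai, energy_used = 1.0, 1.0, 0.0
  for cycle in range(20):
      free_ai *= 1 + growth                    # unconstrained compounding
      cost = capped_ai                         # assume smarter systems cost more energy
      if energy_used + cost <= energy_cap:     # improvement only while energy remains
          energy_used += cost
          capped_ai *= 1 + growth
  print(round(free_ai), round(capped_ai), round(energy_used))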

It's worth noting, however, that we'll never completely run out of energy, so long as Sol keeps burning. Can we sustain anything remotely resembling civilization based only on solar energy and its indirect organic by-products? Well, that's obviously debatable.

> Even if the energy problem is technically solvable, it seems to me that our political systems may be set up to do the absolute worst possible thing

Never doubt the self-interest of the powerful. Some may be short-sighted enough to eat their seed corn and burn resources in resource wars, but I think we're more likely to devolve into feudalism, where the masses eke out their existence serving the needs of those who control the remaining resources. On the other hand, if any kind of information infrastructure remains when the oil runs out, we also have the possibility of new emergent, de-centralized social structures constructed out of necessity. (Imagine, for instance, a mesh network of hand-crank-powered cell phones which virally alert clusters of villages to band together to defend against marauders.)

Either way, I can't help but feel that our lifetimes are the perfect fulfillment of the old Chinese curse: "May you live in interesting times."


China is starting a $1+ billion project to create a liquid thorium fission reactor (not fusion). The amount of thorium in the world is greater than the amount of copper, tin and some other metals; the amount of usable uranium is around the level of platinum. A working thorium reactor was created in the 1960s, and once China makes a few and shows the world it's a repeatable process, investors and banks worldwide will be willing to invest in these reactors.

http://thoriumremix.com/2011/


Incidentally, you can blend thorium with uranium for use in conventional reactors, and with some cleverness in the fuel handling, pebble bed reactors can run on thorium. It doesn't require big breakthroughs; they're just nice, is all.


Overall good line of thought, but a few other things are worth considering. It is vastly underappreciated just how much the EPA has retarded energy development in the US. The retroactive revocation of permits for the mountaintop coal project, the Keystone XL imbroglio, and the hasty ban on offshore drilling are just a few examples. (Not many know that Deepwater Horizon was out that far because of regulations forbidding drilling closer to the shore).

These areas are inherently dirty and dangerous. It's always easy to do Monday-morning quarterbacking, but there is little recognition of the fact that infrequent domestic oil spills may be preferable to endless foreign oil wars.

Resource extraction in general is demonized by the EPA just as much as the pharmaceutical industry is by the FDA. It is hard to regulate an industry which is respected by the public, because they will get some benefit of the doubt. But if they can be turned into polluters and poisoners, it is easy to justify ever greater state power over the sector.

Computers and the internet are in the exact opposite space: highly respected and almost entirely unregulated. So it's hard for many here to see what dealing with the fedgov really means (they make App Store approval look like a walk in the park).

But once you're on the inside of those sectors, you start to realize that what is holding back oil in the West (and nuclear power, and drugs, and all non-CS sectors) is human factors, not physical ones. That is simultaneously harder and easier to deal with than a genuine scarcity of energy.


How much oil is off the East Coast passive margin? 20 bboe (which is twice the best estimates of MMS)? That would make it on par with the North Slope in Alaska, an elephant field. That's 3 years or less of US consumption, or about 7 months of global consumption. BFD.

And that comment about the Deepwater Horizon not being able to drill closer to the shore... that's eyepopping. The shallow-water GoM in the Mississippi Delta has been drilled for decades.

I don't want to use an inappropriate tone for HN, but you should really take stock of how you've been misinformed so badly and seek to educate yourself.


  And that comment about the Deepwater Horizon not being able 
  to drill closer to the shore... that's eyepopping. The 
  shallow-water GoM in the Mississippi Delta has been drilled 
  for decades.
Shallow water drilling has been disincentivized in favor of deep water drilling for several years in part because of tourism and NIMBYism concerns. People do not want to see drills from the shore. This is well known in the sector. You can find articles on the topic before 2010 by using archive search in Google News.

http://www.stpns.net/view_article.html?articleId=10841083261...

  July 13, 2006

  Through financial incentives, the House bill  encourages     
  states to allow offshore drilling. For drilling within 12 
  miles of their coastlines, the states collect 75% of all 
  royalties. They get 50% of all royalties for drilling up to 
  100 miles offshore. Louisiana, whose lawmakers authored the 
  bill, stands to collect an additional $50 billion over the 
  next 30 years. South Carolina, with no offshore drilling in 
  its history, has no idea of the potential payoff. The South 
  Carolina concern appears to be more turf protection by the 
  tourism lobby, and that includes Gov. Mark Sanford.

  South Carolina's $10 billion tourism industry could be 
  threatened, according to Sanford, *even though the state 
  could restrict offshore drilling to a distance more than 
  100 miles out*. The federal authority is demarcated at 200 
  miles out, and beyond that is open international waters.
Notice: tourism concerns (and NIMBY issues, not mentioned in this piece) are cited as a reason to move drilling further offshore, in part by giving financial incentives (50% rather than 75% of royalties) for locating drills out of sight. Note also that Louisiana's lawmakers authored the bill. You can dig in more if you wish, but there were a variety of reasons why lawmakers preferred drilling further away from shores. And Republicans were just as culpable:

http://www.time.com/time/nation/article/0,8599,166334,00.htm...

  July 3, 2001

  "Floridians have spoken loud and clear, and their voices 
  have been heard by President Bush," Florida Gov. Jeb Bush 
  said Monday after Interior Secretary Gale Norton announced 
  the administration would ask Congress to let oil companies 
  drill on about 1.5 million more acres in the Gulf of 
  Mexico. That's just a quarter of the roughly 6 million 
  acres that the Clinton administration first proposed 
  opening for leasing in 1997 and that the Bush/Cheney energy 
  plan had earmarked for drilling.

  Speaking from his parents' home in Kennebunkport, Gov. Bush 
  called the scaled-back proposal a victory for "Florida's 
  fight to protect our coastline" and told Bush the words 
  he's going to be hearing a lot for the next few years: not 
  in my back yard. "Any lease sales that do occur in the 181 
  area" — a patch of energy-rich ground in the derrick-free 
  eastern part of the gulf — "will occur off the coast of 
  Alabama, not Florida," he said.

  The newly proposed area extends to 100 miles south of 
  Mobile and gets no closer than about 200 miles west of 
  Tampa. That's actually OK with the folks in Alabama — they 
  already have plenty of drilling platforms out there, as 
  does most of the Gulf coastline stretching west to Texas. 

  And now it's OK with the folks — the Republicans, anyway —   
  in Florida, who are worried about their white sandy beaches 
  and the mammoth tourism industry that grows on them.

  The compromise was made (and somehow you don't expect 
  Norton to be overruled later in the week) to appease the 
  state GOPers whom Bush offended late last month when he 
  came to Florida to strike an environmental pose at the 
  Everglades National Park with a bunch of Democrats.

  ...

  The plan allows Ari Fleischer to say the president's new 
  plan is "environmentally sensitive and balanced" (although 
  that's what they said about the old plan). It allows Jeb 
  Bush, up for a tough re-election in 2002, to imagine that 
  his brother's winning the presidency was actually a good 
  thing for him politically.
Operative phrase: "gets no closer than about 200 miles west of Tampa". A series of bills and regulations like this, some of them simply interpretations, led to an overall incentive to locate platforms further and further away from the coasts, in uncharted waters.

Now, one can claim that tourism was a legitimate reason for these rules. Perhaps. But one must also concede that regulation was a nontrivial factor in forcing these platforms away from the shore.


I have no incentive for further discussion with you, so I'm not sure why I'm bothering to respond.

I'll just say that I'm well aware of the politics and technical issues in the E&P sector as well as domestic and international energy consumption (across all fuel types), and that you are on the border of being completely uninformed on this issue. Frankly, it's sad. You have an opinion, and have used Google to find articles that validate it... and here I am, having read RigZone for years, participating on Oil Drum, having independently given myself an undergrad+ education in petroleum geology, having invested real money in oil companies and reading dozens if not hundreds of 10-Qs and 10-Ks for domestic small to mid cap producers (whose assets span onshore, offshore, UDW, shale, and bitumen). And nothing I can say will jolt you out of your comfortable mindset where there is abundant petroleum for the taking if only DC would step back and let capitalism run its course.


Since you know the industry so well, what is your opinion of this?

http://www.npr.org/2011/09/25/140784004/new-boom-reshapes-oi...

"[T]he US could be poised to pass Saudi Arabia and overtake Russia as the world's largest oil producer" - true?


The 2T bboe is likely talking about in place reserves, not recoverable reserves. See http://en.wikipedia.org/wiki/Oil_reserves#Proven_reserves for a discussion of the definitions 'in place', 'proven', and 'probable'.

Fracking is still a young-ish technique, and last I knew the recovery rates were less than 5%. I don't think it will get above 10% even with technology improvements. Which is still a boatload of Oil / NG / NGL. It's still just 8 years of global demand at current rates.


I brought up the exact same objection when Kurzweil spoke at my college. He pointed out that the installed cost-per-watt and total installed capacity of PV exhibits an exponential trend over the ~30 years of available data. He extrapolated that within 20 years, 100% of the world's energy could be provided by photovoltaic panels.


"Averaged over 30 years, the trend is for an annual 7 percent reduction in the dollars per watt of solar photovoltaic cells. While in the earlier part of this decade prices flattened for a few years, the sharp decline" http://blogs.scientificamerican.com/guest-blog/2011/03/16/sm...

'The cost of solar, in the average location in the U.S., will cross the current average retail electricity price of 12 cents per kilowatt hour in around 2020, or 9 years from now. In fact, given that retail electricity prices are currently rising by a few percent per year, prices will probably cross earlier, around 2018 for the country as a whole, and as early as 2015 for the sunniest parts of America.'

If solar is at a 7% annual decline, is that not a very rapid advancement?
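
Quick sketch of what a steady 7% annual decline implies (the starting cost is my own rough assumption for 2011 installed solar; the grid price is the 12 cents from the quote):

  # How long a steady 7%/yr cost decline takes to cross a fixed grid price.
  # Starting cost and grid price are rough assumptions for illustration.
  solar_cost = 0.22        # $/kWh, assumed average solar cost in 2011
  grid_price = 0.12        # $/kWh, average US retail electricity price from the quote
  year = 2011
  while solar_cost > grid_price:
      solar_cost *= 1 - 0.07      # 7% annual decline
      year += 1
  print("crossover year:", year)

With those assumptions the crossover lands around 2020, which is roughly what the quoted article says.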


Solve the amount of raw collection (speaking of "next generation" energy technologies), and some leakage in the storage and delivery becomes acceptable. It remains a problem that you can peck at, over time.

For the U.S., I see a/the primary crisis that is addressable as being energy. And solving it would, directly and indirectly, re-invigorate our economy and provide an entire sector not just of jobs but of careers / career growth. And education.

That the current Administration didn't jump into this aggressively has been an enormous disappointment to me.

That would be some leadership. Chart the course (course corrections allowed), and advocate -- including with that bully pulpit -- to make it so.

Given that we don't have such leadership (which would be unusual, admittedly), it will have to be bottom up. I just hope that's enough.


The energy problem was solved by Freeman Dyson et al. in the 50s at Los Alamos [1]. Here are his paraphrased proposals:

Kardashev Level I (http://en.wikipedia.org/wiki/Kardashev_scale)

Dyson fusion engines.

He's not talking about unviable Tokamak designs (http://en.wikipedia.org/wiki/Tokamak), but something much simpler. Basically use H-bombs (which are the only tried and tested fusion technology) to drive a large internal combustion engine. The energy released by each explosion is stored by using it to lift water back into hydraulic dams.

Kardashev Level II

Dyson rings and shells (http://en.wikipedia.org/wiki/Dyson_shell).

Kardashev Level III

"Lather, rinse and repeat" with Dyson shells.

[1] Just as all problems in computer science were solved (in principle) at Xerox PARC in the 70s, all energy problems were solved (in principle) at Los Alamos in the 50s.


>What about energy?

>It's true that if you look at most areas of technology they are advancing rapidly. Except energy. Energy has stagnated since the 1950s.

This is demonstrably false. Solar power generation, for example, has been enjoying the same kind of Moore's Law exponential improvement in price per watt over the past 15 years that computer processing power has. This is hardly surprising, given that silicon-wafer solar panels often use the same semiconductor suppliers that computer hardware manufacturers do. Newer thin-film solar panels represent a jump in paradigm that promises even greater price performance.

I could have brought up similar points about the progress of wind-power, bio-fuel, or a number of other fields. Energy has anything but stagnated since the 1950s.

tldr: Stay off the peak oil scaremongering sites. They'll blind you.


OK, but this is not "real" until you have end-to-end solar, as in starting from sand and ending up with installed solar panels, with every step along the way solar-powered, no fossil fuels, not even in the truck to deliver them to the end user. Because what we do have right now is end-to-end fossil. The main use of fossil fuels now should be bootstrapping the next level, not everyday use.


> Newer thin-film solar panels represent a jump in paradigm

isn't this what Allen said in his critique? scientific achievement doesn't just grow exponentially - there are those "jumps in paradigm" that move us forward, but they're relatively rare and unpredictable.


As Kurzweil pointed out, Allen hadn't even read his book. In it Kurzweil provides copious volumes of data to support his claim that the overall trend is still exponential. As one paradigm starts to run out of steam, there is greater and greater research pressure to find the next. Much as vacuum tubes improved exponentially until nearing their limit upon which transistors and then later ICs took over, the same has been happening with energy.

For the past 400 years human energy consumption per person has been growing along a relatively smooth exponential curve, despite changes from wood-burning to coal to whale oil to petroleum. Even a cursory unbiased study of the subject will show that. Interestingly, for nearly the entire time, Malthusian doomsday prophets have enjoyed more popularity than more rigorous analysts.


I don't think energy production needs to change very much. One dimension of computing improvement over the past N years is that cpus are getting faster per unit of energy. So, it takes less and less energy for more and more powerful computers.

Remember that our brains only need a couple thousand kcals per day.


Hydroelectric power stations run backwards to store energy (electrical->potential).

Energy usage of computers is also decreasing, as in mobile devices. There's also anticipated uptake of ARM processors in server farms, because energy usage is a major cost. The advances include advances in energy usage.

We have many options for sources of power; our current choices are mostly preferences. You're right that energy tech has not advanced at Moore-like rates (nothing but silicon does), but it has matched the accelerating demand for it. When it fails to, we'll see serious research into other power sources. (Cheap energy is a curse.)


> Allen writes that "the Law of Accelerating Returns (LOAR). . . is not a physical law." I would point out that most scientific laws are not physical laws, but result from the emergent properties of a large number of events at a finer level. A classical example is the laws of thermodynamics (LOT). If you look at the mathematics underlying the LOT, they model each particle as following a random walk.

Oh, that's a terrible point! Thermodynamic laws are nothing like predictions about the future. I would have thought linguistic sleight of hand like this was beneath Kurzweil.

> Allen's statement that every structure and neural circuit is unique is simply impossible. That would mean that the design of the brain would require hundreds of trillions of bytes of information. Yet the design of the brain (like the rest of the body) is contained in the genome.

The design of the human brain is not entirely contained in the genome!

As soon as we mapped the human genome we were faced with a paradox. How come the complexity difference between us and mice for example, is NOT proportional to the difference in our genomes?

Here's an article from 2002, "Just 2.5% of DNA turns mice into men": http://www.newscientist.com/article/dn2352-just-25-of-dna-tu...

In other words, if you look at just how the genomes differ, then humans and mice ought to be a lot more similar than we are.

We have since come to find out just what a huge role the feedback-interactions of DNA and its products, like proteins and all kinds of RNA, play in the development of life.

This staggeringly complex feedback mechanism is why, despite the mapping of the human genome, medical progress still remains excruciatingly slow. Much, much faster than before! But not nearly as fast as we had hoped when the human genome was first mapped.

Note that epigenetic information (such as the peptides controlling gene expression) does not appreciably add to the amount of information in the genome.

This is true in that they don't add much to the genome. But it is profoundly wrong in that they do add hugely to the actual resulting phenotype.

Kurzweil continues in this same vein for a while. I don't know if he has just never bothered to look into the latest research or if his understandably strong desire not to die has resulted in a huge confirmation bias.

When Kurzweil talks about the general trend of scientific progress I tend to agree with him. But neither Paul Allen nor anyone else disagrees with the notion that we will reach the singularity at some point in the future.

The argument is about the timing. And timing the future is like timing the stock market: something I don't care to try to do.

But when Kurzweil attempts to convince the reader that the singularity is near by using specific examples, that's when I start to disagree with him. Because once he starts being specific, it becomes easy for me to see where he is wrong, factually, objectively wrong.


>Oh, that's a terrible point! The laws of thermodynamics are nothing like predictions about the future. I would have thought linguistic sleight of hand like this was beneath Kurzweil.

Could you elaborate on this? I'm not a huge Kurzweil fan, but as far as I can tell he's saying something reasonable here - that when he talks about LOAR, he's describing a phenomenon rather than a physical process, and that this is an accepted usage of the word "law". I don't think he's playing semantic tricks so much as responding to a semantic complaint.


Our understanding of thermodynamics is very thorough. It allows us to make a plethora of predictions, all of which are falsifiable and have been thoroughly tested over the years.

This is what makes our theories about thermodynamics real scientific theories.

Predictions about the future, no matter how simple or based on long running past trends, are only falsifiable in exactly one way: wait until the predicted date passes.

I think a very, very informal use of the term "law" could cover both. But what irks me as a science-minded person is that Kurzweil is attempting to equate the informal meaning of "law" - a generic description of a phenomenon - with a scientific "law": an actual testable, falsifiable theory with predictive power.


No, Kurzweil was not equating the "Law of Accelerating Returns" to a physical law of the universe. Instead, he was comparing it, albeit clumsily, to laws that govern aggregate behavior, like the second law of thermodynamics. There are a lot of things that we colloquially call "laws" that clearly don't have the same footing as laws in physics, "Moore's Law" being one of them.


I thought it was unreasonable for him to choose to compare to the second law of thermo, which is lawful in a much stronger way (very precisely stated, understood in a quantitative way at microfoundations...) than the pattern in accelerating returns that he's pointing out. There are dozens or hundreds of comparisons he could have made, he didn't need to pick one which is so central and stable that thinking one has found a way around it has become a classic sign of being a crank. It would be more reasonable to choose to compare to some other pattern that is generally understood to be important in economics --- e.g., returns to specialization or returns to capital investment.


It seems to me that Kurzweil is on rather strong grounds when he argues in effect that 25Mbytes is a safe conservative upper bound on the information needed to specify a human infant brain. The relevant information content of the epigenetic stuff is unlikely to be tens of megabytes, and extremely unlikely to be hundreds of megabytes. Otherwise, it's hard to see how we could've overlooked such a high proportion of non-DNA design information being passed around in all the work being done on genetics. It's also hard to see how so much extra information would stay stable against mutation-ish pressures unless its copy error rate was much lower than DNA, and hard to see how we'd've overlooked all the machinery that would accomplish that.

Moreover, I think 25M bytes is probably a very conservative upper bound, so that the relevant uncompressable complexity of what computer scientists need to design for general AI is likely no more than 1M bytes. A lot of actual brain stuff is likely to be description of the physical layer that silicon engineers won't care about, because they do the physical layer in a completely different way (silicon and masks and resists and hardest of hard UV and two low digital voltages, not wet tendrils groping toward each other in the dark and washing each other with neurotransmitters). A significant amount of actual brain stuff is likely to be application layer stuff that we don't need (e.g., the Bearnaise sauce effect, and fear of heights and snakes) and optimizations that we don't strictly need (all sorts of shortcuts for visual processing and language grammar and so forth, when more general-purpose mechanisms would still suffice to pass a Turing test). A lot of brain stuff is likely to be stuff in common with a fish, much of which we already know how to implement from scratch. And all brain stuff seems pretty likely to be encoded rather inefficiently: lots of twisty little protein substructures and nucleic acid binding sites are unlikely to be nearly as concise as the kind of mathematical or programming language notation that describes what's going on.

When Kurzweil writes "do not appreciably add" I understand him to be willing to stand by roughly the quantitative information-theoretic claims I made at first (25Mbytes, tens of Mbytes, hundreds of Mbytes). When you write "profoundly wrong ... add hugely" I am unable to tell what you are claiming. How many uncompressable bits of design information are you talking about? Perhaps you believe that natural selection pounded out and mitosis reliably propagates 200M uncompressable bytes of brain design information? Or 1G bytes? As above, I think that is probably false. Or perhaps I should read "hugely" as "vitally" and understand that you merely mean that the epigenetic information might be less than a million uncompressable bytes but still if you corrupt it badly you have a dead or hopelessly moronic infant. If that's what you mean, I think you are factually correct, but also don't think that that fact contradicts Kurzweil's argument.
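(For context, my own back-of-envelope on where a number like 25 Mbytes even comes from - the raw genome is under a gigabyte to begin with, and the 25MB figure additionally assumes lossless compression and counting only the brain-relevant fraction:)

    # rough arithmetic, not Kurzweil's exact figures
    base_pairs = 3.2e9           # approximate human genome size
    bits_per_base = 2            # A, C, G, T
    raw_mbytes = base_pairs * bits_per_base / 8 / 1e6
    print(raw_mbytes)            # ~800 MB uncompressed

    # getting from ~800 MB down to ~25 MB requires assuming heavy redundancy
    # (repeats compress well) and counting only the brain-specifying fraction --
    # exactly the assumptions being argued over in this thread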


I fully agree with your estimates, and would just like to point out more precisely where people (especially biologists) tend to misinterpret this argument.

The strawman argument that people tend to hate is this: the brain is encoded in 25M of DNA, so it would only take 25M to build a physical brain. To be clear: we're not arguing that.

Then they go on about how complex the process of creating a physical brain from a string of DNA is, how there's so much information that we'd need about the chemical reactions, that because the building-up is so complicated we couldn't do it with computers 100 years from now even if Moore's law held up, etc. And I agree with all of that, but it's not what that 25M figure refers to.

What we're saying when we give that number is that an algorithm that does more or less what the brain can do can be coded in less than 25M. It won't implement its physical structure exactly, but some algorithm that comes in under the 25M limit in almost any suitably strong programming language is all but guaranteed to qualify as "intelligence". Whether we can find it or not is another matter; all we're saying is that it's there (and I'd go further, and say that many such algorithms exist in the <25M algorithm-space, because if they weren't relatively easy to find, evolution never would have figured them out).

That the particular genotype->phenotype->algorithm encoding that creates the brain's algorithm is hideously complex doesn't change the information-theoretic content; it loosely corresponds to inserting a massively complex general-purpose compiler in front of a Turing-complete language, which doesn't change the compressibility of the code one bit. Unless the compiler is specifically built to compress a certain type of algorithm very well, the compressed information density will not change significantly for any program of sufficient complexity (this is provable mathematically if you properly define the various conditions). In fact, there's a very good chance that the genotype->phenotype->algorithm mapping that results in human intelligence uses a less efficient coding of the algorithm than we could achieve via a modern expressive programming language, because the brain's physical implementation severely limits the expressivity of algorithms that can be baked into it.
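(A toy version of that compiler point, in case it helps - my example, not a quote: push a short spec through an arbitrarily complicated but fixed deterministic process and you get a huge, complicated-looking product, yet a complete description of the product is still just the spec plus the fixed machinery, so the machinery only adds a constant.)

    def develop(spec: bytes) -> bytes:
        # stand-in for the genotype -> phenotype machinery: arbitrarily messy,
        # but FIXED -- it's the same for every spec, so it only needs to be
        # described once
        out, state = bytearray(), 12345
        for _ in range(1000):
            for b in spec:
                state = (1103515245 * state + b + 12345) % (2 ** 31)
                out.append(state & 0xFF)
        return bytes(out)

    spec = b"pretend this is 25MB of genome"
    product = develop(spec)
    print(len(spec), len(product))
    # the product is ~1000x bigger and looks like noise, but its description
    # length is still bounded by len(spec) + len(source of develop) + O(1)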


It seems to me that Kurzweil is on rather strong grounds when he argues in effect that 25Mbytes is a safe conservative upper bound on the information needed to specify a human infant brain.

This is the best I could do: http://www.sciencedaily.com/releases/2005/01/050111115721.ht...

I can't find the actual scientific papers. Anyway, from the article above:

The lack of correlation between genome size and an organism’s complexity raised a question – how do complexity and diversity arise in higher life forms?

Or to rephrase that, why is there no correlation between source code size and application complexity? Why are mice with 24.99Mb of DNA so much less complex than humans with 25Mb of DNA? Why is there not a linear relationship between the complexity of the animal and the complexity of its DNA? Well...:

RNA editing involves the process by which cells use their genetic code to manufacture proteins. More specifically, says Maas, RNA editing “describes the posttranscriptional alteration of gene sequences by mechanisms including the deletion, insertion and modification of nucleotides.”

RNA editing, says Maas, can “increase exponentially the number of gene products generated from a single gene.”

Increase the number of gene products from a single gene exponentially, that says.

The paper I can't find describes how this process, as it takes place in the brain, is an almost exact match for the complexity difference between mice and humans.

And yes, posttranscriptional alteration is much more fragile than good old double helix DNA. And no, evolution doesn't care.

...hard to see how we'd've overlooked all the machinery that would accomplish that.

We didn't overlook it for long: shortly after the human genome project raised the question, we spotted it. See above.

the relevant uncompressable complexity of what computer scientists need to design for general AI is likely no more than 1M bytes.

What do you base this statement on?

How many uncompressable bits of design information are you talking about?

Scientists have already discovered that posttranscriptional alteration (a part of epigenetic information) adds huge complexity. How much more? I am absolutely not comfortable guessing at numbers of Mbytes because I know how little I know.

And what about the actual cell machinery? As the egg is being formed inside the mother, how much complexity does the way its machinery works add to what will happen after the egg is fertilized? Again, I dare not guess.


You're saying a lot there, so rather than create a wall of text in response I'd like to boil it down a bit - assume N=25Mb, give or take an order of magnitude:

Are you making the claim that the N bits of DNA involved in coding the brain can encode more than 2^N neural algorithms?

Or do you think that the particular set of 2^N (assuming no redundancy, which is generous...) neural algorithms that N bits of DNA can encode are more likely to result in intelligence than a random sampling of algorithms of equivalent Kolmogorov complexity?

Or are you claiming that epigenetic factors are able to reliably transmit significantly more than N bits of mission-critical data across the generations, and that epigenetic evolution is likely to thank for devising the human intelligence algorithm rather than evolution of DNA?

Edit: looking over your post, I suspect that part of the misunderstanding is over the word "complexity". You seem to be focusing on the complexity of the products; these estimates focus on the complexity of the spec. In humans the difference is muddled because the spec goes through such ridiculously complicated machinery to become the product, but when it comes to designing algorithms, that complicated machinery might as well be a random shuffle for all it matters to the algorithm's proper functioning, so the Kolmogorov complexity that it adds is effectively zero.
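(To make the first question concrete - a toy example of my own: however baroque the development machinery is, a deterministic map from N-bit specs can't produce more than 2^N distinct products. With 8-bit "genomes":)

    import hashlib

    def develop(spec: bytes) -> bytes:
        # as complicated a deterministic "development process" as you like
        return hashlib.sha256(spec * 1000).digest() * 100

    products = {develop(bytes([g])) for g in range(256)}   # every 8-bit genome
    print(len(products))   # at most 256, no matter what develop() does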


Are you making the claim that the N bits of DNA involved in coding the brain can encode more than 2^N neural algorithms?

That's exactly what the article above explains. Did you read it?

Or do you think that the particular set of 2^N (assuming no redundancy, which is generous...) neural algorithms that N bits of DNA can encode are more likely to result in intelligence than a random sampling of algorithms of equivalent Kolmogorov complexity?

I am not sure I understand the question. Are you asking if I believe the brain is a large but mostly simply designed neural network? If that is the question, then no.

Or are you claiming that epigenetic factors are able to reliably transmit significantly more than N bits of mission-critical data across the generations

I am claiming that to get a human you must "host" the human genome in a pre-existing human. Sticking it in a mouse will not result in a human. What does that imply?

that epigenetic evolution is likely to thank for devising the human intelligence algorithm rather than evolution of DNA?

I don't see two kinds of evolution there. It's all just human evolution, genome and all. After all, it's not like human DNA is out there evolving in something other than humans.

In humans the difference is muddled because the spec goes through such ridiculously complicated machinery to become the product

Yes!

but when it comes to designing algorithms, that complicated machinery might as well be a random shuffle for all it matters to the algorithm's proper functioning

What implies that? How do you go from "yes, a hugely complex compiler is necessary" to "no, we can just randomly shuffle the code and it'll be just as good"?

How many bits does it take to describe the string "aaaaaaa"? Not many. How many bits to describe the human genome to a scientist? I'll just gzip it and email it and we're done, awesome!

How many bits to describe a human brain, or how to turn that genome into a human brain? Well, let's see: it's a complex self-modifying process, the human brain expands the number of sequence products exponentially, and interestingly the mouse brain does not do this.

In mice the complexity difference between their brain and their genome is linear. In humans it is not.

In mice the Kolmogorov complexity of their brain is equal to the Kolmogorov complexity of their genome + some linear factor.

In humans it's the Kolmogorov complexity of our genome + a lot more.

How much is "a lot more"? No idea.

Is all of this inherited? Yes, partly through the genome, partly through the fact that that genome must be planted in a pre-existing human. Again, if you swap it out with a mouse genome, humans won't be giving birth to healthy mice and mice won't be producing humans.

You can move a simple sequence across species - a glowing protein from jellyfish to rabbits, for example. You cannot move whole genomes between higher-order life forms.

I think the disagreement between early and late singularity people often comes down to whether the human brain is mostly a large but simple mass of neurons or not.

I think computer scientists are often in the "it's just a large neural network" camp. Brain scientists are in the "it's much more complicated than that" camp. As a computer scientist and software engineer who's worked in biotech for many years, I agree with the brain scientists.


I've already replied to some of this, but re: your mice vs. humans example, my views on this are that the fundamental algorithmic innovation that makes humans so intelligent was already present in mice, and almost all critters in the "bigger than a bug" families. Fundamentally we do process information in the same way as mice, it's merely a matter of turning up some of the intensity knobs (or more likely, adding a few more well-tuned layers to the network that already exists) to let humans take intelligent thought to new realms of utility.


Another response:

Kurzweil also ignores economics. The advance of technology is driven in part by economic forces. Computing power may stagnate not because we have reached physical limits but because present-day computers are good enough for what 98% of the market wants.

I see this trend developing. If anything, the trend in consumer computing is toward less powerful but lower-power and more portable computing devices. My current laptop -- a Macbook Air -- is actually slower than my previous laptop. But it is more portable and uses less energy. And it does everything I want. I don't need more power right now.

The only areas driving the performance end are gaming, high performance computing, and high-capacity data centers. How long will those go until they too are basically satiated?

We've seen this in other areas. The envelope for aviation maxed out in the 1970s with things like the U2. Space flight seems to just now be emerging from a long coma with things like SpaceX, but on closer examination SpaceX is just reviving 1960s ideas and doing them at a lower cost with modern control systems and materials technology.

My other reply about energy deals with supply-side limits to growth. This response deals with demand limits to growth.


I don't think Kurzweil ignores economics at all. In fact, many of his arguments are largely based on economics, such as the cost to obtain a given amount of computation. It is undeniable that this has gone down steadily with time. Computation is so cheap that cloud computing (e.g., EC2, Azure) allows individual developers access to more computing than most know what to do with.

Your example of the MacBook Air is invalid because the MacBook Air of today isn't priced principally on performance -- it is priced mostly by build quality and performance/watt/kilogram. Never before have computers had a better performance/watt/kilogram profile. You have to consider the whole package.

The argument that there is less need for performance is also flawed. It is not that humans have less need for computation, it's that the distribution of computation is changing. As mentioned before, a lot of computation is moving to the cloud. In many cases, this is simply the most efficient place for it to be; instead of having an abundance of computation sitting mostly unused on a laptop or desktop, computation is becoming a service where it is used on demand. Think of the computation required for cloud services like speech-to-text processing or Google Goggles. Most of the computation is farmed out to servers in the cloud, and only a minimal amount happens on the device.

I'm not even sure I agree that the decrease in local computational needs is a long term trend. For one thing, migration to the cloud can only continue as long as network bandwidth keeps up with cloud data demands. If network bandwidth is not able to keep up then I predict we'll see local computational needs spike again.

Even with some computation siphoned off to the cloud, AI will create a whole new class of applications, even for local computation, because it is so data intensive. When the next crop of AI applications emerges and becomes more common, the early algorithms will probably be inefficient, which will itself cause a spike in computational needs. The applications aren't here yet, but there are many of us (including myself) working hard to change that.


What the market wants is every map, book, song, movie, game, and poem ever made to be instantly accessible, searchable, and reviewable. They want their work to be autosyncing, autobackedup, and to follow them from device to device. They want to be able to securely talk with friends and family at any time, to publicly talk with friends and family at any time, to discover new friends and family, and to be able to completely disconnect from friends and family at will. They want intelligent tools that keep them from making dumb decisions, tools to help them make good decisions even better, and tools that won't get in the way of them making dumb decisions.

I think that the market for computing power has a looong way to go before it taps out what the market demands.


But how much do they want all this, and does this require major improvements in computing power?

It looks to me that almost everything you listed could be done on the computers of five years ago. It's all nothing but software improvements. Existing hardware is good enough.


Two years ago, I was developing some HVAC equipment modeling software as part of a sales automation package whose worst case scenario needed to calculate the max cooling capacity, and a few other thermodynamic stats, for roughly 18 million different configurations. No matter how much I optimized it, the CPU didn't have enough juice to "instantaneously" plow through all of that math. The best I could get it down to was about 20 seconds. More CPU power would definitely be nice.
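(For what it's worth, a sweep like that is embarrassingly parallel, so throwing cores at it helps even without faster clocks. A minimal sketch, with evaluate_config standing in for the real thermodynamic math:)

    from multiprocessing import Pool

    def evaluate_config(cfg):
        # placeholder for the per-configuration capacity calculation
        return cfg * cfg % 97

    if __name__ == "__main__":
        configs = range(18_000_000)
        with Pool() as pool:                       # one worker per core by default
            results = pool.map(evaluate_config, configs, chunksize=10_000)
        print(len(results))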


But how much did that 20 seconds delay cost you? It would have been nice to have the answer faster, but how much would you have been willing to shell out for it?

I'm talking about economics. I'm talking about what people are willing to pay for.


I am also talking economics.

The customer would have loved for it to go faster, because then it would have looked like magic.

In 1998, it would have been nice if I had had photo-realistic computer games, but I patiently made do with what I had. Today, people eagerly shell out money for hardware that can run this year's even-more-photorealistic version of Madden Football.

I think the demand for more power is definitely there.


Moore's Law gives us more for our money, where "more" can mean more speed or more (cheaper) devices for the same cost.


Gaming alone will keep driving computer growth for a long long time.


It seems that there is a shift from PC gaming to console gaming... Here's me citing my single data point - Rage. id focused on the console experience above the PC experience, so when you have a predominantly PC-driven company focusing on consoles, you feel that the industry is/has changed. That, and current consoles are halfway through their life-cycle according to MS/Sony, so new console designs are not slated until beyond 2015.

Graphics hardware companies like Nvidia/AMD will continue pushing the envelope, though.


Scientific progress is not just a question of intelligence.

Therefore, even if computers become more intelligent than humans, it is doubtful a "singularity" will occur.


If not, then what?

I agree with Alan Kay when he says that IQ << knowledge << outlook. But all three happen in the brain regardless.

Imagine we manage to build a machine that produces more insights than Newton. That particular form of intelligence would be quite likely to trigger a singularity, don't you think?


No, I don't, because you still need time and a certain amount of randomness to make discoveries.

Another way to put it is that even if you are twice as intelligent as Newton, you won't discover twice as many things or discover things twice as fast.


Sifting through the day's top-list on the AI appstore... WTH's this? "Newton AI. The power of a 1000 research assistants at the click of a button, and they'll run all day tirelessly.". "99c launch deal, just for today -- get it now!" Hmm. Click.

You're now waiting for it to download to your little AiPod, which will beam 'brain bits' to your home-bots, which are now busy sketching out the next Mona Lisa onto a couple of shiny new dreamPads.

Why wouldn't those startup dudes down the street try and build a beefier/faster "runs at 50x universe speed" AiPod for the AI platform you just bought your Newton AI app for?

-------

Why wouldn't AI be able to simulate 'regular universe time/ human time' faster? Why couldn't AI have stronger, more varied randomness?

The bottleneck would be interactions that require a peek into regular universe time: live human input (phone calls, emails), weather, biological data etc.


It's startling to me that everything I do comes from 50 megabytes of source code.


It's startling to me that everything I do comes from 50 megabytes of source code.

This fact was a result of the mapping of the human genome, and has since been proven wrong.

But like many such facts it has a certain "stickiness" to the human brain. I expect to be hearing it for many years.

I tried finding the paper which found that the complexity of mRNA or tRNA (or some other kind, I forget) produced in the brain matches the complexity difference between mice and humans almost exactly, unlike the difference between the two genomes.

It also turns out that type of RNA is very fragile and very easily mutated compared to DNA. And evolution does like to follow the path of least resistance. But my google-fu fails me.

tl;dr: It's not 50 megabytes, shit's complicated.


Every creature starts from a single cell. At that point the only difference between a mouse and a human is the DNA, right? Aren't mRNA and tRNA produced from DNA? It's been a long time since high school. So I don't see why it isn't 50MB.


Every creature starts from a single cell. At that point the only difference between a mouse and a human is the DNA, right?

No. That single cell is a working machine of which the DNA is just one component. The proteins which transcribe the DNA, they are in there too, but they are not the DNA. The whole point is that everything else that's in a cell, and there's a lot of "stuff" besides the DNA, has a major effect on what happens.

For embryos specifically both the egg and the sperm contribute "stuff" in addition to DNA.

The metaphor of code and compiler is a bit clunky but decent. If you think of the animal as the compiled application and of DNA as its source code, it should be obvious there are millions of lines of code in the OS and compiler that were required for the compilation of the application.

Is "Hello World" in C just the one .c file, or do we have to count the standard libraries? How about the hardware designs of the PC running it? If you look at just the .c file, it's a few lines, if you look at everything that's actually required to run it, that's a bit more complicated.


Is "Hello World" in C just the one .c file, or do we have to count the standard libraries? How about the hardware designs of the PC running it? If you look at just the .c file, it's a few lines, if you look at everything that's actually required to run it, that's a bit more complicated.

And yet nobody would ever claim that the fundamental minimum complexity of the "Hello, world!" algorithm was more than a few lines of text, because the "important stuff" in that algorithm has nothing to do with all of the irrelevant complexity in the operating system and hardware, all of which could be done in millions of different ways without changing the fundamental algorithmic insights that "Hello, World!" requires.

Yes, there are epigenetic factors that get passed along, but their effects tend to be transient, lost over a few generations at most, adding maybe a handful of tunable bits of information to the genome that can be quick-flipped as the environment demands. I don't know that anyone has ever suggested that any non-trivial amount of data is actually passed millions of generations down the line through this mechanism (keep in mind, to seriously take issue [i.e. beyond a mere factor of 2] with the estimates that you disagree with, you'd need to find more than 25 megabytes of evolutionarily-accessible data that lives somewhere other than DNA), and I'd be extremely interested to know if that was the case. Every evolutionary biologist I know focuses almost exclusively on genetic code as the evolutionary substrate because it's the only available channel that seems reliable enough for information to flow through unmolested over millions of years.


As a final question: if you hypothetically took human DNA only and put it into a frog embryo, is there no way you could pass on the parts that make up our highly advanced minds? That is what I meant by everything we do being in those 50MB.

What I am getting at is that very similar standard libraries are in the stem cells of all animals. There are countless examples of people taking DNA from one animal, putting it into another that is completely different, and copying the same traits.


Don't forget about the massive amount of information stored in society (customs, language, etc). You have to download all of this info through the course of your life.


"The genetic code does not, and cannot, specify the nature and position of every capillary in the body or every neuron in the brain. What it can do is describe the underlying fractal pattern which creates them." --Academician Prokhor Zakharov, "Nonlinear Genetics"

Whatever the exact number is, the brain's connections are much more numerous than the data in my DNA. All information flows from DNA. Chemistry between RNA & proteins & everything does change the end product, but even the chemistry those do is largely controlled by DNA. (The DNA only produces proteins, which do the chemistry it wants done. Or there is chemistry already happening in the environment, which the development strategy accounts for, but which doesn't add much, if any, complexity to the brain.)

And my quote, by the way, is from a fictional person.


That stopped being true before you were even born. Your experiences in your mother's womb have shaped your brain. All your DNA contains are the basic rules for assembling a generic human brain. What happens to it afterwards has far greater consequence for what you become.

I like to joke that I was born an engineer. It's true I have always been curious about all things technological (I was born during the height of the Apollo program and, as a kid, wanted to be an astronaut), but parental support (getting Lego-like kits with transmissions, gears, and motors, good books, and a good school) landed me at one of the most prestigious engineering schools in Brazil, where I was further "perfected" by some good teachers (and some awful ones - you have to learn to avoid them, after all).


Well of course that's just the substrate. A lot of what you do comes from what you've learned and continue to learn from your environment.


It also doesn't count the crazy amount of epigenetics that occurs, or the specialization that emerges from interaction with the environment of other cells and the outside world.

Still, 50MB is crazy small for what you'd expect the genome to be.


It's actually not crazy small considering what you can expect from natural selection (strongly constrained by the bit error rate in reproduction, more weakly constrained by the total number of selection events in the population since some earlier less complex point). And remember it's specifying the learning infant, not the learned human.

(I plan to write more about the likely information content in a reply to another comment.)
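(One hedged back-of-envelope for the "constrained by the bit error rate" point - my numbers, and the argument is rough: selection can only purge on the order of one new deleterious mutation per generation, which caps how much sequence can be kept functional.)

    # very rough mutational-load argument, illustrative numbers only
    mu = 1e-8                    # point-mutation rate per base per generation (approx.)
    max_deleterious_per_gen = 1  # order of magnitude selection can plausibly purge
    functional_bases = max_deleterious_per_gen / mu     # ~1e8 bases
    mbytes = functional_bases * 2 / 8 / 1e6             # at 2 bits per base
    print(mbytes)                # ~25 MB -- same ballpark as the figure upthread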


Don't forget the runtime environment though :)


Isn't it the same runtime environment as a worm's?


Well, there's the womb, and interaction with other humans, school, etc.


Running on a very complicated instruction set.


Let's say that you saw an executable file weighing in at 25 megabytes of hand-coded assembly running on a modern day Core i7 chip.

Would you really assume that the source code that it would take to write such a program would be over an order of magnitude different if you were targeting a RISC processor instead? Now give yourself access to an expressive compiled language, and estimate how much code it would take. Does the fact that you're targeting RISC even matter anymore, algorithmically?

Unless you're assuming that the biological "instruction set" has some hard coded primitives that make AI an easy problem, it literally doesn't matter at all that the instruction set is complicated (or rather, it matters up to a small constant factor), given that our programming constructs are vastly more powerful than those available to neurons. It's the connectivity algorithms that are important, and biology has absolutely no advantage there.


Same as the comment above, is it different from a worm's instruction set?



