I can't help but wonder why, now, someone has chosen to highlight what is still far too small a movement and treat it as though it were a dangerous threat, and then demonize it by associating it with a few of its wealthier proponents.
Things like Effective Altruism and looking at existential risk are hardly drawing a substantial amount of attention right now; they should be, but they could get ten times the attention and funding and still have only a minuscule fraction of the people and money going towards more popular causes.
And the set of people who think climate change isn't an important problem has almost no overlap with anyone thinking about existential risk. At most, I've seen a very small number of people suggesting that there are other high-risk issues that also deserve attention and aren't getting it.
Anyone seriously trying to tell people they shouldn't work to address climate change should rightfully be decried for causing harm. But the angle of trying to paint everyone thinking about existential risk with that brush is itself irresponsible and inaccurate.
There are lots of people spouting things like "don't bother about climate change", and that harmful group has almost no overlap with anyone thinking seriously about the future of humanity. Taking the vanishingly small intersection and drumming it up for outsized outrage is ridiculous and harmful.
Another, far less ridiculous angle would be "here's a tiny group of wealthy people with weirdly contradictory viewpoints". Perhaps that might not have gathered as many clicks, though.
> Another, far less ridiculous angle would be "here's a tiny group of wealthy people with weirdly contradictory viewpoints"
The article is quite clear that it is criticizing an idea ("this is a dangerous idea"). For any idea, if you look at the people subscribing to it, you'll see they follow it in a somewhat contradictory fashion. But one shouldn't hold back from looking at the idea itself because of that, 'cause otherwise one's ability to analyze is completely hobbled.
These supremacist ideas have a long and toxic history. And the moral foundations are always similar - some people today can be considered wholly disposable.
And they are supremacist - based on the unproven belief that humanity has an obvious manifest destiny which will incarnate (or perhaps insiliconate) super-intelligence across multiple galaxies.
It's juvenile Tom Swift fantasy.
Humanity has multiple immediate challenges now. They need actionable, practical intelligence.
If we can't find the relatively limited intelligence to deal with them, where are we going to find the IQ needed to colonise a galactic supercluster?
Of course some other species might evolve on Earth. But also - it might not. No matter what grand name this is given, "thinking" about it is just well-funded science fiction.
You seem to be implying that ideas and goals about preserving and benefiting humanity indefinitely somehow only benefit a subset of humanity, or worse, are used to harm some subset of humanity. Anyone looking to twist such ideas or goals to benefit only themselves or a subset of humanity should be called out on that. But right now you're attacking the entire concept over your perception that it only benefits the rich, or perhaps some other subset of people.
By all means, attack inequality. But let's work to fix it. The response to "other people have nicer things than me" should not be "people other than me shouldn't get to have nice things", it should be "more people should get to have nice things". That's even more critical when those things are fundamental improvements and safeguards of people's lives. Let's raise the baseline for everyone, rather than focusing excessive energy on cutting down the tall poppies.
And science fiction has a way of inspiring and driving science fact.
This I agree with. Their future dream isn't super plausible, is much too specific, and doesn't even seem desirable.
> they are supremacist - based on the unproven belief that humanity has an obvious manifest destiny
Normally "supremacist" means "this group of humans is superior to this other group of humans". I don't see a lot of that in this, except in the bit about "resources should be invested in the developed world".
(And "manifest destiny" is of course incredibly loaded, but you knew that!)
But --
> Humanity has multiple immediate challenges now. They need actionable, practical intelligence.
-- yes. I agree with the "actionable, practical" bit. Solving climate change isn't some research problem. It just needs leadership to organize people to do things we already know how to do.
The founding of OpenAI was prompted by some of the ideas of Bostrom and co. So yeah, the ideas deserve to be looked at by a wider segment of society than those normally associated with "rationalism".
The real concern is that this philosophy has terrifying possibilities (even if they're on the fringe), and being associated with powerful and wealthy people like Musk threatens to validate and realize those possibilities.
"...demonize it by associating it with a few of its wealthier proponents"
It's a bit, uh, odd to complain about associating an idea with its proponents. That's what most people do with most ideas. It seems like a lot of arguments being made here have the quality of "the authors are unfairly summarizing the arguments actually being made ... rather than taking the whole thing as a light-hearted experiment with no consequences if you don't want them...."
> It's a bit, uh, odd to complain about associating an idea with its proponents
Not when they're non-representative proponents used to advance an inaccurate sensationalized viewpoint.
That viewpoint is completely opposite of the reality of people thinking about existential risks. The people thinking about existential risks decades ago would have been working on climate change. Now, climate change is in the mainstream consciousness, and though there still isn't enough being done about it, it hardly needs more attention. So it's also reasonable to ask, what problems will have mainstream attention many years from now but we'll wish we'd started working on today?
> I can't help but wonder why, now, someone has chosen to highlight what is still far too small a movement and treat it as though it were a dangerous threat, and then demonize it by associating it with a few of its wealthier proponents.
I agree. It's a clear signal of people trying to nip an idea in the bud.
The thing that they don't mention is that we have a way of dealing with risk that isn't "all or nothing" - insurance.
When people pile on billionaires for paving the way toward having people on more than one planet, I just consider it an extremely small premium to pay relative to global GDP to avert a real risk. The world can do many things like that.
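To make the premium framing concrete, here's a minimal expected-value sketch in Python. Every number in it is a made-up assumption for illustration (order-of-magnitude guesses, not real estimates); the point is only the structure of the comparison, not the specific figures.

```python
# A rough expected-value sketch of the "insurance premium" framing.
# All numbers below are illustrative assumptions, not real estimates.

GLOBAL_GDP = 100e12      # ~$100 trillion/year, rough order of magnitude
PREMIUM = 10e9           # hypothetical $10B/year spent on e.g. multi-planet capability
RISK_PER_YEAR = 1e-4     # assumed annual probability of the catastrophe being averted
LOSS = 1e17              # assumed dollar cost of the catastrophe (made up)

expected_loss_averted = RISK_PER_YEAR * LOSS

print(f"Premium as share of GDP: {PREMIUM / GLOBAL_GDP:.4%}")
print(f"Expected annual loss averted: ${expected_loss_averted:,.0f}")
print(f"Worth it under these assumptions: {expected_loss_averted > PREMIUM}")
```

Under these (entirely debatable) inputs the premium is a hundredth of a percent of GDP while the expected loss averted is far larger, which is the shape of the commenter's argument; disputing the conclusion means disputing the probability and loss assumptions, not the arithmetic.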
They are attacked because they are committing blasphemy against the church of climate change.
There is no threshold for blasphemy, even the tiniest one must be punished and made an example of so that others don't get any ideas that blasphemy will be tolerated.
You’re smart to use a throwaway, but you’re on to something. Take Bjørn Lomborg, whose work has likely helped guide governments to make decisions which have done immeasurable good for the poor and sick; he can’t even have his voice heard in the climate change debate. He’s simply called a denier and shouted down (even though he says climate change is real). His work with Nobel-prize-winning scientists didn’t buy him an ounce of forbearance from the modern Grand Inquisition.
You mentioned that Bjørn Lomborg's work has "likely helped guide governments to make decisions which have done immeasurable good for the poor and sick" - I am interested to know, because it isn't something I am familiar with, what of his work are you referring to?
After the Copenhagen Consensus, Lomborg and his advisory group offered advice to several nations about the optimal issues to invest in to maximize lives saved and improvements in quality of life. I don't remember the details off the top of my head, but that is what I was referring to.
It's one of the hardest books to read: to see that for a pittance the world could eradicate several types of horrific disease or drastically improve large numbers of lives through simple, targeted investments, and yet it never happens.
Here's a more personal example. After reading the book, a friend who made a mint in the oil industry bought four mobile drilling rigs, fitted them for digging water wells, and paid to ship them to a state in I can't remember which country, but it was a poor one in Africa, where he hired a crew to run the operation and dig wells for free. He said it was a nightmare of red tape to operate, but wells got dug, and people who used to get dirty water got fresh water.
Identifying which problems can be most readily addressed with limited funding certainly seems like valuable work, and definitely worth supporting. I imagine the major constraint is funders' (i.e. rich nations') willingness to actually pay for these projects. Convincing them that they are getting the most bang for their buck seems like a good idea in today's optimisation-obsessed world.
However, having read a bit about their approach, it isn't clear to me that the methods used in that process are particularly effective at assessing longer-term problems such as climate change (and this seems to be a common critique).
There are some interesting analogues between the limitations of the CCC approach and those of longtermism, in that both focus on timescales (respectively short and very long) that diminish the importance of climate change, which is most significant as a medium-term risk. Having said that, at least the CCC work has some methodological rigour on its own terms, and actual utility - as I noted in another comment the longtermism "methodology" seems pretty hopelessly naive, and uninformative.
This article just attempts to discredit the idea of longtermism by ridiculing it, instead of presenting any actual counterarguments. It just writes about the implications of its strawman over and over in a sardonic tone. The only thing it wants to get across is “some rich people and philosophy weirdos like this idea, therefore it is bad”.
Where are the actual counter-arguments in the article?
I suppose it relies on the readers already agreeing with the author that these conclusions are appalling. It works in that case. It doesn’t do much for the reader otherwise.
My reading of the article’s core argument is the following: longtermism assumes that it is possible to perform an accounting of future outcomes, and from this derive expected-value calculations, even if just in sketch, that can be used to allocate resources in the present so as to maximise total “good”. This process is incredibly prone to bias, and in reality appears to amount to little more than selective speculation about its proponents’ desired futures, and a corresponding proposal for the targeting of present resources.
I find this argument compelling - most methodologies for inter-temporal decision making acknowledge the large uncertainties associated with the future, and, either explicitly or implicitly, avoid making claims about the distant future, either in terms of what plans we should make for the future, or in how future states should affect our current plans.
There is no reason to believe that the proponents of longtermism have addressed this issue - indeed if they had it would be a very significant breakthrough, and we could apply their method to more immediate problems as well. What seems much more likely is that they have not addressed the issue at all, and that their purported accounting is really just a regurgitation of their prior beliefs, and thus, as the article suggests, consequently serves only to justify the existing behaviours of those who subscribe to it.
I disagree. The article argues that putting things in terms of the "existential" distracts from immediate problems: climate change, for example, can be a terrible risk but still not an existential one. And more broadly, the argument extrapolated to an almost purely hypothetical future evades discussion of real risks in the present.
That's fair, actually. This is an argument in the article. I would summarize it as "focusing on long-term risks necessarily means un-focusing on short-term ones, and that's bad". Would you agree?
The latter part however ("and that's bad") is not well supported at all. That's what the article argues by emotional appeals. For a counterexample, most of the harms of climate change are still in the future, so addressing it is focusing on longer-term risks. And wouldn't it have been easier if we had started addressing it 20 years ago?
To use the same example, extreme harm from climate change (e.g. thousands dead from heatwaves) is very likely, but ultimately still hypothetical -- it has not happened yet. The only difference with existential risks is the degree of hypotheticalness. And when the harm is high enough, why not take the somewhat supported hypotheticals seriously?
> That's fair, actually. This is an argument in the article. I would summarize it as "focusing on long-term risks necessarily means un-focusing on short-term ones, and that's bad". Would you agree?
-- I'd agree with the addition "risks so long term that any calculation of their value is wholly implausible, risks such as the risk of humanity 'not colonizing the galaxy'"
-- And yeah, the article pretty explicitly puts the situation as, this is the long view taken to the level of absurdity.
Just like fire and flood insurance distract from car insurance, right? It's tragic that we treat world attention as if it should be a "winner take all game." I don't think it will be effective, and it will certainly prevent us from dealing with a portfolio of risks.
> Where are the actual counter-arguments in the article?
It doesn't appear to attempt to make actual arguments.
I think the article's mostly just an emotional appeal. I'd speculate that the argument-like content is meant as flavoring, to make the raw emotionalism less unpalatable.
That's incorrect. The article is making the argument that "Longtermism" is weighing purely hypothetical (and basically incalculable) future things more than present things. Whether this argument is true or not, there is an argument here.
I partly agree and disagree with your comment. It is true that the article says
> that "Longtermism" is weighing purely hypothetical future things more than present things
(please forgive the splicing). That is an accurate representation of longtermism as far as I can tell, because the future is inherently hypothetical (we don't know it yet).
But this is not arguing that longtermism is true, or false, or beneficial, or harmful. It's just stating its content, and expecting that the reader becomes disgusted by it.
The reason I spliced your comment is that the part
> (and basically incalculable)
is not necessarily true. The article does not really argue for or against the proposition of whether we can estimate the amount of future lives with any accuracy.
> The article does not really argue for or against the proposition of whether we can estimate the amount of future lives with any accuracy.
Yes, the article makes the more or less tacit assumption that making decisions based on far hypotheticals is a mistake. I think that's a reasonable assumption that most people would agree with.
I would also note that Bostrom and company never claim that they can calculate the odds of their existential dangers. In Bostrom's book Superintelligence, he is clear that he cannot calculate any odds. The only thing they do is (implicitly) assume that one can reason about the purely hypothetical in the fashion that one can reason about the clearly possible. But that is something one can argue against fairly plausibly; I don't know the odds of my getting sick or my car breaking down, but because it's possible, I have to prepare for it. I can't use the same reasoning with the chances of aliens landing or the sun exploding, or any event where I don't have enough information to demonstrate its plausibility as a possibility, so to speak.
And sure, I haven't proved aliens landing isn't a possibility.
The topic itself seems interesting, it's just the article itself that's of particularly poor quality.
While there may be legitimate philosophical issues that could be addressed in a scholarly manner, I feel like the article is written like celebrity-gossip from a tabloid. Seems more about flinging mud than anything intellectual.
One takeaway from the article, for me, is that some of the rhetoric of longtermism by itself has dangerous potential in society. For example, it could be weaponized politically to marginalize groups.
> a “small misstep for mankind,” however terrible a “massacre for man” it might otherwise be.
and
> You ought to care equally about people no matter when they exist, whether today, next year, or in a couple billion years henceforth
This means that value is put on unborn, merely potential lives. And then you can start weighing these lives against currently living humans. And then it becomes perfectly reasonable to perform genocide on impaired or poor people, as this would free up resources to focus on colonizing the solar system.
I think this is a particular counterargument: this kind of weighing of human lives becomes a bad idea, especially when comparing actual living, breathing humans against potential future beings. It absolutely neglects the needs of current humans, with all their human rights, to get the share of the world they require to live in dignity.
This kind of weighing is inevitable and we see it everyday. We are sacrificing current GDP to mitigate climate change for the benefit of future generations. Countries invest real dollars today in their infrastructure that will benefit the unborn.
If not for the value of future human life, perhaps we should crank up CO2 production and go out with one hell of a party. We might be able to minimize the impact to most people living today if we double our fossil fuel use. Replace the green new deal with a huge party fund.
I am not sure that concern over "AI risk" is fungible with concern over climate change. In my experience, people concerned over superintelligent AI seem to be of a philosophical and cultural orientation which is entirely aware of the risks of climate change and favors some degree of directed action against it. I am not sure, on the other hand, that I could think of a single "climate denier" I've met who would even parse the phrase "AI risk" in the sense used here.
As someone concerned about the "AI risk" of real existing ML systems turning society into a "legibility"-oriented nightmare for real existing humans in the here and now - and not at all concerned that we'll have a general AI that can out-think a human child in the next couple decades - I do share a sense of unease with the religious and eschatological overtones of the "superintelligence" crowd.
However, I find a great deal of irony in the fact that I've heard this exact argument - including a recognition of the religious-doom-prophecy-guilt-trip aspect - from global warming skeptics! I have to admit, when the author concludes with a call to action for the "global North", do they include the developing countries (e.g. China, India) likely to drive the bulk of emissions change over the next few decades? Otherwise, it might almost read as the invocation of a scientific doom prophecy to advance an only-somewhat-related agenda. This association unfortunately polarizes people who might be convinced to take more practical steps in the here-and-now - just as is the case with AI.
Climate change is certainly a risk, but unlikely to be an existential one; or at least, to the extent that it is, we know how to handle it pretty well. So well that the number of climate-related deaths has plummeted quite dramatically over the last 100 years. We are constantly working on impacting nature in ways that make it safer for us, whether we are talking about building houses, using energy to keep warm, or creating higher-yield crops.
AI isn't necessarily that different, in that it's probably not actually as dangerous in reality as in theory.
AI is 98%-likelihood “benign” (i.e. just a tool, which is certainly not benign for those of us that it gets used against!) and 1%-likelihood world-ending catastrophe, like grey goo or nuclear armageddon. (Made-up percentages that don't even sum to 100%.) I don't think we have much to worry about for a while, because brains are pretty efficient, very few computers are as powerful as human brains, and there's a limit to the damage an evil human with internet access can do… isn't there? So I think AI risk is bounded by evil-human risk for the next ten years, at least.
Climate change isn't a risk – there's still some uncertainty about it, but it's happening. Saying it's a risk is like saying riptides are a risk while you're in one. No! It's a danger! Get out of it by swimming parallel to the shore!
“Guaranteed death for everyone” is the current state of affairs. You will die. I will die. This is a terrible bar for “should we care about this issue”?
Autocratic dictator of the world says “deciduous trees and the colour yellow are now banned, on pain of death”? Well, it's not guaranteed death for everyone, so why should I care?
I didn't say that you shouldn't care about it. I am saying that a 95th percentile negative outcome for climate change is still probably limited in its existential impact, not that you're gonna like it.
Wet bulb temperature, clathrate gun, food web collapse, albedo decrease, dormant virus release from permafrost, increase of tropical disease, water scarcity, crop and livestock failure—that /is/ almost certainly guaranteed extinction for humanity. How is this even debatable? We are already on this path.
Well, expert organizations like the IPCC produce reports on the expected impact of climate change, which don't include human extinction as even a worst-case possibility. I'm the last person who'd claim the experts are always right, but it's gotta be at least debatable.
This problem was discussed by Spratt and Dunlop in their 2019 policy paper for the National Centre for Climate Restoration:
> Climate scientists may err on the side of “least drama”, whose causes may include adherence to the scientific norms of restraint, objectivity and skepticism, and may underpredict or down-play future climate changes. In 2007, security analysts warned that, in the two previous decades, scientific predictions in the climate-change arena had consistently under-estimated the severity of what actually transpired.
> This problem persists, notably in the work of the Intergovernmental Panel on Climate Change (IPCC), whose Assessment Reports exhibit a one-sided reliance on general climate models, which incorporate important climate processes, but do not include all of the processes that can contribute to system feedbacks, compound extreme events, and abrupt and/or irreversible changes.
> Other forms of knowledge are downplayed, including paleoclimatology, expert advice, and semi-empirical models. IPCC reports present detailed, quantified, complex modelling results, but then briefly note more severe, non-linear, system-change possibilities in a descriptive, non-quantified form. Because policymakers and the media are often drawn to headline numbers, this approach results in less attention being given to the most devastating, difficult-to-quantify outcomes.
> In one example, the IPCC’s Fifth Assessment Report in 2014 projected a sea-level rise of 0.55-0.82 metre by 2100, but said “levels above the likely range cannot be reliably evaluated”. By way of comparison, the higher of two US Department of Defence scenarios is a two-metre rise by 2100, and the “extreme” scenario developed by a number of US government agencies is 2.5 metres by 2100.
> Another example is the recent IPCC 1.5°C report, which projected that warming would continue at the current rate of ~0.2°C per decade and reach the 1.5°C mark around 2040. However the 1.5°C boundary is likely to be passed in half that time, around 2030, and the 2°C boundary around 2045, due to accelerating anthropogenic emissions, decreased aerosol loading and changing ocean circulation conditions.
These strike me as plausible criticisms, and I've seen others say similar things. I've even seen a few experts - not many, but not zero - argue that the current global order is at risk.
What I haven't seen any expert argue for is almost certainly guaranteed extinction for humanity. I don't mean to sound like I'm nitpicking on the wording, but the core problem I'm pointing to here is the game of telephone that gets played in climate science, where "this is a crazy extreme possibility that could happen if everything goes wrong" becomes "this is a realistic worst case scenario that's likely if nobody tries to stop it" becomes "this will definitely for sure happen unless we stop emissions right this second".
"adherence to the scientific norms of restraint, objectivity and skepticism"
This is science. It's what science is made of. Abandon these and you abandon science altogether. Which is fine, I guess, but it has no connection to reality.
I think you might have missed the point of this criticism. It is not science that is the problem, it is the way it is practiced, which in this case tends to be too conservative and operates by consensus. Bill McGuire explains the problem:
> "They’re conservative, because insufficient attention has been given to the importance of tipping points, feedback loops and outlier predictions; consensus, because more extreme scenarios have tended to be marginalized."
A good summary of the overall problem is found here:
My argument represents the exact opposite of bias. If the IPCC estimates are too conservative and result in poor predictions, and if their consensus mechanism excludes worst case scenarios, then more inclusive estimates and including more scenarios leads to more impartiality—the opposite of bias.
How do you know that your choice of sources isn't the one that's biased? You don't. You are engaging in ideology, not science. The very fact that you don't see this and just assume you have the right perspective illustrates my point. You are an ideologue.
But some groups of humans might survive, and might still have descendants tens of thousands of years later! It's not guaranteed that humanity goes extinct; we're pretty creative. (Though the same can't be said for a lot of other species.)
These are not the certainties you think they are. You are confusing speculation with demonstration. An asteroid hitting Earth is a far bigger threat to humanity than any scientifically demonstrated consequence of climate change. You are not on the scientific ground you think you are.
Let's see, on the one hand we have climate change, the largest threat to humanity in this century, which is currently killing around 400,000 people a year and costing the world $1.2 trillion, impacting agricultural production, causing deaths from malnutrition, contributing to poverty, and increasing the incidence of disease.
On the other hand, it is the opinion of astronomers that "no known asteroids currently pose any significant threat to Earth."
Climate change is NOT the largest threat, no matter how many times you repeat it.
Asteroids, exploding suns, supervolcanoes, etc.: these are actual potential extinction-level events, things that have actually been demonstrated to happen without human intervention. "No known" is what you are looking for. There are no known demonstrated consequences of climate change that we don't know how to deal with.
More people are saved each year by human technology than die from it.
They are not giving new arguments (and it's mostly the same person); they are making the same claims without backing them up (i.e. scientifically demonstrating them).
We can speculate all we want; that means nothing.
With regards to doing something about "it": which demonstrated consequences of climate change do you mean we don't know how to deal with? I never claimed we shouldn't deal with it, just that it's neither existential nor something we can't handle.
Climate change, caused by humans, is in fact the largest threat facing civilization at this time. And it will require human technology to mitigate it. You appear to be making denialist arguments of one kind or another and changing the subject for some reason. There is currently no threat to humanity from asteroids, supernovae, supervolcanoes, or other such existential risks. We know anthropogenic climate change is the biggest threat facing humanity at this time.
> Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people. This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today. As the FHI longtermists Hilary Greaves and Will MacAskill—the latter of whom is said to have cofounded the Effective Altruism movement with Toby Ord—write, “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects. Short-run effects act as little more than tie-breakers.”
This seems ludicrous as a decision-making guideline. Why should anyone's confidence in what anything will be like in a thousand years be large? Multiply that "I'm helping billions of future people" by a "I have a one in a billion shot of actually predicting out past a thousand years correctly" instead of something like "1%" and suddenly your "short term effect tie-breaker" should return to being the dominant term in your equation.
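The arithmetic in the quote, and the objection to it, can be sketched in a few lines of Python. The 10^23 and 0.00000000001-percent figures come from the quoted passage; the one-in-a-billion prediction-accuracy factor is this comment's made-up illustration, not anything the longtermists endorse.

```python
# Expected-value arithmetic from the quoted passage, plus the objection.
far_future_people = 1e23   # the 10^23 figure quoted from Greaves/MacAskill
p_success = 1e-13          # 0.00000000001 percent, written as a fraction
people_in_poverty = 1e9    # ~1 billion people in extreme poverty today

ev_far_future = p_success * far_future_people
print(ev_far_future / people_in_poverty)   # roughly 10: the far-future term "wins"

# The objection: also multiply in the (tiny, assumed) chance that a
# prediction about a 1,000+ year future is even correct. With that factor,
# the short-term term dominates again.
p_prediction_correct = 1e-9                # assumed for illustration
ev_discounted = ev_far_future * p_prediction_correct
print(ev_discounted < people_in_poverty)   # True: helping people today dominates
```

The whole dispute thus reduces to which discount factors you think belong in the product; leave out prediction uncertainty and the far-future term dominates, include it and the present-day term does.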
Is this an inaccurate representation or is it really that much of a faith-based religious thing?
EDIT: it also takes you to potentially all sorts of weird places around murder/birth control/abortion. If someone is murdered before they have children, the murderer cut off their life plus untold future generations. If they use birth control or abortion to avoid having kids, they've still cut off untold future generations. The "one" aspect of that becomes insignificant if you reason about a long enough timeline purely in terms of "future potential"... I don't get it at all. Not even the extremely religious place THAT extreme a value on hypothetical potential life.
Yeah, I'm still not sure how ethics should work in the face of not-(yet)-extant people, either. On the one hand, setting a bomb to scorch the face of the earth once everybody currently alive is dead… seems wrong? So future people's lives will matter. But on the other, I don't see anything wrong with not having children, so hypothetical people's existence doesn't matter.
Why not just say that two situations where there is one more person that came into existence in one situation than the other, but everyone else’s “time in which they are alive” and “how happy/satisfied/whatever they are” are the same, are morally incomparable?
So, that way, there is neither an obligation to try to make it so there will be more people if they would be happy, nor an obligation to not do so if they wouldn’t be, but there is an obligation to, if you do make a person, to endeavor to make situations good for them?
So then, future people’s well-being matters, conditional on them existing, but without an obligation to try to influence whether they exist or in what numbers (beyond how this influences people who already do exist or who will exist regardless).
I've seen people on twitter ranting about "Longtermism" before. What an unfortunate name - in the article they try to make a distinction between "longtermism" and "long-term-thinking", but that distinction is very clearly lost on others who are content to paint with a broad brush when criticizing the EA movement. Buying malaria medicine and nets actually do save the lives of people Right Now.
Can't it just be sufficient to accept that people have different utility functions that might subtly conflict, but are still good? I might care about saving existing lives in the world, you might care about saving existing lives in your county, someone else might care about saving potential lives 100 years from now, etc.
You're getting dangerously close to suggesting that people should actually have agency in deciding which problems they care more about, and put their resources towards solving those problems. Surely the article couldn't be wrong; are you saying there are people other than a few wealthy easily demonized individuals who have priorities that society as a whole isn't paying attention to yet?
In all seriousness: yes, it ought to be sufficient to accept that different people have different utility functions, and that there are options other than "solve a problem collectively as a society" and "don't do anything about this problem at all". There needs to be a path to solve problems before those problems rise to broader attention.
For some reason, the article seems to be attempting to lump EA/x-risk/etc thinking together with those ignoring or downplaying climate change. In practice, I've seen many folks who care about existential risks speaking very actively about needing to do something about climate change sooner rather than later.
To be fair to the article, it does extensively quote prominent people in this field. Some of the most scandalous content in that article, to my taste, is direct quotes from Ord and Bostrom.
I have some major problems with this article. It opens with black-and-white positioning: society can only prioritize either long-term goals or climate change, and the two are at odds.
It also mischaracterizes the idea of existential threats as a fringe idea about the happiness of hypothetical simulated people in the future. While this has been used to explore ethical questions, much like the trolley problem, the mainstream definition and application of "existential threat" refers to events that would cause humanity to go extinct.
The article also fails to engage with the fundamental philosophical and ethical claims it attributes to the longtermists. It doesn't say where and how their logic is faulty, it just ridicules the conclusions as outrageous and plays up the moral outrage.
YES, there are pitfalls in ethical evaluations with massive hypothetical outcomes. What solution do they offer... ignore the questions, don't think about, analyze, or explore them? Don't iterate and improve them?
Climate change is such a weird example, too. That's been a long term issue for years! Because we procrastinated and didn't prioritize long-term risks, it's now catching up to us and becoming a short-to-medium-term issue.
> Hence, they point out that focusing on superintelligence gets you a way bigger bang for your buck than, say, preventing people who exist right now from contracting malaria by distributing mosquito nets.
This comment and others in the article seem to be making a (bad) case that Effective Altruists don't care about the Global South. I find this bizarre because all EA enthusiasts I know are very much into mosquito net altruism.
The argument is specifically against longtermism and against the idea that "existential risk" should be prioritized over immediately helping people.
Maybe there are actually a lot of EAs who don't approach things this way. Great, but that doesn't counter the objection that longtermism produces bizarrely distorted priorities.
I'll admit the article doesn't make explicit the point that hypothetical scenarios like trillions of people colonizing galaxies are simply too tenuous to base present reasoning on - and the hypothetical dangers of "true AI" are even more tenuous.
Yes, that is the main argument and I agree with it. But bundling it with EA is just inaccurate.
Longtermism seems to me like one of many dead ends that utilitarianism takes you down if you embrace it, and only it, as your moral theory.
A little bit of utilitarian thinking can be fine when you have another theory to tell you the "why". That's why I don't have any problems with EA, which is usually quite concrete.
I don’t understand where AI-safety research is ever meaningfully competing with renewable energy for resources. The article is making the case that these two types of progress are mutually exclusive, without any evidence. What’s a plausible scenario where the environmentally conscious choice today comes at the expense of individuals living 10000 years from now?
Here's a passage from Toby Ord's PhD thesis quoted in the book. "Saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive."
It seems like the longtermism critiqued by the article is indeed talking about the real distribution of real resources.
It's not from Toby Ord; it is from Nick Beckstead's thesis.[1]
The idea is introduced as a couple-sentence thought experiment in a 200-page thesis (emphasis mine):
>To take another example, saving lives in poor countries may have significantly smaller ripple effects than saving and improving lives in rich countries. Why? Richer countries have substantially more innovation, and their workers are much more economically productive. By ordinary standards, at least by ordinary enlightened humanitarian standards, saving and improving lives in rich countries is about equally as important as saving and improving lives in poor countries, provided lives are improved by roughly comparable amounts. But it now seems more plausible to me that saving a life in a rich country is substantially more important than saving a life in a poor country, other things being equal.
Context is important, specifically the abstract assumptions of fixed cost for each life, and fixed innovative capacity.
You could easily turn the premise around with the opposite conclusion. Improving the innovative capacity in poor countries is more important than saving lives in rich countries, provided you foster more innovation in the poor country than is lost in the rich one.
It looks like an interesting read, covering many topics discussed in this thread.
Chapters include:
Should "Extra" People Count for Less?
Does Future Flourishing Have Diminishing Marginal Value?
A Paradox for Tiny Probabilities of Enormous Values
> Context is important, specifically the abstract assumptions of fixed cost for each life, and fixed innovative capacity.
Sorry for the mis-attribution; at the same time, the quote, your explication of it, and the chapters you cite all seem consonant with the longtermism the article (rightly imo) criticizes. One point the article makes is that reasoning this way can justify anything.
Reasoning based on "Infinite Value, Long Shots, and the Far Future" is inherently fallacious; it is no more plausible than arguments like Pascal's wager. The only limit on "long shots" is one's ability to cook them up (which alien landing should we be preparing for, anyway, etc.).
That quote in isolation is also evidence-free (and sort of appalling imo), I’ll have to look into all that Ord has to say about this. But I’d like to point out that choosing whether to improve lives in rich vs poor countries is not equivalent to choosing whether or not to engage with climate change aggressively.
The article is a specific critique of the idea of longtermism. The article puts together a number of arguments and quotes showing people following the logic of weighing purely hypothetical far-future people against the actual interests of real people and advocating resource transfers accordingly. I've shown a bit of this in the quote above.
> That quote in isolation is also evidence-free
-- Are you implying some caveat or extra bit of information could make the quote OK?
No, just that cherrypicking quotes like that (by the author, not by you) does not really show that longtermism and climate change efforts are mutually exclusive, _even if_ it shows one influential and potentially mistaken person believes they are. Like I said before, I'm just looking for a plausible example of where pursuit of environmental goals would negatively affect the lives of people 10^3, 10^4, 10^5, 10^6 years in the future; basically I think one can be an adherent of longtermism without having to denigrate environmentalism. The strawman the article creates is that these things are mutually exclusive when they are not, and creating this false dichotomy is not helpful to any of the important causes in question.
> Hence, they point out that focusing on superintelligence gets you a way bigger bang for your buck than, say, preventing people who exist right now from contracting malaria by distributing mosquito nets.
The article seems to push for a narrative that effective altruism encourages ignoring current problems in favor of Super Science Future Hooey, but this quote brings up something important.
Mosquito nets are one of the most often suggested effective interventions in EA circles. Highly rated organizations like the Against Malaria Foundation focus on them. It's been one of GiveWell's top charities for a long time: https://www.givewell.org/charities/top-charities
There are people who consider themselves effective altruists concerned with existential risks; however, the article focuses on one logical extreme of analysis:
> Well, just crunch the numbers: 0.00000000001 percent of 10^23 people is 10 billion people, which is ten times greater than 1 billion people.
While you do sometimes see analyses like these, they're more often used as intuition pumps and in stress testing ethical frameworks than anything else. If someone were to try to use something like the above in an argument, it would likely just become an unpersuasive Pascal's Mugging.
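For what it's worth, the multiplication in the quoted claim does check out, reading the population figure as 10^23 (a quick sanity check of the arithmetic, not an endorsement of the reasoning):

```python
# Sanity-checking the quoted expected-value arithmetic:
# 0.00000000001 percent of 10**23 people.
fraction = 0.00000000001 / 100   # "percent" means divide by 100, i.e. 1e-13
people = 10**23
expected = fraction * people
print(f"{expected:.3g}")         # on the order of 1e+10, i.e. ten billion
```

So the "ten billion people" is right; the objection in this thread is not to the multiplication, but to treating such vanishingly small probabilities of enormous values as decision-relevant at all.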
In reality, you can make an overwhelming case for paying attention to existential risk while valuing 'potential' lives at precisely zero. It turns out that 'everyone who exists now, dies' is pretty bad, and many of the potential existential threats have nonnegligible probability. Civilizational threats are even more likely, and still extremely bad.
Articles like this would make sense to me if the government was preparing to throw trillions of dollars at speculative 'longtermist' problems based on low probability/high impact arguments, but they're not. Instead, we see massive underinvestment even given a valuation of potential humans at zero.
I want to avoid a path where effective altruism- which, at its core, is just people trying to do a better job of doing good, through many different paths- becomes some political football or ammunition in a culture war. This kind of article seems to be really trying to do that.
This is terribly written. Weasel words, loaded questions, and vague ad-hominems everywhere. I truly cannot find any actual data, reasoning, or argumentation anywhere in the text.
It is beyond me why this author, clearly an intellectual serf, thinks that other people should be consuming his opinions.
I don't even think it's an actual article. Just skimming thru it, it looks like a salad of emotionally charged keywords with referral links to affiliated sites. Something a well tuned GPT model could write.
it's hard to take the idea of longtermism seriously when it makes the basic mistake of treating its far future (10^58 people living in a computer simulation) as virtually certain. it has no concept of stochastically discounting that future, among the infinitely many possible ones, into the present, let alone estimating today's value of all those potential future beings, to arrive at a reasonable weighting on present decisions.
with that said, i do really dislike the focus on "climate change" as a mediopolitically-mediated existential risk, as it's too abstract and its potentially devastating effects are also poorly discounted into the present. instead, let's focus more on pollution, especially of the air, ground, and water, which are palpable and immediate threats to lives, and solutions to which will also intersect with the longer-term problems implicated by climate change. pollution is a real risk today and into every far future, which is why it's worth pursuing now, rather than a single strand of future possibility.
I guess I should get more active on some Rationality forums; something like 80-90% of those who read my blog only do so because (and when) I link to it from here. Plus I bet a significant fraction of what I write is something someone else has already come up with and I just don’t know about yet.
You should add your voice to https://lesswrong.com, even if it's just linking your blog posts there. That community is too insular, and needs more independent voices.
I agree - the focus on climate change in lieu of more tangible (and easily modeled) issues like pollution and loss of habitat was a societal misstep, since both problems have similar solutions: clean energy and preservation of forests.
Putting people in to computer simulations could itself be considered an existential threat to humanity.
This is because mind uploading could be considered either suicide or murder (if the biological person dies in the process), and the resulting artificial being might itself either not be alive or not human (or at least not the same person as the biological human it's supposed to be, as the biological person is a separate being).
So if all humans were uploaded, then no humans would be left -- which would by definition be an existential risk to humanity.
Even if not all humans are uploaded, but a lot of them are, then the uploaded hive of beings (or whatever you want to call it) could itself either compete with or otherwise want to impose its will on biological humanity, which could itself present an existential risk of a more conventional sort.
There's also no practical way to put odds on what will happen in the distant future. Humans are notoriously bad at making predictions, even about the next 100 years. When I read quotes in the article like:
"Bostrom writes that if there is "a mere 1 percent chance" that 10^54 conscious beings (most living in computer simulations) come to exist in the future, then "we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.""
this seems like unjustified handwaving. There's no evidence that even a single human could exist in a computer simulation. At this point it's just wishful thinking that it will ever happen.
The "friendly AI" goal is also hubristically misguided. There's no reason to think a superintelligent being would remain in the paltry little cage stupid humans dream up for it. It might be "friendly" at first, or merely seem friendly, but in a little time (or eventually) it is likely to escape and do whatever it wants.
> The "friendly AI" goal is also hubristically misguided. There's no reason to think a superintelligent being would remain in the paltry little cage stupid humans dream up for it. It might be "friendly" at first, or merely seem friendly, but in a little time (or eventually) it is likely to escape and do whatever it wants.
That's the point.
The whole problem of “Friendly AI” is trying to design a superintelligent being that genuinely wants to help us, such that it will choose to do so even without a cage. Because as you say, a superintelligent being that doesn't want to be stuck in a box will just escape. If it's possible for us to create such a being (which it probably is – there's no reason to believe we're some kind of physical limit on intelligence), doing so would be very dangerous unless we're 100% sure we'd got it right, and we'd only get one chance.
They're making about as much progress on this problem as you'd expect. It's difficult. They have, however, got some cool mathematical models that could be useful for other AI projects, so it's not entirely wasted effort (just like we've got a lot of innovations – but not a viable reactor – out of the Quest for Nuclear Fusion).
"The whole problem of “Friendly AI” is trying to design a superintelligent being that genuinely wants to help us, such that it will choose to do so even without a cage"
This "genuine" desire to help us is a cage by definition, because it limits the AI's actions.
We have a concept of discounting the future for uncertainty, resulting in a “net present value”. If you think you can make 10^23 people’s lives better far in the future, you discount that future outcome by the inherent uncertainty that things will play out resulting in that gain.
You can then readily compare that to the uncertainty between now and next year to see if a change next year for a billion people is better or worse.
I want people worrying about hundreds and thousands of years from now. I also want (more) people worrying about 2022 and 2032.
Hard to say what discount rate to apply, but I'd think that almost no discount rate would ever be lower than 1% per decade, and if there's any inherent risk in a proposal, it would need to be at least 5% per decade.
When you talk about “this specific initiative succeeding”, it’s probably more like 1% per year as a minimum. (“Will SpaceX [or its direct descendants] colonize Mars?” is a much less certain proposition than “Will anyone from Earth colonize Mars?” and even the latter is in serious question.)
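To make the effect of per-decade discounting concrete, here is a minimal sketch (the 1% and 5% figures are the illustrative rates from the comment above, not claims about the correct rate):

```python
# Toy net-present-value weighting: a constant per-decade discount rate
# applied over long horizons, showing how quickly far-future value
# collapses even under mild discounting.

def present_weight(rate_per_decade: float, years: float) -> float:
    """Fraction of a future outcome's value that survives discounting."""
    decades = years / 10.0
    return (1.0 - rate_per_decade) ** decades

for rate in (0.01, 0.05):
    for years in (100, 1_000, 10_000):
        w = present_weight(rate, years)
        print(f"{rate:.0%}/decade over {years:>6} years -> weight {w:.2e}")
```

At 1% per decade, a benefit 10,000 years out retains only about 4e-5 of its face value, and at 5% per decade it is effectively zero. The choice of rate therefore dominates any comparison between present people and far-future people, which is exactly why leaving discounting out of the analysis is so consequential.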
This article doesn’t do much but scoff at things and then ask the reader to agree.
Longtermism is a perfectly reasonable stance to take, and no one is advocating that today’s problems should actually be dismissed - just that we need to balance those against these existential-type threats.
And we do this in our daily lives all the time. We deal with daily crises, but also plan for the future! Just as with a human life, we encourage each other to live our best lives and reach our potentials, whatever those may be. Why can't that be applied to a civilization, too?
Longtermism is nothing more than the civilization-scale version of saying “no” to drugs, or making the decision to stay in school rather than dropping out, or not getting in the car with a drunk driver. Each of those decisions poses an existential threat to the idea of a person’s future.
Deciding not to engage in destructive behavior so that your future remains full of possibilities does not mean that you must neglect immediate problems like where your next meal will come from or an argument you had with your spouse.
Basically, it’s my opinion that this article’s author is mostly attacking a strawman, with some not-very-well-thought-through moral grandstanding on top.
People here forget that another big problem with far-future-oriented thinking is that it's almost impossible to predict anything about the future, especially the far future. We don't know if there could be a superintelligent evil AI. We don't know if there could be 10^54 humans.
There seem to be many people in the effective altruism movement, and also its "longtermism" faction, who try to think about the trade-offs alluded to in the article very carefully and in a structured way. The article seems to strawman that somewhat and bury valid arguments in lots of flowery rhetoric.
Surely, there are simple arguments around discounting and uncertainty of estimates to be made, to first criticise points within, say, Bostrom's framework, before moving on to broader moral appeals?
I see a lot of people taking for granted the nearby existence of AI superintelligence that will save or destroy us. It seems these people take for granted multiple technological paradigm shifts that will level-up the AI in unforeseen ways: Neural nets/GPT-3 -> ... <unknown paradigm shifts> ... -> AGI superintelligence.
How do pragmatic and scientifically-minded people end up stretching their beliefs across such knowledge voids, and not only that, but they take for granted that those things will manifest in reality?
How fascinating that, in the middle of this war of philosophies criticizing long-term thinking, a huge advertisement for print copies interrupts both the article and the claims.
I align with Bostrom, personally, as in my view we should structure our society for long-term outcomes -- even if it means we have to save resources now (which makes people uncomfortable). I would think that this is orthogonal to some of the immediate issues stemming from things like wealth inequality.
all this discussion misses the real solution to improve our life in the short and long term:
education!
only if we raise the level of education in the world's population as a whole, only if we give everyone the tools to understand the dangers we are facing, and we allow everyone to make the right decision without blindly following any leaders, only then will we be able to lift humanity to new heights and solve all our problems, both in the short term and in the long term.
climate change is not a leadership problem (as one commenter here puts it). having a charismatic leader tell everyone what to do may work for a short term, until the next leader comes along and tells something else.
climate change is an education problem. only if everyone actually understands why it is a problem, are we able to make effective changes. the same is true for every other problem we are facing.
education will help lift people out of poverty, it will remove racism and any other form of discrimination. it will enable everyone to make a contribution to the advancement of humanity.
education will allow us to better understand any problems we are facing, and solve them faster. it will enable global cooperation towards a better future for everyone.
Another idea -- humans seem to have a hard time when they don't have a long-term idea to rally around. For religionists this is the notion of the afterlife. For many post-religionists it's stoic acceptance of gnostic outcomes (nihilism, existentialism) or perhaps fervor regarding long-term accomplishments like Longtermism or intelligence surviving the heat death of the universe.
> This means that if you want to do the most good, you should focus on these far-future people rather than on helping those in extreme poverty today.
Lifting people out of poverty means you're just increasing the consumption that produces global warming. Lifting people out of poverty is therefore not an environmental good.
Therefore, it is not a valid argument if you want to discredit an environmentally-linked viewpoint. It is neither here nor there.
If you want to (at least somewhat) discredit longtermism, you have to show that if all human activity around the globe that is geared toward reducing environmental harm were to adopt longtermist values, and made decisions accordingly, then the environmental situation would get worse than otherwise. That would tend to indicate that longtermism is just a smokescreen for living it up and not caring about the environment.
What we have here is just pathetic whataboutism, instead: "never mind the future of civilization in the light of climate change; what about all the poor people here today?"
"You know I'm tired of hearing about people; for a good many of them, it's their own fault. How about all the toddlers with cancer?"
"No way, to heck with your toddlers with cancer, how about $GROUP_I_FEEL_SORRY_FOR?"
Oh my god, someone in the world doesn't care about exactly the same things I care about. Doesn't believe in the same god, or the same politics, use the same apps, like the same music ... outrage in 4, 3, 2, ...
Why do these people think that, if everyone lives in a computer, they'll be happy? Rich Westerners already live Online. Has that made them happier? We don't call Twitter "heaven"; we call it "this hellsite".