Institute for Extinction Risk Shuts Down: What We Know
Image: AnnieCee via Wikimedia Commons
The Future of Humanity Institute has shut down. I hope this isn’t an omen. Let’s have a look at what happened.
The Future of Humanity Institute was one of the few places worldwide studying the risk of human extinction and, hmm, some other things. It was located at the University of Oxford in the UK until its demise earlier this year, which was announced last week.
The institute was founded in 2005 by Nick Bostrom, that’s the guy who believes, among other things, that we live in a computer simulation. More about him in a moment. Maybe the most impactful work to come out of the institute was putting artificial intelligence on everyone’s agenda as a potentially existential threat to the entire human species.
About 10 years ago, the Future of Humanity Institute became strongly linked to the Effective Altruism movement through the work of several philosophers, including Hilary Greaves and William MacAskill, who are also in Oxford.
Effective Altruism is a research area and community of practice which tries to use economic reasoning to find the most effective way of improving the world.
This sounds all well and good, but this line of thought later gave rise to the more controversial idea of “longtermism” that was strongly represented at the Future of Humanity Institute.
Longtermism says that your altruism should focus on benefiting the long-term development of humanity because, if all goes well, many more humans will live in the future than now. In this illustration, each grain of sand represents 10 million people. The green grains are those alive today, that’s about 8 billion, the red ones are those who lived in the past, about 110 billion. But that is just a tiny part of all the lives that are yet to come.
Unless we go extinct. And so, longtermists argue, it’s only rational to focus on all the many people who will be born in the far future. Those currently alive only matter insofar as they need to produce sufficient offspring to avoid extinction, but with 8 billion people on the planet we could spare a few billion.
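The grain counts in that illustration are quick arithmetic. A minimal sketch, using only the numbers quoted above:

```python
GRAIN = 10_000_000  # one grain of sand stands for 10 million people

alive_now = 8_000_000_000        # roughly 8 billion people alive today
lived_before = 110_000_000_000   # rough estimate of everyone who ever lived

print(alive_now // GRAIN)      # 800 green grains
print(lived_before // GRAIN)   # 11,000 red grains
```

Whatever future population you assume, even a modest one, quickly dwarfs those 11,800 grains, which is the entire rhetorical point of the illustration.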
In a 2019 paper, MacAskill wrote “for the purposes of evaluating actions, we can in the first instance often simply ignore all the effects contained in the first 100 (or even 1,000) years, focussing primarily on the further-future effects.”
Longtermism basically gives rich guys an excuse to pursue their future visions while ignoring the suffering of people around them. Not so surprisingly, Elon Musk has found longtermism very compatible with his way of thinking. Others have decried it as ‘anti-democratic’ and ‘anti-humanistic’ and many of the Effective Altruism crowd have distanced themselves from it.
The questionable moral value of longtermism probably didn’t make the future of humanity people popular with their hosting institution. But the major problem seems to have been that the institute ran into administrative problems. In their final report, you can read that they
“were affected by a gradual suffocation by Faculty bureaucracy. The flexible, fast-moving approach of the institute did not function well with the rigid rules and slow decision-making of the surrounding organization.”
They also write that “Several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received.”
According to Anders Sandberg who worked at the institute, one of the issues was that as university employees, institute members were not allowed to nap in their offices.
Then, “Starting in 2020, the faculty imposed a freeze on fundraising and hiring.”
And this basically spelled the death of the institute. But I have a hard time seeing how this was just about admin issues, because it suggests the faculty decided they weren’t getting much in return for their trouble.
Well, maybe in 2020, the future of humanity people were still hoping to put up camp at a different department. However, last year, Nick Bostrom drew unwanted attention to himself because of a text he’d sent 26 years earlier over an email list.
In this text, he lays out his opinion that black people are intellectually inferior. After that text surfaced, he wrote an apology which he, however, mainly used to elaborate how his views are so much more intelligent now than they were then. If the institute still had any hopes to muddle through at that point, this episode would have killed those hopes.
Bostrom’s pseudo-apology has the same spirit that you can also sense in their final report. It basically says: we are so advanced that those intellectually inferior normal people just cannot comprehend our greatness.
The Future of Humanity Institute should not be confused with the Future of Life Institute, that’s the brainchild of Max Tegmark and the place which published an open letter last year asking for a pause on AI development. That pause has of course not happened, instead everyone keeps on screaming and yelling how AI is going to kill us all and it would be funny if it wasn’t so stupid. But maybe one lesson to take away here is that even if humanity doesn’t have a future, life will continue.
Are we surrounded by dark energy? A spacecraft tetrad will look for it
Image: NASA Goddard Space Flight Center
Astrophysicists say that 95% of the matter-energy content of the universe is dark stuff. Either dark matter or dark energy. It’s supposedly all around us, but we can’t see, feel, or hear it. Shut up.
I am not particularly excited about most of the experiments looking for this dark stuff, but a few days ago I read about a new one that actually makes a lot of sense. Let’s have a look.
So this new proposal comes from a team of NASA researchers, and they are suggesting to use 4 small spacecraft that fly around in the solar system in the configuration of a tetrahedron. The idea is fairly simple: they want to very precisely measure the distances between the spacecraft to look for deviations from Einstein’s theory of gravity, right here around us.
This makes a lot of sense because, you see, we know from observations that this dark stuff makes itself noticeable at large distances: galaxies, galaxy clusters, the expansion of the entire universe. Whether it’s really some sort of stuff, or whether we have got something wrong with the law of gravity, the differences kick in only at large distances, and for some reason we can’t detect it on our planet.
But somehow these two things have to fit together. So nature must interpolate between this local Einsteinian gravity and whatever is going on out there in the cosmos. This means that there should be an effect in the solar system. It’s just that it’s so small that it’s hard to measure. This is why precision tests of gravity in the solar system make a lot of sense.
If you look at the explanations that have been proposed for dark energy or dark matter, like modified gravity, they all need some sort of interpolation between galactic and earth scales. They have a mechanism that makes those effects go away close to the sun. But they never entirely shut down.
Modified Newtonian Dynamics is somewhat awkward in that it doesn’t tell you exactly how the modification shuts down, it just has an arbitrary interpolation function. But there are other ideas, for example Chameleon fields that could make up dark energy. They have what’s called a “shielding mechanism” that suppresses these fields near heavy objects like our planet or the sun. But they don't go away. So you can measure them if you measure precisely enough. But how?
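To see why these effects are suppressed but never exactly zero near the sun, here is a minimal sketch of the Modified Newtonian Dynamics idea with one commonly used choice, the “simple” interpolating function μ(x) = x/(1+x), and the commonly quoted acceleration scale a₀ ≈ 1.2×10⁻¹⁰ m/s². The specific function and numbers are illustrative assumptions, not the proposal discussed in the text:

```python
import math

A0 = 1.2e-10  # MOND acceleration scale in m/s^2 (commonly quoted value)

def mond_acceleration(a_newton):
    """Solve mu(a/A0) * a = a_newton for the observed acceleration a,
    using the 'simple' interpolating function mu(x) = x/(1+x).
    This reduces to a quadratic with the positive root below."""
    return 0.5 * (a_newton + math.sqrt(a_newton**2 + 4 * a_newton * A0))

# Near Earth's orbit, the Newtonian acceleration toward the sun
# (~6e-3 m/s^2) dwarfs A0, so the fractional deviation is tiny
# (roughly A0 / a_newton, about 2e-8) -- suppressed, but not zero:
a_n_sun = 5.9e-3
print(mond_acceleration(a_n_sun) / a_n_sun - 1)

# In a galaxy's far outskirts, a_newton can fall well below A0,
# and the observed acceleration approaches sqrt(a_newton * A0),
# an order-one departure from Newton:
a_n_gal = 1e-13
print(mond_acceleration(a_n_gal) / math.sqrt(a_n_gal * A0))
```

The punchline is the first printout: the deviation in the inner solar system is minuscule, which is exactly why you need ultra-precise distance measurements, like the tetrahedron of spacecraft, to have any hope of seeing it.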