News, People, Organizations, Publications, Critiques

As noted in the History section of this website, the Bulletin of the Atomic Scientists stated: "[In] the 61 years since the Doomsday Clock’s creation, the risk of human extinction has received relatively scant scientific attention, with a bibliography filling perhaps one page. ... [I]t’s surprising that a topic of primary significance to humanity has provoked so little serious research. ..."

This page covers many publications, organizations, and people. First, some context is important.

The people and organizations analyzing and publishing about existential risks serve very important purposes:

  • Public awareness
  • Assessments of risk
  • Advice on what governments and organizations can do to reduce threats

However, there are sometimes issues:

  • There are disagreements on risk assessments. Biases must be considered, as well as simply questionable analyses.
  • There are disagreements on what governments and organizations should and can do.
  • Many active entities have little impact in the world. They may be too academic, or not very proactive. For example, they do a study, issue a report, and then move on to other things rather than follow up.

With some risks, such as synthetic biology, we simply cannot eliminate the threat no matter how much government and other organizational regulation there is, as discussed in other sections on this website. However, we might be able to regulate some entities and delay a catastrophe or extinction event, so we should try our best.

Unfortunately, while public awareness has benefits, it also gives ideas to bad actors and informs them of terrible things they can do, so public awareness can actually decrease security.

On the other hand, public awareness of space settlements as a solution is extremely important.

Unfortunately, there are some opinions and attitudes out there which are counterproductive:

  • Some of these criticize space settlement, as if we should first focus on trying to fix all problems within Earth's biosphere. My response is: "Why not both?" But some still simplistically criticize space settlement as wrong, with an attitude of supreme arrogance and populism. (They gain a lot of applause from others who see spending otherwise as in their own self-interest, or who don't want to see others leave them behind.)

  • Some academics and establishment people believe that we can regulate away these threats, or push them into the far future, using methods they have employed for other issues. (This is despite limited success for many other matters, and this one is an existential, terminal threat.)

  • Many of these people have lived their lives in overly protected elite circles, with little direct experience of other kinds of people and of business outside their usual circles. They have limited understanding of how the real world operates, which limits their assessments of situations: too much idealism, too little realism. While academically or institutionally accomplished to extremely high levels, they have poor competence regarding much of what goes on in the world beyond their own circles and echo chambers, and what's required to be successful outside of academia.

  • Many are just lazy and enjoy their easy, pleasant lives. They don't want to take on the difficult and unpleasant nitty-gritty details of threats to humankind, or risk criticism for taking on very controversial issues. Instead, they conform to the groupthink of their colleagues and stay within their comfort zone. They can mind their own business and keep doing things the same ways they've always done them, because that works for them personally, so why disturb a pleasant life in a nice position? It's easy to rationalize things away and return to bliss.

  • There are many egos and "relative status"-conscious people who "compete" with solution providers; e.g., their position of influence and status is as a policy advisor on regulations, not as one of the solution providers in space settlement. They promote their in-group versus their out-group, and take positions which promote their own self-importance.

  • Many people are reluctant to admit they were wrong, and back down from a position they already took and published statements about. Some seem like they will arrogantly be stubborn to the day they die.

Many counterproductive behaviors come from the human instincts of relative status and tribalism.

In reading detailed analyses of existential risks, it's best not to escape intellectually into various imaginative outlets as a way of avoiding the more difficult work on a realistic solution. We are also well past the point of deferring action pending further committees and studies ...

Based on many of these publications, it is important to get governments and other organizations to act on these threats:

  • to reduce and delay catastrophes and extinction threats, and
  • to work on space settlement.

Eventually, efforts to avoid extinction threats will fail, but they may be a key to delaying a catastrophic event long enough for space settlement to happen. Without these efforts, we are more likely to fail to settle space in time.

If you want to pursue solutions to the threat of human extinction, then go to my website or other places where you can actually get involved in making space settlement happen soon. There is no need to spend too much time studying existential risks once you understand enough of what those risks are and just how close we are.

Specific Organizations, People, and Activities

The Bulletin of the Atomic Scientists, with their Doomsday Clock, regularly publishes updates on advancements in synthetic biology and other threats, with an unusually good sense of urgency.

The Centre for the Study of Existential Risk (CSER) was founded in 2012 at the University of Cambridge. It is very academic, and I think often too academic. I have been following them, especially via their monthly email. Website:

A frequently cited "expert" is Martin Rees (aka Lord Rees or Sir Martin Rees), a British astrophysicist and cosmologist with a long list of publications, awards, and titles, including 60th President of the Royal Society. In 2003, he published a book titled "Our Final Hour" about ways humankind could destroy itself, and gave humankind a 50-50 chance of surviving the 21st century. Regarding his writings generally: while his analyses are interesting, I am not convinced by how he has weighed some of them, and more so relative to each other, nor do I see much realism in his solutions, to say the least.

In 2010, Rees was quoted in The Guardian as saying: "... It's hard to see any particular reason or purpose in going back to the moon or indeed sending people into space at all."

In 2012, Rees was a co-founder of the Centre for the Study of Existential Risk (CSER) at the University of Cambridge, noted above.

In 2018, he published another book titled "On the Future: Prospects for Humanity" on generally the same topic.

For an example of Rees' views, in an interview published in 2017, Rees stated:

"... But I strongly disagree with Elon Musk and my late colleague Stephen Hawking who talk about mass immigration to Mars. I think that’s a dangerous delusion because Mars will be a more hostile environment than the top of Everest or the South Pole, and dealing with climate change here on Earth is far more important than terraforming Mars." (Hawking had said we should colonize space to mitigate human extinction risks on Earth.)

People follow this guy because of his establishment credentials? Why does he persist in his position which is so obviously wrong?

(The only child of two teachers, Lord Rees has lived a long protected life in academia and the conservative establishment. I think he has little understanding of the wide range of bad actors far outside of his Ivory Towers. He is an opposite sort of personality to Elon Musk.)

Another academic institution which sometimes covers existential risks is the "Future of Humanity Institute" at Oxford University, led by Professor Nick Bostrom. Notable there is Senior Research Fellow Toby Ord, an Australian, who published the 480-page book "The Precipice: Existential Risk and the Future of Humanity" in 2020. He figures a 1 in 6 chance of existential catastrophe over the next century, including 1 in 10 due to artificial general intelligence. What a way to go into the historical record ...

As a Wikipedia article on the book put it in 2023, "Ord predicts that the Precipice is likely to last no more than a few centuries, as humanity will either quickly develop the necessary self-control or succumb to the rapidly accumulating risk of catastrophe."

Again, highly academic and impressive in many detailed analyses, while being so obviously wrong in the end.

The Global Challenges Foundation, established in 2012, is often mentioned, and has good goals. "Humanity currently faces multiple global catastrophic risks that need to be addressed. To address these challenges, the Global Challenges Foundation is dedicated to promoting the creation and development of improved global decision-making models aimed at reducing and mitigating these risks." Unfortunately, I find them to be rather slow and bureaucratic, and too dependent on government institutions.

The Future of Life Institute, founded in 2014, is more impressive to me, though I also see shortcomings. See for yourself.
"Our Mission
"Steering transformative technology towards benefiting life and away from extreme large-scale risks.
"We believe that the way powerful technology is developed and used will be the most important factor in determining the prospects for the future of life. This is why we have made it our mission to ensure that technology continues to improve those prospects."

Wikipedia is also disappointing.

A more recently established organization than the ones above, and one which I find interesting, is the Center for Security and Emerging Technology (CSET) at Georgetown University in Washington, D.C. Created in 2019, this think tank works at the intersection of emerging technologies and public policy, with a stated focus on A.I. and biotechnology. However, when you go through their actual work, you see current-affairs items which don't seem to fit well, as if they are staying busy on other things; there are gaps in their work, while at the same time they seem to duplicate reporting I've seen elsewhere. It's difficult to see what is actually reducing existential risk, though there is a lot of information to inform the public about what's going on in the world. What do they actually get funding for? I see nothing on space settlement. Nevertheless, I do find some of the items in their "Publications" section to be quite interesting, which you can also browse via their "Research Topics" section. They're another one on the checklist to keep an eye on and hope for, but not depend on.

There are various publications by individuals and co-authors, not associated with any of the above organizations, spread around; I may add the better ones here later as I come across them again. However, as the Bulletin of the Atomic Scientists has long said, it is remarkable how such an extremely important topic as humankind's survival gets so little attention, relatively speaking, in the published literature. I would add that not only does it get little attention, but what attention it does get is lacking in completeness, realism, and practicality. Coverage has improved a lot since the turn of the century, but it is still not nearly what I had expected it to be.

I sometimes see things which remind me of COVID-19 and the year 2020, when we had so many warnings starting in January in Wuhan and beyond, yet "authorities" stubbornly persisted in old positions which were obviously wrong. For example:

On March 29, 2020, the World Health Organization (WHO) stated: "FACT CHECK: COVID-19 is NOT airborne ..." and on their website stated "There is limited evidence that wearing a medical mask by healthy individuals in the households or among contacts of a sick patient, or among attendees of mass gatherings may be beneficial as a preventive measure." Some of their other statements include: "However, there is currently no evidence that wearing a mask (whether medical or other types) by healthy persons in the wider community setting, including universal community masking, can prevent them from infection with respiratory viruses, including COVID-19." Also: "... the wide use of masks by healthy people in the community setting is not supported by current evidence and carries uncertainties and critical risks." (I really don't want to get started on the WHO, as they made so many questionable statements, and persisted in their position for so long ...)

The US Surgeon General on February 29, 2020, tweeted:

"Seriously people- STOP BUYING MASKS!

"They are NOT effective in preventing general public from catching #Coronavirus, but if healthcare providers can't get them to care for sick patients, it puts them and our communities at risk!"

On March 4, 2020, Dr. Oxiris Barbot, the commissioner of the New York City Department of Health and Mental Hygiene, stated "There's no indication that being in a car, being in the subways with someone who's potentially sick is a risk factor, because, again, it goes back to the issue of casual contact."

... and there are so many others ... but I will stop quoting here.

I was already well read in the scientific literature about the spread of pathogens, and also about face masks (including against air pollution; e.g., I had registered the domain name in March 2019, many months before COVID-19 was detected). Seeing the above, plus so many other "experts" and "authorities" standing on their soapboxes to assert their high status, put down others raising concerns, and get things so wrong about a serious virus like the original COVID-19 ... Well, that experience in 2020 reminds me of the situation today with synthetic biology, A.I., and human existential risk: so many establishment experts and authorities getting things so wrong.

(By the way, you may discuss with me designs for exceptionally good face masks against pathogens; they are not too difficult to attain. I wore some unique homemade ones in public, and had more ideas based on persistent research of the scientific literature. You can see some of my early 2020 responses on my site, with some of that work done during the initial face mask shortage. Maybe enough of us smarter people can survive some kinds of future and worse pandemics. However, I had a difficult time getting others to follow up.)

So, anyway, when you read literature on emerging technologies and existential risks, keep COVID-19 and the events of 2020 in mind.

(I also spent many years working as a contractor for various U.S. government agencies ... but have resisted getting into that topic. It was enough to make me lose a lot of confidence in the ability of governments to go beyond small and slow steps.)

I have little doubt that companies and some bad actors (including military competition) will race to create a destructive future, and the real race is between space settlement vs. catastrophic events or extinction.

People will also "cherry pick" what they believe ... whatever suits their self-interests.

It's good to read the literature and understand things, but it's far more important to work on solutions, and not just rely on and follow the "experts" and "authorities". There may not be a second chance when they get it wrong. It would be "unforgivable neglect" (as Carl Sagan put it) to get things wrong on humankind's survival. This is about life from Earth in this Universe and beyond.

Our choice is:

  • just return to business as usual, or
  • help humankind survive.

As noted in the History section of this website, the late Dr. Carl Sagan stated on his immensely popular Cosmos TV series in 1980, trying to answer the question of why there are no observations of other civilizations in our Universe: "... Civilizations then might take billions of years of tortuous evolution to arise and then snuff themselves out in an instant of unforgivable neglect. If this is a typical case, there may be few others, maybe nobody else at all for us to talk to. But consider the alternative, that occasionally civilizations learn to live with high technology and survive ... If only 1% of civilizations can survive technological adolescence, then ... [there could be millions of extraterrestrial civilizations ...]" He also spoke of civilizations surviving "the trap jointly set by their technology and their passions."

So, will we just continue to stumble forward by returning to business as usual, or will we help humankind survive?

Please provide quick feedback on this page. It is encouraging to just know people read anything on this site and care enough to give some quick feedback.
