Global Catastrophic Risk and Artificial Intelligence
In thinking about the future of humanity, this class has taught me to consider certain global catastrophic risks. As human progress accelerates, does our likelihood of destroying ourselves increase as well? Nick Bostrom has published a paper on this topic, and it helped answer many of these questions. He argues that human extinction will not be caused by disease or natural disasters, even if a great many people might die, and that even in the event of nuclear warfare there is a strong likelihood that at least a handful of humans would survive. The biggest threat to humankind is the advance of technology and the accompanying lack of comprehension and control. Synthetic biology, nanotechnology (if used with cruel intentions), and artificial intelligence all serve as both tools for and potential threats to societal progress. AI seems to be the area with the most power and the most mystery.

On the other hand, there is also a price for not advancing as a society. In another article, "Astronomical Waste," Bostrom estimates that every second we delay colonizing our local supercluster costs roughly 10^29 potential human lives, and that even a marginal technological advance allowing us to colonize it one second sooner would save about 10^14 biological human lives.

But there are reasons to be optimistic, and we should be, given the unimaginable changes that will occur in our lifetimes and transform life as we know it. Technological advances are exponential: as time goes on, both their scale and their frequency increase, and society needs time to catch up. Yet we become more unified as a planet every day, and with that, two things happen. The first is that people become more aware of existential risk, global activities, and their individual impact on things such as the environment.
The second is that international organizations form to solve problems, and communication becomes easier. As the world grows more unified with each advance, we get better and better at catching up with and solving these problems, and these organizations will play a large part in doing so in the future. If the Effective Altruism movement can maintain its core values and mitigate its four main threats (the Eternal September effect, the coordination effect, ossification, and epistemic failure), then other organizations can draw on the resources that EA as a field has provided to solve problems in the most efficient and effective way possible.