On updating one's worldview

In this post, I'd like to describe how I update my worldview in response to new information.

As an example, I recently read Max Tegmark's book "Life 3.0", wherein I learned that there is now a widespread consensus among AI researchers along the following lines:

  • An artificial general intelligence (AGI, i.e. something with capabilities comparable to a human's) is possible
  • It is also possible that such an AGI would undergo self-improvement, which would present an existential risk to humanity
  • The averaged estimate is that this could happen within a few decades
  • Economic and other forces are providing strong incentives and a lot of funding to keep pushing this forward
  • We have no idea how to align an AGI's goals with our own, and the estimate for figuring this out is also measured in decades, if it can be done at all
  • Therefore, we have to put effort into solving the alignment problem now (and the threshold for action on existential risks has to be quite low).

Once I learned this, I had to change my position from equivocation to accepting that AGI poses an existential risk and that effort should go into mitigation right now.

It was one thing when it was just Ray Kurzweil prophesying the singularity and not much progress seemed to be happening in AI; it's a completely different game now. It's virtually impossible that whatever clever arguments I come up with against the possibility of general AI haven't already been considered by actual AI researchers. So I can only accept their position, hold no position, or make a serious attempt to establish that their position is invalid (some lines of further inquiry are listed below); anything else is tantamount to science denial.

Note that I'm applying this in the context of an invention that would pose an existential risk to humanity. Perhaps with other things it's OK to be more conservative?

Threshold for reversal

Conversely, I'm not going to be convinced to reverse my position by any of the following:

  • The opinion of a non-expert (e.g. this article by Kevin Kelly, a journalist).
  • The position expressed by a single scientist (e.g. this article by a robotics professor) when a large group of AI researchers agrees on a different position.
  • The position expressed by a group of scientists who are experts in a different domain, unless they've found a counterargument within their domain of expertise (e.g. physicists showing that the brain represents a minimum bound of complexity for AGI due to the laws of physics). Neuroscientists wouldn't be such a group, for example, because their refutation could only be "you can't build a brain" (that's their knowledge domain), and that's not at all the same as "you can't build AGI".

Basically, I'm relying on the process of science to navigate the information flow. If somebody comes up with a useful diverging view (say, a new assessment of the lower bound for computational resources required for AGI which means it's no less than two centuries off), then I expect the regular process of science to shift the consensus, and in time I will read about that. But until then, there's a whole swath of AI articles and books I just don't need to look at. It's a very convenient tool for choosing what to read and what to debate.

There are certainly problems with science (e.g. the recent replication crisis in social psychology), but I expect that I will still end up with a more accurate view by relying on the scientific consensus. Besides, there's simply no way to form a position by reading individual AI articles expressing opposing views.

This approach is generally applicable; it works just as well for climate change, for example. If somebody brings up Judith Curry, who might look like a climate scientist from a distance, I don't really need to examine her views and spend time finding out that she is in fact much more likely to be a pseudoscientist sponsored by fossil fuel companies. I don't need to debate her views point by point either. There is the simple fact that 98%+ of climate scientists agree on the main points (climate change is anthropogenic, etc.), and additionally, meta-studies have been done on the studies from the remaining 2%, showing serious issues with all of them.

Valid lines of arguments for reversal

It would still be possible for me to reverse my position on AGI! Valid lines of argument that could convince me to do so would be, for example:

  1. Showing that there isn't, in fact, a widespread consensus (e.g. Tegmark has overstated his case, or I have misunderstood it).

  2. Showing that AI researchers are not engaged in science or a science-based pursuit such as engineering. For example, no matter how many homeopaths agree with each other, we still don't want to listen to them.

  3. Showing that there is a sufficiently large group of qualified AI researchers who don't think AGI is possible. This would be a weak argument unless the group actually proposes a proof of impossibility, because the crux of the issue is the possibility, not the inevitability, of AGI.

Finally...

All of this ties in with some vague ideas I have about my increasing reliance on meta-information when dealing with the torrents of information generated in the age of the internet. But that would be a subject for another post.