In this post, I’d like to describe how I update my worldview in response to new information.
As an example, I recently read Max Tegmark’s book “Life 3.0”, wherein I learned that there is now a widespread consensus among AI researchers along the following lines:
Once I learned this, I had to change my position from equivocating to accepting that AGI poses an existential risk and that effort should go into mitigation right now.
It was one thing when it was just Ray Kurzweil prophesying the singularity and not much progress seemed to be happening in AI; it’s a completely different game now. It’s virtually impossible that whatever clever arguments I come up with against the possibility of general AI haven’t already been considered by actual AI researchers, so I can only accept their position, hold no position, or make a serious attempt to establish why their position is invalid (some lines of further inquiry are listed below); anything else is tantamount to science denial.
Note that I’m applying this in the context of an invention which would pose existential risk to humanity. Perhaps with other things, it’s OK to be more conservative?
Conversely, I’m not going to be convinced to reverse my position by any of the following:
Basically, I’m relying on the process of science to navigate the information flow. If somebody comes up with a useful diverging view (say, a new assessment of the lower bound for computational resources required for AGI which means it’s no less than two centuries off), then I expect the regular process of science to shift the consensus, and in time I will read about that. But until then, there’s a whole swath of AI articles and books I just don’t need to look at. It’s a very convenient tool for choosing what to read and what to debate.
There are certainly problems with science (e.g. the recent replication crisis in social psychology), but I expect that I will still end up with a more accurate view by relying on the scientific consensus. Besides, there’s simply no way to form a position by reading individual AI articles expressing opposing views.
This approach is generally applicable; it works just as well for climate change, for example. If somebody brings up Judith Curry, who might look like a climate scientist from a distance, I don’t really need to examine her views and spend time finding out that she is, in fact, much more likely to be a pseudoscientist sponsored by fossil fuel companies. I don’t need to try to debate her views point by point, either. There is the simple fact that 98%+ of climate scientists agree on the main points (climate change is anthropogenic, etc.), and additionally, meta-studies have been done on the studies in the remaining 2%, showing serious issues with all of them.
It would still be possible for me to reverse my position on AGI! Valid lines of argument which could convince me to do it would be, for example:
Showing that there isn’t, in fact, a widespread consensus (e.g. Tegmark has overstated his case, or I misunderstood things).
Showing that AI researchers are not engaged in science or a science-based pursuit like engineering. E.g. no matter how many homeopaths are in agreement, we still don’t want to listen to them.
Showing that there is a sufficiently large group of qualified AI researchers who don’t think AGI is possible. This would be a weak argument unless the group actually proposes a proof of impossibility, because the crux of the issue is the possibility, not the inevitability, of AGI.
All of this ties in with some vague ideas I have about my increasing reliance on meta-information when dealing with the torrents of information generated in the age of the internet. But that would be a subject for another post.