Solution #76 to the Fermi Paradox

The Fermi paradox is:
The Universe is very big and very old; given there is a human technological civilization on Earth, why don't we see evidence of technologically advanced extraterrestrial civilizations?
This has inspired much speculation from professional scientists, science fiction (SF) writers, and many others. Devising possible explanations is an interesting exercise in organized logic. A top-down organization might start with the broad alternatives; 75 more detailed possible explanations are given in the second edition of Stephen Webb's non-technical book If the Universe Is Teeming with Aliens ... WHERE IS EVERYBODY?, which I spotlight as an example of good popular science writing.

Probability is implicit throughout any discussion of the Fermi paradox -- which logically or scientifically possible explanations are not extremely unlikely? The popular Drake equation formulation requires one to estimate the probabilities of successive key stages in the development of life and civilization, but these numbers are highly conjectural. A 2018 Sandberg-Drexler-Ord analysis asserts (but see this riposte) that if one accounts for this uncertainty by putting priors on the relevant parameters, then we cannot exclude either possibility: that technological civilizations are intrinsically so unlikely to arise that we are probably alone, or that one would a priori expect many such civilizations, in which case the Fermi paradox really is a paradox. A variant argument, implicit in Robin Hanson's Great Filter metaphor, does use some honest mathematics. Under the usual assumption that each key step is individually unlikely, then conditional on all the steps having occurred, their times are distributed as uniform random points over the relevant time interval, regardless of the actual step probabilities; and this seems roughly consistent with the history of life on Earth. (My own minor contribution to this topic is to observe that the argument works for an arbitrary branching tree of possible development steps, not just the steps that actually happened on Earth -- see this semi-technical paper.)
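That uniform-times claim is easy to check by simulation. Below is a minimal sketch of the hard-steps model: each step's waiting time is exponential with mean much longer than the available window T, and we condition on all steps completing within T. The number of steps (3) and the means (10T) are arbitrary illustrative choices, not values taken from any of the papers above.

```python
import numpy as np

rng = np.random.default_rng(42)

T = 1.0                               # normalize the available time window to 1
n_steps = 3                           # number of "hard steps" (illustrative choice)
means = np.full(n_steps, 10.0 * T)    # each step's mean waiting time >> T

accepted = []
while sum(len(a) for a in accepted) < 2000:
    # waiting times for each step, in batches of one million trial histories
    waits = rng.exponential(means, size=(1_000_000, n_steps))
    ok = waits.sum(axis=1) <= T       # keep only histories where all steps fit in the window
    accepted.append(np.cumsum(waits[ok], axis=1))   # completion time of each successive step
times = np.concatenate(accepted)[:2000]

print("simulated mean step-completion times:", times.mean(axis=0).round(3))
print("uniform order-statistics prediction: ",
      (np.arange(1, n_steps + 1) / (n_steps + 1)).round(3))
```

The simulated mean completion times come out near 1/4, 1/2, 3/4 of the window, matching the uniform order-statistics prediction; the agreement tightens as the step means grow relative to T, and it holds whether or not the means are equal.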

On this page I want to make a different point. As Stephen Webb argues, explanations based on the presumed psychology or motivation of aliens are unconvincing. However, regardless of psychology or motivation, for a civilization to last a million years it cannot have much more than a one-in-a-million chance per year of being destroyed, from all risks combined, known and unknown. Such a civilization would surely be incredibly risk-averse (whether consciously or unconsciously doesn't matter). Now we can imagine various reasons why it might be dangerous to become known to other civilizations -- call this the Dark Forest scenario, after Liu Cixin's recent SF trilogy.
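To spell out the arithmetic: with a constant per-year destruction probability p, the chance of surviving t years is (1 - p)^t, which is approximately e^{-pt}. A quick sketch (the values of p are just illustrative points around the one-in-a-million threshold):

```python
import math

years = 1_000_000
for p in (1e-5, 1e-6, 1e-7):          # assumed per-year probability of destruction
    survive = math.exp(-p * years)    # (1 - p)**years is approx exp(-p * years)
    print(f"p = {p:.0e}: chance of surviving {years:,} years ~ {survive:.3f}")
```

At p = 10^-6 the survival chance is already down to e^{-1}, about 0.37, and at ten times that rate it is essentially zero; this is the sense in which million-year longevity forces the per-year risk below roughly one in a million.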

My point is that it is not necessary for this "dangerous galaxy" possibility to be true, or even for our hypothetical aliens to believe it is more than incredibly unlikely to be true. Instead, regardless of alien psychology or motivation, it is hard to imagine a civilization being incredibly risk-averse about a myriad of other risks but not about this particular one, however unlikely it is perceived to be. And taking this possibility seriously would entail efforts not to become known to other civilizations.