This all means: you can't just take the plot and assume anything goes. Even for a hypothesized Dyson Swarm there are constraints, and those constraints imply further consequences that could be observed.
Of course. But here's the funny thing: if the proposed theory were that it's a cloud of comets, you wouldn't be demanding a detailed orbital solution.
Worst of all: can you even make a coarse prediction of what orbital radius the Dyson Swarm would have?
Mercury's, maybe less.
As for life that can build mega-structures, we have observed zero instances of this.
So what? We had also observed zero instances of gravitational waves before the 14 September 2015 LIGO event (GW150914).
In reality, it's all about the difference between a type 1 and type 2 error:
The skeptical mindset now prevalent in the scientific community is extremely harsh on people who make false positive (type 1) errors and extremely lenient on people who make false negative (type 2) errors. For that reason, falsely announcing the detection of aliens (type 1) will land you in trouble, while ignoring an actual alien (type 2) has no negative consequences.
The "extraordinary claims..." adage follows this mindset. As there is a negative (social and professional) consequence of making a type 1 error, the wisdom is that we should artificially increase the standard of evidence of required for "extraordinary" claims, so we avoid making a type 1 error. Now the weakness here is that by doing so we increase risk of making a type 2 error, but that's okay, because a type 2 error has no negative consequences for the scientist.
Say, for example, that I have evidence for an alien megastructure at the 95% confidence level, i.e. a 5% chance that the signal is a false alarm. If I publish and it turns out wrong, I risk trashing my career; if I don't publish, I risk nothing. So there is no incentive for me to publish it. This is why I said earlier that we may already have had observations of astroengineering, but people are sitting on them out of fear of ridicule.
Naomi Oreskes argues that this asymmetric treatment of type 1 and type 2 errors is detrimental, because it leads to the suppression of research results in situations where the cost of a type 1 error is lower than that of a type 2 error:
Is a Type 1 error worse than a Type 2? It depends on your point of view, and on the risks inherent in getting the answer wrong. The fear of the Type 1 error asks us to play dumb; in effect, to start from scratch and act as if we know nothing. That makes sense when we really don’t know what’s going on, as in the early stages of a scientific investigation. It also makes sense in a court of law, where we presume innocence to protect ourselves from government tyranny and overzealous prosecutors — but there are no doubt prosecutors who would argue for a lower standard to protect society from crime.
When applied to evaluating environmental hazards, the fear of gullibility can lead us to understate threats. It places the burden of proof on the victim rather than, for example, on the manufacturer of a harmful product. The consequence is that we may fail to protect people who are really getting hurt.
And what if we aren’t dumb? What if we have evidence to support a cause-and-effect relationship? Let’s say you know how a particular chemical is harmful; for example, that it has been shown to interfere with cell function in laboratory mice. Then it might be reasonable to accept a lower statistical threshold when examining effects in people, because you already have reason to believe that the observed effect is not just chance.
This is what the United States government argued in the case of secondhand smoke. Since bystanders inhaled the same chemicals as smokers, and those chemicals were known to be carcinogenic, it stood to reason that secondhand smoke would be carcinogenic, too. That is why the Environmental Protection Agency accepted a (slightly) lower burden of proof: 90 percent instead of 95 percent.