No problem, and good clarification. I think the problem that bugs me the most is that this "margin-of-error" attached to conclusions derived from the Scientific Method is inherently impossible to calculate (as far as I can tell).
I agree that certain things can be proven absolutely; I suppose certain mathematical proofs would be examples of a priori knowledge, and could be proven logically with no need for inductive reasoning? When I said "nothing can ever be proven" I meant things based on inductive reasoning (lazy writing from me).
The margin-of-error can only be calculated based upon the number of trials. If I've been alive for 3,000 days and the sun hasn't exploded yet, then based upon those 3,000 "trials" I can predict with very high statistical confidence that the sun will not explode tomorrow, because the margin-of-error is very small. Of course, that confidence does no good if the sun goes nova tomorrow. The margin-of-error exists specifically because you always have access to a limited data set. It could be eliminated completely if you somehow had knowledge of all trials that ever were, are, and ever will be, but obviously we don't have that ability.
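To put a number on that (a minimal sketch of my own; nothing above names a method, so I'm assuming two standard textbook estimators, Laplace's rule of succession and the "rule of three"):

```python
# A rough sketch of how one might quantify the "3,000 sunrises" example.
# Both estimators are standard, but applying them here is my own assumption.

n = 3000  # days observed with zero explosions

# Laplace's rule of succession: P(event on next trial) = (events + 1) / (n + 2).
# With zero explosions so far, the estimated chance of one tomorrow is:
p_explodes_tomorrow = 1 / (n + 2)  # ~0.000333

# "Rule of three": an approximate 95% upper bound on the per-trial event
# probability after n event-free trials.
p_upper_95 = 3 / n  # 0.001

print(f"Rule-of-succession estimate: {p_explodes_tomorrow:.6f}")
print(f"Approx. 95% upper bound:     {p_upper_95:.6f}")
```

Both numbers quietly assume the trials are independent and the underlying process never changes, which is exactly the inductive leap being discussed, so the "margin-of-error" they produce inherits the same blind spot.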
And correct, mathematical proofs are fully abstract, internally consistent, and (at least) to that extent, sound. Whether or not (and how) they actually apply to physical reality is another issue. But regardless, they constitute 'a priori' knowledge and are knowable at a 100% level of confidence, without any margin-of-error.
This is interesting stuff. I'd be lying if I said I understood it all, but I would like to question your final point.
I think I agree on everything up to that. If I'm understanding correctly, metrics are inherently abstract because they rely on perception to exist. Even if you had a perfect machine which used the binary metric to ask whether something existed or not, the result must be perceived by a "mind", so even this binary metric is abstract.
On to your last paragraph. Now, I agree that "metrics are self-descriptively invoked by an intelligent mind, and that all real definition is a product of these metrics", but why should that mean that "Intelligent Design is the necessary mechanism by which reality is created/defined"?
Why is it not possible that, for example, reality always existed, and the metrics that we use to define it are of our own making? Or in other words, why should our logical definition of reality have anything to do with how it was created? Just because we need metrics to understand reality, why does that mean that said reality has to have an Intelligent Designer using the same metrics?
(sorry, finding it hard to explain myself...)
Yes, your understanding is basically correct, and you're also correct about the "perfect machine." Sensory technology seems to function as a 2nd-order observer. In the double-slit experiment of quantum mechanics, the apparent collapse of the wave function occurs in the presence of both human and technological observers.
Your question about whether Intelligent Design is the "necessary" mechanism by which reality is created/defined is fantastic. You are correct to imply that conclusion didn't necessarily follow.
The best model one can theoretically come up with to explain something must meet a few criteria: it must 1) be internally consistent, 2) comprehensively and soundly explain all the information it sets out to explain, and 3) introduce the fewest number of assumptions, ideally zero. Falsification of the model can happen on two levels. At a lower level, the model can be rendered internally invalid if new information is introduced which should be explained by it but isn't. At a higher level, the model can be rendered externally invalid if another model, broader in its scope, not only explains all the information in the original model, but synthesizes this knowledge with other information unexplained by the original model (the result being a deeper understanding which predicates any topological understanding).
That being said, could reality have "always existed," independent of metrics? From an empirical perspective, maybe, but there's no possible way to know without introducing some unnecessary assumptions. This actually gets right back to the Positivistic Universe assumption, as your question runs into the same impossible means of empirical falsification, i.e. you would need to somehow collect metric data via observation in a Universe totally void of observers and metrics. What we do know, however, is that the data suggest that in 100% of cases where reality has been affirmed to exist, perception and metrics were present, and in exactly 0 cases has reality been affirmed to exist in the absence of perception and metrics. That's why the Positivistic Universe assumption exists in the first place; it's as practical to adhere to this assumption as it is to assume the sun won't go nova tomorrow.
From a philosophical perspective, no lol, reality could not have existed independent of metrics. One reason is that we have the sameness-in-difference tautology of logic to turn to, which states that all relational entities must necessarily reduce to a common medium. Because "real" and "unreal" are relational entities, it follows that they, too, reduce to a common medium. Metrics axiomatically create the distinction between real and unreal according to a simple difference metric (i.e. 1 vs. 0). No metric --> no distinction between what's real and unreal.
Just found your post, I'll try to reply as best I can.
Regarding the margin-of-error, we are on the same page here. I understand that more trials = stronger statistical evidence. It's just that, as you say, we can never have complete knowledge. This means it is possible, for example, that every single trial ever done was influenced by an alien race from a parallel universe who "tweaked" the outcome of every trial to affect our understanding of reality. My point was that, if something like this had happened, we would have no way of knowing. We also have no way of measuring how likely it is, because it would be beyond our empirical understanding of reality. Such a scenario is logically possible, but totally impossible to provide evidence for, due to the limits of inductive reasoning. That's what bugs me.
Regarding the double-slit experiment, I suppose you're right in saying that observation is 2nd order. But the reason the experiment works is that, to observe anything on the quantum scale, we have to interact with it. Whether it is a human observing or a sensor, we have to measure photons that have bounced off the particles we are trying to measure, and those photons must have influenced the particles. In normal day-to-day life we don't need to worry about these interactions, because we humans are not sensitive to anything on the quantum level, and photons do not affect anything we interact with in this way. So although all observation is inherently 2nd order and not 1st order, I think it makes more sense to treat our own human-scale observations as if they were 1st order, even though strictly they aren't.
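To put rough numbers on why the quantum scale is so sensitive to this (a back-of-envelope sketch; the specific figures, like the baseball, are my own illustrative choices):

```python
# Momentum carried by one green-light photon vs. an electron and a baseball.
# Simple momentum-transfer estimates only; real scattering is messier.

h = 6.626e-34           # Planck's constant, J*s
wavelength = 550e-9     # green light, m
m_electron = 9.109e-31  # electron mass, kg

p_photon = h / wavelength            # ~1.2e-27 kg*m/s
dv_electron = p_photon / m_electron  # velocity kick if an electron absorbs it, ~1.3e3 m/s

m_ball, v_ball = 0.145, 40.0  # a thrown baseball: mass (kg), speed (m/s)
p_ball = m_ball * v_ball      # ~5.8 kg*m/s

print(f"Photon momentum:             {p_photon:.2e} kg*m/s")
print(f"Electron velocity kick:      {dv_electron:.2e} m/s")
print(f"Baseball-to-photon momentum: {p_ball / p_photon:.1e}")  # ~5e27
```

A single photon can knock an electron around at km/s speeds, while its effect on anything human-scale is smaller by roughly 27 orders of magnitude.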
So there is no "mystical" element to the result. (I'm not insinuating that you said this; it's just a common misconception. Many people think the experiment is evidence of magic or some shit...)
I totally agree with your definition of an optimum model, and with your point about it not being possible to know if reality "always existed", due to the limitations of inductive reasoning. You rightly say that, to know this, "you would need to somehow collect metric data via observation in a Universe totally void of observers and metrics." (Great line, it pretty much sums up my feelings on philosophy and why I both love it and hate it; kinda links back to my point about the interfering alien race.)
I have to admit, I'm finding your final paragraph hard to understand (when I google "sameness-in-difference" I get loads of obscure philosophical papers about feminism and racism). From what I do understand, though, it seems to me that you're providing a valid and compelling case for agnosticism, but not for the existence of an intelligent designer.