Yes, I agree that at the mining level, forking the blockchain with enough computational power is an issue (inherent in any coin using a similar consensus model for verifying transactions), BUT that is not the issue we are discussing, as that merely determines the block/ledger in which a coin's transactions are stored. The topic was "privacy" and how Dash's approach to it has been an "attacker economist" approach.
As far as I am aware, the "attacker economist" approach would also be used for dealing with the possibility of Sybil attacks in a cryptonote coin.
The fee structure would also be an "attacker economist" approach to prevent the blockchain from being bloated to DOA levels (and the coin / transaction network going to the grave), due to the problematic scaling and bloat of ring signatures at high mixing levels.
What scaling problem exactly are you referring to here? ^
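For reference, if the "scaling" being referred to is transaction size, here is a rough back-of-the-envelope of how a pre-RingCT CryptoNote ring signature grows with the mixin count. The byte counts below are my own assumptions for illustration, not from any Monero spec:

```python
# Rough, illustrative size estimate (assumed byte counts, not from any Monero spec):
# in a pre-RingCT CryptoNote transaction, each ring member adds roughly one
# (c, r) scalar pair of signature data per input, plus a small output offset.

SIG_BYTES_PER_RING_MEMBER = 64   # assumed: two 32-byte scalars per ring member
OFFSET_BYTES = 4                 # assumed average varint size for an output offset
BASE_TX_BYTES = 500              # assumed fixed overhead (prefix, outputs, extra field)

def approx_tx_size(num_inputs: int, mixin: int) -> int:
    """Very rough transaction size in bytes at a given mixin level."""
    ring_size = mixin + 1
    per_input = ring_size * (SIG_BYTES_PER_RING_MEMBER + OFFSET_BYTES)
    return BASE_TX_BYTES + num_inputs * per_input

for mixin in (0, 3, 10, 50):
    print(mixin, approx_tx_size(num_inputs=2, mixin=mixin))
# under these assumptions, size grows roughly linearly with the ring size
```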
Concerning your first statement, have you read through the Monero Research Lab papers? This is to address your cryptonote comment, since Monero is cryptonote-based.
https://lab.getmonero.org/pubs/MRL-0001.pdf
It gives an example of how the math works out: as the level of mixing increases, the probability (keyword) that an attacker would be able to successfully link addresses together on the Monero blockchain decreases in a non-linear fashion.
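To put some rough numbers on that (this is my own toy simplification for illustration, not the actual model in the paper): if an attacker can already rule out some fraction of the candidate decoy outputs, the chance of pinning down the real input falls off exponentially with the number of mixins, not linearly.

```python
# Toy model only (my own simplification, not the MRL-0001 analysis itself):
# suppose an attacker can already rule out ("knows as spent") a fraction p of the
# outputs used as decoys. A ring input is only fully deanonymized if every one of
# its m decoys is an output the attacker can rule out.

def p_fully_deanonymized(p_known: float, mixins: int) -> float:
    """Chance all decoys are ruled out, assuming decoys are chosen independently."""
    return p_known ** mixins

for m in range(7):
    print(m, p_fully_deanonymized(0.5, m))
# mixin 0 -> 1.0, 1 -> 0.5, 2 -> 0.25, ... linkability drops off
# exponentially with the mixing level
```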
In DJB's blog post he states the following:
Cutting things too close: the "attacker economist" philosophy. There are three common approaches to choosing cryptographic key sizes:
The "attacker will probably fail" approach. People taking this approach say things like this: "We're designing this system so that the cost of breaking it is larger than the resources available to the attacker."
The "attacker will definitely fail" approach, producing larger key sizes. I take this approach, and I say things like this: "We're designing this system so that the attacker has chance at most one in a billion of breaking it using the available resources."
The "attacker economist" approach, producing smaller key sizes. People taking this approach say things like this: "We don't need attacks to be infeasible; that would be overkill. We're designing this system so that the attacker's expected cost of carrying out an attack exceeds the attacker's expected benefit from doing so."
For some attacks, notably the number-field sieve (NFS) for integer factorization, the success probability jumps very quickly from 0 to 1 as the cost increases past a particular threshold. It's then clear what the "attacker will probably fail" approach says, namely that the key size has to be chosen so that this threshold is beyond the attacker's resources.
However, for many other attacks the success probability grows much more gradually, and then it's not clear what the "attacker will probably fail" approach means. If "the cost of breaking" is defined as the median cost then the "attacker will probably fail" approach ends up saying that the attacker has chance below 50% of breaking the system; but cryptographic users aren't happy to hear that they've been given a cryptographic system that the attacker can break with probability 40%. NIST seems to focus on the average cost ("an algorithm that provides X bits of security would, on average, take 2^(X-1) T units of time to attack"), and with this definition the attacker's success probability could be 60% or even larger.
The "attacker economist" approach provides a rationale for focusing on averages. It views the attacker as rationally deciding whether to carry out attacks, saying "I will carry out this attack because its average benefit to me is larger than its average cost" or "I will skip this attack because its average benefit to me is smaller than its average cost". Of course, if the attacker carries out an attack of the second type, the attacker might still end up luckily gaining something (maybe the benefit turns out to be unusually large or the cost turns out to be unusually small), but after a long series of such attacks the attacker expects to lose.
The attraction of the "attacker economist" approach is that it produces smaller key sizes. The attack cost is limited to min{B,C}, where B is the attacker's benefit and C is the attacker's resources; for comparison, in the "attacker will definitely fail" approach, the attack cost is limited to C, which is usually much larger than min{B,C}. In other words, the "attacker economist" approach excludes not merely infeasible attacks, but also uneconomical attacks.
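To make his min{B,C} point concrete, here is a toy sketch of the decision rule he describes. The numbers are my own assumptions, purely for illustration:

```python
# Rational-attacker model, per the quoted reasoning (toy numbers of my own):
# the attacker runs an attack only if it is affordable and expected to be profitable.

def attacker_attacks(expected_cost: float, expected_benefit: float, resources: float) -> bool:
    """Attack iff the attacker can afford it and expects to come out ahead."""
    return expected_cost <= resources and expected_benefit > expected_cost

B = 1e6   # assumed benefit of a successful attack (e.g. value of the coins exposed)
C = 1e9   # assumed attacker resources

# "attacker economist" sizing: push the expected cost just past min(B, C) = 1e6
print(attacker_attacks(expected_cost=2e6, expected_benefit=B, resources=C))    # False

# ...but if the benefit was underestimated, that same sizing no longer deters:
print(attacker_attacks(expected_cost=2e6, expected_benefit=1e8, resources=C))  # True

# "attacker will definitely fail" sizing: push the cost past C itself
print(attacker_attacks(expected_cost=2e9, expected_benefit=1e8, resources=C))  # False
```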
I'm still convinced that Dash is taking the "attacker economist" approach, i.e. the attitude of "the privacy Dash has is good enough, why bother making things any more difficult/bigger/complex for an attacker?"
Is there any actual math-based proof of how the probabilities of attacking Dash play out in different scenarios? Has there been any research into this at all? If so, please link me to it.