So far, all the devs I have spoken to say Burst is flawed. The core dev hasn't shown his face in a while, and devs have only been leaving this coin. Why would I stick around? I have never been paid, and tbh I'm not much good anyway. It's a shame that burstcoin.biz doesn't have an asset explorer... but that's all I've managed to achieve. burstcoin.info is about as crap as a site gets. I dunno guys, can you tell me why I should stick around without a whitepaper? I'm not good enough to review the algo myself.
All your efforts are appreciated. I'm sure they often go unnoticed by many, but I really value everything you've added to this coin and the community as a whole.
A few other guys here are also great and add lots of value to this coin!
We've just got to push through these rough times together.
I would still like a clear explanation of exactly what it is you think is flawed in PoC.
To my knowledge it has been tested to see if it can be exploited, and every attempt has either failed or been completely pointless, such as trying to mine on GPU. That would be pointless because even a great GPU only equates to a tiny amount of HDD space, with ridiculous power consumption compared to a hard drive.
But yea, I don't know exactly what you're worried about with this flaw theory.
I think you should stick around because this is not the end, by far. Just because the few active devs we had have decided to move on doesn't mean we can't get new ones, and in fact I'm already prepared if need be.
I'm going to wait a while for the main dev to show up again before making any executive decisions, but it is entirely possible to reboot the coin (new OP, new devs) and pick up where the others left off, if we need to.
People are acting like he has been gone forever, when in reality I spoke to him personally on the first of August. That's less than a month of actually not being around. Is this really enough time for people to lose their minds?
If PoC is flawed in some way, I would like to know what way you think that is, so it can be documented and fixed.
The only criticism I remember seeing leveled at BURST is this one:
https://www.reddit.com/r/ethereum/comments/2tukar/cryptocurrency_burst_makes_smart_contracts_a/co2fywm
And by the author's own logic, the weakness does not exist:
"* In order for the algo to be storage-bound and for a shortcut attack involving recomputing everything not to exist, we need reading from the hard drive to take less time than recomputing the data. But then we want a 1000x safety margin if we want that condition to hold true against potential ASIC implementations, hence reads need to be 1000x cheaper than the plot computation step. Hence, reading more than one time would have a marginal incremental cost of only 0.1%."
So if reading is 1000x faster than plotting, there's no shortcut attack.
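If it helps, here's that condition written out as a quick Python check (the function and its 1000x default are just my own framing of the quote, nothing official):

```python
# The quoted criterion as a one-liner: the algo stays storage-bound if a
# read is at least `margin` times cheaper than recomputing the data.
def storage_bound(read_cost: float, recompute_cost: float, margin: float = 1000) -> bool:
    # Hypothetical helper; costs can be seconds, joules, or dollars,
    # as long as both sides use the same unit.
    return read_cost * margin <= recompute_cost
```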
So let's take a look at some numbers.
I took a look at my plot files, and each 500GB plot (2,097,152 nonces x 4096 scoops) took 4 hours to plot using dual GPUs. So around 3.5 million nonces/minute, which means that to be safe from attack, mining that plot would need to run at better than 3.5 billion nonces/minute.
I just checked one of my miners, and it read through 34TB in 77 seconds, so 26.5TB/minute. At one 64-byte scoop read per nonce checked, 26.5TB/min = 455 billion nonces/minute.
455 billion nonces/minute is a little higher than the 3.5 billion nonces/minute we need to be safe from attack.
Rather than the 1000x cheaper we need to be safe, reads are more than 100,000x cheaper ....
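For anyone who wants to check me, here's the back-of-envelope in Python, taking my numbers above at face value (the 64-byte scoop read is the standard plot constant; everything else is just my own rig):

```python
# Sanity check on the margin, plugging in the measurements above.
TIB = 1024 ** 4
read_bytes_per_min = 34 * TIB / 77 * 60   # my miner scanned 34TB in 77 seconds
checks_per_min = read_bytes_per_min / 64  # one 64-byte scoop read checks one nonce
plot_rate_per_min = 3.5e6                 # dual-GPU plotting speed from above

print(f"{checks_per_min:.3g} nonce checks/min")        # ~4.55e+11, i.e. 455 billion
print(f"{checks_per_min / plot_rate_per_min:,.0f}x")   # ~130,000x vs the 1000x needed
```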
Now, that was based on the time it took to write all those nonces to disk; an attacker doesn't need to write them, so they can be computed faster. So:
Based on the time taken to mine, we're processing around 120M nonces/minute. Each verification step takes 2 shabal256 calculations, so that's around 240M shabals/minute. But let's be generous and call it 500M/minute.
A no-storage attacker needs to do 4097 shabals per deadline they're potentially submitting, so that 500M/minute equates to around 125,000 nonces, or 32GB of plots.
So this theoretical ASIC, 1,000x faster than a GPU, would manage the equivalent of around 32TB.
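Here's that attacker arithmetic as a quick sketch (the 4097-shabal deadline cost and the 256KiB nonce follow from the plot construction described above; the 500M/minute is my generous GPU estimate):

```python
# The no-storage attacker, using the generous figures above.
shabal_per_min = 500e6            # generous GPU shabal256 throughput
shabal_per_deadline = 4097        # cost to recompute one deadline from scratch
nonce_bytes = 4096 * 64           # 4096 scoops x 64 bytes = 256KiB per nonce

deadlines_per_min = shabal_per_min / shabal_per_deadline    # ~122,000
equiv_plots = deadlines_per_min * nonce_bytes               # ~32GB of plots
print(f"{deadlines_per_min:,.0f} nonces/min ~ {equiv_plots / 1e9:.0f}GB of plots")
print(f"1,000x ASIC: ~{equiv_plots * 1000 / 1e12:.0f}TB")   # ~32TB vs my 34TB of disks
```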
An ASIC that's 1,000 times faster than the extremely generous performance specs we gave a high-performance GPU could come close to the mining performance of hard disks, but it won't match them. And as SSDs get bigger and cheaper (Samsung is planning to launch a 128TB SSD in 2018), the gap will only grow.
And that's for a theoretical ASIC; as it stands today, you'd need over 1,000 high-performance GPUs to match the mining capacity of 10 off-the-shelf SATA drives.
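Or, as a last sanity check on that claim, same numbers:

```python
# GPUs needed today to match a small stack of disks, using the figures above.
miner_bytes = 34e12               # my ~34TB miner, roughly ten 3.5TB SATA drives
gpu_equiv_bytes = 32e9            # one (generously rated) GPU ~ 32GB of plots
print(f"{miner_bytes / gpu_equiv_bytes:,.0f} GPUs")   # ~1,063 GPUs vs ~10 drives
```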
I think we're safe.
H.