Can anyone please explain:
- Why do we need a hardfork (or not)?
- What about the scratchpad?
There are GPU miners for Wild Keccak (which is used by BBR) now.
Since this coin is intended to be CPU-only, mining requires a scratchpad (as an added memory requirement).
A GPU dev figured out how to store it efficiently, so we are talking about a hardfork to change the size of the scratchpad (or to change anything else we want to).
To be more precise - the Wild Keccak scratchpad grows with each block. Right now the scratchpad is about 7 MB. It will grow by about 90 MB per year, and since at this moment the scratchpad is not big enough, it is possible to use the GPU texture cache to mine faster, as far as I know.
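To put that growth in perspective, here's a rough back-of-the-envelope sketch (only the ~7 MB starting size and ~90 MB/year rate are from the figures above; the purely linear growth is an assumption):

```python
# Rough projection of the Wild Keccak scratchpad size over time.
# Assumes linear growth from ~7 MB today at ~90 MB/year.
START_MB = 7.0
GROWTH_MB_PER_YEAR = 90.0

for years in range(6):
    size_mb = START_MB + GROWTH_MB_PER_YEAR * years
    print(f"after {years} year(s): ~{size_mb:.0f} MB")
```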
But according to dga's calculations, the GPU miner is only about 2.3 times more cost-effective than CPU, since only expensive cards give a big advantage.
PS: The probability that I'll do a hardfork is less than 1%. Right now we are focused on optimizing the miner and implementing stratum pools.
Hmm, I based my previous post on the belief that the GPU miner was 7 times faster. If the difference is 2.3x and will likely become less with CPU miner optimization and as the scratchpad grows, then a hard fork isn't necessary. But that's just my unsolicited opinion.
BTW, I like the GUI and have been using it. Going to play around with the HTML.
If you'll forgive me for my academic caution, I want to be careful with those numbers -- they're an estimate.
From some numbers Zoidberg shared with me, it sounds like the GPU miner is getting about 7 megahash/s.
From his posts, I believe Christian is mining on 3x NVIDIA 780 Ti cards (I confirmed this via PM). Each of those costs around $700, depending on which model you buy.
Simpleminer is now getting about 420 kh/s on my i7-4770-class CPU (it's actually a Xeon E3-1241 v3). Price is about $280.
So if I put together a few numbers:
device      speed (hash/s)   cost       hash/s/$
3x 780 Ti   7,000,000        3 x $700   3,333
1x i7       420,000          $280       1,500
That gives a cost-efficiency advantage of about 2.2 (I'm doing this from memory - I think the actual hashrate for the GPU rig was more like 7.2 Mh/s, leading to an advantage of 2.3).
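If you want to re-run that arithmetic yourself, here's the same calculation as a small Python sketch (the hashrates and prices are just the estimates quoted above, nothing more):

```python
# Hash-per-dollar comparison using the rough figures above.
rigs = [
    {"name": "3x 780 Ti", "hashrate": 7_000_000, "cost_usd": 3 * 700},
    {"name": "1x i7 (Xeon E3-1241 v3)", "hashrate": 420_000, "cost_usd": 280},
]

for rig in rigs:
    rig["hps_per_dollar"] = rig["hashrate"] / rig["cost_usd"]
    print(f'{rig["name"]}: {rig["hps_per_dollar"]:.0f} hash/s per $')

advantage = rigs[0]["hps_per_dollar"] / rigs[1]["hps_per_dollar"]
print(f"GPU cost-efficiency advantage: ~{advantage:.1f}x")
```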
But the caution is that this is only one example -- there may be better GPUs for this, in which case the GPU advantage might be higher, or there might be better CPUs in terms of hash/s/$ (the i7 is a pretty expensive CPU), in which case the advantage might be lower.
I'd feel comfortable saying "1.5-4x" based upon these numbers.
Let me throw in one more thing to ponder for the longer term: It's hard to predict what effect the increasing scratchpad size will have on the GPU to CPU efficiency ratio. I believe that as the scratchpad starts to slip out of L3 cache on "normal" processors, it will be necessary to re-architect the CPU miners to operate differently for best efficiency. I'm going to make a more risky prediction and say that compared to XMR, BBR will be a bit more GPU-friendly: it has a bit more compute for each iteration, and its random memory access width is 256 bits instead of 128 bits. But BBR will probably be less ASIC-friendly: its growing scratchpad size (which is already very large by embedded memory standards) seems like a real pain, though I'm sure there are some ways to be super-clever about it.
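To make the L3 point a bit more concrete, here's a quick sketch of when a linearly growing scratchpad (~7 MB now, ~90 MB/year, per the figures earlier in the thread) would exceed a few L3 cache sizes -- the cache sizes themselves are just illustrative examples, not a survey of real CPUs:

```python
# When does the scratchpad outgrow a given L3 cache?
# Assumes linear growth from ~7 MB at ~90 MB/year (figures from this thread);
# the L3 sizes below are just typical examples.
START_MB, GROWTH_MB_PER_YEAR = 7.0, 90.0

for l3_mb in (8, 20, 32):
    months = max(0.0, (l3_mb - START_MB) / GROWTH_MB_PER_YEAR) * 12
    print(f"{l3_mb} MB L3 cache exceeded after ~{months:.0f} months")
```

In other words, even fairly large caches stop covering the whole scratchpad within a few months, which is why I expect the CPU miners will need re-architecting.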
(Also - there are a ton of other factors involved, such as the cost of the motherboard, or whether you already own the CPU and use it for other things, etc., etc. -- please don't use this as a guide for deciding profitability!)