I am open to discussion, questions, and refutations
Anony
When you put it that way, it makes me think, "but how could they get it so wrong?"
The final verification is implementation; the "DAC" community around mpow will want you to implement.
In fact, according to the principles of DAC you are both helping and hurting ProtoShares...
You are helping with open discussion, but hurting by stating that it's vulnerable but NOT moving to implement it, thus the DAC community may believe that others have implemented it behind the scenes.
So look at the DAC as a processor: you have opened an "mpow is vulnerable" thread, but it hasn't been resolved yet, and eventually it will affect efficiency through confidence.
So a result is needed.
AnonyMint, I am glad to see that you put together a very sound argument, and I can confirm that hyper-threading does double performance. GPUs are basically massively hyper-threaded.
Nice to get this tone from you. You mentioned you've been sick, and I am sure you are overworked.
So we have a situation where a GPU is the most effective way to hit the memory bandwidth limitation and to get the highest memory bandwidth. I am OK with that because most integrated graphics these days can support OpenCL as well, which means that a CPU with integrated graphics can apply the same type of optimization and thus hit the memory bandwidth.
AGPUs (i.e. integrated with CPU) have even worse memory latency and bandwidth than the CPU.
So now you are telling me that a GPU has 2x the memory bandwidth. A factor of 2x on the GPU is insignificant.
10x. 260 GB per sec on the HD7970 versus 20 GB per sec on i7 Haswell, Ivy Bridge, and Sandy Bridge. The -E variants go up to 40 GB per sec. But you will never even reach that 10x, because you don't have enough threads and you aren't grabbing 128 or 256 bytes per access with SIMD instructions. Probably closer to 100x and at least 10x, but someone would need to implement it. And the AGPU, which has more threads, is bottlenecked to memory, although this might improve in the unification towards GPGPU.
For a CPU-only coin, I would want GPUs to be worse than CPUs, not merely somewhat faster than CPUs as is the case for Litecoin.
It will be fun to see where this all goes.
digitalindustry, you are right that this DAC will cause the community to develop these algorithms and either break or prove the proof of work. This is a large part of why I released ProtoShares: it is a prototype for the proof-of-work, and we have all learned a great deal from this experience. Future DACs will be stronger for it.
Ok that is fine. Good to get this sort of attitude and response. Thanks.