
Topic: SolidCoin v2.0 features new hashing algorithm, faster on CPUs - page 3. (Read 12200 times)

sr. member
Activity: 252
Merit: 251
I would love to volunteer to be a beta tester.  Cheers!

Hi, just keep an eye on the official forum or IRC channel for the public beta testnet, although I will likely also post about it here (no guarantees, though).
sr. member
Activity: 252
Merit: 251
(If there was nothing CPUs do best, why would we be using them?)
Sure, CPUs do all sorts of things better than GPUs. CPUs are great at managing interrupts from multiple devices like USB, PCI, etc. CPUs are great at managing memory spaces, protecting system resources from other threads of execution. And so on. None of those have anything to do with hashing algorithms that make the blockchain secure.

The beauty of the Bitcoin hashing system is its asymmetric nature combined with robust security: an incredible amount of computing power to find a hash, but trivial effort to verify it. I suspect that this new SC "design" is going to mess that up.

The final hash of the SC2.0 system is a SHA256. This means the security is the same as in Bitcoin; it's just done differently to get there. The reason I decided to stick with SHA256 as the final hash is that it's known to work well as a difficulty modifier. So I'm not quite sure what you're talking about in regards to "security". The worst case, if my new "algorithm" is broken, is a faster way to generate hashes. Making hashes faster does not equal a broken system, or Bitcoin could be considered broken since a modern CPU can do nearly 2 million hashes per second.

A modern CPU can do 20,000-35,000 SC2.0 hashes per second per core. No node receives that many blocks to verify per second.
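The find-versus-verify asymmetry being discussed is easy to demonstrate. Below is a generic Bitcoin-style double-SHA256 sketch in Python; it is illustrative only, not SolidCoin 2.0's actual construction, and the header bytes and target prefix are made up:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def find_nonce(header: bytes, target_prefix: bytes) -> int:
    """Finding a valid nonce takes many hash attempts..."""
    nonce = 0
    while not sha256d(header + nonce.to_bytes(8, "little")).startswith(target_prefix):
        nonce += 1
    return nonce

def verify(header: bytes, nonce: int, target_prefix: bytes) -> bool:
    """...but checking one takes a single hash, however hard the search was."""
    return sha256d(header + nonce.to_bytes(8, "little")).startswith(target_prefix)

nonce = find_nonce(b"toy-header", b"\x00")  # roughly 256 attempts on average
assert verify(b"toy-header", nonce, b"\x00")
```

Whatever the search cost, a validating node only pays the single-hash verification cost, which is why tens of thousands of verifications per second per core is far more than any node needs.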
hero member
Activity: 756
Merit: 500
I would love to volunteer to be a beta tester.  Cheers!
legendary
Activity: 3878
Merit: 1193
(If there was nothing CPUs do best, why would we be using them?)
Sure, CPUs do all sorts of things better than GPUs. CPUs are great at managing interrupts from multiple devices like USB, PCI, etc. CPUs are great at managing memory spaces, protecting system resources from other threads of execution. And so on. None of those have anything to do with hashing algorithms that make the blockchain secure.

The beauty of the Bitcoin hashing system is its asymmetric nature combined with robust security: an incredible amount of computing power to find a hash, but trivial effort to verify it. I suspect that this new SC "design" is going to mess that up.
sr. member
Activity: 252
Merit: 251
When I worked at Silicon Graphics I saw several interesting algorithms implemented using OpenGL operations reading and writing to texture memory and/or the accumulation buffer and/or the framebuffer. That was before OpenCL and GPU programming languages, but the experience gave me a lot of respect for the ability of good programmers to look at problems sideways and come up with ... interesting ... solutions.

It's a bit different when you design something that goes against good GPU programming practices. The latency of video cards, for instance, is relatively large compared to a CPU's.

Quote
2.4 CPU Has Larger Cache and Lower Latency than GPGPU

The T7500 has 32K L1 and 256K L2 per compute unit and 4M unified L3, while the C2050 has 16K L1 per compute unit and 768K unified L2, which means the CPU is more lenient about spatial locality than the GPGPU and also has lower memory access latency.
However, the GPGPU is optimized for high throughput by hiding the latency of memory accesses and other instructions through fast context switching to ready threads in the scheduling queue. This assumes, in addition to a large number of in-flight threads, that the number of arithmetic operations per memory operation is also high. (Imagine performing only one arithmetic operation per memory load; context switching doesn't make much sense.)

Read more here

I'm surprised that, as a seasoned developer, you are unaware of such things. If you want to take an existing algorithm and optimize it for GPU, that's a different story; most things aren't designed to execute 100% in linear order with low latency, so you can break the problem into many parts. Not so with SolidCoin 2.0.

Like JoelKatz said, if GPUs could do everything as well as a CPU can, we would have no reason for a CPU and we'd be running OSes on a GPU.
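The latency point can be made concrete with a toy sketch (an illustration of the general idea, not SolidCoin 2.0's actual algorithm). Every round below consumes the previous round's output, so a device with thousands of high-latency hash units gains nothing within a single chain; per-operation latency, where CPUs win, bounds the throughput:

```python
import hashlib

def serial_chain(seed: bytes, rounds: int) -> bytes:
    """A strictly sequential dependency chain: round i cannot start
    until round i-1 has finished, so the work cannot be spread
    across parallel execution units."""
    state = seed
    for _ in range(rounds):
        state = hashlib.sha256(state).digest()
    return state
```

A GPU can still run many independent chains at once, which is why a design like this also needs per-chain resource pressure (memory, cache) to blunt that advantage.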
legendary
Activity: 1386
Merit: 1004

No. GPUs are faster at parallelizing things, but if I told you to do a long, state-dependent task (modelling weather, for example) CPUs would excel.

There seems to be an ongoing misunderstanding of what is possible with GPU computing.  Weather modeling is a GREAT GPU application.

http://www.vizworld.com/2010/03/weather-modeling-80x-faster-gpu/

Not everything does well on a GPU, but tasks that can be broken apart and rely on a small data source are usually GPU candidates. Even large data sets are fine if all of the processors can look at the same data. With almost any sort of mining you are working with a small data set, and all of the work is done on the same data for a while. I fail to see what can be done to alter mining that will make it WORSE on a GPU. It may not accelerate as well, but once someone smart does the work, it will accelerate.

All of this work to try to de-GPU the mining process actually makes the end product less secure by making botnets the new problem. Even if it works, you are trading a known issue for a worse, unknown one.
legendary
Activity: 1246
Merit: 1077
(If there was nothing CPUs do best, why would we be using them?)

Well, CPUs are easy-to-program general purpose hardware that can do lots of things (and several things at the same time, in these days of multicore CPUs) pretty darn fast.

GPUs are hard-to-program more-specialized hardware. These days they can do pretty much any raw calculation a CPU can do, faster-- it just takes a lot more effort on the programmer's part to figure out how. That extra effort is only worthwhile for the most performance-critical code.

When I worked at Silicon Graphics I saw several interesting algorithms implemented using OpenGL operations reading and writing to texture memory and/or the accumulation buffer and/or the framebuffer. That was before OpenCL and GPU programming languages, but the experience gave me a lot of respect for the ability of good programmers to look at problems sideways and come up with ... interesting ... solutions.

No. GPUs are faster at parallelizing things, but if I told you to do a long, state-dependent task (modelling weather, for example) CPUs would excel.
legendary
Activity: 1652
Merit: 2301
Chief Scientist
(If there was nothing CPUs do best, why would we be using them?)

Well, CPUs are easy-to-program general purpose hardware that can do lots of things (and several things at the same time, in these days of multicore CPUs) pretty darn fast.

GPUs are hard-to-program more-specialized hardware. These days they can do pretty much any raw calculation a CPU can do, faster-- it just takes a lot more effort on the programmer's part to figure out how. That extra effort is only worthwhile for the most performance-critical code.

When I worked at Silicon Graphics I saw several interesting algorithms implemented using OpenGL operations reading and writing to texture memory and/or the accumulation buffer and/or the framebuffer. That was before OpenCL and GPU programming languages, but the experience gave me a lot of respect for the ability of good programmers to look at problems sideways and come up with ... interesting ... solutions.
legendary
Activity: 1596
Merit: 1012
Democracy is vulnerable to a 51% attack.
If they had an FPGA already it wouldn't be a waste of money and 100 megahash would be god-like in the land of the CPUs.
An FPGA couldn't run a memory-hard algorithm at those kinds of speeds. If this is done right, you carefully design the algorithm so that a CPU is the ideal platform to implement it on. That is, you specifically design it around the things that CPUs do best. (If there was nothing CPUs do best, why would we be using them?)
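To make "memory-hard" concrete, here is a toy Python sketch loosely in the spirit of scrypt. It is illustrative only: the buffer size and mixing steps are made up, and no real coin uses this exact construction. Sequential fill plus data-dependent reads keep a large working set live, which favors hardware with big caches and cheap RAM access per hashing unit, i.e. CPUs:

```python
import hashlib

def memory_hard_hash(password: bytes, mem_kib: int = 1024) -> bytes:
    """Toy scrypt-like construction: fill a large buffer sequentially,
    then revisit it in a data-dependent order, so the whole buffer
    must stay resident for the duration of the hash."""
    words = mem_kib * 32  # 32-byte digests per KiB
    state = hashlib.sha256(password).digest()
    buf = []
    for _ in range(words):              # phase 1: sequential fill
        state = hashlib.sha256(state).digest()
        buf.append(state)
    for _ in range(words):              # phase 2: unpredictable reads
        idx = int.from_bytes(state[:4], "little") % words
        state = hashlib.sha256(state + buf[idx]).digest()
    return state
```

An FPGA or GPU replicating thousands of these pipelines would need the full buffer per pipeline, which is exactly the resource those devices are short on.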
legendary
Activity: 1246
Merit: 1077
I predict someone will use FPGAs to mine SolidCoin 2.0 and get faster-than-CPU hashing.
Doubt it; no one would waste that much money on a short-term chain.

If they had an FPGA already it wouldn't be a waste of money and 100 megahash would be god-like in the land of the CPUs.
My 2-core CPUs ran at 17 kH/s at peak.
hero member
Activity: 756
Merit: 500
I predict someone will use FPGAs to mine SolidCoin 2.0 and get faster-than-CPU hashing.
Doubt it; no one would waste that much money on a short-term chain.

If they had an FPGA already it wouldn't be a waste of money and 100 megahash would be god-like in the land of the CPUs.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
I predict someone will use FPGAs to mine SolidCoin 2.0 and get faster-than-CPU hashing.
Doubt it; no one would waste that much money on a short-term chain.
hero member
Activity: 756
Merit: 500
Wonder when we can start the new SolidCoin mining on CPUs? Would love to make full use of my excess inventories :)
hero member
Activity: 756
Merit: 500
I predict someone will use FPGAs to mine SolidCoin 2.0 and get faster-than-CPU hashing.
legendary
Activity: 1596
Merit: 1012
Democracy is vulnerable to a 51% attack.
I predict it'll take... mmm... 3 weeks after the source code is released for the first faster-on-a-GPU SolidCoin 2.0 closed-source miner to come out. 8 weeks until there's an open-source one available.

But my predictions are often wrong.
There's a good chance you're right. While it's trivial in theory, it's quite difficult in practice. It's in many ways analogous to coming up with a secure encryption algorithm, but more difficult.
newbie
Activity: 40
Merit: 0
Imo,

if people are stupid enough to go mine ShittyCoins again, they deserve to be scammed ...
full member
Activity: 154
Merit: 100
Hmmm

I'm kinda curious (honestly) if this could run SC 2.0

A guy made a working CPU in Minecraft. Another video of a 16-bit ALU is out there too.

http://www.youtube.com/watch?v=7sNge0Ywz-M
legendary
Activity: 2128
Merit: 1073
I was hoping that someone would critique (or at least criticize) my suggestions. In any case, I'm going to try to poke holes in my own arguments:

1) Find out how OpenCL deals with recursion manually converted to iteration with continuations and an explicit stack.
2) I recall that during the Pentium processor recall, somebody posted a bit-by-bit accurate implementation of Intel floating point in Mathematica, complete with a flag to simulate the Pentium flaw. I'll see if I can port that to OpenCL.

I need to learn a bit of OpenCL and experience its strengths and limitations first hand.
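As a sketch of point (1), here is the manual recursion-to-iteration transformation applied to the Ackermann function (the candidate suggested elsewhere in this thread), in Python for readability. The explicit stack plays the role the hardware call stack plays in the recursive version, which is the shape a kernel needs on targets such as early OpenCL that forbid recursion:

```python
def ackermann_iter(m: int, n: int) -> int:
    """Ackermann's function with the recursion unwound onto an
    explicit stack of pending `m` values instead of nested calls."""
    stack = [m]
    while stack:
        m = stack.pop()
        if m == 0:
            n += 1                  # A(0, n) = n + 1
        elif n == 0:
            stack.append(m - 1)     # A(m, 0) = A(m - 1, 1)
            n = 1
        else:
            stack.append(m - 1)     # A(m, n) = A(m - 1, A(m, n - 1))
            stack.append(m)
            n -= 1
    return n
```

Note that the transformation trades call depth for stack memory: the explicit stack grows just as fast as the call stack would, so the function stays hostile to hardware with small per-thread scratch space.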
legendary
Activity: 2128
Merit: 1073
Hi again, SolidCoin developers!

After dinner I came up with two more ideas:

1) Instead of evaluating a total recursive function over a commutative ring of integers, you could try a simpler thing: require evaluating a specific value of a primitive recursive function over the field of reals whose attractor has fractional dimension. A good starting example:

http://en.wikipedia.org/wiki/Logistic_map

Just pick a value of r in the chaotic region.

Implement the reals as long double, which is 80-bit (10 bytes) on Intel/AMD CPUs. Not all C++ compilers really support long double, but the logistic map function is so trivial that you can include inline assembly, which will be on the order of 10 lines.
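The idea fits in a few lines. The Python sketch below uses ordinary 64-bit doubles rather than the 80-bit long double the post calls for, so it only illustrates the chaotic sensitivity, not the bit-exact cross-platform behavior; r = 3.99 is one arbitrary choice inside the chaotic region:

```python
def logistic_orbit(x0: float, r: float = 3.99, steps: int = 1000) -> float:
    """Iterate the logistic map x -> r * x * (1 - x). For r in the
    chaotic region, orbits from nearby starting points diverge
    exponentially, so every last bit of precision affects the result."""
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

# Two orbits starting 1e-12 apart end up completely decorrelated:
a = logistic_orbit(0.4)
b = logistic_orbit(0.4 + 1e-12)
assert a != b
```

That sensitivity is precisely why the arithmetic would have to be bit-for-bit identical across every miner and verifier, which is what makes the 80-bit requirement both the point and the hazard of this scheme.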

2) This is a variant of the above that embraces the enemy instead of fighting it: implement (1) with reals as cl_double, which is supported in OpenCL by NVidia cards but not by AMD cards. Then short AMD stock and go long NVDA for additional gains.

Again, good luck!
legendary
Activity: 2128
Merit: 1073
Hi SolidCoin developers!

Take the Gavin Andresen challenge.

Your weapon against OpenCL will be recursion. Implement something along the lines of JoelKatz's suggestions, where MESS() is a highly recursive function. I don't have my computability theory textbooks anywhere near, but I would suggest starting your search for an appropriate candidate with:

http://en.wikipedia.org/wiki/Ackermann_function

In the worst case the OpenCL vendors will come up with greatly improved implementations.

Good luck!
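For reference, a direct (deliberately naive) Python rendering of that suggestion. The value the function returns is beside the point; what matters is that its call depth explodes even for tiny arguments, which is hostile to hardware without a real call stack:

```python
import sys

sys.setrecursionlimit(100_000)  # the call depth grows quickly with m and n

def ackermann(m: int, n: int) -> int:
    """Naive recursive Ackermann function."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))
```

Even ackermann(3, 3) builds a deep tree of nested calls, and ackermann(4, 2) is hopeless on any hardware, so a proof-of-work would have to pin the arguments to a narrow, verifiable range.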