Topic: Gateless Gate Sharp 1.3.8: 30Mh/s (Ethash) on RX 480! - page 151. (Read 214431 times)

hero member
Activity: 588
Merit: 520
His changes are opensource. You can compile and check.


https://github.com/zawawawa/gatelessgate/network



Don't have AMD cards so...
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
His changes are opensource. You can compile and check.


https://github.com/zawawawa/gatelessgate/network

hero member
Activity: 588
Merit: 520
GG is finally running faster with the parallelized Round 0.
I fused Round 0 with Rounds 7 and 8 to alleviate cache contamination and to improve the cache hit ratio for the next Round 1. I could even merge it with the solution-searching kernel for better results.
Good stuff.

How much speed did you gain that way?
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
GG is finally running faster with the parallelized Round 0.
I fused Round 0 with Rounds 7 and 8 to alleviate cache contamination and to improve the cache hit ratio for the next Round 1. I could even merge it with the solution-searching kernel for better results.
Good stuff.

So how much faster?
sr. member
Activity: 728
Merit: 304
Miner Developer
GG is finally running faster with the parallelized Round 0.
I fused Round 0 with Rounds 7 and 8 to alleviate cache contamination and to improve the cache hit ratio for the next Round 1. I could even merge it with the solution-searching kernel for better results.
Good stuff.
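For anyone curious what the fusing buys here: the cheap round-0 work gets divided across the workgroups the round-7/8 kernel already launches, so it rides along without its own dispatch or cache footprint of its own launch. A host-side sketch of one even split (NUM_GROUPS and ROUND0_WORK are made-up sizes for illustration, not GG's actual parameters):

```c
#include <stddef.h>

/* Hypothetical sizes, for illustration only. */
#define NUM_GROUPS   64   /* workgroups launched for the round-7/8 kernel */
#define ROUND0_WORK  4096 /* total round-0 hash slots to generate */

/* Split round 0's work evenly across the workgroups of a fused
 * kernel: group g handles slots [start, start + count).  The first
 * ROUND0_WORK % NUM_GROUPS groups take one extra slot each. */
void round0_slice(size_t group, size_t *start, size_t *count)
{
    size_t per_group = ROUND0_WORK / NUM_GROUPS;
    size_t rem       = ROUND0_WORK % NUM_GROUPS;

    *count = per_group + (group < rem ? 1 : 0);
    *start = group * per_group + (group < rem ? group : rem);
}
```

Inside the fused kernel each workgroup would then run its normal round-7 pass and, in the gaps, generate its assigned slice of round-0 hashes.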
sr. member
Activity: 728
Merit: 304
Miner Developer
I think I figured out why the performance of the new code is not as good as expected.
It is because the L2 cache gets contaminated by Round 0 in the background.
Very tricky, huh.
sr. member
Activity: 728
Merit: 304
Miner Developer
You need to profile each round and check the total time used. I believe rounds 7 and 8 are the fastest rounds, so you need to add fewer blake2s instructions there.

Yeah, I was measuring the duration of each Equihash round with CodeXL, but the results are pretty unpredictable. This whole dual-mining business is actually very complicated, as code inserted into the background has unintended effects and interferes with the foreground threads. I would say Claymore deserves respect just for the fact that he was able to pull it off. I am trying various ways to parallelize two Equihash runs for better results. We will see.
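CodeXL aside, per-round timings can also be pulled straight from OpenCL profiling events: create the queue with CL_QUEUE_PROFILING_ENABLE, then read CL_PROFILING_COMMAND_START / CL_PROFILING_COMMAND_END (device timestamps in nanoseconds) via clGetEventProfilingInfo for each round's kernel. A minimal sketch of the bookkeeping, averaging many launches to tame the run-to-run noise (the function names are mine, not GG's):

```c
#include <stddef.h>

/* Turn one pair of CL_PROFILING_COMMAND_START/END nanosecond
 * timestamps into a kernel duration in milliseconds. */
double kernel_ms(unsigned long long start_ns, unsigned long long end_ns)
{
    return (double)(end_ns - start_ns) / 1e6;
}

/* A single launch is noisy; averaging over n launches of the same
 * round gives a far more stable per-round figure. */
double mean_ms(const unsigned long long start[],
               const unsigned long long end[], size_t n)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i++)
        total += (double)(end[i] - start[i]) / 1e6;
    return n ? total / (double)n : 0.0;
}
```

The timestamps themselves would come from clGetEventProfilingInfo on the event returned by each clEnqueueNDRangeKernel call.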
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
You need to profile each round and check the total time used. I believe rounds 7 and 8 are the fastest rounds, so you need to add fewer blake2s instructions there.
sr. member
Activity: 728
Merit: 304
Miner Developer
My current algo splits Round 0 into four chunks and assigns each of them to one of the other Equihash rounds.
These chunks are then processed in the background while slot data is loaded into the LDS.
The VALU is pretty much doing nothing during those global memory reads, so there shouldn't be any performance penalty in theory.
However, for some unknown reasons, Rounds 3 and 4 become much slower than Rounds 1 and 2 with this optimization.
This result is rather counter-intuitive as the former are supposed to be shorter than the latter.
What I'm going to do next is to assign the four chunks to Rounds 1, 2, 7, and 8.
Just like what they say: "In theory, theory and practice are the same. In practice, they are not."
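The chunk-to-round assignment described above is just a small lookup. A sketch of the new plan, with rounds 1, 2, 7, and 8 as the hosts (the function is illustrative, not GG's actual code):

```c
/* Which quarter of Round 0's work (0..3) runs in the background of a
 * given Equihash round, or -1 if that round carries none.  Rounds
 * 1, 2, 7, and 8 are the hosts per the plan above. */
int round0_chunk_for(int round)
{
    switch (round) {
    case 1: return 0;
    case 2: return 1;
    case 7: return 2;
    case 8: return 3;
    default: return -1; /* rounds 3..6 run unmodified */
    }
}
```

Keeping the middle rounds (3-6) free matches the observation that they slowed down disproportionately when they carried background chunks.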
sr. member
Activity: 728
Merit: 304
Miner Developer
The code is working now. I just need to tweak the parameters, I think...
sr. member
Activity: 728
Merit: 304
Miner Developer
There was a problem with storing slot data for Round 0, but I should be able to fix that pretty easily by adding an extra buffer and row counters. It's getting pretty late, so I will save the fun for tomorrow. I cannot wait to enjoy the full speed of ZEC mining with my own miner...
full member
Activity: 190
Merit: 100
It's in zawawa's signature, just before your post.
hero member
Activity: 803
Merit: 501
Can someone post the URL for the latest build? The thread is pretty big. I'm looking for Windows x64.
sr. member
Activity: 728
Merit: 304
Miner Developer
It seems like I need to weave Round 0 into Rounds 1 through 8 explicitly with AMD drivers.
I thought this was an easy task, but, alas, it wasn't...

Did you try:

clEnqueueNDRangeKernel(q1, round0, ...);  // round 0 of batch n on queue 1
clFinish(q1);                             // wait so its output is visible
clEnqueueNDRangeKernel(q2, round0, ...);  // round 0 of batch n+1 on queue 2
clEnqueueNDRangeKernel(q1, round1, ...);  // round 1 of batch n on queue 1
...


I essentially did the same thing with two threads, two queues, and a mutex, but that didn't work.
The original idea seems to work, though. I'm almost done with my implementation.
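One plausible reading of that two-queue snippet: queue 2 prefetches the next batch's round 0 while queue 1 works through rounds 1-8 of the current batch. A tiny host-side emulation of that launch order (no OpenCL needed; `schedule` and the log format are made up for illustration):

```c
#include <stdio.h>

/* Emulate the two-queue pipeline: while queue 1 runs rounds 1..8 of
 * batch b, queue 2 has already launched round 0 of batch b+1.
 * Records each launch as "b<batch>:r<round>" and returns the count. */
int schedule(char log[][16], int batches)
{
    int n = 0;
    for (int b = 0; b < batches; b++) {
        if (b == 0)
            sprintf(log[n++], "b%d:r0", b);     /* very first round 0 */
        if (b + 1 < batches)
            sprintf(log[n++], "b%d:r0", b + 1); /* prefetch next batch */
        for (int r = 1; r <= 8; r++)
            sprintf(log[n++], "b%d:r%d", b, r); /* this batch, rounds 1..8 */
    }
    return n;
}
```

Walking the log makes it easy to check that the next batch's round 0 really lands before the current batch's round 1, which is the whole point of the second queue.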
sr. member
Activity: 588
Merit: 251
It seems like I need to weave Round 0 into Rounds 1 through 8 explicitly with AMD drivers.
I thought this was an easy task, but, alas, it wasn't...

Did you try:

clEnqueueNDRangeKernel(q1, round0, ...);  // round 0 of batch n on queue 1
clFinish(q1);                             // wait so its output is visible
clEnqueueNDRangeKernel(q2, round0, ...);  // round 0 of batch n+1 on queue 2
clEnqueueNDRangeKernel(q1, round1, ...);  // round 1 of batch n on queue 1
...
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
But you are probably lazy.

Try to run round0 at 1/5 of the intensity on all threads < 32 at the same time as round1. You also need to increase the total threads in the kernel by 32 and fix the indexing. You might need to remove 75% of the blake2s instructions as well to emulate the effect (but keep the reads and writes to memory in round0).
 


How much slower is round1?
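sp_'s layout can be sketched as pure index math: pad each workgroup with 32 extra lanes, route lanes below 32 to round0, and shift the remaining lanes back down so round1 still sees dense 0-based indices. A minimal sketch (EXTRA_LANES and the names are illustrative, not from GG):

```c
/* Lanes 0..31 of each enlarged workgroup do round-0 work in the
 * background; lanes 32 and up are remapped to the dense indices the
 * original round-1 code expects. */
#define EXTRA_LANES 32

typedef enum { DO_ROUND0, DO_ROUND1 } task_t;

task_t classify(int local_id, int *work_index)
{
    if (local_id < EXTRA_LANES) {
        *work_index = local_id;           /* round-0 lane 0..31 */
        return DO_ROUND0;
    }
    *work_index = local_id - EXTRA_LANES; /* round-1 index, re-densified */
    return DO_ROUND1;
}
```

In a real kernel the branch would fall on a wavefront boundary (32 or 64 lanes), so the two tasks would not cause divergence within a wavefront.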
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
The only way to get two kernels to run in parallel is if each kernel uses only 50% of the GPU, so in most cases it isn't practical.

Round0 is only using 20% of the time, so you can run the round0 thread at 25% speed and it will still complete its task before the other thread. Rounds 1-8 spend most of their time waiting for memory, so you can reduce the GPU load there as well.

I just tried it with a software split (odd blockthreads doing round1, even blocks doing round0) and the results are worse: lower speed. A bit more could be tweaked with proper parameters etc., but not much, because, like I said, with the high amount of shared memory needed only a few blockthreads run on an SM, which lowers occupancy. And let's not forget that even round0 does the slowest thing of all: random memory writes, which are hard to coalesce, so there are many memory transactions.


Round1 is not slow enough, so if you divide the load 50/50, round1 will run at round0's speed (slower).



Try with 2 buffers and 2 streams; let round0 run with very few threads (emulating 25% GPU load). I've already outlined the pseudocode.
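The 25% figure follows from simple arithmetic: if round 0 takes fraction f0 of a full-speed pass, throttling it to speed s stretches it to f0/s, and it stays hidden behind the other rounds as long as f0/s <= 1 - f0. With f0 = 0.20, any s >= 0.25 satisfies that. As a sketch (function names are mine):

```c
/* Slowing round 0 (fraction f0 of a full-speed pass) to speed s
 * stretches it to f0 / s of a pass. */
double stretched_round0(double f0, double s)
{
    return f0 / s;
}

/* Round 0 stays hidden if its stretched time still fits inside the
 * time rounds 1..8 take anyway (1 - f0). */
int round0_hides(double f0, double s)
{
    return stretched_round0(f0, s) <= 1.0 - f0;
}
```

For f0 = 0.20 and s = 0.25 the stretched time is 0.8 of a pass, exactly the window rounds 1-8 leave open, which is where the "25% speed" claim comes from.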


hero member
Activity: 588
Merit: 520
The only way to get two kernels to run in parallel is if each kernel uses only 50% of the GPU, so in most cases it isn't practical.

Round0 is only using 20% of the time, so you can run the round0 thread at 25% speed and it will still complete its task before the other thread. Rounds 1-8 spend most of their time waiting for memory, so you can reduce the GPU load there as well.

I just tried it with a software split (odd blockthreads doing round1, even blocks doing round0) and the results are worse: lower speed. A bit more could be tweaked with proper parameters etc., but not much, because, like I said, with the high amount of shared memory needed only a few blockthreads run on an SM, which lowers occupancy. And let's not forget that even round0 does the slowest thing of all: random memory writes, which are hard to coalesce, so there are many memory transactions.
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
lol why is that discussion happening on an AMD thread? lol
parallelizing too?

Because dual mining works on both NVIDIA and AMD hardware. Claymore has already done it...
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
The only way to get two kernels to run in parallel is if each kernel uses only 50% of the GPU, so in most cases it isn't practical.

Round0 is only using 20% of the time, so you can run the round0 thread at 25% speed and it will still complete its task before the other thread. Rounds 1-8 spend most of their time waiting for memory, so you can reduce the GPU load there as well.