
Topic: CCminer(SP-MOD) Modded NVIDIA Maxwell / Pascal kernels. - page 776. (Read 2347664 times)

legendary
Activity: 1470
Merit: 1114
I was hoping to get a better response to my technical trolls but all I got was more bluster.
I was trying to find out if our skills were complementary. I am a complete noob when it comes
to cuda so I was hoping SP could implement some of my ideas with his knowledge of cuda.
When I provided a demonstration of my skills he responded with "silly you, that was cpu verification
code, and why don't you do better", without ever considering the technical merit or other
applications for the changes I made.
He's more interested in selling what he has over and over again rather than providing anything new
that sells itself. I'm afraid SP has turned into a telemarketer.
Assembler for NVIDIA Maxwell architecture
https://github.com/NervanaSystems/maxas
Thanks, that will be useful when I learn how to use it. I'm looking for docs that describe the cuda
processor architecture in detail so I can determine things like how many loads to queue up to
fill the pipe, how many execution units, user cache management, etc. That kind of information
is necessary to maximize instruction throughput at the processor level. Do you know of any available
docs with this kind of info?

There is not much info available, but if you disassemble compiled code you will see that the Maxwell is superscalar with 2 pipes: 2 instructions per cycle. It's able to execute instructions while writing to memory if the code is in the instruction cache. And to avoid ALU stalls you need to reorder your instructions carefully. There are vector instructions that can write bigger chunks of memory with fewer instructions... etc etc. The compiler is usually doing a good job here. Little to gain... Ask DJM34 for more info. He is good at the random stuff...

Thanks again.

Have you tried interleaving memory accesses with arith instructions so they can be issued on the same clock?
When copying mem, do you issue the first load and the first store immediately after it? The first load fills the cache
line and the first store waits for the first bytes to become available. Then you can queue up enough loads to fill
the pipe and do other things while waiting for mem. Multi-buffering is a given, being careful not to overuse regs.

If you're doing a load, process, and store it's even better because you can have one instruction slot focused on memory
while the other can do the processing.

These are things I'd like to try but haven't got the time. Although I've done similar in the past, there were no performance
tests that could quantify the effect, good or bad.

If you think this has merit give it a shot. Like I said if it works just keep it open because I could still implement it myself.
The hotter the code segments you choose the bigger the result should be. Some of the assembly routines would be logical
targets.

GDS (global data share), LDS (local data share), and work-item shuffles all require a little waiting period before they complete. So, say I'm using ds_swizzle_b32 (work-item shuffle) like I had fun with in my 4-way Echo-512... On AMD GCN, you can do some shit like so:

Code:
# These shuffle in the correct state dwords from other work-items that are adjacent.
# This is done in place of BigShiftRows, but before BigMixColumns.
# So, my uint4 variables (in OpenCL notation) named b and c are now loaded properly without the need for shifting rows.
ds_swizzle_b32 v36, v80 offset:0x8039 # b.z
ds_swizzle_b32 v37, v81 offset:0x8039 # b.w
ds_swizzle_b32 v38, v78 offset:0x8039 # b.x
ds_swizzle_b32 v39, v79 offset:0x8039 # b.y
ds_swizzle_b32 v15, v84 offset:0x804E # c.z
ds_swizzle_b32 v16, v85 offset:0x804E # c.w
ds_swizzle_b32 v33, v82 offset:0x804E # c.x
ds_swizzle_b32 v34, v83 offset:0x804E # c.y

# Each and every one of these takes time, however - and each one increments a little counter.
# What I can do is this - since the first row in the state is not shifted, the a variable is already ready
# It's in registers and ready to be used.

# The first thing I do in the OpenCL after loading up the proper state values - in BigMixColumns - is a ^ b.
# So, I can do something like this:

s_waitcnt lgkmcnt(4)

# What this does is, it waits on the pending operations until there are four left.
# They're queued in the order the instructions were issued - so the b uint4 should now be loaded
# Note, however, that the c uint4 is NOT guaranteed to have been loaded, and cannot be relied on (yet.)
# Now, I can process the XOR while the swizzle operation on the c uint4 is working!

v_xor_b32 v42, v15, v36    # v42 = a.z ^ b.z
v_xor_b32 v43, v16, v37    # v43 = a.w ^ b.w
v_xor_b32 v38, v74, v38    # v38 = a.x ^ b.x
v_xor_b32 v39, v75, v39    # v39 = a.y ^ b.y

# And then we can put in an instruction to wait for the c uint4 before we continue...
s_waitcnt lgkmcnt(0)

In case you're wondering, I load the d uint4 later in the code. Also, if you *really* wanna try your damndest to maximize the time spent executing compute shit during loads, you could do this (although you've probably figured it out by now):

Code:
ds_swizzle_b32 v36, v80 offset:0x8039 # b.z
ds_swizzle_b32 v37, v81 offset:0x8039 # b.w
ds_swizzle_b32 v38, v78 offset:0x8039 # b.x
ds_swizzle_b32 v39, v79 offset:0x8039 # b.y
ds_swizzle_b32 v15, v84 offset:0x804E # c.z
ds_swizzle_b32 v16, v85 offset:0x804E # c.w
ds_swizzle_b32 v33, v82 offset:0x804E # c.x
ds_swizzle_b32 v34, v83 offset:0x804E # c.y

s_waitcnt lgkmcnt(7)
v_xor_b32 v42, v15, v36    # v42 = a.z ^ b.z
s_waitcnt lgkmcnt(6)
v_xor_b32 v43, v16, v37    # v43 = a.w ^ b.w
s_waitcnt lgkmcnt(5)
v_xor_b32 v38, v74, v38    # v38 = a.x ^ b.x
s_waitcnt lgkmcnt(4)
v_xor_b32 v39, v75, v39    # v39 = a.y ^ b.y

# You get the idea...

I think I follow even though that syntax is completely foreign to me. I think what you did is what
I was talking about. But I would go one step further. It may not apply, because I don't understand
the wait instructions, unless there are synchronization issues.

In addition to what you did I would put the first xor on b immediately after the first load. I know
it's stalled waiting for data but I want its dependent instruction already queued for when the data
becomes available.

Secondly, that first load will fill the cache line, so there is no need to queue up the next load instruction until
the first load completes. Subsequent loads will finish immediately because they hit the cache.

What I would not do is have a string of identical instructions because they all compete for the
same execution unit and can only be issued one per clock. I would interleave the swizzles and xors
so they can both be issued on the same clock, assuming all dependencies are met.


With comments:

ds_swizzle_b32 v36, v80 offset:0x8039    # b.z               // start filling the cache with b
v_xor_b32 v42, v15, v36    # v42 = a.z ^ b.z                 // queue first xor for when b is ready
ds_swizzle_b32 v37, v81 offset:0x8039    # b.w               // this will complete one clock after the previous swizzle so...
v_xor_b32 v43, v16, v37    # v43 = a.w ^ b.w                 // make sure we're ready for it

I think you get it. When all the B vars are loaded you can queue the C vars while still processing and
saving the first batch.

I would even go one step further, to the loading of a, if possible.

I would start with swizzle a, immediately followed by swizzle b, then the first xor.
There will be a lot of stalling waiting for memory here, so if there are any other trivial tasks,
do them next.

Loading a & b in parallel may seem odd, but once both are in the cache you're flying. Then you can
mix saving processed data and loading new data, giving priority to loads to keep the GPU hot, and
you can stick in the first swizzle c early to get the data ready.
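For anyone wanting to try the same trick on the CUDA side of this thread: there's no s_waitcnt to play with in CUDA C, so about all you can do at the source level is issue the shuffles early, put the work that only needs a in between, and let ptxas and the warp scheduler cover the latency. A minimal sketch with made-up names (mix_step, src_lane) - definitely not wolf0's kernel:

Code:
// Illustrative CUDA analogue of the ordering above, not actual miner code.
__device__ void mix_step(uint4 &out, const uint4 &a, const uint4 &b, int src_lane)
{
    // Issue the lane shuffles that fetch b from the adjacent work-item up front.
    // These become SHFL instructions; the scheduler can overlap their latency
    // with any independent instructions placed between issue and first use.
    unsigned int bx = __shfl_sync(0xFFFFFFFFu, b.x, src_lane);
    unsigned int by = __shfl_sync(0xFFFFFFFFu, b.y, src_lane);
    unsigned int bz = __shfl_sync(0xFFFFFFFFu, b.z, src_lane);
    unsigned int bw = __shfl_sync(0xFFFFFFFFu, b.w, src_lane);

    // ... work that only needs a (or the c shuffles) would go here ...

    // First use of the shuffled values comes as late as possible.
    out.x = a.x ^ bx;
    out.y = a.y ^ by;
    out.z = a.z ^ bz;
    out.w = a.w ^ bw;
}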

I learned some of this stuff on a company-paid Motorola course. The instructor was a geek and our class
was pretty sharp, so we covered the material early then started having fun. At the time we were in a
performance crunch with customers demanding more capacity, so we focused on code scheduling and user
cache management. One of the more bizarre instructions was the delayed branch. It essentially means
branch AFTER the next instruction. That next instruction was often returning the rc. It took some getting
used to but it gives an idea of the level of optimization they were into at the time.

It's the same CPU that had the ability to mark a cache line valid without touching mem. It's great for
malloc because the data is initially undefined anyway. Who cares whether the garbage comes from
mem or stale cache, it's all garbage. Imagine mallocing 1k and having it cached without ever touching
the bus. They also have an instruction to preload the cache for real; that is essentially what I was
simulating above. It also had a user flush so you could flush data at any convenient time after you
no longer needed it instead of a system-initiated flush when you are stalled waiting for new data.
legendary
Activity: 1764
Merit: 1024
He has added a duplicate checker in the code.

I also added it. You need to recompile latest@git

Cryptominingblog rebuilt the latest and either it doesn't work or they compiled it wrong.

http://cryptomining-blog.com/wp-content/download/ccminer-1.5.78-git-spmod-vanillacoin.zip
legendary
Activity: 1470
Merit: 1114
I was hoping to get a better response to my technical trolls but all I got was more bluster.
I was trying to find out if our skills were complementary. I am a complete noob when it comes
to cuda so I was hoping SP could implement some of my ideas with his knowledge of cuda.
When I provided a demonstration of my skills he responded with "silly you, that was cpu verification
code, and why don't you do better", without ever considering the technical merit or other
applications for the changes I made.
He's more interested in selling what he has over and over again rather than providing anything new
that sells itself. I'm afraid SP has turned into a telemarketer.
Assembler for NVIDIA Maxwell architecture
https://github.com/NervanaSystems/maxas
Thanks, that will be useful when I learn how to use it. I'm looking for docs that describe the cuda
processor architecture in detail so I can determine things like how many loads to queue up to
fill the pipe, how many execution units, user cache management, etc. That kind of information
is necessary to maximize instruction throughput at the processor level. Do you know of any available
docs with this kind of info?

There is not much info available, but if you disassemble compiled code you will see that the Maxwell is superscalar with 2 pipes: 2 instructions per cycle. It's able to execute instructions while writing to memory if the code is in the instruction cache. And to avoid ALU stalls you need to reorder your instructions carefully. There are vector instructions that can write bigger chunks of memory with fewer instructions... etc etc. The compiler is usually doing a good job here. Little to gain... Ask DJM34 for more info. He is good at the random stuff...

Thanks again.

Have you tried interleaving memory accesses with arith instructions so they can be issued on the same clock?
When copying mem, do you issue the first load and the first store immediately after it? The first load fills the cache
line and the first store waits for the first bytes to become available. Then you can queue up enough loads to fill
the pipe and do other things while waiting for mem. Multi-buffering is a given, being careful not to overuse regs.

If you're doing a load, process, and store it's even better because you can have one instruction slot focused on memory
while the other can do the processing.

These are things I'd like to try but haven't got the time. Although I've done similar in the past, there were no performance
tests that could quantify the effect, good or bad.

If you think this has merit give it a shot. Like I said if it works just keep it open because I could still implement it myself.
The hotter the code segments you choose the bigger the result should be. Some of the assembly routines would be logical
targets.
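FWIW, here's roughly what that load/process/store overlap looks like in plain CUDA C for a toy element-wise kernel. transform() is just a placeholder, and the actual dual issue is up to the compiler and the scheduler, so treat this as a sketch of the source-level ordering rather than a guaranteed win:

Code:
// Illustrative only: double-buffer the loads in registers so the next load is
// in flight while the current element is processed and stored.
__device__ __forceinline__ unsigned int transform(unsigned int v)
{
    return v ^ (v << 1);   // stand-in for the real per-element work
}

__global__ void process(const unsigned int * __restrict__ in,
                        unsigned int * __restrict__ out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int stride = gridDim.x * blockDim.x;
    if (i >= n) return;

    int j = i;
    unsigned int cur = in[j];              // first load issued as early as possible
    while (j + stride < n)
    {
        unsigned int nxt = in[j + stride]; // queue the next load (memory slot)
        out[j] = transform(cur);           // process + store the current one (ALU slot)
        cur = nxt;
        j += stride;
    }
    out[j] = transform(cur);               // drain the last buffered element
}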
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
He has added a duplicate checker in the code.

I also added it. You need to recompile latest@git
legendary
Activity: 1764
Merit: 1024
Update: SP's miner for Vanillacoin seems to be messed up on Suprnova. Tpruvot's version works fine for Vanillacoin, no duplicate share issues.
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
I was hoping to get a better response to my technical trolls but all I got was more bluster.
I was trying to find out if our skills were complementary. I am a complete noob when it comes
to cuda so I was hoping SP could implement some of my ideas with his knowledge of cuda.
When I provided a demonstration of my skills he responded with "silly you, that was cpu verification
code, and why don't you do better", without ever considering the technical merit or other
applications for the changes I made.
He's more interested in selling what he has over and over again rather than providing anything new
that sells itself. I'm afraid SP has turned into a telemarketer.
Assembler for NVIDIA Maxwell architecture
https://github.com/NervanaSystems/maxas
Thanks, that will be useful when I learn how to use it. I'm looking for docs that describe the cuda
processor architecture in detail so I can determine things like how many loads to queue up to
fill the pipe, how many execution units, user cache management, etc. That kind of information
is necessary to maximize instruction throughput at the processor level. Do you know of any available
docs with this kind of info?

There is not much info available, but if you disassemble compiled code you will see that the Maxwell is superscalar with 2 pipes: 2 instructions per cycle. It's able to execute instructions while writing to memory if the code is in the instruction cache. And to avoid ALU stalls you need to reorder your instructions carefully. There are vector instructions that can write bigger chunks of memory with fewer instructions... etc etc. The compiler is usually doing a good job here. Little to gain... Ask DJM34 for more info. He is good at the random stuff...
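To make the "vector instructions" point concrete: if each thread reads and writes a uint4 (and the pointers are 16-byte aligned), the compiler can emit one 128-bit load/store per thread instead of four 32-bit ones. A trivial sketch, illustrative only:

Code:
// One 128-bit access per thread instead of four 32-bit ones.
__global__ void copy_vec4(const uint4 * __restrict__ src, uint4 * __restrict__ dst, int n4)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4)
        dst[i] = src[i];   // typically compiles to LDG.E.128 / STG.E.128 in SASS
}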

legendary
Activity: 1764
Merit: 1024
Trying out Vanillacoin on Nova, anyone getting a lot of duplicate shares? I'm using the build of SP .78 off of Cryptominingblog as well since SP hasn't updated his releases yet.
legendary
Activity: 1470
Merit: 1114
I was hoping to get a better response to my technical trolls but all I got was more bluster.
I was trying to find out if our skills were complementary. I am a complete noob when it comes
to cuda so I was hoping SP could implement some of my ideas with his knowledge of cuda.
When I provided a demonstration of my skills he responded with "silly you, that was cpu verification
code, and why don't you do better", without ever considering the technical merit or other
applications for the changes I made.

He's more interested in selling what he has over and over again rather than providing anything new
that sells itself. I'm afraid SP has turned into a telemarketer.

Assembler for NVIDIA Maxwell architecture

https://github.com/NervanaSystems/maxas

Thanks, that will be useful when I learn how to use it. I'm looking for docs that describe the cuda
processor architecture in detail so I can determine things like how many loads to queue up to
fill the pipe, how many execution units, user cache management, etc. That kind of information
is necessary to maximize instruction throughput at the processor level. Do you know of any available
docs with this kind of info?
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
I was hoping to get a better response to my technical trolls but all I got was more bluster.
I was trying to find out if our skills were complementary. I am a complete noob when it comes
to cuda so I was hoping SP could implement some of my ideas with his knowledge of cuda.
When I provided a demonstration of my skills he responded with "silly you, that was cpu verification
code, and why don't you do better", without ever considering the technical merit or other
applications for the changes I made.

He's more interested in selling what he has over and over again rather than providing anything new
that sells itself. I'm afraid SP has turned into a telemarketer.

Assembler for NVIDIA Maxwell architecture

https://github.com/NervanaSystems/maxas
legendary
Activity: 1764
Merit: 1024
Faster but not profitable. I didn't reach 5 MHash yet.
Well, did you modify SIMD in it? And if you feel like sharing, how much was gained from SIMD alone, as a percentage of X11 speed?
Why don't you stick with AMD? Disassemble the kochur bins and check for yourself.
I am, thanks. What I'm interested in is how much there is to be gained from SIMD. While the architecture is different, many things are similar - if there's an unexpectedly massive improvement from SIMD on Nvidia GPUs, it is quite likely there is on AMD.
Also, why so defensive? I have no intention of encroaching on your turf, here - I could do more CUDA if I wanted, but for now it does not interest me. You don't need to see me as a threat.

You are no competition to me...

Then why are you afraid?

Grills grills, there are plenty of optimizations to go around! Since wolf0 is hot stuff, he should make Eth better.

Oh, by the way, sp_ - the comment about open sourcing some of my work I think is a little unfounded. For example, I semi-recently not only did the ONLY open-source implementation of a CryptoNight AMD miner, but I didn't base it on existing code infected with the GPL. This means there's now a base that's not only open, but MIT/BSD licensed to work off of for others. And on top of this, the community around the coins using the CryptoNight PoW really needed it, because the only existing AMD miner for it before mine was Claymore's, which was closed-source with a fee, and WAS Windows-only for the longest time. I even forked my own project and made a CryptoNight-Lite miner for that PoW - Claymore refused to implement it. You can find my CryptoNight miner here: https://github.com/wolf9466/wolf-xmr-miner -- and my CryptoNight-Lite miner here: https://github.com/wolf9466/wolf-aeon-miner

Unless it's like 2x faster than Claymore, it's not worth mining with. Monero hasn't been profitable for some time. Botnets consumed Cryptonote.


Quick check on Vanillacoin. I get about 2.9GH/s per 970; it doesn't appear to be more profitable than Ethereum right now, always nice to have options though.
legendary
Activity: 1470
Merit: 1114
I was hoping to get a better response to my technical trolls but all I got was more bluster.
I was trying to find out if our skills were complementary. I am a complete noob when it comes
to cuda so I was hoping SP could implement some of my ideas with his knowledge of cuda.
When I provided a demonstration of my skills he responded with "silly you, that was cpu verification
code, and why don't you do better", without ever considering the technical merit or other
applications for the changes I made.

He's more interested in selling what he has over and over again rather than providing anything new
that sells itself. I'm afraid SP has turned into a telemarketer.
legendary
Activity: 1470
Merit: 1114
joblo, does your quark optimisation work at the end? not sure I understand your conversation with sp_ fully: where does the +30% come from?

Joblo's optimization impacts CPU validation of any found shares. This is usually insignificant, but since he's also mining with all CPU cores, it did have an impact for him: his CPU mining was slowing down ccminer.

Joblo: You're invited for a beer over at #ccminer @freenode: there's friendlier dev talk there, some collaboration now and then, and certainly a lot less BS  Wink

Thanks for the invite. I plan to join #ccminer (and github, and...) when things settle down, which they are beginning to do.
I've been so busy trying to get all the algos supported and delivering the quick optimizations that I'm only now starting to think
longer term.

I'm working on a design to modularize algos that doesn't require any base code changes when adding a new algo.
But that's a big feature that requires a lot of thought. I have high standards and don't want to present a half-baked plan.
legendary
Activity: 1470
Merit: 1114
joblo, does your quark optimisation work at the end? not sure I understand your conversation with sp_ fully: where does the +30% come from?

Mostly from more efficient management of the groestl ctx. Because quark can run groestl twice per round,
it was running the init function twice every time the hash function was called, and that was called in a scanhash loop.
That's 2x the number of hash calls for something that only needs to be done once. That was a big boost, though I
don't recall exactly how much. The reduction in the number of inits also helped other algos like x11.

I also created a fast reinit function that skipped the constants. So now a full init is done once when scanhash
is called, and any subsequent reinits that are necessary are fast. That alone added another 5%.

I have another idea to factor out the full init from scanhash so the ctx will be fully initted only once, ever, before
entering the thread loop.
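For anyone curious, the shape of the change looks roughly like this, sketched with the sph API used for the CPU-side hashing. The copy-instead-of-reinit shown here is just illustrative shorthand for the fast reinit, and scanhash_sketch is a made-up skeleton, not the exact code:

Code:
#include "sph_groestl.h"

// Hypothetical scanhash skeleton, reduced to show only the ctx handling.
static sph_groestl512_context g_ctx;   // full init done once, before the loop
static int g_ctx_ready = 0;

void scanhash_sketch(const unsigned char *data, unsigned char *hash, int rounds)
{
    if (!g_ctx_ready) {
        sph_groestl512_init(&g_ctx);   // the expensive init happens only once
        g_ctx_ready = 1;
    }
    for (int i = 0; i < rounds; i++) {
        sph_groestl512_context ctx = g_ctx;  // "fast reinit": restore the pristine
                                             // context instead of calling init again
        sph_groestl512(&ctx, data, 80);
        sph_groestl512_close(&ctx, hash);
    }
}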
legendary
Activity: 2716
Merit: 1094
Black Belt Developer
Wolf0, I've found the issue. Fiji doesn't want to run with libopencl from APP SDK 3.0.
Fiji now works but it's slower than Hawaii; guess HBM is to blame.
legendary
Activity: 2716
Merit: 1094
Black Belt Developer
For example, I semi-recently not only did the ONLY open-source implementation of a CryptoNight AMD miner, but I didn't base it on existing code infected with the GPL. This means there's now a base that's not only open, but MIT/BSD licensed to work off of for others.
Kudos to you!

Thank you. Please, if you wish to, use that base publicly or privately for your own miner work - or take bits and pieces from it. A nice guy caught an oversight that I made - I should have used a pthread condition variable and didn't - so now the miner uses pretty much 0% CPU. I'm rather happy and proud that the rest of it was so nice and light.

I'll have a look as well, thanks for sharing and shame on me for not noticing it earlier Smiley

Found this runtime issue:

[15:54:44] Error -6 when calling clCreateCommandQueueWithProperties.
[15:54:44] Error -36 when calling clEnqueueWriteBuffer to fill input buffer.

EDIT: looks like it doesn't like the Fiji card but hashes fine on the Hawaii.

Works on mine. What's your settings file look like?

{
        "Algorithms":
        [
                {
                        "name": "CryptoNight",
                        "devices":
                        [
                                {
                                        "index": 0,
                                        "threads": 1,
                                        "rawintensity": 1336,
                                        "worksize": 16
                                },
                                {
                                        "index": 1,
                                        "threads": 1,
                                        "rawintensity": 1336,
                                        "worksize": 16
                                }
                        ],
                        "pools":
                        [
                                {
                                        "url": "XXXXXXXXXX",
                                        "user": "XXXXXXXXXXXX",
                                        "pass": "x"
                                }
                        ]
                }
        ]
}
legendary
Activity: 2716
Merit: 1094
Black Belt Developer
For example, I semi-recently not only did the ONLY open-source implementation of a CryptoNight AMD miner, but I didn't base it on existing code infected with the GPL. This means there's now a base that's not only open, but MIT/BSD licensed to work off of for others.
Kudos to you!

Thank you. Please, if you wish to, use that base publicly or privately for your own miner work - or take bits and pieces from it. A nice guy caught an oversight that I made - I should have used a pthread condition variable and didn't - so now the miner uses pretty much 0% CPU. I'm rather happy and proud that the rest of it was so nice and light.

I'll have a look as well, thanks for sharing and shame on me for not noticing it earlier Smiley

Found this runtime issue:

[15:54:44] Error -6 when calling clCreateCommandQueueWithProperties.
[15:54:44] Error -36 when calling clEnqueueWriteBuffer to fill input buffer.

EDIT: looks like it doesn't like the Fiji card but hashes fine on the Hawaii.
hero member
Activity: 672
Merit: 500
For example, I semi-recently not only did the ONLY open-source implementation of a CryptoNight AMD miner, but I didn't base it on existing code infected with the GPL. This means there's now a base that's not only open, but MIT/BSD licensed to work off of for others.
Kudos to you!
legendary
Activity: 1400
Merit: 1050
Then why are you afraid?

I am not afraid, but I don't feel the need to help you. Perhaps if you opensource some of your work we can talk again...

See? That's all you had to say - like I said, if you don't want to answer, I'm perfectly okay with that. Just be honest about it.

I can opensource my 5% faster miner, but then all the donators would be angry, wouldn't they? Works on linux as well.

If you don't want to answer the question, don't answer the question - but don't dodge it.
Grin Grin
getting some popcorn  Grin

ps: for once I am not involved in the trolling  Grin

Haha, djm34, I'm not trolling - I asked a question; he didn't wish to answer. It's all good.

never said you were, I was saying that usually I am the one getting into some argument with sp  Cheesy
legendary
Activity: 1400
Merit: 1050
I can opensource my 5% faster miner, but then all the donators would be angry, wouldn't they? Works on linux as well.

If you don't want to answer the question, don't answer the question - but don't dodge it.
Grin Grin
getting some popcorn  Grin

ps: for once I am not involved in the trolling  Grin
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
Then why are you afraid?

I am not afraid, but I don't feel the need to help you. Perhaps if you opensource some of your work we can talk again...