Topic: CCminer(SP-MOD) Modded NVIDIA Maxwell / Pascal kernels. - page 775. (Read 2347664 times)

sr. member
Activity: 248
Merit: 250
Is suprnova pool PPS or per block? plz thx
all suprnova pools are prop
Proportional (Prop) - The block reward is distributed among miners in proportion to the number of shares they submitted in a round. The expected reward per share depends on the number of shares already submitted in the round.
@sp thanks for informing us..
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer


My .bat is ccminer.exe -a vanilla -o stratum+tcp://vnl.suprnova.cc:1118 -u x -p x -q -i 25 -s 30 -d x
I am not getting 2.7 GH/s Sad on my MSI GTX 970 4G

My room temp is 20C. GPU temp 80C. Should I OC a bit?

Try -i 31
legendary
Activity: 3164
Merit: 1003
Is suprnova pool PPS or per block? plz thx
legendary
Activity: 1526
Merit: 1026


My .bat is ccminer.exe -a vanilla -o stratum+tcp://vnl.suprnova.cc:1118 -u x -p x -q -i 25 -s 30 -d x
I am not getting 2.7 GH/s Sad on my MSI GTX 970 4G

My room temp is 20C. GPU temp 80C. Should I OC a bit?
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
legendary
Activity: 1526
Merit: 1026
VANILLACOIN--

After reading the buzz posts on VanillaCoin (VNL), I downloaded and set up tpruvot's CCminer v1.7.2.  With my EVGA GTX 960 SC, I get the standard 1.7GH/s on SuprNova's VNL pool.  In the last hour I've earned over 1 VNL, mining with just the single card.  I am at a 99.42% acceptance rate with more than 600 accepts.

I hope that release #6 of sp_s Private Miner is optimized, I may set some of my larger rigs on the coin.  It is trending up on the markets.       --scryptr


Can you tell me the download link for v1.7.2, please?

CRYPTOMINING-BLOG.COM--

There are 2 posts in the last 2 days regarding VanillaCoin.  There is one post from the 27th of January with a link to several mining packages, including tpruvot's 1.7.2:

http://cryptomining-blog.com/wp-content/download/vanillacoin-blake256-miners-windows.zip

and one post, also from the 27th of January, with sp_'s release dot 78:

http://cryptomining-blog.com/wp-content/download/ccminer-1.5.78-git-spmod-vanillacoin.zip

I am probably going to try sp_'s version later today.       --scryptr

this is my .bat file
ccminer.exe -a vanilla -o stratum+tcp://vnl.suprnova.cc:1118 -u xxxx -p x -i 25 -s 30
I am getting these. Is it OK?
Or how should I optimize the .bat?

[2016-01-30 00:38:18] accepted: 127/128 (99.22%), 2448184 kH/s yes!
[2016-01-30 00:38:19] GPU #0: GeForce GTX 970, 2474499
[2016-01-30 00:38:19] GPU #0: GeForce GTX 970, 2396574
[2016-01-30 00:38:21] GPU #0: GeForce GTX 970, 2468811
[2016-01-30 00:38:21] GPU #0: GeForce GTX 970, 2396574
[2016-01-30 00:38:23] GPU #0: GeForce GTX 970, 2473111
[2016-01-30 00:38:23] GPU #0: GeForce GTX 970, 2396574
[2016-01-30 00:38:25] GPU #0: GeForce GTX 970, 2470242
[2016-01-30 00:38:25] GPU #0: GeForce GTX 970, 2236813
[2016-01-30 00:38:26] GPU #0: GeForce GTX 970, 2467383
[2016-01-30 00:38:26] GPU #0: GeForce GTX 970, 2581110
[2016-01-30 00:38:28] GPU #0: GeForce GTX 970, 2458840
[2016-01-30 00:38:28] GPU #0: GeForce GTX 970, 2396574
[2016-01-30 00:38:30] GPU #0: GeForce GTX 970, 2446136
[2016-01-30 00:38:30] GPU #0: GeForce GTX 970, 2580911
[2016-01-30 00:38:31] GPU #0: GeForce GTX 970, 2461792
[2016-01-30 00:38:32] GPU #0: GeForce GTX 970, 2460511
[2016-01-30 00:38:32] GPU #0: GeForce GTX 970, 2396745
[2016-01-30 00:38:32] accepted: 128/129 (99.22%), 2457221 kH/s yes!
[2016-01-30 00:38:33] GPU #0: GeForce GTX 970, 2454590
[2016-01-30 00:38:33] GPU #0: GeForce GTX 970, 2396574
[2016-01-30 00:38:35] GPU #0: GeForce GTX 970, 2461681
[2016-01-30 00:38:35] GPU #0: GeForce GTX 970, 2396745
[2016-01-30 00:38:37] GPU #0: GeForce GTX 970, 2465954
[2016-01-30 00:38:37] GPU #0: GeForce GTX 970, 2396574
[2016-01-30 00:38:38] GPU #0: GeForce GTX 970, 2462863
[2016-01-30 00:38:38] accepted: 129/130 (99.23%), 2456087 kH/s yes!
[2016-01-30 00:38:39] GPU #0: GeForce GTX 970, 2459862
[2016-01-30 00:38:39] GPU #0: GeForce GTX 970, 2580911
[2016-01-30 00:38:39] vnl.suprnova.cc:1118 vanilla block 310171
[2016-01-30 00:38:39] GPU #0: GeForce GTX 970, 2465909
[2016-01-30 00:38:40] GPU #0: GeForce GTX 970, 2477032
[2016-01-30 00:38:40] accepted: 130/131 (99.24%), 2455281 kH/s yes!
[2016-01-30 00:38:41] GPU #0: GeForce GTX 970, 2472290
[2016-01-30 00:38:41] GPU #0: GeForce GTX 970, 2580911
[2016-01-30 00:38:43] GPU #0: GeForce GTX 970, 2454591
[2016-01-30 00:38:43] GPU #0: GeForce GTX 970, 2580911
[2016-01-30 00:38:45] GPU #0: GeForce GTX 970, 2454591
[2016-01-30 00:38:45] GPU #0: GeForce GTX 970, 2580911
[2016-01-30 00:38:45] GPU #0: GeForce GTX 970, 2431340
[2016-01-30 00:38:45] accepted: 131/132 (99.24%), 2464262 kH/s yes!
[2016-01-30 00:38:46] GPU #0: GeForce GTX 970, 2460519
[2016-01-30 00:38:46] accepted: 132/133 (99.25%), 2466394 kH/s yes!
[2016-01-30 00:38:46] GPU #0: GeForce GTX 970, 2485360
[2016-01-30 00:38:46] accepted: 133/134 (99.25%), 2466898 kH/s yes!
[2016-01-30 00:38:46] GPU #0: GeForce GTX 970, 2460519
legendary
Activity: 1797
Merit: 1028
VANILLACOIN--

After reading the buzz posts on VanillaCoin (VNL), I downloaded and set up tpruvot's CCminer v1.7.2.  With my EVGA GTX 960 SC, I get the standard 1.7GH/s on SuprNova's VNL pool.  In the last hour I've earned over 1 VNL, mining with just the single card.  I am at a 99.42% acceptance rate with more than 600 accepts.

I hope that release #6 of sp_s Private Miner is optimized, I may set some of my larger rigs on the coin.  It is trending up on the markets.       --scryptr


Can you tell me the download link for v1.7.2, please?

CRYPTOMINING-BLOG.COM--

There are 2 posts in the last 2 days regarding VanillaCoin.  There is one post from the 27th of January with a link to several mining packages, including tpruvot's 1.7.2:

http://cryptomining-blog.com/wp-content/download/vanillacoin-blake256-miners-windows.zip

and one post, also from the 27th of January, with sp_'s release dot 78:

http://cryptomining-blog.com/wp-content/download/ccminer-1.5.78-git-spmod-vanillacoin.zip

I am probably going to try sp_'s version later today.       --scryptr

EDIT:  I tried sp_'s release dot 78 as packaged by CryptoMining-Blog.com.  It hashed at roughly the same speed as tpruvot's release, but the pool reported a much lower hash rate.  There were no error messages, but the pool was reporting less than half the hash rate shown on my console.  I switched back to tpruvot's version for the moment.  I'll look at Private Miner #6 when I get it.

SuprNova pool has most of the VanillaCoin network hash rate to itself for the moment, BTW.       --scryptr
legendary
Activity: 1526
Merit: 1026
VANILLACOIN--

After reading the buzz posts on VanillaCoin (VNL), I downloaded and set up tpruvot's CCminer v1.7.2.  With my EVGA GTX 960 SC, I get the standard 1.7GH/s on SuprNova's VNL pool.  In the last hour I've earned over 1 VNL, mining with just the single card.  I am at a 99.42% acceptance rate with more than 600 accepts.

I hope that release #6 of sp_s Private Miner is optimized, I may set some of my larger rigs on the coin.  It is trending up on the markets.       --scryptr


Can you tell me the download link for v1.7.2, please?
legendary
Activity: 1797
Merit: 1028
VANILLACOIN--

After reading the buzz posts on VanillaCoin (VNL), I downloaded and set up tpruvot's CCminer v1.7.2.  With my EVGA GTX 960 SC, I get the standard 1.7GH/s on SuprNova's VNL pool.  In the last hour I've earned over 1 VNL, mining with just the single card.  I am at a 99.42% acceptance rate with more than 600 accepts.

I hope that release #6 of sp_s Private Miner is optimized, I may set some of my larger rigs on the coin.  It is trending up on the markets.       --scryptr
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
How profitable is the new VanillaCoin algo on Maxwells? And sp, do you have any optimizations for this algo?  Grin

The global hashrate is 1,862.96 GH/s.
980ti: 4-5 GH/s (untested) and then more profitable than Ethereum..
970: 2.7 GH/s, more profitable than Ethereum..
960: 1.7 GH/s, more profitable than Ethereum..
750ti: 940 MH/s (Asus Strix 1240 MHz), more profitable than Ethereum..

Bensam reports some problems with my fork on some pools (the cryptomining build), so I will take a look when I have time. I have added duplicate checks and atomic operations to the code (@github) and I don't know if they built the latest version.

The unreleased sp-mod private #6 will have a 0.4% speed increase Smiley I will try to add some more later. (I already did a 100% opensource increase from the first blake-256 implementation.)

You can mine it here:

https://vnl.suprnova.cc

6.4GH/s - 6.9GH/s on a AMD Radeon R9 295X2
3.3GH/s - 3.5GH/s on a AMD Radeon R9 290X
2.9GH/s - 3.1GH/s on a AMD Radeon R9 290
2.6GH/s - 2.8GH/s on a AMD Radeon R9 280X
2.2GH/s - 2.4GH/s on a AMD Radeon R9 280
1.2GH/s - 1.5GH/s on a AMD Radeon R9 270
4.8GH/s - 5.1GH/s on a AMD ATI Radeon 7990
2.2GH/s - 2.8GH/s on a AMD ATI Radeon 7970
2.1GH/s - 2.4GH/s on a AMD ATI Radeon 7950
2.6GH/s - 2.9GH/s on a AMD ATI Radeon 6990
1.4GH/s - 1.5GH/s on a AMD ATI Radeon 6970
1.4GH/s on a AMD ATI Radeon 6950
900MH/s on a AMD ATI Radeon 6870
800MH/s on a AMD ATI Radeon 6850
1.3GH/s on a AMD ATI Radeon 5870
1.1GH/s on a AMD ATI Radeon 5850
1.6GH/s on a ZTEX USB-FPGA 1.15y Quad Spartan-6 LX150 Development Board
1.5GH/s on a Enterpoint Cairnsmore 1 Quad Spartan-6 LX150 Development Board
960MH/s on a Lancelot Dual Spartan-6 LX150 Development Board
360MH/s on a ZTEX USB-FPGA 1.15x Spartan-6 LX150 Development Board

(8 round blake-256 speeds in ccminer sp-mod release 77)
4.1GH/s on a NVIDIA GeForce 980ti
3.05GH/s on a NVIDIA GeForce 980
2.63GH/s on a NVIDIA GeForce 970
1.71GH/s on a NVIDIA GeForce 960
930MH/s on a NVIDIA GeForce 750ti
sr. member
Activity: 248
Merit: 250
How profitable is the new VanillaCoin algo on Maxwells? And sp, do you have any optimizations for this algo?  Grin
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
He has added a duplicate checker in the code.
I also added it. You need to recompile latest@git
Cryptominingblog rebuilt the latest and either it doesn't work or they compiled it wrong.
http://cryptomining-blog.com/wp-content/download/ccminer-1.5.78-git-spmod-vanillacoin.zip
Use the official version. They are using my optimized kernel as well..
I'm using that currently, it's about 10% slower than your version (unless it's faster because of duplicate shares).

Did you try to lower the intensity?

Try with -i 25

You can also try to increase the scantime

-s 30

 (default is -s 5)
legendary
Activity: 1764
Merit: 1024
He has added a duplicate checker in the code.
I also added it. You need to recompile latest@git
Cryptominingblog rebuilt the latest and either it doesn't work or they compiled it wrong.
http://cryptomining-blog.com/wp-content/download/ccminer-1.5.78-git-spmod-vanillacoin.zip

Use the official version. They are using my optimized kernel as well..

I'm using that currently, it's about 10% slower than your version (unless it's faster because of duplicate shares).
legendary
Activity: 1470
Merit: 1114
You're not getting the real issue here - GDS is read ONCE. ONLY ONCE. In pretty much all X algos. Well, per kernel.
I'm not getting what you're saying. It's not about repeated accesses to the same data, it's about access to
different data in the same cache line only once. Preloading the cache line with the initial load instruction
means the subsequent data will be available sooner.
Anyway SP doesn't seem interested and it's his thread, so I should probably drop it.

X11 and quark only read memory linearly. In my mod I use vector instructions on the GPU to load many 32-bit words in one instruction.

If you compile this to ptx you will see what I mean.

#include "cuda_vector.h"
...
   uint32_t h[16];
   uint28 *phash = (uint28*)hash;
   uint28 *outpt = (uint28*)h;
   outpt[0] = phash[0];
   outpt[1] = phash[1];




That makes sense. I presume the size of the vector is the same as a cache line. That pretty much neutralizes
what I intended to accomplish. What I was proposing had two stages: fill the cache and load registers from cache
with other instructions in between. If CUDA does all that in one instruction, I have to just wait. Got it now.

I'll look for some suitable code in cpuminer to try it on.
newbie
Activity: 13
Merit: 0
These sites are scams, don't trust them.
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
You're not getting the real issue here - GDS is read ONCE. ONLY ONCE. In pretty much all X algos. Well, per kernel.
I'm not getting what you're saying. It's not about repeated accesses to the same data, it's about access to
different data in the same cache line only once. Preloading the cache line with the initial load instruction
means the subsequent data will be available sooner.
Anyway SP doesn't seem interested and it's his thread, so I should probably drop it.

X11 and quark only read memory linearly. In my mod I use vector instructions on the GPU to load many 32-bit words in one instruction.

If you compile this to ptx you will see what I mean.

#include "cuda_vector.h"
...
   uint32_t h[16];
   uint28 *phash = (uint28*)hash;
   uint28 *outpt = (uint28*)h;
   outpt[0] = phash[0];
   outpt[1] = phash[1];


legendary
Activity: 1470
Merit: 1114

You're not getting the real issue here - GDS is read ONCE. ONLY ONCE. In pretty much all X algos. Well, per kernel.

I'm not getting what you're saying. It's not about repeated accesses to the same data, it's about access to
different data in the same cache line only once. Preloading the cache line with the initial load instruction
means the subsequent data will be available sooner.

Anyway SP doesn't seem interested and it's his thread, so I should probably drop it.
sp_
legendary
Activity: 2954
Merit: 1087
Team Black developer
He has added a duplicate checker in the code.
I also added it. You need to recompile latest@git
Cryptominingblog rebuilt the latest and either it doesn't work or they compiled it wrong.
http://cryptomining-blog.com/wp-content/download/ccminer-1.5.78-git-spmod-vanillacoin.zip

Use the official version. They are using my optimized kernel as well..
legendary
Activity: 1470
Merit: 1114
Here's the short answer, assuming a 64-bit mem bus and a 32-byte cache line, i.e. 1 address cycle and
4 data cycles per burst to fill the cache line, a 4-deep mem queue and 2 instructions per clock.
An optimized memcpy in pseudo-asm.

ld r0, src                        ; start loading 1st src cache line
ld r4, src+4                  ; start loading 2nd src cache line
preallocate dst cache      ;  intent to write so cache fill from mem not required, no wait
st r0, dst                        ; be ready as soon as first word arrives, stall here
ld r1, src+1                   ; load 2nd word of 1st line, cached now no wait
st r1, dst+1                   ; store it immediately, no stall
ld r2, src+2                   ; etc
st r2, dst+2
ld r3, src+3
st r3, dst+3
flush src         ; flush the first source cache line unmodified, no writeback
flush dst         ; modified, writeback to mem, now to keep bus busy
st r4, dst+4    ; by now the second cache line is filled, no wait
ld r0, src+5    ; start filling 3rd cache line
finish saving second cache line etc.

This does not maximize dual instruction issue because all instructions use the same execution unit.
The bus is kept busy after an initial wait for the first word; while you wait, do anything else you can
that uses another execution unit, like incrementing counters. Those instructions are essentially free.
If the function were modified to process every word, that would also be free. In fact, the more processing you
do the more efficient it gets, because you are using the ALU more, all while the mem bus is busy
doing its thing as fast as it can. If the mem interface is designed properly there should be no problem with collisions.
It should always prioritize reads before writes.

If this model can be implemented in CUDA we should see some gains. I just don't have the CUDA knowledge
to know if it can be done or how.
legendary
Activity: 1470
Merit: 1114
I was hoping to get a better response to my technical trolls, but all I got was more bluster.
I was trying to find out if our skills were complementary. I am a complete noob when it comes
to CUDA, so I was hoping SP could implement some of my ideas with his knowledge of CUDA.
When I provided a demonstration of my skills he responded with "silly you, that was CPU verification
code" and "why don't you do better", without ever considering the technical merit or other
applications for the changes I made.
He's more interested in selling what he has over and over again rather than providing anything new
that sells itself. I'm afraid SP has turned into a telemarketer.
Assembler for NVIDIA Maxwell architecture
https://github.com/NervanaSystems/maxas
Thanks, that will be useful when I learn how to use it. I'm looking for docs that describe the CUDA
processor architecture in detail so I can determine things like how many loads to queue up to
fill the pipe, how many execution units, user cache management, etc. That kind of information
is necessary to maximize instruction throughput at the processor level. Do you know of any available
docs with this kind of info?

There is not much info available, but if you disassemble compiled code you will see that Maxwell is superscalar with 2 pipes: 2 instructions per cycle. It's able to execute instructions while writing to memory if the code is in the instruction cache. And to avoid ALU stalls you need to reorder your instructions carefully. There are vector instructions that can write bigger chunks of memory with fewer instructions... etc etc. The compiler is usually doing a good job here. Little to gain.. Ask DJM34 for more info. He is good at the random stuff...

Thanks again.

Have you tried interleaving memory accesses with arithmetic instructions so they can be issued the same clock?
When copying mem, do you issue the first load and the first store immediately after it? The first load fills the cache
line and the first store waits for the first bytes to become available. Then you can queue up enough loads to fill
the pipe and do other things while waiting for mem. Multi-buffering is a given, being careful not to overuse regs.

If you're doing a load, process, and store it's even better, because you can have one instruction slot focused on memory
while the other does the processing.

These are things I'd like to try but haven't got the time. Although I've done similar in the past, there were no performance
tests that could quantify the effect, good or bad.

If you think this has merit give it a shot. Like I said, if it works just keep it open because I could still implement it myself.
The hotter the code segments you choose, the bigger the result should be. Some of the assembly routines would be logical
targets.

GDS (global memory), LDS (local memory), and work-item shuffle all require a little waiting period before they complete. So, say I'm using ds_swizzle_b32 (work-item shuffle) like I had fun with in my 4-way Echo-512... On AMD GCN, you can do some shit like so:

Code:
# These shuffle in the correct state dwords from other work-items that are adjacent.
# This is done in place of BigShiftRows, but before BigMixColumns.
# So, my uint4 variables (in OpenCL notation) named b and c are now loaded properly without the need for shifting rows.
ds_swizzle_b32 v36, v80 offset:0x8039 # b.z
ds_swizzle_b32 v37, v81 offset:0x8039 # b.w
ds_swizzle_b32 v38, v78 offset:0x8039 # b.x
ds_swizzle_b32 v39, v79 offset:0x8039 # b.y
ds_swizzle_b32 v15, v84 offset:0x804E # c.z
ds_swizzle_b32 v16, v85 offset:0x804E # c.w
ds_swizzle_b32 v33, v82 offset:0x804E # c.x
ds_swizzle_b32 v34, v83 offset:0x804E # c.y

# Each and every one of these takes time, however - and each one increments a little counter.
# What I can do is this - since the first row in the state is not shifted, the a variable is already ready
# It's in registers and ready to be used.

# The first thing I do in the OpenCL after loading up the proper state values - in BigMixColumns - is a ^ b.
# So, I can do something like this:

s_waitcnt lgkmcnt(4)

# What this does is, it waits on the pending operations until there are four left.
# They're queued in the order the instructions were issued - so the b uint4 should now be loaded
# Note, however, that the c uint4 is NOT guaranteed to have been loaded, and cannot be relied on (yet.)
# Now, I can process the XOR while the swizzle operation on the c uint4 is working!

v_xor_b32 v42, v15, v36    # v42 = a.z ^ b.z
v_xor_b32 v43, v16, v37    # v43 = a.w ^ b.w
v_xor_b32 v38, v74, v38    # v38 = a.x ^ b.x
v_xor_b32 v39, v75, v39    # v39 = a.y ^ b.y

# And then we can put in an instruction to wait for the c uint4 before we continue...
s_waitcnt lgkmcnt(0)

In case you're wondering, I load the d uint4 later in the code. Also, if you *really* wanna try your damndest to maximize the time spent executing compute shit during loads, you could do this (although you've probably figured it out by now):

Code:
ds_swizzle_b32 v36, v80 offset:0x8039 # b.z
ds_swizzle_b32 v37, v81 offset:0x8039 # b.w
ds_swizzle_b32 v38, v78 offset:0x8039 # b.x
ds_swizzle_b32 v39, v79 offset:0x8039 # b.y
ds_swizzle_b32 v15, v84 offset:0x804E # c.z
ds_swizzle_b32 v16, v85 offset:0x804E # c.w
ds_swizzle_b32 v33, v82 offset:0x804E # c.x
ds_swizzle_b32 v34, v83 offset:0x804E # c.y

s_waitcnt lgkmcnt(7)
v_xor_b32 v42, v15, v36    # v42 = a.z ^ b.z
s_waitcnt lgkmcnt(6)
v_xor_b32 v43, v16, v37    # v43 = a.w ^ b.w
s_waitcnt lgkmcnt(5)
v_xor_b32 v38, v74, v38    # v38 = a.x ^ b.x
s_waitcnt lgkmcnt(4)
v_xor_b32 v39, v75, v39    # v39 = a.y ^ b.y

# You get the idea...

I think I follow, even though that syntax is completely foreign to me. I think what you did is what
I was talking about. But I would go one step farther. It may not apply, because I don't understand
the wait instructions unless there are synchronization issues.

In addition to what you did, I would put the first xor on b immediately after the first load. I know
it's stalled waiting for data, but I want its dependent instruction already queued for when the data
becomes available.

Secondly, that first load will fill the cache line, so there is no need to queue up the load instruction until
the first load completes. Subsequent loads will finish immediately because they hit the cache.

What I would not do is have a string of identical instructions, because they all compete for the
same execution unit and can only be issued one per clock. I would interleave the swizzles and xors
so they can both be issued on the same clock, assuming all dependencies are met.


With comments:

ds_swizzle_b32 v36, v80 offset:0x8039   # b.z   // start filling the cache with b
v_xor_b32 v42, v15, v36    # v42 = a.z ^ b.z    // queue first xor for when b is ready
ds_swizzle_b32 v37, v81 offset:0x8039   # b.w   // this will complete one clock after the previous swizzle so...
v_xor_b32 v43, v16, v37    # v43 = a.w ^ b.w    // make sure we're ready for it

I think you get it. When all the B vars are loaded you can queue the C vars while still processing and
saving the first batch.

I would even go one step farther to the loading of a, if possible.

NO, NO, NO. The swizzle operation, like LDS and GDS loads, takes TIME. Clock cycles. If you try to use the result without using s_waitcnt to be sure that the operation has completed, more than likely you'll be reading garbage. The likelihood of this occurring becomes greater the closer your read instruction is to the load instruction that must be waited on - or, more accurately, the fewer clock cycles that have passed.

The uint4 I named a is already in registers - if you wanna walk it back, it's actually from an AES operation before, which may be done via LDS lookups into a table and XORs, or a bitsliced AES S-box followed by an otherwise mostly classic-style AES implementation.

I think your misunderstanding is that you think v_xor_b32 queues something. It doesn't. ds_* instructions you might be able to say "queue" something, in the sense that they trigger an LDS read/write and immediately allow for the next instruction to be executed. v_xor_b32 is an immediate XOR of two registers. It couldn't give a fuck less what's in them, or what you meant to put in them - it's going to XOR them and put the result into the destination register, and if it's not what you intended it to be, that's your problem.

I would start with swizzle a, immediately followed by swizzle b, then the first xor.
There will be a lot of stalling waiting for memory here, so if there are any other trivial tasks,
do them next.

Loading a & b in parallel may seem odd, but once both are in the cache you're flying. Then you can
mix saving processed data and loading new data, giving priority to loads to keep the GPU hot, and
you can stick in the first swizzle of c early to get the data ready.

I learned some of this stuff on a company-paid Motorola course. The instructor was a geek and our class
was pretty sharp, so we covered the material early, then started having fun. At the time we were in a
performance crunch, with customers demanding more capacity, so we focused on code scheduling and user
cache management. One of the more bizarre instructions was the delayed branch. It essentially means
branch AFTER the next instruction. That next instruction was often returning the rc. It took some getting
used to, but it gives an idea of the level of optimization they were into at the time.

It's the same CPU that had the ability to mark a cache line valid without touching mem. It's great for
malloc, because the data is initially undefined anyway. Who cares whether the garbage comes from
mem or stale cache, it's all garbage. Imagine mallocing 1k and having it cached without ever touching
the bus. They also have an instruction to preload the cache for real; that is essentially what I was
simulating above. It also had a user flush, so you could flush data at any convenient time after you
no longer needed it, instead of a system-initiated flush when you are stalled waiting for new data.


Keep in mind - there is no swizzle for the uint4 named a - the first row is not shifted. This is why you don't see any swizzle ops for it - it is entirely contained within the single work-item. This is why I swizzle b, then c, and then begin my XORs. Keep in mind, again, that this triggers the start of the swizzle and immediately goes to the next instruction - this means if I do b AND c one after another, and only wait on b in order to XOR it with a, I'm putting more clock cycles between the time I initiated the swizzle for c, and the time I need it to complete. It's entirely possible that by the time I call on s_waitcnt to ensure the c variables are ready, they already are and the instruction takes basically no time at all.

You're also thinking about cache, which doesn't apply here at all - swizzle is a 4-way crossbar that allows transfer of values between work-items on a compute unit. In addition, even if it wasn't, I couldn't give a fuck less if it's in the cache - hell, I'd rather it NOT be. Why? Simply because I just loaded those values into registers, and were they in memory, I would never be reading them from memory again. X11 and friends can be done using ZERO global memory at all (besides getting work, storing the state for the next kernel, and output, of course) - if you work at it, it can even be done without using LDS for storage of shit like AES tables. Now, you *may* want to use LDS for other reasons to create an optimal GPU implementation, but these are related more to parallelizing the hash functions more, by unrolling them across multiple WIs (like this Echo-512 we're discussing), rather than actual storage of data that's honestly needed to compute the hash function. Because of this, cache is really irrelevant, and in an extremely well optimized X11 kernel set, you should be able to downclock memory to hell and have it not matter one iota.

Fun fact: This is why the claim of X11 being "ASIC-resistant" is more or less a flat out lie. What most people call "ASIC-resistant" is actually "ASIC-unprofitable" - meaning that the ASIC would cost so much that its advantage over the currently used mining hardware doesn't justify making it. Usually, this is done via memory usage, at least for now. But X11 isn't memory-hard - shit, it doesn't really need memory at all, especially if implemented in hardware.

Perhaps your example was not well chosen; too many new concepts for me. Try to think of it in the general sense, where
data is loaded from mem, some processing is done, and it is stored back to mem. I usually see a string of 4 or 8 loads followed
by a similar string of xors or adds or whatever, and then a string of stores. This is OK in the sense that it uses multi-buffering,
but it is inefficient because it can't take advantage of multiple instruction issue. It's all serial.

There's also no need to rush the second load, because the first one will get the cache
filled (assuming there is a cache). And user cache management doesn't depend on whether the application caches well.
It's useful because the coder can manage the cache to overcome the app's deficiency.

Need some data soon but have other things to do first? Preload it so it's ready when you are.

Done with a buffer? Flush the cache line to get rid of the data and free up the bus for the next data you need.

It's all about managing the data: planning when you need it and how to have it ready, so you don't
have to wait as long. With mem being the bottleneck, you want to prioritize managing mem accesses to reduce
latency. You don't want the bus sitting idle while you do a shitload of ALU stuff, just to have to wait when you ask
for more data.

When I mentioned queuing the xor I didn't mean it literally. I just meant have it ready to be issued as soon as the
data arrives.