
Topic: CCminer(SP-MOD) Modded NVIDIA Maxwell / Pascal kernels. - page 850. (Read 2347659 times)

legendary
Activity: 2002
Merit: 1051
ICO? Not even once.
How much hash does a 750Ti get on Lyra2REv2?

I get between 5000 and 5500 kH/s, depending on the card and overclock

Huh, I only get about 4.6 MH/s with about +100 MHz.
legendary
Activity: 1049
Merit: 1001
How much hash does a 750Ti get on Lyra2REv2?

I get between 5000 and 5500 kH/s, depending on the card and overclock
newbie
Activity: 16
Merit: 0
Hashpower connection refused, what does that mean?
It means the pool's server refused the TCP connection: the server may be down, the port wrong, or a firewall in the way somewhere between you and the pool.
With all the bad weather going on in the USA right now, a lot of pools are having internet problems.
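For what it's worth, "connection refused" is a plain TCP-level error from the socket layer, and you can reproduce it yourself. A minimal C sketch (the loopback address and port 3333 are placeholders; this assumes nothing is listening there):
Code:
/* Minimal sketch: "connection refused" is a TCP-level error, produced
 * when the remote host actively refuses the connection. The loopback
 * address and port 3333 below are placeholders; this assumes nothing
 * is listening there. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in pool;

    memset(&pool, 0, sizeof(pool));
    pool.sin_family = AF_INET;
    pool.sin_port   = htons(3333);                  /* typical stratum port */
    inet_pton(AF_INET, "127.0.0.1", &pool.sin_addr);

    if (connect(fd, (struct sockaddr *)&pool, sizeof(pool)) < 0)
        printf("connect: %s\n", strerror(errno));   /* "Connection refused" */
    close(fd);
    return 0;
}
If the pool's stratum server goes down, the miner's connect() fails the same way.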
jr. member
Activity: 63
Merit: 4
It had been working for 3 weeks.
legendary
Activity: 1030
Merit: 1006
I have been using the NiceHash miner for the last few days.
Not sure what is most profitable, because the 980 rig is mining x13/neoscrypt and the 750Ti rigs are mining lyra2rev2.
Why are they choosing different algos? If the miner is supposed to choose the most profitable algo, shouldn't all my miners mine the same one?

The power-to-hashrate ratio is likely different? I haven't really looked at the NiceHash miner myself, but that would make sense.

It's a fact that lyra2v2 hashes better on 750ti than 9xx when compared to other algos, so there are times when lyra2v2 is
most profitable with 750ti but another algo is highest with 9xx GPUs. Maybe the nicehash miner knows this and adjusts
its profit switching accordingly. You should post your question in the nicehash thread for a definitive answer.
That is correct; that's why you have "benchmark": it tests your rig and then mines accordingly. That is also why, for the NiceHash miner, you should build a rig out of one type of card.
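To illustrate what the benchmark enables (all hashrates and prices below are made-up illustrative numbers, not NiceHash's actual figures or code), a per-card profit switch ends up doing something like this:
Code:
/* Hypothetical per-card profit switch. All hashrates and prices are
 * made-up illustrative numbers, not NiceHash's real figures or code. */
#include <stdio.h>

struct algo { const char *name; double price; };   /* BTC per MH/s per day */
struct card { const char *name; double rate[3]; }; /* benchmarked MH/s     */

int main(void)
{
    struct algo algos[3] = {
        { "lyra2v2",   0.00011 },
        { "neoscrypt", 0.00250 },
        { "x13",       0.00018 },
    };
    /* A 750 Ti is relatively strong on lyra2v2, a 980 is stronger on
     * neoscrypt, so the most profitable algo can differ per card. */
    struct card cards[2] = {
        { "750Ti", {  5.2, 0.22, 2.1 } },
        { "980",   { 14.0, 1.10, 7.9 } },
    };
    for (int c = 0; c < 2; c++) {
        int best = 0;
        for (int a = 1; a < 3; a++)
            if (cards[c].rate[a] * algos[a].price >
                cards[c].rate[best] * algos[best].price)
                best = a;
        printf("%s -> %s\n", cards[c].name, algos[best].name);
    }
    return 0;
}
With these numbers the 750Ti picks lyra2v2 and the 980 picks neoscrypt, which matches the mixed-algo behaviour described above.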
jr. member
Activity: 63
Merit: 4
Hashpower connection refused, what does that mean?
legendary
Activity: 1470
Merit: 1114
I have been using the NiceHash miner for the last few days.
Not sure what is most profitable, because the 980 rig is mining x13/neoscrypt and the 750Ti rigs are mining lyra2rev2.
Why are they choosing different algos? If the miner is supposed to choose the most profitable algo, shouldn't all my miners mine the same one?

The power-to-hashrate ratio is likely different? I haven't really looked at the NiceHash miner myself, but that would make sense.

It's a fact that lyra2v2 hashes better on 750ti than 9xx when compared to other algos, so there are times when lyra2v2 is
most profitable with 750ti but another algo is highest with 9xx GPUs. Maybe the nicehash miner knows this and adjusts
its profit switching accordingly. You should post your question in the nicehash thread for a definitive answer.
hero member
Activity: 750
Merit: 500
I have been using the NiceHash miner for the last few days.
Not sure what is most profitable, because the 980 rig is mining x13/neoscrypt and the 750Ti rigs are mining lyra2rev2.
Why are they choosing different algos? If the miner is supposed to choose the most profitable algo, shouldn't all my miners mine the same one?

The power-to-hashrate ratio is likely different? I haven't really looked at the NiceHash miner myself, but that would make sense.
hero member
Activity: 687
Merit: 502
I have been using the NiceHash miner for the last few days.
Not sure what is most profitable, because the 980 rig is mining x13/neoscrypt and the 750Ti rigs are mining lyra2rev2.
Why are they choosing different algos? If the miner is supposed to choose the most profitable algo, shouldn't all my miners mine the same one?
member
Activity: 81
Merit: 10
Hmm... heavy magic...  Grin (well, magical stuff is just science which isn't yet understood  Grin Grin)
The compiler has a bit of a life of its own, and depending on how things are written you might see a few magical tricks...  Grin

then a magician is just a scientist who is a bit ahead :-D
yep  Grin
legendary
Activity: 2716
Merit: 1094
Black Belt Developer
Hmm... heavy magic...  Grin (well, magical stuff is just science which isn't yet understood  Grin Grin)
The compiler has a bit of a life of its own, and depending on how things are written you might see a few magical tricks...  Grin

then a magician is just a scientist who is a bit ahead :-D
member
Activity: 81
Merit: 10
So it would seem that the CPU needs enough virtual memory to match all the GPU memory in use. I can see a preallocation
strategy being better for speed but if the CPU is swapping to disk it would be slower than dynamic memory allocation.
This memory isn't used anymore once it has been allocated to the VRAM.
While this is technically correct, I can tell from experience that some drivers will still keep the address range reserved; apparently this has some benefits for the driver's bookkeeping (I can see how assuming different range <-> different resource can help).
Beware, CUDA is way more than just your GPU or CPU; sometimes it goes through some heavy magic.
Hmm... heavy magic...  Grin (well, magical stuff is just science which isn't yet understood  Grin Grin)
The compiler has a bit of a life of its own, and depending on how things are written you might see a few magical tricks...  Grin
hero member
Activity: 672
Merit: 500
So it would seem that the CPU needs enough virtual memory to match all the GPU memory in use. I can see a preallocation
strategy being better for speed but if the CPU is swapping to disk it would be slower than dynamic memory allocation.
This memory isn't used anymore once it has been allocated to the VRAM.
While this is technically correct, I can tell from experience that some drivers will still keep the address range reserved; apparently this has some benefits for the driver's bookkeeping (I can see how assuming different range <-> different resource can help).
Beware, CUDA is way more than just your GPU or CPU; sometimes it goes through some heavy magic.
How is the memory consumption being measured?
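For the GPU side, a minimal sketch of one way to measure it, using the CUDA runtime's cudaMemGetInfo(); the host-side commit (RAM plus pagefile) has to be read from the OS instead, e.g. Task Manager on Windows or /proc/meminfo on Linux:
Code:
/* Sketch: measuring GPU-side consumption with cudaMemGetInfo().
 * Sampling free VRAM around a cudaMalloc() shows what the allocation
 * actually costs on the card. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    size_t free0, free1, total;
    void *d_buf = NULL;

    cudaMemGetInfo(&free0, &total);
    cudaMalloc(&d_buf, 256u << 20);  /* allocate 256 MB of VRAM */
    cudaMemGetInfo(&free1, &total);

    printf("VRAM used by alloc: %zu MB (of %zu MB total)\n",
           (free0 - free1) >> 20, total >> 20);

    cudaFree(d_buf);
    return 0;
}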
legendary
Activity: 1797
Merit: 1028
I always get:
Cuda error in func 'x11_simd512_cpu_init' at line 791 : out of memory.
I don't expect any profit, just want to test my GPU.
It's a GT9800 with 4GB memory, CUDA 7.5 installed.
Any working config for me?

GT9800--

A GT9800 was top of the line once.  I have a GT9800+ and it will mine scrypt at 14kh/s with CudaMiner, early-2014-vintage software written to compile against CUDA 5.5.  The GT9800 simply does not have the capacity to mine at a reasonable rate.  It will not perform at all with software designed for the Maxwell chipset; the circuitry is not there.  The version of CCminer written by sp_ and discussed in this thread is specifically written to work only on the Maxwell chip architecture.       --scryptr
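For reference, Maxwell means compute capability 5.x, while the GT9800 is a compute 1.1 part. A quick check along these lines (a sketch, not ccminer's actual startup code) would flag the card:
Code:
/* Sketch (not ccminer's actual startup code): reject pre-Maxwell GPUs.
 * Maxwell is compute capability 5.x; a GT9800 reports 1.1. */
#include <stdio.h>
#include <cuda_runtime.h>

int main(void)
{
    int n = 0;
    cudaGetDeviceCount(&n);
    for (int i = 0; i < n; i++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        if (prop.major < 5)
            printf("GPU #%d (%s): compute %d.%d, too old for these kernels\n",
                   i, prop.name, prop.major, prop.minor);
        else
            printf("GPU #%d (%s): compute %d.%d, OK\n",
                   i, prop.name, prop.major, prop.minor);
    }
    return 0;
}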
legendary
Activity: 1470
Merit: 1114
My 980 rig has now been running perfectly for 24 hours after I added 4GB more RAM and increased the virtual memory to 16GB.

Glad to see you're running stable.

I'd like to understand more about the pagefile size issue, particularly the notion that it's needed but not
really used. Clearly it is being used, even if only momentarily. Does anyone know if this issue exists on Linux?

From my understanding the memory/pagefile requirements increase with the number of GPUs.

The pagefile/RAM requirement is due to the initial memory allocation (cudaMalloc), which is done for each GPU and transits through the pagefile/RAM.
This allocation is made first in RAM/pagefile and then copied to GPU VRAM (however, it is never deallocated, as that keeps open the possibility of copying a portion of what has been allocated back to CPU/RAM...)
(provided I am not too wrong in my representation of how it works...)

So it would seem that the CPU needs enough virtual memory to match all the GPU memory in use. I can see a preallocation
strategy being better for speed but if the CPU is swapping to disk it would be slower than dynamic memory allocation.
This memory isn't used anymore once it has been allocated to the VRAM.

When memory is not deallocated once it is no longer needed, that's usually called a leak. Is this something that cudaMalloc
does transparently, or does the application have any control?

Wrong - memory is only LEAKED if the pointer to it is lost, meaning you couldn't deallocate it if you wanted to, and it happens in repeated code. To "leak," you have to keep slowly eating memory until there's none left.

You are technically correct; perhaps "hog" would be a better term. That doesn't change the point that large amounts
of CPU memory remain allocated after they are no longer needed.
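To make the hog-versus-leak distinction concrete, a small illustrative sketch (not ccminer's actual code); note the application does have control, via cudaFree():
Code:
/* Illustrative only, not ccminer's code: the difference between memory
 * that is merely held ("hogged") and memory that is leaked. */
#include <stddef.h>
#include <cuda_runtime.h>

static void *d_state;            /* pointer kept for the miner's lifetime */

void hog_init(size_t bytes)
{
    /* A "hog": the allocation stays for the life of the process, but
     * the pointer is retained, so it could still be freed on demand. */
    cudaMalloc(&d_state, bytes);
}

void hog_release(void)
{
    cudaFree(d_state);           /* the application has full control */
    d_state = NULL;
}

void leak_once(size_t bytes)
{
    void *p;
    cudaMalloc(&p, bytes);
    /* A true leak: p goes out of scope with no cudaFree(), so this
     * allocation can never be released. Repeat it and you eventually
     * run out of memory. */
}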
legendary
Activity: 1288
Merit: 1068
I always get:
Cuda error in func 'x11_simd512_cpu_init' at line 791 : out of memory.
I don't expect any profit, just want to test my GPU.
It's a GT9800 with 4GB memory, CUDA 7.5 installed.
Any working config for me?
legendary
Activity: 1470
Merit: 1114
My 980 rig has now been running perfectly for 24 hours after I added 4GB more RAM and increased the virtual memory to 16GB.

Glad to see you're running stable.

I'd like to understand more about the pagefile size issue, particularly the notion that it's needed but not
really used. Clearly it is being used, even if only momentarily. Does anyone know if this issue exists on Linux?

From my understanding the memory/pagefile requirements increase with the number of GPUs.

The pagefile/RAM requirement is due to the initial memory allocation (cudaMalloc), which is done for each GPU and transits through the pagefile/RAM.
This allocation is made first in RAM/pagefile and then copied to GPU VRAM (however, it is never deallocated, as that keeps open the possibility of copying a portion of what has been allocated back to CPU/RAM...)
(provided I am not too wrong in my representation of how it works...)

So it would seem that the CPU needs enough virtual memory to match all the GPU memory in use. I can see a preallocation
strategy being better for speed but if the CPU is swapping to disk it would be slower than dynamic memory allocation.
This memory isn't used anymore once it has been allocated to the VRAM.

When memory is not deallocated once it is no longer needed, that's usually called a leak. Is this something that cudaMalloc
does transparently, or does the application have any control?
legendary
Activity: 1400
Merit: 1050
My 980 rig has now been running perfectly for 24 hours after I added 4GB more RAM and increased the virtual memory to 16GB.

Glad to see you're running stable.

I'd like to understand more about the pagefile size issue, particularly the notion that it's needed but not
really used. Clearly it is being used, even if only momentarily. Does anyone know if this issue exists on Linux?

From my understanding the memory/pagefile requirements increase with the number of GPUs.

The pagefile/RAM requirement is due to the initial memory allocation (cudaMalloc), which is done for each GPU and transits through the pagefile/RAM.
This allocation is made first in RAM/pagefile and then copied to GPU VRAM (however, it is never deallocated, as that keeps open the possibility of copying a portion of what has been allocated back to CPU/RAM...)
(provided I am not too wrong in my representation of how it works...)

So it would seem that the CPU needs enough virtual memory to match all the GPU memory in use. I can see a preallocation
strategy being better for speed but if the CPU is swapping to disk it would be slower than dynamic memory allocation.
This memory isn't used anymore once it has been allocated to the VRAM.
legendary
Activity: 1470
Merit: 1114
My 980 rig has now been running perfectly for 24 hours after I added 4GB more RAM and increased the virtual memory to 16GB.

Glad to see you're running stable.

I'd like to understand more about the pagefile size issue, particularly the notion that it's needed but not
really used. Clearly it is being used, even if only momentarily. Does anyone know if this issue exists on Linux?

From my understanding the memory/pagefile requirements increase with the number of GPUs.

The pagefile/RAM requirement is due to the initial memory allocation (cudaMalloc), which is done for each GPU and transits through the pagefile/RAM.
This allocation is made first in RAM/pagefile and then copied to GPU VRAM (however, it is never deallocated, as that keeps open the possibility of copying a portion of what has been allocated back to CPU/RAM...)
(provided I am not too wrong in my representation of how it works...)

So it would seem that the CPU needs enough virtual memory to match all the GPU memory in use. I can see a preallocation
strategy being better for speed but if the CPU is swapping to disk it would be slower than dynamic memory allocation.
legendary
Activity: 1030
Merit: 1006
My 980 rig has now been running perfectly for 24 hours after I added 4GB more RAM and increased the virtual memory to 16GB.

Glad to see you're running stable.

I'd like to understand more about the pagefile size issue, particularly the notion that it's needed but not
really used. Clearly it is being used, even if only momentarily. Does anyone know if this issue exists on Linux?

From my understanding the memory/pagefile requirements increase with the number of GPUs.

The pagefile/RAM requirement is due to the initial memory allocation (cudaMalloc), which is done for each GPU and transits through the pagefile/RAM.
This allocation is made first in RAM/pagefile and then copied to GPU VRAM (however, it is never deallocated, as that keeps open the possibility of copying a portion of what has been allocated back to CPU/RAM...)
(provided I am not too wrong in my representation of how it works...)
Definitely important. My "problematic" rig (miner crashes) had a small pagefile. I added RAM and the miner now works on Lyra2v2 and x13, but neoscrypt still didn't. Then I increased the pagefile and voila, neoscrypt works too... So is that the most memory-dependent algo?