
Topic: [ANN][BURST] Burst | Efficient HDD Mining | New 1.2.3 Fork block 92000 - page 903. (Read 2171056 times)

hero member
Activity: 588
Merit: 500
after i compiled gpuPlotGenerator on ubuntu 14.04 i got this error when i try to plot

bash: ./gpuPlotGenerator.exe: Permission denied

any clue ?
you need to chown it. I had that with a qt wallet. can't remember how to do it, sorry
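If it's just the execute bit that's missing (the usual cause of "Permission denied" on a freshly built binary), something like this should be enough; chown only matters if the file is owned by another user:

Code:
# assumes the binary sits in the current directory and only lacks the execute bit
chmod +x ./gpuPlotGenerator.exe
./gpuPlotGenerator.exe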
member
Activity: 67
Merit: 10

Maybe the 280x and 290x memory system works/handles better with the plotter compared to the HD7970/7990; that's why it kind of works on those cards.

Maybe it's not just the card.  I am running the same card (R9 290) and similar specs to the 1 or 2 reported working gpu plotters.  However I can not get mine to start plotting as the .exe crashes.

I've tried various definitions for the batch file and various driver versions with and without SDK.

There is something else I am missing and until I find it I will sit here and stare at my screen :/

Ava.

Did you try 13.x drivers and install APP SDK?


Yes sir I did, it's on my list.
I've seen that combo mentioned more than a couple of times. I'll give it another go on the off chance that I wrote it down without actually running it :s

Just when I start my plotter up again -.-

Ava
sr. member
Activity: 560
Merit: 250
after i compiled gpuPlotGenerator on ubuntu 14.04 i got this error when i try to plot

bash: ./gpuPlotGenerator.exe: Permission denied

any clue ?
newbie
Activity: 31
Merit: 0
For the first error (-61 = CL_INVALID_BUFFER_SIZE), it is due to a lack of memory space in your GPU. Try with a lower stagger size.
For the second error (-54 = CL_INVALID_WORK_GROUP_SIZE), it is due to an incorrect threads number. Try with a lower threads number.
the first error is solved: I added 4 OpenCL driver files (13.12) and corrected the stagger size.
And gpuPlotGenerator now picks the R9 270X graphics card in the second PCI-Express slot.

The second error on the HD5870 system (driver 13.12) repeats for all smaller thread-number values.
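For reference, "lower stagger size" and "lower threads number" just mean smaller values for the last two command-line arguments. A minimal sketch, assuming the same argument order as the "./gpuPlotGenerator ./plots/ 0 10 10 10 10" example further down the page (path, numeric account id, start nonce, nonce count, stagger, threads); the numbers are only placeholders:

Code:
# hypothetical values: lower the stagger for -61 (CL_INVALID_BUFFER_SIZE),
# lower the threads for -54 (CL_INVALID_WORK_GROUP_SIZE)
./gpuPlotGenerator <plots path> <numeric account id> <start nonce> <nonce count> <stagger> <threads>
./gpuPlotGenerator ./plots/ 0 0 8192 2048 64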
member
Activity: 64
Merit: 10
I got the "too early" message initially, too.  Try browsing to the faucet in an incognito window.

It worked! Thanks Smiley
hero member
Activity: 518
Merit: 500

Maybe the 280x and 290x memory system works/handles better with the plotter compared to the HD7970/7990; that's why it kind of works on those cards.

Maybe it's not just the card.  I am running the same card (R9 290) and similar specs to the 1 or 2 reported working gpu plotters.  However I can not get mine to start plotting as the .exe crashes.

I've tried various definitions for the batch file and various driver versions with and without SDK.

There is something else I am missing and until I find it I will sit here and stare at my screen :/

Ava.

Did you try 13.x drivers and install APP SDK?
sr. member
Activity: 462
Merit: 250
I got the "too early" message initially, too.  Try browsing to the faucet in an incognito window.
member
Activity: 64
Merit: 10
I want to try out a mining pool. To do so I need 1 Burst. Is there some kind soul who would help me out with 1 Burst? Smiley I will pay you back as soon as I can Smiley and yes, I tried the faucet but hit enter before entering my wallet address and now I'm getting the "It's too early" error.

BURST-XPY7-7V95-BDA5-FMY8R
sr. member
Activity: 462
Merit: 250
I keep getting this error now when I issue the "make" command:
Quote
Linking [bin/gpuPlotGenerator.exe]
/usr/bin/ld: cannot find -lopencl
collect2: error: ld returned 1 exit status
make: *** [bin/gpuPlotGenerator.exe] Error 1

I've tried Googling the error and see a bunch of posts on the issue, but can't seem to correct this "/usr/bin/ld: cannot find -lopencl" error. Any suggestions?

The correction for that particular error is to use "-lOpenCL" instead (assuming you've installed opencl with "sudo apt-get install ocl-icd-libopencl1" first.)

However, I found the makefile is incompatible with linux.  It may work better if you change the "CC"s to "C++", but I haven't tried.  I did get it to compile with the single command

Code:
g++ -ansi -pedantic -W -Wall -std=c++0x gpuPlotGenerator.cpp -lOpenCL

I also had to do "( cd kernel && ln -s ../nonce.cl )", since it seems to expect to find the file there (In my version, it's in the root dir.)  Then the command "./gpuPlotGenerator ./plots/ 0 10 10 10 10" ran without errors (on an NVIDIA GTX Black.)
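For anyone following along, the whole sequence would then look roughly like this (the -o output name and the mkdir are assumptions; adjust paths to wherever your sources actually are):

Code:
# assumes the gpuPlotGenerator sources and nonce.cl are in the current directory
sudo apt-get install ocl-icd-libopencl1
g++ -ansi -pedantic -W -Wall -std=c++0x gpuPlotGenerator.cpp -lOpenCL -o gpuPlotGenerator
mkdir -p kernel plots
( cd kernel && ln -s ../nonce.cl )
./gpuPlotGenerator ./plots/ 0 10 10 10 10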

Edit: However, when I try something more substantial, like a stagger size of 64 and a thread num of 64, I get ">>> [-5] Error in synchronous read".  -5 is out of resources.

Edit 2:  Following the advice here, I added

Code:
error = clFinish(commandQueue);
if (error != CL_SUCCESS) {
    throw opencl_error(error, "Error after running kernel");
}

in between the clEnqueueNDRangeKernel and clEnqueueReadBuffer calls. I get error -36, "invalid command queue", which apparently can still mean insufficient resources. A Titan Black is pretty beefy, though.
sr. member
Activity: 252
Merit: 250
My two questions? :p
hero member
Activity: 588
Merit: 500
can anyone explain why a 1TB HDD gets 1 payment of 250 from cryptoport while a 300GB got 5 payments plus 3000 Burst 10 mins after setting up rewards in the wallet... is the pool broken?

wow!
what's your address? both for the 1TB and the 300GB
1 TB: BURST-LPXH-LLSZ-MSC9-BNTNJ

the 300 isn't mine, it's a m8's: BURST-GDA6-2FGN-D6KD-D7VUS

the 3.5K is not from the pool: http://burst.cryptoport.io/tx/17733463673204335285
he had made another account and got the 3k 5 mins or so later, which he transferred there. but that's the 300GB's 5 payments as opposed to my 1TB's 1 payment. follow the 3.5k and you will see
hero member
Activity: 1400
Merit: 505
I have two questions.

1. About the dcct miner. I tested the miner again over 4 hours on 2 dedicated Linux servers. With the dcct miner I only get "no deadline" messages on both systems.
With the java miner it looked like this in the last minute, and most of the deadlines are under 1 mil.

"deadline":86659}
"deadline":898648}
"deadline":433123}
"deadline":701}
"deadline":166045}
"deadline":814765}
"deadline":14561}

But why do I only get "no deadline" messages with the dcct miner?
I have no idea why.

2. How important is latency when mining? Many miners run on systems with directly attached HDDs and others on VMs with a NAS connection.
I have the feeling that the miners with a NAS connection have significantly higher deadlines. The read performance is high enough.

So my question is, is the latency important or is it just important that the whole plot is read?

Quoting myself. Too much spam here. Smiley

Any answers for me? Smiley

which question do you want answered? Tongue
hero member
Activity: 1400
Merit: 505
can anyone explain why a 1TB HDD gets 1 payment of 250 from cryptoport while a 300GB got 5 payments plus 3000 Burst 10 mins after setting up rewards in the wallet... is the pool broken?

wow!
what's your address? both for the 1TB and the 300GB
1 TB: BURST-LPXH-LLSZ-MSC9-BNTNJ

the 300 isn't mine, it's a m8's: BURST-GDA6-2FGN-D6KD-D7VUS

the 3.5K is not from the pool: http://burst.cryptoport.io/tx/17733463673204335285
hero member
Activity: 1400
Merit: 505
sr. member
Activity: 252
Merit: 250
I have two questions.

1. About the dcct miner. I tested the miner again over 4 hours on 2 dedicated Linux servers. With the dcct miner I only get "no deadline" messages on both systems.
With the java miner it looked like this in the last minute, and most of the deadlines are under 1 mil.

"deadline":86659}
"deadline":898648}
"deadline":433123}
"deadline":701}
"deadline":166045}
"deadline":814765}
"deadline":14561}

But why do I only get "no deadline" messages with the dcct miner?
I have no idea why.

2. How important is latency when mining? Many miners run on systems with directly attached HDDs and others on VMs with a NAS connection.
I have the feeling that the miners with a NAS connection have significantly higher deadlines. The read performance is high enough.

So my question is, is the latency important or is it just important that the whole plot is read?

Quoting myself. Too much spam here. Smiley

Any answers for me? Smiley
hero member
Activity: 588
Merit: 500
can anyone explain why a 1TB HDD gets 1 payment of 250 from cryptoport while a 300GB got 5 payments plus 3000 Burst 10 mins after setting up rewards in the wallet... is the pool broken?

wow!
what's your address? both for the 1TB and the 300GB
1 TB: BURST-LPXH-LLSZ-MSC9-BNTNJ

the 300 isn't mine, it's a m8's: BURST-GDA6-2FGN-D6KD-D7VUS
member
Activity: 60
Merit: 10
How much memory do you have in your GPU?
The most I can gen @ is 3000 on a higher end card.

The lower end ones with 1GB constantly fail with the error posted above.


It's a 3GB R9 280x...  Undecided

Hi guys,

I am aware of this issue. The fact is that I have to create two full-size buffers on the GPU side to reduce thread-local memory consumption. Thus the memory amount needed on the CPU side has to be doubled to get an estimate of what is needed on the GPU side.
As an example, for a stagger size of 4000 you will need 1GB RAM on the CPU side and more than 2GB (exactly (PLOT_SIZE + 16) x stagger per buffer) on the GPU side (this doesn't include the local buffers and the kernel code itself).
Once I have a stable version (really soon Grin), I will work on this particular problem.
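To put numbers on the example above: assuming PLOT_SIZE is the standard 256 KiB Burst nonce (4096 scoops x 64 bytes = 262144 bytes), a stagger of 4000 means about 262144 x 4000 ≈ 1.05 GB on the CPU side, and with the two full-size buffers roughly 2 x (262144 + 16) x 4000 ≈ 2.1 GB on the GPU side, which matches the "more than 2GB" figure.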

Please also consider testing a version for NVIDIA cards! AFAIK there isn't yet a user who has been able to start the plot generator on NVIDIA GPUs.

The next version should work on both NVIDIA and AMD cards. It still needs some tests, but it sounds promising.

Hope that you will release it quickly (at least while Burst profitability is still good).

It's nearly working already. It needs some more tests and two or three bug corrections ^^
legendary
Activity: 3248
Merit: 1070

how many TB on your pool?

i don't know

probably the number of your miners times 3TB each, as an average
legendary
Activity: 3766
Merit: 1742
How much memory do you have in your GPU?
The most I can gen @ is 3000 on a higher end card.

The lower end ones with 1GB constantly fail with the error posted above.


It's a 3GB R9 280x...  Undecided

Hi guys,

I am aware of this issue. The fact is that I have to create two full-size buffers on the GPU side to reduce thread-local memory consumption. Thus the memory amount needed on the CPU side has to be doubled to get an estimate of what is needed on the GPU side.
As an example, for a stagger size of 4000 you will need 1GB RAM on the CPU side and more than 2GB (exactly (PLOT_SIZE + 16) x stagger per buffer) on the GPU side (this doesn't include the local buffers and the kernel code itself).
Once I have a stable version (really soon Grin), I will work on this particular problem.

Please also consider testing a version for NVIDIA cards! AFAIK there isn't yet a user who has been able to start the plot generator on NVIDIA GPUs.

The next version should work on both NVIDIA and AMD cards. It still needs some tests, but it sounds promising.

Hope that you will release it quickly (at least while Burst profitability is still good).
hero member
Activity: 1400
Merit: 505
GPUs aren't that good for plotting; one i7 4790k can compare with a 290x at 200W less consumption, a no-brainer

oh really, how many nonces/minute does it write?
compared to Google Cloud it can produce about 7800 nonces/minute, how much for a GPU?

7800 for the 4790k? someone earlier said that his 290x can plot 1TB in 3.5 hours

we have a 2:1 ratio then; still, the GPU consumes more over that time
1TB plots are 4194304 nonces.
3.5 h is 12600 seconds.
12600 seconds is 210 minutes.
4194304 nonces / 210 minutes ≈ 19972 nonces per minute.


yeah i know, still 3x the consumption vs 2.5x the nonces lol

can anyone explain why a 1TB HDD gets 1 payment of 250 from cryptoport while a 300GB got 5 payments plus 3000 Burst 10 mins after setting up rewards in the wallet... is the pool broken?

wow!
what's your address? both for the 1TB and the 300GB

how many TB on your pool?

i don't know