
Topic: [ANN][BURST] Burst | Efficient HDD Mining | New 1.2.3 Fork block 92000 - page 929. (Read 2170895 times)

full member
Activity: 212
Merit: 100
Sorry if this got answered already, but I can't find it. Let's say I have multiple HDDs: can I just copy the miner and plot generator onto the other drives and run both the miner and the plot generator from there on each HDD?
legendary
Activity: 1164
Merit: 1010
I got a new hard drive in the mail today.  This computer is already busy plotting, so I'm gonna use a different one to start plotting on.  Can I still just copy over the pool PocMiner, adjust the plot ranges, and then generate/mine on another computer?  Will this still connect to the v2 pool with the same account?
sr. member
Activity: 350
Merit: 250
Is there any way to generate plot files locally using an EC2 instance?

For example, I have 4TB of local storage, but I want to plot in the cloud by streaming the files down. Does something like this exist? I DON'T want to store the files in EBS, since that would cost me a ton. Or is downloading from EBS and then wiping it the only way? Does S3 charge for the whole month up front, or is it billed hourly/daily/averaged?
hero member
Activity: 518
Merit: 500
Hi everyone,

After many hours of setup I finally made it. I have a 1TB generation in progress and 3x100GB already finished.
I would like to test the V2 pool, but I don't have any BURST for now. Could someone send me 1 BURST to test it, please? Here is my address: BURST-YA29-QCEW-QXC3-BKXDL.

Regarding the plot generation, I found an OpenCL implementation of Shabal (https://github.com/aznboy84/X15GPU/blob/master/kernel/shabal.cl) that could be used to build a GPU version of the generator. I will try to work on it when I have some free time.

Regards

Hi everyone,

As promised, I have been working on a GPU plot generator over the last few days. I made a little program built on top of OpenCL, and it seems to work pretty well in CPU mode. Unfortunately, I can't test the GPU mode as it requires a very powerful graphics card (with at least 46KB of private memory per compute unit, because the algorithm needs at least 4096*64 static bytes to store an entire plot).
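To put those buffer sizes in perspective, here is a quick sketch of the arithmetic: the 4096*64 bytes mentioned above is one nonce's worth of data (4096 scoops of 64 bytes each). The "marketed" decimal terabyte is my assumption for the drive-size conversion.

```python
# Back-of-the-envelope figures for the buffer sizes discussed above:
# one nonce is 4096 scoops of 64 bytes, which the kernel must hold.
SCOOPS_PER_NONCE = 4096
SCOOP_SIZE = 64  # bytes

nonce_size = SCOOPS_PER_NONCE * SCOOP_SIZE
print(nonce_size)           # 262144 bytes, i.e. 256 KiB per nonce

# How many nonces fit on a given drive (assuming a decimal,
# "marketed" terabyte of 10^12 bytes):
TB = 1000 ** 4
print(4 * TB // nonce_size)  # ~15.26 million nonces on a 4 TB drive
```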

Here is a preview you can test for now :
gpuPlotGenerator-src-1.0.0.7z : https://mega.co.nz/#!bcF2yKKL!3Ud86GaibgvwBehoxkbO4UNdiBgsaixRx7ksHrgNbDI
gpuPlotGenerator-bin-win-x86-1.0.0.7z : https://mega.co.nz/#!HJsziTCK!UmAMoEHQ3z34R4RsXoIkYo9rYd4LnFtO_pw-R4KObJs

I will build another release at the end of the day with some minor improvements (threads-per-compute-unit selection, output of OpenCL error codes, improvement of the Makefile to generate the distribution directly).
I will also try to figure out another way to dispatch the work between the GPU threads, to reduce the amount of private memory needed by the program.

Windows users can use the binary version directly.
Linux users: just download the source archive, make sure to modify the OpenCL library and lib path in the Makefile (and maybe the executable name), and build the project via "make". To run the program, you need the "kernel" and "plots" directories beside the executable.

The executable usage is: ./gpuPlotGenerator

The parameters are the same as the original plot generator's, minus the thread count.

If you find bugs or want new features, let me know.

If you want to support me, here are my Bitcoin and Burst addresses :
Bitcoin: 138gMBhCrNkbaiTCmUhP9HLU9xwn5QKZgD
Burst: BURST-YA29-QCEW-QXC3-BKXDL

Regards

Unfortunately, I can't test the GPU mode as it requires a very powerful graphics card (with at least 46KB of private memory per compute unit, because the algorithm needs at least 4096*64 static bytes to store an entire plot).
It's nice to see someone else working on this, since I seem to have failed at it.

Private memory is actually part of global memory on AMD cards, so storing data in private isn't any better than just using global for everything; it's local memory that needs to be targeted for the massive speedup. However, no AMD card has more than 64KB of local memory per workgroup, which makes storing it all in local impossible.

I haven't tried your implementation yet, but on my own first attempt I also used global for everything, and the result was faster than the Java plotter but slower than dcct's C plotter. My second attempt used a 32KB local buffer that I rotated through to hold the data currently being hashed, but I couldn't figure out how to copy it back to global fast enough, and the local-to-global copy killed the performance.

You might be interested in those kernels here: https://bitcointalksearch.org/topic/m.8695829

Thanks, I will look at your kernels to see if I can find a better solution.

Here is the new version. I reduced the amount of memory used from 40KB to about 1KB per unit. The only drawback is that it requires twice as much global memory as before. I will look for a way to reduce this overhead later.
In CPU mode, it all goes pretty well (when no graphics card is detected).
The GPU mode is still kind of buggy on my graphics card (an old GeForce 9300M GS); I don't know the exact reason yet. Sometimes it works, sometimes not. I will try to fix this issue tomorrow.

Here are the files :
gpuPlotGenerator-src-1.1.0.7z : https://mega.co.nz/#!iYFWAL5B!BvtmRQ5qGq4gGwjDglFNtDtNIX4LDaUvATBtClBdTlQ
gpuPlotGenerator-bin-win-x86-1.1.0.7z : https://mega.co.nz/#!aBVGBBQD!tBsRtb8VrHR12_anrFTrl41U0fPQu_OqFnxyi5nCyBY

For Linux users, the Makefile has a new target named "dist" that builds and copies all the necessary files to the "bin" directory.

The executable usage is: ./gpuPlotGenerator

: the path to the plots directory
: number of parallel threads for each work group

So the usage would be something like this: "D:/gpuPlotGenerator 0  819200 4096 "

Is that format correct? Is the thread count needed for GPU plotting (pointed out in bold)? What's the nonce/minute rate?
legendary
Activity: 1596
Merit: 1000
How long would 4TB take to plot on a last gen i7 mobile? Anyone know how many nonces/second and how long it would take? How could I calculate this?

Don't wanna waste my time.

My laptop's i7 4700QM @ 3.2GHz generates 3700 nonces per minute using dcct's plotter running Ubuntu 14.04.
hero member
Activity: 631
Merit: 501
Currently I am generating plots in 2GB files at a time while I give this a go.
One question:

As I generate new plot files, I know I need to move them to my v2 folder for pooled mining.
If I move new plot files in, do I need to restart my miner -- or will it automatically pick up the new plot files?
full member
Activity: 164
Merit: 100
How long would 4TB take to plot on a last gen i7 mobile? Anyone know how many nonces/second and how long it would take? How could I calculate this?

Don't wanna waste my time.
Not sure about your specific processor, but my 4-core AMD Phenom II X4 965 Black Edition with 4GB RAM is plotting around 2400 nonces/minute.
Calculating it out, that's about 4.3 days to fully plot 4TB.
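The arithmetic behind that estimate can be sketched quickly, assuming one nonce is 4096 scoops × 64 bytes = 256 KiB and "4 TB" means a decimal (marketed) terabyte:

```python
NONCE_SIZE = 4096 * 64        # 262144 bytes per nonce
DRIVE_BYTES = 4 * 1000 ** 4   # "4 TB" as marketed (decimal)
NPM = 2400                    # nonces per minute reported above

nonces = DRIVE_BYTES // NONCE_SIZE   # ~15.26 million nonces
minutes = nonces / NPM
print(round(minutes / 60 / 24, 1))   # ~4.4 days, matching the rough estimate
```

A faster plotter scales this linearly: at 3700 NPM the same drive takes under 3 days.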
full member
Activity: 494
Merit: 100
Got about 3.4 TB. I was able to get 4K BURST per day, so at the current rate of 0.00000430 BTC I'll be getting about 0.01728 BTC per day. Most profitable coin I've ever mined. Will buy more TB soon.
legendary
Activity: 1713
Merit: 1029
How long would 4TB take to plot on a last gen i7 mobile? Anyone know how many nonces/second and how long it would take? How could I calculate this?

Don't wanna waste my time.

You're probably looking at somewhere between 4 and 5 days, depending on how good of a mobile i7 you have.
sr. member
Activity: 350
Merit: 250
How long would 4TB take to plot on a last gen i7 mobile? Anyone know how many nonces/second and how long it would take? How could I calculate this?

Don't wanna waste my time.
sr. member
Activity: 280
Merit: 250
The GPU mode is still kind of buggy on my graphic card (an old GeForce 9300M GS), don't know the exact reason yet. Sometimes it works, sometimes not. I will try to fix this issue tomorrow.

I would never say never, but I am quite skeptical about the nonce computation ever being modified for major speed gains on current GPUs.  It requires each thread to crawl over every part of a 256KB buffer while computing Shabal hashes for all of it, and the cores on top-end GPUs only contain 64KB of memory.  I can't see any way to avoid a huge amount of memory bandwidth.  This is a different problem from scrypt (where you can trade the memory for extra computation), in that it actually requires access to the memory, or perhaps some fast way to find Shabal preimages.

Well, only the last hash crawls over the whole thing. The earlier ones have a much lower cap on what they need to hash, and that can be kept in local memory.
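For anyone following along, the shape being described (intermediate hashes each capped to a small window, then one final hash over the whole buffer) looks roughly like this sketch of the PoC nonce construction as I understand it. SHA-256 stands in for Shabal-256 here, since Python's hashlib has no Shabal, so the output is NOT a valid plot; it only illustrates the memory-access pattern.

```python
import hashlib

HASH_SIZE = 32          # bytes per hash output
HASH_CAP = 4096         # each intermediate hash reads at most this much
PLOT_SIZE = 4096 * 64   # one nonce: 4096 scoops x 64 bytes

def stand_in_shabal256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()  # placeholder for Shabal-256

def generate_nonce(seed: bytes) -> bytes:
    buf = bytearray(PLOT_SIZE + len(seed))
    buf[PLOT_SIZE:] = seed
    # Intermediate hashes: each reads at most HASH_CAP bytes, so only a
    # small sliding window needs to sit in fast (local) memory on a GPU.
    for i in range(PLOT_SIZE, 0, -HASH_SIZE):
        length = min(len(buf) - i, HASH_CAP)
        buf[i - HASH_SIZE:i] = stand_in_shabal256(bytes(buf[i:i + length]))
    # The final hash is the one that "crawls over the whole thing":
    final = stand_in_shabal256(bytes(buf))
    return bytes(b ^ final[j % HASH_SIZE] for j, b in enumerate(buf[:PLOT_SIZE]))

nonce = generate_nonce(b"\x00" * 16)
print(len(nonce))  # 262144
```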
sr. member
Activity: 280
Merit: 250
You don't get paid twice for doing the same work for the same employer just because you send them your bill to two different locations.

Might work with unorganized employers but not with BURST  Wink
sr. member
Activity: 378
Merit: 254
small fry
Come on, miners! Get your SHA rigs pointed at stratum+tcp://pool.burstmultipool.com:5555

Our profitability stats for the current shift update on the main page every 10 minutes! We need more scrypt miners to start solving some blocks, but our SHA miners are currently nearly 1/3 more profitable than mining BTC directly.

Why settle for a generic NOMP rip?



Just pointed 1TH/s at the SHA-256 port, and will later switch 150MH/s to the scrypt port. Any ETA on the stats page? Hate not seeing what my workers are doing.

Hey man,

The 'My Stats' page is a bit borked right now in terms of showing the separate coins.  It is still displaying your historical hashrates correctly, though, as well as your estimated number of BURST at the next payout.
The entire server is under a constant DDoS attack in excess of 5 Gbps; thankfully our mitigation service is working well.

A shortcut that can be used to access your miners stats is: http://burstmultipool.com/miner/
so for example:
http://burstmultipool.com/miner/BURST-79PK-DGC2-M4XP-HUAVB


I will get the coins all displaying on it properly later tonight. Smiley

OK cool, that's better than nothing!  Grin I just switched over 150MH/s of ASICs to scrypt.

It seems your pool is only handing out shares at a max diff of 512? If so, could you increase it to at least 2048 (ideally up to 4096 for A2s)?
Will make that adjustment right now, will require a short stratum reboot.
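For context on why diff 512 is low for a 1TH/s rig: with Bitcoin-style share difficulty (one share per diff × 2^32 hashes on average, which I'm assuming is what this stratum uses), the expected share rate is easy to estimate:

```python
def shares_per_minute(hashrate_hs: float, share_diff: float) -> float:
    # Bitcoin-style share difficulty: one share per diff * 2**32
    # hashes on average (assumed to apply to this pool's stratum).
    return 60.0 * hashrate_hs / (share_diff * 2**32)

# 1 TH/s on SHA-256 at the pool's current max diff of 512:
print(round(shares_per_minute(1e12, 512), 1))   # ~27 shares/min
# Quadrupling the diff to 2048 cuts the share traffic by 4x:
print(round(shares_per_minute(1e12, 2048), 1))  # ~6.8 shares/min
```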
Let's all hop on #BURSTCOIN
member
Activity: 112
Merit: 10
So what happens if I run a miner for each BURST pool using the same plots / nonces at the same time?

Nothing. You can only win once. It's like buying a lotto ticket with the same numbers from 4 different stores.

Just wondered if I would get more shares.

No you would not.

You don't get paid twice for doing the same work for the same employer just because you send them your bill to two different locations.
legendary
Activity: 1582
Merit: 1019
011110000110110101110010
So what happens if I run a miner for each BURST pool using the same plots / nonces at the same time?

Nothing. You can only win once. It's like buying a lotto ticket with the same numbers from 4 different stores.

Just wondered if I would get more shares.
full member
Activity: 494
Merit: 100
Can't get it to work. What's the recommended setup for R9-series cards?

sr. member
Activity: 280
Merit: 250
What exactly is plot overlapping? How do I avoid it?

If two or more plot files contain the same nonces, they will produce the same deadlines. That won't increase your chances of finding a block and should be avoided.

You can check your files with this tool:

https://bchain.info/BURST/tools/overlap
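The check itself is just interval intersection over nonce ranges. A minimal sketch, assuming the conventional accountID_startNonce_nonceCount_stagger plot filename layout used by the plotters discussed in this thread (an assumption; the tool above does the same job online):

```python
def parse_plot_name(name: str):
    # Assumed filename layout: <accountID>_<startNonce>_<nonces>_<stagger>
    account, start, count, _stagger = name.split("_")
    return account, int(start), int(start) + int(count)  # [start, end)

def overlaps(file_a: str, file_b: str) -> bool:
    acc_a, a0, a1 = parse_plot_name(file_a)
    acc_b, b0, b1 = parse_plot_name(file_b)
    # Nonce ranges only clash within the same account ID.
    return acc_a == acc_b and a0 < b1 and b0 < a1

print(overlaps("12345_0_819200_4096", "12345_819200_819200_4096"))  # False
print(overlaps("12345_0_819200_4096", "12345_500000_819200_4096"))  # True
```

When plotting across multiple drives, start each new file's nonce range where the previous one ends, as in the first (non-overlapping) example.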