
Topic: [ANN][BURST] Burst | Efficient HDD Mining | New 1.2.3 Fork block 92000 - page 140.

hero member
Activity: 619
Merit: 500
A buy offer for 5 BURST per share has been placed. I will add another, bigger one later, but at the moment I'm on the road...
legendary
Activity: 1120
Merit: 1004
Hi,
with this announcement I am cancelling my pool.burstcoin.de asset. As written in the terms, all users should sell their shares to me within 7 days.

If you have any questions, feel free to PM me.

Regards
boba

Did you post a buy offer on the asset page? Or do I need to send the shares to you manually, and then you send me the BURST?
hero member
Activity: 619
Merit: 500
It will happen soon. Don't worry.

- daWallet

[Burst Windows Client Update 0.1.9.2]

http://sourceforge.net/projects/burstwindowswallet/
https://github.com/dawallet/burstwindowswallet

- Updated Blago's miner to the newest version
- Fixed plots being created too large when the slider is at maximum


You can just install the new version over the old one... or replace the .exe and Blago's miner folder.

Feedback welcome

Thanks for the update. I will try it today and give you feedback!
hero member
Activity: 619
Merit: 500
Hi,
with this announcement I am cancelling my pool.burstcoin.de asset. As written in the terms, all users should sell their shares to me within 7 days.

If you have any questions, feel free to PM me.

Regards
boba
sr. member
Activity: 302
Merit: 250
It will happen soon. Don't worry.

- daWallet

[Burst Windows Client Update 0.1.9.2]

http://sourceforge.net/projects/burstwindowswallet/
https://github.com/dawallet/burstwindowswallet

- Updated Blago's miner to the newest version
- Fixed plots being created too large when the slider is at maximum


You can just install the new version over the old one... or replace the .exe and Blago's miner folder.

Feedback welcome
legendary
Activity: 1120
Merit: 1004
It will happen soon. Don't worry.

- daWallet

So this time we'll see a real new thread organised by a coordinated team that has goals? Not a thug that secedes from the group?
jr. member
Activity: 112
Merit: 2
It will happen soon. Don't worry.

- daWallet
legendary
Activity: 1288
Merit: 1002
I would expect the price to go up 300% from here just on the news of a new development team being in place.



Insider news?


Seriously, this is such a great coin, but it's all Indians and no chiefs in this thread; hopefully an official and competent dev team can be recruited soon to organize the house Roll Eyes. The original poster is long gone, so opening a new thread wouldn't be a bad idea.
hero member
Activity: 868
Merit: 1000
I would expect the price to go up 300% from here just on the news of a new development team being in place.



Insider news?
legendary
Activity: 1401
Merit: 1008
northern exposure
Guys, does anyone know why none of the block explorers in the OP work? Or whether someone is working on fixing them?

I bought 16 TB today, after having 12 TB crash on me inside one week (none of them Burst drives, btw). However, 8 TB of the 12 was not a faulty drive but a faulty cable, so now I have way more space than I had planned for. I'll be adding approx. 10 TB, if not more, to my plots over the next week or so.

If Burst ever takes off, everyone will regret that they didn't buy and mine more this autumn. I would expect the price to go up 300% from here just on the news of a new development team being in place.



Where did you read all this news?
sr. member
Activity: 286
Merit: 250
I bought 16 TB today, after having 12 TB crash on me inside one week (none of them Burst drives, btw). However, 8 TB of the 12 was not a faulty drive but a faulty cable, so now I have way more space than I had planned for. I'll be adding approx. 10 TB, if not more, to my plots over the next week or so.

If Burst ever takes off, everyone will regret that they didn't buy and mine more this autumn. I would expect the price to go up 300% from here just on the news of a new development team being in place.

sr. member
Activity: 416
Merit: 250
I'm the owner; I only wrote that I would sell it if somebody made me a good offer. The pool was down briefly because of an unexpected error in the task. It's up again now.
Shit, lol, I just started mining on another pool. Yours is still the best with the most consistent payouts; back to your pool!
Guess you haven't mined on ninja then .....
H.

Is ninja a better pool?
Yes, I recommend it Wink
newbie
Activity: 15
Merit: 0
Or, in my case: open the plot file, seek to 1323 * 9,142,272 scoop entries from the start, and read the next 9,142,272 entries (which are all consecutive, since the file was written in optimized order). No difference in bandwidth, and certainly not 4096 times less bandwidth.

You're correct; I forgot that they're grouped inside the file. If seeking is supported, then splitting them into different files is unnecessary. I assumed that seek was not supported.
hero member
Activity: 785
Merit: 500
BURST got Smart Contracts (AT)
I'm the owner; I only wrote that I would sell it if somebody made me a good offer. The pool was down briefly because of an unexpected error in the task. It's up again now.

Shit, lol, I just started mining on another pool. Yours is still the best with the most consistent payouts; back to your pool!

Guess you haven't mined on ninja then .....

H.


Is ninja a better pool?

Pick one:
Code:
Pool Statistics
Current time: 2015-10-26 08:23:24 UTC
Block: 157,623
Difficulty: 5,990
Est. Networksize (PB): 6.2814501429195

BURST.MiningHere.com
Pool balance: 84,777
Registered Miners: 568

burst.ninja
Pool balance: 23,083
Registered Miners: 453

burst.poolto.be
Pool balance: 4,986
Registered Miners: 879

DevPool v2
Pool balance: 606,425
Registered Miners: 564

mining.tompool.org
Pool balance: 311
Registered Miners: 58

pool.blago
Pool balance: 2
Registered Miners: 148

pool.burstcoin.de
Pool balance: 13,373
Registered Miners: 124

pool.burstcoin.it
Pool balance: 211
Registered Miners: 58
member
Activity: 70
Merit: 10
I'm the owner; I only wrote that I would sell it if somebody made me a good offer. The pool was down briefly because of an unexpected error in the task. It's up again now.

Shit, lol, I just started mining on another pool. Yours is still the best with the most consistent payouts; back to your pool!

Guess you haven't mined on ninja then .....

H.


Is ninja a better pool?
hero member
Activity: 539
Merit: 500
I can mount Amazon Cloud Drive as a physical drive, which reads as somewhere around 1 PB of storage. Can I use this to mine this coin?

Theoretically yes, but your bandwidth to the cloud is going to be a limiting factor.

To get my plots to mine in an acceptable time window, I'm looking at 250-300+ MB/sec.

H.


I can see getting this speed and then some out of a VPS with a 10 Gbps connection. Amazon Cloud Drive does throttle uploads, but I don't think they throttle downloads. So reading what's on the Amazon Cloud Drive would be nearly as fast as a physical hard drive with the right setup... in theory.

In theory, yes, if your miner has access to all that bandwidth. If you mine in the cloud it's possible; if you mine locally, it's highly improbable.
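
For a rough sense of the numbers in this exchange: each block only needs one scoop, i.e. 1/4096th of the total plotted data, so the per-block read scales with plot size divided by 4096. Below is a minimal Python sketch of that arithmetic for the 1 PB figure mentioned above; the ~4-minute average Burst block time used here is an assumption, not something stated in the thread.
Code:
# Back-of-the-envelope check (assumptions: only one scoop per nonce is read
# each block, 4096 scoops per nonce, ~240 s average block time -- the block
# time is an assumption, not taken from this thread).
plotted_bytes = 1e15                     # ~1 PB of plots on the cloud drive
bytes_per_block = plotted_bytes / 4096   # ~244 GB read per block
block_time_s = 240                       # assumed average block time

required_mb_per_s = bytes_per_block / block_time_s / 1e6
print(f"Read per block: {bytes_per_block / 1e9:.0f} GB")
print(f"Sustained read needed: {required_mb_per_s:.0f} MB/s")
# -> roughly 1,000 MB/s of sustained reads just to scan 1 PB of plots
#    within one average block.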
hero member
Activity: 539
Merit: 500
I can mount Amazon Cloud Drive as a physical drive, which reads as somewhere around 1 PB of storage. Can I use this to mine this coin?

Theoretically yes, but your bandwidth to the cloud is going to be a limiting factor.

To get my plots to mine in an acceptable time window, I'm looking at 250-300+ MB/sec.

H.


As I recall, by tweaking the plotter it's possible to store the data needed for one block per file (one scoop per file), creating a set of 4096 files and only reading one of them for each block. So, the required bandwidth can be 4096 times smaller.


So, instead of reading 9,142,272 nonces from one single, optimized file (and that's one of 50), I should read one nonce from each of 9,142,272 individual files? I somehow don't see that being more efficient ....

Create a plot (256 KB) -> store scoop[0] to plot0.dat, store scoop[1] to plot1.dat, etc., for each of the 4096 scoops.
Create the next plot (256 KB) -> append scoop[0] to plot0.dat, append scoop[1] to plot1.dat, etc.
Create the next plot (256 KB) -> append scoop[0] to plot0.dat, append scoop[1] to plot1.dat, etc.

This way there will be 4096 large files (plot0.dat ... plot4095.dat) which can be uploaded to the cloud. You decide how large these files will be; of course, they can be split further if necessary. They will also be badly fragmented when stored on an HDD, but that doesn't matter because they will be uploaded to the cloud one by one anyway.

When the current block needs scoop 1323, simply fetch the data from plot1323.dat, and so on: 4096 times less bandwidth, because unneeded scoops won't be downloaded.

P.S. In normal mining mode only ONE scoop out of the 4096 scoops in each plot is used per block. Keeping everything in one file is inefficient in that sense, but it was done this way because read speed is not the bottleneck when using local HDDs, and it avoids fragmentation while plotting (leading to faster plotting, as there's no defrag step).

Or, in my case: open the plot file, seek to 1323 * 9,142,272 scoop entries from the start, and read the next 9,142,272 entries (which are all consecutive, since the file was written in optimized order). No difference in bandwidth, and certainly not 4096 times less bandwidth.
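
To make the seek arithmetic in that reply concrete, here is a minimal Python sketch of reading one scoop from a single optimized plot file. It assumes the usual layout of 64-byte scoops with all entries for a given scoop stored consecutively; the file name and nonce count are placeholders taken from the example numbers above.
Code:
# Minimal sketch, assuming 64-byte scoops and an optimized plot layout
# (all entries for one scoop stored consecutively). File name and nonce
# count below are placeholders, not real paths.
SCOOP_SIZE = 64            # bytes per scoop entry

def read_scoop(path, nonces_in_file, scoop_index):
    """Return the scoop_index-th scoop entry of every nonce in the file."""
    offset = scoop_index * nonces_in_file * SCOOP_SIZE
    length = nonces_in_file * SCOOP_SIZE
    with open(path, "rb") as f:
        f.seek(offset)          # jump straight to this scoop's region
        return f.read(length)   # one large consecutive read, as described

# e.g. scoop 1323 from a 9,142,272-nonce optimized plot:
# data = read_scoop("optimized_plot.dat", 9_142_272, 1323)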
sr. member
Activity: 420
Merit: 250
I can mount Amazon Cloud Drive as a physical drive, which reads as somewhere around 1 PB of storage. Can I use this to mine this coin?

Theoretically yes, but your bandwidth to the cloud is going to be a limiting factor.

To get my plots to mine in an acceptable time window, I'm looking at 250-300+ MB/sec.

H.


I can see getting this speed and then some out of a VPS with a 10 Gbps connection. Amazon Cloud Drive does throttle uploads, but I don't think they throttle downloads. So reading what's on the Amazon Cloud Drive would be nearly as fast as a physical hard drive with the right setup... in theory.
hero member
Activity: 588
Merit: 500
I can mount Amazon Cloud Drive as a physical drive, which reads as somewhere around 1 PB of storage. Can I use this to mine this coin?

Theoretically yes, but your bandwidth to the cloud is going to be a limiting factor.

To get my plots to mine in an acceptable time window, I'm looking at 250-300+ MB/sec.

H.


As I recall, by tweaking the plotter it's possible to store the data needed for one block per file (one scoop per file), creating a set of 4096 files and only reading one of them for each block. So, the required bandwidth can be 4096 times smaller.


So, instead of reading 9,142,272 nonces from one single, optimized file (and that's one of 50), I should read one nonce from each of 9,142,272 individual files? I somehow don't see that being more efficient ....

Create a plot (256 KB) -> store scoop[0] to plot0.dat, store scoop[1] to plot1.dat, etc., for each of the 4096 scoops.
Create the next plot (256 KB) -> append scoop[0] to plot0.dat, append scoop[1] to plot1.dat, etc.
Create the next plot (256 KB) -> append scoop[0] to plot0.dat, append scoop[1] to plot1.dat, etc.

This way there will be 4096 large files (plot0.dat ... plot4095.dat) which can be uploaded to the cloud. You decide how large these files will be; of course, they can be split further if necessary. They will also be badly fragmented when stored on an HDD, but that doesn't matter because they will be uploaded to the cloud one by one anyway.

When the current block needs scoop 1323, simply fetch the data from plot1323.dat, and so on: 4096 times less bandwidth, because unneeded scoops won't be downloaded.

P.S. In normal mining mode only ONE scoop out of the 4096 scoops in each plot is used per block. Keeping everything in one file is inefficient in that sense, but it was done this way because read speed is not the bottleneck when using local HDDs, and it avoids fragmentation while plotting (leading to faster plotting, as there's no defrag step).

You're right... ish.

Even plotting only one scoop per file, you would still have to read 1 PB / 4096 per block...

You can easily mine on AWS using EC2 compute.
newbie
Activity: 15
Merit: 0
I can mount Amazon Cloud Drive as a physical drive, which reads as somewhere around 1 PB of storage. Can I use this to mine this coin?

Theoretically yes, but your bandwidth to the cloud is going to be a limiting factor.

To get my plots to mine in an acceptable time window, I'm looking at 250-300+ MB/sec.

H.


As I recall, by tweaking the plotter it's possible to store the data needed for one block per file (one scoop per file), creating a set of 4096 files and only reading one of them for each block. So, the required bandwidth can be 4096 times smaller.


So, instead of reading 9,142,272 nonces from one single, optimized file (and that's one of 50), I should read one nonce from each of 9,142,272 individual files? I somehow don't see that being more efficient ....

Create a plot (256 KB) -> store scoop[0] to plot0.dat, store scoop[1] to plot1.dat, etc., for each of the 4096 scoops.
Create the next plot (256 KB) -> append scoop[0] to plot0.dat, append scoop[1] to plot1.dat, etc.
Create the next plot (256 KB) -> append scoop[0] to plot0.dat, append scoop[1] to plot1.dat, etc.

This way there will be 4096 large files (plot0.dat ... plot4095.dat) which can be uploaded to the cloud. You decide how large these files will be; of course, they can be split further if necessary. They will also be badly fragmented when stored on an HDD, but that doesn't matter because they will be uploaded to the cloud one by one anyway.

When the current block needs scoop 1323, simply fetch the data from plot1323.dat, and so on: 4096 times less bandwidth, because unneeded scoops won't be downloaded.

P.S. In normal mining mode only ONE scoop out of the 4096 scoops in each plot is used per block. Keeping everything in one file is inefficient in that sense, but it was done this way because read speed is not the bottleneck when using local HDDs, and it avoids fragmentation while plotting (leading to faster plotting, as there's no defrag step).
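
As a rough illustration of the per-scoop splitting described above, here is a small Python sketch that appends each scoop of a freshly generated nonce to its own plotN.dat file and then fetches only the single file needed for the current block. generate_nonce() and the scoop index passed to read_scoop_file() are placeholders; a real plotter derives the nonce data with Shabal256, and a real miner derives the scoop number from the block's generation signature.
Code:
# Sketch of the split-by-scoop layout (placeholders: generate_nonce() stands
# in for the real Shabal256 plotter; the caller must supply the block's
# scoop number, which a real miner derives from the generation signature).
import os

SCOOP_SIZE = 64
SCOOPS_PER_NONCE = 4096   # so one nonce = 256 KB

def generate_nonce():
    # Placeholder only -- a real plotter fills this with Shabal256 output.
    return os.urandom(SCOOP_SIZE * SCOOPS_PER_NONCE)

def append_nonce_split(plot_dir, nonce_data):
    """Append each of the 4096 scoops of one nonce to its own plotN.dat."""
    os.makedirs(plot_dir, exist_ok=True)
    for i in range(SCOOPS_PER_NONCE):
        scoop = nonce_data[i * SCOOP_SIZE:(i + 1) * SCOOP_SIZE]
        with open(os.path.join(plot_dir, f"plot{i}.dat"), "ab") as f:
            f.write(scoop)

def read_scoop_file(plot_dir, scoop_index):
    """For the current block, fetch only the one file holding its scoop."""
    with open(os.path.join(plot_dir, f"plot{scoop_index}.dat"), "rb") as f:
        return f.read()   # 1/4096th of the total plotted data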