Author

Topic: [ANN][BURST] Burst | Efficient HDD Mining | New 1.2.3 Fork block 92000 - page 1241. (Read 2171078 times)

sr. member
Activity: 394
Merit: 250
Crypto enthusiast
If anyone is interested, the conclusion of my test to see if I could speed up plot generation by farming it out using cluster/cloud computing is: yes, it can be done, but it is probably not worth it. This is what I did using Amazon EC2...

thanks for sharing. sent you some for your efforts.

Thanks also, paul. I would send you some, but no luck for the past 72+ hours of mining. These guys with smaller HDDs are getting blocks left and right within 48 hours...

I guess that's the frustrating thing about this coin: it does seem like luck has a lot to do with it. It would be interesting to get a poll going to see what the average number of blocks found per miner is so far.
hero member
Activity: 686
Merit: 500
What is "Generate token" in the wallet home page!?
sr. member
Activity: 532
Merit: 250
hero member
Activity: 518
Merit: 500
If anyone is interested, the conclusion of my test to see if I could speed up plot generation by farming it out using cluster/cloud computing is: yes, it can be done, but it is probably not worth it. This is what I did using Amazon EC2...

thanks for sharing. sent you some for your efforts.

Thanks also, paul. I would send you some, but no luck for the past 72+ hours of mining. These guys with smaller HDDs are getting blocks left and right within 48 hours...
hero member
Activity: 1400
Merit: 505
If anyone is interested, the conclusion of my test to see if I could speed up plot generation by farming it out using cluster/cloud computing is: yes, it can be done, but it is probably not worth it. This is what I did using Amazon EC2...

Why do you need NFS?
1. First create a large instance, 8 cores or more.
2. Mount the EBS volume to that instance.
3. Generate the plots; while they are generating you can mine from them too (for 1 TB it will take about half a day).
4. When the plots are generated, detach the EBS volume, then shut down and remove that instance.
5. Create a new micro instance, attach that EBS volume, and mine from it (a rough sketch of steps 4-5 is below).

This will cost you about $60 / TB / month.

Anyway, Azure is cheaper; you can even get it for free, but I can't and won't tell you how.
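If you prefer to script this instead of clicking through the AWS console, steps 4 and 5 look roughly like the sketch below, using boto3 (the AWS SDK for Python). The volume ID, instance IDs, region and device name are placeholders, not values from this thread.

Code:
# Rough sketch of steps 4-5 above with boto3; all IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VOLUME_ID = "vol-0123456789abcdef0"   # the EBS volume holding the plots
PLOTTER_ID = "i-0aaaaaaaaaaaaaaa1"    # the big instance used for plotting
MINER_ID = "i-0bbbbbbbbbbbbbbb2"      # the cheap micro instance used for mining

# Step 4: detach the plot volume, then get rid of the expensive instance.
ec2.detach_volume(VolumeId=VOLUME_ID)
ec2.get_waiter("volume_available").wait(VolumeIds=[VOLUME_ID])
ec2.terminate_instances(InstanceIds=[PLOTTER_ID])

# Step 5: attach the same volume to the micro instance. On that instance you
# would then mount it (e.g. /dev/xvdf on /mnt/plots) and point the miner at it.
ec2.attach_volume(VolumeId=VOLUME_ID, InstanceId=MINER_ID, Device="/dev/sdf")

The same thing can of course be done by hand in the web console or with the aws CLI.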
sr. member
Activity: 394
Merit: 250
Crypto enthusiast
If anyone is interested, the conclusion of my test to see if I could speed up plot generation by farming it out using cluster/cloud computing is: yes, it can be done, but it is probably not worth it. This is what I did using Amazon EC2...

thanks for sharing. sent you some for your efforts.
hero member
Activity: 518
Merit: 500
If anyone is interested, the conclusion of my test to see if I could speed up plot generation by farming it out using cluster/cloud computing is: yes, it can be done, but it is probably not worth it. This is what I did using Amazon EC2...

At $70 per day, that's $2100 per month. You could nearly buy a mobo, a used 6- or 12-core (12/24-thread) Intel CPU, RAM and some HDDs for that price, give or take... LOL
hero member
Activity: 820
Merit: 1000
If anyone is interested, the conclusion of my test to see if I could speed up plot generation by farming it out using cluster/cloud computing is:
Yes it can be done, but it is probably not worth it.  This is what I did using Amazon EC2...

First, I created a single instance (2 cores) to act as the server for plot storage and also as the wallet server / miner.
This instance had a 1TB Elastic Block Store volume attached to it (1TB is the max EBS volume size on AWS). I made this volume available to the network using NFS.

I then created a "plot generator" instance with much higher CPU capacity (C3.xlarge, 8 cores). I mounted the shared volume so it appeared like a local disk and started creating 10GB plots.
I then created a few scripts so that, upon startup, the instance would auto-mount the NFS volume, work out what the last used nonce range was, and start generating the next 10GB plot (a rough sketch of this is below).
Finally, I created an image of this instance and spun up 100 more instances like it (spot rates, nice and cheap).

To begin with it all looked great, but I soon hit the bandwidth limit of the instance I was using as the server. In essence the server didn't have enough network bandwidth to cope with all of the data being copied to it. This restricted plot creation to a little over 100GB / hour. Changing the server to an instance type with "high" network performance got this up to around 400GB / hour. Had I left it running, I would have created the full 1TB of plots in 2.5 hours, at a cost of around $20. This could have been repeated, or even parallelised, to create several 1TB volumes full of plots in a relatively short amount of time.

After all the fiddling about getting it working, I spent around $70 testing this stuff out, only to conclude it was too much hassle. And as OP said earlier, the cost of EBS storage on AWS is too high to make this a viable long-term mining option. That said, had I done all of this on day one, I would have scaled the process out, had 10TB+ up within the first few hours, and dominated the mining. Good to know I have the process licked for the next PoC coin Smiley

Unfortunately I STILL don't have any BURST, as I don't have any spare capacity at home, so I haven't been able to mine at all Sad. So if anyone would like to donate some coins to offset the cost of testing all of this, my address is BURST-WADY-CBZE-HSJU-2NH5G

Many thanks!
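For anyone wanting to reproduce the startup-script part of this, here is a minimal sketch of the "find the last used nonce range and plot the next 10GB" step. It assumes plot files follow the common <accountID>_<startNonce>_<nonces>_<stagger> naming convention and that some plot generator binary is available; the account ID, paths and plotter flags are placeholders, so adapt them to whatever plotter you actually use.

Code:
# Minimal sketch: find the next free nonce range from existing plot files and
# kick off the next ~10GB plot. Filenames are assumed to follow the common
# <accountID>_<startNonce>_<nonces>_<stagger> convention; the plotter command
# and its flags below are placeholders for whatever plot generator you use.
import os
import subprocess

ACCOUNT = "1234567890123456789"   # numeric account ID (placeholder)
PLOT_DIR = "/mnt/plots"           # the NFS-mounted shared plot directory
NONCES_PER_PLOT = 40960           # 40960 nonces * 256 KB each is roughly 10 GB
STAGGER = 4096

def next_start_nonce(plot_dir, account):
    """Return the first nonce not covered by any existing plot for this account."""
    end = 0
    for name in os.listdir(plot_dir):
        parts = name.split("_")
        if (len(parts) == 4 and parts[0] == account
                and parts[1].isdigit() and parts[2].isdigit()):
            start, nonces = int(parts[1]), int(parts[2])
            end = max(end, start + nonces)
    return end

start = next_start_nonce(PLOT_DIR, ACCOUNT)
# Placeholder plotter invocation; substitute the real binary and flags.
subprocess.run(["./plot", "-k", ACCOUNT, "-d", PLOT_DIR,
                "-s", str(start), "-n", str(NONCES_PER_PLOT),
                "-m", str(STAGGER)], check=True)

With a hundred instances booting at once you would also want a way to reserve a range (for example by creating the target file name before plotting starts) so that two machines don't pick the same start nonce.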
sr. member
Activity: 257
Merit: 255
1201 - Thu Aug 14 2014 13:35:28 GMT+0200 - 67567251495709
1502 - Fri Aug 15 2014 13:35:04 GMT+0200 - 111403016327443

Diff increased 65% in the last 24h ... that means the network's HDD capacity rose by roughly the same factor ... correct?!


So is it better to have one big plot on an HDD, or many plots?
It should make no difference ... but for every block, some data has to be read by the miner from every plot file, so if you have a lot of them, that's slower than reading from one file ... 1-10 or so are no problem.

If you ever need more space for 'real' data, it's handy to have multiple plot files ... you can simply delete one of them :-)




btw. selling some BURST: https://bitcointalksearch.org/topic/m.8330476
newbie
Activity: 49
Merit: 0
So is it better to have one big plot on an HDD, or many plots?
Sy
legendary
Activity: 1484
Merit: 1003
Bounty Detective
Pre-generated saves a lot of power though, so it really comes down to GPU speed Smiley
hero member
Activity: 1400
Merit: 505
Quote
lifetime? do you mean "on the fly", without pre-generated data?
exactly

It is possible, but considering the size of each nonce (256KB) compared to a GPU-style hash of about 4-16 bytes, the number of possible combinations is huge.

Assuming the GPU generates hashes at 1 MH/s, that means we could try 0.24 giga-nonces per block time.

Compare that with a 4 TB disk: it stores 1 giga-nonce, all of which we can try each block time, so storing nonces on disk works out about 4x faster than hashing them "on the fly" on a GPU. Of course, my 1 MH/s assumption is a wild guess...

Before we have an exact figure for how many nonces a GPU can generate per second, we can't conclude which is more efficient: hashing on the fly or pre-generating on disk.
sr. member
Activity: 336
Merit: 250
Found a block about 26 hours after the last one, on 800GB.
Sy
legendary
Activity: 1484
Merit: 1003
Bounty Detective
What is the matter with all these transactions to BURST-2222-2222-2222-22222?

Alias registration.
newbie
Activity: 40
Merit: 0
What is the matter with all these transactions to BURST-2222-2222-2222-22222?
newbie
Activity: 21
Merit: 0
Quote
lifetime? do you mean "on the fly", without pre-generated data?
exactly
full member
Activity: 238
Merit: 100
{
    "lastBlock": "16322078320396019945",
    "lastBlockchainFeederHeight": 1222,
    "time": 299736,
    "lastBlockchainFeeder": "54.167.111.103:8123",
    "numberOfBlocks": 1223,
    "isScanning": false,
    "cumulativeDifficulty": "70385126787201",
    "version": "1.0.0"
}

Another ~50% increase within ~16h
{
    "lastBlock": "1520116135762103722",
    "lastBlockchainFeederHeight": 1457,
    "time": 367546,
    "lastBlockchainFeeder": "109.195.211.62",
    "numberOfBlocks": 1458,
    "isScanning": false,
    "cumulativeDifficulty": "105192234724606",
    "version": "1.0.0"
}
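For reference, the jump between the two status dumps above can be read straight off the cumulativeDifficulty fields; it works out to roughly the "~50%" mentioned:

Code:
# cumulativeDifficulty values from the two wallet status dumps above
before = 70385126787201
after = 105192234724606
print(f"{(after / before - 1) * 100:.1f}% increase")   # prints: 49.5% increase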

When the pools come out, the difficulty will increase even faster Smiley. It's only been 5 days since the coin came out; once the difficulty rises to the point where you get less than 1 block/TB/month, the coin's value will go up Grin

I think it's already expensive, since we don't have fractional coins here, so the coin cap of 2,158,812,800 coins is comparable to a 21-coin cap on a Bitcoin-style shitcoin (counting the 8 decimal places). So I expect the price to settle around 100 sat per BURST (1 million BURST = 1 BTC).

XCN came out three weeks ago and has 1.8 billion coins, so its total supply is similar to BURST's. Two weeks ago it was mined with CPUs and its highest price was 0.00012 BTC; now that GPU miners are out, its difficulty is dropping very fast, the value fell very fast too, and it has sat steady at 0.00002 BTC for a long time. So even at its lowest price it still holds 2000 sat. Total supply alone does not decide a coin's value; that is determined by difficulty and popularity.

And difficulty is determined by the energy or value spent to mine those coins. With PoW/GPU we burn electricity, so miners won't want to sell their coins below the electricity cost. With PoC it should depend on the cost of the storage used. PoS is a stupid idea; those coins have no natural intrinsic value beyond popularity, utility and developer effort.

You think PoC miners won't burn electricity just because the difficulty isn't high right now? The coin is only five days old, with no logo and no pool! How high can the difficulty get once pools come out and the CPU and GPU miners join in? If you can only mine 1000 BURST/TB/week, would you sell at 100 sat? PoC needs electricity too, and more importantly it wears out your disk: every disk has a limited number of reads and writes. If you can only mine 1000 a week, that's 4000 a month; 4000 * 100 sat = 0.004 BTC. Would you be willing to sacrifice your disk for just 0.004 BTC a month?

If you have good CPUs, you will choose a CPU-only coin, like XMR, etc., and you won't use your CPUs to mine X11, X13, X15... If you have good GPUs, you will choose GPU-based coins only, like Darkcoin and the X11/X13-based ones; look at the forum, there are so many of them, and you won't point your GPUs at scrypt or SHA-256 coins because those have ASICs. The point is that everyone has an HDD, small or large. Once BURST has a decent logo, several stable pools, and is listed on a few exchanges, everyone will know there is a coin that can be mined with an HDD. BURST is the only coin that can be mined with an HDD, at least for now. If you were one of those miners, wouldn't you mine BURST as well, since you can still mine your GPU or CPU coin at the same time? And at that point, how confident can you be that you would still find several blocks a day on this coin?
hero member
Activity: 1400
Merit: 505
Wait, how is this work globally verifiable by peers? How do you know my miner isn't modded to just relay random data until something sticks and I get a block?

My simplified answer is: you broadcast your account number and your selected nonce to the network, and those two numbers can be verified into a deadline value. So yes, you can broadcast any data (nonce and account number), but I doubt you'd want to broadcast a random account number since you wouldn't receive your reward; and yes, the nonce is just a value you pick: we select whichever one has the lowest deadline.

Edit: I think the nonce is not completely random, because only the relevant scoop counts for each block (it depends on the previous block hash). I'm not sure, we'd need to read the implementation; this is just based on the OP's post.

The question is: is it possible to run it on a mining rig with 10 GPUs and generate the nonce lifetime?

lifetime? do you mean "on the fly", without pre-generated data?
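To make the quoted explanation a little more concrete, here is a purely structural sketch of why broadcasting just the account number and nonce is enough for peers to check a claimed deadline. This is not the real BURST code: the actual implementation uses Shabal256 and a 4096-scoop plot layout, so the sha256 below is only a stand-in.

Code:
# Structural sketch only, not the actual BURST algorithm. It shows that a
# deadline is a pure function of public inputs, so any peer can re-check it.
import hashlib

def scoop_data(account_id: int, nonce: int, scoop: int) -> bytes:
    """Deterministically derive the plot data for one scoop from (account, nonce).
    In real BURST this comes from the Shabal256-based plot generation."""
    return hashlib.sha256(f"{account_id}:{nonce}:{scoop}".encode()).digest()

def deadline(account_id: int, nonce: int, gen_sig: bytes,
             scoop: int, base_target: int) -> int:
    """Lower is better; the miner with the lowest deadline forges the block."""
    h = hashlib.sha256(gen_sig + scoop_data(account_id, nonce, scoop)).digest()
    hit = int.from_bytes(h[:8], "little")
    return hit // base_target

# A miner scans its plots, keeps the nonce with the lowest deadline, and
# broadcasts account + nonce. Peers recompute deadline() themselves, so
# announcing a "good" value without a matching nonce gets rejected.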
hero member
Activity: 518
Merit: 500
I created a thread specifically for mining Burst/PoC coins. It has a spreadsheet for sharing data with the community. Here's the link: https://bitcointalk.org/index.php?topic=740158.new#new

Hope the dev can put this in the OP.
newbie
Activity: 21
Merit: 0
Wait, how is this work globally verifiable by peers? How do you know my miner isn't modded to just relay random data until something sticks and I get a block?

My simplified answer is: you broadcast your account number and your selected nonce to the network, and those two numbers can be verified into a deadline value. ...

It seems that the dev hashes every 256KB block and compares it against the announced hash => ("generationSignature":"b51990f5dce09d649397d19380483003bf2906d3cbc1678ad5220e1a0e4ea48a")

and after that sends the best value. If you have more 256KB blocks, you have a better chance of finding the best value.

The algorithm is not as simple as I describe it.

The question is: is it possible to run this on a mining rig with 10 GPUs and generate the nonce lifetime?