Author

Topic: [ANN][BURST] Burst | Efficient HDD Mining | New 1.2.3 Fork block 92000 - page 739. (Read 2171056 times)

legendary
Activity: 2282
Merit: 1072
https://crowetic.com | https://qortal.org
ANNOUNCEMENT:

I have stopped mining this piece of shit crapcoin. I hadn't had a payout for DAYS with 10.43 TB of disk space pumping away, I was tired of the piece of crap Java miner hogging 9 GB of RAM for two mining instances, and I'm tired of the developers' half-assed miner and having to do half the programming work just to compile and run the damn thing.

My four 3 TB Seagate Barracuda hard drives are on eBay right now for a VERY fair $340 with free shipping in the USA.

Furthermore, I predict that this crapcoin will never see 500 satoshis again.  Anyone who tries to talk it up by saying things like "to da mooooooon" is a fool, an idiot, and a moron.

So screw this crapcoin and everything about it.

Basically, when you say screw everything about this coin, you are telling us all to f*ck off...
That's not very nice of you.

+ a billion!


I have been mining and minting and whatnot on more than 30 altcoins over the last half year, and nowhere have I seen more competent developers and a more productive main developer than here. I am staying. I don't mind other people leaving, because that means my plots generate more coins. In the long run this will be one of the main altcoins; in fact, it already is, only not many have found out yet.

My 20 TB will be shipped on the 24th and will be operational a few days after that. I would really love a price crash, and for some of the more childish people to leave, because that will leave more Burst to be had by the ones who are in for the long haul. Looking back at how short a time this coin has been live, the innovation and the development (and the quality of the software developed) are far beyond what I see in the professional software development world. I surely hope the dev team behind this coin likes good beer, because one day a large box of said liquid might just drop on their doorstep. (Not now... down the line.)

BTW, Seagate Barracuda disks (at least the 3 TB version) are extremely crappy in my experience. I've had two of them crash in short order, while all my other 3 TB drives have been working flawlessly. I hope, for Seagate's sake, that they've gotten their shit together regarding their 4 TB drives.

It's amazing to see how many people are actively contributing to the coin and the software. Cheers to every one of you; it is very much appreciated.



legendary
Activity: 2282
Merit: 1072
https://crowetic.com | https://qortal.org
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Only 100 TB?
129 × 1.9 TB for now, as spare parts I can mine with for six months or more as a test case.
Also awaiting new hardware arrivals  Cool

Just try not to kill the price any more than it already has been killed, please kind sir.


On a negative note....


Today my nine-month-old baby pulled one of my brand-new 4 TB drives onto the floor while it was plotting in an external dock... The drive hit the ground without any protection at all, while spinning AND writing... I had my fiancée try to plug it back in, but no luck getting the drive to show up. Hopefully it's something else and I can fix it when I get home, but I'm sort of leaning towards my drive being broken...


Ironic that my first Burst drive to break is not from mining Burst, but from my baby son.

So the moral of the story is that Burst and babies don't mix?

Lol, well, sort of, but actually more like this: don't leave your plotting drives in an external dock anywhere near where the baby can reach them. It's all bad for the life of the drive. xD 4 TB down the drain, captain.
hero member
Activity: 644
Merit: 500
Estimated network size: 8228 TB
The highest level I have seen since day 1.
sr. member
Activity: 286
Merit: 250
ANNOUNCEMENT:

I have stopped mining this piece of shit crapcoin. I hadn't had a payout for DAYS with 10.43 TB of disk space pumping away, I was tired of the piece of crap Java miner hogging 9 GB of RAM for two mining instances, and I'm tired of the developers' half-assed miner and having to do half the programming work just to compile and run the damn thing.

My four 3 TB Seagate Barracuda hard drives are on eBay right now for a VERY fair $340 with free shipping in the USA.

Furthermore, I predict that this crapcoin will never see 500 satoshis again.  Anyone who tries to talk it up by saying things like "to da mooooooon" is a fool, an idiot, and a moron.

So screw this crapcoin and everything about it.

Basically, when you say screw everything about this coin, you are telling us all to f*ck off...
That's not very nice of you.


I have been mining and minting and whatnot on more than 30 altcoins over the last half year, and nowhere have I seen more competent developers and a more productive main developer than here. I am staying. I don't mind other people leaving, because that means my plots generate more coins. In the long run this will be one of the main altcoins; in fact, it already is, only not many have found out yet.

My 20 TB will be shipped on the 24th and will be operational a few days after that. I would really love a price crash, and for some of the more childish people to leave, because that will leave more Burst to be had by the ones who are in for the long haul. Looking back at how short a time this coin has been live, the innovation and the development (and the quality of the software developed) are far beyond what I see in the professional software development world. I surely hope the dev team behind this coin likes good beer, because one day a large box of said liquid might just drop on their doorstep. (Not now... down the line.)

BTW, Seagate Barracuda disks (at least the 3 TB version) are extremely crappy in my experience. I've had two of them crash in short order, while all my other 3 TB drives have been working flawlessly. I hope, for Seagate's sake, that they've gotten their shit together regarding their 4 TB drives.

It's amazing to see how many people are actively contributing to the coin and the software. Cheers to every one of you; it is very much appreciated.


sr. member
Activity: 826
Merit: 250
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Why do you need multiple miner instances? Which miner are you using?

If you want to be safe and use less memory while mining, keep the stagger size low (<= 8192). Merging plots or not makes no difference. Why only 2 GB files? That's too small; are you using FAT32?

So why does my miner use 14 GB of memory for 8 TB of plots with a 4096 stagger?
Is there a problem with my plots?

I don't know exactly, but I'm using it fine for 15 TB; it uses only 600 MB for an 8192 stagger.
How did you create that plot? Which plotter are you using?

I use the GPU plotter.
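
For reference, the stagger/memory arithmetic can be sketched roughly as below, assuming the standard plot layout (one nonce = 4096 scoops × 64 bytes = 256 KiB). These are back-of-the-envelope lower bounds, not the exact allocations of any particular miner or plotter; a miner that buffers whole stagger blocks of full nonces rather than just the scoop rows would use far more, which may be what is happening in the 14 GB case above.

Code:
// Back-of-the-envelope memory estimates for a given stagger size.
// Assumes the standard plot layout: 1 nonce = 4096 scoops x 64 bytes = 256 KiB.
// Real plotters/miners may buffer more or less than this; treat as estimates.
public class StaggerMemory {
    static final int SCOOP_SIZE = 64;                             // bytes per scoop
    static final int SCOOPS_PER_NONCE = 4096;
    static final int NONCE_SIZE = SCOOPS_PER_NONCE * SCOOP_SIZE;  // 256 KiB

    public static void main(String[] args) {
        int stagger = 8192;

        // Plotting: to interleave scoops, a full stagger group of nonces
        // is typically held in RAM at once.
        long plotterBytes = (long) stagger * NONCE_SIZE;          // 8192 * 256 KiB = 2 GiB

        // Mining: one scoop per nonce is read each round; with stagger S the
        // scoop data of S nonces is contiguous, so a read buffer of S * 64 bytes
        // per plot file is enough.
        long minerBufferBytes = (long) stagger * SCOOP_SIZE;      // 8192 * 64 B = 512 KiB

        System.out.printf("stagger %d: plotter ~%d MiB, miner read buffer ~%d KiB%n",
                stagger, plotterBytes >> 20, minerBufferBytes >> 10);
    }
}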
hero member
Activity: 1400
Merit: 505
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Only 100 TB?
129 × 1.9 TB for now, as spare parts I can mine with for six months or more as a test case.
Also awaiting new hardware arrivals  Cool

Just try not to kill the price any more than it already has been killed, please kind sir.


On a negative note....


Today my nine-month-old baby pulled one of my brand-new 4 TB drives onto the floor while it was plotting in an external dock... The drive hit the ground without any protection at all, while spinning AND writing... I had my fiancée try to plug it back in, but no luck getting the drive to show up. Hopefully it's something else and I can fix it when I get home, but I'm sort of leaning towards my drive being broken...


Ironic that my first Burst drive to break is not from mining Burst, but from my baby son.

So the moral of the story is that Burst and babies don't mix?
legendary
Activity: 2282
Merit: 1072
https://crowetic.com | https://qortal.org
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Only 100 TB?
129 × 1.9 TB for now, as spare parts I can mine with for six months or more as a test case.
Also awaiting new hardware arrivals  Cool

Just try not to kill the price any more than it already has been killed, please kind sir.


On a negative note....


Today my nine-month-old baby pulled one of my brand-new 4 TB drives onto the floor while it was plotting in an external dock... The drive hit the ground without any protection at all, while spinning AND writing... I had my fiancée try to plug it back in, but no luck getting the drive to show up. Hopefully it's something else and I can fix it when I get home, but I'm sort of leaning towards my drive being broken...


Ironic that my first Burst drive to break is not from mining Burst, but from my baby son.
hero member
Activity: 1400
Merit: 505
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Why do you need multiple miner instances? Which miner are you using?

If you want to be safe and use less memory while mining, keep the stagger size low (<= 8192). Merging plots or not makes no difference. Why only 2 GB files? That's too small; are you using FAT32?

So why does my miner use 14 GB of memory for 8 TB of plots with a 4096 stagger?
Is there a problem with my plots?

I don't know exactly, but I'm using it fine for 15 TB; it uses only 600 MB for an 8192 stagger.
How did you create that plot? Which plotter are you using?
sr. member
Activity: 826
Merit: 250
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Why do you need multiple miner instances? Which miner are you using?

If you want to be safe and use less memory while mining, keep the stagger size low (<= 8192). Merging plots or not makes no difference. Why only 2 GB files? That's too small; are you using FAT32?

So why does my miner use 14 GB of memory for 8 TB of plots with a 4096 stagger?
Is there a problem with my plots?
hero member
Activity: 1400
Merit: 505
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Why do you need multiple miner instances? Which miner are you using?

If you want to be safe and use less memory while mining, keep the stagger size low (<= 8192). Merging plots or not makes no difference. Why only 2 GB files? That's too small; are you using FAT32?
It comes from the maximum stagger size of the original Java plotter. I ran many nodes with custom scripts to plot the files automatically.
I figured that if I had to move them from one node to another, a 2 GB chunk is quite handy on a simple gigabit network and also fits completely into the filesystem read and write buffers during creation and distribution.
I realized the Java miner on an average CPU can only handle about 8-10 TB of plots and still stay below half the block time, because of CPU load.
I haven't analyzed this further, because I could avoid it by running one miner per HDD. Tests with bigger plot files resulted in much higher memory usage, so I was fine with up to 20 seconds of parsing per 2 TB and kept my initial setup as it was.
For me the question is whether the compute power during mining is needed for the number of nonces or for the disk seeks.
If I figure out a way to create 20 TB plots, could such a file also be parsed by one miner instance in time, or is the CPU too slow, so it would skip most of it?


Have you tried using my miner? I want to know how the results compare to the Java miner. A bigger plot file should not use more memory unless you set it to use a larger stagger size. Mining only does one Shabal hash per nonce to determine its deadline, so it is not computation-intensive; most of the time is spent on disk reads/seeks, and mining only reads 1/4096 of your data each round.
Not yet, because I'm fine with the Java miner in my setup. On an average node I get disk I/O of 500-600 MB/s when a new block arrives; this lasts less than 15-20 seconds, depending on the disks. It's basically what the storage is capable of. Can your miner mine with the default wallet, or does it require the pool counterpart?


It's pool only.
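
Putting rough numbers on the 1/4096 point for a single 2 TB disk (the throughput, seek time, and file count below are illustrative assumptions, not measurements from this thread):

Code:
// Rough per-round read estimate for one disk, based on the point above
// that a miner reads only 1/4096 of the plotted data each block.
public class ScanEstimate {
    public static void main(String[] args) {
        long plotBytes = 2_000_000_000_000L;   // a 2 TB disk, fully plotted
        int scoopsPerNonce = 4096;             // one scoop out of 4096 is read per round
        double seqMBs = 150.0;                 // assumed sequential read speed
        int files = 1000;                      // ~1000 x 2 GB plot files
        double seekMs = 10.0;                  // assumed average seek time

        // Each file's scoop data is one contiguous block, so roughly one seek per file.
        long bytesPerRound = plotBytes / scoopsPerNonce;   // ~488 MB
        double readSec = bytesPerRound / (seqMBs * 1e6);   // ~3.3 s
        double seekSec = files * seekMs / 1000.0;          // ~10 s

        System.out.printf("read/round: %d MB, ~%.1f s reading + ~%.1f s seeking%n",
                bytesPerRound / 1_000_000, readSec, seekSec);
    }
}

With those assumptions the total lands in the same 10-20 second range reported above, and it suggests the time is dominated by seeks across many small files rather than by hashing.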
sr. member
Activity: 256
Merit: 250
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Why do you need multiple miner instances? Which miner are you using?

If you want to be safe and use less memory while mining, keep the stagger size low (<= 8192). Merging plots or not makes no difference. Why only 2 GB files? That's too small; are you using FAT32?
It comes from the maximum stagger size of the original Java plotter. I ran many nodes with custom scripts to plot the files automatically.
I figured that if I had to move them from one node to another, a 2 GB chunk is quite handy on a simple gigabit network and also fits completely into the filesystem read and write buffers during creation and distribution.
I realized the Java miner on an average CPU can only handle about 8-10 TB of plots and still stay below half the block time, because of CPU load.
I haven't analyzed this further, because I could avoid it by running one miner per HDD. Tests with bigger plot files resulted in much higher memory usage, so I was fine with up to 20 seconds of parsing per 2 TB and kept my initial setup as it was.
For me the question is whether the compute power during mining is needed for the number of nonces or for the disk seeks.
If I figure out a way to create 20 TB plots, could such a file also be parsed by one miner instance in time, or is the CPU too slow, so it would skip most of it?


Have you tried using my miner? I want to know how the results compare to the Java miner. A bigger plot file should not use more memory unless you set it to use a larger stagger size. Mining only does one Shabal hash per nonce to determine its deadline, so it is not computation-intensive; most of the time is spent on disk reads/seeks, and mining only reads 1/4096 of your data each round.
Not yet, because I'm fine with the Java miner in my setup. On an average node I get disk I/O of 500-600 MB/s when a new block arrives; this lasts less than 15-20 seconds, depending on the disks. It's basically what the storage is capable of. Can your miner mine with the default wallet, or does it require the pool counterpart?
hero member
Activity: 1400
Merit: 505
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Why do you need multiple miner instances? Which miner are you using?

If you want to be safe and use less memory while mining, keep the stagger size low (<= 8192). Merging plots or not makes no difference. Why only 2 GB files? That's too small; are you using FAT32?
It comes from the maximum stagger size of the original Java plotter. I ran many nodes with custom scripts to plot the files automatically.
I figured that if I had to move them from one node to another, a 2 GB chunk is quite handy on a simple gigabit network and also fits completely into the filesystem read and write buffers during creation and distribution.
I realized the Java miner on an average CPU can only handle about 8-10 TB of plots and still stay below half the block time, because of CPU load.
I haven't analyzed this further, because I could avoid it by running one miner per HDD. Tests with bigger plot files resulted in much higher memory usage, so I was fine with up to 20 seconds of parsing per 2 TB and kept my initial setup as it was.
For me the question is whether the compute power during mining is needed for the number of nonces or for the disk seeks.
If I figure out a way to create 20 TB plots, could such a file also be parsed by one miner instance in time, or is the CPU too slow, so it would skip most of it?


Have you tried using my miner? I want to know how the results compare to the Java miner. A bigger plot file should not use more memory unless you set it to use a larger stagger size. Mining only does one Shabal hash per nonce to determine its deadline, so it is not computation-intensive; most of the time is spent on disk reads/seeks, and mining only reads 1/4096 of your data each round.
sr. member
Activity: 256
Merit: 250
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Why do you need multiple miner instances? Which miner are you using?

If you want to be safe and use less memory while mining, keep the stagger size low (<= 8192). Merging plots or not makes no difference. Why only 2 GB files? That's too small; are you using FAT32?
It comes from the maximum stagger size of the original Java plotter. I ran many nodes with custom scripts to plot the files automatically.
I figured that if I had to move them from one node to another, a 2 GB chunk is quite handy on a simple gigabit network and also fits completely into the filesystem read and write buffers during creation and distribution.
I realized the Java miner on an average CPU can only handle about 8-10 TB of plots and still stay below half the block time, because of CPU load.
I haven't analyzed this further, because I could avoid it by running one miner per HDD. Tests with bigger plot files resulted in much higher memory usage, so I was fine with up to 20 seconds of parsing per 2 TB in 2 GB plot files and kept my initial setup as it was.
For me the question is whether the compute power during mining is needed for the number of nonces or for the disk seeks.
If I figure out a way to create 20 TB plots, could such a file also be parsed by one miner instance in time, or is the CPU too slow, so it would skip most of it?
sr. member
Activity: 256
Merit: 250
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Only 100 TB?
129 × 1.9 TB for now. Waiting for new hardware arrivals  Cool

You rock! Congratulations.  Wink
The good thing is that it runs totally unattended in the background, and if a drive fails I know the drive was bad and it can't fail later in production use.
Today I told some friends of mine this story, and they're thinking of throwing some PB onto Burst during the next couple of weeks to test their spare hardware with something useful too. Whether they'll do it I'm not sure, but I know what they have access to  Grin
hero member
Activity: 1400
Merit: 505
Math question: how can I calculate the difficulty from a block's baseTarget?

Does it still use NXT's formula? https://wiki.nxtcrypto.org/wiki/Whitepaper:Nxt#Base_Target_Value

Was it changed, given that this is not Proof of Stake?

And most importantly: how can we estimate the total network plot size from the cumulative difficulty?

I'd like to graph it and offer it to the public as an online tool.

Thanks in advance

baseTarget is the difficulty; to estimate network plot size, I posted it here -> https://bitcointalksearch.org/topic/m.8577852
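
For anyone building such a tool, the estimate (as I understand the linked post; verify against it) boils down to capacity scaling inversely with baseTarget. A sketch:

Code:
// Estimate total network plot size from a block's baseTarget.
// Usual derivation: deadline = hit / baseTarget, with hit roughly uniform on
// [0, 2^64); the best of N nonces averages 2^64 / N, and setting that to the
// 240 s block time gives N = 2^64 / (240 * baseTarget).
// One TB holds 2^22 nonces (1 nonce = 256 KiB).
public class NetworkSize {
    static final double TWO_POW_64 = Math.pow(2, 64);
    static final long BLOCK_TIME = 240;          // target seconds per block
    static final long NONCES_PER_TB = 1L << 22;  // 4,194,304 nonces per TB

    static double estimatedTB(long baseTarget) {
        double nonces = TWO_POW_64 / (BLOCK_TIME * (double) baseTarget);
        return nonces / NONCES_PER_TB;  // collapses to ~18325193796.0 / baseTarget
    }

    public static void main(String[] args) {
        // The genesis baseTarget corresponds to ~1 TB:
        System.out.printf("%.2f TB%n", estimatedTB(18_325_193_796L));
        // A hypothetical value, chosen to roughly reproduce the ~8228 TB quoted earlier:
        System.out.printf("%.0f TB%n", estimatedTB(2_227_000L));
    }
}

Since a single block's baseTarget is noisy, average it over a window of recent blocks before applying the formula to get a smoother graph.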
hero member
Activity: 1400
Merit: 505
i dont get why everyone requires such much memory to plot.
i created a few 100 tb plots splitted in 2gb files with stagger 8191 and 8191 nonce which work great.
even the java plotter only used few hundred megabytes ram during plotting.
a 2 tb disk is read in less than 20 seconds during the mining and each miner instance runs with Xmx750m without crashes.
would i have any advantage if i would merge my disks and run eg. 10-30 tb plots instead of 120 miner instances reading many 2gb files in the cluster?


why u need multiple miner instance? which miner are u using?

if you want to be safe, and use less memory while mining, keep stagger size low (<= 8192), merging plot or not does not have any difference, why only 2GB files? its too small are u using FAT32 ?
full member
Activity: 294
Merit: 101
The Future of Security Tokens
Come on people, join http://burstpool.ddns.net
29 miners away from 100. The 100th will get a 5000 BURST welcome bonus!

I am on your pool now but receiving the error:
"Unable to get mining info from wallet: http://burstpool.ddns.net:8124/....

It was working earlier..

Ideas?

I cannot get to the site directly, either. Must be down  Cry

Just a moment


I've already said it: ddns.net is dynamic DNS, which means this crap is running off a home connection, not a dedicated IP on a VM or hosting in a datacenter.
Apparently the pool owner is even too lame to install an auto-updater.

But even so, whenever his grandma's basement ISP connection restarts, the IP will change and all the miners will lose their connection.




Whatever, this granny-pool works 10 times better than any other pool I've tried so far.
hero member
Activity: 644
Merit: 500
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Only 100 TB?
129 × 1.9 TB for now. Waiting for new hardware arrivals  Cool

You rock! Congratulations.  Wink
sr. member
Activity: 256
Merit: 250
I don't get why everyone needs so much memory to plot.
I created a few hundred TB of plots, split into 2 GB files with stagger 8191 and 8191 nonces, which work great.
Even the Java plotter only used a few hundred megabytes of RAM during plotting.
A 2 TB disk is read in less than 20 seconds during mining, and each miner instance runs with -Xmx750m without crashes.
Would I have any advantage if I merged my disks and ran e.g. 10-30 TB plots instead of 120 miner instances reading many 2 GB files across the cluster?


Only 100 TB?
129 × 1.9 TB for now, as spare parts I can mine with for six months or more as a test case.
Also awaiting new hardware arrivals  Cool
sr. member
Activity: 435
Merit: 250
Come on people, join http://burstpool.ddns.net
29 miners away from 100. The 100th will get a 5000 BURST welcome bonus!

I am on your pool now but receiving the error:
"Unable to get mining info from wallet: http://burstpool.ddns.net:8124/....

It was working earlier..

Ideas?

I cannot get to the site directly, either. Must be down  Cry

Just a moment

I've already said it: ddns.net is dynamic DNS, which means this crap is running off a home connection, not a dedicated IP on a VM or hosting in a datacenter.
Apparently the pool owner is even too lame to install an auto-updater.

But even so, whenever his grandma's basement ISP connection restarts, the IP will change and all the miners will lose their connection.
