Topic: [ANN][BURST] Burst | Efficient HDD Mining | New 1.2.3 Fork block 92000 - page 1286. (Read 2171083 times)

member
Activity: 101
Merit: 10
Twitter -> @z0rius
Has anyone else noticed that Block 282 (Current block) is approaching 4 hours?

Yeah, the current block is 341 here; I think your miner has somehow crashed. Have you tried restarting it? If that doesn't work, restart the wallet and allow it to resync.

What is the -Xmx750m?

Can we increase it if we have more RAM?
-Xmx sets the maximum amount of memory the miner's JVM may use. It was put there to keep the miner from unnecessarily wasting RAM.

What is the -Xmx750m?

Can we increase it if we have more RAM?

Yes, 750m is exactly as stated: 750 MB of RAM reserved. It means the Java virtual machine is allowed to use up to 750 MB; it doesn't necessarily use it all, it just reserves that ceiling so it can allocate as it pleases.

On a secondary note, who sent me 2k worth of BURST?

Thank you, whoever it was Smiley

That was me. A donation for the guide, which I linked to from the OP and which I'm hoping will save me a bit of time replying to people with setup issues.

Thanks mate Smiley
sr. member
Activity: 280
Merit: 250
Current Block is 341 here
member
Activity: 87
Merit: 10
Has anyone else noticed that Block 282 (Current block) is approaching 4 hours?
sr. member
Activity: 280
Merit: 250
What is the -Xmx750m?

Can we increase it if we have more RAM?
-Xmx sets the maximum amount of memory the miner's JVM may use. It was put there to keep the miner from unnecessarily wasting RAM.

What is the -Xmx750m?

Can we increase it if we have more RAM?

Yes, 750m is exactly as stated: 750 MB of RAM reserved. It means the Java virtual machine is allowed to use up to 750 MB; it doesn't necessarily use it all, it just reserves that ceiling so it can allocate as it pleases.

On a secondary note, who sent me 2k worth of BURST?

Thank you, whoever it was Smiley

That was me. A donation for the guide, which I linked to from the OP and which I'm hoping will save me a bit of time replying to people with setup issues.
sr. member
Activity: 257
Merit: 255
Once I fill my drive with plots am I set or do I need to keep generating new plots over the old ones?

Generating is just how you MARK the HDD space you want to mine with ... once the drive is filled with plots, you are done generating.
newbie
Activity: 32
Merit: 0
Selling 50k BURST. PM me.
member
Activity: 101
Merit: 10
Twitter -> @z0rius
Once I fill my drive with plots am I set or do I need to keep generating new plots over the old ones?

No, once you have the plots you are set to keep mining with them. You can add more plots for more theoretical "hashing" power; the plots are full of nonces, so, like the dev said, eventually each one of the nonces may end up being the valid one.
sr. member
Activity: 363
Merit: 250
Once I fill my drive with plots am I set or do I need to keep generating new plots over the old ones?
legendary
Activity: 1778
Merit: 1043
#Free market
Think I have 2 out of 3 VMs running now on separate plots. However, one just gave me this error...
[ERROR] [08/11/2014 23:30:12.353] [default-akka.actor.default-dispatcher-7] [akka://default/user/$a] No space left on device
java.io.IOException: No space left on device

And I have 400 GB free!

Would someone mind sending me a small amount of burst to BURST-WADY-CBZE-HSJU-2NH5G so that I can check I'm up and running ok?

Run the .bat file from the HDD that has the most free space.
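
If you're on Linux rather than Windows, here's the same idea as a minimal sketch (the path and script name below are placeholders, not the real ones; the point is just to start the miner from the drive that actually has room):

df -h                   # first confirm which mount actually has the 400 GB free
cd /mnt/bigdisk/burst   # hypothetical directory on the drive with the free space
./run_mine.sh           # start the miner from there (script name assumed)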
hero member
Activity: 820
Merit: 1000
Think I have 2 out of 3 VMs running now on separate plots. However, one just gave me this error...
[ERROR] [08/11/2014 23:30:12.353] [default-akka.actor.default-dispatcher-7] [akka://default/user/$a] No space left on device
java.io.IOException: No space left on device

And I have 400 GB free!

Would someone mind sending me a small amount of burst to BURST-WADY-CBZE-HSJU-2NH5G so that I can check I'm up and running ok?
sr. member
Activity: 257
Merit: 255
So I wonder what matters more: CPU, the number of plots you make, how fast you make plots, or what...

A 16-core machine with 48 GB of RAM isn't helping much.

You only mine with the plots you have already created ... once all your HDD space is filled with plots, you don't need the CPU much anymore ... but while you are still creating plots, CPU and memory help, because you generate plots for mining faster.
member
Activity: 101
Merit: 10
Twitter -> @z0rius
What is the -Xmx750m?

Can we increase it if we have more RAM?

Yes, 750m is exactly as stated: 750 MB of RAM reserved. It means the Java virtual machine is allowed to use up to 750 MB; it doesn't necessarily use it all, it just reserves that ceiling so it can allocate as it pleases.

On a secondary note, who sent me 2k worth of BURST?

Thank you, whoever it was Smiley
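
For anyone wanting to raise it: the value lives on the java line of the miner's start script. A minimal sketch, assuming a launch line of roughly this shape (the rest of the command is left as "..." since it depends on your package):

java -Xmx750m ...   # default: the JVM heap is capped at roughly 750 MB
java -Xmx2g ...     # example: with spare RAM you could raise the cap to 2 GB

The miner doesn't need much heap, so raising it mostly just gives the JVM more headroom; by itself it shouldn't make mining any faster.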
newbie
Activity: 32
Merit: 0
Selling 50k if the price is right. PM me, or I'm on Freenode in #intelnetwork.
legendary
Activity: 1778
Merit: 1043
#Free market
I've given up on mining for now; solo mining is like playing the lotto. :/

Yes... we need a "mining pool". Dev, we're waiting Wink
sr. member
Activity: 363
Merit: 250
Does the system currently have a way of estimating the total capacity of the network? Or is there any other way to guesstimate your chances of finding a block?

I'm loving BURST but it's definitely hard to know if I even have a shot of getting a block at this point.
hero member
Activity: 672
Merit: 500
I've given up on mining for now; solo mining is like playing the lotto. :/
hero member
Activity: 820
Merit: 1000
So I left the plots generating and mining on a couple of VMs overnight, and woke up this morning to find that they had all hit the same error while generating the plots...

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 630517760 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2769), pid=8593, tid=139980355077888

Each VM has 16 GB of RAM and two cores, and the plots were being generated with ./run_generate.sh 1234567890 1 819100 8191 2
Any ideas?
Incidentally, they all crapped out after generating just over 6 GB of the plot. Very frustrating. I have to start over now.

Not sure ... but I guess it could be the following ...
It looks like you're using 32-bit Java ... try 500 to 1000 instead of 8191 for the memory (stagger) value ...
Plots are created in memory and then written to the HDD; as soon as you go above ~1000 in memory you get an exception with 32-bit ... right?!

Thanks, I've changed the command line to force it to use the 64-bit runtime, so that should sort it.

Is there any way to get the generator to resume creating a plot file rather than starting over in the case of a crash?
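
For anyone hitting the same thing, a sketch of what "forcing the 64-bit runtime" can look like (the JVM path is only an example; point it at whichever 64-bit Java you actually have installed):

java -version                                    # should report something like "64-Bit Server VM"
/usr/lib/jvm/java-7-openjdk-amd64/bin/java ...   # or edit run_generate.sh to call a 64-bit JVM by its full path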
sr. member
Activity: 280
Merit: 250
So I left the plots generating and mining on a couple of VMs overnight, and woke up this morning to find that they had all hit the same error while generating the plots...

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 630517760 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2769), pid=8593, tid=139980355077888

Each VM has 16 GB of RAM and two cores, and the plots were being generated with ./run_generate.sh 1234567890 1 819100 8191 2
Any ideas?
Incidentally, they all crapped out after generating just over 6 GB of the plot. Very frustrating. I have to start over now.

First time I've seen that one. The -Xmx4000m switch in the run_generate.sh script is supposed to cap the JVM at 4 GB of RAM, so I don't see how it would get anywhere near high enough for the JVM host to run out of memory. Maybe try a different JVM (Oracle or OpenJDK) if no one has any better ideas.
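
If you want to try that, a sketch of switching JVMs on a Debian/Ubuntu-style VM (the package name is just an example; adjust for your distro):

sudo apt-get install openjdk-7-jre-headless   # example: install OpenJDK if only one JVM is present
sudo update-alternatives --config java        # then pick which installed JVM the java command points to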
hero member
Activity: 672
Merit: 500
Selling 10k BURST for 0.15 BTC
Selling 20k BURST for 0.3 BTC
Selling 10k BURST for 0.14 BTC
sr. member
Activity: 257
Merit: 255
So I left the plots generating and mining on a couple of VMs overnight, and woke up this morning to find that they had all hit the same error while generating the plots...

#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 630517760 bytes for committing reserved memory.
# Possible reasons:
#   The system is out of physical RAM or swap space
#   In 32 bit mode, the process size limit was hit
# Possible solutions:
#   Reduce memory load on the system
#   Increase physical memory or swap space
#   Check if swap backing store is full
#   Use 64 bit Java on a 64 bit OS
#   Decrease Java heap size (-Xmx/-Xms)
#   Decrease number of Java threads
#   Decrease Java thread stack sizes (-Xss)
#   Set larger code cache with -XX:ReservedCodeCacheSize=
# This output file may be truncated or incomplete.
#
#  Out of Memory Error (os_linux.cpp:2769), pid=8593, tid=139980355077888

Each VM has 16 GB of RAM and two cores, and the plots were being generated with ./run_generate.sh 1234567890 1 819100 8191 2
Any ideas?
Incidentally, they all crapped out after generating just over 6 GB of the plot. Very frustrating. I have to start over now.

Not sure ... but I guess it could be the following ...
It looks like you're using 32-bit Java ... try 500 to 1000 instead of 8191 for the memory (stagger) value ...
Plots are created in memory and then written to the HDD; as soon as you go above ~1000 in memory you get an exception with 32-bit ... right?!
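
In other words, a sketch of the adjusted command based on that advice (argument order taken from the command quoted earlier, with the 4th value being the memory/stagger setting):

./run_generate.sh 1234567890 1 819100 1000 2   # same plot, but only ~1000 nonces buffered in RAM at a time

If your generator expects the nonce count to be a multiple of the stagger, round 819100 down to 819000 to match.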