
Topic: [ANN] Bitfury ASIC sales in EU and Europe - page 69

newbie
Activity: 59
Merit: 0
September 06, 2013, 08:58:55 AM
punin

When an order is placed and paid via bitpay does it go to processing automatically after so many confirmations?

Only asking as it took a few hours for the first confirmation (usually only takes 10 or so mins) and the order is still on hold.

Yes it should go automatically into processing when payment has been made.

Sent you a PM. I know you're really busy, but hopefully you'll get a minute to look at it.
hero member
Activity: 560
Merit: 500
September 06, 2013, 08:50:25 AM
Hi,
Niko
I've sent you a few emails asking about bulk chip order prices.
Is there a problem?

Sorry Marto, I'm drowning in emails; I just had someone take over the communication for me. We will introduce new chip pricing on the site this weekend.
hero member
Activity: 560
Merit: 500
September 06, 2013, 08:44:46 AM
punin

When an order is placed and paid via bitpay does it go to processing automatically after so many confirmations?

Only asking as it took a few hours for the first confirmation (usually only takes 10 or so mins) and the order is still on hold.

Yes it should go automatically into processing when payment has been made.
hero member
Activity: 493
Merit: 500
Hooray for non-equilibrium thermodynamics!
September 06, 2013, 08:40:53 AM
we just put out .... some more data in the stat.json which we then ... use for the web gui ... and done ...

... no magic ... stick needed here ...

... we need to decide what ... data we want to be displayed ... what do we need ...

Great stuff darkfriend77, sounds like you may have just volunteered! ;)
sr. member
Activity: 434
Merit: 265
September 06, 2013, 08:32:39 AM
Code:
/* Dump miner stats to stat.json for the web GUI.
   Note: the original forum paste lost the for-loop header after the
   '<' sign; NBOARDS below is a stand-in for whatever bound was there. */
FILE* fp_json = fopen("stat.json", "w");
fprintf(fp_json, "{ \"stats\": \n {");
fprintf(fp_json, "\"speed\": %d, \"noncerate\": %.3f, \"noncerateperchip\": %.3f, \"hashrate\": %.3f, \"good\": %d, \"errors\": %d, \"spi-errors\": %d, \"miso-errors\": %d, \"jobs\": %d, \"record\": %.3f\n",
        speed, nr, (nr/chips), hr, nrate, error, espi, miso, job-last, record);
fprintf(fp_json, ",\"boards\": [");
int firstboard = 0;
for (b = 0; b < NBOARDS; b++) {
    if (b_speed[b]) {
        if (firstboard > 0)          /* comma between board objects, not before the first */
            fprintf(fp_json, ",");
        fprintf(fp_json, "\n{ ");
        fprintf(fp_json, "\"slot\": \"%c\", \"speed\": %d, \"noncerate\": %.3f, \"hashrate\": %.3f, \"good\": %d, \"errors\": %d, \"spi-errors\": %d, \"miso-errors\": %d",
                board[b], b_speed[b],
                (double)0xFFFFFFFF/1000000000.0*(double)b_nrate[b]/(double)wait,
                (double)0xFFFFFFFF/1000000000.0*(double)b_hrate[b]/(double)wait*(double)756/(double)1024,
                b_nrate[b], b_error[b], b_espi[b], b_miso[b]);
        fprintf(fp_json, " }\n");
        firstboard = 1;
    }
}
fprintf(fp_json, "\n ]");
fprintf(fp_json, "\n } }");
fclose(fp_json);


we just put out .... some more data in the stat.json which we then ... use for the web gui ... and done ...

... no magic ... stick needed here ...

... we need to decide what ... data we want to be displayed ... what do we need ...
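
For reference, a minimal sketch of how a status script (or the web GUI's backend) might read that stat.json. None of this is chainminer's own tooling; it assumes jq is installed and that stat.json sits in the current directory.
Code:
#!/bin/bash
# Print the overall stats and one line per hashing board from the
# stat.json written by the C snippet above.
STAT=./stat.json   # assumed location; adjust to wherever the miner writes it

jq -r '.stats | "total: noncerate=\(.noncerate) hashrate=\(.hashrate) errors=\(.errors)"' "$STAT"
jq -r '.stats.boards[] | "slot \(.slot): speed=\(.speed) noncerate=\(.noncerate) errors=\(.errors)"' "$STAT"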
hero member
Activity: 910
Merit: 1000
Items flashing here available at btctrinkets.com
September 06, 2013, 08:31:54 AM
The file we want is /run/smh/.stat.log

This is what they look like: http://pastebin.com/xN7t9WaH

Please note that pastebin screws up the line length; all the info regarding a single chip is actually on one line.
And secondly, this is a sample from a unit that has two h-boards; the maximum is sixteen.

Let me know if there's something else I can help with.
[edit] BTW, since there's probably no nice way to time the data collection to exactly when the new logs are generated (or perhaps there is, I'm out of familiar waters here), I would suggest that you have the script pick up data every 330 seconds (5 min 30 sec).

OK, looks good. When copied over to one of my Linux boxes it looks like this (http://s15.postimg.org/z1kuze2l7/bfminer_log.jpg). I presume that there are no lines of whitespace at the beginning or end of the file. Does that look right?

I was thinking that it might be better to have the script running as a cron job rather than building a certain waiting period before sampling into the script itself, but we can do it the other way if you like. In any case, I can get the script to check the timestamp of the log file and sleep for 30 secs or so if it's the same as the previous data point, so that data isn't double-counted.

Any preference on file format (I was thinking comma-separated text for easy input into Excel - XML should be possible, but might take longer to get working)? Do you have any particular data headings in mind for the consolidated output?

Personally I would just like larger samples of data to chew on, so hardly any preference here. :)
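
Since the cron approach keeps coming up, here is a rough sketch of what such a collector could look like. Everything in it beyond the stated /run/smh/.stat.log path is an assumption: the output path, the 30-second retry, and the one-chip-per-line layout are placeholders until someone runs it against real logs.
Code:
#!/bin/bash
# Append each fresh 5-minute stat log to one growing CSV.
# Run from cron, e.g.:  */5 * * * * /home/user/collect_stats.sh
LOG=/run/smh/.stat.log
OUT=$HOME/bfminer_history.csv      # hypothetical output file
STAMP=$HOME/.last_stat_mtime       # remembers the last log we ingested

mtime=$(stat -c %Y "$LOG") || exit 1
# Same log as last time? Give the miner ~30 s to write a new one.
if [ -f "$STAMP" ] && [ "$mtime" = "$(cat "$STAMP")" ]; then
    sleep 30
    mtime=$(stat -c %Y "$LOG")
    [ "$mtime" = "$(cat "$STAMP")" ] && exit 0   # still stale, skip this round
fi
echo "$mtime" > "$STAMP"

# One CSV row per log line (one chip per line), prefixed with the
# sample timestamp so averages can be computed over any window later.
while IFS= read -r line; do
    echo "$mtime,$line"
done < "$LOG" >> "$OUT"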
hero member
Activity: 493
Merit: 500
Hooray for non-equilibrium thermodynamics!
September 06, 2013, 08:31:36 AM

Maybe we should just try to add our stuff into ... the chainminer ...

https://github.com/bfsb/chainminer

We could add another view ... in the webGui which shows each chip and its performance ... with its stats ...

... this information is already being collected in the software for each chip ...


Yes, that's definitely a better option in the long term. It would certainly take me a lot longer than a quick text-file-parsing bash script, but there are much better coders than me in bitcoinland, so they may be able to implement this more quickly.
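
In the text-file-parsing spirit, a per-chip view can also come straight out of the collected logs. The sketch below assumes the CSV produced by the collector sketched earlier, with the chip identifier in the first whitespace-separated field of each log line and a per-chip nonce count in the fifth; the real column positions are in the pastebin sample, so both field numbers are placeholders.
Code:
#!/bin/bash
# Average a per-chip counter across all collected samples.
# Assumes rows of the form "<timestamp>,<raw log line>" and that the
# raw lines themselves contain no commas.
HIST=$HOME/bfminer_history.csv     # hypothetical, see collector sketch

awk -F',' '{
    split($2, f, " ")              # $2 = the raw per-chip log line
    sum[f[1]] += f[5]              # f[1] = chip id, f[5] = nonce count (placeholders)
    cnt[f[1]]++
} END {
    for (c in sum)
        printf "%s: avg=%.2f over %d samples\n", c, sum[c]/cnt[c], cnt[c]
}' "$HIST"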
hero member
Activity: 910
Merit: 1000
Items flashing here available at btctrinkets.com
September 06, 2013, 08:30:19 AM

What would be awesome is if someone whipped up a script that can collect larger samples of data by picking up each individual 5-min log and compiling them into a longer one with averages. I'd personally prefer one that shows info on every single chip.

Either this or an official statement about the need for fans and/or the temperature range the chips can withstand.

spiccioli
I have officially heard over IRC (punin said this) that a chip was tested to hash until the solder melted and it fell off. The chip in question was heated with a soldering iron while hashing.
hero member
Activity: 493
Merit: 500
Hooray for non-equilibrium thermodynamics!
September 06, 2013, 08:28:41 AM
The file we want is /run/smh/.stat.log

This is what they look like: http://pastebin.com/xN7t9WaH

Please note that pastebin screws up the line length; all the info regarding a single chip is actually on one line.
And secondly, this is a sample from a unit that has two h-boards; the maximum is sixteen.

Let me know if there's something else I can help with.
[edit] BTW, since there's probably no nice way to time the data collection to exactly when the new logs are generated (or perhaps there is, I'm out of familiar waters here), I would suggest that you have the script pick up data every 330 seconds (5 min 30 sec).

OK, looks good. When copied over to one of my Linux boxes it looks like this (http://s15.postimg.org/z1kuze2l7/bfminer_log.jpg). I presume that there are no lines of whitespace at the beginning or end of the file. Does that look right?

I was thinking that it might be better to have the script running as a cron job rather than building a certain waiting period before sampling into the script itself, but we can do it the other way if you like. In any case, I can get the script to check the timestamp of the log file and sleep for 30 secs or so if it's the same as the previous data point, so that data isn't double-counted.

Any preference on file format (I was thinking comma-separated text for easy input into Excel - XML should be possible, but might take longer to get working)? Do you have any particular data headings in mind for the consolidated output?
sr. member
Activity: 434
Merit: 265
September 06, 2013, 08:24:54 AM
What would be awesome is if someone whipped up a script that can collect larger samples of data by picking up each individual 5-min log and compiling them into a longer one with averages. I'd personally prefer one that shows info on every single chip.

Hi Isokivi, I'd be happy to give this a go over the weekend (we do a reasonable amount of parsing of text files in bash, so it should be straightforward). However, we don't have any bitfury hardware yet (October can't come round soon enough :D), so I would need some sample log files and some info on the file locations etc. Let me know if you're interested - somebody else may have done it already, of course, and may be able to share more quickly.
The file we want is /run/smh/.stat.log

This is what they look like: http://pastebin.com/xN7t9WaH

Please note that pastebin screws up the line length; all the info regarding a single chip is actually on one line.
And secondly, this is a sample from a unit that has two h-boards; the maximum is sixteen.

Let me know if there's something else I can help with.
[edit] BTW, since there's probably no nice way to time the data collection to exactly when the new logs are generated (or perhaps there is, I'm out of familiar waters here), I would suggest that you have the script pick up data every 330 seconds (5 min 30 sec).


Maybe we should just try to add our stuff into ... the chainminer ...

https://github.com/bfsb/chainminer

We could add another view ... in the webGui which shows each chip and its performance ... with its stats ...

... this information is already being collected in the software for each chip ...


member
Activity: 89
Merit: 10
September 06, 2013, 08:10:04 AM
One of my boards drops to ~10 GH/s.
The chips turn off... anyone got any suggestions for getting a bit more out of it?

Is it worth turning auto-tune off on those chips and setting them to 52 or something?

I am having the same problem. Initially I thought it was a problem with bus A, so I moved my cards to buses B and C, but the problem repeated. I experimented with turning autotune off and hand-tuning frequencies, and it has been hashing OK for about an hour now.
hero member
Activity: 910
Merit: 1000
Items flashing here available at btctrinkets.com
September 06, 2013, 08:03:03 AM
What would be awesome is if someone whipped up a script that can collect larger samples of data by picking up each individual 5-min log and compiling them into a longer one with averages. I'd personally prefer one that shows info on every single chip.

Hi Isokivi, I'd be happy to give this a go over the weekend (we do a reasonable amount of parsing of text files in bash, so it should be straightforward). However, we don't have any bitfury hardware yet (October can't come round soon enough :D), so I would need some sample log files and some info on the file locations etc. Let me know if you're interested - somebody else may have done it already, of course, and may be able to share more quickly.
The file we want is /run/smh/.stat.log

This is what they look like: http://pastebin.com/xN7t9WaH

Please note that pastebin screws up the line length; all the info regarding a single chip is actually on one line.
And secondly, this is a sample from a unit that has two h-boards; the maximum is sixteen.

Let me know if there's something else I can help with.
[edit] BTW, since there's probably no nice way to time the data collection to exactly when the new logs are generated (or perhaps there is, I'm out of familiar waters here), I would suggest that you have the script pick up data every 330 seconds (5 min 30 sec).
legendary
Activity: 1379
Merit: 1003
nec sine labore
September 06, 2013, 07:55:30 AM

What would be awesome is if someone whipped up a script that can collect larger samples of data by picking up each individual 5-min log and compiling them into a longer one with averages. I'd personally prefer one that shows info on every single chip.

Either this or an official statement about the need for fans and/or the temperature range the chips can withstand.

spiccioli
hero member
Activity: 493
Merit: 500
Hooray for non-equilibrium thermodynamics!
September 06, 2013, 07:54:37 AM
What would be awesome is if someone whipped up a script that can collect larger samples of data by picking up each individual 5-min log and compiling them into a longer one with averages. I'd personally prefer one that shows info on every single chip.

Hi Isokivi, I'd be happy to give this a go over the weekend (we do a reasonable amount of parsing of text files in bash, so it should be straightforward). However, we don't have any bitfury hardware yet (October can't come round soon enough :D), so I would need some sample log files and some info on the file locations etc. Let me know if you're interested - somebody else may have done it already, of course, and may be able to share more quickly.
hero member
Activity: 910
Merit: 1000
Items flashing here available at btctrinkets.com
September 06, 2013, 07:46:17 AM
I have done 2 test runs ....

Temperature was measured on the PCB.

without fans:

PCB @ 60 °C ... avg. hash rate around 41.5 GH/s

with fans:

PCB @ 41 °C ... avg. hash rate around 42.0 GH/s

So my conclusion is ... that they do work more or less the same cooled or uncooled ....

... maybe we should add some more information to the web gui ...



What would be awesome is if someone whipped up a script that can collect larger samples of data by picking up each individual 5-min log and compiling them into a longer one with averages. I'd personally prefer one that shows info on every single chip.
sr. member
Activity: 434
Merit: 265
September 06, 2013, 07:40:23 AM
I have done 2 test runs ....

Temperature was measured on the PCB.

without fans:

PCB @ 60 °C ... avg. hash rate around 41.5 GH/s

with fans:

PCB @ 41 °C ... avg. hash rate around 42.0 GH/s

So my conclusion is ... that they do work more or less the same cooled or uncooled ....

... maybe we should add some more information to the web gui ...

full member
Activity: 146
Merit: 100
@WiRED
September 06, 2013, 07:34:53 AM
Received miner (starter kit) yesterday; everything is looking good @ 47 GH/s!

Setup was insanely fast!

Thanks bitfury & punin!

legendary
Activity: 974
Merit: 1000
September 06, 2013, 06:08:00 AM
I get 27 on slush's with diff 16 and 38 on eligius at the same time. I'm new to eligius and don't even know where to change difficulty. ;)

For me it's like the less I touch, the better it gets. Seems to be a pretty self-contained ecosystem; I like that, I'm a lazy guy. 8)
legendary
Activity: 1379
Merit: 1003
nec sine labore
September 06, 2013, 06:06:35 AM
dani, did you try other pools/difficulties?
I tried eligius with diff 1 for the first hour or so and it went okay; changed diff to 32 and it went the same. I was on slush's pool with the suggested diff of 32, but it seemed not to work (hashrate <3 GH/s according to the pool). My friend's unit hashes at difficulty 1; tonight it slowed down to 2.9 GH/s, I restarted it and it's working fine again. Any recommendations for pools and difficulty?

My pool recommendation is a few messages back, but difficulty depends on your hashing speed.

At 40 GH/s you can use 128-256; at 400 GH/s you should use 1024 or higher.

My 400 GH/s kit, which is running at just a little less than 400 GH/s (mostly because of that partially broken card), is at 1024 difficulty right now.

You should let it run for a day, at least, before assessing average speed.

spiccioli
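
For anyone wondering where those difficulty numbers come from: a miner finds on average about H / (D x 2^32) shares per second at hashrate H (in hashes/s) and share difficulty D, so the suggestions above keep the share rate at a comfortable few per minute. A quick back-of-the-envelope check with bc:
Code:
# Expected shares per minute = H * 60 / (D * 2^32), with H in hashes/s
echo '40*10^9*60 / (128*2^32)' | bc -l     # ~4.37 shares/min at 40 GH/s, diff 128
echo '400*10^9*60 / (1024*2^32)' | bc -l   # ~5.46 shares/min at 400 GH/s, diff 1024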
hero member
Activity: 525
Merit: 500
..yeah
September 06, 2013, 05:56:03 AM
dani, did you try other pools/difficulties?
I tried eligius with diff 1 for the first hour or so and it went okay; changed diff to 32 and it went the same. I was on slush's pool with the suggested diff of 32, but it seemed not to work (hashrate <3 GH/s according to the pool). My friend's unit hashes at difficulty 1; tonight it slowed down to 2.9 GH/s, I restarted it and it's working fine again. Any recommendations for pools and difficulty?