
Topic: OFFICIAL CGMINER mining software thread for linux/win/osx/mips/arm/r-pi 4.11.0 - page 41. (Read 5805508 times)

hero member
Activity: 518
Merit: 500
You'll probably have to swap the PWM and tachometer terminals on the fan if it is not showing speeds on the second fan.
legendary
Activity: 1274
Merit: 1000
The fan display is something from the hardware, not the software. It's probably as you seem to have already guessed: it's on the control board somewhere, and that somewhere exists on the S3 board but not the S1.

AFAIK no one has ever determined how to get S1 hardware to show the second fan speed in the gui.
legendary
Activity: 1274
Merit: 1000
I know this has been asked before, but I never saw an answer anywhere. I wasn't the one who asked it before, but I am now: I upgraded an S1 to an S3. How do I make fan 2 show its speed? It shows everything else, even with the new update. Again, I know it's been asked, but I have yet to see a good answer as to why or how, and this is the first time I've come out and asked it.


Just pieced together an S3 from spare parts I had lying around, for fun, and it does the same thing. Is it because of the controller-to-power-board conversion from S1 to S3?

I have a full S3 controller coming that someone sold me, if that's the problem (the controller from the S1-to-S3 conversion). I may buy one more full controller from him.
legendary
Activity: 1274
Merit: 1000
If it's consistent, then it is hashing faster.

Where it starts depends on luck in the early period after the start ... up to an hour or so ... then it will gradually average out to the expected rate.
If you are lucky at the start it can be quite high, I've had my S3 mine at around 500GHs for an hour - luck/variance - but eventually it settles down and heads towards 440.


Yes, it consistently hashes faster for one to several days after a restart.  With my overclock settings I should expect it to hash at 500GH/s.  After a day or several days, however, this machine will slowly ratchet down to about 480GH/s.  I like to restart it between 485-490 to keep that extra hash on my side.

I reset it last night and took a screen cap this morning to show it holding 500+ overnight. I noticed the 5s hash rate was doing its crazy swings again, so I screen-capped it high and low. Variance I expect, especially right at start-up as you mentioned, but a swing on the order of 10x is pretty substantial, wouldn't you say? Especially after running for over 9 hours.



legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
I'm not sure how accurately the Bitmain driver handles the work difficulty from the pool.
I do it as follows:
https://github.com/ckolivas/cgminer/blob/master/driver-bitmain.c#L592

Thus the nonce finding variance of the Ant is close to the share finding variance - which reduces internal IO from the Ant chip also.
The Ants can do work difficulty at a power of 2: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096 (I stop at 4096 so direct solo gives results).
So if the pool difficulty is e.g. 300, the Ant will mine internally at 256 difficulty, and since the hash rate is based on nonces found, the hash rate's variance will be the equivalent of mining on a pool at 256 difficulty, so it will be noticeable.
Bitmain's driver does this also, but I'm not sure how accurately - it may be the same or lower for them.

I do not understand the code, but I think I follow the rest of what you are saying.

One question remains: is my machine actually hashing faster, as reported, for the 1-2 days after a restart, or is the elevated rate merely a result of the difficulty/nonce effect above? If it is actually hashing faster, then I should be restarting it every day or two; the difference is like a free USB stick or three.

Thanks again.
If it's consistent, then it is hashing faster.

Where it starts depends on luck in the early period after the start ... up to an hour or so ... then it will gradually average out to the expected rate.
If you are lucky at the start it can be quite high, I've had my S3 mine at around 500GHs for an hour - luck/variance - but eventually it settles down and heads towards 440.

Think of it like block finding.
A pool can get lucky and find lots of blocks fast and get unlucky and have long dry spells of no blocks.
Over time the expected average of a pool is 100% diff (less orphan rate)

With the S3 it's on a much smaller scale, but the same idea.
Over a day or two it will certainly approach the expected average hash rate, but when you first start it up the variance can seem quite high.
This is, again, due to the fact that it is returning a LOT fewer nonces.
If it is internally mining at 256 diff, then it will have only 1/256th of the amount of results - so it's sort of like saying it will take 256 times longer to average out - well it's not 256, but the idea is the same. That's variance.

It's not affecting the number of shares you are submitting to the pool.
legendary
Activity: 1274
Merit: 1000
I'm not sure how accurately the Bitmain driver handles the work difficulty from the pool.
I do it as follows:
https://github.com/ckolivas/cgminer/blob/master/driver-bitmain.c#L592

Thus the nonce finding variance of the Ant is close to the share finding variance - which reduces internal IO from the Ant chip also.
The Ants can do work difficulty at a power of 2: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096 (I stop at 4096 so direct solo gives results).
So if the pool difficulty is e.g. 300, the Ant will mine internally at 256 difficulty, and since the hash rate is based on nonces found, the hash rate's variance will be the equivalent of mining on a pool at 256 difficulty, so it will be noticeable.
Bitmain's driver does this also, but I'm not sure how accurately - it may be the same or lower for them.

I do not understand the code, but I think I follow the rest of what you are saying.

One question remains: is my machine actually hashing faster, as reported, for the 1-2 days after a restart, or is the elevated rate merely a result of the difficulty/nonce effect above? If it is actually hashing faster, then I should be restarting it every day or two; the difference is like a free USB stick or three.

Thanks again.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...

12 hours and running just fine, thanks!

http://i61.tinypic.com/11t3dky.jpg

The 5s hash rate swings pretty wildly, I've seen it as high as 1.1TH, heh.
I'm not sure how accurately the Bitmain driver handles the work difficulty from the pool.
I do it as follows:
https://github.com/ckolivas/cgminer/blob/master/driver-bitmain.c#L592

Thus the nonce finding variance of the Ant is close to the share finding variance - which reduces internal IO from the Ant chip also.
The Ants can do work difficulty at a power of 2: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096 (I stop at 4096 so direct solo gives results).
So if the pool difficulty is e.g. 300, the Ant will mine internally at 256 difficulty, and since the hash rate is based on nonces found, the hash rate's variance will be the equivalent of mining on a pool at 256 difficulty, so it will be noticeable.
Bitmain's driver does this also, but I'm not sure how accurately - it may be the same or lower for them.
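The power-of-2 clamping described above can be sketched roughly like this (illustrative Python only, not the actual driver code; the 4096 cap comes from the post):

```python
# Rough sketch (not the driver code): pick the largest power of 2
# that does not exceed the pool difficulty, capped at 4096 so that
# direct solo mining still returns results.
def ant_work_difficulty(pool_diff):
    d = 1
    while d * 2 <= pool_diff and d * 2 <= 4096:
        d *= 2
    return d

print(ant_work_difficulty(300))  # -> 256, matching the example above
```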
legendary
Activity: 1274
Merit: 1000
Yes the hash rate has high variance due to the fact that the device nonces are at close to the pool difficulty.

So e.g. if you have the pool getting 18 shares per minute, cgminer will only see (on average) somewhere between 18 and 36 nonces per minute.
It can take a day (or more) to settle into the expected hash rate.
The share variance in the first 10 minutes can, of course, be very high or very low, and that decides the curve towards the expected, from above or from below.

The very first time I ran the new code on my S3 for an extended run, I got 500GHs for the first hour instead of 440GHs ... yeah, even I had to think a few times about what was going on. My first overnight test run got 450GHs. But in both cases, if I had left them for a day or two, they would have ended up close to 440GHs.

12 hours and running just fine, thanks!



The 5s hash rate swings pretty wildly, I've seen it as high as 1.1TH, heh.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Right so you use both, since you want both to have access ...
W:127.0.0.1,W:[my local ip address]
https://github.com/ckolivas/cgminer/blob/master/API-README#L18



Try setting the Advanced settings to what they were before and saving them, and make sure they are the same.

Edit: you can also see the setting it is running if you look at the API estats command output (in our version, not in the bitmain version)

I should read the README, shouldn't I? Thank you, that makes sense and it's now fixed.

I had checked the Advanced settings and they were the same as pre-update. I've reinstalled 4.9.2 and I think I just needed to let it run a bit longer to even out. Hashing away happily over 500GH/s now, thanks. It would be great to see if 4.9.2 also fixes why my S3+ would decrease from 500GH to 480, sometimes over the course of one day, sometimes over several. Fingers crossed it holds steady at 500GH+. Cool
Yes the hash rate has high variance due to the fact that the device nonces are at close to the pool difficulty.

So e.g. if you have the pool getting 18 shares per minute, cgminer will only see (on average) somewhere between 18 and 36 nonces per minute.
It can take a day (or more) to settle into the expected hash rate.
The share variance in the first 10 minutes can, of course, be very high or very low, and that decides the curve towards the expected, from above or from below.

The very first time I ran the new code on my S3 for an extended run, I got 500GHs for the first hour instead of 440GHs ... yeah, even I had to think a few times about what was going on. My first overnight test run got 450GHs. But in both cases, if I had left them for a day or two, they would have ended up close to 440GHs.
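As a rough back-of-the-envelope check of the numbers above (assuming the usual average of 2^32 hashes per difficulty-1 share):

```python
# Expected nonces per minute at a given hash rate and work difficulty,
# assuming ~2**32 hashes per difficulty-1 share on average.
def nonces_per_minute(hashrate_ghs, diff):
    return hashrate_ghs * 1e9 * 60 / (diff * 2**32)

# An S3 at ~440 GH/s mining internally at 256 difficulty returns only
# about two dozen nonces a minute - few enough that the displayed hash
# rate stays noisy for a long time.
print(round(nonces_per_minute(440, 256)))  # -> 24
```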
legendary
Activity: 1274
Merit: 1000
Right so you use both, since you want both to have access ...
W:127.0.0.1,W:[my local ip address]
https://github.com/ckolivas/cgminer/blob/master/API-README#L18



Try setting the Advanced settings to what they were before and saving them, and make sure they are the same.

Edit: you can also see the setting it is running if you look at the API estats command output (in our version, not in the bitmain version)

I should read the README, shouldn't I? Thank you, that makes sense and it's now fixed.

I had checked the Advanced settings and they were the same as pre-update. I've reinstalled 4.9.2 and I think I just needed to let it run a bit longer to even out. Hashing away happily over 500GH/s now, thanks. It would be great to see if 4.9.2 also fixes why my S3+ would decrease from 500GH to 480, sometimes over the course of one day, sometimes over several. Fingers crossed it holds steady at 500GH+. Cool
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
It's some bug in your code, json implementation or .NET
The output format is as I stated, there are no closing } followed directly by an opening {

Turns out I was looking at the API output from another S3 that has not been updated with the latest cgminer .... I have checked on the updated one and it returns unbroken json (so not a bug in my code and certainly not in .NET!)

...
Bitmain has old versions of their fork of cgminer in their miners ... all the more reason to update to our master cgminer Smiley
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
Just updated my S3+, couple issues:

1)  Under API allow, if I use W:[my local ip address], then CryptoGlance reports the S3+, but the web page doesn't show any stats under Miner Status.  If I use W:127.0.0.1, then it shows stats under Miner Status, but CryptoGlance shows the S3+ as dead.
Right so you use both, since you want both to have access ...
W:127.0.0.1,W:[my local ip address]
https://github.com/ckolivas/cgminer/blob/master/API-README#L18

If you had it before as W:0/0 anyone on the planet could change your miner to mine for them if they had network access and found it ... e.g. your neighbours if you have Wifi and they can hack into it Smiley
I also have no idea what Bitmain did to the API - but it SHOULD ONLY give access to what you tell it to have access to, which is how I designed and wrote the API and api-allow.
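For reference, a combined api-allow setting like the one above would look something like this on the command line (the 192.168.1.50 address is a placeholder for your monitoring machine's IP):

```shell
# Allow full (W:) API access from localhost and one LAN machine;
# 192.168.1.50 is a placeholder - substitute your monitor's address.
cgminer --api-listen --api-allow "W:127.0.0.1,W:192.168.1.50"
```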

Quote
2)  Old version my unit hashed at 500GH/s, now I don't see over 420.

Going to have to revert versions...
Try setting the Advanced settings to what they were before and saving them, and make sure they are the same.

Edit: you can also see the setting it is running if you look at the API estats command output (in our version, not in the bitmain version)
legendary
Activity: 1274
Merit: 1000
Just updated my S3+, couple issues:

1)  Under API allow, if I use W:[my local ip address], then CryptoGlance reports the S3+, but the web page doesn't show any stats under Miner Status.  If I use W:127.0.0.1, then it shows stats under Miner Status, but CryptoGlance shows the S3+ as dead.

2)  Old version my unit hashed at 500GH/s, now I don't see over 420.

Going to have to revert versions...
hero member
Activity: 518
Merit: 500
It's some bug in your code, json implementation or .NET
The output format is as I stated, there are no closing } followed directly by an opening {

Turns out I was looking at the API output from another S3 that has not been updated with the latest cgminer .... I have checked on the updated one and it returns unbroken json (so not a bug in my code and certainly not in .NET!)

The API puts a null at the end of the full reply (not in the middle) on purpose.
It's a socket level optimisation.
It is guaranteed to be the only null and it clearly terminates the socket message.

Like I said, I had not tested that (but I know that if it does, it would cause the issue I mentioned), and I have yet to confirm it either way.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
It's some bug in your code, json implementation or .NET
The output format is as I stated, there are no closing } followed directly by an opening {

The API puts a null at the end of the full reply (not in the middle) on purpose.
It's a socket level optimisation.
It is guaranteed to be the only null and it clearly terminates the socket message.
Various code in various places had random handling to determine an end of a socket message.
There is no such confusion with the API socket.
Once you get the null, you know you have all the data and do not need to look for/wait for anything else.
Until you get the null, you know you need to keep reading.
Thus only in the very rare case of a transmission error/failure do you ever wait on the socket and get a timeout.

You can test what the reply is directly on linux:
echo '{"command":"summary+stats"}' | ncat -4 MinerIPAddress 4028

Edit: note: it's not 2 responses separated by a comma, it's a JSON list.
If you are getting 2 {} responses then you must be making 2 connections and sending 2 {command} requests
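A client-side read loop following the rule above (read until the terminating NUL) might look like this - a sketch only, with the default port 4028 taken from the ncat example:

```python
# Read a cgminer API reply by accumulating socket data until the
# single terminating NUL byte arrives, per the description above.
import socket

def api_command(host, command, port=4028):
    buf = b""
    with socket.create_connection((host, port)) as s:
        s.sendall(('{"command":"%s"}' % command).encode())
        while b"\x00" not in buf:
            chunk = s.recv(4096)
            if not chunk:
                break  # connection closed early: transmission failure
            buf += chunk
    # Everything before the NUL is the complete reply.
    return buf.split(b"\x00", 1)[0].decode()
```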
hero member
Activity: 518
Merit: 500
...
I know I said that was the last one, but the API still returns "broken" JSON when queried with two commands on the same line. Easy enough to fix by adding a comma between any curly braces backing onto each other .....
What command did you send it so I can test it?

If you send json as multiple commands with + between them they become an array of replies

 {"command":"cmd1+cmd2"}
replies with
 {"cmd1":[{ ... reply1 ... }],"cmd2":[{ ... reply2 ... }]}

where { ... reply1 ... } is what you'd get from {"command":"cmd1"}

Edit: reading your comment again - you can't send 2 commands - only one per API access (and then the API socket closes)
You can join them, as I've mentioned above, with a +, to get an array of answers in one command (but they can only be "report" commands)
... as in https://github.com/ckolivas/cgminer/blob/master/API-README

The command sent was stats+summary, JSON encoded (i.e. I use the .NET JavaScriptSerializer to serialize a dictionary of string,string to JSON, then use the serialized string to poll the API). As you mention above, it SHOULD respond with the two responses separated by a comma, but it does not put the comma there.

Additionally (and I have not checked this properly yet): normally the API will terminate the response to a single command with a null at the end. It may be that the API also includes a null at the end of the first command's response in a two-command poll, which would cause loops looking for a terminating null to bail out early on the first null.
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
...
I know I said that was the last one, but the API still returns "broken" JSON when queried with two commands on the same line. Easy enough to fix by adding a comma between any curly braces backing onto each other .....
What command did you send it so I can test it?

If you send json as multiple commands with + between them they become an array of replies

 {"command":"cmd1+cmd2"}
replies with
 {"cmd1":[{ ... reply1 ... }],"cmd2":[{ ... reply2 ... }]}

where { ... reply1 ... } is what you'd get from {"command":"cmd1"}

Edit: reading your comment again - you can't send 2 commands - only one per API access (and then the API socket closes)
You can join them, as I've mentioned above, with a +, to get an array of answers in one command (but they can only be "report" commands)
... as in https://github.com/ckolivas/cgminer/blob/master/API-README
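In other words, a joined reply parses as ordinary JSON with one key per command, each holding a list of reply objects. A quick sketch, using a made-up sample string in the shape described above (not real miner output):

```python
# Parse a joined "summary+stats" style reply; the raw string below is
# a made-up sample in the documented shape, not real miner output.
import json

raw = '{"summary":[{"Elapsed": 43290}],"stats":[{"miner_count": 2}]}'
reply = json.loads(raw)
print(reply["summary"][0]["Elapsed"])     # -> 43290
print(reply["stats"][0]["miner_count"])   # -> 2
```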
legendary
Activity: 1274
Merit: 1000
Yes, just the steps shown at the top do it - the tar extract and the cgset do everything.
The rest below just explains what's going on.
Of course you should check your settings as it suggests.

... have you ever tried the U3 on cgminer? (latest 4.9.2)
We use proper USB access to devices, not 30 year old filtered serial access that hides important information
(and cgminer has since I first changed it to use USB long ago)
Direct USB has many advantages over the filtered serial access.
... though of course the U3 itself is pretty shoddy Tongue

Thanks, still haven't gotten around to updating my S3+, will do that soon.

I have not used 4.9.2 on my U3s yet. I have them at another location at the moment, and that machine does not grant me the necessary permissions to use Zadig and modify USB drivers, so I am pursuing other avenues there. Once summer is over I'll bring them home and run them with cgminer again.
full member
Activity: 213
Merit: 100
Some more feedback about the S3+: I noticed that HW errors dropped on most of my miners by 0.003%. Everything is running well here, about 20 hours on each miner. I like the new view of the miner stats - simple and straightforward, the way I like it. Great work. Bitmain should hire you (if you would choose to work for a company like that). I don't get it, with all the epic fails on their part; I don't know why they don't try to pick up a dev who knows what they're doing.
hero member
Activity: 518
Merit: 500
If you do that on any mining device that internally has more than one dev, you need to add up all the devs to get the summary amount.

My custom monitor is on Windows, and LINQ makes it trivial to pull the non-null/empty values from the API response and average them out (though I just tend to list the values for each).
That said, with the new S3 API, it throws up a weird character on chain_acs11 (I've also seen it on chain_acs10). It keeps changing, though ... here's a screenshot of it in PuTTY (my monitor is currently in debug mode, but I will post how it manifests on the form when I am up and running again).

http://s11.postimg.org/n67k0lqjn/S3_Gremlin.png

EDIT: Here's the gremlin in my monitor!

http://s2.postimg.org/87zo87th5/S3_Gremlin1.png
I'll look into it (I don't see it at all on mine - so it may be a bug on your end)

But you already know not to display it:
   [miner_count] => 2
for tempX, chain_acnX and chain_acsX
i.e. 2 means 0 and 1 for X

of course same for fan:
   [fan_num] => 2
for fanX

It definitely is on my end. I initially thought it was because I was running a pre-release PuTTY 0.65 (to fix the bug from a Windows update that meant PuTTY could not render its window), but then it showed up in my form. And yes, I could (and now have) used the miner count, or even checked the length, but I thought you might want to know in case there was something more to it.
While on that subject (and I'll make this the last one), I also noticed that initially the response for chains 1 and 2 had double the "chips", with the first set all dashes ..... however, this cleared up soon enough, and I have not replicated it since I left the S3 I am testing on to run (now 24 hrs+). Again, I did not mention it earlier as it cleared up quickly .....

EDIT:  Did a restart and here is the initial confusion per my monitor ...



I know I said that was the last one, but the API still returns "broken" JSON when queried with two commands on the same line. Easy enough to fix by adding a comma between any curly braces backing onto each other .....