
Topic: Pool API Standard (Read 4462 times)

legendary
Activity: 2576
Merit: 1186
June 18, 2012, 08:34:04 AM
#45
Elapsed values should 'almost' never be used unless there is a specific reason for them.
'How long has a block been around' is a good example of when one shouldn't be used - that should be a timestamp.
It is a timestamp...

Also - I like float MH/s rather than H/s, since that is what everyone thinks in nowadays, and in the future it will only get bigger.
1 TH/s is 1000000000000 H/s - that number is already far too big ...
1000000.000000 MH/s is 'nicer' IMO :)
1e12 is valid JSON.
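
For illustration, here is the same 1 TH/s rate under each convention - the key names below are just placeholders, not part of any agreed spec:

{
    "hashrate_hs": 1e12,
    "hashrate_mhs": 1000000.0
}

Both parse to the same magnitude; the only difference is which unit readers and dashboards are expected to think in.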
legendary
Activity: 4592
Merit: 1851
Linux since 1997 RedHat 4
June 18, 2012, 07:56:54 AM
#44
A suggestion (or two) ... feel free to ignore it :)


Elapsed values should 'almost' never be used unless there is a specific reason for them.
'How long has a block been around' is a good example of when one shouldn't be used - that should be a timestamp.
(Saying it makes things easier for recipient XYZ simply means you got it wrong.)
Timestamp things and return integer-second timestamps, or a float timeval if higher accuracy is required.
Also, as I did in the cgminer API, include a 'now' with any data set that has timestamps
(the cgminer API STATUS header always includes 'When' for that reason ... once I realised it was missing).
This resolves any issue of creating correct elapsed values if anyone wants to display them - see the sketch below.
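
A minimal sketch of that idea (key names and values are invented for illustration, not taken from cgminer or the draft spec): every data set carrying timestamps also carries the server's 'now', and the receiver derives elapsed itself, e.g. now - block_started = 1800 seconds here:

{
    "now": 1340000000,
    "block_started": 1339998200,
    "last_share": 1339999970
}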


Also - I like float MH/s rather than H/s, since that is what everyone thinks in nowadays, and in the future it will only get bigger.
1 TH/s is 1000000000000 H/s - that number is already far too big ...
1000000.000000 MH/s is 'nicer' IMO :)
legendary
Activity: 2576
Merit: 1186
June 06, 2012, 10:22:29 PM
#43
I (we!) missed a very important metric up there, invalid/stale shares. So I've tweaked it a bit.
Getting rather complicated now though! Didn't intend for it to go this way, but at least it's pretty complete!

https://bitcointalksearch.org/topic/m.921816
Did you update the wiki page with the specification?

I should probably do that...
Please try to edit the existing specification draft, rather than replace it entirely. I combined the two, but please double-check that I didn't miss anything. Note that JSON does not have Float or Time types, just Number.

I also made a quick pass over it to try to make it sensible for Bitcoin address mining.
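
In practice that means a field the draft describes as a Float or a Time is still emitted as a plain JSON Number - for example (keys here are illustrative only), a Unix-timestamp time and a fractional rate:

{
    "last_pay_time": 1338940800,
    "hashrate": 1234.567
}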
legendary
Activity: 1795
Merit: 1208
This is not OK.
June 06, 2012, 09:44:32 PM
#42
Hoppable prop pools will not want to give out that much information... of course, that's their problem.
That's why everything is optional.

Word.
legendary
Activity: 2576
Merit: 1186
June 06, 2012, 09:37:57 PM
#41
Hoppable prop pools will not want to give out that much information... of course, that's their problem.
That's why everything is optional.
-ck
legendary
Activity: 4088
Merit: 1631
Ruu \o/
June 06, 2012, 09:18:12 PM
#40
Hoppable prop pools will not want to give out that much information... of course, that's their problem.
legendary
Activity: 1795
Merit: 1208
This is not OK.
June 06, 2012, 05:08:22 PM
#39
Reminds me of



Certainly does :D

but no standard exists for this yet... all have been proprietary.
hero member
Activity: 686
Merit: 500
June 06, 2012, 04:35:59 PM
#38
Reminds me of

legendary
Activity: 1795
Merit: 1208
This is not OK.
June 06, 2012, 03:47:08 PM
#37
I (we!) missed a very important metric up there, invalid/stale shares. So I've tweaked it a bit.
Getting rather complicated now though! Didn't intend for it to go this way, but at least it's pretty complete!

https://bitcointalksearch.org/topic/m.921816
Did you update the wiki page with the specification?

I should probably do that...
legendary
Activity: 2576
Merit: 1186
June 06, 2012, 03:26:26 PM
#36
I (we!) missed a very important metric up there, invalid/stale shares. So I've tweaked it a bit.
Getting rather complicated now though! Didn't intend for it to go this way, but at least it's pretty complete!

https://bitcointalksearch.org/topic/m.921816
Did you update the wiki page with the specification?
legendary
Activity: 1795
Merit: 1208
This is not OK.
June 06, 2012, 03:22:25 PM
#35
I (we!) missed a very important metric up there, invalid/stale shares. So I've tweaked it a bit.
Getting rather complicated now though! Didn't intend for it to go this way, but at least it's pretty complete!

https://bitcointalksearch.org/topic/m.921816
full member
Activity: 133
Merit: 100
June 01, 2012, 05:26:05 PM
#34
block =
{
    "currency" = <currency>,
    "id" = <block ID>,
    "duration" = <duration of the round>,
    "shares_total" = <total shares submitted for the round>,
    "shares_submitted" = <shares submitted by the user for the round>
}

balance =
{
    "currency" = <currency>,
    "confirmed" = <confirmed balance (>=120 valid blocks) in Satoshis>,
    "unconfirmed" = <unconfirmed balance in Satoshis>,
    "estimate" = <estimated balance for the current round in Satoshis>,
    "last_pay" = <amount of the last payout in Satoshis>,
    "last_pay_time" = <timestamp of the last payout>
}

+1

I would love to see this implemented.
legendary
Activity: 1795
Merit: 1208
This is not OK.
May 25, 2012, 11:41:20 AM
#33
shares =
{
    "submitted" = <shares submitted>,
    "stale" = <stale shares>,
    "invalid" = <invalid shares>
}

block =
{
    "currency" = <currency>,
    "id" = <block ID>,
    "duration" = <duration of the round>,
    "shares_round" = {<'shares' for the entire pool for the round (as defined above)>},
    "shares_user" = {<'shares' for all workers of the user for the round (as defined above)>}
}

balance =
{
    "currency" = <currency>,
    "confirmed" = <confirmed balance (>=120 valid blocks) in Satoshis>,
    "unconfirmed" = <unconfirmed balance in Satoshis>,
    "estimate" = <estimated balance for the current round in Satoshis>,
    "last_pay" = <amount of the last payout in Satoshis>,
    "last_pay_time" = <timestamp of the last payout>
}
legendary
Activity: 1795
Merit: 1208
This is not OK.
May 24, 2012, 11:21:51 AM
#32
CSV for block data sounds reasonable to me.
donator
Activity: 2058
Merit: 1007
Poor impulse control.
May 24, 2012, 10:27:43 AM
#31
I would agree: historical data is better provided as CSV. The API should be for relevant current information that changes frequently.

JSON has overhead compared to CSV. However, CSV is not extensible, and the information isn't that big, so the overhead doesn't matter much. I would prefer JSON over CSV. A simple (online) converter could still be made for those who prefer to work with CSV instead of JSON. If it is a standard, such a converter would be easy to write for all pools at once...

CSV is only being considered for historical data, which is easier to manipulate as a dataframe and only needs updating when a block is found.
sr. member
Activity: 263
Merit: 250
Pool operator of Triplemining.com
May 24, 2012, 08:58:08 AM
#30
I would agree: historical data is better provided as CSV. The API should be for relevant current information that changes frequently.

JSON has overhead compared to CSV. However, CSV is not extensible, and the information isn't that big, so the overhead doesn't matter much. I would prefer JSON over CSV. A simple (online) converter could still be made for those who prefer to work with CSV instead of JSON. If it is a standard, such a converter would be easy to write for all pools at once...
legendary
Activity: 1260
Merit: 1000
May 24, 2012, 07:40:42 AM
#29
I would agree: historical data is better provided as CSV. The API should be for relevant current information that changes frequently.
legendary
Activity: 1386
Merit: 1097
May 24, 2012, 01:13:43 AM
#28
As Inaba said the number and order of fields in JSON is immaterial

To be exact, this is true only for the keys of an object. Lists are ordered (and the history of blocks should definitely be stored in a list). But I see your point. Actually I'm for defining the JSON API as the standard, because it's usually easier to handle in applications, and providing an alternate CSV API with the block history only (from the pool's side it's just an iteration over the original JSON with the columns in the expected order).
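
For example, the block history could be an ordered JSON list like this sketch (keys and values are invented for illustration); flattening it to CSV is then just emitting the same fields as columns in a fixed order:

[
    { "id": 180001, "found": 1337000000, "duration": 4100, "shares": 950000 },
    { "id": 180002, "found": 1337006900, "duration": 6900, "shares": 1600000 }
]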
donator
Activity: 2058
Merit: 1007
Poor impulse control.
May 23, 2012, 08:12:32 PM
#27
I think pool round history is probably better as a .csv than in JSON. It's much easier to use a dataframe when working with aggregates of data. It's also much easier for the poor sods using Excel to get a feel for their pool's recent history.

As Inaba said the number and order of fields in JSON is immaterial, but that's not the case in a dataframe, especially if you want to perform exactly the same operations on different dataframes from different pools.

sr. member
Activity: 263
Merit: 250
Pool operator of Triplemining.com
May 23, 2012, 05:18:16 PM
#26
I'd suggest providing two APIs/URLs.

One API that requires an API key, showing all the information related to that user.

A second API/URL containing the pool data - global data that is not related to a specific user - so it can be retrieved by any user without an API key. Examples are the global hashrate, found-block history and such.

Separating them would make it easy for people to get full data on any pool without an account...

Perhaps a third URL could be provided, like /standard_api_locations.txt, listing the locations of both of those URLs, so the user doesn't have to enter anything except the main pool URL.
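
A hedged sketch of what such a discovery file could contain, assuming a JSON body (the filename, keys and paths are only illustrative and not part of any agreed spec):

{
    "user_api": "https://pool.example.com/api/user?key=<API key>",
    "pool_api": "https://pool.example.com/api/pool"
}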