
Topic: 300 BTC Coding Contest: Distributed Exchange (MasterCoin Developer Thread) - page 23. (Read 129207 times)

legendary
Activity: 1666
Merit: 1010
he who has the gold makes the rules
Congrats!

I'm going to pop a bottle for you all

legendary
Activity: 1358
Merit: 1003
Ron Gross
I would actually suggest this should be done with real MSC (just really small amounts, 0.00000001 etc.) because consensus on test coins is already bonkers because of DEx :)
Fair point - let's see what Bitoy says about update frequency so participants don't think they've broken consensus just due to delayed updates - IIRC he's at 30 mins currently.


I'm now updating every time there is a consensus check. (Hoping blockchain.info doesn't block my IP :) )

Trying to update transactions for PDex checking. I'll be making some changes to the database to meet Tachikoma's DEx API.

Finally 100% consensus :)
https://masterchest.info/consensus_masterchestinfodev.aspx

Terrific, this is a great day for The Master Protocol!
Thanks everyone for your tremendous efforts on this.


* I notice the Exodus address still has minor differences between the implementations, but there's a pull request out to fix that.
sr. member
Activity: 449
Merit: 250
I would actually suggest this should be done with real MSC (just really small amounts, 0.00000001 etc.) because consensus on test coins is already bonkers because of DEx :)
Fair point - let's see what Bitoy says about update frequency so participants don't think they've broken consensus just due to delayed updates - IIRC he's at 30 mins currently.


I'm now updating every time there is a consensus check. (Hoping blockchain.info doesn't block my IP :) )

Trying to update transactions for PDex checking. I'll be making some changes to the database to meet Tachikoma's DEx API.

Finally 100% consensus :)
https://masterchest.info/consensus_masterchestinfodev.aspx
sr. member
Activity: 266
Merit: 250
I would actually suggest this should be done with real MSC (just really small amounts, 0.00000001 etc.) because consensus on test coins is already bonkers because of DEx :)
Fair point - let's see what Bitoy says about update frequency so participants don't think they've broken consensus just due to delayed updates - IIRC he's at 30 mins currently.
hero member
Activity: 938
Merit: 1000
I would actually suggest this should be done with real MSC (just really small amounts, 0.00000001 etc.) because consensus on test coins is already bonkers because of DEx :)
sr. member
Activity: 266
Merit: 250
Zathras, let me start with thanking you for the effort you put in the above post. Our approaches are totally different, but I think yours is perhaps the better solution, as it leaves the actual implementation up to the implementor.
Thanks for the patience mate haha - it's such an exciting project to work on but finding enough time is a nightmare atm, as I'm sure it is for you :)

Your example of the transaction that breaks all these rules is actually the one I was talking about as well. I only assumed that, because the transaction is always hand-crafted, the Exodus-sized outputs get preference over the non-Exodus-valued outputs, since it stands to reason the value that is not hand-crafted is not meant to be parsed for the transaction. If that makes sense.
That's actually a really good point and I'm sad to say I didn't consider that. I have a tendency to come at things from a purely engineering point of view and sometimes forget the human aspect of it. Yep, since seqnum is out of the hands of the user and it's only the outputs they can control, I can see how there could be a case for giving same outputs priority over dataseqnum+1 in rare cases of conflict.

How would you guys feel about organising a Simple Send test phase where we encourage users to send weird transactions over the network in order to break consensus. Could we be ready for that now?
I think it would be great. There is a whole bunch of BTC set aside for testing and at the moment, other than us devs, I'm not sure how much other testing is going on - it would be good to get some outside involvement. There's obviously plenty of reward so I don't think we'd have any problem getting volunteers. We'd just seed some test MSC out and tell them to have at it :)
hero member
Activity: 938
Merit: 1000
Zathras, let me start with thanking you for the effort you put in the above post. Our approaches are totally different, but I think yours is perhaps the better solution, as it leaves the actual implementation up to the implementor.

Your example of the transaction that breaks all these rules is actually the one I was talking about as well. I only assumed that, because the transaction is always hand-crafted, the Exodus-sized outputs get preference over the non-Exodus-valued outputs, since it stands to reason the value that is not hand-crafted is not meant to be parsed for the transaction. If that makes sense.

Although since this transaction shouldn't happen anyway I think it would be safe to invalidate it. Could we somehow add a rule to the spec to explicitly say which transactions to invalidate just because they are edge cases? Or should everybody understand that implicitly, since it's not covered by the explanation?


How would you guys feel about organising a Simple Send test phase where we encourage users to send weird transactions over the network in order to break consensus. Could we be ready for that now?
sr. member
Activity: 266
Merit: 250
Simplifying the Class A Model

Sorry this has taken a few days to get my ducks in a row (will be great when I go RBB!), but I wanted to make sure what I'm proposing tests successfully and will give us the outcomes I'm looking for (simplifying Class A & removing ambiguity about whether a transaction is valid or not).

I've been discussing with Tachikoma for a few days, and I think I've now done enough testing to make this official & submit a pull.

What?
This pull is structured to simplify the Class A model by removing a number of potentially confusing rules and replacing them with a single defining rule.

Why?
Class A is only intended to provide backwards compatibility & is not used in any of the respective wallet implementations or for any features other than basic simple send.  Thus the effort being expended on Class A does not correlate with its utility to the project and this pull is intended to put Class A to bed so we can move onto other more useful work.

How?
All of the following rules would be removed from the spec:
Quote
The following conditions must also be satisfied for the transaction to be considered decode-able:

* The reference address sequence number must be the data address sequence number + 1
* Ideally, all outputs should be the same (except the change). In fringe cases where the change output value is equal to the other output values the change address can be identified by sequence number, which must not equal or be +/-1 of the sequence number for either the reference address or the data address
* A last resort 'peek and decode' method may be used to identify the data packet in the event of ambiguity following the above rules. This involves decoding each packet and looking for the correct bytes for a simple send (the majority of bytes in a Class A simple send do not change). These byte checks are defined as:
     + Bytes two to eight must equal 00
     + Byte nine must equal 01 or 02
* Should there still be packet ambiguity or 'peek and decode' reveals more than one packet (simple sends are always one packet) the transaction is considered invalid.
And replaced with:
Quote
* Has exactly two non-exodus outputs (one of which must be the data address) with a value equal to the Exodus output and/or has exactly one output with a sequence number +1 of the data address for reference output identification
EDIT: Qualified that one of the two same valued outputs must be the data address & that the two same valued outputs are in addition to the Exodus output.

A transaction will either satisfy this single rule or not, helping to remove some of the discussion/ambiguity around whether certain transactions that are decodable (eg via P&D) but don't meet one or some of the original rules should be validated.  The rule has been written to cater for all existing transactions and new code based on this simplified logic has undergone a fair degree of testing.
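The single rule above can be sketched as a boolean check. This is a hedged sketch only: the `Output` structure, the satoshi values, and the idea that sequence numbers have already been decoded from the addresses are assumptions for illustration, not spec code.

```python
# Hedged sketch of the single simplified Class A rule; the Output structure and
# pre-decoded seqnums are assumptions for illustration, not spec-defined API.
from dataclasses import dataclass

@dataclass
class Output:
    address: str
    value: int    # output value in satoshis
    seqnum: int   # sequence number decoded from the address

def passes_simplified_rule(outputs, exodus_address, data_address):
    exodus = [o for o in outputs if o.address == exodus_address]
    if len(exodus) != 1:
        return False
    non_exodus = [o for o in outputs if o.address != exodus_address]
    data = [o for o in non_exodus if o.address == data_address]
    if len(data) != 1:
        return False

    # Test 1: exactly two non-Exodus outputs (one being the data address)
    # valued the same as the Exodus output.
    same_valued = [o for o in non_exodus if o.value == exodus[0].value]
    by_value = len(same_valued) == 2 and data[0] in same_valued

    # Test 2: exactly one output with seqnum equal to dataseqnum + 1.
    by_seqnum = sum(o.seqnum == data[0].seqnum + 1 for o in non_exodus) == 1

    return by_value or by_seqnum
```

Note that this check alone does not detect the rare ambiguous case where the two tests identify *different* reference addresses; that still needs the explicit invalidation note described below.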

This is not a change to the underlying structure of Class A and more a consolidation of wording/logic such that we have a clear interpretation of what is and what isn't a valid Class A transaction.  This also does not enforce a method for testing against the rule - the rule is intended to be clear enough that no specific approach needs to be defined.

I have already rewritten the Masterchest library to utilize this logic instead for the purposes of testing (source). This has resulted in a big reduction & simplification of the Class A parsing code. I've run the modified library against a full blockscan in the engine, and can confirm that all transactions validated by the original complex rule implementation have also been validated by the simplified model. Additionally, transactions that were under active discussion between Tachikoma & me - which were decodable via P&D but technically invalid per spec when considered against the original ruleset - are now validated via the new single rule.

While trying to break this simplification I was able to identify an edge case of an incorrectly structured transaction that would cause ambiguity through this rule.  Essentially same output values provide one reference address while dataseqnum+1 provides a different reference address.  This edge case exists just the same in the original ruleset and has never been used in a transaction (that I could locate, happy to be corrected) - an example of such an incorrectly built transaction is as follows:

Code:
SEQNUM      ADDRESS             OUTPUT VALUE
101         ChangeAddress       0.05 BTC
100         DataAddress         0.00006 BTC
72          ReferenceAddress    0.00006 BTC
148         ExodusAddress       0.00006 BTC

As you can see, the seqnum for the reference address should be 101 following dataseqnum+1, but that's actually the seqnum of what would be the change address following same output values. For this specific case (where same outputs and dataseqnum+1 provide different ref addresses), in order to allow it we would need to set an order of precedence on whether same outputs or dataseqnum+1 is authoritative, and once we start down the road of deciding which rule is more important than another we head back into complexity territory. As I say, this is not a 'new' edge case specific to the simplification, but out of all the edge cases I looked at I think it is the one case that this simplification does not solve. This is the very definition of ambiguity on the reference address, so if it ever does appear on the network it should be considered invalid IMO.
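To make the conflict concrete, here is a minimal sketch of the two identification rules disagreeing on the transaction above (the dictionary field names are illustrative, not spec-defined):

```python
# The edge-case transaction from the table above, with values in satoshis.
outputs = [
    {"seqnum": 101, "address": "ChangeAddress",    "value": 5_000_000},  # 0.05 BTC
    {"seqnum": 100, "address": "DataAddress",      "value": 6_000},      # 0.00006 BTC
    {"seqnum": 72,  "address": "ReferenceAddress", "value": 6_000},
    {"seqnum": 148, "address": "ExodusAddress",    "value": 6_000},
]
data_seq = 100
exodus_value = 6_000

# Rule 1: same output value as Exodus picks one candidate...
by_value = [o["address"] for o in outputs
            if o["value"] == exodus_value
            and o["address"] not in ("DataAddress", "ExodusAddress")]

# Rule 2: ...while dataseqnum + 1 picks a different one.
by_seqnum = [o["address"] for o in outputs if o["seqnum"] == data_seq + 1]

print(by_value)   # ['ReferenceAddress']
print(by_seqnum)  # ['ChangeAddress']
assert by_value != by_seqnum  # ambiguous -> transaction considered invalid
```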

I have added a line in the pull to be explicit about this if we find an edge case is broadcast that results in ambiguity - "NOTE: Should a transaction result in an edge case that appears to follow the rules above but fails to exclusively and without ambiguity identify the sending, reference and data addresses the transaction is considered invalid."

TL;DR: Since this simplification caters not only for properly formatted transactions but also for every single edge case that has ever been broadcast for a Mastercoin transaction, with just one single sentence - my own view is we should merge, put Class A to bed now and focus our efforts elsewhere.

To help with considering this pull, I have hooked up dev to prod to allow the consensus check to run against dev.  You can see the results of the simplified logic here (https://masterchest.info/consensus_masterchestinfodev.aspx).  Spoiler alert...



Shame about Exodus, I thought we had that locked up during the hackathon? Other than that though, I recall offering a round of beers when we hit consensus - given our locations though, you're all going to just have to buy yourself a beer and bill me for it :P

EDIT: Hmmm, how bizarre. One of the changes we made during the hackathon (seconds in a year from 31557600 to 31556926) is MIA. Put it back in, and we're back in sync on Exodus :) though Bitoy's is still a bit off. I'd like to hear about update frequency from Grazcoin & Bitoy too if you can guys, as I've noticed consensus seems to slip in and out depending on how recently the implementations update. Consensus runs every 5 mins FYI.

Pull request is here https://github.com/mastercoin-MSC/spec/pull/36, please -1/+1 and feel free to comment/ask any questions.

Thanks Smiley
Zathras

P.S. Tachikoma, quick FYI: it took some hunting to isolate, but I squished the bug that was causing a second row for an address (1LjT88X7Zu8BdbqJw8vfRa83NJuzYL9kqm) in the balances table.
newbie
Activity: 21
Merit: 0
I've commented as well.
sr. member
Activity: 284
Merit: 250
hero member
Activity: 938
Merit: 1000
No that's MST, this is MSC.
hero member
Activity: 938
Merit: 1000
hero member
Activity: 938
Merit: 1000
I think we should ignore the consensus for sites whose verification API is down; it makes the most sense to me.

I will be very busy this week so I won't have much time to go into coding a whole lot, but if I find some time I will most likely be working on implementing the extended verification API for DEx. I think we need to extend the consensus to transactions soon to make it easier to find discrepancies in the DEx code.

Edit: Wow I missed a page, reading back.

Edit 2: Haha, tuning can mess with stuff indeed! I should really create time to do a full parse as well. I haven't in a while... /me adds that to the todo.
sr. member
Activity: 266
Merit: 250
Bug is fixed.  

(I tried to update the API so that it checks for new transactions every time consensus queries it.)

Zathras,

Thanks for the advice regarding customErrors. (I created a generic error page to catch future bugs =)
Perfect, thanks bud.  Seems Masterchain is behind now...  

What are your scan times guys? Bitoy, I think you mentioned 30 mins - of course it's up to you, but since that's (on average) 3 blocks you may spend most of your time a couple of blocks behind. I can't speak for blockchain.info but I doubt they're going to be concerned about you polling their API every 5 mins or so - I'd posit there are many services out there already polling them much quicker than that, but as I say I'm no authority on the matter :)

Here you go, bottom of https://blockchain.info/api. If you look for new transactions every 5 minutes, then unless there are 725 Mastercoin transactions you need to grab you should be fine. Also these limits apparently don't apply if you use an API key.

Requests in 8 Hours: 4 (Soft Limit = 28800, Hard Limit = 28900)
Requests in 5 minutes: 4 (Soft Limit = 700, Hard Limit = 725)

Hope that helps :)
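As a quick sanity check on those numbers (assuming one API request per poll, which is an assumption - grabbing many transactions may cost extra requests):

```python
# Rough arithmetic against the blockchain.info limits quoted above; one
# request per poll is an assumption - fetching many transactions costs more.
SOFT_LIMIT_5_MIN = 700
SOFT_LIMIT_8_HOURS = 28_800
POLL_INTERVAL_MINUTES = 5

requests_per_5_min = 1                                   # one poll per window
requests_per_8_hours = 8 * 60 // POLL_INTERVAL_MINUTES   # 96 polls

print(requests_per_5_min <= SOFT_LIMIT_5_MIN)      # True
print(requests_per_8_hours <= SOFT_LIMIT_8_HOURS)  # True
```

So a 5-minute scan interval sits orders of magnitude below both soft limits.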
sr. member
Activity: 266
Merit: 250
Bug is fixed.   

(I tried to update the API so that it checks for new transactions every time consensus queries it.)

Zathras,

Thanks for the advice regarding customErrors. (I created a generic error page to catch future bugs =)
Perfect, thanks bud.  Seems Masterchain is behind now... 

What are your scan times guys? Bitoy, I think you mentioned 30 mins - of course it's up to you, but since that's (on average) 3 blocks you may spend most of your time a couple of blocks behind. I can't speak for blockchain.info but I doubt they're going to be concerned about you polling their API every 5 mins or so - I'd posit there are many services out there already polling them much quicker than that, but as I say I'm no authority on the matter :)
sr. member
Activity: 449
Merit: 250
Bug is fixed.   

(I tried to update the API so that it checks for new transactions every time consensus queries it.)


Zathras,

Thanks for the advice regarding customErrors. (I created a generic error page to catch future bugs =)


sr. member
Activity: 266
Merit: 250
Zathras: Any details on the new parsing engine? Did it work?

Class A offline analysis looks good :)  I broke multisig though haha :o

Also, pushing into the various transaction edge cases I did find one that wasn't satisfied (though it has never been used in a transaction) - I'm hoping to exclude it if all agree. More details to come; I'd like to give you guys a holistic overview of applying this simplification, but without being able to run consensus testing I can only compare against my own original implementation, and I'd like to present the whole picture.

To be continued! ...

Thanks
Zathras

EDIT: Couple of quick performance wins too:
  • Removed input checking from the ismastercointx function, as this function really only needs to identify whether it's a Mastercoin transaction and what type, not where it came from. Since this removes the recursion of looking up transactions for each vin, it speeds blockchain scanning up a lot.
  • Also found AVG runs a process called AVGNSA (Network Shield Agent) that intercepts and proxies all network traffic (aptly named, huh?). This was slowing down my calls to bitcoind's RPC server by over 50%. That's something I'm going to have to consider for release.

EDIT2: Grrrr! Lol
OK so it wasn't the Class A simplification that broke my multisig code, it was the performance tuning to remove input checking from 'ismastercointx'. That now returns 'none' for multisigs, because without the sending address we can't decrypt & decode the data packet(s) to analyze the type of transaction. I'm going to have to roll back the input recursion performance improvement (no biggy) and come at this a different way. Address watch-only functionality in bitcoind would be a perfect fit (99% of the work is already done by BDB, so scanning would only take a few secs), but I've been following that pull since I started developing for Mastercoin several months ago and it's not looking like moving quickly :(

EDIT3: Compromise :P
Hehe. So I've taken the middle ground here. ismastercointx has been rewritten to check the inputs only on multisig transactions containing an Exodus output. This gives me most of the performance benefit without breaking multisig. Running another full blockchain scan now while we wait for Bitoy :)
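A rough sketch of that compromise (the dispatch shape, helper functions and transaction layout are my own illustration, not the Masterchest code):

```python
# Sketch of the compromise: the expensive input recursion is only taken for
# multisig transactions that also carry an Exodus output. The helpers are
# injected because their implementations are out of scope here.
EXODUS_ADDRESS = "1EXoDusjGwvnjZUyKkxZ4UHEf77z6A5S4P"  # mainnet Exodus address

def is_mastercoin_tx(tx, resolve_sender, decode_multisig, classify_class_a):
    outputs = tx["outputs"]
    if not any(o["address"] == EXODUS_ADDRESS for o in outputs):
        return None  # no Exodus output -> not a Mastercoin transaction

    if any(o.get("type") == "multisig" for o in outputs):
        # Class B: data packets are obfuscated against the sender's address,
        # so the slow input lookup cannot be avoided for these transactions.
        sender = resolve_sender(tx)
        return decode_multisig(tx, sender)

    # Class A: the type is identifiable from outputs alone; skip input recursion.
    return classify_class_a(tx)
```

The point is simply that the cheap output-only path handles the common case, and the recursive vin lookup is paid only when multisig decoding genuinely needs the sender.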
sr. member
Activity: 266
Merit: 250
Consensus system going offline temporarily while I use it to analyse consensus of the uber-simplified Class A model.

Should be about 15 - 30 mins.

Thanks
Zathras

Typical luck...  Bitoy - your verification API is down.  I'm bringing consensus back to normal and will re-run testing when Bitoy resolves the issue.

P.S. Bitoy - just a suggestion, but your security config needs work mate (lock it down in web.config). For example, you're exposing debugging publicly, which enables nefarious parties to look for exploits in your code at runtime - lock it down with customErrors="RemoteOnly" so you can only see debugging if you're local to the box. If you need any advice please just shout :)



This also brought me to a bug in the consensus checking code: if one of you guys is down and doesn't return any data, the check is aborted and the state remains as per the last consensus check for all implementations, which makes it look like we still have consensus when we don't. I'll trap those exceptions and enable the system to adapt to failures in the various implementations providing consensus data.

Tachikoma - what are your views on scoring? For example, if I modify the code to adapt to one of the implementations being down, it would then do the comparison against the remaining three implementations. In this scenario, if those remaining three agree, do you think our consensus score should remain at 100%, or should we use last-known values for the failed implementation (which will definitely cause a drop in the consensus score)? I see value in both approaches.
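For what it's worth, the difference between the two options can be sketched like this (the per-address scoring, balances and implementation names are illustrative assumptions, not the real consensus code):

```python
# Illustrative consensus scoring: percentage of addresses on which all
# reporting implementations agree. Names and balances are made up.
def consensus_score(balances_by_impl):
    """balances_by_impl: {impl_name: {address: balance}}"""
    impls = list(balances_by_impl.values())
    addresses = set().union(*impls)  # union of all address keys
    agreeing = sum(1 for a in addresses
                   if len({impl.get(a) for impl in impls}) == 1)
    return 100.0 * agreeing / len(addresses)

live = {"masterchest":         {"1abc": 10, "1def": 4},
        "mastercoin_explorer": {"1abc": 10, "1def": 4},
        "masterchain":         {"1abc": 10, "1def": 4}}

# Option 1: skip the down implementation entirely -> compare only live ones.
print(consensus_score(live))        # 100.0

# Option 2: include last-known (stale) values from the failed implementation.
with_stale = dict(live, offline_impl={"1abc": 10, "1def": 3})
print(consensus_score(with_stale))  # 50.0
```

Option 1 keeps the score meaningful for the implementations that are actually reachable; option 2 surfaces that one data source has gone dark, at the cost of a score drop that isn't a real disagreement.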

Thanks
Zathras

sr. member
Activity: 266
Merit: 250
Consensus system going offline temporarily while I use it to analyse consensus of the uber-simplified Class A model.

Should be about 15 - 30 mins.

Thanks
Zathras
sr. member
Activity: 284
Merit: 250
Graz: I noticed you fixed the rounding error; what was it in the end?

Zathras: Any details on the new parsing engine? Did it work?

Almost at 100% guys :}

I was using a different number of total reward mastercoins (ending with ...6245 instead of ...6222). I missed it during the online meeting session as the number is calculated dynamically (not hard-coded like in your implementation).
A hotfix was to hard-code it, but I still have to understand why I got a different number.

But:

I have noticed *another* rounding difference that happens less often. Today at one point you rounded the last digit to ...8 and I rounded it to ...7.
Normally it would mean a temporary consensus loss for one block, but if by chance Exodus sends all funds (e.g. the number you suggested that ends with digit 8), my implementation will invalidate the tx since not enough funds are there. Then it is a permanent consensus loss. The chance of this is close to zero, but it should be fixed.

Conclusion: we have to define the rounding rules *exactly*.
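As an illustration of how two implementations can legitimately disagree on the last digit when the rounding rule isn't pinned down (the intermediate value and the two modes below are made up for the example, not the actual ones in use):

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_EVEN

WILLET = Decimal("0.00000001")  # 8 decimal places, the smallest Mastercoin unit

# Hypothetical unrounded intermediate amount - not a real on-chain value.
intermediate = Decimal("1.000000878")

truncated = intermediate.quantize(WILLET, rounding=ROUND_DOWN)       # 1.00000087
half_even = intermediate.quantize(WILLET, rounding=ROUND_HALF_EVEN)  # 1.00000088

# Same input, two defensible rounding rules, last digit ...7 vs ...8 - exactly
# the kind of discrepancy the spec needs to rule out.
print(truncated, half_even)
```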
