
Topic: Unmixing JoinMarket Transactions (Read 1660 times)

newbie
Activity: 6
Merit: 0
September 22, 2016, 12:11:40 AM
#12
That's a good idea. Thanks. I will add it to the wishlist.
sr. member
Activity: 261
Merit: 523
September 21, 2016, 08:41:52 AM
#11
Would you be able to plot against time what proportion of the coinjoins don't all have outputs that are coinjoins (i.e. you can unmix them).

I wonder if the high total ratio (~50%) is because of the Joinmarket-Qt GUI application, which would mean your graph would rise around the date it was released.
newbie
Activity: 6
Merit: 0
September 15, 2016, 01:34:14 PM
#10
I'm currently considering writing a JM-Unmixer web-explorer, to let anybody check their txs easily, without having to run the tool.

This can also prove important if users decide to provide feedback (pointing out false positives and negatives), which can lead to improving the tool.

I want to do it, but it might be hard for me to find the time for that...  If anybody is interested in jumping aboard and helping me develop the explorer, please mail me.  I will definitely do it if I get help.
newbie
Activity: 6
Merit: 0
September 15, 2016, 01:24:05 PM
#9
waxwing, at this point in the discussion it is pretty clear that I tend to focus more on the practical state of the project, and you on the theoretical problem: what JM can do, and what analysis tools can and cannot do.  Both are interesting problems.

I am personally more interested in the practical aspect, which led me to writing this tool.  The reason for this is that people who want their coins mixed actually do use JM today -- there are more than 15000 jmtxs in the blockchain to date.  I'm sure a significant number of them are unaware of what level of effectiveness is reasonable to expect today.

Re false positives: I completely agree my tool, like any other blockchain analysis tool, has false positives.  (Therefore, I agree my 54% figure is optimistic, and since your estimate was 40%, I'm willing to assume the current real success rate is closer to 40% than to 54%.)

From a purely theoretical point of view, blockchain analysis tools (in general) just cannot work, ever, because anybody can fool them: since all they can do is follow heuristics, and heuristics can be fooled, they are (in theory) useless.  But in practice, analysis tools should aim to work on the typical usages.  In many cases the typical usages are the vast majority of usages, because they are the product of a small number of tools (in the end, a small number of tools end up being used by the majority of users).  In our case, this means that (again, in practice) the majority (or maybe even the vast majority) of JM txs (and the links between them) look pretty much the same, because most people just install the tool and use it naively.  This point would only become more pronounced as the project becomes more popular (suppose it had 100k active users/month -- how many of them would really understand the nuances and potential attack vectors?).

My point about blockchain analysis tools is this: every analysis tool has false positives, and false positives are not inherently worse than false negatives. If a tool has a real success rate of p (0<=p<=1), no one cares what it wrongly thinks about the other 1-p cases, especially as p grows closer to 1.  As a user, you know the probability of being "violated" by the tool is p (or, actually, you can only estimate p, because the real success rate is not known), and you need to consider how that makes you feel before deciding to use the tool.

Now, to delve more into the point of false positives wrt my tool, let's consider the three major potential causes of false positives:

1. hard-to-detect JM txs: I believe detecting which txs are JM txs is an easy problem.  I believe my tool has a very high success rate in doing that, considering both false positives and false negatives (I have no way to verify this, but I looked in depth into many txs while writing it, hence my confidence).  (There are very likely bugs at present, but they can be fixed as they are detected.)
2. by-chance and deliberate false positives: since I focus on the practical, allow me to rule this out as being purely theoretical. If they exist, I'm going to assume there aren't enough of them to affect the results in a significant way.
3. role switching: I think that, in practice, this is the only real major source of false positives.

As I understand it, the problem of role switching (and I might be mistaken here, or at least over-simplifying) is that the more roles are switched, the harder it is for tools like mine to find anything.  The ideal role-switching rate is 50% -- ideally, any participant should do 50% making and 50% taking.  This, of course, cannot be achieved (1. it breaks the basic incentive system, and 2. by definition there must be much more total making than taking).  So if we assume for simplicity that the average maker does 80% making, that still leaves 80% of his txs exploitable (the other 20% would probably cause false positives).  Also, a taker has to hope that all the participating makers will switch roles in their following txs, so the analysis breaks on his particular tx. If they don't (and the chances are they don't), his tx is vulnerable.
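To make the role-switching argument concrete, here is a rough back-of-envelope model (my own simplification for this post, not output of the tool): if each maker independently keeps the maker role in its next spend with probability p_stay, the chance that every maker in an n-maker join stays put -- so that none of their cj outputs pollutes the analysis -- is p_stay to the power n:

```python
def fully_exploitable_prob(n_makers, p_stay=0.8):
    """Probability that all n makers keep the maker role in their
    next spend, leaving the taker's cj output fully isolatable.
    Assumes independence between makers; the 0.8 default is the
    illustrative 80% figure assumed above, not a measured value."""
    return p_stay ** n_makers

# With the 80% assumption, a 4-maker join stays fully
# exploitable with probability 0.8 ** 4, i.e. about 0.41.
p4 = fully_exploitable_prob(4)
```

Note the model cuts both ways: per-tx exploitability drops quickly with more makers, but the taker cannot choose whether her makers will switch roles later.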

Re how to treat unspent cjouts: that's a good point. The tool currently treats them as "unexploitable"/"failed to unmix" (which is the conservative approach).  I might improve it to treat them as outputs which "might be exploitable in the future", which would actually give a better estimated success rate.
sr. member
Activity: 469
Merit: 253
September 12, 2016, 06:09:33 AM
#8
An additional point to make is about assumptions.

Consider that blockchain analysis pretty much started with what Meiklejohn et al. called "Heuristic 1", which I guess should really be attributed to Satoshi:

"All inputs have the same owner."

The "first cut" of blockchain analysis is just that: clustering ownership based on that assumption. That's mostly what walletexplorer.com was/is all about, and although I don't pretend to know what other additional analysis it does, that's clearly the backbone of it.

Now, coinjoin breaks it of course, and for this reason I always say to people: "I use sendpayment for retail payments specifically to break automated clustering analysis". People who have got as far as the kind of thinking in this thread may counter: "well, but that's nearly useless since the Taker/Maker role behaviour distinction makes it almost always easy to isolate your output".

But, again, assumptions: if you're going to have automated, large scale wallet analysis, are you comfortable to start putting in a whole raft of assumptions about:

1. what is and is not a joinmarket transaction
2. that a person taking Taker role in TX1 is not taking maker role in TX2
3. the right way to handle cases where certain outputs have not yet been spent

etc etc. It is messy, and it gets messier the more different ways of operation exist; other coinjoin implementations than joinmarket, by-chance false positives, deliberate false positives, mixed role bots etc etc.

Something that Greg Maxwell has always been at pains to point out during discussions: there is no fixed "fact" about a coinjoin pattern once heuristic 1 is dropped; even what looks like it couldn't possibly be a coinjoin, e.g. inputs (1, 3), outputs (3, 1), could actually be a payment of 2 bitcoins from party A to party B, or two payments: 3 btc A->C and 1 btc B->D. Neither of these is likely today, but ... are they? How do we know?

Joinmarket's motivating philosophy is a bit "worse is better" (see gwern's essay "Bitcoin Is Worse Is Better") - that usage is what counts, because the set of near-watertight assumptions splinters and, in the most optimistic future, eventually collapses.
sr. member
Activity: 469
Merit: 253
September 11, 2016, 06:51:16 AM
#7
I now realize I probably wasted too many words describing things which are already known (the principle and technique), and too little on the results.

Perhaps it would be a good idea at this point to re-emphasize the main message (which might have been lost in my long post):



I wrote a very simple tool for unmixing JM Txs.  Despite its simplicity (and the fact that it is just a prototype-quality tool), it still achieves a surprisingly high success rate of 54%!  Approximately 1/4 of all the JM Txs it found got unmixed fully!

One more thing to stress is the simplicity of the tool.  Unlike the impressive analysis by Adam Gibson (which I wasn't aware of before), my tool isn't concerned with most of the "advanced" JM details (like wallet structure, tumbler algorithm, sendpayment.py, JM Tx types, etc.).  And, still, its success rate is considerable.  For this reason I believe achieving even better rates is possible with some work.



So that is the current state of the JM world as reflected on the blockchain.  I find it disturbing.  I think all users should know about its expected effectiveness before deciding to use JM (and I'd recommend they also check how effective their past JM Txs have been).

I don't know if any of this is new or surprising to JM developers and experts, but we should make sure it is known to all users.  Transparency is vital here.

waxwing,

> The only solutions are more mixing of roles and/or more activity overall.

How would you incentivize makers to play as takers often enough (or at all)? Doing that can destroy most of their profit.  Furthermore, the taker has to trust that the makers she chooses will actually do that after their joint JMTx.  She basically pays them, and hopes for the best...

Regarding more activity overall: can you explain how it can help?  As I see it, the same technique which works on 100 txs would also work on 100,000,000 txs.


I want to get into this more, and I'm sorry I haven't engaged further in this discussion over the past week, I intend to - at the moment I'm overwhelmed getting the (very large change) new release of Joinmarket out. I think your 54% figure is not a completely black and white thing, and the arguments about mixing roles are more complicated than a simple "well it would cost to do that". But, the "hopes for the best" comment is rather spot on today, that much is fairly clear. Well, I will come back to this later, and thanks for doing this work, it's much needed imo.

Edit: well, I've pushed the release now, and just waiting, so why not get into it more Smiley

First, I can understand that my first reply seems offhand and dismissive towards what is after all quite a serious piece of analytical work. Sorry if it came off like that, but let me expand my perspective a bit:

Re: the 54% figure: I remember when I first looked at your finding, my thought process was something like: well, given that sendpayment will almost always "fall" to this "attack" in today's environment, I would have probably ball-parked 40% of joins would fit that pattern (basically all the isolated sendpayments, however many there are, plus *some* tumbler individual joins - that's complicated). That it's as high as 54% is a little surprising to me; but it might be just that a larger fraction of joinmarket usage is sendpayment than I thought; wouldn't be amazing. There are probably other things to think about - false positives is a tricky one, there is at least one wallet that at one point was explicitly planning to create fake JM style transactions, and in earlier investigations using cjhunt (just messing around with it) I noticed a substantial class of small-ish JM-looking txs that weren't actually JM. Now, having said that, I'm assuming your analysis tool is not likely to be fooled by such false positives, since you're looking at connections between JM txs; but it's something one would want to sanity check (e.g. it's not impossible that JM-looking-but-not-actually-JM txs might be chained together).

So the practical point, that without any role-mixing this method picks up ~1/2 of all taker cjouts as currently used, is certainly worthy of note, although it isn't far at all from what I'd expect, hence my "of course" at the start. There's probably also an element of wishful thinking: people treat coinjoin as a black box and don't delve into the subtleties.

Re: scaling up and whether it makes a difference.

You're right, it's irrelevant in the base case: where roles are 100% separate and only isolated coinjoins occur, there is no difference. But to the extent that Makers don't only act as makers -- whether through explicitly mixed patientsendpayment, or by defunding wallets, or by simply choosing to act as Takers -- and also to the extent that Takers use more complex behaviour patterns as in the tumbler, I think there will be a compounding effect of larger activity with role-mixing (see e.g. here and the previous section).

Re: economics, why would a Maker switch roles

I don't think there's anything watertight about the argument that Makers won't vary roles because it costs them money; there is no reason to suppose the set of people who get value from the Taker role is disjoint from the set of people who get value from the Maker role; in fact the opposite seems quite plausible. But still it's an important observation to make, I accept that much (that there might be some 'role stickiness').

newbie
Activity: 6
Merit: 0
September 08, 2016, 05:30:00 AM
#6

If sendpayment supported multiple destination addresses would that make the maker/taker analysis more difficult/inaccurate/impossible?

It would make it computationally harder, and possibly infeasible in some cases (though not all; it depends on how many outputs are included), because the underlying matching problem is NP-complete.
That might sound like a reasonable solution, but it has its costs: the taker will have to pay higher miner fees (the tx will be larger), and it will hurt overall usability, because it is harder to avoid the links created by re-joining outputs (in later txs) when you have many addresses with low balances.


The recent research paper with the proof said that transactions with large parties (10+ parties) were more difficult to analyze. How does your tool do with that type of JMtx?

From the few examples I examined in depth: not bad at all!  I will look up a few examples and post its output on them.

EDIT:
I found seven 12-party txs which the tool had no problem processing.  I added sample outputs to github. E.g. take a look at this tx: https://github.com/fungibit/jm_unmixer/blob/master/outputs/jmtx_43433788800da4c272acaf89050c5256ed5a545517e345fefd8fc3ebe68ba9a4.txt

Note that the number of inputs (and outputs) has a greater effect on difficulty than the number of parties.  E.g. a 10-party tx with 10 inputs is very easy to process, while a 5-party tx with 20 inputs can be very difficult.
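The reason input count dominates is the search space: in the worst case, a subset-sum matcher may have to test every non-empty subset of the inputs, which is 2^n - 1 regardless of the party count. A quick illustration (the helper name is mine, not from the tool):

```python
def worst_case_subsets(n_inputs):
    """Worst-case number of non-empty input subsets a subset-sum
    matcher may have to test: 2^n - 1. This grows with the number
    of inputs, not with the number of parties."""
    return 2 ** n_inputs - 1

# A 10-party tx with 10 inputs (one input per party): at most
# 2^10 - 1 = 1023 subsets to check.
# A 5-party tx with 20 inputs: up to 2^20 - 1 = 1048575 subsets.
```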
newbie
Activity: 6
Merit: 0
September 08, 2016, 05:19:39 AM
#5
I now realize I probably wasted too many words describing things which are already known (the principle and technique), and too little on the results.

Perhaps it would be a good idea at this point to re-emphasize the main message (which might have been lost in my long post):



I wrote a very simple tool for unmixing JM Txs.  Despite its simplicity (and the fact that it is just a prototype-quality tool), it still achieves a surprisingly high success rate of 54%!  Approximately 1/4 of all the JM Txs it found got unmixed fully!

One more thing to stress is the simplicity of the tool.  Unlike the impressive analysis by Adam Gibson (which I wasn't aware of before), my tool isn't concerned with most of the "advanced" JM details (like wallet structure, tumbler algorithm, sendpayment.py, JM Tx types, etc.).  And, still, its success rate is considerable.  For this reason I believe achieving even better rates is possible with some work.



So that is the current state of the JM world as reflected on the blockchain.  I find it disturbing.  I think all users should know about its expected effectiveness before deciding to use JM (and I'd recommend they also check how effective their past JM Txs have been).

I don't know if any of this is new or surprising to JM developers and experts, but we should make sure it is known to all users.  Transparency is vital here.

waxwing,

> The only solutions are more mixing of roles and/or more activity overall.

How would you incentivize makers to play as takers often enough (or at all)? Doing that can destroy most of their profit.  Furthermore, the taker has to trust that the makers she chooses will actually do that after their joint JMTx.  She basically pays them, and hopes for the best...

Regarding more activity overall: can you explain how it can help?  As I see it, the same technique which works on 100 txs would also work on 100,000,000 txs.



legendary
Activity: 2940
Merit: 1865
September 07, 2016, 01:28:32 PM
#4
...

First a quick disclosure: I do not program.

Any new ideas and algorithms to mix BTC (and unmix it) are a good thing; they improve privacy.  Differing mixing techniques strengthen privacy.

Putting code like yours (fungibit) out for all to see inspires trust, and pushes conscientious mixers to improve their mixing services.

Bravo.
newbie
Activity: 1
Merit: 0
September 07, 2016, 12:45:24 PM
#3

TX1 = 7e2345ec6b8d0c17111e1af33a6f52476ed9e41ec257fe9f7cc56257309b68f9 -- 5-party, cj_amount=1.993 btc
[TAKER]
  IN :         1.50917279  OUT:         1.99262186
  IN :         0.01928769                         
  IN :         0.08006194                         
  IN :         0.13664522                         
  IN :         0.14481342                         
  IN :         0.10456104                         
  (JMFEE=0.00038236)                         
  (TXFEE=0.00153788)                         
[MAKER 0]
  IN :         3.90033161  OUT:         1.99262186
                           OUT:         1.90771975
                           (JMFEE=0.00001000)
[MAKER 1]
  IN :         5.52946225  OUT:         1.99262186
                           OUT:         3.53694002
                           (JMFEE=0.00009963)
[MAKER 2]
  IN :         0.69184000  OUT:         1.99262186
  IN :         1.46566551  OUT:         0.16490749
                           (JMFEE=0.00002384)
[MAKER 3]
  IN :         0.58763935  OUT:         1.99262186
  IN :         1.53214468  OUT:         0.12741106
                           (JMFEE=0.00024889)

If sendpayment supported multiple destination addresses would that make the maker/taker analysis more difficult/inaccurate/impossible?

The recent research paper with the proof said that transactions with large parties (10+ parties) were more difficult to analyze. How does your tool do with that type of JMtx?

JoinMarket as a project acknowledges that a single JM transaction can potentially be unmixed, and easily. Using the tumbler should help a lot, though no amount of CoinJoins can ever guarantee the link is broken. The input is always linked to the output. I hope JoinMarket explores options that truly break the link. A CoinJoin/CoinSwap hybrid or multi-layer onion transactions, maybe.
sr. member
Activity: 469
Merit: 253
September 06, 2016, 02:55:11 PM
#2
Of course; that's why the tumbler script exists. A single coinjoin serves only to confuse automated wallet-closure analysis, and to generally improve the health of Bitcoin's privacy, inasmuch as every transaction with more than one owner of the inputs degrades such analysis.

Whenever possible we have tried to make this clear. I do sendpayment single coinjoins for the above reasons; but note that because I use the same wallet as a Maker too, there may be some actual confusion about which outputs are of "Taker" type, even in the case of these single isolated joins.

>I think the conclusion from this is that the very idea which distinguishes JM (operating a market with maker and taker roles), is also what exposes it to exploitation (the inherent role asymmetry can be exploited).

This is absolutely true, I agree, as a high level analysis. The only solutions are more mixing of roles and/or more activity overall.

Edit: this was only a response to the overall idea; but I should look at your data analysis, that seems like really interesting work, thanks.

Second edit: by the way have you read https://github.com/AdamISZ/JMPrivacyAnalysis/blob/master/tumbler_privacy.md ?
newbie
Activity: 6
Merit: 0
September 06, 2016, 02:19:04 PM
#1
Hello, everybody.
My first post here  Shocked

I wrote a simple tool for unmixing JoinMarket transactions.

Description and example are included below.

I believe this vulnerability is more alarming than issue #156, where the attacker has to actively participate in the market, and can only unmix transactions involving the makers she manages to get exposed to.

I think the conclusion from this is that the very idea which distinguishes JM (operating a market with maker and taker roles) is also what exposes it to exploitation: the inherent role asymmetry can be exploited.

THE IDEA

First, given a JM tx, it is relatively easy for an observer to identify each input and each change-output as "maker" or "taker", by matching their amounts. The example below should make it clear how this is done. It is also easy to identify JM txs in the first place (they are the transactions for which this matching algorithm fully succeeds). This alone reveals nothing about the CJ-amount outputs, of course, only about inputs and change-outputs.

Second, makers often spend utxos from JM txs they participate in within other, later JM txs.

That is a link.  If we know a certain input in a JM tx, TX2, belongs to a maker of TX2, and it spends an output from another JM tx, TX1, then it is very likely that that output of TX1 belongs to a maker of TX1. If that output's amount equals TX1's CJ-amount, then we have partially unmixed TX1.
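The amount-matching step is essentially a subset-sum search: for each change output, look for a subset of inputs whose sum is (up to a small fee margin) equal to cj_amount + change. A minimal sketch of the idea; the function names and the tolerance value are mine, not taken from the tool:

```python
from itertools import chain, combinations

def subsets(amounts):
    """All non-empty subsets of a list of input amounts."""
    return chain.from_iterable(
        combinations(amounts, r) for r in range(1, len(amounts) + 1))

def match_party(inputs, cj_amount, change, tol=0.001):
    """Find a subset of inputs funding one cj-amount output plus one
    change output, within a tolerance that absorbs JM and miner fees.
    Returns the matching input subset, or None if none matches."""
    target = cj_amount + change
    for sub in subsets(inputs):
        if abs(sum(sub) - target) <= tol:
            return sub
    return None

# Using MAKER 0's amounts from TX1 in the example below
# (one input, one cj output, one change output):
maker0 = match_party([3.90033161], 1.99262186, 1.90771975)
```

Repeating this for each change output partitions the inputs into parties; the leftover inputs, whose sum exceeds their matched outputs by the fees, belong to the taker.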

EXAMPLE

Consider the following two JM txs, and the corresponding outputs of the amount-matching algorithm:


TX1 = 7e2345ec6b8d0c17111e1af33a6f52476ed9e41ec257fe9f7cc56257309b68f9 -- 5-party, cj_amount=1.993 btc
[TAKER]
  IN :         1.50917279  OUT:         1.99262186
  IN :         0.01928769                         
  IN :         0.08006194                         
  IN :         0.13664522                         
  IN :         0.14481342                         
  IN :         0.10456104                         
  (JMFEE=0.00038236)                         
  (TXFEE=0.00153788)                         
[MAKER 0]
  IN :         3.90033161  OUT:         1.99262186
                           OUT:         1.90771975
                           (JMFEE=0.00001000)
[MAKER 1]
  IN :         5.52946225  OUT:         1.99262186
                           OUT:         3.53694002
                           (JMFEE=0.00009963)
[MAKER 2]
  IN :         0.69184000  OUT:         1.99262186
  IN :         1.46566551  OUT:         0.16490749
                           (JMFEE=0.00002384)
[MAKER 3]
  IN :         0.58763935  OUT:         1.99262186
  IN :         1.53214468  OUT:         0.12741106
                           (JMFEE=0.00024889)

TX2 = 7e08c9f1c310510b00ac594a5e1e68d1b4a80ec3d73994d1e3a860431c14e4f3 -- 3-party, cj_amount=1.386 btc
[TAKER]
  IN :         0.67294632  OUT:         1.38576861
  IN :         0.71351357                         
  (JMFEE=0.00005122)                         
  (TXFEE=0.00064006)                         
[MAKER 0]
  IN :     *** 1.99262186  OUT:         1.38576861
                           OUT:         0.60687947
                           (JMFEE=0.00002622)
[MAKER 1]
  IN :         0.92612002  OUT:         1.38576861
  IN :         0.96325620  OUT:         0.50363261
                           (JMFEE=0.00002500)


The "MAKER 0" input marked with *** in TX2 (1.992 btc) spends an output from TX1 (address 1FdiMKdqfTrb3WUW7Q4LBsPXytcDezTnJH).
From this, we can conclude, with high probability, that address 1FdiMKdqfTrb3WUW7Q4LBsPXytcDezTnJH belongs to a maker of TX1, and not to the taker.
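The linking step itself can be sketched in a few lines (the data shapes and names here are hypothetical, not the repo's actual API): once each tx's inputs are classified, any maker-classified input that spends a cj-amount output of another JM tx marks that output as a maker's:

```python
def link_maker_outputs(jm_txs):
    """jm_txs: dict mapping txid -> {
        'cj_outputs': {(txid, vout): amount},  # the cj-amount outputs
        'maker_inputs': set of (prev_txid, prev_vout) outpoints}
    Returns the set of cj outputs identified as belonging to makers."""
    maker_cjouts = set()
    for tx in jm_txs.values():
        for prev in tx['maker_inputs']:
            prev_txid, _ = prev
            prev_tx = jm_txs.get(prev_txid)
            # Spent outpoint is a cj-amount output of another JM tx:
            if prev_tx is not None and prev in prev_tx['cj_outputs']:
                maker_cjouts.add(prev)
    return maker_cjouts

# Toy example with hypothetical txids: a maker input of 'tx2'
# spends the cj output ('tx1', 0), so that output gets marked.
_example = {
    'tx1': {'cj_outputs': {('tx1', 0): 2.0, ('tx1', 1): 2.0},
            'maker_inputs': set()},
    'tx2': {'cj_outputs': {('tx2', 0): 1.4},
            'maker_inputs': {('tx1', 0)}},
}
linked = link_maker_outputs(_example)
```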

CODE

The code is here:
https://github.com/fungibit/jm_unmixer.git

This is just a proof-of-concept.  I didn't put that much effort into it, and it is just a prototype-level tool, probably with many bugs.

That said, it still works pretty well.  It detects ~15000 txs it believes to be JM txs.  The analysis gives an average unmix-level of 54% (unmix_level=(num_makers_detected)/(num_parties-1)).  24% of the txs are unmixed fully. It gets "confused" by less than 2% of the txs, categorizing *all* parties as makers (no taker).
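For concreteness, the unmix-level formula above works out like this (a trivial restatement of the definition in the previous paragraph, not code from the repo):

```python
def unmix_level(num_makers_detected, num_parties):
    """Fraction of a tx's makers identified, as defined above:
    with n parties there are n - 1 makers. 1.0 means fully unmixed
    (all makers found, so the taker's cj output is isolated by
    elimination); 0.0 means nothing was learned."""
    return num_makers_detected / (num_parties - 1)

# A 5-party tx has 4 makers; identifying 3 of them gives 3/4 = 0.75,
# and identifying all 4 gives 1.0 (fully unmixed).
```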

I believe that with more work, the tool could be improved to unmix 70%-90% of the outputs.  However, I have no intention of improving it further; I only wanted to prove the concept.