Author  Topic: Joining mempool RBF transactions (Read 246 times)

jr. member
Activity: 31
Merit: 14
June 16, 2021, 12:20:28 PM
#10
Bumping this thread with a blog post we wrote. It details some of the risks associated with dynamic/additive/RBF batching; folks in this thread might find it interesting.

https://blog.cardcoins.co/rbf-batching-at-cardcoins-diving-into-the-mempool-s-dark-reorg-forest
legendary
Activity: 1463
Merit: 1886
August 14, 2020, 05:43:35 PM
#9
I'm very happy that the Bitcoin I gifted in the past was gifted by handing out the private key. People lost their gifts and I was able to recover them for them.

I printed out a bunch of paper wallets over the years to give as gifts. I also kept the private keys to help restore them if needed, and checking just now, only one of them has moved (in 2015). No one has ever asked me for help, so I don't know if I should volunteer to "help" them restore their bitcoin. A few of those are for 1 BTC (although most are 0.1), which is now a decent chunk of money. I like to think they kept the paper wallet in a safety deposit box or something, but the reality is probably that it's both lost and forgotten.


But my main motivation for the "utxo-giftcards" was actually as a (very trade-offy) way of making payments. Very often when people are sending money to an exchange or a casino, they really just want to "send everything", and a big service would be able to very efficiently redeem such "utxo-giftcards" (e.g. by using them to process someone's low-priority withdrawal).
staff
Activity: 4284
Merit: 8808
August 14, 2020, 03:00:33 PM
#8
The reason I'm such a huge fan of the idea of "utxo-giftcards" is that it's pretty phenomenally space-efficient (as you can give someone money without even making a transaction) and a death-blow to entire classes of bitcoin analysis attacks. (Of course there are drawbacks in transferring a private key, but I feel those are pretty obvious and well enough understood that it's easy to use whichever is most suitable.)

It's also a perfect match for the model of a gift or a donation-- you don't care if the donor/gifter claws their money back at the last moment (you'd rather they didn't, but you weren't relying on them not to). It also addresses the problem that small bitcoin gifts usually result in lost coins.

I'm very happy that the Bitcoin I gifted in the past was gifted by handing out the private key. People lost their gifts and I was able to recover them for them.
legendary
Activity: 1463
Merit: 1886
August 14, 2020, 11:46:54 AM
#7
@RHavar may I see what you have tried? I'm interested in this subject. My name for it was "dynamic batching".

I'm not sure it's very useful -- not only did I not get it working, I'm not sure how to do so. But I think the "project scope" was pretty interesting.

Basically the idea was to build a "commercial wallet" that sits behind a trusted bitcoin node (avoiding a lot of validation nonsense, risks, etc.) and then provides a really robust API for sending/receiving payments. And I made a very strong distinction between a "payment" and a bitcoin transaction. As in, a payment might be "I want to pay X with N BTC at Z priority", and you could track that payment and see which "bitcoin transactions" it gets reified into (which might keep changing until it eventually confirms).
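To make the payment-vs-transaction split concrete, here's a minimal Python sketch (hypothetical names, not the actual wallet's API) of a "payment" object that can be reified by a changing set of transactions until one confirms:

Code:
from dataclasses import dataclass, field
from enum import Enum, auto


class Priority(Enum):
    LOW = auto()
    NORMAL = auto()
    HIGH = auto()


@dataclass
class Payment:
    """An intent: "pay address X amount N at priority Z", decoupled from any txid."""
    address: str
    amount_sat: int
    priority: Priority
    # every transaction that has carried this payment so far; the list keeps
    # changing (RBF replacements) until one of them confirms
    candidate_txids: list[str] = field(default_factory=list)
    confirmed_txid: str | None = None

    def reify_in(self, txid: str) -> None:
        """Record that a new (possibly replacing) transaction now carries this payment."""
        self.candidate_txids.append(txid)

    def mark_confirmed(self, txid: str) -> None:
        assert txid in self.candidate_txids
        self.confirmed_txid = txid

The API would then report status per payment rather than per txid; a caller only learns which transaction actually settled it once confirmed_txid is set.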

The other feature I was trying to build, which I'm a huge fan of, is something I call "utxo-giftcards", where a "utxo-giftcard" is basically a (txid:vout:privatekey). So when you import someone else's utxo-giftcard, the wallet knows it's "untrusted": because a 3rd party also has the private key, it could easily be double-spent. A "utxo-giftcard" is considered "redeemed" when the wallet itself spends it (as opposed to a 3rd party doing so). And when adding a "utxo-giftcard" you could specify its "redeem priority".
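A minimal sketch of that record (again hypothetical, just to pin down the idea) could look like:

Code:
from dataclasses import dataclass
from enum import Enum, auto


class RedeemPriority(Enum):
    WHENEVER = auto()      # sweep opportunistically, e.g. into a low-priority batch
    SOON = auto()
    IMMEDIATELY = auto()


@dataclass
class UtxoGiftcard:
    """A (txid:vout:privatekey) triple plus the trust/redeem bookkeeping."""
    txid: str
    vout: int
    private_key_wif: str
    redeem_priority: RedeemPriority = RedeemPriority.WHENEVER
    trusted: bool = False                 # the giver also holds the key, so it can be double-spent
    redeemed_by_txid: str | None = None   # set once *our* wallet spends it

    @property
    def redeemed(self) -> bool:
        return self.redeemed_by_txid is not None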

The reason I'm such a huge fan of the idea of "utxo-giftcards" is that it's pretty phenomenally space-efficient (as you can give someone money without even making a transaction) and a death-blow to entire classes of bitcoin analysis attacks. (Of course there are drawbacks in transferring a private key, but I feel those are pretty obvious and well enough understood that it's easy to use whichever is most suitable.)

---

But doing it again, I'd keep the same interface but not try to code any of the logic myself -- instead I'd try to use some sort of logical solver and then execute its results. I think maybe Microsoft's Z3 Theorem Prover could work for it, but I don't have the time or energy to try that at the moment.
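For what it's worth, the z3-solver Python bindings make the "describe the rules, let the solver decide" approach easy to prototype. This toy (invented data, not a real wallet) just encodes two rules -- every pending payment is carried by exactly one broadcast transaction, and transactions that spend the same input are never both broadcast -- and asks Z3 for a consistent set of transactions to keep:

Code:
from z3 import Bool, Not, Or, PbEq, Solver, is_true, sat

# Toy data: which payments each candidate transaction would carry,
# and which UTXOs it would spend.
carries = {"txA": {"pay1"}, "txB": {"pay1", "pay2"}, "txC": {"pay2"}}
spends = {"txA": {"utxo1"}, "txB": {"utxo1", "utxo2"}, "txC": {"utxo3"}}

broadcast = {t: Bool(f"broadcast_{t}") for t in carries}
s = Solver()

# Rule 1: every payment is carried by exactly one broadcast transaction.
for pay in {"pay1", "pay2"}:
    s.add(PbEq([(broadcast[t], 1) for t in carries if pay in carries[t]], 1))

# Rule 2: conflicting transactions (shared input) cannot both be broadcast.
txs = list(carries)
for i, t1 in enumerate(txs):
    for t2 in txs[i + 1:]:
        if spends[t1] & spends[t2]:
            s.add(Or(Not(broadcast[t1]), Not(broadcast[t2])))

if s.check() == sat:
    m = s.model()
    print([t for t in txs if is_true(m[broadcast[t]])])  # e.g. ['txB'] or ['txA', 'txC']

The real problem would obviously need many more rules (reorgs, inbound replacements, fee/priority objectives), but the appeal is that each rule stays declarative instead of multiplying imperative cases.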
newbie
Activity: 28
Merit: 24
August 11, 2020, 02:21:18 PM
#6
Except "batch on replacement" is ludicrously hard to do.
That's why I say they should batch in the first place. Smiley

Quote
I actually think it's the hardest (pure) programming problem I've worked on, and I don't feel like I'd be able to do it even given another 6 months. I've never done "logic programming", but I almost feel like something like that would be essential, where you sort of logically describe all the high-level concepts and ask it to solve what you should do. Just trying to handle all the cases imperatively seems impossible without ending up in an exploded spaghetti nightmare of a gazillion states.
[Bit off topic, just ranting here in case you have any insights]
That was all I had to offer. I think the right way to solve that isn't to write code for it; it's to write (or steal) a logic-relation engine.

In particular, handling all the cases where an earlier partial payment confirms, and then making sure that your follow-up payment conflicts with the earlier complete payments (either by being a child of the partial or more directly) ... it's just a gnarly mess.

For most people the best advice right now is to batch in the first place.

@RHavar may I see what you have tried? I'm interested in this subject. My name for it was "dynamic batching".
staff
Activity: 4284
Merit: 8808
August 05, 2020, 10:37:35 PM
#5
Except "batch on replacement" is ludicrously hard to do.
That's why I say they should batch in the first place. Smiley

Quote
I actually think it's the hardest (pure) programming problem I've worked on, and I don't feel like I'd be able to do it even given another 6 months. I've never done "logic programming", but I almost feel like something like that would be essential, where you sort of logically describe all the high-level concepts and ask it to solve what you should do. Just trying to handle all the cases imperatively seems impossible without ending up in an exploded spaghetti nightmare of a gazillion states.
[Bit off topic, just ranting here in case you have any insights]
That was all I had to offer. I think the right way to solve that isn't to write code for it; it's to write (or steal) a logic-relation engine.

In particular, handling all the cases where an earlier partial payment confirms, and then making sure that your follow-up payment conflicts with the earlier complete payments (either by being a child of the partial or more directly) ... it's just a gnarly mess.
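A tiny sketch of that requirement (illustrative names only): given the inputs of the planned follow-up, the inputs of the stale "complete" transaction, and the txid of the confirmed partial, the wallet would check that at least one of the two conflict strategies holds before broadcasting. This assumes the partial and complete transactions already conflict with each other, as RBF versions of the same payment do.

Code:
def followup_conflicts(followup_inputs, complete_tx_inputs, partial_txid):
    """True if the follow-up payment can never confirm alongside the old complete tx.

    followup_inputs / complete_tx_inputs: sets of (txid, vout) outpoints.
    """
    # "more directly": the follow-up double-spends one of the complete tx's inputs.
    directly = bool(set(followup_inputs) & set(complete_tx_inputs))
    # "child of the partial": the follow-up spends an output of the confirmed partial,
    # so a reorg that confirms the complete tx instead also invalidates the follow-up.
    as_child = any(txid == partial_txid for txid, _vout in followup_inputs)
    return directly or as_child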

For most people the best advice right now is to batch in the first place.
legendary
Activity: 1463
Merit: 1886
August 05, 2020, 09:36:58 PM
#4
In some cases senders have failed to batch and could batch on replacement, but the solution there is ... batching in the first place. Smiley

Except "batch on replacement" is ludicrously hard to do.  I called it super sending and spent like 6 months working on the (generalization of that) problem (fully decoupling "payments" from "bitcoin transactions" and handling inbound/outbound correctly) and I like to think I'm a decent programmer, but had to give up as I could never figure out how to handle get a handle on all the complexity (which really explodes when you consider reorgs, inbound payments being replaced etc.).

I actually think it's the hardest (pure) programming problem I've worked on, and I don't feel like I'd be able to do it even given another 6 months. I've never done "logic programming", but I almost feel like something like that would be essential, where you sort of logically describe all the high-level concepts and ask it to solve what you should do. Just trying to handle all the cases imperatively seems impossible without ending up in an exploded spaghetti nightmare of a gazillion states.


[Bit off topic, just ranting here in case you have any insights]
staff
Activity: 4284
Merit: 8808
August 05, 2020, 05:05:28 PM
#3
Without signature aggregation there wouldn't be much in the way of savings unless there was cut-through going on, but there isn't much of that naturally, because wallets don't normally spend unconfirmed outputs from third parties.

In some cases senders have failed to batch and could batch on replacement, but the solution there is ... batching in the first place. Smiley
HCP
legendary
Activity: 2086
Merit: 4361
August 05, 2020, 04:26:23 PM
#2
Your solution basically just describes exactly what happens with transactions now, i.e. you put them into a "pool" and they get grouped... in a block! Tongue

As all the transactions need to be signed, there is no way to "join" them together into one big transaction to "save space" without breaking the signatures (or, more than likely, adding even more overhead)... and in any case, the space savings would be minimal. The major components of a transaction (in terms of bytes) are the inputs and, to a lesser extent, the outputs... the actual "fixed" overheads are a small fraction of a transaction's size. For old legacy transactions it was around 10-12 bytes per transaction regardless of total transaction size.
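As a back-of-the-envelope check on that figure (assuming plain legacy P2PKH inputs/outputs and one-byte varint counts; exact sizes vary by script type):

Code:
# Fixed per-transaction overhead of a legacy transaction (one-byte varint counts).
VERSION, IN_COUNT, OUT_COUNT, LOCKTIME = 4, 1, 1, 4
FIXED_OVERHEAD = VERSION + IN_COUNT + OUT_COUNT + LOCKTIME   # = 10 bytes

P2PKH_INPUT = 148    # outpoint 36 + scriptSig ~107 + script length 1 + sequence 4
P2PKH_OUTPUT = 34    # value 8 + script length 1 + scriptPubKey 25

def legacy_tx_size(n_in: int, n_out: int) -> int:
    return FIXED_OVERHEAD + n_in * P2PKH_INPUT + n_out * P2PKH_OUTPUT

# Merging two 1-input/2-output transactions into one 2-input/4-output transaction
# only saves one copy of the fixed overhead: about 10 bytes out of ~450.
separate = 2 * legacy_tx_size(1, 2)   # 452
merged = legacy_tx_size(2, 4)         # 442
print(separate, merged, separate - merged)

Which is why, without cut-through or signature aggregation, merging transactions barely moves their satoshi-per-byte rate.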
copper member
Activity: 906
Merit: 2258
August 05, 2020, 03:45:51 PM
#1
When observing the mempool, I can see many RBF transactions with low fees. Is it possible to join some of them without increasing fees? That way, these transactions would take less space and their satoshi-per-byte value would be higher, so it would be more profitable to include them. I wonder if some kind of "mempool-based coinjoin" is possible, where people just send their transactions into the mempool and they get aggregated and replaced, provided all of them are RBF transactions. Are there any existing ways of doing this now?