Besides the fact that throughout the source code comments transactions with dependent inputs are referred to by this term, I tried and couldn't find any other term for such "packaged" transactions when they are present in a block. It would be highly appreciated if you would suggest one.
100% of the use of the word is in the context of the mempool and tx selection.
Being "in the context of mempool" is a correct remark, but I don't understand why should we invent any other term for packages when they migrate to blocks? I mean other than just being offensive against an imaginary opponent
Calling that packages would be inappropriate because it suggests they're in some way grouped or something-- but they're not. A dependent transaction can be anywhere in the block so long as it's after its parents.
But they actually are "grouped or something", and there is no "dependent transaction" in the Bitcoin literature. Now you are inventing a new term with an ambiguous meaning just to fight with me.
So, it is not about parallel transaction processing, because other than their input scripts, transactions are processed sequentially.
The validation of a transaction is almost exclusively the validation of its inputs. Outputs are only checked for size and total amount, nlocktime is an integer comparison, as is the maximum weight. For these kinds of checks bouncing the cacheline over to another thread would take more time than the check.
You are deliberately "overlooking" the most expensive task here: checking that the inputs exist in the UTXO db, which is currently done in the main thread, sequentially.
It could queue the entire transactions rather than the individual inputs but this would achieve worse parallelism e.g. because a block pretty reliably will have >6000 inputs these days, but may have anywhere between 500 and 2000 txn just the same. At least at the time that was written working tx at a time was slower per my chat logs. That might not be true anymore because more of the workload is full blocks, because more recent cpus suffer more from overhead, etc. This was the sort of thing I was referring to in my prior post when I said that less parallelism might turn out to be faster now, due to overheads.
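For reference, here is roughly what the per-input queueing described above amounts to: a pool of workers draining a shared queue of checks, where the producer can push either one small item per input or one larger item per transaction. This is a toy sketch with made-up names, not the actual Bitcoin Core code:

```cpp
// Toy sketch of a parallel script-check queue (made-up names, not the actual
// Bitcoin Core code): worker threads drain a shared queue of checks, and the
// main thread chooses the granularity of the work items it pushes.
#include <algorithm>
#include <atomic>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

struct CheckQueue {
    std::queue<std::function<bool()>> work;
    std::mutex m;
    std::condition_variable cv;
    std::atomic<bool> all_ok{true};
    bool closed = false;

    void Push(std::function<bool()> check) {
        { std::lock_guard<std::mutex> g(m); work.push(std::move(check)); }
        cv.notify_one();
    }
    void Close() {
        { std::lock_guard<std::mutex> g(m); closed = true; }
        cv.notify_all();
    }
    void Worker() {
        for (;;) {
            std::function<bool()> check;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return closed || !work.empty(); });
                if (work.empty()) return;          // queue drained and closed
                check = std::move(work.front());
                work.pop();
            }
            if (!check()) all_ok = false;          // remember any failing check
        }
    }
};

struct TxIn { /* prevout, scriptSig, ... */ };
struct Tx   { std::vector<TxIn> vin; };

// Stand-in for the expensive part: validating one input's script.
bool VerifyInputScript(const TxIn&) { return true; }

int main() {
    // Roughly block-shaped numbers from the quote: ~1500 txns, ~6000 inputs.
    std::vector<Tx> block(1500, Tx{std::vector<TxIn>(4)});

    CheckQueue q;
    std::vector<std::thread> workers;
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    for (unsigned i = 0; i < n; ++i) workers.emplace_back([&] { q.Worker(); });

    for (const Tx& tx : block) {
        // Per-input granularity: many small, evenly sized work items.
        for (const TxIn& in : tx.vin)
            q.Push([in] { return VerifyInputScript(in); });
        // Per-transaction granularity would push one bigger item instead:
        // q.Push([&tx] { for (auto& in : tx.vin)
        //                    if (!VerifyInputScript(in)) return false;
        //                return true; });
    }
    q.Close();
    for (auto& t : workers) t.join();
    return q.all_ok ? 0 : 1;
}
```

The trade-off in the quote shows up in the two Push variants: per-input items are small, numerous (~6000 per block) and evenly sized, while per-transaction items are fewer (500-2000) and uneven.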
Besides contradicting yourself in a single paragraph, your reasoning suffers from an architectural weakness:
Look, when it comes to parallelism, you don't do it by artificially breaking apart CPU-intensive tasks and I/O-bound ones; that is the OS's job (not the programmer's). Violating this simple rule of thumb leads to excessive context-switching overhead, exactly the problem we are dealing with here.
Mission-critical projects desperately need software architects, way more than "genius" if-then-else coders. It is a known issue for open-source projects that they are prone to drifting away from engineering principles and best practices, because there is a tendency toward coding before thinking or even discussing. The way they like to put it: thinking by code! The most stupid slogan ever.
Software engineers/architects do not think by code, never, ever. I personally postpone coding as much as possible, because I've been in this field for a long time and I regret writing down thousands of lines of stupid code that, although they "work", are more of a burden than a relief for the project and its users.
I'm telling you that the strategy chosen by sipa in PR 2060, which has survived to this day despite further improvements, was not a good one, because it does too much context switching.
A single thread should take care of a single transaction; a software architect is speaking here, so don't bother wasting your time and mine arguing about basics. Bitcoin is a disruptive technology, but not in the field of software engineering and architecture; programmers are not allowed to bend the rules whenever they "feel" like it, that is chaos and should be avoided. Nobody will be confused or intimidated by a programmer who is making excuses for his wrong choices or bragging about his if-then-else writing "power".
A few weeks ago someone in this forum said something about block verification in Bitcoin being sequential because of "packaged transactions" (take it easy).
Turns out that people around here often have mistaken beliefs. Among other sources they get propagated by altcoin scammers that claim to have fixed "problems" that don't even exist.
Isn't it over? The anti-alt crusade?
Let's move on and let them breathe. Bitcoin is doing well and is not threatened by altcoins; there is no need to run such a crusade at all. It is not a sect or a cult, it is an open-source project and ecosystem; forks happen, divergence is natural and a sign of maturing. Take it easy.
I proposed an algorithm for this and the discussion went on in other directions ...
Your proposed algorithm would almost certainly not work well in practice:
OK. Challenge accepted.
I don't have the luxury of living in a wealthy country; I usually do jobs for a living, I have to. But after this mess, I feel it is necessary to allocate part of my time to proving you wrong in your own way: proving by code.
So, I'll implement the algorithm that you are "almost" (thank god) certain will not work well, in a few weeks, and we will see. It is totally unnecessary, TBH, because your following objections are totally wrong:
Every one of the iterations requires multiple read/write accesses to global data (the set of existing outputs).
The thread will block, leaving room for the next thread; don't worry about threads being blocked. On the contrary, assigning only CPU-intensive tasks to threads that never block is somewhat a counter-purpose design choice. Let threads block, please!!
Also in the worst case it would have to attempt to validate all transactions N^2 times: consider what happens if all transactions are dependent on the transaction before them and the parallel selection is unlucky so that it attempts the first transaction last.
Firstly, it is actually N^2/2 for the worst case.
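To spell out that count, take the scenario from the quote: every transaction spends an output of the transaction right before it, and the scheduler is so unlucky that in each pass the only transaction that can succeed is attempted last. The first pass then makes N attempts and confirms one transaction, the next pass makes N-1 attempts, and so on:

```latex
% Total validation attempts in the adversarial ordering described above
\[
N + (N-1) + \dots + 1 \;=\; \sum_{k=1}^{N} k \;=\; \frac{N(N+1)}{2} \;\approx\; \frac{N^2}{2}
\]
```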
Secondly, that is not how we design algorithms for real-world problems, being overly concerned about a case that simply doesn't happen. It is enough for an algorithm to finish in a reasonable time when the worst case happens, if ever.
It's just trying to impose too strong an ordering requirement, as there is no need to validate in order. Bitcoin core doesn't today.
On the contrary, it is the current implementation of block verification that imposes an ordering requirement; my algorithm doesn't even require parent txns to appear earlier in the block. It has nothing to do with the order of transactions, dependent (your term) or not: once a transaction fails because its input(s) are not present in the UTXO set, it will be re-scheduled for a retry as soon as at least one more confirmed transaction has been added to the set. I don't even look at the txn's index in the block, though that may be necessary for consensus; if so, I can keep track of the index and wait for a confirmed txn with a lower index before retrying.
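To make that description concrete, here is a minimal sketch of the retry loop as I read it: workers pull transactions from a shared list, a transaction whose inputs are missing from the UTXO set is deferred, and the deferred transactions are retried once the current pass has confirmed at least one more transaction. All types and names are made up for illustration; script and amount checks, the consensus ordering rule and real storage are omitted, and one coarse mutex guards the UTXO set for clarity:

```cpp
// Minimal sketch of the retry idea described above (one reading of it).
// Hypothetical types and names; real validation is omitted and a single
// coarse mutex guards the UTXO set for clarity.
#include <atomic>
#include <cstdint>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_set>
#include <vector>

struct OutPoint { std::string txid; uint32_t n; };
struct Tx {
    std::string txid;
    std::vector<OutPoint> vin;   // outputs this transaction spends
    uint32_t nout;               // number of outputs it creates
};

static std::string Key(const std::string& txid, uint32_t n) {
    return txid + ":" + std::to_string(n);
}

bool ValidateBlockParallel(const std::vector<Tx>& block,
                           std::unordered_set<std::string>& utxo,
                           unsigned nthreads)
{
    std::vector<const Tx*> pending;
    for (const Tx& tx : block) pending.push_back(&tx);
    std::mutex m;  // guards utxo and the retry list

    while (!pending.empty()) {
        std::vector<const Tx*> retry;
        std::atomic<size_t> next{0};
        std::atomic<bool> progress{false};

        auto worker = [&] {
            for (size_t i; (i = next++) < pending.size();) {
                const Tx* tx = pending[i];
                std::lock_guard<std::mutex> g(m);
                bool ready = true;
                for (const OutPoint& in : tx->vin)
                    if (!utxo.count(Key(in.txid, in.n))) { ready = false; break; }
                if (!ready) { retry.push_back(tx); continue; }  // parent not confirmed yet
                for (const OutPoint& in : tx->vin)
                    utxo.erase(Key(in.txid, in.n));             // spend the inputs
                for (uint32_t k = 0; k < tx->nout; ++k)
                    utxo.insert(Key(tx->txid, k));              // add the new outputs
                progress = true;
            }
        };

        std::vector<std::thread> threads;
        for (unsigned t = 0; t < nthreads; ++t) threads.emplace_back(worker);
        for (auto& t : threads) t.join();

        if (!progress) return false;  // a full pass confirmed nothing: some input really is missing
        pending.swap(retry);          // deferred txns get another chance now that the set has grown
    }
    return true;
}
```

Called with, say, std::max(1u, std::thread::hardware_concurrency()) threads, this terminates either when every transaction is confirmed or when a full pass makes no progress. A real implementation would need finer-grained locking (or a sharded UTXO structure) for the parallelism to pay off, and, as noted above, would have to track transaction indices if consensus requires parents to appear earlier in the block.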