What you're seeing here is someone trying to pump his ego by shitting on other things and showing off to impress you with how uber-technical he is -- not the first or the last of those we'll see.
What you're seeing here is someone trying to defend obvious bad design choices.
A quarter of the items in the list, like "Lack of inline assembly in critical loops," are untrue, and they also show up in other abusive folks' lists as things Bitcoin Core is doing and is awful for doing, because it's antithetical to portability, reliability, or the poster's idea of code aesthetics (or because MSVC stopped supporting inline assembly, thus anyone who uses it is a "moron").
What aesthetics? Your code is ugly anyway, wait, you think your code has aesthetics? Shit.
You know decades ago people invented this little thing called #ifdef, right?
Just use #ifdef _MSC_VER/#else/#endif around the inline assembly if you want to bypass MSVC.
This is basic stuff; anyone who pretends to be an expert and doesn't know this is also a "moron".
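The pattern being described can be sketched like this (a minimal, hypothetical example -- a 32-bit rotate, not code from Bitcoin Core):

```cpp
#include <cstdint>

// Hypothetical illustration of the #ifdef pattern: inline assembly for
// GCC/Clang on x86, with a portable fallback for MSVC (which dropped
// inline asm on x64) and every other target. n must be in 1..31.
static inline uint32_t rotl32(uint32_t x, unsigned n) {
#if !defined(_MSC_VER) && defined(__GNUC__) && \
    (defined(__x86_64__) || defined(__i386__))
    // GCC/Clang on x86: use the rol instruction directly.
    __asm__("roll %%cl, %0" : "+r"(x) : "c"(n) : "cc");
    return x;
#else
    // MSVC and all other compilers/architectures: plain portable C++.
    return (x << n) | (x >> (32u - n));
#endif
}
```

Either branch computes the same result, so the assembly stays an opt-in fast path rather than a portability hazard.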
Here is the straight dope: If the comments had merit and the author were qualified to apply them-- where is the patch? Oh look at that, no patches.
You ignored the part that was already explained to you:
There are plenty of gurus out there who can make Core's code run two to four times faster without even trying. But most of them won't bother; if they're going to work for the bankers, they'd expect to get paid handsomely for it.
Like any self-respecting coder is going to clean up your crap and let you claim all the credit.
Many of the people working on the project have long-term experience with low-level programming
And many people on the project quit because they didn't like working with you, what's your point?
(for example, I spent many years building multimedia codecs; wladimir does things like video drivers and IIRC used to work in the semiconductor industry), and the codebase reflects
many points of optimization with micro-architectural features in mind. But _most_ of the codebase is not a hot path, and _all_ of the codebase must be optimized for reliability and reviewability above pretty much all else.
gmaxwell must think I am the only one who knows the code sucks.
Why don't you walk outside your little church and look around once in a while.
It's not just the micro-optimizations that are in question; even the basic design choices are obviously flawed.
People have been laughing at your choices for years, and here you are defending them because you wrote some codec to watch porn at higher fps some years ago.
Some of these pieces of advice are just a bit outdated as well-- it makes little sense to bake in an optimization that a compiler will reliably perform on its own at the expense of code clarity and maintainability; especially in the 99% of code that isn't hot or on a latency critical path. (Examples being loop invariant code motion and use of conditional moves instead of branching).
Translation: My code is great, everyone else is wrong, nobody else can possibly improve it.
It's your style and your choices; it tells people you don't understand performance at an instinctive level.
Even simple crap like switching to --i instead of ++i will reduce assembly instructions, regardless of what optimization flags you use on the compiler.
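For what it's worth, the countdown idiom being alluded to looks like this; whether it actually saves instructions depends on the target and optimizer, and modern compilers frequently emit identical code for both forms (function names here are illustrative):

```cpp
#include <cstddef>

// Counting up: the loop test typically needs a compare against n.
long sum_up(const long* a, std::size_t n) {
    long s = 0;
    for (std::size_t i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Counting down: the decrement itself sets the flags, so on some targets
// the compare against zero comes for free. Measure before assuming a win.
long sum_down(const long* a, std::size_t n) {
    long s = 0;
    for (std::size_t i = n; i-- > 0; )
        s += a[i];
    return s;
}
```

Both functions return the same sum; only the loop test differs.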
Similarly, some are true for generic non-hot-path code: E.g. it's pretty challenging in idiomatic, safe C++ to avoid some amount of superfluous memory copying (especially prior to C++11 which we were only able to upgrade to in the last year due to laggards in the userbase), but in the critical path for validation there is virtually none (though there are an excess of small allocations, help improving that would be very welcome). Though, you're not likely to know that if you're just tossing around insults on the internet instead of starting up a profiler.
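The C++11 copy-avoidance point above can be illustrated with move semantics (a generic sketch, not Bitcoin Core code):

```cpp
#include <string>
#include <utility>
#include <vector>

// Before C++11, idiomatic code like this forced deep copies of large
// buffers; with move semantics, ownership of the buffer is transferred.
std::vector<std::string> make_rows() {
    std::vector<std::string> rows;
    rows.reserve(2);                   // avoid reallocation (and re-copies)
    std::string big(1 << 20, 'x');     // 1 MiB payload
    rows.push_back(std::move(big));    // C++11: steals the buffer, no copy
    rows.emplace_back(1 << 20, 'y');   // constructs in place, no temporary
    return rows;                       // moved (or elided), not copied
}
```

Pre-C++11, the `push_back` and the `return` would each have copied the megabyte; after the upgrade, neither does.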
Translation: My code is great, everyone else is wrong.
And of course, we're all quite busy keeping things running reliably and improving -- and pulling out the big tens-of-percent performance improvements that come from high-level algorithmic improvements. Eking out the last percent in micro-optimizations isn't always something that we have the resources to do, even where it makes sense from a maintainability perspective. But instead we're off building the fastest ECC validation code that exists out there, bar none; because that's simply more important.
Could there be more micro-optimizations? Absolutely. So step on up and get your hands dirty, because there is 10x as much work needed as there are resources. There is almost no funding (unlike the millions poured into BU just to crank out crashware); and we can't have basically any failures -- at least not in the consensus-critical parts. Oh yeah, anonymous people will be abusive to you on the internet too. It's great fun.
Again, it's not just the micro-optimizations, it's the big fat bad design choices.
We didn't come here to bash you or your code; the topic just came up, and on that one page people were already mocking your choices.
Like this one from ComputerGenie:
Why the hell is Core still stuck on LevelDB anyway?
The same reason BDB hasn't ever been replaced: because even after a softfork and a hard fork, new wallets must still be backwards-compatible with already nonfunctional 2011 wallets.
That should tell you something.
"Inefficient data storage"? Oh please. Cargo-cult bullshit at its worst. Do you even know what leveldb is used for in Bitcoin? What reason do you have to believe that $BUZZWORD_PACKAGE_DU_JOUR is any better for that? Did it occur to you that perhaps people have already benchmarked other options? Rocks has a lot of features that are completely irrelevant to our very narrow use of leveldb -- I see in your other posts that you're going on about superior compression in rocksdb. Guess what: we disable compression and ripped it out of leveldb, because it HURTS PERFORMANCE for our use case. It turns out that cryptographic hashes are not very compressible.
Everyone knows compression costs performance; it's for space efficiency. Wtf are you even on about?
Most CPUs are running idle most of the time, and SSDs are still expensive.
So just use RocksDB, or just toss in an lz4 lib, add an option in the config, and let people with decent CPUs enable compression and save 20G or more.
I just copied the entire bitcoind dir (blocks, index, exec, everything) onto a ZFS pool with lz4 compression enabled, and at a 256k record size it saved over 20G for me.
Works just fine, and ZFS isn't even known for its performance.
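As an aside, both claims here are easy to sanity-check: uniform random bytes (what cryptographic hash output looks like) sit near the 8-bits-per-byte entropy ceiling, so a general-purpose compressor gains almost nothing on them, while repetitive data compresses heavily -- which is why a blocks directory containing more than just hashes can still shrink under lz4. A quick, illustrative estimator:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <string>

// Shannon entropy in bits per byte (maximum 8.0). High entropy means a
// general-purpose compressor has little redundancy to exploit.
double byte_entropy(const std::string& data) {
    std::array<std::size_t, 256> freq{};
    for (unsigned char c : data) ++freq[c];
    double h = 0.0;
    for (std::size_t f : freq) {
        if (f == 0) continue;
        double p = static_cast<double>(f) / data.size();
        h -= p * std::log2(p);
    }
    return h;
}
```

Hash-like random bytes come out near 8.0 bits/byte; a run of a single repeated byte comes out at 0.0.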
(And as CK pointed out, no the blockchain isn't stored in it-- that would be pretty stupid)
That was not what CK said. What CK said was: "I'm not a fan of its performance either" [#1058]
Do you have difficulty reading or are you just being intentionally dishonest?
The regular contributors who have written most of the code are the same people pretty much through the entire life of the project, and they're professionals with many years of experience. Perhaps you'd care to share with us your lovely and impressive works?
Doesn't matter how many lifetimes people spent on it; when you see silly shit like sha256() twice, you know it's written by amateurs.
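(For readers outside the argument: the "sha256() twice" being mocked is Bitcoin's double-SHA256, SHA256d(x) = SHA256(SHA256(x)), applied to blocks and transactions. With a stand-in one-way function -- splitmix64 here, purely illustrative, NOT a real SHA-256 -- the composition looks like this:)

```cpp
#include <cstdint>

// Stand-in one-way mixer (splitmix64); illustrative only, NOT SHA-256.
uint64_t toy_hash(uint64_t x) {
    x += 0x9E3779B97F4A7C15ULL;
    x = (x ^ (x >> 30)) * 0xBF58476D1CE4E5B9ULL;
    x = (x ^ (x >> 27)) * 0x94D049BB133111EBULL;
    return x ^ (x >> 31);
}

// The double-hash composition Bitcoin applies: hash the hash.
// (The real code computes SHA256(SHA256(data)).)
uint64_t toy_hash_d(uint64_t x) {
    return toy_hash(toy_hash(x));
}
```

Whether the double application is prudent defense-in-depth or amateurish overhead is exactly what the two posters disagree about.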
Which wouldn't even hold a candle to the multiple-orders-of-magnitude speedup we've produced so far cumulatively through the life of the project -- exactly my point about micro-optimizations. Of course, contributions are welcome. But it's a heck of a lot easier to wave your arms and insult people who've produced hundredfold improvements, because you think a laundry list of magic moves is going to get another couple of times (and it might -- but at what cost?)
If you'd like to help out it's open and ready-- though you'll be held to the same high standard of review and validation and not just given a pass because a micro-benchmark got 1% faster-- reliability is the first concern... but 2x-level improvements in latency or throughput critical paths would be very very welcome even if they were a bit painful to review.
If you're not interested or able -- well then maybe you're just another drunken sports fan throwing concessions from the stands, convinced that you could do so much better than the team, though you won't ever take to the field yourself. It doesn't impress; quite the opposite: you're effectively exploiting the fact that we don't self-promote much, and so you can get away with slinging some rubbish about how terrible we are just to try to make yourself look impressive. It's a low blow against some very hard-working people who owe nothing to you.
All this bullshit talk is meaningless when your basic level silly choices are all over the place.
Here, your internal sha256 lib -- the critical hashing function that all encode/decode operations rely on, the one that hasn't been updated since 2014:
https://github.com/bitcoin/bitcoin/blob/master/src/crypto/sha256.cpp
SHA256 is one of the key pieces of Bitcoin operations: the blocks use it, the transactions use it, the addresses even use it twice.
So what's your excuse for not making use of SSE/AVX/AVX2 and the Intel SHA extension? Aesthetics? Portability? Pfft.
There are mountains of accelerated SHA2 libs out there, like this one,
Supports the Intel SHA extension, supports ARMv8, even has MSVC headers:
For AVX2, here is one from Intel themselves:
https://patchwork.kernel.org/patch/2343841/
Optimized sha256 x86_64 routine using AVX2's RORX instructions
Provides SHA256 x86_64 assembly routine optimized with SSE, AVX and
AVX2's RORX instructions. Speedup of 70% or more has been
measured over the generic implementation.
Signed-off-by: Tim Chen <[email protected]>
There is your 70% speed-up for a single critical operation on your hot path.
This isn't some advanced shit. That Intel patch was created on March 26, 2013; your sha256 lib was last updated on Dec 19, 2014, so the patch existed over a year before your last update. We have even faster stuff now using the Intel SHA intrinsics.
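The usual way portable code picks up the Intel SHA extensions without sacrificing portability is a one-time CPUID check that selects an implementation at runtime. A hedged sketch, not Bitcoin Core's actual code -- the two implementation names are hypothetical stand-ins (stubbed out here so the dispatch itself is runnable):

```cpp
#include <cstddef>
#include <cstdint>
#if defined(__x86_64__) || defined(__i386__)
#include <cpuid.h>   // GCC/Clang CPUID helpers
#endif

using Sha256Fn = void (*)(const uint8_t* data, std::size_t len, uint8_t out[32]);

// Hypothetical placeholders: a portable C++ routine and a SHA-NI routine.
static void sha256_generic(const uint8_t*, std::size_t, uint8_t*) { /* portable */ }
static void sha256_shani(const uint8_t*, std::size_t, uint8_t*)   { /* SHA-NI   */ }

static bool cpu_has_sha_ext() {
#if defined(__x86_64__) || defined(__i386__)
    unsigned eax = 0, ebx = 0, ecx = 0, edx = 0;
    if (__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return (ebx >> 29) & 1u;  // CPUID.(EAX=7,ECX=0):EBX bit 29 = SHA
#endif
    return false;
}

// Pick the fastest implementation the running CPU supports.
Sha256Fn select_sha256() {
    return cpu_has_sha_ext() ? sha256_shani : sha256_generic;
}
```

The accelerated path stays behind a runtime check, so one binary runs correctly on every x86 CPU and on non-x86 builds the generic path is chosen at compile time.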
You talk a lot of shit, but your code and choices look like they were made by amateurs.
Working in "cryptocurrency" doesn't automatically make you an expert because the word has "crypto" in it.
Fix your silly shit instead of just talking about it.