
Topic: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? - page 10. (Read 95279 times)

sr. member
Activity: 420
Merit: 262
The shift to electronic money will be swift and physical currency will experience a waterfall collapse.

Armstrong also disses the idea that gold will go to a very high price ($10,000, or even $64,000, non-hyper-inflated dollars).  I am not sure that I agree with MA on this.  My best guess is that they will run out of physical gold at some point vs. all the paper gold.  I guess we will wait and see.

ORO my man, gold is never coming back as money ever again. It is purely a speculation. That is why it will top at $5000 maximum. I understand it is very hard for you to accept this, but the reasons are very obvious. Physical money is a dinosaur in this era where we trade money instantly, digitally. Gold is too slow. Fugetaboutit already.

Sorry man. I know you want wealth to not grow wings and fly away, but the Bible tells you that is impossible.

I love to hold gold 1 oz coins in my hand. It isn't a lack of passion for gold on my part. It is just my rationality.

Is this the case in the Philippines? In your estimation, what share of transactions in SE Asia nowadays is digital versus physical cash?

It is still mostly physical cash (80 - 90%?), but in another 5 - 10 years everyone will have a smartphone. The shift to all digital will be very rapid. The AsianUnion will open everything and poverty will be gone by 2033.

The world is going to change so fast...

I wouldn't be so sure; changing habits is not easy, and it's not just about giving everyone a smartphone. Large groups of the population can't, and won't want to, shift to all digital; physical money will keep a significant share of transactions for the foreseeable future.

You don't realize how difficult it is here to get change for 500 pesos ($11).

You don't realize how young Filipinos hate to carry around these heavy metal coins in their stylish no-pockets clothes.

You don't realize that the Philippines was the SMS capital of the world before Twitter arrived.

Sorry, you don't realize that there are more Filipinos under age 25 than over it.

The youth spend much of their time online here. The change is already underway.
sr. member
Activity: 420
Merit: 262
Go language:

Afair, Skycoin is programmed in Go. Here are some of my quickly researched findings about Go:

Sharing a private message:

Quote from: myself
[...]

Btw, Go sucks as a replacement for C++ templates, and anyone using it that way doesn't understand modern generics programming language design well; that includes Ethereum:

https://en.wikipedia.org/wiki/Go_%28programming_language%29#Notable_users
http://yager.io/programming/go.html
http://jozefg.bitbucket.org/posts/2013-08-23-leaving-go.html

Also the marriage of a GC with what is intended to be a low-level systems programming language seems to be a mismatch:

http://jaredforsyth.com/2014/03/22/rust-vs-go/

[...]
sr. member
Activity: 420
Merit: 262
I'm starting to have doubts about Rust's by-default emphasis on low-level control of memory allocation lifetimes and scopes:

Some unofficial hacks to get Rust to compile to Javascript via LLVM/Emscripten:

https://users.rust-lang.org/t/any-solution-to-write-html5-games-in-rust/4380

The ability to compile to Javascript and moreover to be able to comprehend the Javascript produced (at least for the unoptimized output case) so it could be debugged with Javascript debuggers (e.g. in the browser) would certainly make a language much more accessible. I am thinking Rust's semantics are too heavy to ever achieve the latter. Minimizing required tooling could potentially remove some tsuris.
hero member
Activity: 798
Merit: 500
I like the Viᖚes name, I think it's cool.
legendary
Activity: 2968
Merit: 1198
There is no perfect language, but I told you already that I have certain requirements, dictated by my project, which are not met by any language:

Oh sure, I can't disagree when you state that you have defined requirements, done a diligent search and come up empty. (The above was mostly a response to AlexGR's comments anyway.)

Now you just have to make a sound business case for building it, or if you can't do that then iterate your requirements.

Quote
Is it outside my resources? Maybe. That is what I am trying to determine now. Deep in research and coding.

Okay we mostly agree. The question is not only resources but also time to market (i.e. will the market wait?). This is not meant to suggest an answer.
sr. member
Activity: 420
Merit: 262
On software development and terms of implementation he is hopeless. Not long ago I read that he said none of the popular existing programming languages - which could expose the project to existing developer communities - satisfies his requirements, and therefore he is now evaluating the Rust language/framework. WTF? So he delays his progress again, because right now the programming languages are not suited to his use case. WTF again? It seems he doesn't have the ability to deliver anything, except showcasing his super intellect in this forum.

I agree with you on this. You go to war with the army you have, while working to make it better. Waiting for the perfect language guarantees waiting forever.

There is no perfect language, but I told you already that I have certain requirements, dictated by my project, which are not met by any language:

1. JIT compilation as loaded over the Internet.
2. Strong typing with type parameter polymorphism, eager by default, with Top as the disjunction of all types and Bottom (_|_) as the conjunction of all types (Haskell is the opposite).
3. Async/await (stackless coroutines)
4. Haskell-style typeclass ad hoc polymorphism, i.e. no subclassing, which is an anti-pattern.
5. Low-level control for performance with high-level elegance, such as first-class functions and the functional programming paradigm.
6. Add first-class disjunctions to #4.

There is no language on earth which can do that. My analysis thus far concludes Rust is closest.

Rust does everything except #3 and #6, and IMO its #4 and #5 need some polishing.

If you are just building an application, crypto-currency or just a website, then you don't need to invest in creating the correct tools. But when you are building an application platform for others to build on top of, then you have to invest in the tools. Ask Google about Android. Ask Apple about iOS.

Is it outside my resources? Maybe. That is what I am trying to determine now. Deep in research and coding.
legendary
Activity: 1708
Merit: 1049
Yes, the streaming argument is valid, but the processor is capable of more than that.

Compilers are not superoptimizers. They can't and don't promise to do everything a processor is capable of.

Basically that brings us back to the starting point... When C was first created, it promised to be very fast and suitable for creating OSes, etc. Meaning, its compiler wasn't leaving much performance on the table. With kHz of speed and a few kbytes of memory there was no room for inefficiency.

Granted, the instruction set has expanded greatly since the 70s with FPUs (x87), MMX, SSE(x), AVX(x), AES, etc., but that was the promise: to keep the result close to no overhead (compared to asm). That's what C promised to be.

But that has gone out the window as the compilers failed to match the progress and expansion of the CPU's arsenal of tools. We are 15 years after SSE2 and we are still discussing why the hell it isn't using SSE2 in a packed manner. This isn't normal by my standards.

If you look at the basic C operations, they all pretty much correspond to a single CPU instruction (or a very small number of them) from the 70s. As TPTB said, it was pretty much intended to be a thin, somewhat higher-level abstraction that was still close to the hardware. Original C wasn't even that portable, in the sense that you didn't have things like fixed-size integer types and such. To get something close to that today you have to consider intrinsics for new instructions (that didn't exist in the 70s) as part of the language.

The original design of C never included all these highly aggressive optimizations that compilers try to do today; that was all added later. Back in the day, optimizers were largely confined to the realm of FORTRAN. They succeed in some cases for C of course, but it's a bit of a square peg in a round hole.

AlexGR, assembly wouldn't regroup instructions to make them SIMD if you hadn't explicitly made them that way. So don't insinuate that C is a regression.

I'm not even asking it to guess what is going to happen. It won't do it even after profiling the logic and flow (monitoring the runtime, then recompiling with the notes taken from that run for the PGO build).

Quote
C has no semantics for SIMD; rather it is all inferred from what the invariants don't prevent. As smooth showed, that C compiler's heuristics are able to group into SIMD over loops, but for that compiler and those flag choices, it isn't grouping over unrolled loops.

What they are doing is primarily confining the use to arrays. If you don't use an array you are fucked (not that if you use arrays it will produce optimal code, but it will tend to be much better).

However, with AVX gaining ground, the losses are now not 2-4x as with leaving packed SSE SIMDs out, but rather 4-8x - which is quite unacceptable.

The spread between what your hardware can do and what your code actually delivers is increasing by the day. When you can group 4 or 8 instructions into one, it's as if your processor is not running at 4 GHz but rather at 16 GHz or 32 GHz, since you need only 1/4th or 1/8th of the clock cycles for the batch processing.

Contemplate this: In the 70s, "C is fast" meant something like we are leaving 5-10% on the table. Now it's gone up to 87.5% left on the table in an AVX scenario where 8 commands could be packed and batch processed, but weren't. Is this still considered fast? I thought so.

Quote
Go write a better C compiler or contribute to an existing one, if you think that optimization case is important.

I can barely write programs, let alone compilers Tongue But that doesn't prevent me from understanding their deficiencies and pointing them out. I'm very rusty in terms of skills - I've not been doing much since the 90s. Even the asm code has changed twice, from 16-bit to 32-bit to 64-bit...

Quote
If you want explicit SIMD, then use a language that has such semantics. Then you can blame the compiler for not adhering to the invariants.

Actually there are two or three additions in the form of #pragma that help the compiler "get it"... mainly for icc though. GCC might have ivdep.
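
A minimal sketch (illustrative, not from any real project) of how the GCC variant is applied: #pragma GCC ivdep asserts that the loop has no loop-carried dependencies, so the vectorizer doesn't have to prove it, and icc's #pragma ivdep plays the same role.

Code:
#include <stddef.h>

/* With gcc -O3 -mavx this loop can be emitted as packed vdivps; the pragma
 * (together with restrict) tells the vectorizer the iterations are independent. */
void scale(float *restrict dst, const float *restrict src, float g, size_t n)
{
#pragma GCC ivdep
    for (size_t i = 0; i < n; i++)
        dst[i] = src[i] / g;
}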
sr. member
Activity: 420
Merit: 262
Yes, the streaming argument is valid, but the processor is capable of more than that.

Compilers are not superoptimizers. They can't and don't promise to do everything a processor is capable of.

Basically that brings us back to the starting point... When C was first created, it promised to be very fast and suitable for creating OSes, etc. Meaning, its compiler wasn't leaving much performance on the table. With kHz of speed and a few kbytes of memory there was no room for inefficiency.

Granted, the instruction set has expanded greatly since the 70s with FPUs (x87), MMX, SSE(x), AVX(x), AES, etc., but that was the promise: to keep the result close to no overhead (compared to asm). That's what C promised to be.

But that has gone out the window as the compilers failed to match the progress and expansion of the CPU's arsenal of tools. We are 15 years after SSE2 and we are still discussing why the hell it isn't using SSE2 in a packed manner. This isn't normal by my standards.

If you look at the basic C operations, they all pretty much correspond to a single CPU instruction (or a very small number of them) from the 70s. As TPTB said, it was pretty much intended to be a thin, somewhat higher-level abstraction that was still close to the hardware. Original C wasn't even that portable, in the sense that you didn't have things like fixed-size integer types and such. To get something close to that today you have to consider intrinsics for new instructions (that didn't exist in the 70s) as part of the language.

The original design of C never included all these highly aggressive optimizations that compilers try to do today; that was all added later. Back in the day, optimizers were largely confined to the realm of FORTRAN. They succeed in some cases for C of course, but it's a bit of a square peg in a round hole.

AlexGR, assembly wouldn't regroup instructions to make them SIMD if you hadn't explicitly made them that way. So don't insinuate that C is a regression.

C has no semantics for SIMD; rather it is all inferred from what the invariants don't prevent. As smooth showed, that C compiler's heuristics are able to group into SIMD over loops, but for that compiler and those flag choices, it isn't grouping over unrolled loops.

Go write a better C compiler or contribute to an existing one, if you think that optimization case is important.

If you want explicit SIMD, then use a language that has such semantics. Then you can blame the compiler for not adhering to the invariants.
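
One hedged sketch of what "explicit SIMD semantics" can look like while staying within (GNU) C: the vector extensions supported by GCC and Clang, where the programmer states the packing instead of hoping the heuristics infer it.

Code:
/* Not ISO C: GCC/Clang vector extension. Sketch only. */
typedef double v4df __attribute__((vector_size(32)));   /* 4 doubles; one AVX register with -mavx */

v4df div4(v4df x, double g)
{
    v4df gv = {g, g, g, g};   /* broadcast the divisor */
    return x / gv;            /* packed divide: vdivpd with -mavx, or two divpd under plain SSE2 */
}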
sr. member
Activity: 420
Merit: 262
Quote
"The distinction from the Dash masternode scam, is that a masternode is staked only once with DRK (Dash tokens) and earns 50+% ROI per annum forever"

Block reward share for MNs is 1.94 DASH x 576 blocks per day x 365 days per year ≈ 407,866 DASH per year; spread over 4000 MNs that is ≈ 102 DASH per MN, i.e. ~10%.

There was a chart (may or may not still be there) on the Dash website showing up to 50% per annum returns, because I think the rate increases for each year of being staked, or something like that.

Please reply in the original thread (that my prior post was quoted from) and not here. I will delete my reply here and your post shortly, so please copy it to where it should go.
sr. member
Activity: 420
Merit: 262
enforcing worldwide spread is not easy, and perhaps not doable.
They tried doing it with porn in the '90s, file sharing in the 2000s and so on...
and servers kept overheating and getting fried Smiley

As I explained the key distinction upthread, those are free markets because they are decentralized and there is no significant asymmetry of information which makes it otherwise.

It pisses me off when readers waste my expensive time by ignoring what I already wrote twice in this thread. This makes three times. Please readers don't make me teach this again by writing another post which ignores my prior points.


I still don't understand why you're calling Waves a scam only because it did an ICO (like everyone now).
Its devs are legit, real names with real work behind them.
So they thought Charles and kushti were friends who would support them, and they were wrong, apologized and moved on.
Everyone got their asses covered legally ofc..
So if you think all ICOs are scams, you've got lots of work now, not just on Waves bro Smiley

Please clarify. Tnx

1. I already provided the link to the thread two or three times in this thread, which explains that ICOs sold to non-accredited USA investors are ostensibly illegal.

I hate ICOs by now for other reasons:

2. They contribute to the mainstream thinking that crypto-currency is a scam, and thus we will have great difficulty getting CC widely adopted if we don't put a stop to these scams.

3. They funnel capital to a few scammers, capital which could be better used to build our real ecosystems, which are not vaporware and have real decentralized designs, such as Bitcoin and Monero.

4. They prey on the ignorance of n00b speculators, thus can never be a free market.

5. They can never attain adoption because they destroy the Nash equilibrium and decentralization of the ecosystem:

As an example: I can show that Dash is an oligarchy, whether intentional or not, due to the way their paynode scheme works. These systems are designed to work trustlessly, so any hiccups (intentional or not) should be invalidated by the design, not left up to the good or bad intentions of those who are engaged with it.

Did Monero write that fact about the infinite supply in their ANN? Huh If I was an investard in Monero I would feel cheated if it isn't there.

No one can fork Monero without the support of the decentralized miners. The distinction from the Dash masternode scam is that a masternode is staked only once with DRK (Dash tokens) and earns 50+% ROI per annum forever after for the largest holders of Dash tokens, thus further centralizing the coin. That means there is a centralized oligarchy on which the investors are relying for their future expectation of profits, which afaics fulfills the Howey test for an investment security regulated by the Securities Act. A decentralized PoW miner is constantly expending on electricity in a competitive free market. Owning a lot of Monero doesn't give you any leverage as a miner.

New post to better articulate why permissioned ledger, closed entropy systems likely have no value:

The problem with Emunie, as I talked about in the IOTA thread, is that any system that doesn't have permanent coin turnover via mining, removes mining completely, or puts some type of abstraction layer between mining and block reward (as in the case of IOTA), is a permissioned ledger.  People got too caught up in trying to improve on consensus mechanisms and forgot what actually constitutes a decentralized currency in the first place.

When Maxwell said he "proved mathematically that Bitcoin couldn't exist" and then it did exist, it was because he didn't take open entropy systems into account.  He already knew stuff like NXT or Emunie could exist, but nobody actually considered them to be decentralized.  They're distributed but not decentralized.  Basically stocks that come from a central authority and then the shareholders attempt to form a nash equilibrium to...siphon fees from other shareholders in a zero sum game because there is no nash equilibrium to be had by outsiders adopting a closed entropy system in the first place...

Take for example the real world use case of a nash equilibrium in finance. There are many rival nations on earth and they're all competing in currency wars, manipulating, devaluing, etc. They would all be better off with an undisputed unit of account that the others can't tamper with for trade. In order to adopt said unit, it would have to be a permissionless system that each nation has access to, where no one of the group is suspected to have an enormous advantage over the others, otherwise they would all just say no.

This is why gold was utilized at all.  Yea, some territories had more than others, but nobody actually knew what was under the ground at the time.  Everyone just agreed it was scarce, valuable, and nobody really had a monopoly on it.  There are really no circumstances where people on an individual level or nation-state level can come together to form any kind of nash equilibrium in a closed entropy system.  The market is cornered by design, and for value to increase, others need to willingly submit to the equivalent of an extortion scheme.  The only time systems like that have value at all is when governments use coercion to force them onto people.

6. Because they are not decentralized and rely on expectation of profits based on the performance of a core group, ICOs turn what should be a competition for creating the best technology into a fist fucking fest of ad hominem and political games:

Let's psychoanalyze those who want to troll me with a thread like this. Actually I have no censorship-motivated objection to making a thread about me (I wish so much that it were possible to do something great without attaining any personal fame); it just feels really stupid, because I (the idealist in me) think the technology is more important than the person, which is one of the main reasons I hate vaporware ICOs.

This thread serves mainly to deflect attention away from Dash's instamine scam.

+1 for conscious reason.

The subconscious reason this thread exists is the psychological phenomenon that it is better to destroy everyone, than to fail alone.

"I dropped my ice cream in the mud, so now I am throwing mud on your ice cream so we are the same, because God hates us equally".

This is what socialism built. Equality is prosperity, because fairness is the uniformity of nature's Gaussian distribution. Equality is a human right! Didn't you know that!

They would rather waste the time of important coders, whose time would be better spent coding a solution for humanity, just to soothe their jealousy and their inability to accept their own mistakes.

7. ICOs have less liquidity because they are not widely distributed and due to #5:

you can read my observations here.

Interesting post.

The salient quote is of course:

Why litecoin? Liquidity. These guys own 5 and 6 digits amount of BTC. They need massive liquidity to increase their holdings by any significant degree. And as such litecoin has been a blessing. Will history repeat itself?

I've had that in my mind for a loooong time. Liquidity is absolutely necessary for the design, marketing, and distribution of crypto-currency, if you want to succeed.
legendary
Activity: 2968
Merit: 1198
The thing with superoptimizers is that they might work differently on the same instruction set but different CPUs.

Superoptimizers are totally impractical in most cases. They are useful for small bits of frequently reused code (and yes, in some cases you probably want different sequences for different hardware variants). The point of bringing it up is simply to say that general purpose compilers don't and can't do that.
legendary
Activity: 1708
Merit: 1049
The thing with superoptimizers is that they might work differently on the same instruction set but different CPUs.

Say one CPU has 32k L1 cache and another has 64k L1.

You can time all possible combinations on a 32k processor and it might be a totally different combo from what a 64k L1 processor needs. The unrolling and inlining may be totally different, as can be the speed of execution of certain instructions where a CPU might be weaker.

So what you need is to superoptimize all profiles for all CPUs, store these CPU-specific code paths and then, at runtime, perform a cpuid check and execute the right instruction path for that cpuid.
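
For reference, GCC already exposes a mechanism for exactly that kind of cpuid dispatch: function multi-versioning. A rough sketch (function name and values made up for illustration; this covers only the dispatch, not the per-CPU superoptimization):

Code:
#include <stddef.h>
#include <stdio.h>

/* GCC 6+ on x86: one clone is compiled per listed target and a cpuid-based
 * resolver picks the best clone when the program starts. */
__attribute__((target_clones("avx2", "sse2", "default")))
double sum(const double *x, size_t n)
{
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc += x[i];               /* the avx2 clone may use 256-bit lanes */
    return acc;
}

int main(void)
{
    double v[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%.1f\n", sum(v, 8));   /* prints 36.0 whichever clone is selected */
    return 0;
}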

As for the

Quote
On the merits of the optimizations, you're still looking at one particular program, and not considering factors such as compile time, or how many programs those sorts of optimizations would help, how much, and how often.

Let me just add that if you go over to John the Ripper's site, it has support for dozens of ciphers. People over there are sitting and manually optimizing these one by one, because the compiler optimizations suck. The C code is typically much slower. This is not "my" problem; it's a broader problem. The fact that very simple code can't be optimized just illustrates the point. You have to walk before you can run. How will you optimize (unprunable) complex code if you can't optimize a few lines of code? You won't. It's that simple really.
legendary
Activity: 2968
Merit: 1198
Quote
The optimizations added later, were made possible by more complex code that could be simplified (=unnecessary complexity getting scaled down) plus the increase of usable ram.

Yes but that was not part of the original promise (your words: "When C was first created, it promised ..."). The original promise was something close to the hardware (i.e. C language operations mapping more or less directly to hardware instructions).

On the merits of the optimizations, you're still looking at one particular program, and not considering factors such as compile time, or how many programs those sorts of optimizations would help, how much, and how often.

If you look at any credible benchmark suite, for compilers or otherwise, it always consists of a large number of different programs, the test results of which are combined. In the case of compiler benchmarks, compile time, memory usage, etc. are among the results considered.

Anyway feel free to rant. As you say it is easy to whine when you are expecting someone else to build the compiler. And that is my point too.

Also, if you want optimal code (i.e. optimal use of hardware instructions) you do need a superoptimizer, even if you have profiling information. The original superoptimizers only worked on simple straight-line code where profiling is irrelevant. They may be slightly more flexible now, I'm not sure. Anyway, profiling is not the issue there.

I'll not reply again unless this gets more interesting, it is repetitive now.
legendary
Activity: 1708
Merit: 1049
If you look at the basic C operations, they all pretty much correspond to a single CPU instruction (or a very small number of them) from the 70s. As TPTB said, it was pretty much intended to be a thin, somewhat higher-level abstraction that was still close to the hardware. Original C wasn't even that portable, in the sense that you didn't have things like fixed-size integer types and such. To get something close to that today you have to consider intrinsics for new instructions (that didn't exist in the 70s) as part of the language.

The original design of C never included all these highly aggressive optimizations that compilers try to do today; that was all added later. Back in the day, optimizers were largely confined to the realm of FORTRAN. They succeed in some cases for C of course, but it's a bit of a square peg in a round hole.

The optimizations added later were made possible by more complex code that could be simplified (= unnecessary complexity getting scaled down), plus the increase in usable RAM.

For example, if an unrolled loop was faster than a rolled one, or inlining worked better but cost you more memory (which you now had), it was worth it. But only up to an extent (again), because now we have the performance-wall limits of L1 and L2 cache sizes, which are ...of 1970s main RAM sizes Tongue

But in terms of instruction set utilization it's a clusterfuck. In a way we don't even need superoptimizers when we have PGOs for things that have been sampled to run in a predetermined way. You allow the compiler to see PRECISELY what the program does. No "if's" or "hows". It KNOWS what the program does. It KNOWS the logic and flow. It saw it running.

You (as a compiler) are allowed to see the executable doing

b=b/g
bb=bb/g
bbb=bbb/g
bbbb=bbbb/g

...and you now know that you can pack these 4 into two SIMDs. You didn't even have to see it running; you knew that these were different variables with different outcomes. You knew they were aligned to the correct size. But even if you had any doubts, you saw them running anyway with -fprofile-generate. And still you are not packing these fuckers together after -fprofile-use. And that's the point I'm furious about.

It's just a few simple "if then else" in the heuristics. IF you see instructions that can be packed THEN FUCKING PACK THEM instead of issuing serial scalar / SISD instructions. With AVX the loss is not 2-4x, but 4-8x. It's insane.
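
For reference, this is roughly what that packing looks like when done by hand with SSE2 intrinsics (a sketch covering only the four divisions above, not the full benchmark):

Code:
#include <emmintrin.h>   /* SSE2 intrinsics */

/* Two divpd instructions replace four scalar divsd. */
void div4_by_g(double *b, double *bb, double *bbb, double *bbbb, double g)
{
    __m128d gv = _mm_set1_pd(g);
    __m128d lo = _mm_set_pd(*bb, *b);        /* low lane = b,   high lane = bb   */
    __m128d hi = _mm_set_pd(*bbbb, *bbb);    /* low lane = bbb, high lane = bbbb */
    lo = _mm_div_pd(lo, gv);                 /* one divpd does two divisions */
    hi = _mm_div_pd(hi, gv);
    *b    = _mm_cvtsd_f64(lo);
    *bb   = _mm_cvtsd_f64(_mm_unpackhi_pd(lo, lo));
    *bbb  = _mm_cvtsd_f64(hi);
    *bbbb = _mm_cvtsd_f64(_mm_unpackhi_pd(hi, hi));
}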

You don't need to know much about compilers to understand that their optimizations suck. You just see the epic fail that their PGO is and you know how bad their heuristics are, when they can't even tell what can be optimized despite knowing full well the flow / logic / speed / bottlenecks, etc. of the program.

I'm kind of repeating myself over and over for emphasis, but we need to realize that at the point where the profiler knows what the program does, there are no excuses left of the type "but, but, but I don't know if that optimization is safe so I can't risk it". No, now you know. With 100% certainty. (Not that packing 2 into 1 was risky.)
legendary
Activity: 2968
Merit: 1198
Yes, the streaming argument is valid, but the processor is capable of more than that.

Compilers are not superoptimizers. They can't and don't promise to do everything a processor is capable of.

Basically that brings us back to the starting point... When C was first created, it promised to be very fast and suitable for creating OSes, etc. Meaning, its compiler wasn't leaving much performance on the table. With kHz of speed and a few kbytes of memory there was no room for inefficiency.

Granted, the instruction set has expanded greatly since the 70s with FPUs (x87), MMX, SSE(x), AVX(x), AES, etc., but that was the promise: to keep the result close to no overhead (compared to asm). That's what C promised to be.

But that has gone out the window as the compilers failed to match the progress and expansion of the CPU's arsenal of tools. We are 15 years after SSE2 and we are still discussing why the hell it isn't using SSE2 in a packed manner. This isn't normal by my standards.

If you look at the basic C operations, they all pretty much correspond to a single CPU instruction (or a very small number of them) from the 70s. As TPTB said, it was pretty much intended to be a thin, somewhat higher-level abstraction that was still close to the hardware. Original C wasn't even that portable, in the sense that you didn't have things like fixed-size integer types and such. To get something close to that today you have to consider intrinsics for new instructions (that didn't exist in the 70s) as part of the language.

The original design of C never included all these highly aggressive optimizations that compilers try to do today; that was all added later. Back in the day, optimizers were largely confined to the realm of FORTRAN. They succeed in some cases for C of course, but it's a bit of a square peg in a round hole.
legendary
Activity: 1708
Merit: 1049
Yes, the streaming argument is valid, but the processor is capable of more than that.

Compilers are not superoptimizers. They can't and don't promise to do everything a processor is capable of.

Basically that brings us back to the starting point... When C was first created, it promised to be very fast and suitable for creating OSes, etc. Meaning, its compiler wasn't leaving much performance on the table. With kHz of speed and a few kbytes of memory there was no room for inefficiency.

Granted, the instruction set has expanded greatly since the 70s with FPUs (x87), MMX, SSE(x), AVX(x), AES, etc., but that was the promise: to keep the result close to no overhead (compared to asm). That's what C promised to be.

But that has gone out the window as the compilers failed to match the progress and expansion of the CPU's arsenal of tools. We are 15 years after SSE2 and we are still discussing why the hell it isn't using SSE2 in a packed manner. This isn't normal by my standards.

Quote
Maybe, maybe not. It just apparently hasn't been a priority in the development of GCC. Have you tried icc to see if it does better for example? (I don't know the answer.)

Yes, it's somewhat better, but not what I expected. That was in version 12 IIRC. Now it's at 15 or 16, again IIRC. I've actually used clang, icc, AMD Open64 - they don't have any serious differences. In some apps or cracking stuff, they might. I've seen icc excel in some crypto stuff.

Quote
It is quite possible that an effort to improve optimization in GCC for the purposes of, for example, cryptographic algorithms would bear fruit. Whether that would be accepted into the compiler given its overall tradeoffs I don't know.

We need a better GCC in general. But that's easy to ask for when someone else has to code it.
legendary
Activity: 2968
Merit: 1198
Yes, the streaming argument is valid, but the processor is capable of more than that.

Compilers are not superoptimizers. They can't and don't promise to do everything a processor is capable of.

Quote
I guess I'm asking too much when I want the compiler to group similar but separate/non-linear (= safe) operations into a SIMD instruction.

Maybe, maybe not. It just apparently hasn't been a priority in the development of GCC. Have you tried icc to see if it does better for example? (I don't know the answer.)

It is quite possible that an effort to improve optimization in GCC for the purposes of, for example, cryptographic algorithms would bear fruit. Whether that would be accepted into the compiler given its overall tradeoffs I don't know.
legendary
Activity: 1708
Merit: 1049
Yes, the streaming argument is valid, but the processor is capable of more than that.

I guess I'm asking too much when I want the compiler to group similar but separate/non-linear (= safe) operations into a SIMD instruction.

The comparison is not against some compiler that exists in fantasy land, but with real-life asm improvements.

How many cryptographic functions, mining software, etc., aren't tweaked for much greater performance? Why? Because the compilers don't do what they have to do.

For example, what's the greatest speed of an sha256 in C and what's the equivalent in C+asm? I'd guess at least twice as fast for the latter.
legendary
Activity: 2968
Merit: 1198
Yeah, the golden brackets of SIMDs... compilers love those, don't they? But they are rarely used if one isn't using arrays.

You can get the same results using pointers. Remember, SSE is Streaming SIMD Extensions. It is intended for processing large data sets. As it turns out the instructions can be used in scalar mode in place of regular FP instructions, and that appears advantageous, but you should never expect a compiler to produce optimal code. It is impossible.

The only truly meaningful evaluation of a compiler is against another compiler, or if you are a compiler developer, comparing the overall effects of having some optimization (across a large input set) against not having it. Comparing against some fantasy optimal compiler is dumb.

legendary
Activity: 1708
Merit: 1049
Yeah, the golden brackets of SIMDs... compilers love those, don't they? But they are rarely used if one isn't using arrays.

If my loop was

for 1 to 500mn loops do

b=b/g
bb=bb/g
bbb=bbb/g
bbbb=bbbb/g

...it wouldn't use any packed instructions.

Btw, if I made it 100mn loops x 4 math operations, as the original spec intended (I did 4 ops x 5 times the maths in every loop to compensate for fast finishing speeds - but apparently I won't be using them now with values like 2.432094328043280942, as it goes up to 20+ secs instead of 2), then I'd have to manually unroll the loop, lower the loop count and create arrays. Why? Because without those golden brackets the compiler is useless. You have to write, not as you want to write, but as the compiler wants you to write.
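
To spell out what that array rewrite looks like, a sketch with illustrative loop counts and values:

Code:
#include <stdio.h>

/* The "golden brackets" version: fold the four scalars into one array so the
 * compiler has a chance to emit packed divisions. Numbers are illustrative. */
int main(void)
{
    double b[4] = {1e9, 2e9, 3e9, 4e9};
    const double g = 1.000000001;

    for (long i = 0; i < 100000000L; i++)   /* 100mn outer loops */
        for (int j = 0; j < 4; j++)         /* gcc -O3 -mavx can fold this into one vdivpd */
            b[j] /= g;

    printf("%f %f %f %f\n", b[0], b[1], b[2], b[3]);
    return 0;
}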