Topic: [neㄘcash, ᨇcash, net⚷eys, or viᖚes?] Name AnonyMint's vapor coin? - page 13

sr. member
Activity: 420
Merit: 262
what is about to happen to Javascript

What is about to happen to JavaScript is very likely that it will continue to become even more and more ubiquitous.

That will happen first because it is headed towards a peak (critical mass about now, with everyone racing to switch from, for example, Silverlight to Javascript). But that peak will be quickly followed by a disruption, because Javascript is being deployed in use cases where it is not fit and this will result in disappointment, which is, for example, what happened with C and C++.

C++ has declined more rapidly than C. It seems there is perhaps no use for which it is fit!

Good point and application of my theory about designed fitness. Thanks. Yeah, every time I imagine going back to C++ to get some low-level control married with generics and some fugly excuse for a first-class function, I decide I'd rather not code, lol. I think this extreme resistance to entering clusterfucks comes with past experience of them. Some crypto-currencies have decided they are willing to be sacrificed at the altar for that marriage of features. Perhaps it is a pragmatic decision, since crypto in a high-level language kind of sucks.

I don't know if anyone could pay me enough to work on C++ code again, especially the complexity that ostensibly has been layered on top since I last used it in the late 1990s (which I haven't bothered to learn).  (btw another reason I wasn't excited to work on Monero)

Whereas, for the moment there is no alternative to C when you want portable lowest-level control.

I don't believe I have ever discussed anything about JAMBOX with you.

Okay I thought you meant programming language properties specifically. In terms of the overall platform, no.

I tried to think about using Javascript for my current plan, but there are just so many cases of tsuris and non-unification lurking. Why start another multi-year journey on a monkey-patching expedient that is a fundamental handicap?

However, there are many details remaining for me to analyze so it is possible I might conclude that not all objectives (e.g. JIT fast start compilation) can be achieved with a language alternative. Stay tuned.
legendary
Activity: 2968
Merit: 1198
what is about to happen to Javascript

What is about to happen to JavaScript is very likely that it will continue to become even more and more ubiquitous.

That will happen first because it is headed towards a peak (critical mass about now, with everyone racing to switch from, for example, Silverlight to Javascript). But that peak will be quickly followed by a disruption, because Javascript is being deployed in use cases where it is not fit and this will result in disappointment, which is, for example, what happened with C and C++.

C++ has declined more rapidly than C. It seems there is perhaps no use for which it is fit!

Quote
But I am not yet ready to reveal my entire plan. It will be in the JAMBOX crowdfund document whenever we decide to crowdfund. I am lately thinking of keeping it hidden a bit longer, because of potential copycats such as Synereo. I have enough funding to continue developing for now. No need to rush the crowdfund, especially if my health is improving.

You can also consider not revealing your entire plan and still crowdfunding.

Quote
I don't believe I have ever discussed anything about JAMBOX with you.

Okay I thought you meant programming language properties specifically. In terms of the overall platform, no.

sr. member
Activity: 420
Merit: 262
what is about to happen to Javascript

What is about to happen to JavaScript is very likely that it will continue to become even more and more ubiquitous.

That will happen first because it is headed towards a peak (critical mass about now, with everyone racing to switch from, for example, Silverlight to Javascript, Java to Node.js, C to ASM.js, etc.). But that peak will be quickly followed by a disruption, because Javascript is being deployed in use cases where it is not the best fit (merely because it is expedient and the only choice) and this will result in disappointment, which is, for example, what happened with C and C++.

A better fit programming language and App platform will arise to challenge the Internet browser. I know because I am the one who is making it.

No, Javascript can't disrupt the non-browser Apps entirely, because it is not the best fit for applications

I agree it won't entirely, because I don't see that it brings sufficient advantages, and it has obvious disadvantages, so it will end up being better in some cases, worse in others. The other languages will continue to be used alongside it in a fragmented environment.

You don't seem to grasp yet how unifying the mobile device is, coupled with how unfit the current options of HTML5 apps or App store apps are. Appearances can be deceiving. Dig down into what users are actually doing and not doing.

But I am not yet ready to reveal my entire plan. It will be in the JAMBOX crowdfund document whenever we decide to crowdfund. I am lately thinking of keeping it hidden a bit longer, because of potential copycats such as Synereo. I have enough funding to continue developing for now. No need to rush the crowdfund, especially if my health is improving.

I see it. Stay tuned.

If it is what we have discussed before, then I see very, very high costs along with the benefits.

I don't believe I have ever discussed anything about JAMBOX with you.

If it is something new, then I look forward to seeing what you have invented. Sounds interesting.

Just remember that the mobile device is eating (almost) everything.
sr. member
Activity: 420
Merit: 262
C was adopted for a use case for which it is not the best suited, because there was nothing better around and thus it was the best suited at that time. Language technology was very immature at that time. There was more out there as research which hadn't yet been absorbed into mainstream languages and experience.

On this I agree. That is, at a time long after it was designed, for purposes very different from those for which it was designed, it became the best (i.e. least bad; nothing better) available alternative.

C was designed to be exactly what it was used for, one step above the metal with portability abstraction.

That business applications didn't know they needed better (because better wasn't available) is irrelevant to my point, which is that a language is only best for what it was designed for. C died as a business application language very quickly because it was not best for that use case.

C was employed for a use case where it was not best because it had sort of a temporary monopoly due to circumstances.

Please make sure you understand that key distinction, because I think it will be crucial for understanding what is about to happen to Javascript.

We are in a different era now where we have a lot of experience with the different programming paradigms, and programming language design is now an art of distilling all the existing technology and experience in the field into an ideal design.

On this I tend to disagree. Witness Javascript.

It is very difficult, if not impossible, to centrally plan what the marketplace will, in its organic and evolutionary wisdom of how to weigh between conflicting factors, consider "ideal" at any particular point in time.

Emphatically disagree. Everything the market has done has been entirely rational. Javascript is only being widely adopted because the Internet browser has an undeserved monopoly. Breaking that monopoly is going to expose Javascript to the realization that Javascript is not the best language for the Apps people are using it for.

If Rust adds first-class asynchronous programming, and someone disrupts the Internet browser, this could be a very rapid waterfall collapse for Javascript.

Clearly Sun Microsystems was trying to challenge Microsoft in server computing, and so Java was designed to co-opt Microsoft Windows on the general client so that Microsoft couldn't dictate its servers to everyone. What Sun didn't anticipate was Linus Torvalds.

This is chronologically and strategically wrong in a lot of ways, but that is off topic so I'll not elaborate now.

I was around then and I remember the first magazine articles saying the VM could be implemented in hardware and thus it was going to be a new kind of computer or device. Again this was all targeted at Microsoft's growing monopoly which was threatening to cannibalize Sun's market from the bottom up. So Sun decided to attack at the bottom, but changed strategy mid-stream as some of the expectations didn't materialize.

C# was designed to be a Java to run on Microsoft .Net and thus it died with .Net.

C# is not dead, it is still quite popular for Windows development (and some cross-platform development).

One good language to succeed the current crop, will put the final nail in the coffin.

Heh, Javascript.

No, Javascript can't disrupt the non-browser Apps entirely, because it is not the best fit for applications. It is a scripting language. And transcoder hell is not a solution, but rather stopgap monkey patching. It has all sorts of cascaded tsuris that will explode over time, especially when working in decentralized development.

It does seem more likely for the time being that the sort of dominance that was achieved by C and Java in their respective peaks won't be repeated any time soon.

I disagree. The Internet browser and the W3C monopoly (oligarchy) have been standing in the way of a better general-purpose programming language.

I understand this brought together huge economies-of-scale, but unfortunately they've been squandered on an ill-fitting design that funnels everything through the browser. Then we have Apple and Android trying to funnel everything through App stores. The choke points are the problem.

We'll probably continue to see more fragmentation of multiple languages being used, continuing until there is a paradigm shift that brings significant productivity gains without high costs. Examples of this were C's low-cost of adding abstraction over asm or Java's low-cost of adding runtime safety over C.

I don't see that anywhere today. There are identifiable advantages to be had over the current market frontier leaders, but they all come with high costs.

I see it. Stay tuned.
legendary
Activity: 2968
Merit: 1198
C was adopted for a use case for which it is not the best suited, because there was nothing better around and thus it was the best suited at that time. Language technology was very immature at that time. There was more out there as research which hadn't yet been absorbed into mainstream languages and experience.

On this I agree. That is, at a time long after it was designed, for purposes very different from those for which it was designed, it became the best (i.e. least bad; nothing better) available alternative.

C# was designed to be a Java to run on Microsoft .Net and thus it died with .Net.

C# is not dead, it is still quite popular for Windows development (and some cross-platform development).

One good language to succeed the current crop, will put the final nail in the coffin.

Heh, JavaScript.

It does seem more likely for the time being that the sort of dominance that was achieved by C and Java in their respective peaks won't be repeated any time soon. We'll probably continue to see more fragmentation of multiple languages being used, continuing until there is a paradigm shift that brings significant productivity gains without high costs. Examples of this were C's low-cost of adding abstraction over asm or Java's low-cost of adding runtime safety (and the OOP fad) over C (or C++, especially in its earlier forms without smart pointers).

I don't see that anywhere today. There are identifiable advantages to be had over the current market frontier leaders, but they all come with high costs.
sr. member
Activity: 420
Merit: 262
it was originally designed to be

That often does not matter

I sort of disagree with you, at least in terms of the exemplary examples of greatest success, if not entirely.

C was definitely designed for low-level operating systems programming, and it was the most popular language ever, bar none, because it provided just the necessary abstraction of assembly needed for portability and no more. For example, it didn't try to make pointers safe or add other structure.

C was most certainly not designed as a language for all sorts of business applications, Windows desktop applications (of course such a thing did not exist at the time it was designed, but conceptually it was not intended for that), etc. as was most of its usage when it became the most popular language.

The reason it became popular is because of what I wrote before, bolded and underlined above for emphasis. The reason C++ became popular is because it promised those benefits plus OOP.

I know because I coded WordUp in assembly then switched to C by version 3.0 and was amazed at the increase in my productivity. And the delayed switch away from assembly afair revolved around which good compilers were available for the Atari ST. Then for CoolPage I chose C++ for precisely that underlined reason and the integration with MSVC++ (Visual Studio) Model View framework for a Windows GUI application.

C was adopted for a use case for which it is not the best suited, because there was nothing better around and thus it was the best suited at that time. Language technology was very immature at that time. There was more out there as research which hadn't yet been absorbed into mainstream languages and experience.

We are in a different era now where we have a lot of experience with the different programming paradigms, and programming language design is now an art of distilling all the existing technology and experience in the field into an ideal design.

Haskell

Has never really been widely used for anything (at least not yet) so irrelevant to my point.

Apparently you did not read my implied point that the mass market is not the only possible target market a language is designed for. As I wrote before, Haskell was designed for an academic, math-geek market. It is very popular and extremely successful within that tiny market.

It will be very difficult for you to find an Ivy League computer science graduate who hasn't learned Haskell and who doesn't wish to use it on some projects. That is what is called mindshare. Any language hoping to be the future of multi-paradigm has to capture some of that mindshare. Which Scala attempted to do, but it unfortunately bit down too hard on that anti-pattern, subclassing.

But the problem is web pages changed to Apps and Javascript was ill designed for this use case.

And yet it is widely used for this (and growing rapidly for all sorts of uses), supporting my point.

And crashing my browser, and the transcoding hell of a non-unified type system which is headed to a clusterfuck. I have recognized how the Internet browser "App" will die. I will finally get my "I told you so" (in a positive way) on that totalitarian Ian Hickson and that rigor mortis of totalitarianism at the W3C.

Java was designed write once, run everywhere

Somewhat. It was designed for write-once run everywhere on the web (originally interactive TV and maybe PDAs since the web didn't quite exist yet, but the same principle). None of the business software use case where it became dominant was remotely part of the original purpose (nor do these business software deployments commonly even make use of code that is portable and "runs everywhere").

Clearly Sun Microsystems was trying to disrupt Microsoft's challenge in server computing, and so Java was designed to co-opt Microsoft Windows on the general client so that Microsoft couldn't dictate its servers to everyone. What Sun didn't anticipate was Linus Torvalds and Tim Berners-Lee.

C# was designed to be a Java to run on Microsoft .Net and thus it died with .Net.

C# is not dead, it is still quite popular for Windows development (and some cross-platform development).

Windows is a dead man walking. Cross-platform, open source .Net (forgot the name) isn't gaining momentum. One good language to succeed the current crop, will put the final nail in the coffin.

But as you say it was designed to copy Java after Java was already successful so in that sense it is a counterexample, but a "cheat" (if you copy something that has already organically found a strong niche, chances are your copy will serve a similar niche).

It was an attempt to defeat Sun's strategy of co-opting the client. And it helped to do that, but the more salient disruption came from the Internet browser, Linux, and LAMP.

So I think it is definitely relevant what a language was designed for.

Sometimes, but often not. It is of course relevant, it just doesn't mean that what it will end up being used for (if anything) is the same as what it was designed to be used for.

I think it is much more deterministic as explained above. The programming language designer really needs to understand his target market and also anticipate changes in the market from competing technologies and paradigm shifts.

Further examples:

Python: designed for scripting, but is now the most used (I think) language for scientific and math computing (replacing fortran). Of course it is used for other things too, but most are also far removed from scripting.

The challenge we've had since graduating from C has been to find a good language for expressing higher-level semantics. As you lamented upthread, all attempts have improved some aspects while worsening others. Python's main goal was to be very in tune with readability of the code and the semantics the programmer wants to express.

Thus it became a quick way to code with a high degree of expression. Even Eric Raymond raves about that aspect of Python.

But the downside of Python is the corner cases because afaik the type system is not sound and unified.

So it works for small programs but for large scale work it can become a clusterfuck of corner cases that are difficult to optimize and work out.

So I am arguing that Python was not designed solely for scripting but also for programmer ease-of-expression and readability, and this has a natural application for some simpler applications, i.e. programs that aren't just scripts. This seems to have been a conscious design consideration.

BASIC: Designed for teaching, but became widely used for business software on minicomputers, and then very widely used for all sorts of things on PC.

Because that is what programmers were taught. So that is what they knew how to use. That doesn't apply anymore. My first programming was in BASIC on an Apple II because that was all that was available to me in 1983. I messed around some years before that with a TRS-80 but not programming (because I didn't know anyone who owned one and I could only play around with it for a few minutes at a time in the Radio Shack store). My first exposure to programming was reading the Radio Shack book on microprocessor design and programming when I was 13 in 1978 (afair due to being relegated to my bed for some days due to a high ankle sprain from football). Perhaps that is why I can program in my head, because I had to program only in my head from 1978 to 1983.

I must mea culpa that around that age my mother caught me (as we were leaving the mall) having stolen about $300 of electronic goods from inside the locked glass display case of Radio Shack. I had reached in when the clerk looked the other direction. (Later in teenage hood I became an employee of Radio Shack but I didn't steal). I had become quite the magician and I was eager to have virtually everything in Radio Shack to play with or take apart for parts (for my various projects and experiments in my room). She made me return everything to the store, but couldn't afford to buy any of it for me apparently. I think we were on food stamps and we were living in poverty stricken neighborhoods for example in Baton Rouge where my sister and I were the only non-negro kids in the entire elementary school. Then my mom got angry when my sister had a black boyfriend in high school.  Roll Eyes

COBOL and FORTRAN: counterexamples; used as designed.

Agreed.
legendary
Activity: 2968
Merit: 1198
it was originally designed to be

That often does not matter

I sort of disagree with you, at least in terms of the exemplary examples of greatest success, if not entirely.

C was definitely designed for low-level operating systems programming, and it was the most popular language ever, bar none, because it provided just the necessary abstraction of assembly needed for portability and no more. For example, it didn't try to make pointers safe or add other structure.

C was most certainly not designed as a language for all sorts of business applications, Windows desktop applications (of course such a thing did not exist at the time it was designed, but conceptually it was not intended for that), etc. as was most of its usage when it became the most popular language.

Quote
Haskell

Has never really been widely used for anything (at least not yet) so irrelevant to my point.

Quote
But the problem is web pages changed to Apps and Javascript was ill designed for this use case.

And yet it is widely used for this (and growing rapidly for all sorts of uses), supporting my point.

Quote
PHP

Good counterexample.

Quote
Java was designed write once, run everywhere

Somewhat. It was designed for write-once run everywhere on the web (originally interactive TV and maybe PDAs since the web didn't quite exist yet, but the same principle). None of the business software use case where it became dominant was remotely part of the original purpose (nor do these business software deployments commonly even make use of code that is portable and "runs everywhere").

Quote
C# was designed to be a Java to run on Microsoft .Net and thus it died with .Net.

C# is not dead, it is still quite popular for Windows development (and some cross-platform development). But as you say it was designed to copy Java after Java was already successful so in that sense it is a counterexample, but a "cheat" (if you copy something that has already organically found a strong niche, chances are your copy will serve a similar niche).

Quote
So I think it is definitely relevant what a language was designed for.

Sometimes, but often not. It is of course relevant, it just doesn't mean that what it will end up being used for (if anything) is the same as what it was designed to be used for.

Further examples:

Python: designed for scripting, but is now the most used (I think) language for scientific and math computing (replacing fortran). Of course it is used for other things too, but most are also far removed from scripting.

BASIC: Designed for teaching, but became widely used for business software on minicomputers, and then very widely used for all sorts of things on PC.

COBOL and FORTRAN: counterexamples; used as designed.

Perl: had a bit of a golden age beyond its scripting purpose, but mostly seems a counterexample, often used as intended

Ruby: Not really sure about this one. Seems to be getting used for quite a few things now, mostly web development of course, but some others. This one is mostly off my radar so I don't really know the history.
sr. member
Activity: 420
Merit: 262
it was originally designed to be

That often does not matter

I sort of disagree with you, at least in terms of the exemplary examples of greatest success, if not entirely.

C was definitely designed for low-level operating systems programming, and it was the most popular language ever, bar none, because it provided just the necessary abstraction of assembly needed for portability and no more. For example, it didn't try to make pointers safe or add other structure.

Haskell was designed to express math in programming tersely without extraneous syntax, and thus the very high priority placed on global type inference, which is why they haven't implemented first-class disjunctions (aka unions), which happens to be the one missing feature that makes Haskell entirely unsuitable for my goals. Haskell forsook practical issues such as the non-determinism of lazy evaluation and its impact on, for example, debugging (e.g. finding a memory leak). Haskell is by far the most successful language for its target use case.

Javascript was designed to be lightweight for the browser, because you don't need a complex type system for such short scripts, and thus you can edit it with a text editor and don't even need a debugger (alert statements worked well enough). Javascript is the most popular language for web page scripts. But the problem is web pages changed into Apps and Javascript was ill-designed for this use case. Transcoding to Javascript is a monkey patch that must die because it is a non-unified type system!

PHP was designed to render LAMP web pages on the server and became the most popular language for this use case, except the problem is server Apps need to scale and PHP can't scale, thus Node.js and Java, etc.

Java was designed for "write once, run everywhere", and it gained great popularity for this use case, but the problem was that Java had some stupid design decisions which made it unsuitable as the language to use everywhere for applications.

C# was designed to be a better Java to run on Microsoft .Net and thus it died with .Net.

C++ was designed to be a marriage of C and OOP and it is dying with the realization that OOP (subclassing) is an anti-pattern and because C++ did not incorporate any elegance from functional programming languages.
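As a hedged aside (this sketch is mine, not anything from the thread): the alternative to subclassing alluded to throughout this thread is roughly what Rust's traits (Haskell-style typeclasses) provide, where behavior is attached to types without any inheritance hierarchy to violate Liskov substitution.

// Sketch: polymorphism via trait interfaces rather than subclass subsumption.
trait Area {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

impl Area for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}

impl Area for Square {
    fn area(&self) -> f64 { self.s * self.s }
}

// Generic over anything implementing Area; no base class, no overriding.
fn total_area(shapes: &[&dyn Area]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let c = Circle { r: 1.0 };
    let s = Square { s: 2.0 };
    let shapes: [&dyn Area; 2] = [&c, &s];
    println!("{}", total_area(&shapes)); // ~7.14
}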

All the other examples of languages had muddled use-cases for which they were designed and thus were not very popular.

So I think it is definitely relevant what a language was designed for.

legendary
Activity: 2968
Merit: 1198
it was originally designed to be

That often does not matter.

Java was designed to display games and animated advertisements on web pages. It hardly ever did that and instead became the dominant language for enterprise software, a huge pivot.

Such attempts at central planning rarely succeed. (Most languages designed to be ____ end up being used for nothing at all, of course.)

Build something interesting and let it free to find its place.

Quote
In other words, the package manager (module system) needs to be DVCS aware.

Nothing wrong with that, but most people just reference named releases.
sr. member
Activity: 420
Merit: 262
Most development now is being done in higher level languages which are exactly what you just proposed: a small (maybe) speed concession for greater expressiveness and friendliness (though that word is quite vague) and in some sense (though not in others) simpler than C.  The odd exception here is C++. Not surprisingly C++ is very much on the way out.

C++ is a painful, fugly marriage of high-level and low-level, which loses the elegance of high-level semantics. I think it hasn't died because no other language could combine C's low-level control with generics (templates). Perhaps something like Rust (perhaps with my ideas for extensions to Rust) will be the death blow to C++ and also Java/Scala/C# (probably Python, Ruby, PHP, and Node.js as well). And also perhaps Javascript...
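As a hedged illustration of that "low-level control married with generics" (my sketch, with Rust as the comparison point, not anything from this post): Rust monomorphizes a generic function per concrete type, so the code below compiles down to plain loops with no runtime dispatch, which is the C++-template benefit without the subclassing baggage.

// Sketch: generics plus low-level control, monomorphized per concrete type.
use std::ops::Add;

fn sum<T: Copy + Default + Add<Output = T>>(xs: &[T]) -> T {
    let mut acc = T::default();
    for &x in xs {
        acc = acc + x; // no boxing, no virtual dispatch
    }
    acc
}

fn main() {
    let ints = [1i64, 2, 3];
    let floats = [0.5f32, 1.5];
    println!("{} {}", sum(&ints), sum(&floats)); // prints: 6 2
}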

The language that is eating (almost) everything is JavaScript

Which appears to me to be a great tragedy of inertia (it was originally designed for short inline scripts for simple DHTML on web pages ... then Google Maps and webmail clients arrived...). I am in the process of attempting to prove this.

I believe there is no reason we can't marry high-level, low-level, static typing, and fast JIT compilation and still be able to write programs with a text editor without tooling (for as long as the tooling lives in the debugger in the "browser").

This low degrees-of-freedom crap of being forced to fight with bloated IDEs such as Eclipse needs to be reverted.

This dilemma of needing to place all the modules of your project in the same Git repository needs to be replaced with a good package manager which knows how to build from specific changesets in orthogonal repositories, where the relevant changeset hashes from the referenced module are accumulated in the referencing module so that DVCS merging remains sane. In other words, the package manager (module system) needs to be DVCS aware.
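For what it's worth, a hedged sketch of that kind of DVCS-aware dependency pinning, using a Cargo-style manifest purely as an illustration (the repository URLs, module names, and hashes here are hypothetical):

# Hypothetical manifest: each module lives in its own repository and the
# referencing module records the exact changeset hash it builds against,
# so builds are reproducible and DVCS merges stay sane.
[dependencies]
wallet_core = { git = "https://example.org/wallet_core.git", rev = "3f2a9c1" }
net_layer   = { git = "https://example.org/net_layer.git",   rev = "b77d04e" }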
legendary
Activity: 2968
Merit: 1198
@AlexGR I'm still puzzled at what the hell you are going off about. It seems almost entirely strawman to me.

Very few new applications are being written in C afaik (and I do have somewhat of an idea). It is only being done by a few fetishists such as jl777.

Most development now is being done in higher level languages which are exactly what you just proposed: a small (maybe) speed concession for greater expressiveness and friendliness (though that word is quite vague) and in some sense (though not in others) simpler than C.  The odd exception here is C++. Not surprisingly C++ is very much on the way out.

The language that is eating (almost) everything is JavaScript, so if you want to comment on the current state of languages I would start there, not with something that is relegated to being largely a legacy tool (other than low level). Of course other languages are being used too, so they're also worthy of comment, but ignoring JavaScript and fixating on C is insane.
legendary
Activity: 1708
Merit: 1049
AlexGR, it is possible that what you were trying to say is that C is too low-level to be able to reduce higher-level-semantics-driven bloat and that C doesn't produce the most optimal assembly code, so we might as well use assembly for low-level work and a higher-level language otherwise.

In that case, I would still disagree that we should use assembly in every case where we need lower-level expression instead of a low-level language that is higher-level than assembly. Every situation is different and we should use the semantics that best fit the scenario. I would agree with the point of using higher-level languages for scenarios that require higher-level semantics as the main priority.

I would also agree that the higher-level languages we have today do lead to bloat because they are all lacking what I have explained upthread. That is why I am and have been for the past 6 years contemplating designing my own programming language. I also would like to unify low-level and higher-level capabilities in the same language. Rust has advanced the art, but is not entirely what I want.

What I'm trying to say is something to that effect: We have to differentiate between
a) C as a language in terms of what we write as source (the input so to speak)
and
b) the efficiency of binaries that come out from the compiler.

The (b) component changes throughout time and is not constant (I'm talking about the ratio of performance efficiency versus the hardware available at each given moment).

When C was created back in the 70's, every byte and every clock cycle were extremely important. If you wanted to write an operating system, you wouldn't do that with a new language if it was like n times slower (in binary execution) than writing it in assembly. So the implementation, in order to stand on merit, *HAD* to compile efficient code. It was a requirement because inefficiency was simply not an option. 1970 processing power was minimal. You couldn't waste that.

Fast forward 45 years later. We have similar syntax and stuff in C, but we are now leaving a lot of performance on the table. We are practically unable to produce speed-critical programs (binaries) from C alone. A hello world program might be of similar speed to a 1980s hello world, but that's not the issue anymore... Processors now have multiple cores, hyperthreading, multiple math coprocessors, multiple SIMD units, etc. Are we properly taking advantage of these? No.

Imagine trying to create a new coin with a new hash or hash-variant that is written in C. You can almost be certain that someone else is doing at least 2-3-5x in speed by asm optimizations and you are thus exposed to various economic and mining attack vectors. Why? Because the compilers can't produce efficient vectorized code that properly uses the embedded SIMD instruction sets. Someone has to do it by hand. That's a failure right there.

I'm not really proposing to go assembly all the way. No. That's not an option in our era. What we need is far better compilers but I don't think we are going to get them - at least not from human programmers.

So, I'm actually saying the opposite, that since C (which is dependent upon the compilers in order to be "fast") is now fucked up by the inefficient compilers in terms of properly exploiting modern hardware (and not 3-4 decades old hardware - which it was good at), and is also "abrasive" in terms of user-interfacing / mentality of how it is syntaxed/constructed, perhaps using friendlier, simpler and more functional languages while making speed concessions may not be such a disaster, especially for non-speed critical apps.
sr. member
Activity: 420
Merit: 262
Rust appears to have everything except first-class intersections, and the compilation targets and modularity of compilation targets may be badly conflated into the rest of the compiler (haven't looked, but just wondering why the fuck they didn't leverage LLVM?).

My presumption was incorrect. Apparently Rust outputs to the LLVM:

https://play.rust-lang.org/

fn main() {
    println!("Hello, world!");
}
sr. member
Activity: 420
Merit: 262
AlexGR, it is possible that what you were trying to say is that C is too low-level to be able to reduce higher-level-semantics-driven bloat and that C doesn't produce the most optimal assembly code, so we might as well use assembly for low-level work and a higher-level language otherwise.

In that case, I would still disagree that we should use assembly in every case where we need lower-level expression instead of a low-level language that is higher-level than assembly. Every situation is different and we should use the semantics that best fit the scenario. I would agree with the point of using higher-level languages for scenarios that require higher-level semantics as the main priority.

I would also agree that the higher-level languages we have today do lead to bloat because they are all lacking what I have explained upthread. That is why I am and have been for the past 6 years contemplating designing my own programming language. I also would like to unify low-level and higher-level capabilities in the same language. Rust has advanced the art, but is not entirely what I want.
sr. member
Activity: 420
Merit: 262
but every time I see serious optimizations needed, they all have to go down to asm.

You are again talking half-truths with half-lies.

If one were to write an entire application in assembly, they would miss high-level semantic optimizations.

You erroneously think the only appropriate tool for the job of programming is low-level machine-code semantics. You don't seem to grasp that we build higher-level semantics so we can express (and reason about) what we need to. For example, if I really want to iterate over a List, then I want to say List.map() and not write a few hundred lines of assembly code. The higher-level semantics of how I can employ category theory functors on that List impact my optimization more than recoding in assembly language does. For example, in a non-lazy, inductive language (e.g. where Top is at the top of all types, and not Haskell where Bottom populates all types), I will need to manually perform deforestation when chaining functors which terminate in the middle of the list.
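A minimal sketch of that point, in Rust only because it comes up elsewhere in the thread (my example, not the author's): lazy iterator adapters express the List.map() intent in a few lines and fuse the whole chain into a single pass, i.e. the deforestation happens without hand-written assembly.

// Sketch: chained map/take_while is fused into one pass over the data,
// with no intermediate list materialized.
fn main() {
    let xs = vec![1u32, 2, 3, 4, 5, 6];

    let sum: u32 = xs
        .iter()
        .map(|x| x * x)          // square each element
        .take_while(|&x| x < 20) // terminates in the middle of the list
        .sum();                  // single traversal, no temporary Vec

    assert_eq!(sum, 1 + 4 + 9 + 16);
    println!("{}", sum); // 30
}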

Why? Because C, as a source, is just a text file. The actual power of the language is the compiler. And the compilers we have suck badly - even the corporate ones like Intel's icc - which is better at vectorizing and often a source for ripping off its generated code, but still.

While it is true that for any given higher-level program code, hand coding it in assembly can typically make it a few percent faster and leaner (and in the odd case a significant percentage), this belies the reality that program code is not static. It is being studied by others and improved. So that hand-coded assembly would have to be scrapped and rewritten each time edits are made to the higher-level program code, which is too much tsuris and too much of a waste of expert man-hours in most scenarios. If we didn't code in the higher-level programming language, then we couldn't express and optimize higher-level semantics.

Your comprehension of the art of programming is simplistic and incorrect.

There are still people in this world who can't afford 32 GB of RAM and if we forsake optimization, then a power user such as myself with 200 tabs open on my browser will need 256 GB of RAM. A typical mobile phone only has 1 GB.

I have to reboot my Linux box daily because I only have 16 GB of RAM and the garbage collector of the browser totally freezes my machine.


I have 4 GB on my desktop and 1 GB on my laptop running a Pentium M 2.1 GHz, single core (like >10 years old).

"Amazingly" running 2012 pc linux 32 bit, the 1gb laptop can run stuff like chromium, openoffice, torrents, etc etc, all at the same time and without much problem.
 
My quad-core desktop 4gb/64 bit is so bloated that it's not even funny. And I also run zram to compress ram in order to avoid swapping out to the ssd. In any case, thinking about this anomaly, I used 32bit browser packages on the 64bit desktop... ram was suddenly much better and I was doing the exact same things Roll Eyes

Why are you offended or surprised that double-wide integers consume double the space, and that a newer platform is less optimized? Progress is not free.

A Model T had resource-free air conditioning. I prefer the modern variant, especially here in the tropics.

I am thinking you are just spouting off without actually knowing the technical facts. The reason is ostensibly because, for example, Javascript forces the use of garbage collection, which is an example of an algorithm that is not as accurate and performant as expert, human-designed memory deallocation. And memory allocation is a hard problem, which is ostensibly why Mozilla funded the creation of Rust, with its new statically compiled memory deallocation model to aid the human with compiler-enforced rules.
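To make that concrete, here is a minimal sketch (mine, not from the post) of the compiler-enforced deallocation model being referred to: ownership makes freeing memory a static decision checked at compile time, with no garbage-collection pass at runtime.

// Sketch: deterministic deallocation via ownership, no GC involved.
fn consume(v: Vec<u8>) {
    println!("buffer of {} bytes", v.len());
} // `v` goes out of scope here and its heap memory is freed immediately

fn main() {
    let buf = vec![0u8; 1024];
    consume(buf); // ownership of the buffer moves into `consume`
    // println!("{}", buf.len()); // would not compile: use after move
}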

I doubt their [Mozilla Firefox's] problems lie in the language.

Yes and no. That is why they designed Rust to help them get better optimization and expression over the higher-level semantics of their program.

4) Seriously, wtf are you loading and you need a reboot every day with 16gb? Roll Eyes

YouTube seems to be a culprit. And other Flash videos such as NBA.com. Other times it is a bad script.


Man, haven't you noticed that the capabilities of web pages have increased? You had no DHTML then, thus nearly no Javascript or Flash on the web page.

With firefox I don't have flash, I have ipc container off, I block javascript (and the pages look like half-downloaded shit) and still the bloat is there.

Mozilla Firefox is ostensibly not sufficiently modularly designed to have all bloat associated with supporting rich media and scripting disappear when you disable certain plugins and features.

This is a higher-level semantics problem, which belies your erroneous conceptualization of everything needing to be optimized in assembly code.

Above you argued that bloat doesn't matter and now you argue it does.   Roll Eyes

My argument is that people using C hasn't done wonders for speed due to the existence of bloat and bad practices.

We are immersed in junk software that abuses hardware resources, and we are doing so with software written in languages that are supposedly very efficient and fast. I believe we can allow people to write software that is slower if the languages are much simpler. Microsoft took a step in that direction with VBasic in the 90s... I used it for my first GUI programs... interesting experience. And I'd bet that the programs I made back then are much less bloated than today's junk, even if written in C.

The solution is not to go backwards to lower-level semantics but to go to even higher-level semantic models that C can't express, and thus can't optimize.

Your conceptualization of the cause of the bloat and its solution is incorrect.

Your talent probably lies elsewhere. I don't know why you try to force on yourself something that your mind is not structured to do well.

I don't have the same issue with the structures of basic or pascal, so clearly it's not an issue of talent. More like a c-oriented or c-non-oriented way of thinking.

C is more low-level and less structured than Pascal, which enables it to do more tricks (which can also be unsafe). Also you may prefer verbose words instead of symbols such as curly braces. Also you may prefer that Pascal makes explicit in the syntax what is implicit in the mind of the C programmer. You would probably hate Haskell with a deep disgust. I actually enjoy loading it all in my head and visualizing it as if I am the compiler.

Note I have argued that Haskell's use of spaces instead of parentheses to group function inputs at the use-site means the programmer has to keep the definition-site declaration in his head in order to read the code. Thus I have argued parentheses are superior for readability. There are counterarguments, and especially that parentheses can't group arguments when the function name is used in the infix position.

Note I got a very high grade in both Pascal and Fortran, so it isn't as if a skill in C is mutually exclusive with a skill in those other more structured languages, although I'm not a counterexample to the converse.
legendary
Activity: 1708
Merit: 1049
That is the propaganda but it isn't true. There are many scenarios where optimization is still critical, e.g. block chain data structures, JIT compilation so that web pages (or in my goal Apps) can start immediately upon download of device-independent code, cryptography algorithms such as the C on my github.

It's not propaganda. It's, let's say, the mainstream practice. Yes, there are things which are constantly getting optimized more and more, but every time I see serious optimizations needed, they all have to go down to asm. Why? Because C, as a source, is just a text file. The actual power of the language is the compiler. And the compilers we have suck badly - even the corporate ones like Intel's icc - which is better at vectorizing and often a source for ripping off its generated code, but still.


Man, haven't you noticed that the capabilities of web pages have increased? You had no DHTML then, thus nearly no Javascript or Flash on the web page.

With firefox I don't have flash, I have ipc container off, I block javascript (and the pages look like half-downloaded shit) and still the bloat is there.

You are not an expert programmer, ostensibly never will be, and should not be commenting on the future of programming languages. Period. Sorry. Programming will never be reduced to an art form for people who hate abstractions, details, and complexity.

My problem is with unnecessary complexity. In any case, the problem is not to dumb down programming languages. I explicitly said that you can let the language be tunable to anything the programmer designs but you can also make it work out of the box with concessions in terms of speed.

In a way, even C is acting that way because where it is slow you have to go down to asm. But that's 2 layers of complexity instead of 1 layer of simplicity and 1 of elevated complexity.

Quote
Sorry never! You and Ray Kurzweil are wrong and always will be. Sorry, but frankly. I don't care if you disagree, because you don't have the capacity to understand.

Kurzweil's problem is not his "small" opinions on subject A or B. It's his overall problematic vision of what humanity should be, or, to put it better, what humanity should turn into (transhumanism / human+machine integration). This is fucked up.
sr. member
Activity: 420
Merit: 262
Stuff like integers overflowing, variables not having the precision they need, programs crashing due to idiotic stuff like divisions by zero, having buffers overflow etc etc - things like that are ridiculous and shouldn't even exist.

Impossible, unless you want to forsake performance and degrees-of-freedom. There is a tension, because infinite capability requires 0 performance and 0 degrees-of-freedom.

Sorry, some of the details of programming that you wish would disappear can't.
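A small sketch of that tension (my example, not from the thread): overflow checking is available, but it is an explicit, per-operation cost and an extra case the programmer must handle, rather than something a language can make disappear for free.

// Sketch: the raw operation versus the checked one that costs a test
// and forces the caller to handle the failure case.
fn main() {
    let a: u8 = 250;
    let b: u8 = 10;

    println!("wrapping: {}", a.wrapping_add(b)); // 4, silently wraps around

    match a.checked_add(b) {
        Some(sum) => println!("checked: {}", sum),
        None => println!("checked: overflow detected"), // this branch is taken
    }
}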

The time when programmers tried to save every possible byte to increase performance is long gone because the bytes and the clock cycles aren't that scarce anymore.

That is the propaganda but it isn't true. There are many scenarios where optimization is still critical, e.g. block chain data structures, JIT compilation so that web pages (or in my goal Apps) can start immediately upon download of device-independent code, cryptography algorithms such as the C on my github.

So we are in the age of routinely forsaking performance just because we can, and we do so despite using - theoretically, very efficient languages.

We should almost never ignore optimization for mobile Apps, because otherwise it consumes the battery faster.

You want to wish away the art of programming and have it replaced by an algorithm, but I am sorry to tell you that you and Ray Kurzweil have the wrong conceptualization of knowledge (<--- I wrote that "Information Is Alive!").


The bloat is even compensating for hardware increases - and even abuses the hardware more than previous generations of software.

There are still people in this world who can't afford 32 GB of RAM and if we forsake optimization, then a power user such as myself with 200 tabs open on my browser will need 256 GB of RAM. A typical mobile phone only has 1 GB.

I have to reboot my Linux box daily because I only have 16 GB of RAM and the garbage collector of the browser totally freezes my machine.


You see some programs like, say, Firefox, Chrome, etc etc, that should supposedly be very efficient as they are written in C (or C++), right? Yet they are bloated like hell, consuming gigabytes of ram for the lolz.

I am thinking you are just spouting off without actually knowing the technical facts. The reason is ostensibly because, for example, Javascript forces the use of garbage collection, which is an example of an algorithm that is not as accurate and performant as expert, human-designed memory deallocation. And memory allocation is a hard problem, which is ostensibly why Mozilla funded the creation of Rust, with its new statically compiled memory deallocation model to aid the human with compiler-enforced rules.

And while some features, like sandboxing processes so that the browser won't crash, do make sense in terms of ram waste, the rest of the functionality doesn't make any sense in terms of wasting resources. I remember in the win2k days, I could open over 40-50 netscape windows without any issue whatsoever.

Same bloat situation for my linux desktop (KDE Plasma)... It's a piece of bloated junk. KDE 4 was far better. Same for windows... Win7 wants 1gb ram just to run. 1gb? For what? The OS? Are they even thinking when they make the specs?

Man, haven't you noticed that the capabilities of web pages have increased? You had no DHTML then, thus nearly no Javascript or Flash on the web page.

Above you argued that bloat doesn't matter and now you argue it does.   Roll Eyes


In the 80's they thought if they train more coders they'll solve the problem. In the 90's they discovered that coders are born, not educated - which was very counterintuitive. How can you have a class of 100 people and 10 good coders and then have a class of 1000 people and 20 good coders instead of 100?

Because the 100 were populated by more natural hackers (e.g. as a kid I took apart all my toys instead of playing with them) who discovered the obscure field because of their interest. The 1000 were probably advertised to people who shouldn't be pursuing programming, which ostensibly includes yourself.

Why aren't they increasing linearly but rather the number of good coders seem to be related to some kind of "talent" like ....music? Well, that's the million dollar question, isn't it?

I searched for that answer myself. I'm of the opinion that knowledge is teachable so it didn't make any sense.

Absolutely not. Knowledge is serendipitously and accretively formed.

If you have the natural inclination for some field, then you can absorb the existing art and add to it serendipitously and accretively. If you don't have the natural inclination, no amount of pounding books on your head will result in any traction.


My mind couldn't grasp why I was rejecting C when I could write asm. Well, after some research I found the answer to the problem. The whole structure of C, which was somehow elevated as the most popular programming language for applications, originates from a certain mind-set that C creators had. There are minds who think like those who made it, and minds who don't - minds who reject this structure as bullshit.

Agreed, you do not appear to have the natural inclination for C. But C is not bullshit. It was a very elegant, portable abstraction of assembly that radically increased productivity over assembly.

The minds who are rejecting this may have issues with what they perceive as unnecessary complexity, ordering of things, counter-intuitive syntax, etc. In my case I could read a line of asm and know what it did (like moving a value to a register or calling an IRQ) and then read a line of C and have 3 question-marks on what that fuckin line did.

What you are describing is the fact that C abstracts assembly language. You must lift your mind into a new machine model, which is the semantics of C. The semantics are no less rational than assembly.

You apparently don't have an abstract mind. This was also evident by our recent debate about ethics, wherein you didn't conceive of objective ethics.

You probably are not very good at abstract math either.

Your talent probably lies elsewhere. I don't know why you try to force on yourself something that your mind is not structured to do well.


I've also reached the conclusion that it is impossible to have an alignment with C-thinking (preferring it as a native environment) without having anomalies in the way one thinks in life, in general. It's similar to the autism drawback of some super-intelligent people, but in this case it is much milder and doesn't need to be at the level of autism. Accepting the rules of the language, as this is, without much internal mind-chatter objection of "what the fuck is this shit" and just getting on with it, is, in a way, the root cause why there are so few people using it at any competent level globally. If more people had rejected it then we'd probably have something far better by now, with more popular adoption.

I strongly suggest you stop blaming your handicaps and talents on others:

There are so many languages springing up but they are all trying to be the next C instead of being unique. For sure, programmer's convenience in transitioning is good, but it should be abandoned in favor of much friendlier languages.

You are not an expert programmer, ostensibly never will be, and should not be commenting on the future of programming languages. Period. Sorry. Programming will never be reduced to an art form for people who hate abstractions, details, and complexity.

Designing an excellent programming language is one of the most expert of all challenges in computer science.


While all this is nice in theory the problem will eventually be solved by AI - not language authors. It will ask what we want in terms of execution and we'll get it automatically by the AI-programming bot who will then be sending the best possible code for execution.

Sorry never! You and Ray Kurzweil are wrong and always will be. Sorry, but frankly. I don't care if you disagree, because you don't have the capacity to understand.
legendary
Activity: 1708
Merit: 1049
Stuff like integers overflowing, variables not having the precision they need, programs crashing due to idiotic stuff like divisions by zero, having buffers overflow etc etc - things like that are ridiculous and shouldn't even exist.

Impossible, unless you want to forsake performance and degrees-of-freedom. There is a tension, because infinite capability requires 0 performance and 0 degrees-of-freedom.

Sorry, some of the details of programming that you wish would disappear can't.

The time when programmers tried to save every possible byte to increase performance is long gone because the bytes and the clock cycles aren't that scarce anymore. So we are in the age of routinely forsaking performance just because we can, and we do so despite using - theoretically, very efficient languages. The bloat is even compensating for hardware increases - and even abuses the hardware more than previous generations of software.

You see some programs like, say, Firefox, Chrome, etc etc, that should supposedly be very efficient as they are written in C (or C++), right? Yet they are bloated like hell, consuming gigabytes of ram for the lolz. And while some features, like sandboxing processes so that the browser won't crash, do make sense in terms of ram waste, the rest of the functionality doesn't make any sense in terms of wasting resources. I remember in the win2k days, I could open over 40-50 netscape windows without any issue whatsoever.

Same bloat situation for my linux desktop (KDE Plasma)... It's a piece of bloated junk. KDE 4 was far better. Same for windows... Win7 wants 1gb ram just to run. 1gb? For what? The OS? Are they even thinking when they make the specs?

Same for our cryptocurrency wallets. Slow as hell and very resource-hungry.

In any case, if better languages don't exist, the code that should be written won't be written because there will be a lack of developers to do so. Applications that will never be written won't run that fast... they won't even exist!

Now that's a prime corporate problem. "We need able coders to develop X, Y, Z". Yeah well, there are only so many of them and the corporations won't be able to hire as many as they want. So...

In the 80's they thought if they train more coders they'll solve the problem. In the 90's they discovered that coders are born, not educated - which was very counterintuitive. How can you have a class of 100 people and 10 good coders and then have a class of 1000 people and 20 good coders instead of 100? Why aren't they increasing linearly but rather the number of good coders seem to be related to some kind of "talent" like ....music? Well, that's the million dollar question, isn't it?

I searched for that answer myself. I'm of the opinion that knowledge is teachable so it didn't make any sense. My mind couldn't grasp why I was rejecting C when I could write asm. Well, after some research I found the answer to the problem. The whole structure of C, which was somehow elevated as the most popular programming language for applications, originates from a certain mind-set that C creators had. There are minds who think like those who made it, and minds who don't - minds who reject this structure as bullshit.

The minds who are rejecting this may have issues with what they perceive as unnecessary complexity, ordering of things, counter-intuitive syntax, etc. In my case I could read a line of asm and know what it did (like moving a value to a register or calling an IRQ) and then read a line of C and have 3 question-marks on what that fuckin line did. I've also reached the conclusion that it is impossible to have an alignment with C-thinking (preferring it as a native environment) without having anomalies in the way one thinks in life, in general. It's similar to the autism drawback of some super-intelligent people, but in this case it is much milder and doesn't need to be at the level of autism. Accepting the rules of the language, as this is, without much internal mind-chatter objection of "what the fuck is this shit" and just getting on with it, is, in a way, the root cause why there are so few people using it at any competent level globally. If more people had rejected it then we'd probably have something far better by now, with more popular adoption.

There are so many languages springing up but they are all trying to be the next C instead of being unique. For sure, programmer's convenience in transitioning is good, but it should be abandoned in favor of much friendlier languages.

While all this is nice in theory the problem will eventually be solved by AI - not language authors. It will ask what we want in terms of execution and we'll get it automatically by the AI-programming bot who will then be sending the best possible code for execution.

AI-Generic interface: "What do you want today Alex?"
Alex: "I want to find the correlation data of soil composition, soil appearance, minerals and vegetation, by combining every known pattern recognition, in the data set I will be uploading to you in a while".
AI-Generic interface: "OOOOOOK, coming right up" (coding it from the specs given)
AI-Generic interface: "You are ready... do you want me to run the program with the data in the flash drive you just connected?"
Alex: "Sure".

This means that the only thing we need to program is a self-learning AI. Once this is done, it will be able to do everything that a coder can do, even better and faster. It will be able to do what an if-then-else idiotic compiler does today, but far better in terms of optimizations and hardware exploitation. Most of the optimizations that aren't done are skipped because the if-then-else compiler doesn't recognize the logic behind the program and whether the optimization would be safe or not. But if programmer and compiler are the same "being", then things can start ...flying in terms of efficiency.

This got futuristic very fast, but these aren't so futuristic anymore. They might have been 10 years ago but now it's getting increasingly closer.
sr. member
Activity: 420
Merit: 262
As far as the rest, programming languages are a nasty thicket. Every decision seems like some sort of tradeoff with few pure wins. Make it better in some way and it gets worse in others.

Your Language Sucks Because...lol.

I'm formulating a theory that the worsening is due to ideological decisions that were not based on the pragmatic science of utility and practicality.

Examples:

1. Rust abandoning Typestate (as I warned in 2011), which was originally motivated by the desire to assert every invariant in the program. But of course infinite assertion means 0 performance and 0 degrees-of-freedom. Rather, you choose a type system that meets the practical balance between utility and expression. And Typestate was not holistically unified with the type system of the language, thus it was doomed (as I had warned); see the sketch at the end of this post.

2. Scala was started as an ideological experiment (grown out of the Pizza project which added generics to Java) to marry functional programming concepts and OOP. Problem is that subclassing is an anti-pattern because, for example, the Liskov Substitution Principle is easily violated by subclasses. Here is my Scala Google group post where I explained that Scala 3's DOT is trying to unify (the anti-pattern) subclassing and abstract types. Here is my post where I explained why I think Rust-like interfaces (Haskell Typeclasses) with first-class unions (that I had proposed for Scala), instead of subclass subsumption, is the correct practical model of maximum utility and not an anti-pattern. Oh my, I was writing that when I was 4 days into my 10-day water-only fast that corresponded with my leap off the health cliff last year. Here is my post from April 2012 (right before my May hospitalization that started the acute phase of my illness) wherein I explained that "the bijective contract of subclassing is undecidable". Note Scala's complexity (generics are Turing complete, thus generally undecidable, same as for C++) means the compiler is uber slow, making it entirely unsuitable for JIT compilation.

3. James Gosling's prioritization of simplification as the ideological justification for the train wreck that is Java's and JVM's lack of unsigned types (and for originally lacking generics and still no operator overloading).

4. Python's ideology (or ignorance) of removing the shackles of a unified type system, which means not only does it have corner-case gotchas, but it can never be a statically checked, compiled language and thus can't be JIT optimized.

5. C++'s ideology of marrying backward compatibility with C (which is not entirely achieved) with OOP (including the subclassing anti-pattern on steroids with multiple diamond inheritance) is a massive clusterfuck of fugly complexity and corner cases.

P.S. I need to revisit the ideology of category theory applied to programming to try to analyze its utility more practically.
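A hedged sketch related to item 1 (mine, not from the post): even without the old Typestate feature, a useful subset of invariants can still be asserted statically in today's Rust by encoding states as ordinary types, which is the practical balance argued for above. The Conn type and its states here are made up for illustration.

// Sketch: "typestate" encoded with ordinary generics; `send` is only
// callable on a connection that has been opened, enforced at compile time.
use std::marker::PhantomData;

struct Closed;
struct Open;

struct Conn<State> {
    _state: PhantomData<State>,
}

impl Conn<Closed> {
    fn new() -> Self {
        Conn { _state: PhantomData }
    }
    fn open(self) -> Conn<Open> {
        Conn { _state: PhantomData }
    }
}

impl Conn<Open> {
    fn send(&self, msg: &str) {
        println!("sent: {}", msg);
    }
}

fn main() {
    let conn = Conn::new().open();
    conn.send("hello");
    // Conn::new().send("nope"); // would not compile: `send` requires Conn<Open>
}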
sr. member
Activity: 420
Merit: 262
Stuff like integers overflowing, variables not having the precision they need, programs crashing due to idiotic stuff like divisions by zero, having buffers overflow etc etc - things like that are ridiculous and shouldn't even exist.

Impossible, unless you want to forsake performance and degrees-of-freedom. There is a tension, because infinite capability requires 0 performance and 0 degrees-of-freedom.

Sorry, some of the details of programming that you wish would disappear can't.