
Topic: Anonymity: Death of the Stateless Web (Read 4089 times)

sr. member
Activity: 420
Merit: 262
June 15, 2015, 09:54:09 AM
#43
http://www.infoq.com/interviews/bracha-javascript-future

Quote from: Gilad Bracha @ co-author of the Java language spec
talking with Gilad Bracha. He works at Google and he is currently working on Dart. He is best known as the co-author of the Java language specification. Gilad, your keynote today was called “Whither web programming”. Can you tell us a little bit about the title of your talk and summarize it?

Sure. The title was a bit of a pun which I am sure nobody got which is the way I like my puns to be. Basically, the point is where is the web going or where it should go in the future and also the idea that there is a risk if it does not address these issues that it might not be as dominant or as popular as it should be as a programming platform in particular because of competition from app stores and things like that which have certain advantages, in particular with respect to the ability to reliably install an application. So, the web has this great advantage of zero install, but it actually does not have a way to reliably ensure that that application is there for you offline or when the network is slow or unreliable, etc, which is an added feature as it were that is one of the weaknesses that I was talking about.

What direction is the web going in? What is the best-case scenario and what is the worst-case scenario?

I guess the best – let’s start with the best – the best case scenario is that a series of missing primitives that would allow a great variety of programming languages to be implemented efficiently on the web, get standardized and put into all the browsers in a relatively quick manner. I mean it is a standard process and it does take time. There already is this flowering of all kinds of programming paradigms on the web and I think that is a good thing. I think that mono-lingual platforms either become multi-lingual or they die. Look at the JVM, for example. If we do that, then the web will evolve into something where you really have this ideal combination of the advantages of the network and the advantages of an independent client. So, things will work for you online and offline, your apps will synchronize transparently for you, wherever you go, for multiple devices, your data will synchronize transparently with collaboration and so forth. All these things that the network can enable will work well on the web, in an open fashion, in a standardized fashion. That is what we'd really like to see happening.

The worst case scenario is that none of these things happen and instead you see developer energy focused more on mobile platforms and you get more of these walled garden kind of things like iOS frankly where your ability to innovate is limited but in some sense there are better primitives and they actually become more competitive with the web.

Can you summarize then your vision for the future of the web and the web applications?

Well, I think we want a world where applications can work online and offline as much as their functionality allows. Obviously, if you are accessing some giant database, you may need real access to the network. Or if you are communicating, obviously there is nothing that can be done. But there are many applications where it is plausible to store your data and the application locally and it will work for you offline and I think the platform should make it easy for you to do that. You should be able to synchronize when you are back online and synchronize your application and your data and again, it should happen in a very lightweight fashion, it should be handled as much as possible by the platform so that developers do not have to solve this rather hard problem over and over again. It should produce an experience that is as good or better than any native application does.

Gilad Bracha and W. Cook, Mixin-based Inheritance, Proceedings of ECOOP '90.
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
November 28, 2014, 03:48:09 PM
#42
Apparently the Android permissions prompt is optional or something.

Uber's Android App Caught Reporting Data Back Without Permission

full member
Activity: 154
Merit: 100
November 27, 2014, 12:19:36 AM
#41
Any reader who thinks we don't have a totalitarian police state in the USA should try to explain away the following well-documented, egregious case. Don't forget that the USA can now legally send the military after you and "make you shut up" without any due process or habeas corpus.

http://armstrongeconomics.com/2013/02/09/indefinite-detention/

Quote from: Armstrong
So when the Supreme Court ordered the government to explain what the hell was going on, they realized they would lose. You have to at least charge someone. Now, there was not even a charge. Therefore, I was released to prevent the Supreme Court from ruling against them. What did they do then? They used the terrorist nonsense as the excuse to now indefinitely imprison anyone at any time without even charging them, lawyers, or a right to trial. The rumor is they used Lindsey Graham threatening him because he is gay and if he did not strip Americans of all rights, he would be exposed.

http://www.youtube.com/watch?v=9ni-nPc6gT4

Now a journalist Chris Hedges and several others sued the Obama Administration on the grounds of it being unconstitutional to indefinitely hold citizens as they did to me without charges, lawyers, or a right to a trial. Judge Katherine Forrest agreed it was unconstitutional and issued an injunction to prevent the government from doing so. This was immediately appealed by the Obama Administration for they are really indistinguishable from George Bush when it comes to expanding government power and destroying the Constitution. The Obama Administration appealed to the higher court – where? Second Circuit Court of Appeals of New York. That court, naturally with the speed of a bullet, instantaneously issued a temporary stay on the injunction allowing the government to indefinitely detain anyone it desires.

The notorious Second Circuit, perhaps the most anti-constitution court in the USA, will make the decision. The way this goes, if they side with the government, you can appeal to the Supreme Court but they take only perhaps 100 out of 10 thousand petitions. If the government lost, whenever they appeal, they are normally granted the right to be heard by the Supreme Court. So if the Second Circuit sides with government, the burden is then on the citizen to show why this case should be heard.

First they came for the Socialists, and I did not speak out—
Because I was not a Socialist.

Then they came for the Trade Unionists, and I did not speak out—
Because I was not a Trade Unionist.

Then they came for the Jews, and I did not speak out—
Because I was not a Jew.

Then they came for me—and there was no one left to speak for me
full member
Activity: 154
Merit: 100
November 24, 2014, 10:41:34 PM
#40
Users became accustomed to the simplifying unification of accessing any content via the web browser. This provided some benefits for developers and content authors too, as they could "code once, run everywhere".

But this had the tradeoff of retarding innovation in those areas that would require stepping outside the web browser's myopically designed security sandbox, or that require a different model of interaction that could interoperate well with high-latency transport. Many even myopically cheered that this security sandbox was a major advantage.

There were some attempts such as Flash, ActiveX plugins, Silverlight, etc., but these lacked the holistic purpose, demand, and system design that could cross the chasm to simplifying unification via sufficient market adoption.

I am positing that mobile apps, and more likely Android apps in particular, are a paradigm which is crossing the chasm, and one that is even migrating towards the desktop and laptop to bring further unification.

I think you are overstating the importance of the "user experience" and discounting the value provided by the "robot experience" (by robot I mean the indexing and search engines). One of the most important factors in the growth of the web was that HTML became some approximation of the lowest common denominator of information interchange. The XML family of standards tried, but failed, to improve on the HTML model in this regard.

I don't want to devalue your insight, but here is a recent example of "UX uber alles" problem:

http://torrentfreak.com/fail-mpaa-makes-legal-content-unfindable-google-141122/

exhibited by the "overapplification" of the user experience. I'm sorry for the ugly, hastily coined word; I don't know a better term. But from my childhood I still remember a paper pop-up book of "Puss in Boots" which could be animated by hand. It was very cute, but didn't meaningfully improve on the classic text printed in the plain book.

Edit: One more link from today that is tangentially related to the subject matter:

http://news.slashdot.org/story/14/11/23/1714255/blame-america-for-everything-you-hate-about-internet-culture

I had thought of this, so it is good you raised the issue; I can respond with my prior thoughts.

1. Those applications that require some advances in user experience do not eliminate those applications where standards for content make semantics more transparent. Both can continue to proliferate; i.e., we shouldn't prevent the former if always requiring the latter would make some applications not exist at all.

2. Those applications that require some advances in user experience could in many cases also publish stateless content (or use any other standard for making semantics more transparent). For example, an application for interacting with a forum offline or over high latency (say, to access it over a high-latency, truly anonymous network) could coexist with a website version of the forum.

P.S. I hate pay walls and login walls when I am surfing the web for news or research.
legendary
Activity: 2128
Merit: 1073
November 24, 2014, 03:52:48 PM
#39
Users became accustomed to the simplifying unification of accessing any content via the web browser. This provided some benefits for developers and content authors too, as they could "code once, run everywhere".

But this had the tradeoff of retarding innovation in those areas that would require stepping outside the web browser's myopically designed security sandbox, or that require a different model of interaction that could interoperate well with high-latency transport. Many even myopically cheered that this security sandbox was a major advantage.

There were some attempts such as Flash, ActiveX plugins, Silverlight, etc., but these lacked the holistic purpose, demand, and system design that could cross the chasm to simplifying unification via sufficient market adoption.

I am positing that mobile apps, and more likely Android apps in particular, are a paradigm which is crossing the chasm, and one that is even migrating towards the desktop and laptop to bring further unification.
I think you are overstating the importance of the "user experience" and discounting the value provided by the "robot experience" (by robot I mean the indexing and search engines). One of the most important factors in the growth of the web was that HTML became some approximation of the lowest common denominator of information interchange. The XML family of standards tried, but failed, to improve on the HTML model in this regard.

I don't want to devalue your insight, but here is a recent example of "UX uber alles" problem:

http://torrentfreak.com/fail-mpaa-makes-legal-content-unfindable-google-141122/

exhibited by the "overapplification" of the user experience. I'm sorry for the ugly, hastily coined word; I don't know a better term. But from my childhood I still remember a paper pop-up book of "Puss in Boots" which could be animated by hand. It was very cute, but didn't meaningfully improve on the classic text printed in the plain book.

Edit: One more link from today that is tangentially related to the subject matter:

http://news.slashdot.org/story/14/11/23/1714255/blame-america-for-everything-you-hate-about-internet-culture
full member
Activity: 154
Merit: 100
November 24, 2014, 12:25:32 PM
#38
I posit one of those opportunities is to interoperate with a hypothetical high-latency transport, which would have the benefit of being truly anonymous, because Tor and low-latency networks in general ostensibly cannot be anonymous.

What kind of high-latency numbers are you talking about? 10 seconds? 30 seconds? More? In my experience, important data more often than not is also time-critical in nature.

Data on the internet ranges in time-criticality. For example, posting new Likes and photos to Facebook is not that time-critical, but Facebook's interactive chat or a stock-quotes application is.

There are two components to the latency on a high latency variant of Tor:

1. The greater number of hops (onion layers or nodes) between the client and the server adds some latency at each hop.

2. The randomization of when to forward each packet at each hop increases latency at each hop by the time window over which that randomization occurs. As the volume of traffic increases, the time window can shrink because the anonymity mix (of packets forwarded by the node) per unit time has increased.

Let's assume the system has an algorithm that attempts to choose paths through the nodes of the network that keep latency below 50 ms per hop[1]. If that can be achieved, then, assuming the delay from #2 becomes negligible as traffic volume increases, 10 hops would average 0.5 seconds of latency and 100 hops would average 5 seconds. Since the client and the hidden-service server each get to decide half of the onion-layer path (with a rendezvous node in the middle), they can choose a latency-and-anonymity tradeoff that matches their application.

[1]http://www.webperformancetoday.com/2012/04/02/latency-101-what-is-latency-and-why-is-it-such-a-big-deal/
http://www.sqlskills.com/blogs/paul/are-io-latencies-killing-your-performance/
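To make the arithmetic above concrete, here is a minimal sketch (plain Python; all parameter values are assumptions taken from the discussion above: 50 ms transit per hop, and a uniformly random mixing delay that shrinks toward zero as traffic volume grows) of the expected latency of such a high-latency onion route:

```python
def expected_latency(hops, per_hop_s=0.05, mix_window_s=0.0):
    """Expected one-way latency of an onion route.

    Each hop contributes its transit latency (#1 above, assumed 50 ms)
    plus, on average, half of the randomized forwarding window
    (#2 above), since a uniform delay in [0, W] averages W/2.
    """
    return hops * (per_hop_s + mix_window_s / 2.0)

# With the mixing window assumed negligible (high traffic volume):
print(expected_latency(10))   # ~0.5 s for 10 hops
print(expected_latency(100))  # ~5 s for 100 hops

# With a 200 ms mixing window at low traffic, 10 hops average ~1.5 s:
print(expected_latency(10, mix_window_s=0.2))
```

This makes the stated tradeoff explicit: the client and the hidden service each pick their half of the hop count, trading anonymity (more hops, larger mix window) against latency.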
donator
Activity: 1617
Merit: 1012
November 24, 2014, 11:29:13 AM
#37
I posit one of those opportunities is to interoperate with a hypothetical high-latency transport, which would have the benefit of being truly anonymous, because Tor and low-latency networks in general ostensibly cannot be anonymous.

What kind of high-latency numbers are you talking about? 10 seconds? 30 seconds? More? In my experience, important data more often than not is also time-critical in nature.
full member
Activity: 154
Merit: 100
November 24, 2014, 10:57:13 AM
#36
The sheeple don't care about being anonymous...

They don't need to care. They only need to be aided by the paradigm shift I am describing: specifically, the usurpation of the top-down morass of web standards by the viral diversity of a programmable, secure platform such as Android, which is on 80% of their smartphones.

...and most of the world is still in 3rd world areas with no net access other than wifi on a cheap smart phone at the local wifi cafe.

You see, they need offline app solutions precisely because of their high-latency network access. So extremely high-latency, in fact, that sometimes they have to wait hours or days to reconnect to the net, or connect over very slow GSM or overloaded 3G networks.

The No votes come from neophytes who don't understand the issues.

newbie
Activity: 56
Merit: 0
November 24, 2014, 09:46:57 AM
#35
The sheeple don't care about being anonymous and most of the world is still in 3rd world areas with no net access other than wifi on a cheap smart phone at the local wifi cafe.

The future net will be an AI network, and people will have neural implants to feed the net right into their optic nerve center; then in the next generation, DNA will be manipulated so the human brain can connect over an alpha wave, like they do with the akashic record now.

There will be no anonymity; there is none now. The AI that created you tracks you through alpha waves now, and that AI makes you do what it wants through alpha waves as well.



legendary
Activity: 2114
Merit: 1090
=== NODE IS OK! ==
November 24, 2014, 09:38:32 AM
#34
No, the web is simply developing faster than regulation.
full member
Activity: 154
Merit: 100
November 23, 2014, 05:13:57 AM
#33
I've gotten some feedback from at least one of my supporters that he is not able to quickly discern the significance of the OP (readers should also read the linked OP).

Until recently, the majority of user demand for traffic on the internet has been HTML over HTTP, i.e. what I refer to in the OP as the stateless Web. That traffic is required to be low-latency, because the caching mechanisms are typically not smart enough, especially with dynamically changing content (e.g. forums, social networking sites), to give a satisfactory user experience if the transport between server and client is high-latency.

Yes, there are other protocols on the internet, such as SMTP, which delivers email and doesn't require low-latency transfer, and even P2P over UDP versus TCP/IP. But the majority of the market has been focused on HTML over HTTP.

Users became accustomed to the simplifying unification of accessing any content via the web browser. This provided some benefits for developers and content authors too, as they could "code once, run everywhere".

But this had the tradeoff of retarding innovation in those areas that would require stepping outside the web browser's myopically designed security sandbox, or that require a different model of interaction that could interoperate well with high-latency transport. Many even myopically cheered that this security sandbox was a major advantage.

There were some attempts such as Flash, ActiveX plugins, Silverlight, etc., but these lacked the holistic purpose, demand, and system design that could cross the chasm to simplifying unification via sufficient market adoption.

I am positing that mobile apps, and more likely Android apps in particular, are a paradigm which is crossing the chasm, and one that is even migrating towards the desktop and laptop to bring further unification.

This opens up new opportunities for fast-moving innovation outside the confines of the slowly innovating (top-down, standards-driven, e.g. W3C.org) web browser. I posit one of those opportunities is to interoperate with a hypothetical high-latency transport, which would have the benefit of being truly anonymous, because Tor and low-latency networks in general ostensibly cannot be anonymous.

The innovations might be so far outside the paradigm of the web browser content platform that the web browser might not be able to incorporate them without ceasing to be any semblance of what it was. If the web browser becomes essentially Android, or something closer to what Android is, then the W3C (a top-down, stifling morass) has lost control. If Android (or something like it) is a standard that runs everywhere, then we haven't lost the "code once, run everywhere" advantage. What is really accomplished is making the content platform more programmable, with more granular modularity via Android APIs instead of the "all or nothing" monolithic imposition of the web browser APIs (for example, by having a more holistic security-sandbox model).
full member
Activity: 154
Merit: 100
November 21, 2014, 06:31:47 AM
#32
I’d feel disingenuous, and open to the accusation of being non-objectively biased, if I did not acknowledge some serious security theatre that I submitted an Android issue about today.

https://code.google.com/p/android/issues/detail?id=80335

Quote from: me
Documentation states, “There is no security enforced with these files. For example, any application holding WRITE_EXTERNAL_STORAGE can write to these files.”

I understand files stored in the returned directory can be accessed by the user via explicit actions, such as by connecting the device to a computer via USB or removing the SD storage card. Thus security cannot be guaranteed in all cases for these files.

However, there is a critically important scenario where security can and should be provided.

Users may install an app and, despite approving the WRITE_EXTERNAL_STORAGE permission, not realize they have just enabled that app to corrupt the data files of other apps that have stored external data files. Users are not programmers and thus do not think in terms of the implications of obscure logic. They may think that particular write permission gives that app permission to write data for itself to the external directory, but not presume it enables that app to corrupt the external data of other apps. Why should the user presume Android was designed stupidly?

In other words, the user likely views the write permission as a way for the user to get access to those data files via the aforementioned explicit actions, but not as permission to do unnecessary harm. The Unix design principles of least surprise and the rule of silence apply:

http://catb.org/~esr/writings/taoup/html/ch01s06.html#id2878339

There is simply no reason to enable a trojan app to apply social engineering to trick the user into enabling something the user has no reasonable basis to assume would happen.

For example, I would like to store an SQLite database on the removable media because it assures the user that the data leaves no traces even after being deleted, and because it enables the user to instantly remove that data from the system in a heartbeat in an emergency.

And I think it is very piss-poor Android design that the user could unwittingly enable a trojan that would corrupt their data.

Also note that many or most users are oblivious to the meaning of security permission prompts and confirm them always.

In other words, the WRITE_EXTERNAL_STORAGE permission should only apply to Environment.getExternalStoragePublicDirectory() directories. Since KitKat it is no longer required for writing to the app’s own private external directory.
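To illustrate the rule being proposed in the issue, here is a toy model (plain Python, since this is just policy logic; the storage paths and package name are hypothetical illustrations, not real Android APIs) of which writes would gate on WRITE_EXTERNAL_STORAGE under the suggested semantics:

```python
KITKAT = 19  # Android 4.4 (API level 19), after which an app needs no
             # permission to write its own private external directory

def needs_write_permission(path, sdk_level, pkg="com.example.app"):
    """Toy model of the proposed rule: writes to the app's own private
    external directory (Android/data/<pkg>/) need no permission since
    KitKat, while shared public directories still require it."""
    private_prefix = "/storage/emulated/0/Android/data/%s/" % pkg
    if sdk_level >= KITKAT and path.startswith(private_prefix):
        return False
    return True

# The app's own database on external storage: no prompt needed.
print(needs_write_permission(
    "/storage/emulated/0/Android/data/com.example.app/files/db.sqlite", 19))

# A shared public directory: still gated, so a trojan cannot silently
# corrupt other apps' external data under the proposed semantics.
print(needs_write_permission("/storage/emulated/0/Download/db.sqlite", 19))
```

The point of the sketch is the asymmetry: the permission keeps guarding shared directories, while app-private external files stop being a social-engineering vector.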
full member
Activity: 154
Merit: 100
November 21, 2014, 02:55:06 AM
#31
Wow, 9 ÷ 11 ≈ 82% voted ‘no’ thus far (or 90% if we exclude my vote).

The ubiquity of Dunning–Kruger ignorance needs to be culled by action in the marketplace. This vast preponderance of ignorance means there is a huge opportunity here, because most do not yet realize the paradigm shift.

I suspect it escaped readers' logic that stateless content can increase (even proportionally) while orthogonality of transport and content proliferates.

P.S. If the ‘no’ votes pertain to the rise of the global police state and the need for anonymity, I can only sigh again. I was watching the NBA (i.e. Rome's bread and circuses, or the Roaring 1920s socialite glitter & glee) and realized why most people today would again think it ludicrous to claim such horrific outcomes as we approach the cliff.
full member
Activity: 154
Merit: 100
November 21, 2014, 02:11:31 AM
#30
Remember, I [AnonyMint] was telling everyone in 2013 that Tor was not anonymous, due to timing analysis (it is a low-latency network) and Sybil attacks on the relay nodes by national security agencies. And everyone thought I was crazy. And now we see new research saying 81% of users can be identified. Sigh.

The title and content of the OP are not about the death of all stateless content; rather, I think it quite explicitly says death of the Stateless Web.

The salient distinction is that the content and rendering model (e.g. HTML) shouldn't have a monopoly over the transport model (e.g. HTTP).

The Web is becoming more general, and the transport layer is detaching from market dominance by the rendering layer.

This enables new opportunities and possibilities.

I wonder what the No voters are thinking? Is my presentation too abstracted? Perhaps I need to incorporate the above summary.



Update: done.

The title and content of this epistle are not about the death of all stateless content; rather, I think it quite explicitly says death of the Stateless Web. The salient distinction is that, per the Unix design principles of least presumption, orthogonality, and separation of concerns, the content and rendering model (e.g. HTML) shouldn't have a monopoly over the transport model (e.g. HTTP). The Web is becoming more general and stateful, and the transport layer is detaching from market dominance by the rendering layer. This creates new opportunities and possibilities.

Even in Europe, for example, Switzerland is increasing gun control (oh grasshopper, please understand why a lack of private arms means Putin's ground forces could run over Europe like a hot knife through warm butter).
full member
Activity: 154
Merit: 100
November 20, 2014, 04:30:38 PM
#29
My favorite Mozilla Kafkaesque security-theatre fuck-up for the ages. I warned them there, and exactly what I warned about happened. And so he eventually closed the bug to further comments, after receiving 100 complaints over the next two years, as I had warned him.
full member
Activity: 154
Merit: 100
November 20, 2014, 04:10:05 PM
#28
phillipsjk, good to hear.

Looks like we agree that we are headed towards convergence?

I hope not. As I said, I believe there should be a clear distinction between data and code. I was using lynx as my primary browser at least until 2005. It really stopped adapting to new HTML revisions after HTML 3.2. HTML 4 introduced style-sheets, which were never really implemented by lynx.

I edited my post to explain I mean convergence via orthogonality and separation-of-concerns.

If you are programmer (especially if you understand the Unix design philosophy) then you know the value of these concepts instead of trying to have one monolithic thing do everything you need.

Then you can mix-n-match to retain the flavor you desire.

HTML should not be a dominating force over how I can distribute apps as seamless content to my users. You may prefer a strict static-content model without JavaScript, but other users have other preferences; for some, HTML is just a rendering engine that is used in some contexts within their stateful app. Developers should be allowed to serve all users well, including you (if there are enough of you; else you can roll your own).

P.S. You are conflating the issue of good semantic design with the orthogonal issue of security.

I fought against Daniel Glazman's spaghettization of the orthogonality between code and data with XBL (because CSS was not the correct semantic layer to bind code!). I was for registering events instead of embedding them in the HTML file. Etc. But I don't think you can build a dynamic web page without any code manipulating the page. We are talking only about good semantic programming, not about security, unless it is just security theatre. The broader the scope of your sandbox, the fewer fine-grained permissions you need to ask the user about. Because users have no fucking clue and just click "yes" anyway, prompting doesn't give you security. Android's design tries to reduce the need to ask the user for permissions.
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
November 20, 2014, 04:06:51 PM
#27
phillipsjk, good to hear.

Looks like we agree that we are headed towards convergence?

I hope not. As I said, I believe there should be a clear distinction between data and code. I was using lynx as my primary browser at least until 2005. It really stopped adapting to new HTML revisions after HTML 3.2. HTML 4 introduced style-sheets, which were never really implemented by lynx.

Edit: I like CSS, just lynx does not.
Edit2: I still try to avoid running client-side scripts as much as possible though.
full member
Activity: 154
Merit: 100
November 20, 2014, 03:59:10 PM
#26
phillipsjk, good to hear. Yeah, I would prefer the web browser be a rendering (android.app.)Activity or Fragment, provide an HTTP ContentProvider which can be hooked into it (or substituted), and not try to be the OS.

Orthogonality and separation-of-concerns.

Looks like we agree that we are headed towards convergence?

P.S. In the transposed direction of misuse or incorrect design: I was listening to the masscan developer explain, in his C10M video (on how to scan the entire internet in 3 minutes with commodity hardware), why web servers shouldn't move packet processing into the OS by using threads, because Linux was not designed to be a real-time OS but rather is optimized to be a multi-user OS, and should instead do the logic in user mode.
legendary
Activity: 1008
Merit: 1001
Let the chips fall where they may.
November 20, 2014, 03:51:44 PM
#25
On Android, I can choose from a dozen languages that run on the JVM, including Jython (Python), Java, Scala, etc.

The entire web page should be sandboxed in its own process, and the developer allowed to do whatever he wants. For static web pages, you don't need to spend a process on each one. It is as if the W3C never made the fundamental categorical distinction between a long-lived (stateful) app and a stateless static web page.

I don't think we strongly disagree on this point, just on the methods. Java applets were one such clear distinction. Of course, besides the fine-grained control, there was also the (dreaded) "Loading Java..." message (and delay) that made the thing unpopular. Android largely follows the Java philosophy, to the point that they were sued over the use of the API.

I believe that web browsers have become too complex because, as you mention, they are being asked to operate as an OS in user-land. One thing Chrome gets right is leveraging OS services by spawning each web page as its own process. It is resource-heavy for simple web pages, but it actually lets you track down which page is using all of your CPU time/memory.
full member
Activity: 154
Merit: 100
November 20, 2014, 03:50:02 PM
#24
In Android, your application’s Activities, ContentProviders, etc. have a Uri. Thus, I can envision that when you type the app’s Uri (or its abbreviation) in a browser, you run the installed app or install it first. So it can become as seamless as the web. It appears disjoint for now, and the web appears to be easier and more readily accessed, but it doesn’t have to remain this way. Then web sites could also be placed as favorite icons on your desktop (which you can do now, but not as easily or readily as installing an app and seeing its icon appear there).
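The envisioned flow can be sketched as a toy resolver (plain Python; the app:// scheme and the package names are hypothetical illustrations, not an existing Android convention):

```python
def handle_uri(uri, installed_apps):
    """Toy resolver for the flow envisioned above: a typed app Uri
    either launches the already-installed app or installs it first,
    making app access as seamless as following a web link."""
    scheme, sep, app_id = uri.partition("://")
    if not sep or scheme != "app":
        return ("render-in-browser", uri)   # ordinary web content
    if app_id in installed_apps:
        return ("launch", app_id)           # app already installed
    return ("install-then-launch", app_id)  # zero-friction install

print(handle_uri("app://com.example.forum", {"com.example.forum"}))
print(handle_uri("app://com.example.forum", set()))
print(handle_uri("http://example.com/", set()))
```

The design point is that the browser address bar stays the single entry point, while the transport and rendering behind it become pluggable.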