Thank you for that link. It reminded me of some tests I made some months ago with Hubzilla (the successor of Friendica / Red Matrix, a "decentralized" social network), which uses the Zot library to provide a "nomadic identity". That could be described as a hybrid model between the Solid approach (everyone stores their data on their own server) and the Mastodon approach (multiple users on one server). Users in Hubzilla can switch from the "lazy" Mastodon approach to the more demanding Solid approach at any time by "cloning" their data sets; they can also wander between distinct "group" servers.
I was ignorant about Hubzilla previously; having skimmed
the narrative description of the Zot protocol, fwiw I think you're spot on. They summarise it as “a JSON-based web framework for implementing secure decentralised communications and services”, and I would characterise the SOLID approach as “a JSONLD-based web framework for implementing secure decentralised communications and services”. Only two letters extra, but it makes a big difference in terms of opening up access to already worked-out solutions.
In the current "Slimweb" incarnation (web2web-based), identity would be linked primarily to private keys; it would be worth discussing whether a second "identity layer" could be provided: for example, one primary address identified with an "online identity" which stores inscriptions leading to other addresses for that person's distinct publications, with the relationships between these publications stored in signed RDF graphs.
My conceptualisation runs as follows:

I establish a convention by using the in-browser bip39 HD keygen implementation to create a handful of notional “identity” addresses at sub-level m/44'/63'/0'/8/*, one for each identity that I wish to distinguish, as exemplified below (each one given a label for convenient reference):
m/44'/63'/0'/8/0 | Sa3evx12ZgTDHqvuT5xpBaAYA5PQAmZFDM | gjhiggins |
m/44'/63'/0'/8/1 | SffwZ6jriJbnYtB27hbA9SzhpWYqf8WdoT | minki |
m/44'/63'/0'/8/2 | SYXxitjEU4KjVoXSrLLPwUhxEzpdjNr6Ud | gj |
(I'm complicating matters slightly by using a bip39 wallet but the deterministic aspect means that re-entering the passphrase re-creates the privkeys for my identity addresses.)
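For the record, a rough equivalent of that derivation outside the browser might look like this - just a sketch, assuming the python mnemonic and bip32utils packages, and note that bip32utils emits Bitcoin-version addresses, so the output won't carry the S-prefix shown above without a Slimcoin version byte:

# Sketch of the identity-address derivation (assumes the mnemonic and
# bip32utils packages; addresses printed here use Bitcoin's version byte,
# so they won't match the S-prefixed Slimcoin addresses above).
from mnemonic import Mnemonic
from bip32utils import BIP32Key, BIP32_HARDEN

words = Mnemonic("english").generate(strength=128)  # or re-enter an existing phrase
seed = Mnemonic.to_seed(words, passphrase="")

# m/44'/63'/0'/8/i -- 63 as the (assumed) Slimcoin coin type, 8 as the
# "identity" sub-level adopted by the convention above
root = BIP32Key.fromEntropy(seed)
branch = (root.ChildKey(44 + BIP32_HARDEN)
              .ChildKey(63 + BIP32_HARDEN)
              .ChildKey(0 + BIP32_HARDEN)
              .ChildKey(8))

for i, label in enumerate(["gjhiggins", "minki", "gj"]):
    key = branch.ChildKey(i)
    print("m/44'/63'/0'/8/%d | %s | %s" % (i, key.Address(), label))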
The convention continues ... in txouts spent by the “identity” addresses (e.g. Sa3evx12ZgTDHqvuT5xpBaAYA5PQAmZFDM), the OP_RETURN data is to be read as a ni-URI¹ formatted “Trusty URI”², e.g.

ni:///sha-256;5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70?module=RA
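The mapping from content hash to ni-URI is mechanical - a sketch, leaving aside the RA module's RDF graph canonicalisation step, which the Trusty URI spec defines separately:

# Sketch: format a sha-256 digest as an RFC 6920 ni-URI carrying a Trusty URI
# module hint. This skips the RA module's graph canonicalisation; the bytes
# hashed here stand in for that canonicalised output.
import base64
import hashlib

def to_ni_uri(content: bytes, module: str = "RA") -> str:
    digest = hashlib.sha256(content).digest()
    # base64url alphabet (A-Z, a-z, 0-9, -, _), padding stripped per RFC 6920
    b64 = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return "ni:///sha-256;%s?module=%s" % (b64, module)

print(to_ni_uri(b"...canonicalised RDF serialisation..."))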
In the intended bog-standard modus operandi, the publisher will either self-host an SLM-ACME or will have a subscription to an SLM-ACME service hosted by a third party, and the ni-URI value will resolve to a signed graph, retrieved by a SPARQL query posed of the graph:
SELECT ?g WHERE {
  ?g ccy:graphSignatureHash "5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70"
}

The signed graph can contain whatever you like, as a file resource (module=FA) or an RDF document (module=RA). The latter makes more sense in a decentralised context, where the need for apps to “discover” metadata starts to become acute (happily, RDF documents are self-describing). Identity addresses would have to be published through sidechannels, such as this thread.
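For illustration, here's roughly how a client might pose that query against a Fuseki endpoint - a sketch assuming the SPARQLWrapper package, a local dataset name, and a placeholder binding for the ccy: prefix (none of these names are fixed):

# Minimal resolution sketch; the endpoint URL, dataset name and ccy:
# namespace below are assumptions, not part of any spec.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "http://localhost:3030/slimweb/query"  # hypothetical Fuseki dataset

def resolve_ni_uri(ni_uri):
    """Extract the hash from a ni-URI and look up the signed graph(s) naming it."""
    # ni:///sha-256;<hash>?module=RA  ->  <hash>
    hashval = ni_uri.split(";", 1)[1].split("?", 1)[0]
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setQuery("""
        PREFIX ccy: <http://example.org/ccy#>  # placeholder namespace
        SELECT ?g WHERE { ?g ccy:graphSignatureHash "%s" }
    """ % hashval)
    sparql.setReturnFormat(JSON)
    results = sparql.query().convert()
    return [b["g"]["value"] for b in results["results"]["bindings"]]

print(resolve_ni_uri("ni:///sha-256;5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70?module=RA"))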
The minimum requirements of the SOLID WebId Profile would seem to be satisfiable; the same goes for most of the SOLID-recommended vCard.
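A minimal profile along these lines would seem to do it (a sketch in Turtle; the document IRI and name are illustrative):

@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# Minimal WebID profile sketch; published at the profile document's IRI.
<> a foaf:PersonalProfileDocument ;
   foaf:maker <#me> ;
   foaf:primaryTopic <#me> .

<#me> a foaf:Person ;
   foaf:name "gjhiggins" .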
BTW, the new blocknotify script has now been tested with "reorganized" small fake RDF blockchains and so far works well, recognizing and "repairing" reorgs. I'll conduct some more tests with partial blockchains containing only blocks and transactions with OP_RETURN inscriptions this week, then activate the new script on my VPS with Fuseki (the one that is accessible via the "Slimweb gateway").
Very cool. Sorry for the non-reply to your earlier question; I completely lost track of time - I didn't have an answer anyway.
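Out of interest, I'd guess the core of the reorg check reduces to something like this - purely my assumption of the approach, not your actual script, assuming slimcoind answers Bitcoin-style RPC via python-bitcoinrpc and that a local height-to-hash index of ingested blocks is kept:

# Guessed sketch of a blocknotify reorg check (all names are mine).
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://rpcuser:rpcpass@127.0.0.1:41683")  # placeholder credentials/port

def find_fork_height(local_index):
    """Walk back from the daemon tip to the highest height both sides agree on."""
    height = rpc.getblockcount()
    while height > 0:
        if local_index.get(height) == rpc.getblockhash(height):
            return height
        height -= 1
    return 0

# On blocknotify: every locally indexed block above the fork height is stale;
# retract its OP_RETURN-derived triples and re-ingest from the daemon.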
Once it is fully synced, real-time web2web testing will finally be possible. That basically means that when you publish something via an OP_RETURN inscription, web2web apps like the Gateway will find the corresponding torrent hash about two minutes later. I would still consider it alpha software, however.
Nice work.
Another direction in which to pursue the notion of using OP_RETURN data to carry resolvable pointers to content/metadata - there's a very accessible HN discussion of IPFS basics which is very pertinent, especially the comparison of IPFS and torrenting practicalities.
“One thing I've failed to find out about IPFS: who pays for hosting? The user? Or is it donated by some peers?” -
https://news.ycombinator.com/item?id=16078975

I think I mentioned, I gave IPFS a spin. IPFS locators as OP_RETURN data would also work. There's a javascript implementation with some interesting in-browser examples:
https://github.com/ipfs/js-ipfs/tree/master/examples and an ipfs-api
https://github.com/ipfs/js-ipfs-api. It's on my stack to investigate, using the resurrected neocities.org -
https://neocities.org/api and their support for ipfs DNS -
https://blog.neocities.org/blog/2017/08/07/ipfs-dns-support.html

I still think RDF has an overwhelmingly significant advantage in that the torrent and ipfs protocols are incapable of carrying metadata, which necessitates the use of separate sidechannels for communicating associated user-vital info such as title, creator, etc.
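To make that concrete: a handful of triples in the signed graph is all it takes to bind the user-vital metadata to the locator (every IRI and value below is invented for illustration):

@prefix dcterms: <http://purl.org/dc/terms/> .

# Illustrative only; the document IRI, CID and literal values are made up.
<http://example.org/r1.RA5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70>
    dcterms:title   "An example publication" ;
    dcterms:creator "gjhiggins" ;
    dcterms:source  <https://ipfs.io/ipfs/QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG> .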
Cheers
Graham
¹ Naming Things with Hashes (RFC 6920): https://datatracker.ietf.org/doc/rfc6920/
² Trusty URIs: Verifiable, Immutable, and Permanent Digital Artifacts for Linked Data: https://www.researchgate.net/publication/259845637_Trusty_URIs_Verifiable_Immutable_and_Permanent_Digital_Artifacts_for_Linked_Data

“Trusty URIs end with a hash value in Base64 notation (i.e. A–Z, a–z, 0–9, -, and _, representing the numbers from 0 to 63) that is preceded by a module identifier. This is an example:
http://example.org/r1.RA5AbXdpz5DcaYXCh9l3eI9ruBosiL5XDU3rxBbBaUO70”