Well, as far as I understand, LN channels can somehow be shut down, via certain glitches or otherwise, and in that case, what would remain of the operations made in that channel?
And what validity would this journaling have with regard to the on-chain state if there are differences at the end?
The problem with caching is not the HD crashing: if the controller is stopped before the cache is actually written out, the data is still lost even if the hard drive works fine.
Just to push the cache analogy further and show a certain caveat: on SMP systems with CPU caches, there are cases where memory is shared with other chips via DMA, or through virtual paging. In systems with high concurrency on the data, the CPU cache can become "out of date". Even with SSE2 there are certain instructions to help deal with this, but as far as I know most OSes disable caching on certain shared memory because of all the issues with caching, instruction reordering, etc., when having access to up-to-date data in a concurrent system is more important than fast access to potentially out-of-date data.
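The staleness problem above can be shown with a toy sketch (not real hardware, just illustrative Python): a cache that sits in front of "RAM" goes out of date the moment another agent, like a DMA device, writes to memory without going through it.

```python
# Toy illustration: a CPU-style cache going stale when another agent
# (e.g. a DMA device) writes to shared memory, bypassing the cache.

memory = {"addr0": 1}        # shared "RAM"
cache = {}                   # per-CPU cache

def cached_read(addr):
    # Read through the cache: fill on miss, then serve from cache.
    if addr not in cache:
        cache[addr] = memory[addr]
    return cache[addr]

def dma_write(addr, value):
    # A DMA device writes straight to memory, bypassing the cache.
    memory[addr] = value

print(cached_read("addr0"))  # 1 (cache miss, filled from memory)
dma_write("addr0", 2)        # memory now holds 2, cache still holds 1
print(cached_read("addr0"))  # 1 -- stale! the cache was never invalidated
cache.pop("addr0", None)     # explicit invalidation, as an OS might force
print(cached_read("addr0"))  # 2 -- fresh again after invalidation
```

This is exactly why an OS may simply mark DMA-shared regions uncacheable: correctness beats speed when a second writer exists.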
If LN is to be seen as a cache system, it doesn't look like they are taking all the precautions needed for it to be really safe.
Caches are easy to keep safe when all write access to the data goes through the same interface doing the caching, which is not the case with Bitcoin & LN.
With a hard drive it works because all access goes through the same controller doing the caching.
But anyway, as LN locks the bitcoins on the main chain, it's not even really a true cache system. The principle of a cache is to speed up multiple accesses to the same data; since the bitcoins are locked, the channel has exclusive access to them, so it's not really to be seen as a true system of blockchain caching.
There are multiple implementations of the payment-channel scheme. Of course they have different trade-offs.
You remind me of young GMAX proving Bitcoin was impossible, and I hope you are similarly happy when shown to be wrong about LN.
I thought you might be interested in this tweet; to me it seems there is an interesting congruence afoot. Convergent morphology perhaps....or simply Data Structures 101?
To paraphrase:
"What sucks about directly buying Frappuchinos with Bitcoin?"
"The biggest issue I think is random small tx are literally the work that kills Blockchain the most."
The message seems to be that without write caches we don't get to have nice things!
In terms of mechanical engineering, write caches function as a shim which reduces friction and the resulting heat/damage.
https://en.wikipedia.org/wiki/Shim_(spacer)
Anyway, saying LN is a caching solution is like saying it would be normal for a browser to lock down a picture for the whole internet because it needs to use it locally.
LN is missing many things needed for it to be called a true cache system.
Memory management with SMP & the PCI bus is a very complex thing; architectures evolved with more instructions and better instruction pipelining, and more functions came with C11 & OpenMP, but the handling of caches with SMP/PCI/south bus is far from trivial.
The issues can be seen more clearly on the ARM architecture, because the CPU architecture is much simpler and it doesn't have built-in handling of these issues of caches and concurrent access with the south bus, the memory bridge, memory-space conversion, etc.
https://en.m.wikipedia.org/wiki/Conventional_PCI#PCI_bus_bridges

Posted writes: "Generally, when a bus bridge sees a transaction on one bus that must be forwarded to the other, the original transaction must wait until the forwarded transaction completes before a result is ready. One notable exception occurs in the case of memory writes. Here, the bridge may record the write data internally (if it has room) and signal completion of the write before the forwarded write has completed. Or, indeed, before it has begun. Such "sent but not yet arrived" writes are referred to as "posted writes", by analogy with a postal mail message. Although they offer great opportunity for performance gains, the rules governing what is permissible are somewhat intricate."

Caching helps when it takes temporality into account when multiple accesses are made to the same data; it can help skip some likely-useless long writes, but it's still quite probabilistic.
LN would be a cache if it didn't lock the resources on the main chain, and if it were able to detect, with a good success ratio, when the BTC are only going to be used locally: keep the modifications off-chain in the "local cache" when they are most likely not to be used outside the local channel, and only write them to the main chain when the state is more likely to be shared outside the local cache (which is shared by a limited number of participants). It should also always keep the local cache updated from the main chain when there is a modification in the on-chain state. And any time there is an access to the state of the chain from outside the local cache, it should be written back to the main network as fast as possible, or the request could not be processed before the state is fully synchronized. The efficiency of a cache system depends on how successful it is at guessing when the data is going to be used again in the local cache before a modification on it happens outside the cache; otherwise there is zero gain.
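The write-back policy described above can be sketched as a toy model (hypothetical names and classes, not any real LN code): local parties' updates stay in the off-chain cache, and any external access forces a synchronization to the "main chain" first.

```python
# Minimal sketch of the write-back policy described above: keep state
# off-chain while only local channel parties touch it, and flush to the
# main chain as soon as an external party needs to see the state.
# All names here are hypothetical, for illustration only.

class MainChain:
    def __init__(self):
        self.state = {}

class ChannelCache:
    def __init__(self, chain, local_parties):
        self.chain = chain
        self.local = set(local_parties)
        self.pending = {}              # off-chain modifications, not yet written

    def update(self, key, value, party):
        if party in self.local:
            self.pending[key] = value      # stays in the local cache
        else:
            self.chain.state[key] = value  # external writes go straight on-chain

    def read(self, key, party):
        if party in self.local and key in self.pending:
            return self.pending[key]       # cheap local access
        self.flush()                       # external read: synchronize first
        return self.chain.state.get(key)

    def flush(self):
        self.chain.state.update(self.pending)
        self.pending.clear()

chain = MainChain()
cache = ChannelCache(chain, {"alice", "bob"})
cache.update("balance", 5, "alice")
print(chain.state)                    # {} -- still off-chain only
print(cache.read("balance", "alice")) # 5 -- served from the local cache
print(cache.read("balance", "carol")) # 5 -- external read forced a write-back
print(chain.state)                    # {'balance': 5}
```

The gain comes entirely from the local reads that never touch the chain; once an outside party reads, the cache has to pay the synchronization cost anyway.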
https://en.m.wikipedia.org/wiki/Temporal_database
https://en.m.wikipedia.org/wiki/Locality_of_reference

"Locality is merely one type of predictable behavior that occurs in computer systems. Systems that exhibit strong locality of reference are great candidates for performance optimization through the use of techniques such as caching, prefetching for memory and advanced branch predictors at the pipelining stage of a processor core."

Temporal locality: "If at one point a particular memory location is referenced, then it is likely that the same location will be referenced again in the near future. There is a temporal proximity between the adjacent references to the same memory location. In this case it is common to make efforts to store a copy of the referenced data in special memory storage, which can be accessed faster. Temporal locality is a special case of spatial locality, namely when the prospective location is identical to the present location."
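A classic way to exploit the temporal locality described in that quote is an LRU cache: keep the most recently referenced items on the bet that they will be referenced again soon. A small standard-library sketch (illustrative only):

```python
# Hedged illustration of exploiting temporal locality: an LRU cache keeps
# recently referenced items, assuming they will be referenced again soon.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, key, load):
        # `load` is called on a miss to fetch from the slow backing store.
        if key in self.data:
            self.hits += 1
            self.data.move_to_end(key)     # mark as most recently used
            return self.data[key]
        self.misses += 1
        value = load(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used
        return value

cache = LRUCache(capacity=2)
slow_store = {"a": 1, "b": 2, "c": 3}
for key in ["a", "a", "b", "a", "c", "a"]:
    cache.get(key, slow_store.__getitem__)
# Repeated references to "a" mostly hit thanks to temporal locality.
print(cache.hits, cache.misses)  # 3 3
```

If the access pattern had no temporal locality (every key referenced once), every access would miss and the cache would buy nothing, which is the point the thread keeps coming back to.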
It's this kind of problem that is involved in efficient caching: a prospective approach to how likely the data is to change within a certain time frame, which allows faster cached access during that time frame.
With transactions it means you need to predict whether the state of the on-chain input is potentially going to be accessed within the time frame when it's used in the local cache. If it's only going to be accessed in the local cache for a certain period of time, it's worth keeping it in the cache; if the data is shared with other processes and they need to read or modify it during that time frame, the cache is useless and the data needs to be updated from/to the main chain for each operation.
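The break-even point implied here can be put as a back-of-the-envelope check (all costs below are made-up illustrative units, not real Bitcoin/LN figures): caching only pays off if enough local reuses happen before an external access forces a full synchronization.

```python
# Back-of-the-envelope sketch: does keeping state in the local cache beat
# going to the main chain for every access? Costs are arbitrary units.

def caching_worthwhile(local_reuses, cache_cost, chain_cost, sync_cost):
    # Without a cache: every access pays the on-chain cost.
    without = local_reuses * chain_cost
    # With a cache: cheap local accesses, plus one write-back when an
    # external party finally needs the state.
    with_cache = local_reuses * cache_cost + sync_cost
    return with_cache < without

# 10 local payments before anyone outside the channel needs the state:
print(caching_worthwhile(10, cache_cost=1, chain_cost=100, sync_cost=100))  # True
# State touched externally after a single local use: zero gain.
print(caching_worthwhile(1, cache_cost=1, chain_cost=100, sync_cost=100))   # False
```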