
Topic: [XMR] Monero - A secure, private, untraceable cryptocurrency - page 137. (Read 4671114 times)

sr. member
Activity: 628
Merit: 276
BTC, ETH, XMR, LTC
Monero (XMR) could pump hard next week! 18th oct hardfork  Roll Eyes
Hardfork again, really?
What are the main purposes of the coming hardfork (18th of Oct)?
If I remember correctly, XMR already forked a few months ago.
Yes, the Monero community has agreed to hardfork regularly. The forks are all expected and collectively accepted by the community. If you want to evolve, you need to make changes.
This fork adds Bulletproofs, which should result in lower fees, along with other new features. Read this https://www.reddit.com/r/Monero/comments/9igfub/what_does_the_hard_fork_on_the_18th_of_october/ to get more info.
newbie
Activity: 78
Merit: 0
Monero (XMR) could pump hard next week! 18th oct hardfork  Roll Eyes
full member
Activity: 1179
Merit: 210
only hodl what you understand and love!
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
Crosspost as this really should be in this thread.

Articmine of Core on the Architecture of Monero

https://www.youtube.com/watch?v=ECqcf-Nyjmw

It seems Monero Talk has taken the torch from Monero Monitor now.

Listening to this podcast made me wonder why almost no new coin nowadays picks a tail emission, even though they can be totally sure they will have huge problems getting consensus to add one in the future.

ArcticMine makes great points and really knocked this interview out of the park. I loved how all his answers were spot on to the core issues and how he didn't mince words.


Preliminary information thread regarding the scheduled protocol upgrade of October 18

https://www.reddit.com/r/Monero/comments/9lcdme/preliminary_information_thread_regarding_the/

Thanks for this Debruyne. Smiley
legendary
Activity: 2268
Merit: 1141
Preliminary information thread regarding the scheduled protocol upgrade of October 18

https://www.reddit.com/r/Monero/comments/9lcdme/preliminary_information_thread_regarding_the/
sr. member
Activity: 616
Merit: 250
Most of us here buy monero to use it (either to spend, or to safely store funds), not really to make money. I'm sure on certain timeframes you would make money, and on other timeframes you would lose money. I'd suggest asking questions about the project itself in order to learn about it, and not viewing it as a get-rich-quick scheme.
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
...

As I've stated in the past, the most dangerous attack vector for monero is from the inside.

I have learned 3 things from this conversation; I will let others draw their own conclusions.

Altruistically speaking, "complacency kills" is a beneficent analogy.

For long-term investments, Monero is even better than Bitcoin. If cryptocurrencies go into the shadows, then Monero will be more popular than Bitcoin.

It seems Monero will be more popular than ever among privacy-conscious users.

This is an inevitability which we all adhere to, but the real question is: what is the timeframe going to be? Smiley



member
Activity: 120
Merit: 10
For long-term investments, Monero is even better than Bitcoin. If cryptocurrencies go into the shadows, then Monero will be more popular than Bitcoin.

It seems Monero will be more popular than ever among privacy-conscious users.
member
Activity: 554
Merit: 11
For some reason, I have started to find a lot of value and potential in XMR! Until then I did not have the slightest interest in this coin, but after a few weeks everything changed suddenly! I'll be around this thread as often as possible to follow the discussions. What is the best wallet to keep XMR in?
Linux CLI is best.
For android, I use monerujo (it's in the Google Play Store).
There's a shady android wallet called freewallet, or something like that.  Stay away from it.

If you have an issue with Freewallet, please share more details and we'll be glad to assist you in resolving it.
full member
Activity: 490
Merit: 103
For long-term investments, Monero is even better than Bitcoin. If cryptocurrencies go into the shadows, then Monero will be more popular than Bitcoin.
full member
Activity: 1179
Merit: 210
only hodl what you understand and love!
full member
Activity: 243
Merit: 125
full member
Activity: 165
Merit: 101
so what is the layman's TLDR; version of hyc and gmaxwell's conversation on IRC?
jr. member
Activity: 314
Merit: 6
Cryptojacking has become popular after the last crypto surge, and Monero was one of the most popular cryptocurrencies mined by cryptojacking software. Users can now receive technical support from the Monero Malware Workgroup on Freenode, Riot, Mattermost and Slack by joining the #monero-mrw channel.

https://coincodex.com/article/2423/monero-community-tackles-cryptojacking-with-working-group/
full member
Activity: 1179
Merit: 210
only hodl what you understand and love!
For some reason, I have started to find a lot of value and potential in XMR! Until then I did not have the slightest interest in this coin, but after a few weeks everything changed suddenly! I'll be around this thread as often as possible to follow the discussions. What is the best wallet to keep XMR in?
Linux CLI is best.
For android, I use monerujo (it's in the Google Play Store).
There's a shady android wallet called freewallet, or something like that.  Stay away from it.

Jopidopi, nothing to add  Grin Cheesy
legendary
Activity: 1081
Merit: 1001

Unfortunately though, the "Sign Transaction" confirmation window (offline mode) remains too large and does not scroll, thereby making the "Yes" and "No" buttons inaccessible.

What resolution do you have? I have 1024x600 and the sign tx window fits the screen (on Linux).

Yes, that's a typical netbook screen resolution. I have three of them that I use as offline/air-gapped machines, and they all have that resolution. Perhaps some Linux distros have made it a point to account for this and implement a solution, as I believe Linux is often installed on older machines. For instance, Linux Mint has a nifty feature to address such an issue: holding down the Alt key and double-clicking any area of the window allows moving/dragging it beyond the screen borders, thereby making any part of it accessible.

Edit: BTW, the issue occurred on a Windows machine.

legendary
Activity: 1081
Merit: 1001

The scrolling-window and start-low-graphics-mode solutions in the GUI to accommodate low-resolution/limited-GPU devices (such as netbooks used as cold/offline machines) are very handy. Unfortunately though, the "Sign Transaction" confirmation window (offline mode) remains too large and does not scroll, thereby making the "Yes" and "No" buttons inaccessible.



Are you able to confirm or deny with the enter and escape button, respectively?

I did try hitting Enter, hoping that Yes was made active and highlighted by default, but to no avail. I even tried the Tab/Enter combo hoping it'd cycle through the buttons and I'd get lucky. No dice. I'm almost sure I hit the Esc key a few times too, as I tried just about every key that I thought would trigger something.

Edit: BTW, this occurred on a Windows machine.
legendary
Activity: 3836
Merit: 4969
Doomed to see the future and unable to prevent it
This conversation says many things and I for one like what I'm reading. Smiley

Quote
[08:27:03] I thought this randomJS PoW idea was pretty cool when I came up with it, but rrol makes it sound 100x cooler
[08:27:32] <-- sgp_1 ([email protected]) has left #monero-research-lab
[08:28:32] --> sgp_ ([email protected]) has joined #monero-research-lab
[08:30:01] hyc: that kind of approach has been totally busted whenever I've seen it suggested before (going back to 2012 at least).  To mine it you can just throw out any nonces that choose code which isn't super trivial to execute.
[08:30:40] generally one of the needed properties for a good POW is that every instance is equivalently hard to execute under the most optimized execution model.
[08:30:51] <-- muff1nman ([email protected]) has quit (Quit: No Ping reply in 180 seconds.)
[08:32:14] yes but - how complex is the code that determines "super trivial" ?
[08:33:06] the code generator is being fine tuned to keep all the generated code within X +/- 10ms execution time
[08:33:14] hyc: what changes could be made to randomjs if it turns out some aspects of it have been implemented in hardware?
[08:33:20] i'd expect it to do template-matching and to take single-digit numbers of nanoseconds
[08:33:24] --> muff1nman ([email protected]) has joined #monero-research-lab
[08:34:26] andytoshi: I doubt that very much. particularly since the generated code includes exec("") invocations with more randomly generated code.
[08:34:46] it will require actually executing JS, there is no getting around that.
[08:34:53] famous last words
[08:36:08] the number of permutations is astronomical. template matching for all of that?
[08:36:23] no matter what optimizations, the required circuitry will be a lot bigger than a cryptonight ASIC
[08:36:42] larger chip area, higher power, lower efficiency ratio.
[08:39:34] how are you going to optimize the instruction pointer? you'll need a CPU-style branch predictor
[08:40:01] not sure that reasoning holds when we have big industry players with an incentive and the facilities to produce a JS chip
[08:40:12] endogenic: that's still a win.
[08:40:14] but i'm no expert of this..
[08:40:16] hyc: this really really sounds like something people could find shortcuts for
[08:40:24] it's not necessarily a win if we don't have as much flexibility to change it later
[08:40:26] if a JS chip becomes commoditized, everyone wins
[08:40:26] and cheaply-identifiable weak subsets
[08:41:18] andytoshi: I don't see how the code analysis could be significantly cheaper
[08:42:16] look at the state of the art in code analysis tools today, like Coverity or Veracode. totally riddled with false positives and bogus results.
[08:42:37] and that's for a language like C with just a small set of keywords and semantics
[08:43:09] --> doubletwist25 ([email protected]) has joined #monero-research-lab
[08:44:51] right, so critically you don't care about false negatives here (within reason), nor do you care about the depth of analysis you'd need to do useful static analysis for logic bugs
[08:45:04] also C is mostly UB
[08:45:22] so it's not immediately obvious to me that it'd be easier to reason about than js
[08:47:04] essentially you're talking about writing a decompiler to turn the generated syntax tree into macro-ops
[08:47:36] and somehow setting up a lookup table that can direct flow to various template handlers
[08:48:06] <-- valentinbuza_ (~valentinb@unaffiliated/valentinbuza) has quit (Ping timeout: 252 seconds)
[08:48:12] it would still be an interpreter for a tokenized language, you're just saying some of the tokens can be macro scale
[08:48:16] yes. and if you can do that in .1% the time it takes to actually execute the code, but this only works .1% of the time, then you're set
[08:48:28] and the other 99.9% of the time you just reject the nonce
[08:49:00] that's self-defeating, since the winning nonce is 99.9% more likely to be in the code sequences you rejected.
[08:49:36] andytoshi: that's exactly how I broke the first proshares pow. several orders of magnitude speedup.
[08:50:00] hyc: there is no "the winning nonce", just some proportion of nonces that lead to a valid pow
[08:50:01] hyc: "the winning nonce" sounds like a common fundamental misunderstanding of POW.
[08:50:42] <-- doubletwist25 ([email protected]) has quit (Ping timeout: 272 seconds)
[08:50:43] eh? the final result must have bytes XXXX lower than difficulty.
[08:50:45] Mining isn't a 'find the needle in a haystack' problem, it's a 'find a shorter than usual straw in a pile of straw' problem.
[08:51:07] hyc: there are an effectively infinite number of such solutions.
[08:51:21] there will be 999 times as many winning nonces that i skip than there are nonces that i don't skip, sure (assuming a uniform distribution)
[08:51:36] but that's fine because i spend so much less time on skipped nonces than non-skipped ones
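The rejection strategy being argued here can be put in a quick numerical model. This is an illustrative Python sketch with invented parameters, not anything from the randomJS codebase: a miner that classifies each nonce with a near-free template check, throws away the hard programs, and only executes the cheap ones comes out ahead whenever the check is much cheaper than a full execution.

```python
def selective_speedup(accept_fraction, check_cost, easy_cost):
    """Relative hashrate of a nonce-skipping miner vs an honest one.

    Costs are in units of one full program execution (honest cost = 1).
    accept_fraction -- share of nonces whose program the shortcut handles
    check_cost      -- cost of classifying a single nonce
    easy_cost       -- cost of evaluating an accepted (easy) program
    """
    honest_rate = 1.0  # the honest miner fully executes one program per time unit
    selective_rate = accept_fraction / (check_cost + accept_fraction * easy_cost)
    return selective_rate / honest_rate

# A nanosecond-scale template check against a ~10 ms full run
# (check_cost ~ 1e-6), keeping only 0.1% of nonces and running those
# 100x faster than the full interpreter, still yields a large overall
# speedup even though 99.9% of the winning nonces are thrown away.
speedup = selective_speedup(0.001, 1e-6, 0.01)
```

The point the model makes explicit: because the nonce space is effectively unlimited, discarding most nonces costs the attacker nothing as long as the per-nonce classification is cheap.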
[08:52:27] I'd suggest reading the equihash paper on the description of what it takes to be a good POW.  (not that equihash is particularly good, but the authors had a good understanding of what was required-- they just had incorrect beliefs about how hardware worked)
[08:52:53] for a given block hash there is only a small number of valid nonces
[08:53:24] hm, I've read the equihash paper, yes.
[08:53:38] hyc: the mining process doesn't just change the small nonce field, if it did, it would be unacceptably likely to be stuck and never be able to find a solution.
[08:53:56] If you don't find a solution in your nonce field search you just move on to a larger space.
[08:54:08] for a given block hash, it is true that there may be no solution.
[08:54:25] you mean, "for a given blockheader sans nonce"
[08:54:30] yes
[08:54:39] but you can change other parts of the block header
[08:54:51] right, so it's wrong to think in terms of "only a small number of valid nonces".
[08:54:51] and you have to in e.g. bitcoin where the actual "nonce" field is only 32 bits
[08:55:09] you can change the timestamp and you can change the mix of txns used to generate the header
[08:55:42] I think you can just handwave and not care what changes, and just call it the "nonce". Close enough and doesn't change the theory.
[08:55:53] moneromooo: exactly.
[08:55:55] but you don't know that you need to change anything unless you've already exhausted the nonce space
[08:56:03] hyc: sure, and?
[08:56:45] hyc: moneromooo's point is that because you can change other stuff, the nonce space is effectively unlimited. Yes, you compute it in two parts, but from an analysis perspective that doesn't really matter much.
[08:57:24] Oh, the generated program depends on the actual nonce, right ? So you could keep the same nonce, then iterate through other stuff (like non-nonce tx_extra garbage).
[08:57:28] other than updating one part is somewhat more expensive, but since you'll do a huge number of checks on the fast to update part, the more costly part is amortized.
[08:57:36] So you get to do the program analysis once.
[08:58:08] moneromooo: the generated program depends on the entire header including nonce
[08:58:18] you don't get to reuse any analysis
[08:58:18] OK, ignore me ^_^
[08:59:02] the header with nonce is hashed, hash result feeds PRNG.
[08:59:06] seeds
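The pipeline hyc describes — the full header including the nonce is hashed, and the hash result seeds the PRNG that emits the program — can be sketched as below. The toy opcode list and program length are invented for illustration; randomJS generates actual JavaScript, not this:

```python
import hashlib
import random

OPS = ["add", "sub", "mul", "xor", "rotl"]  # toy instruction set (illustrative)

def generate_program(header: bytes, nonce: int, length: int = 16):
    """Derive a random program from the full block header.

    Because the nonce is hashed in before seeding, changing the nonce
    changes the entire generated program, so per-program analysis
    cannot be reused across nonces.
    """
    digest = hashlib.sha256(header + nonce.to_bytes(4, "little")).digest()
    rng = random.Random(digest)  # hash result seeds the PRNG
    return [(rng.choice(OPS), rng.randrange(2**32)) for _ in range(length)]
```

Any change to the header or nonce produces an unrelated program, which is exactly why moneromooo's "do the program analysis once" idea doesn't apply.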
[09:07:12] Equihash also claims memory hardness, which again is a fast moving target. memory tech advances far more quickly than CPU speeds
[09:07:26] It's a doomed approach from the get-go
[09:08:58] I agree it's a bad idea, but the reason you give is kinda bonkers, "memory tech advances far more quickly than CPU speeds" is just not historically true... memory has advanced much much much much much slower than cpu speeds.
[09:09:21] capacity doubles faster than CPU speeds double.
[09:09:35] these functions are not 'capacity hard', they're bandwidth hard.
[09:09:39] actually CPU speeds have been flat for the past few years, and are regressing now due to Meltdown
[09:10:07] they are both - if you shrink the memory space used, the work accelerates by more than 2x
[09:10:27] if you shrink the space *required*...
[09:11:02] also, (xkcd386) memory per dollar has advanced very slowly as well...
[09:11:35] (in recent years at least)
[09:15:17] https://people.xiph.org/~greg/temp/memoryprices.png  < I made you a graph.
[09:15:25] thats the last decade.
[09:16:58] --> valentinbuza_ (~valentinb@unaffiliated/valentinbuza) has joined #monero-research-lab
[09:17:14] --> sfhi ([email protected]) has joined #monero-research-lab
[09:18:32] hmmm. I think that's a result of physically destroyed fabs, not the technology trend
[09:19:30] progress on compute/$ have been MUCH better. ... as in looks kind of similar if you give the compute/$ a log scale instead of linear like in that chart.
[09:20:19] fwiw memory _speed_ has still improved over the time period where $/mbyte has been stagnant
[09:20:58] andytoshi: yes, but the bandwidth improvement is still a joke compared to computing improvements.
[09:21:25] computing cost http://image.slidesharecdn.com/jsu-feb6-2016-160206014218/95/big-data-hpc-convergence-and-a-bunch-of-other-things-18-638.jpg?cb=1454723352
[09:21:38] storage cost http://image.slidesharecdn.com/jsu-feb6-2016-160206014218/95/big-data-hpc-convergence-and-a-bunch-of-other-things-19-638.jpg?cb=1454723352
[09:21:41] i believe that, i'm more trying to justify to myself all the money i've spent on RAM, than i am trying to contribute to the conversation Wink
[09:21:48] lol
[09:21:58] the big thing equihash authors misunderstood is that memory bandwidth limits aren't as much a fundamental property of memory as they were a fundamental property of external-to-chip IO.. put the memory on chip and the bandwidth goes up 100x with no other tech improvements.
[09:22:07] storage cost declining 38%/yr, vs compute at 33%/yr.
[09:22:35] I guess nobody told those guys about HBM
[09:22:36] storage != ram though, yes bulk storage cost overtime has historically improved better than compute.
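As a quick sanity check on those two decline rates: a five-point gap compounds only modestly over a decade (Python, back-of-the-envelope only):

```python
def relative_cost(years, annual_decline):
    """Unit cost remaining after `years` of a constant annual % decline."""
    return (1 - annual_decline) ** years

storage = relative_cost(10, 0.38)  # ~0.8% of today's cost
compute = relative_cost(10, 0.33)  # ~1.8% of today's cost
ratio = compute / storage          # storage ends up ~2.2x cheaper, relatively
```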
[09:23:02] <-- muff1nman ([email protected]) has quit (Quit: No Ping reply in 180 seconds.)
[09:23:25] --> el00ruobuob_[m] ([email protected]) has joined #monero-research-lab
[09:23:34] --> thelinuxguy7 ([email protected]) has joined #monero-research-lab
[09:23:37] hyc: eldyentyrell posted on the zcash github, basically pointing out package attached ram, tsvs etc... they ignored him.
[09:24:14] such a surprising response ... ...
[09:24:22] but thats what you get when you have self appointed narrow field experts design something like that, without input from the very people who would be most responsible for optimizing it (hardware engineers).
[09:24:46] the comment has since been deleted, or I'd link to it.
[09:25:21] --> muff1nman ([email protected]) has joined #monero-research-lab
[09:25:27] btw andytoshi, might be time to splurge on some more DRAM again soon https://press.trendforce.com/press/20180926-3163.html
[09:25:28] --> midipoet (uid316937@gateway/web/irccloud.com/x-phbbtmjglhzxrinb) has joined #monero-research-lab
[09:25:43] in any case, equihash guys appeared to understand what was needed, but misunderstood what hardware provided. ... still better than most altcoin pow designs, that failed in both domains.
[09:26:04] --> jwheare10 ([email protected]) has joined #monero-research-lab
[09:26:13] hyc: my real struggle is that the 14in thinkpads have had motherboards capped at 32Gb for like 6 years
[09:26:33] ah I noticed that. was looking at picking up an A485 myself
[09:26:40] andytoshi: the xeon based luggable computers will take 64. Tongue
[09:26:46] <-- thelinuxguy7 ([email protected]) has quit (Remote host closed the connection)
[09:27:09] Lenovo’s ThinkPad X1 Extreme: Hex-core, GTX 1050 Ti, 64GB RAM under 4 pounds
[09:27:09] gmaxwell: so you believe memory-hardness is still a viable approach? taking on-chip memory into account?
[09:28:11] hyc: I think no one really knows. In general you can see memory hardness as a specific instance of "upfront cost hardness", and I think there are good arguments as to why "upfront cost hardness" is bad, both for POW-consensus and for password security (though for different reasons).
[09:29:09] Basically, with upfront cost hardness, you pay a lot to buy in to attack, but the amortized cost after that is low.  So this favors first movers and parties that can run the stuff the longest before throwing it out.
[09:29:31] Proof of Purchase, instead of Proof of Work. Tongue
[09:29:36] <-- jwheare10 ([email protected]) has quit (Remote host closed the connection)
[09:29:37] that would seem to be true even if we were in a CPU-only world
[09:29:46] first movers just buy up the most CPUs
[09:31:51] I think trying to be cpu only, mostly, is a lost cause... for one, it's a massive subsidy for patent rights for fancy cpus rather than a fundamental cost, meaning some cpu pirate would perhaps have the best advantage.   Also no matter how 'cpu only' your function is, someone could make a mining specific cpu that strips the useless parts (like pcie busses) and uses a fair bit less power and
[09:31:51] silicon area.  maybe at best you could make the optimized hardware only 2x power efficient and 10x area efficient,  but in mining (at least) that kind of small gap guarantees the generic hardware will be driven out of business eventually.
[09:32:17] (cpu pirates, or cpu companies themselves, of course)
[09:35:29] don't economies of scale factor into things?
[09:35:31] I think that's overly pessimistic. on a headless server, dedicated to mining, the ALUs will be busy 100% of the time, and external buses will be mostly idle
[09:35:43] I'm much more hopeful of the utility of fancy work functions for password security than for mining. I think for mining making specialized stuff 'merely' 2x/10x more efficient is basically useless in the long run,  but for password security, it's fine
[09:35:53] a special purpose mining chip will still need a communications channel to get data in and out.
[09:36:09] hyc: I dunno about other things, but idle the pci-e controllers in epyc draw about 4 watts. (just as an example)
[09:36:09] this comms channel would have about the same utilization as the buses in the server.
[09:36:55] a mining chip needs a tin can and string to get data in and out. it's just not comparable to a gbytes per second bus. Smiley
[09:37:22] sure, but that gbytes/sec bus only blips on for a microsecond
[09:37:26] what's the worst that can happen if we try hyc's idea for a few Monero release cycles just to see what happens? it seems like it has so much potential
[09:37:47] the worst thing that can happen is a few months later an ASIC shows up
[09:38:08] yeah and so we fork back to something else, no real damage
[09:38:48] or make it hybrid, so it's every 4th block or something that requires hyc's idea, and then ramp up depending on the response
[09:38:50] the damage is if we have no fallback algorithm for the fork after that
[09:39:03] The worst is that someone finds a way to speed up massively by shortcut, and 99%s the network and keeps mum.
[09:39:35] to get 99% of the network they'd have a huge visible impact on network hashrate
[09:39:57] knaccc: it's kind of an embarrassingly unsound design.
[09:40:23] so one disadvantage is that people like me start thinking monero is more like ethereum in terms of yolo design.
[09:40:28] Yes, we'd certainly see. My point is that an ASIC manufacturer would have an incentive to keep the network "unaffected".
[09:40:59] haha yolo design, nice term
[09:41:15] (and FWIW, ethereum had proposed a similar POW,  with a RNG selecting a random circuit, and they abandoned it after it was shown to be broken by the same kind of optimizations I described)
[09:41:42] are you talking about something before ProgPoW?
[09:42:03] yes, long before they launched.
[09:42:24] can you point me to the discussion?
[09:42:25] the original eth hash constructed a random arithmetic circuit over 256 bit integers.
[09:43:26] --> Trieste24 ([email protected]) has joined #monero-research-lab
[09:43:34] <-- Trieste24 ([email protected]) has quit (Remote host closed the connection)
[09:43:51] they seem to have vanished a lot of the docs, you can see it described on this page: http://nbviewer.jupyter.org/gist/anonymous/cb53d06b837be97ebe32
[09:44:01] thanks
[09:44:19] (hm that shows 64 bit, but I am pretty darn confident that it was 256 bit at one point, they changed it a lot)
[09:45:08] yea, that must be a later version. the first version had instructions like div and mod too.
[09:45:45] "the worst thing that can happen is a few months later an ASIC shows up" < you also lose however many months towards developing the old mining algo's asic resistance while miners are going to be hedging for a future switch back
[09:46:32] the current algo is not asic-resistant at all. it is simply being changed, to break existing ASICs
[09:47:41] this "Random Circuit" is pretty trivial
[09:48:52] It's general for all computation, so I don't think you can say it is any less 'trivial' than anything else. Additional complexity may just be obfscuation.
[09:49:23] Eventually, I predict all of the world's governments will issue every citizen with a unique asymmetric keypair, publish a list of all issued public keys so that the number issued is known, and then we don't need proof of work any more
[09:49:32] obfuscation*
[09:49:38] difference between theory and practice. in theory algos X and Y may be comparable, while in practice one is far more easily implementable than another
[09:49:41] knaccc: hi, mike hearn.
[09:50:01] haha oh is that what he is hoping for too
[09:50:13] knaccc: "proof of passport" was a thing he wanted.
[09:50:30] is he a book-of-revelations/mark-of-the-beast bible thumper?
[09:50:46] ha interesting, thanks i'll google it
[09:51:26] that would kind of defeat the purpose of a permissionless anonymous cryptocurrency
[09:51:40] hyc: If you look at X and say it's trivial, but then we look at Y and see that Y is provably reducible to X, shouldn't that at least worry you that Y is also similarly trivial?
[09:52:02] hyc: the keypair would only be used for "mining", not as wallet public keys
[09:52:05] yes, that's a valid point
[09:52:51] I would say that since the inputs are ultimately the same, the range of permutations may also be the same.
[09:53:05] the monero project would have had the opportunity to learn something more about asic resistance through having the problem of having to manually tweak the algorithm..
[09:53:49] I mean, we know that PRNGs can't create entropy. so it may all just turn out to be pointless window dressing
[09:54:28] but the flip side of the argument is that the existing algos haven't squeezed out as many permutations as they could have
[09:56:29] if we can say these 2^256 permutations can only be reduced to 2^32 unique templates, we've still created a problem that's much harder for an ASIC implementer
[09:58:18] how much harder is it for us?
[09:59:33] with a CPU? it would be no particular disadvantage
[10:06:55] what changes can be made to it though?
[10:07:57] hyc: I don't agree at all.
[10:08:47] the asic implementer would look at those 2^32 unique templates, find a large subclass that is stupidly cheap to implement, implement only those, and then just make their chip recognize and abort on the others.
[10:11:24] we could simulate this right now. modify a randomJS impl and make it abort on 90% of its inputs
[10:11:50] hyc: and be 1000x faster on the remaining 10%...
[10:12:10] ^
[10:12:23] sure, we can replace it with a 10ns sleep or whatnot
[10:14:04] hm, no, we still need it to produce a valid output for the cases it actually executes.
[10:14:14] we'd have to slow down the regular impl instead.
[10:16:11] but that'd work fine. we can also vary the delay factor, and see where the breakeven point is
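The experiment hyc proposes — abort on most inputs, handicap the reference interpreter with a delay factor, and look for the crossover — can be framed analytically before running anything. This is a hypothetical sketch with invented parameter names and values, not code from any randomJS implementation:

```python
def solution_rate(accept_fraction, check_cost, exec_cost, target):
    """Expected valid-PoW solutions per unit time for a screening miner.

    Each nonce costs `check_cost` to classify; a fraction
    `accept_fraction` survives and is executed at `exec_cost`; each
    executed program meets the difficulty target with probability
    `target`.  Rejected nonces consume time but never yield solutions.
    """
    time_per_nonce = check_cost + accept_fraction * exec_cost
    return accept_fraction * target / time_per_nonce

def breakeven_delay(accept_fraction, check_cost, target=1e-4):
    """Delay factor on the honest interpreter (rate = target/delay) at
    which it matches a screening miner that runs accepted programs at
    full, undelayed speed (exec_cost = 1)."""
    screening = solution_rate(accept_fraction, check_cost, 1.0, target)
    return target / screening

# Abort on 90% of inputs with a check costing 0.1% of a full run:
# if the screener gains no execution speedup on accepted programs,
# the honest interpreter only has to be ~1% faster to break even --
# screening only pays off combined with a fast path for the
# accepted cases, which is the crux of the whole argument above.
delay = breakeven_delay(0.1, 0.001)
```

Varying `check_cost` and the implied execution speedup in this model is the cheap version of the delay-factor sweep described at the end of the log.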
full member
Activity: 297
Merit: 112
PRIVATE AND NOT PREMINED: MONERO, AEON, KARBO

Unfortunately though, the "Sign Transaction" confirmation window (offline mode) remains too large and does not scroll, thereby making the "Yes" and "No" buttons inaccessible.

What resolution do you have? I have 1024x600 and the sign tx window fits the screen (on Linux).