Author

Topic: [XMR] Monero - A secure, private, untraceable cryptocurrency - page 1608. (Read 4670972 times)

sr. member
Activity: 784
Merit: 272
These guys are brilliant! Smiley

certainly worth a read.
r05
full member
Activity: 193
Merit: 100
test cryptocoin please ignore
These guys are brilliant! Smiley
dga
hero member
Activity: 737
Merit: 511
However that is a significant disadvantage to all those who run a 32-bit operating system, which is apparently still greater than 50% of all computers:

http://community.spiceworks.com/topic/426628-windows-32-bit-vs-64-bit-market-share

http://en.wikipedia.org/wiki/Usage_share_of_operating_systems#Desktop_and_laptop_computers

Probably due to having twice as many fat registers in 64-bit mode, which means among other possibilities you can pipeline up to twice as effectively (although a hyperthreaded CPU should compensate if your pipelining is fully saturated with 16 fat registers).

I've posted an informal summary of my analysis of the CryptoNight algorithm earlier in this thread with respect to GPU balance and its likely eventual balance with ASICs.

It's good.

Link please?

The L3 cache by itself is almost half of the chip.

I looked at an image of the Haswell die and it appears to be less than 20%. The APU (GPU) is taking up more space on the consumer models. On the server models there is no GPU and the cache is probably a higher percentage of the die.

There is also a 64 bit multiply, which I'm told is non-trivial. Once you combine that with your observation about Intel having a (likely persistent) process advantage (and also the inherent average unit-cost advantage of a widely-used general-purpose device), there just isn't much, if anything, left for an ASIC-maker to work with.

So no I don't think the point is really valid. You won't be able to get thousands of times anything with a straightforward ASIC design here. There may be back doors though, we don't know. The point about lack of a clear writeup and peer review is valid.

Quote
The CPU has an inherent disadvantage in that it is designed to be a general purpose computing device so it can't be as specialized at any one computation as an ASIC can be.

This is obviously going to be true, but the scope of the task here is very different. Thousands of copies will not work.

I believe that is wrong. I suspect an ASIC can be designed that vastly outperforms (at least on a power-efficiency basis), and one of the reasons is that the algorithm is so complex; thus it probably has many ways to be optimized with specific circuitry instead of generalized circuitry. My point is that isolating a simpler ("enveloped") instruction such as aesenc would be a superior strategy (and embrace USB-pluggable ASICs and get them spread out to the consumer).

Also, I had noted (find my post in my thread from a couple of months ago) that because of the way AES is incorrectly employed as a random oracle (as the index for the lookup in the memory table), the algorithm is very likely subject to some reduced solution space. This is perhaps Claymore's advantage (I could probably figure it out if I were inclined to spend sufficient time on it).

There is no cryptographic analysis of the hash. It might have impossible images, collisions, etc..

I strongly disagree.

The algorithm is *not* complex, it's very simple.  Grab a random-indexed 128 bit value from the big lookup table.  Mix it using a single round of AES.  Store part of the result back.  Use that to index the next item.  Mix that with a 64 bit multiply.  Store back.  Repeat.  It's intellectually very close to scrypt, with a few tweaks to take advantage of things that are fast on modern CPUs.
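The loop just described can be sketched structurally. This is a toy illustration only: the slot count matches the 131,072 × 128-bit figure given later in the thread, but `toy_mix_aes` and `toy_mix_mul` are hypothetical stand-ins for the real single AES round and the real multiply-and-add mix, so it reproduces the access pattern, not the actual hash.

```python
import hashlib

SCRATCHPAD_SLOTS = 131072  # 2 MB of 16-byte entries, per the figures later in this thread

def toy_mix_aes(block: bytes, key: bytes) -> bytes:
    # Stand-in for the single AES round (a real miner would use AESENC).
    return hashlib.blake2b(block + key, digest_size=16).digest()

def toy_mix_mul(block: bytes, other: bytes) -> bytes:
    # Stand-in for the 64-bit multiply mixing step: 64x64 -> 128-bit product.
    lo = int.from_bytes(block[:8], "little")
    hi = int.from_bytes(other[:8], "little")
    prod = lo * hi  # always fits in 128 bits
    return prod.to_bytes(16, "little")

def inner_loop(seed: bytes, iterations: int = 1000) -> bytes:
    # Fill a toy scratchpad deterministically from the seed.
    pad = [hashlib.blake2b(seed + i.to_bytes(4, "little"), digest_size=16).digest()
           for i in range(SCRATCHPAD_SLOTS)]
    a = hashlib.blake2b(seed, digest_size=16).digest()
    for _ in range(iterations):
        # Grab a random-indexed value, mix it AES-style, store part of the result back.
        j = int.from_bytes(a[:4], "little") % SCRATCHPAD_SLOTS
        b = toy_mix_aes(pad[j], a)
        pad[j] = b
        # Use that result to index the next item, mix with a 64-bit multiply, store back.
        k = int.from_bytes(b[:4], "little") % SCRATCHPAD_SLOTS
        a = toy_mix_mul(b, pad[k])
        pad[k] = a
    return a
```

The point of the sketch is how little logic there is outside the two mix primitives and the random scratchpad walk, which is what makes the latency-bound memory access the dominant cost.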

I know more about this because I independently developed a very similar algorithm several months ago, which I named L3scrypt. I also solved several problems which are not solved in CryptoNight, such as speed.

The concept of lookup can still be utilized to trade computation for space. I will quote from my white paper below, which also explains why I abandoned it (at least until I can do some real-world testing on GPUs) when I realized the coalescing of memory access is likely much more sophisticated on the GPU.

Code: (AnonyMint)
However, since the first loop of L3crypt overwrites values in random order instead of sequentially, a more
complex data structure is required for implementing "lookup gap" than was the case for Scrypt. For every element,
store in an elements table the index to a values table, the index of a stored value in that values table, and the
number of iterations of H required on the stored value. Each time an element in the values table needs to be
overwritten, an additional values table must be created because other elements may reference the existing stored
value.

Thus for example, reducing the memory required by up to half (if no element is overwritten), doubles the number of H
computed for the input of FH as follows because there is a recursive 50% chance to recompute H before reaching a
stored value.

   1 + 1/2 + 1/4 + 1/8 + 1/16 + ... = 2 [15]

Storing only every third V[j], reducing the memory required by up to two-thirds (if no element is overwritten), trebles the
number of H computed for the input of FH as follows, because there is a recursive 2/3 chance to recompute H before
reaching a stored value.

   1 + 2/3 + 4/9 + 8/27 + 16/81 + ... = 3

The increased memory required due to overwritten 512B elements is approximately the factor n. With n = 8 to reduce the
1MB memory footprint to 256KB would require 32X more computation of H if the second loop isn't also overwriting elements.
Optionally given the second loop overwrites twice as many 32B elements, to reduce the 1MB memory footprint to 256KB
would also require 4X more computation of FH.

However, since the execution time of the first loop, bounded by latency, can be significantly reduced by trading recomputation of
H for lower memory requirements if the GPU's FLOPs exceed the CPU's by significantly more than a factor of 8, it is a desirable precaution
to make the latency bound of the second loop a significant portion of the execution time so that L3crypt remains latency
bound in that case.

Even without employing "lookup gap", the GPU could potentially execute more than 200 concurrent instances of L3crypt to
leverage its superior FLOPs and offset the 25x slower main memory latency and the CPU's 8 hyperthreads.
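The recomputation-cost series quoted in the white-paper excerpt can be checked numerically. This small sketch (the function name is mine, not from the white paper) sums the geometric series for a general lookup gap g, where each step back has probability (g-1)/g of requiring another recomputation of H:

```python
def expected_recomputations(gap: int, terms: int = 200) -> float:
    # Expected H evaluations per access with lookup gap `gap`:
    # sum over k >= 0 of ((gap-1)/gap)^k, which converges to exactly `gap`.
    p = (gap - 1) / gap
    return sum(p ** k for k in range(terms))

# gap=2 gives 1 + 1/2 + 1/4 + ... = 2; gap=3 gives 1 + 2/3 + 4/9 + ... = 3,
# matching the two series quoted above.
```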

So if you can trade computation for space, then an ASIC can potentially clobber the CPU. The GPU would be beating the CPU, except for the inclusion of the AES instructions which the GPU doesn't have. An ASIC won't have this limitation.

Also the use of AES as a random oracle to generate the lookup index into the table is a potential major snafu, because AES is not designed to be a hash function. Thus it is possible that certain non-randomized patterns exist which can be exploited. I covered this in a post in my thread, which the Monero developers were made aware of but apparently decided to conveniently ignore(?).

Adding all these together, it may also be possible to utilize a more efficient caching design on the ASIC that is tailored to the access profile of this algorithm. There are SRAM caches for ASICs (e.g. Toshiba), and there is a lot of leeway in terms of parameters such as set associativity, etc.

So sorry, I don't think you know in depth what you are talking about. And I think I do.

Remember that there are two ways to implement the CryptoNight algorithm:
  (1) Try to fit a few copies in cache and pound the hell out of them;
  (2) Fit a lot of copies in DRAM and use a lot of bandwidth.

Approach (1) is what's being done on CPUs.  Approach (2) is what's being done on GPUs.

And the coalescing of memory accesses for #2 is precisely what I meant. It is only the AES instructions that are impeding the GPU from clobbering the CPU.

I tried implementing #2 on CPU and couldn't get it to perform as well as my back-of-the-envelope analysis suggests it should, but it's possible it could outperform the current CPU implementations by about 20%.  (I believe yvg1900 tried something similar and came to the same conclusion I did).

No, because external memory access performance declines as the number of threads simultaneously accessing it increases (less so for high-end server CPUs). So what you saw is what I expected. If you need me to cite a reference, I can go dig it up.

An ASIC approach might well be better off with #2, however, but it simply moves the bottleneck to the memory controller, and it's a hard engineering job compared to building an AES unit, a 64 bit multiplier, and 2MB of DRAM.  But that 2MB of DRAM area limits you in a big way.

Computation can be traded for space to use fast caches; see what I wrote up-post. And/or you could design an ASIC to drop into an existing GPU memory-controller setup. Etc. There are numerous options. Yes, it is a more difficult engineering job, which is worse for CryptoNight because it means whoever is first to solve it will limit supply and give an incredible advantage to a few, which is what plagued Bitcoin in 2013 until the ASICs became ubiquitous. This proprietary advantage might linger for a much longer duration.

In my best professional opinion, barring funky weaknesses lingering within the single round of AES, CryptoNight is a very solid PoW.  Its only real disadvantage is comparatively slow verification time, which really hurts the time to download and verify the blockchain.

In my professional opinion, I think you lack depth of understanding. What gives?

What gives is very simple:  You're wrong;  you're also being needlessly insulting, in a discussion that need not become personal.  If you'd like to engage in a credential pissing match, fine, but that seems like a waste of time.  Let's settle for me pointing out that I'm the original source of the code that's now used in the inner loop of the CPU cryptonight mining and block verification code, so I will claim some familiarity thereby.

You haven't posted enough details about your L3scrypt design to determine if your analysis actually applies to CryptoNight, but let's walk through the math a little:

There are 1,000,000 random accesses of the inner loop of CryptoNight.

There are 131,072 individual 128 bit slots in the lookup table.

Simple approach #1:  Store only elements after they have been modified as part of the execution of CryptoNight.  Assume *zero* cost to compute the "initial" table entries on-demand, but assume that all values are stored after they have been modified, so that the inner loop doesn't have to backtrack:

 - Balls-in-bins analysis of 1M balls into 128k bins;  how many are empty at the end?  As a first approximation, not very many at all.  Saves less than 10% of the storage space.  Not an effective optimization.
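The balls-in-bins estimate is easy to verify numerically. The probability that a given slot is never written in 1M uniformly random accesses is (1 - 1/bins)^balls, well approximated by exp(-balls/bins); this sketch computes both:

```python
import math

def empty_bin_fraction(balls: int, bins: int) -> float:
    # Probability that a given bin receives no ball after `balls` uniform throws.
    return (1.0 - 1.0 / bins) ** balls

exact = empty_bin_fraction(1_000_000, 131_072)
approx = math.exp(-1_000_000 / 131_072)
# Both come out near 0.0005, i.e. well under 0.1% of slots are never overwritten,
# comfortably inside the "less than 10%" bound stated above.
```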

Your approach:  Dynamic recomputation.

The first flaw in your analysis:  Your l3scrypt seems, from what you wrote below, to use 512b (bit?  likely, if scrypt) entries.  CryptoNight uses 128 bit entries, which means that the cost of a 24 bit counter to indicate the last-modified-in round information for a particular value is still fairly significant in comparison to the original storage.

As an example, consider LOOKUP_GAP=2:
  1MB of cache to store the retained values + 64k*4 bytes (~256KB) of counters = 1.25MB of space.
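The space arithmetic in that example can be double-checked directly from the slot counts given earlier in the post (131,072 entries of 128 bits, a 4-byte counter per retained entry):

```python
ENTRY_BYTES = 16     # 128-bit scratchpad slots
SLOTS = 131_072      # 2 MB scratchpad in total
COUNTER_BYTES = 4    # last-modified-round bookkeeping per retained entry

full = SLOTS * ENTRY_BYTES                                # 2 MB baseline
stored = SLOTS // 2                                       # LOOKUP_GAP=2 keeps every other slot
gapped = stored * ENTRY_BYTES + stored * COUNTER_BYTES    # retained values + counters
saving = 1.0 - gapped / full
# gapped is 1 MB + 256 KB = 1.25 MB, a saving of only 37.5% before any
# recomputation cost is paid -- much weaker than halving, because the 4-byte
# counter is significant relative to a 16-byte entry.
```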

You furthermore haven't dealt with the issue of potential cycles in the recomputation graph, which requires a somewhat more sophisticated data structure to handle:  A depends on B depends on C which depends on an earlier-computed version of A.  (Keeping in mind that there's a non-negligible chance of A immediately modifying A!  It happens, on average, a few times per hash).

I missed the part of your proposal that handled that.  Furthermore, there's some internal state associated with the mixing that happens at each round -- it's not simply a crank-through of X iterations of a hash on a static data item.  That state is carried forward from the previous half-round (the multiply or the AES mix, respectively), so you have to have a way to backtrack to that.  Likely, you could store another bit in the entry to indicate if it was in the first or second half-round, but you still need to be able to track that part back to an up-to-date stored value as well.   And you need to have a previous round in which not only did you generate the value to be stored in the LOOKUP_GAP space, but where you're able to go back to a previous round and find the initial value of 'a' that was used in the AES encryption.

Your stored values table must be versioned, because each subsequent modification updates it.  That's going to add another bit of bookkeeping overhead.

As I said in my post, there are possibly some weaknesses involved in the use of a single round of AES as a random number generator, but I *suspect* they're not exploitable enough to confer a major speed advantage.  That's not an expert part of my conclusion, because I'm not a cryptographer.

I think you're being overly optimistic about the success of your own approach based upon the flaws in your (completely unexplained) l3scrypt.  I'm delighted at the idea that CryptoNight is flawed, but you've completely failed to prove it, and presenting an analysis of some *other* PoW function that you designed and that, as far as I can tell, exists only in your head and in your own private document repository, is hardly a way to go about it.

You're missing way too many CryptoNight-specific details to be convincing at all.  I think that underlying this is an important difference:  Your PoW design didn't carry as much information forward between rounds as CN does.  Your approach isn't crazy, but you've left way too many important parts out of the analysis.

Regarding the bandwidth-intensive approach, you're still wrong about where the time is being spent in the GPU.  It's about 50/50 in random memory access and AES computation time.  Amdahl's law gets you again there -- I'll certainly grant something like a 4x speedup, but it starts to decline after that.
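The Amdahl's-law point can be made concrete with a generic sketch (this is the standard formula, not anything CryptoNight-specific; `p` is the fraction of runtime in the accelerated component and `s` its local speedup):

```python
def amdahl_speedup(p: float, s: float) -> float:
    # Overall speedup when a fraction p of the runtime is accelerated by factor s;
    # as s grows without bound this saturates at 1/(1-p).
    return 1.0 / ((1.0 - p) + p / s)
```

With the 50/50 split between random memory access and AES time described above, accelerating only one of the two halves saturates at 2x overall no matter how fast the specialized unit is; larger gains require attacking both halves at once, which is why the returns decline.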

Update:  I also read your linked thread's comments about the use of AES.  You're not looking at the big picture.  In the context of a proof-of-work scheme (NOT as the hash to verify integrity), the limitation of 128 bits at each step is unimportant.  More to the point, your post has absolutely no substantiation of your claim and has a link to a stackexchange article that in no way suggests any easy-to-exploit repeating pattern of the output bits that could be used to shrink the scratchpad size.  If you'd care to actually provide a substantive reference for and explanation of your claim, then perhaps the Monero developers (or bytecoin developers) might take it a little more seriously.
sr. member
Activity: 336
Merit: 250
We didn't create it, we inherited it from the CryptoNote reference code. All optimisations we've made to it are public and in master on github.

It had the world's slowest AES implementation in git for a month before someone bothered to add AES-NI support.
Why did you not have AES-NI from day one?

How? thankful_for_today forked and launched it, everyone that played around with it from very early on (myself included) solo mined on the miner that came with it. It took some time to even figure out how all the moving pieces in the code fit together, much less begin to grok it. Only after all of that could any optimisations happen, and that was LONG before we added AES-NI support (which was a complete PITA). Remember: at the time it was code we inherited, not code we wrote.

OK, I forgot the dev change.  I also looked at the code and thought WTH...  I optimized it and made it three times faster (I didn't try AES-NI, because it was not clear whether the PoW was just mixing features of AES randomly), and I also wondered at the great effort that was put into the obfuscation.
I found one block, and then the AES-NI patch was released.  Roll Eyes Oh, and I missed the initial announcement because BCT didn't send me a new-thread notification about Monero.
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
We didn't create it, we inherited it from the CryptoNote reference code. All optimisations we've made to it are public and in master on github.

It had the world's slowest AES implementation in git for a month before someone bothered to add AES-NI support.
Why did you not have AES-NI from day one?

How? thankful_for_today forked and launched it, everyone that played around with it from very early on (myself included) solo mined on the miner that came with it. It took some time to even figure out how all the moving pieces in the code fit together, much less begin to grok it. Only after all of that could any optimisations happen, and that was LONG before we added AES-NI support (which was a complete PITA). Remember: at the time it was code we inherited, not code we wrote.
legendary
Activity: 3766
Merit: 5146
Note the unconventional cAPITALIZATION!
It had the world's slowest AES implementation in git for a month before someone bothered to add AES-NI support.
Why did you not have AES-NI from day one?

None of the current members of the core team had anything to do with the initial reference code or the cryptocurrency's inception.

So what he meant was: "Congrats for the AES-NI support!"
legendary
Activity: 1484
Merit: 1005
It had the world's slowest AES implementation in git for a month before someone bothered to add AES-NI support.
Why did you not have AES-NI from day one?

None of the current members of the core team had anything to do with the initial reference code or the cryptocurrency's inception.
sr. member
Activity: 336
Merit: 250
We didn't create it, we inherited it from the CryptoNote reference code. All optimisations we've made to it are public and in master on github.

It had the world's slowest AES implementation in git for a month before someone bothered to add AES-NI support.
Why did you not have AES-NI from day one?
3x2
legendary
Activity: 1526
Merit: 1004
We've had the troll wars, now we have the crypto-expert wars! Cheesy

Experts opinion matter which is good for monero.
hero member
Activity: 518
Merit: 521
We've had the troll wars, now we have the crypto-expert wars! Cheesy

False dilemma?

Agreed only good can come out of hashing out such details earlier rather than later.
r05
full member
Activity: 193
Merit: 100
test cryptocoin please ignore
We've had the troll wars, now we have the crypto-expert wars! Cheesy

False dilemma?

I have no idea what you mean?
legendary
Activity: 1596
Merit: 1030
Sine secretum non libertas
We've had the troll wars, now we have the crypto-expert wars! Cheesy

False dilemma?
legendary
Activity: 1596
Merit: 1030
Sine secretum non libertas
the emission halves every 512 days?

This is a continuous process, not a sudden halving (like Bitcoin). The continuous decrease is such that after 512 days the reward is half of the original one.
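That continuous decay can be illustrated as a smooth half-life curve. This is an illustrative model of the statement above only, not Monero's exact per-block reward formula:

```python
HALF_LIFE_DAYS = 512.0

def reward(initial_reward: float, days_elapsed: float) -> float:
    # Smooth exponential decay with a 512-day half-life: no step cliffs,
    # just a continuously shrinking per-block reward.
    return initial_reward * 0.5 ** (days_elapsed / HALF_LIFE_DAYS)
```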

i always wondered why satoshi designed bitcoin to drop off these gigantic cliffs every few years.

It is a marvelous opportunity to study the impulse response of a real economy.  Generations of econometricians will be deeply indebted to (presumptive default gender) him for the emission step function.  The guinea pigs already owe him for bitcoin, so they can't complain much.  You can unwind a lot of structure from a binary ping!

r05
full member
Activity: 193
Merit: 100
test cryptocoin please ignore
We've had the troll wars, now we have the crypto-expert wars! Cheesy
legendary
Activity: 1470
Merit: 1000
Want privacy? Use Monero!
the emission halves every 512 days?

This is a continuous process, not suddenly halved (like bitcoin). The continuous decrease is such that after 512 days the reward is half of original one.

i always wondered why satoshi designed bitcoin to drop off these gigantic cliffs every few years.

yes, when you look at the monero emission, it makes much more sense Cheesy
The block reward halving can cause volatility, speculation, and the risk that mining pools just don't want to switch to lower block rewards. If all mining pools decided not to honor the halving, it could cause real trust problems Tongue
legendary
Activity: 3766
Merit: 5146
Note the unconventional cAPITALIZATION!
After testing the "--restore-deterministic-wallet" on windows, i lost some transactions.
I re-download the blockchain but only the last transactions appear.

What is the problem?


Your transactions are still there.  This bug has been fixed in a subsequent version of the wallet on github, and I assume it will be rolled into the main distribution eventually.
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
After testing the "--restore-deterministic-wallet" on windows, i lost some transactions.
I re-download the blockchain but only the last transactions appear.

What is the problem?


(save Bitmonero Appdata on usb storage)

Move wallet.bin somewhere else other than the appdata folder and rerun your wallet.

Then press refresh and wait

That won't solve it - the address of the wallet creation is serialised and stored in the .keys file, and it only rescans from there. We've fixed this quite a while back, though :)
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
After testing the "--restore-deterministic-wallet" on windows, i lost some transactions.
I re-download the blockchain but only the last transactions appear.

What is the problem?


There was a bug in an older version of simplewallet where a restore will only scan blocks from 24 hours before the restore. Are you using the most recent version? We may have to put new binaries out for Windows if the patch isn't in the latest.
hero member
Activity: 565
Merit: 500
After testing the "--restore-deterministic-wallet" on windows, i lost some transactions.
I re-download the blockchain but only the last transactions appear.

What is the problem?


(save Bitmonero Appdata on usb storage)

Move wallet.bin somewhere else other than the appdata folder and rerun your wallet.

Then press refresh and wait

 