
Topic: Monero no wallet update since 2014, FluffyPony deleting question about it (Read 4179 times)

newbie
Activity: 28
Merit: 0
This is a call to the user who quoted my post that showed the lies of Monero development.

Both of our posts have been deleted (this is my thread, but they were still deleted), but everything else was left.

If it's you, please check your private message to see if you have the text because I want to repost it on my thread: https://bitcointalksearch.org/topic/someone-in-bct-is-scamming-for-monero-thread-posts-pms-wiped-1140169

My private messages that contained the text of the post, warning me that it had been deleted, have also been deleted, so I don't have it.

Thanks

EDIT: I found the post and will contact the user; the information has been deleted, though:

Man, you hit them hard. The Trolleros devs are scamming people, but like every scammer they won't admit it.

[mod note: deleted coin ad spam]
legendary
Activity: 2968
Merit: 1198
All of my posts are being deleted, even on my own unmoderated thread.

How many times did you repost the same thing, and how many got deleted, even on your own unmoderated thread?

newbie
Activity: 28
Merit: 0
All of my posts are being deleted, even on my own unmoderated thread.

The reply to FluffyPony was deleted from here:

Some unreleased GitHub commits and a PDF

DID YOU STOP COPY PASTE YOUR STRAW?

I can't believe it. You flood this thread with garbage because you can't moderate it like the other thread. Then you post nonsense; honestly, it's like dealing with a child.

I can keep posting my unanswered discussion until you answer, and you can keep flooding this thread with garbage text and nonsense. You avoid my discussion every time, which only makes it look more true when you can't respond.

Users can make up their own mind.


I have contacted the BCT mods; I am guessing Monero is doing this with the reporting system.  More information is here: https://bitcointalksearch.org/topic/xmr-monero-technical-discussion-unmoderated-1139813
hero member
Activity: 770
Merit: 504
Oh God.... this reminds me of when I was a kid and staying at my friend's house for the weekend. 
 
He warned me not to make his dad mad.... he said his dad gave out the worst punishments... and I was a little afraid and so tried to mind myself. 
 
Well, that afternoon we were playing in the woods behind the house.  We were only supposed to go a little ways into the woods, only far enough that the adults could still see us if they wanted to. 
 
But we obviously went way, way deeper.  We saw a treehouse a hundred yards in, went to investigate, got turned around, and were gone for hours.  When we came back, the dad had a frown on his face and his belt in his hand. 
 
He divided us up. 
 
He sat me down and proceeded to ask me if I understood the importance of being accountable and responsible, and talked to me trying to get me to understand how it must have made him feel to have lost us in the woods. 
 
He talked, and talked, and talked.  This went on for an hour and a half.  I nearly lost my ten-year-old mind.  I began to wish he would just hit me with the belt, my god, ANYTHING BUT MORE FUCKING TALKING ABOUT WHAT I DID WRONG.  After a literal eternity sitting there talking to me, when I was finally able to tell him exactly why he was mad at us, and convince him I understood he told me I could go. 
 
Then he called his son in, and started the whole process over. 
 
I never wanted to make that man mad again.
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
Some unreleased GitHub commits and a PDF

DID YOU STOP COPY PASTE YOUR STRAW?
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
Did you stop copy paste your straw?

Thank you for providing us with a companion quote to go along with such gems as "Is it true?".

DID YOU STOP COPY PASTE YOUR STRAW? DID YOU?!

Please comment on the post.

I am littering it with comments as we speak.

Please compare the development of Monero with the development of the fathers of the code that your coin uses. Oops, sorry, this project was stolen from T_F_T.

steal, verb

1. to take (the property of another or others) without permission or right, especially secretly or by force:

2. to appropriate (ideas, credit, words, etc.) without right or acknowledgment.

This cannot apply to Monero. The license covering the code pre-fork was the MIT license:

https://github.com/bitmonero-project/bitmonero/blob/master/src/cryptonote_config.h#L2

Per the text of the MIT license:

"Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software"

So we had permission and right. As for acknowledgement, we have included, and continue to include, a CryptoNote copyright notice on all files that originated from before the fork:

https://github.com/monero-project/bitmonero/blob/master/src/cryptonote_config.h#L29

Yes, that's when you scammers started. You Trolleros Devs couldn't even create a genesis block, so you stole the project from T_F_T.

We forked the repo, and retained the social contract implicit in Monero's launch by not relaunching it, thus ensuring that even thankful_for_today was rewarded for his work (ie. as an early miner). If we had relaunched it then it would have been difficult to do so fairly, someone on either side would feel mistreated.

Look what another dev thought of your work 6 months ago, and Monero has not changed a thing since then.

Looking at it objectively is always better than quoting the opinion of "another dev", especially when that dev is likely involved in the Bytecoin scam.





You are not just insulting me, you are insulting many others, and your claim is provably untrue. Don't make a statement countering this, provide tangible evidence of your claim.

As for me - monero is the biggest bubble, because there is only speculation about it

Of course there's speculation. But there is also actual use.

blockchain is unusable with its size

How do you suppose you'll have a usable cryptocurrency that is actually used, and yet not have an ever-growing blockchain? What a silly comment, it really shows the desperation of this trolling.

IMO almost everything about monero seems terrible - unusable blockchain size (On average PC, it just can't be a network node)

Runs just fine on a Raspberry Pi 1 with 256 MB of RAM, so I guess he must be talking about running it on a 486 DX2 66.

block intervals

Chosen by...thankful_for_today! And here I thought you were hailing him as an incredible human being?

tx generation size

Is duck dark digital dontcareNote really trying to claim that their transactions are significantly smaller?

fee size

0.001 XMR per kilobyte (most transactions are 1kb-2kb), so that's about $0.0000056 per transaction. Gosh. So expensive.

no GUI after almost 1 year since start

Wrong. There are several GUIs: https://getmonero.org/getting-started/choose

totally wrong way of development

As opposed to dNote, which has the "totally right" way of development, no? If that's the case, then why have they attracted NO other contributors?

https://github.com/xdn-project/digitalnote/graphs/contributors vs. https://github.com/monero-project/bitmonero/graphs/contributors

looks like just no understanding of CN technology.

Looks like I've got to rehash some of our research bulletins. Let's do this one:

MRL-0003: Monero is Not That Mysterious

1 Introduction

Recently, there have been some vague fears about the CryptoNote source code and protocol floating around the internet based on the fact that it is a more complicated protocol than, for instance, Bitcoin. The purpose of this note is to try and clear up some misconceptions, and hopefully remove some of the mystery surrounding Monero Ring Signatures. I will start by comparing the mathematics involved in CryptoNote ring signatures (as described in [CN]) to the mathematics in [FS], on which CryptoNote is based. After this, I will compare the mathematics of the ring signature to what is actually in the CryptoNote codebase.

2 CryptoNote Origins

As noted in ([CN], 4.1) by Saberhagen, group ring signatures have a history starting as early as 1991 [CH], and various ring signature schemes have been studied by a number of researchers throughout the past two decades. As claimed in ([CN] 4.1), the main ring signature used in CryptoNote is based on [FS], with some changes to accommodate blockchain technology.

2.1 Traceable Ring Signatures

In [FS], Fujisaki and Suzuki introduce a scheme for a ring signature designed to “leak secrets anonymously, without the risk of identity escrow.” This lack of a need for identity escrow allows users of the ring signature to hide themselves in a group with an inherently higher level of distrust compared to schemes relying on a group manager.

In ring-signature schemes relying on a group manager, such as the original ring signatures described in [CH], a designated trusted person guards the secrets of the group participants. While anonymous, such schemes rely, of course, on the manager not being compromised. The result of having a group-manager, in terms of currencies, is essentially the same as having a trusted organization or node to mix your coins.

In contrast, the traceable ring signature scheme given in [FS] has no group manager. According to [FS], there are four formal security requirements to their traceable ring signature scheme:

•    Public Traceability - Anyone who creates two signatures for different messages with respect to the same tag can be traced. (In CryptoNote, if the user opts not to use a one-time key for each transaction, then they will be traceable; however, if they desire anonymity, then they will use the one-time key. Thus, as stated on page 5 of [CN], the traceability property is weakened in CryptoNote.)
•    Tag-Linkability - Every two signatures generated by the same signer with respect to the same tag are linked. (This aspect in CryptoNote refers to each transaction having a key image which prevents double spending.)
•    Anonymity - As long as a signer does not sign on two different messages with respect to the same tag, the identity of the signer is indistinguishable from any of the possible ring members. In addition, any two signatures generated with respect to two distinct tags are always unlinkable. (In terms of CryptoNote, if the signer attempts to use the same key image more than once, then they can be identified out of the group. The unlinkability aspect is retained and is a key part of CryptoNote.)
•    Exculpability - An honest ring member cannot be accused of signing twice with respect to the same tag. In other words, it should be infeasible to counterfeit a tag corresponding to another person's secret key. (In terms of CryptoNote, this says that key images cannot be faked.)

In addition, [FS] provide a ring signature protocol on page 10 of their paper, which is equivalent to the CryptoNote ring signature algorithm, as described on pages 9-10 of [CN]. It is worthwhile to note that [FS] is a publicly peer-reviewed publication appearing in Lecture Notes in Computer Science, as opposed to typical cryptocurrency protocol descriptions, where it is unclear whether or not they have been reviewed.

2.2 Traceability vs CryptoNote

In the original traceable ring signature algorithm described in [FS], it is possible to use the tag corresponding to a signature multiple times. However, multiple uses of the tag allow the user to be traced; in other words, the signer’s index can be determined out of the group of users signing. It is worthwhile to note that, due to the exculpability feature of the protocol ([FS] 5.6, [CN], A2), keys cannot be stolen this way, unless an attacker is able to solve the Elliptic Curve Discrete Logarithm Problem (ECDLP) upon which a large portion of modern cryptography is based ([Si] XI.4).

The process to trace a tag used more than once is described on ([FS], page 10). In the CryptoNote protocol, however, key images (tags) used more than once are rejected by the blockchain as double-spends, and hence traceability is not an aspect of CryptoNote.

2.3 Tag-Linkability vs CryptoNote

In essence, the tag-linkability aspect of the traceable ring signature protocol is what prevents CryptoNote transactions from being double-spends. The relevant protocols are referred to as “Trace” in ([FS], 5) and “LNK” in the CryptoNote paper. Essentially all that is required is to be able to keep track of the key images which have been used before, and to verify that a key image is not used again.

If a key image appears on the blockchain and the same key image is then detected in a later transaction, the second transaction is rejected as a double-spend. As key images cannot be forged, being exculpable, the double-spender must in fact be the same person, and not another person trying to steal a wallet.
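In code terms, that check is just set membership on key images. A minimal sketch of the idea (the set and function names here are illustrative stand-ins, not Monero's actual data structures):

# Toy double-spend check on key images (tags); purely illustrative.
spent_key_images = set()   # key images already seen on the chain

def accept_key_image(key_image: bytes) -> bool:
    """Reject any transaction whose key image has already been seen (a double-spend)."""
    if key_image in spent_key_images:
        return False                 # same tag seen twice: double-spend, reject
    spent_key_images.add(key_image)  # first use of the tag: remember it
    return True

# Example: the same key image is accepted once and rejected the second time.
ki = bytes.fromhex("aa" * 32)
assert accept_key_image(ki) is True
assert accept_key_image(ki) is False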

3 One-Time Ring Signatures (mathematics)

The security of the ring signature scheme as described in ([FS] 10, [CN] 10) and implemented in the CryptoNote source relies on the known security properties of Curve25519. Note that this is the same curve used in OpenSSH 6.5, Tor, Apple iOS, and many other[1] security systems.

3.1 Twisted Edwards Curves

The basic security in the CryptoNote Ring Signature algorithm is guaranteed by the ECDLP ([Si], XI.4) on the Twisted Edwards curve ed25519. The security properties of curve ed25519 are described in [Bern], by noted cryptographer Daniel Bernstein, and in [BCPM] by a team from Microsoft Research. Bernstein notes about ed25519 that "every known attack is more expensive than performing a brute-force search on a typical 128-bit secret-key cipher."

The curve ed25519 is a singular curve of genus 1 with a group law, described by

−x^2 + y^2 = 1 − (121665/121666)·x^2·y^2.

This curve is considered over the finite field Fq, q = 2^255 − 19. For those readers unfamiliar with algebraic geometry, an algebraic curve is considered as a one-dimensional sort of space, consisting of all points (x, y) satisfying the above equation. All points are also considered modulo q. By virtue of its genus, ed25519 has a "group structure" which, for the purpose of this discussion, means that if P = (x1, y1) is a point on the curve and Q = (x2, y2) is another point on the curve, then these points can be added (or subtracted) and the sum (or difference), P + Q (or P − Q), will also be on the curve. The addition is not the naive adding of x1 + x2 and y1 + y2; instead, points are added using the rule

P + Q = ( (x1·y2 + y1·x2) / (1 + d·x1·x2·y1·y2) , (y1·y2 + x1·x2) / (1 − d·x1·x2·y1·y2) ),

where d = −121665/121666 ([BBJLP] 6, [BCPM]). The mathematics of curves of genus one are explained in great detail in [Si] for the interested reader.

[1] http://ianix.com/pub/curve25519-deployment.html
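The addition law above is easy to check numerically. The following is a naive Python sketch of the rule, in the spirit of the MiniNero model mentioned later in this note; it uses plain affine coordinates and modular inverses, so it is illustrative only, not the library's optimized or constant-time representation:

# Naive model of the ed25519 twisted Edwards addition law from the equation above.
q = 2**255 - 19                              # field prime
d = (-121665 * pow(121666, -1, q)) % q       # d = -121665/121666 mod q

def edwards_add(P, Q):
    """Add two affine points with the rule quoted above."""
    x1, y1 = P
    x2, y2 = Q
    x3 = (x1*y2 + y1*x2) * pow((1 + d*x1*x2*y1*y2) % q, -1, q) % q
    y3 = (y1*y2 + x1*x2) * pow((1 - d*x1*x2*y1*y2) % q, -1, q) % q
    return (x3, y3)

def on_curve(P):
    """Check -x^2 + y^2 = 1 + d*x^2*y^2 (mod q)."""
    x, y = P
    return (-x*x + y*y - 1 - d*x*x*y*y) % q == 0

# The standard ed25519 base point G (y = 4/5, with the conventional x-coordinate).
G = (0x216936D3CD6E53FEC0A4E231FDD6DC5C692CC7609525A7B2C9562D608F25D51A,
     4 * pow(5, -1, q) % q)

assert on_curve(G) and on_curve(edwards_add(G, G))   # 2G is again a point on the curve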

Based on the above, we can compute P + P for any such point. In order to shorten notation, we rely on our algebraic intuition and denote 2P = P + P. If n ∈ Z, then nP denotes the repeated sum

P + P + ··· + P   (n times)

using the above nonlinear addition law. As an example of how this differs from ordinary addition, consider the following system of equations:

aP + bQ = X
aP' + bQ' = Y

where a and b are integers and P, Q, P', Q', X, Y are points on the curve. If this were a standard system of linear equations, then one could use linear algebraic techniques to easily solve for a and b, assuming that P, Q, X, Y, P', and Q' are known. However, even if a and b are very small, the above system is extremely difficult to solve using the ed25519 addition law. For example, if a = 1 and b = 1, we have

( (xP·yQ + yP·xQ) / (1 + d·xP·xQ·yP·yQ) , (yP·yQ + xP·xQ) / (1 − d·xP·xQ·yP·yQ) ) = (xX, yX)
( (xP'·yQ' + yP'·xQ') / (1 + d·xP'·xQ'·yP'·yQ') , (yP'·yQ' + xP'·xQ') / (1 − d·xP'·xQ'·yP'·yQ') ) = (xY, yY)

So in reality, this is a system of 4 nonlinear equations. To convince yourself that it is in fact difficult to figure out a and b, try writing out the above system assuming a = 2, b = 1. It should become clear that the problem is extremely difficult when a and b are chosen to be very large. As of yet, there are no known methods available to efficiently solve this system for large values of a and b.

Consider the following problem. Suppose your friend has a random integer q, and computes qP using the above form of addition. Your friend then tells you the x and y coordinates qP =(x, y), but not what q itself is. Without asking, how do you find out what q is? A naive approach might be to start with P and keep adding P + P + P... until you reach qP (which you will know because you will end up at (x, y)). But if q is very large then this naive approach might take billions of years using modern supercomputers. Based on what mathematicians currently know about the problem and the number of possible q, none of the currently known attacking techniques can, as a general rule, do better in any practical sense than brute force.

In CryptoNote, your secret key is essentially just a very, very large number x (for other considerations, see section 4.3.3; we choose x to be a multiple of 8). There is a special point G on the curve ed25519 called "the base point" of the curve which is used as the starting point to get xG. Your public key is just xG, and you are protected by the above problem from someone using known information to determine the private key.
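To make "your public key is just xG" concrete, here is a matching naive double-and-add scalar multiplication, repeating the same toy addition law so it stands alone (the secret scalar shown is an arbitrary stand-in, and the neutral element of the Edwards group is (0, 1)):

# Toy scalar multiplication nP by double-and-add, reusing the naive addition law.
q = 2**255 - 19
d = (-121665 * pow(121666, -1, q)) % q

def edwards_add(P, Q):
    x1, y1 = P
    x2, y2 = Q
    x3 = (x1*y2 + y1*x2) * pow((1 + d*x1*x2*y1*y2) % q, -1, q) % q
    y3 = (y1*y2 + x1*x2) * pow((1 - d*x1*x2*y1*y2) % q, -1, q) % q
    return (x3, y3)

def scalarmult(n, P):
    """Compute nP; (0, 1) is the identity, so 0P = (0, 1)."""
    R = (0, 1)
    while n > 0:
        if n & 1:
            R = edwards_add(R, P)   # add in the current power-of-two multiple
        P = edwards_add(P, P)       # double
        n >>= 1
    return R

G = (0x216936D3CD6E53FEC0A4E231FDD6DC5C692CC7609525A7B2C9562D608F25D51A,
     4 * pow(5, -1, q) % q)

x = 123456789123456789             # stand-in secret scalar; real keys are much larger
public_key = scalarmult(x, G)      # "your public key is just xG"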

3.2 Relation to Diffie-Hellman

Included in a ring signature are the following equations involving your secret key x:

P = xG
I = x·Hp(P)
rs = qs − cs·x.

Here s is a number giving the index in the group signature to your public key, and Hp(P) is a hash function which deterministically takes the point P to another point P' = x'·G, where x' is another very large uniformly chosen number. The value qs is chosen uniformly at random, and cs is computed using another equation involving random values. The particular hash function used in CryptoNote is Keccak1600, used in other applications such as SHA-3; it is currently considered to be secure ([FIPS]). The CryptoNote use of a single hash function is consistent with the standard procedure of consolidating distinct random oracles (in proofs of security in [FS], for example) into a single strong hash function.

The above equations can be written as follows:

P = xG
P' = x·x'·G
rs = qs − cs·x


Solving the top two equations is equivalent to the ECDH (as outlined in a previous note ([SN])) and is of the same practical difficulty as the ECDLP. Although the equations appear linear, they are in fact highly non-linear, as they use the addition described in 3.1 and above. The third equation (with unknowns qs and x) has the difficulty of finding a random number (either qs or x) in Fq, a very large finite field; this is not feasible. Note that as the third equation has two unknowns, combining it with the previous two equations does not help; an attacker needs to determine at least one of the random numbers qs or x.

3.3 Time Cost to Guess qs or x

Since qs and x are assumed to be random very large numbers in Fq, with q = 2^255 − 19 (they are generated as 32-byte integers), this is equivalent to a 128-bit security level ([BCPM]), which is known to take billions of years to compute with current supercomputers.
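A quick back-of-the-envelope version of that claim (the guess rate here is a deliberately generous assumption, well beyond any machine that currently exists):

# Rough arithmetic behind "billions of years" at a 128-bit security level.
work = 2**128                      # expected brute-force effort
rate = 10**18                      # assumed guesses per second (generous)
seconds_per_year = 60 * 60 * 24 * 365
print(work / (rate * seconds_per_year))   # about 1.1e13 years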

3.4 Review of Proofs in Appendix

In the CryptoNote appendix, there are four proofs of the four basic properties required for security of the one-time ring-signature scheme:

• Linkability (protection against double-spending)
• Exculpability (protection of your secret key)
• Unforgeability (protection against forged ring signatures)
• Anonymity (ability to hide a transaction within other transactions)

These theorems are essentially identical to those in [FS] and show that the ring signature protocol satisfies the above traits. The first theorem shows that only the secret keys corresponding to the public keys included in a group can produce a signature for that group. This relies on the ECDLP for the solution of two simultaneous (non-linear) elliptic curve equations, which, as explained in 3.2, is practically unsolvable. The second theorem uses the same reasoning, but shows that in order to create a fake signature that passes verification, one would need to be able to solve the ECDLP. The third and fourth theorems are taken directly from [FS].

4 One-Time Ring Signatures (Application)

To understand how CryptoNote is implementing the One-Time Ring signatures, I built a model in Python of Crypto-ops.cpp and Crypto.cpp from the Monero source code using naive Twisted Edwards Curve operations (taken from code by Bernstein), rather than the apparently reasonably optimized operations existing in the CryptoNote code. Functions are explained in the code comments below. Using the model will produce a working ring signature that differs slightly from the Monero ring signatures only because of hashing and packing differences between the used libraries. The full code is hosted at the following address: https://github.com/monero-project/mininero

Note that most of the important helper functions in crypto-ops.cpp in the CryptoNote source are pulled from the reference implementation of Curve25519. This reference implementation was coded by Matthew Dempsky (Mochi Media, now Google)[2].

In addition, after comparing the python code to the paper, and in turn comparing the python code to the actual Monero source, it is fairly easy to see that functions like generate_ring_sig are all doing what they are supposed to, based on the protocol described in the whitepaper. For example, here is the ring signature generation algorithm used in the CryptoNote source:

Algorithm 1 Ring Signatures

i ← 0
while i < numkeys do
    if i = s then
        k ← random Fq element
        Li ← k·G
        Ri ← k·Hp(Pi)
    else
        k1 ← random Fq element
        k2 ← random Fq element
        Li ← k1·Pi + k2·G
        Ri ← k1·I + k2·Hp(Pi)
        ci ← k1
        ri ← k2
    end if
    i ← i + 1
end while
h ← Hs(prefix + all Li + all Ri)
cs ← h − Σ(i ≠ s) ci
rs ← k − x·cs
return (I, {ci}, {ri})


Comparing this with [CN] shows that it agrees with the whitepaper. Similarly, here is the algorithm used in the CryptoNote source to verify ring signatures:

Algorithm 2 VER

i ← 0
while i < numkeys do
    L'i ← ci·Pi + ri·G
    R'i ← ri·Hp(Pi) + ci·I
    i ← i + 1
end while
h ← Hs(prefix + all L'i + all R'i)
h ← h − Σ ci
return (h == 0 (mod q))
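To see how Algorithms 1 and 2 fit together, here is a deliberately simplified Python sketch of the same control flow. It swaps ed25519 point arithmetic for ordinary modular arithmetic in a toy group and uses SHA-256 in place of Keccak1600, so it has none of the security of the real scheme and all the names are illustrative; it only shows why the choice of cs and rs makes the verification sums close.

# Toy model of the GEN (Algorithm 1) and VER (Algorithm 2) control flow.
# The "group" is just integers mod q, so this has NO cryptographic security;
# it only demonstrates the ring-closing algebra.
import hashlib, random

q = 2**255 - 19          # stand-in modulus
G = 1                    # stand-in base point

def Hs(*vals):           # hash-to-scalar (SHA-256 stand-in for Keccak1600)
    data = b"".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def Hp(P):               # hash-to-point stand-in
    return Hs("point", P)

def gen(prefix, pubkeys, x, s):
    """Algorithm 1: sign with secret key x hidden at index s among pubkeys."""
    n = len(pubkeys)
    I = x * Hp(pubkeys[s]) % q                      # key image I = x*Hp(Ps)
    c, r, L, R = [0]*n, [0]*n, [0]*n, [0]*n
    for i in range(n):
        if i == s:
            k = random.randrange(q)
            L[i] = k * G % q
            R[i] = k * Hp(pubkeys[i]) % q
        else:
            c[i] = random.randrange(q)              # k1 in the pseudocode
            r[i] = random.randrange(q)              # k2 in the pseudocode
            L[i] = (c[i] * pubkeys[i] + r[i] * G) % q
            R[i] = (c[i] * I + r[i] * Hp(pubkeys[i])) % q
    h = Hs(prefix, *L, *R)
    c[s] = (h - sum(c)) % q                         # cs = h - sum of the other ci
    r[s] = (k - x * c[s]) % q                       # rs = k - x*cs
    return I, c, r

def ver(prefix, pubkeys, I, c, r):
    """Algorithm 2: recompute L'i, R'i and check the challenges sum to the hash."""
    n = len(pubkeys)
    L = [(c[i] * pubkeys[i] + r[i] * G) % q for i in range(n)]
    R = [(c[i] * I + r[i] * Hp(pubkeys[i])) % q for i in range(n)]
    return (Hs(prefix, *L, *R) - sum(c)) % q == 0

# Example: three decoy keys plus the real signer at index 2.
secrets = [random.randrange(q) for _ in range(4)]
pubkeys = [x * G % q for x in secrets]
I, c, r = gen("tx prefix", pubkeys, secrets[2], 2)
assert ver("tx prefix", pubkeys, I, c, r)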

4.1 Important Crypto-ops Functions

Descriptions of important functions in Crypto-ops.cpp. Even more references and information is given in the comments in the MiniNero.py code linked above.

[2]http://nacl.cr.yp.to/

4.1.1 ge_frombytes_vartime

Takes as input some data and converts it to a point on ed25519. For a reference of the equation used to recover the point, x = u·v^3·(u·v^7)^((q−5)/8), see ([BBJLP], section 5).

4.1.2 ge_fromfe_frombytesvartime

Similar to the above, but compressed in another form.

4.1.3 ge_double_scalarmult_base_vartime

Takes as inputs two integers a and b and a point A on ed25519 and returns the point aA + bG, where G is the ed25519 base point. Used for the ring signatures when computing, for example, Li with i ≠ s, as in ([CN], 4.4).

4.1.4 ge_double_scalarmult_vartime

Takes as inputs two integers a and b and two points A and B on ed25519 and outputs aA + bB. Used, for example, when computing the Ri in the ring signatures with i ≠ s ([CN], 4.4).

4.1.5 ge_scalarmult

Given a point A on ed25519 and an integer a, this computes the point aA. Used for example when computing Li and Ri when i = s.

4.1.6 ge_scalarmult_base

Takes as input an integer a and computes aG, where G is the ed25519 base point.

4.1.7 ge_p1p1_to_p2

There are different representations of curve points for ed25519; this converts between them. See MiniNero for more reference.

4.1.8 ge_p2_dbl

This takes a point in the “p2” representation and doubles it.

4.1.9 ge_p3_to_p2

Takes a point in the “p3” representation on ed25519 and turns it into a point in the “p2” representation.

4.1.10 ge_mul8

This takes a point A on ed25519 and returns 8A.

4.1.11 sc_reduce

Takes a 64-byte integer and outputs the lowest 32 bytes modulo the prime q. This is not a CryptoNote-specific function, but comes from the standard ed25519 library.

4.1.12 sc_reduce32

Takes a 32-byte integer and outputs the integer modulo q. Same code as above, except skipping the 64-to-32 byte step.

4.1.13 sc_mulsub

Takes three integers a, b, c in Fq and returns c − ab modulo q.
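As a one-line model of that operation on plain integers (the real function works on packed 32-byte scalars, so this is only the arithmetic, with q as the document's stand-in modulus):

# Toy model of sc_mulsub: (c - a*b) reduced modulo q.
q = 2**255 - 19

def sc_mulsub(a: int, b: int, c: int) -> int:
    return (c - a * b) % q

# This is the shape of the last step of Algorithm 1, rs <- k - x*cs, i.e. sc_mulsub(cs, x, k).
print(sc_mulsub(3, 5, 7))   # (7 - 15) mod q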

4.2 Important Hashing Functions

4.2.1 cn_fast_hash

Takes data and returns the Keccak1600 hash of the data.

4.3 Crypto.cpp Functions

4.3.1 random_scalar

Generates a 64-byte integer and then reduces it to a 32 byte integer modulo q for 128-bit security as described in section 3.3.

4.3.2 hash_to_scalar

Inputs data (for example, a point P on ed25519) and outputs Hs (P ), which is the Keccak1600 hash of the data. The function then converts the hashed data to a 32-byte integer modulo q.
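A hedged sketch of the hash-then-reduce idea (hashlib's sha3_256 is only a stand-in here; cn_fast_hash is Keccak with the original padding, which is a different function, and the byte-order convention is an assumption of this sketch):

# Sketch of hash_to_scalar: hash the input, interpret the digest as an integer, reduce mod q.
import hashlib

q = 2**255 - 19

def hash_to_scalar(data: bytes) -> int:
    digest = hashlib.sha3_256(data).digest()     # 32-byte digest (stand-in for Keccak1600)
    return int.from_bytes(digest, "little") % q  # interpret as an integer and reduce

print(hash_to_scalar(b"some point bytes"))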

4.3.3 generate_keys

Returns a secret key and public key pair, using random_scalar (as described above) to get the secret key. Note that, as described in [Bern], the key set for ed25519 actually is only multiples of 8 in Fq, and hence ge_scalarmult_base includes a ge_mul8 to ensure the secret key is in the key set. This prevents transaction malleability attacks as described in ([Bern], cf. the section on "small subgroup attacks"). This is part of the GEN algorithm as described in ([CN], 4.4).

4.3.4 check_key

Inputs a public key and outputs if the point is on the curve.

4.3.5 secret_key_to_public_key

Inputs a secret key, checks it for some uniformity conditions, and outputs the corresponding public key, which is essentially just 8 times the base point times the point.

4.3.6 hash_to_ec

Inputs a key, hashes it, and then does the equivalent in bitwise operations of multiplying the resulting integer by the base point and then by 8.

4.3.7 generate_key_image

Takes as input a secret key x and public key P , and returns I = xHp (P ), the key image. This is part of the GEN algorithm as described in ([CN], 4.4).

4.3.8 generate_ring_signature

Computes a ring signature, performing SIG as in ([CN], 4.4) given a key image I, a list of n public keys Pi, and a secret index. Essentially there is a loop on i, and if the secret-index is reached, then an if-statement controls the special computation of Li,Ri when i is equal to the secret index. The values ci and ri for the signature are computed throughout the loop and returned along with the image to create the total signature (I,c1, ..., cn,r1, ..., rn) .

4.3.9 check_ring_signature

Runs the VER algorithm in ([CN], 4.4). The verifier uses a given ring signature to compute L'i = ri·G + ci·Pi and R'i = ri·Hp(Pi) + ci·I, and finally to check whether the sum of the ci (for i = 0, ..., n) equals Hs(m, L'0, ..., L'n, R'0, ..., R'n) mod l.

4.3.10 generate_key_derivation

Takes a secret key b and a public key P, and outputs 8·bP. (The 8 being for the purpose of the secret key set, as described in 4.3.3.) This is used in derive_public_key as part of creating one-time addresses.

4.3.11 derivation_to_scalar

Performs Hs(·) as part of generating keys in ([CN], 4.3, 4.4). It hashes an output index together with the point.

4.3.12 derive_public_key

Takes a derivation rA (computed via generate_key_derivation), a point B, and an output index, computes a scalar via derivation_to_scalar, and then computes Hs(rA)·G + B.
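Putting 4.3.10 through 4.3.12 together, the one-time output key has the shape B + Hs(8rA, index)·G, where 8rA = 8aR is the shared secret between sender and recipient. A structural sketch in the same toy modular-arithmetic style as the earlier GEN/VER sketch (no real curve operations; names and the exact hashing convention are illustrative assumptions):

# Structural sketch of one-time (stealth) output key derivation in a toy group.
import hashlib, random

q = 2**255 - 19
G = 1                                        # stand-in base point

def Hs(*vals):                               # hash-to-scalar stand-in
    data = b"".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

# Recipient's keys: view keypair (a, A) and spend keypair (b, B).
a, b = random.randrange(q), random.randrange(q)
A, B = a * G % q, b * G % q

# Sender picks a transaction key r and publishes R = rG alongside the transaction.
r = random.randrange(q)
R = r * G % q
index = 0                                    # output index within the transaction

derivation = 8 * r * A % q                   # generate_key_derivation on the sender side
P_out = (Hs(derivation, index) * G + B) % q  # derive_public_key: the one-time output key

# The recipient recomputes the same derivation from R and the view key a,
# and recognises the output because the derived key matches.
assert P_out == (Hs(8 * a * R % q, index) * G + B) % q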

4.3.13 generate_signature

This takes a prefix, a public key, and a secret key, and generates a standard (not ring) transaction signature (similar to a Bitcoin transaction signature).

4.3.14 check_signature

This checks if a standard (not ring) signature is a valid signature.

5 Conclusion

Despite the ring signature functions in the original CryptoNote source being poorly commented, the code can be traced back to established and used sources, and is relatively straightforward. The Python implementation provided with this review gives further indication of the code's correctness. Furthermore, the elliptic curve mathematics underlying the ring signature scheme has been extremely well-studied; the concept of ring signatures is not novel, even if their application to cryptocurrencies is.

References

BBJLP. Bernstein, Daniel J., et al. "Twisted Edwards curves." Progress in Cryptology - AFRICACRYPT 2008. Springer Berlin Heidelberg, 2008. 389-405.
BCPM. Bos, Joppe W., et al. "Selecting Elliptic Curves for Cryptography: An Efficiency and Security Analysis." IACR Cryptology ePrint Archive 2014 (2014): 130.
Bern. Bernstein, Daniel J. "Curve25519: new Diffie-Hellman speed records." Public Key Cryptography - PKC 2006. Springer Berlin Heidelberg, 2006. 207-228.
CH. Chaum, David, and Eugene van Heyst. "Group signatures." Advances in Cryptology - EUROCRYPT '91. Springer Berlin Heidelberg, 1991.
CN. van Saberhagen, Nicolas. "CryptoNote v 2.0." (2013).
FIPS. NIST. "SHA-3 Standard: Permutation-Based Hash and Extendable-Output Functions." DRAFT FIPS 202 (2014).
Fu. Fujisaki, Eiichiro. "Sub-linear size traceable ring signatures without random oracles." IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences 95.1 (2012): 151-166.
FS. Fujisaki, Eiichiro, and Koutarou Suzuki. "Traceable ring signature." Public Key Cryptography - PKC 2007. Springer Berlin Heidelberg, 2007. 181-200.
IAN. IANIX. http://ianix.com/pub/curve25519-deployment.html
Si. Silverman, Joseph H. The Arithmetic of Elliptic Curves. Vol. 106. Dordrecht: Springer, 2009.
SN. http://lab.monero.cc/pubs/multiple_equations_attack.pdf
legendary
Activity: 2968
Merit: 1198
which no one from Monero will answer:

What you missed in your rage over being kept in line by forum mods is that I answered your question

The question was where you can download the new wallet.

You can download the new wallet from https://github.com/monero-project/bitmonero
sr. member
Activity: 350
Merit: 250
Lol, BCN trolling Monero again at full force. I'll never own a single Bytecoin, and I'll make sure every exchange and every possible but unlikely business that thinks of accepting this scam is aware of the 82% premine controlled by a single entity. All I have to do is point at the proof: https://bitcointalksearch.org/topic/blowing-the-lid-off-the-cryptonotebytecoin-scam-with-the-exception-of-monero-740112
legendary
Activity: 2968
Merit: 1198
You still did not answer my question that you deleted from Reddit, after all the text you post.  Huh

The question was where you can download the new wallet.

You can download the new wallet from https://github.com/monero-project/bitmonero

BTW, your post was deleted by a Forum moderator, not anyone related to Monero. You are obviously doing something wrong here. I suggest you clean up your act before you get banned.

Quote
was deleted by a Bitcoin Forum moderator
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
Monero was always a junkcoin, all they did was hype up the price and then sell their bag.
The manipulated volume on Poloniex alone should tell you to stay away from this scam.

Are you still flogging the same horse? I used to pity you, thinking that you were maybe just a sad loser desperately vying for relevance on an unimportant forum in a corner of the Internet, but now I see you're nothing more than a common troll. So sad, so sad.
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
Why doesn't Trolleros get a forum, like any respectable coin with a community? Probably they don't have such a community?

...

You're not a very bright scammer, are you.

https://forum.getmonero.org
newbie
Activity: 28
Merit: 0
I am pretty darn sure that esoum and dukey are the BCN scammers; the energy and aggressiveness these accounts put into bad-mouthing Monero is a clear sign. I remember that when these guys unleashed the last shitstorm, we started seeing newbie accounts never seen before.

I started to remember the Dash guys and general troll folks Cheesy, but these are clearly agenda-driven newbie campaigners from the BCN criminal group.

You are so lost, my friend ...

You can't see what is going on, can you ...

Everything that I have said is true. Like Monero dev smooth says, I feel obliged to clarify things for the new users of this forum.

Why doesn't Trolleros get a forum, like any respectable coin with a community? Probably they don't have such a community?
G2M
sr. member
Activity: 280
Merit: 250
legendary
Activity: 952
Merit: 1000
Stagnation is Death
I am pretty darn sure that esoum and dukey are the BCN scammers; the energy and aggressiveness these accounts put into bad-mouthing Monero is a clear sign. I remember that when these guys unleashed the last shitstorm, we started seeing newbie accounts never seen before.

I started to remember the Dash guys and general troll folks Cheesy, but these are clearly agenda-driven newbie campaigners from the BCN criminal group.
newbie
Activity: 28
Merit: 0

Did you stop copy paste your straw?

Please comment on the post.

Please compare the development of Monero with the development of the fathers of the code that your coin uses. Oops, sorry, this project was stolen from T_F_T.

Yes, that's when you scammers started. You Trolleros Devs couldn't even create a genesis block, so you stole the project from T_F_T.

Look what another dev thought of your work 6 months ago, and Monero has not changed a thing since then.


As for me - monero is the biggest bubble, because there is only speculation about it, blockchain is unusable with its size, IMO almost everything about monero seems terrible - unusable blockchain size (On average PC, it just can't be a network node), block intervals, tx generation size, fee size, no GUI after almost 1 year since start, totally wrong way of development, looks like just no understanding of CN technology.

"software development and infrastructure creation that is done by The Monero Project. " lol you are confirming my words - XRM devs can`t develop the coin, because of low technology understanding, they just make some poor services around original source code, like closed source web wallets.
There is nothing that monero gives to cryptocurrency world, except the wrong way in understanding the new technology, such a pity.
sr. member
Activity: 432
Merit: 251
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
I've got like another 40 posts to go, but I'm going to take a break for a few minutes and let you think about your life.
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
Continuation of: Git diff from commit e940386f9a8765423ab3dd9e3aabe19a68cba9f9 (thankful_for_today's last commit) to current HEAD, excluding changes in /external so as to exclude libraries and submoduled code, and excluding contrib/ so as to exclude changes made to epee, excluding most comments because apparently nobody that writes code for BCN knows what a comment is anyway, excluding removed lines, excluding empty lines


+bool BootstrapFile::initialize_file()
+{
+  const uint32_t file_magic = blockchain_raw_magic;
+  std::string blob;
+  if (! ::serialization::dump_binary(file_magic, blob))
+  {
+    throw std::runtime_error("Error in serialization of file magic");
+  }
+  *m_raw_data_file << blob;
+  bootstrap::file_info bfi;
+  bfi.major_version = 0;
+  bfi.minor_version = 1;
+  bfi.header_size = header_size;
+  bootstrap::blocks_info bbi;
+  bbi.block_first = 0;
+  bbi.block_last = 0;
+  bbi.block_last_pos = 0;
+  buffer_type buffer2;
+  boost::iostreams::stream>* output_stream_header;
+  output_stream_header = new boost::iostreams::stream>(buffer2);
+  uint32_t bd_size = 0;
+  blobdata bd = t_serializable_object_to_blob(bfi);
+  LOG_PRINT_L1("bootstrap::file_info size: " << bd.size());
+  bd_size = bd.size();
+  if (! ::serialization::dump_binary(bd_size, blob))
+  {
+    throw std::runtime_error("Error in serialization of bootstrap::file_info size");
+  }
+  *output_stream_header << blob;
+  *output_stream_header << bd;
+  bd = t_serializable_object_to_blob(bbi);
+  LOG_PRINT_L1("bootstrap::blocks_info size: " << bd.size());
+  bd_size = bd.size();
+  if (! ::serialization::dump_binary(bd_size, blob))
+  {
+    throw std::runtime_error("Error in serialization of bootstrap::blocks_info size");
+  }
+  *output_stream_header << blob;
+  *output_stream_header << bd;
+  output_stream_header->flush();
+  *output_stream_header << std::string(header_size-buffer2.size(), 0); // fill in rest with null bytes
+  output_stream_header->flush();
+  std::copy(buffer2.begin(), buffer2.end(), std::ostreambuf_iterator(*m_raw_data_file));
+  return true;
+}
+void BootstrapFile::flush_chunk()
+{
+  m_output_stream->flush();
+  uint32_t chunk_size = m_buffer.size();
+  // LOG_PRINT_L0("chunk_size " << chunk_size);
+  if (chunk_size > BUFFER_SIZE)
+  {
+    LOG_PRINT_L0("WARNING: chunk_size " << chunk_size << " > BUFFER_SIZE " << BUFFER_SIZE);
+  }
+  std::string blob;
+  if (! ::serialization::dump_binary(chunk_size, blob))
+  {
+    throw std::runtime_error("Error in serialization of chunk size");
+  }
+  *m_raw_data_file << blob;
+  if (m_max_chunk < chunk_size)
+  {
+    m_max_chunk = chunk_size;
+  }
+  long pos_before = m_raw_data_file->tellp();
+  std::copy(m_buffer.begin(), m_buffer.end(), std::ostreambuf_iterator(*m_raw_data_file));
+  m_raw_data_file->flush();
+  long pos_after = m_raw_data_file->tellp();
+  long num_chars_written = pos_after - pos_before;
+  if (static_cast(num_chars_written) != chunk_size)
+  {
+    LOG_PRINT_RED_L0("Error writing chunk:  height: " << m_cur_height << "  chunk_size: " << chunk_size << "  num chars written: " << num_chars_written);
+    throw std::runtime_error("Error writing chunk");
+  }
+  m_buffer.clear();
+  delete m_output_stream;
+  m_output_stream = new boost::iostreams::stream>(m_buffer);
+  LOG_PRINT_L1("flushed chunk:  chunk_size: " << chunk_size);
+}
+void BootstrapFile::write_block(block& block)
+{
+  bootstrap::block_package bp;
+  bp.block = block;
+  std::vector txs;
+  uint64_t block_height = boost::get(block.miner_tx.vin.front()).height;
+  // now add all regular transactions
+  for (const auto& tx_id : block.tx_hashes)
+  {
+    if (tx_id == null_hash)
+    {
+      throw std::runtime_error("Aborting: tx == null_hash");
+    }
+    const transaction* tx = m_blockchain_storage->get_tx(tx_id);
+    transaction tx = m_blockchain_storage->get_db().get_tx(tx_id);
+    if(tx == NULL)
+    {
+      if (! m_tx_pool)
+        throw std::runtime_error("Aborting: tx == NULL, so memory pool required to get tx, but memory pool isn't enabled");
+      else
+      {
+        transaction tx;
+        if(m_tx_pool->get_transaction(tx_id, tx))
+          txs.push_back(tx);
+        else
+          throw std::runtime_error("Aborting: tx not found in pool");
+      }
+    }
+    else
+      txs.push_back(*tx);
+    txs.push_back(tx);
+  }
+  // these non-coinbase txs will be serialized using this structure
+  bp.txs = txs;
+  // These three attributes are currently necessary for a fast import that adds blocks without verification.
+  bool include_extra_block_data = true;
+  if (include_extra_block_data)
+  {
+    size_t block_size = m_blockchain_storage->get_block_size(block_height);
+    difficulty_type cumulative_difficulty = m_blockchain_storage->get_block_cumulative_difficulty(block_height);
+    uint64_t coins_generated = m_blockchain_storage->get_block_coins_generated(block_height);
+    size_t block_size = m_blockchain_storage->get_db().get_block_size(block_height);
+    difficulty_type cumulative_difficulty = m_blockchain_storage->get_db().get_block_cumulative_difficulty(block_height);
+    uint64_t coins_generated = m_blockchain_storage->get_db().get_block_already_generated_coins(block_height);
+    bp.block_size = block_size;
+    bp.cumulative_difficulty = cumulative_difficulty;
+    bp.coins_generated = coins_generated;
+  }
+  blobdata bd = t_serializable_object_to_blob(bp);
+  m_output_stream->write((const char*)bd.data(), bd.size());
+}
+bool BootstrapFile::close()
+{
+  if (m_raw_data_file->fail())
+    return false;
+  m_raw_data_file->flush();
+  delete m_output_stream;
+  delete m_raw_data_file;
+  return true;
+}
+bool BootstrapFile::store_blockchain_raw(blockchain_storage* _blockchain_storage, tx_memory_pool* _tx_pool, boost::filesystem::path& output_dir, uint64_t requested_block_stop)
+bool BootstrapFile::store_blockchain_raw(Blockchain* _blockchain_storage, tx_memory_pool* _tx_pool, boost::filesystem::path& output_dir, uint64_t requested_block_stop)
+{
+  uint64_t num_blocks_written = 0;
+  m_max_chunk = 0;
+  m_blockchain_storage = _blockchain_storage;
+  m_tx_pool = _tx_pool;
+  uint64_t progress_interval = 100;
+  LOG_PRINT_L0("Storing blocks raw data...");
+  if (!BootstrapFile::open_writer(output_dir))
+  {
+    LOG_PRINT_RED_L0("failed to open raw file for write");
+    return false;
+  }
+  block b;
+  // block_start, block_stop use 0-based height. m_height uses 1-based height. So to resume export
+  // from last exported block, block_start doesn't need to add 1 here, as it's already at the next
+  // height.
+  uint64_t block_start = m_height;
+  uint64_t block_stop = 0;
+  LOG_PRINT_L0("source blockchain height: " <<  m_blockchain_storage->get_current_blockchain_height()-1);
+  if ((requested_block_stop > 0) && (requested_block_stop < m_blockchain_storage->get_current_blockchain_height()))
+  {
+    LOG_PRINT_L0("Using requested block height: " << requested_block_stop);
+    block_stop = requested_block_stop;
+  }
+  else
+  {
+    block_stop = m_blockchain_storage->get_current_blockchain_height() - 1;
+    LOG_PRINT_L0("Using block height of source blockchain: " << block_stop);
+  }
+  for (m_cur_height = block_start; m_cur_height <= block_stop; ++m_cur_height)
+  {
+    // this method's height refers to 0-based height (genesis block = height 0)
+    crypto::hash hash = m_blockchain_storage->get_block_id_by_height(m_cur_height);
+    m_blockchain_storage->get_block_by_hash(hash, b);
+    write_block(b);
+    if (m_cur_height % NUM_BLOCKS_PER_CHUNK == 0) {
+      flush_chunk();
+      num_blocks_written += NUM_BLOCKS_PER_CHUNK;
+    }
+    if (m_cur_height % progress_interval == 0) {
+      std::cout << refresh_string;
+      std::cout << "block " << m_cur_height << "/" << block_stop << std::flush;
+    }
+  }
+  // NOTE: use of NUM_BLOCKS_PER_CHUNK is a placeholder in case multi-block chunks are later supported.
+  if (m_cur_height % NUM_BLOCKS_PER_CHUNK != 0)
+  {
+    flush_chunk();
+  }
+  // print message for last block, which may not have been printed yet due to progress_interval
+  std::cout << refresh_string;
+  std::cout << "block " << m_cur_height-1 << "/" << block_stop << ENDL;
+  LOG_PRINT_L0("Number of blocks exported: " << num_blocks_written);
+  if (num_blocks_written > 0)
+    LOG_PRINT_L0("Largest chunk: " << m_max_chunk << " bytes");
+  return BootstrapFile::close();
+}
+uint64_t BootstrapFile::seek_to_first_chunk(std::ifstream& import_file)
+{
+  uint32_t file_magic;
+  std::string str1;
+  char buf1[2048];
+  import_file.read(buf1, sizeof(file_magic));
+  if (! import_file)
+    throw std::runtime_error("Error reading expected number of bytes");
+  str1.assign(buf1, sizeof(file_magic));
+  if (! ::serialization::parse_binary(str1, file_magic))
+    throw std::runtime_error("Error in deserialization of file_magic");
+  if (file_magic != blockchain_raw_magic)
+  {
+    LOG_PRINT_RED_L0("bootstrap file not recognized");
+    throw std::runtime_error("Aborting");
+  }
+  else
+    LOG_PRINT_L0("bootstrap file recognized");
+  uint32_t buflen_file_info;
+  import_file.read(buf1, sizeof(buflen_file_info));
+  str1.assign(buf1, sizeof(buflen_file_info));
+  if (! import_file)
+    throw std::runtime_error("Error reading expected number of bytes");
+  if (! ::serialization::parse_binary(str1, buflen_file_info))
+    throw std::runtime_error("Error in deserialization of buflen_file_info");
+  LOG_PRINT_L1("bootstrap::file_info size: " << buflen_file_info);
+  if (buflen_file_info > sizeof(buf1))
+    throw std::runtime_error("Error: bootstrap::file_info size exceeds buffer size");
+  import_file.read(buf1, buflen_file_info);
+  if (! import_file)
+    throw std::runtime_error("Error reading expected number of bytes");
+  str1.assign(buf1, buflen_file_info);
+  bootstrap::file_info bfi;
+  if (! ::serialization::parse_binary(str1, bfi))
+    throw std::runtime_error("Error in deserialization of bootstrap::file_info");
+  LOG_PRINT_L0("bootstrap file v" << unsigned(bfi.major_version) << "." << unsigned(bfi.minor_version));
+  LOG_PRINT_L0("bootstrap magic size: " << sizeof(file_magic));
+  LOG_PRINT_L0("bootstrap header size: " << bfi.header_size)
+  uint64_t full_header_size = sizeof(file_magic) + bfi.header_size;
+  import_file.seekg(full_header_size);
+  return full_header_size;
+}
+uint64_t BootstrapFile::count_blocks(const std::string& import_file_path)
+{
+  boost::filesystem::path raw_file_path(import_file_path);
+  boost::system::error_code ec;
+  if (!boost::filesystem::exists(raw_file_path, ec))
+  {
+    LOG_PRINT_L0("bootstrap file not found: " << raw_file_path);
+    throw std::runtime_error("Aborting");
+  }
+  std::ifstream import_file;
+  import_file.open(import_file_path, std::ios_base::binary | std::ifstream::in);
+  uint64_t h = 0;
+  if (import_file.fail())
+  {
+    LOG_PRINT_L0("import_file.open() fail");
+    throw std::runtime_error("Aborting");
+  }
+  uint64_t full_header_size; // 4 byte magic + length of header structures
+  full_header_size = seek_to_first_chunk(import_file);
+  LOG_PRINT_L0("Scanning blockchain from bootstrap file...");
+  block b;
+  bool quit = false;
+  uint64_t bytes_read = 0;
+  int progress_interval = 10;
+  std::string str1;
+  char buf1[2048];
+  while (! quit)
+  {
+    uint32_t chunk_size;
+    import_file.read(buf1, sizeof(chunk_size));
+    if (!import_file) {
+      std::cout << refresh_string;
+      LOG_PRINT_L1("End of file reached");
+      quit = true;
+      break;
+    }
+    h += NUM_BLOCKS_PER_CHUNK;
+    if ((h-1) % progress_interval == 0)
+    {
+      std::cout << "\r" << "block height: " << h-1 <<
+        "    " <<
+        std::flush;
+    }
+    bytes_read += sizeof(chunk_size);
+    str1.assign(buf1, sizeof(chunk_size));
+    if (! ::serialization::parse_binary(str1, chunk_size))
+      throw std::runtime_error("Error in deserialization of chunk_size");
+    LOG_PRINT_L3("chunk_size: " << chunk_size);
+    if (chunk_size > BUFFER_SIZE)
+    {
+      std::cout << refresh_string;
+      LOG_PRINT_L0("WARNING: chunk_size " << chunk_size << " > BUFFER_SIZE " << BUFFER_SIZE
+          << "  height: " << h-1);
+      throw std::runtime_error("Aborting: chunk size exceeds buffer size");
+    }
+    if (chunk_size > 100000)
+    {
+      std::cout << refresh_string;
+      LOG_PRINT_L0("NOTE: chunk_size " << chunk_size << " > 100000" << "  height: "
+          << h-1);
+    }
+    else if (chunk_size <= 0) {
+      std::cout << refresh_string;
+      LOG_PRINT_L0("ERROR: chunk_size " << chunk_size << " <= 0" << "  height: " << h-1);
+      throw std::runtime_error("Aborting");
+    }
+    // skip to next expected block size value
+    import_file.seekg(chunk_size, std::ios_base::cur);
+    if (! import_file) {
+      std::cout << refresh_string;
+      LOG_PRINT_L0("ERROR: unexpected end of file: bytes read before error: "
+          << import_file.gcount() << " of chunk_size " << chunk_size);
+      throw std::runtime_error("Aborting");
+    }
+    bytes_read += chunk_size;
+    // std::cout << refresh_string;
+    LOG_PRINT_L3("Number bytes scanned: " << bytes_read);
+  }
+  import_file.close();
+  std::cout << ENDL;
+  std::cout << "Done scanning bootstrap file" << ENDL;
+  std::cout << "Full header length: " << full_header_size << " bytes" << ENDL;
+  std::cout << "Scanned for blocks: " << bytes_read << " bytes" << ENDL;
+  std::cout << "Total:              " << full_header_size + bytes_read << " bytes" << ENDL;
+  std::cout << "Number of blocks: " << h << ENDL;
+  std::cout << ENDL;
+  // NOTE: h is the number of blocks.
+  // Note that a block's stored height is zero-based, but parts of the code use
+  // one-based height.
+  return h;
+}
diff --git a/src/blockchain_utilities/bootstrap_file.h b/src/blockchain_utilities/bootstrap_file.h
new file mode 100644
index 0000000..5fb8a1d
+++ b/src/blockchain_utilities/bootstrap_file.h
@@ -0,0 +1,116 @@
+using namespace cryptonote;
+class BootstrapFile
+{
+public:
+  uint64_t count_blocks(const std::string& dir_path);
+  uint64_t seek_to_first_chunk(std::ifstream& import_file);
+  bool store_blockchain_raw(cryptonote::blockchain_storage* cs, cryptonote::tx_memory_pool* txp,
+      boost::filesystem::path& output_dir, uint64_t use_block_height=0);
+  bool store_blockchain_raw(cryptonote::Blockchain* cs, cryptonote::tx_memory_pool* txp,
+      boost::filesystem::path& output_dir, uint64_t use_block_height=0);
+protected:
+  blockchain_storage* m_blockchain_storage;
+  Blockchain* m_blockchain_storage;
+  tx_memory_pool* m_tx_pool;
+  typedef std::vector buffer_type;
+  std::ofstream * m_raw_data_file;
+  buffer_type m_buffer;
+  boost::iostreams::stream>* m_output_stream;
+  // open export file for write
+  bool open_writer(const boost::filesystem::path& dir_path);
+  bool initialize_file();
+  bool close();
+  void write_block(block& block);
+  void flush_chunk();
+private:
+  uint64_t m_height;
+  uint64_t m_cur_height; // tracks current height during export
+  uint32_t m_max_chunk;
+};
diff --git a/src/blockchain_utilities/bootstrap_serialization.h b/src/blockchain_utilities/bootstrap_serialization.h
new file mode 100644
index 0000000..6fa9493
+++ b/src/blockchain_utilities/bootstrap_serialization.h
@@ -0,0 +1,88 @@
+namespace cryptonote
+{
+  namespace bootstrap
+  {
+    struct file_info
+    {
+      uint8_t  major_version;
+      uint8_t  minor_version;
+      uint32_t header_size;
+      BEGIN_SERIALIZE_OBJECT()
+        FIELD(major_version);
+        FIELD(minor_version);
+        VARINT_FIELD(header_size);
+      END_SERIALIZE()
+    };
+    struct blocks_info
+    {
+      // block heights of file's first and last blocks, zero-based indexes
+      uint64_t block_first;
+      uint64_t block_last;
+      // file position, for directly reading last block
+      uint64_t block_last_pos;
+      BEGIN_SERIALIZE_OBJECT()
+        VARINT_FIELD(block_first);
+        VARINT_FIELD(block_last);
+        VARINT_FIELD(block_last_pos);
+      END_SERIALIZE()
+    };
+    struct block_package
+    {
+      cryptonote::block block;
+      std::vector txs;
+      size_t block_size;
+      difficulty_type cumulative_difficulty;
+      uint64_t coins_generated;
+      BEGIN_SERIALIZE()
+        FIELD(block)
+        FIELD(txs)
+        VARINT_FIELD(block_size)
+        VARINT_FIELD(cumulative_difficulty)
+        VARINT_FIELD(coins_generated)
+      END_SERIALIZE()
+    };
+  }
+}
diff --git a/src/blockchain_utilities/fake_core.h b/src/blockchain_utilities/fake_core.h
new file mode 100644
index 0000000..5eda504
+++ b/src/blockchain_utilities/fake_core.h
@@ -0,0 +1,165 @@
+using namespace cryptonote;
+struct fake_core_lmdb
+{
+  Blockchain m_storage;
+  tx_memory_pool m_pool;
+  bool support_batch;
+  bool support_add_block;
+  // for multi_db_runtime:
+  fake_core_lmdb(const boost::filesystem::path &path, const bool use_testnet=false, const bool do_batch=true, const int mdb_flags=0) : m_pool(&m_storage), m_storage(m_pool)
+  // for multi_db_compile:
+  fake_core_lmdb(const boost::filesystem::path &path, const bool use_testnet=false, const bool do_batch=true, const int mdb_flags=0) : m_pool(m_storage), m_storage(m_pool)
+  {
+    m_pool.init(path.string());
+    BlockchainDB* db = new BlockchainLMDB();
+    boost::filesystem::path folder(path);
+    folder /= db->get_db_name();
+    LOG_PRINT_L0("Loading blockchain from folder " << folder.string() << " ...");
+    const std::string filename = folder.string();
+    try
+    {
+      db->open(filename, mdb_flags);
+    }
+    catch (const std::exception& e)
+    {
+      LOG_PRINT_L0("Error opening database: " << e.what());
+      throw;
+    }
+    m_storage.init(db, use_testnet);
+    if (do_batch)
+      m_storage.get_db().set_batch_transactions(do_batch);
+    support_batch = true;
+    support_add_block = true;
+  }
+  ~fake_core_lmdb()
+  {
+    m_storage.deinit();
+  }
+  uint64_t add_block(const block& blk
+                            , const size_t& block_size
+                            , const difficulty_type& cumulative_difficulty
+                            , const uint64_t& coins_generated
+                            , const std::vector& txs
+                            )
+  {
+    return m_storage.get_db().add_block(blk, block_size, cumulative_difficulty, coins_generated, txs);
+  }
+  void batch_start(uint64_t batch_num_blocks = 0)
+  {
+    m_storage.get_db().batch_start(batch_num_blocks);
+  }
+  void batch_stop()
+  {
+    m_storage.get_db().batch_stop();
+  }
+};
+struct fake_core_memory
+{
+  blockchain_storage m_storage;
+  tx_memory_pool m_pool;
+  bool support_batch;
+  bool support_add_block;
+  // for multi_db_runtime:
+  fake_core_memory(const boost::filesystem::path &path, const bool use_testnet=false) : m_pool(&m_storage), m_storage(m_pool)
+  // for multi_db_compile:
+  fake_core_memory(const boost::filesystem::path &path, const bool use_testnet=false) : m_pool(m_storage), m_storage(&m_pool)
+  {
+    m_pool.init(path.string());
+    m_storage.init(path.string(), use_testnet);
+    support_batch = false;
+    support_add_block = false;
+  }
+  ~fake_core_memory()
+  {
+    LOG_PRINT_L3("fake_core_memory() destructor called - want to see it ripple down");
+    m_storage.deinit();
+  }
+  uint64_t add_block(const block& blk
+                            , const size_t& block_size
+                            , const difficulty_type& cumulative_difficulty
+                            , const uint64_t& coins_generated
+                            , const std::vector& txs
+                            )
+  {
+    // TODO:
+    // would need to refactor handle_block_to_main_chain() to have a direct add_block() method like Blockchain class
+    throw std::runtime_error("direct add_block() method not implemented for in-memory db");
+    return 2;
+  }
+  void batch_start(uint64_t batch_num_blocks = 0)
+  {
+    LOG_PRINT_L0("WARNING: [batch_start] opt_batch set, but this database doesn't support/need transactions - ignoring");
+  }
+  void batch_stop()
+  {
+    LOG_PRINT_L0("WARNING: [batch_stop] opt_batch set, but this database doesn't support/need transactions - ignoring");
+  }
+};
diff --git a/src/blocks/CMakeLists.txt b/src/blocks/CMakeLists.txt
new file mode 100644
index 0000000..4020132
+++ b/src/blocks/CMakeLists.txt
@@ -0,0 +1,38 @@
+if(APPLE)
+    add_library(blocks STATIC blockexports.c)
+    set_target_properties(blocks PROPERTIES LINKER_LANGUAGE C)
+else()
+   add_custom_command(OUTPUT blocks.o COMMAND cd ${CMAKE_CURRENT_SOURCE_DIR} && ld -r -b binary -o ${CMAKE_CURRENT_BINARY_DIR}/blocks.o blocks.dat)
+    add_library(blocks STATIC blocks.o blockexports.c)
+    set_target_properties(blocks PROPERTIES LINKER_LANGUAGE C)
+endif()
diff --git a/src/blocks/blockexports.c b/src/blocks/blockexports.c
new file mode 100644
index 0000000..cea72b2
+++ b/src/blocks/blockexports.c
@@ -0,0 +1,48 @@
+extern const struct mach_header _mh_execute_header;
+extern const struct mach_header_64 _mh_execute_header;
+const unsigned char *get_blocks_dat_start()
+{
+    size_t size;
+    return getsectiondata(&_mh_execute_header, "__DATA", "__blocks_dat", &size);
+}
+size_t get_blocks_dat_size()
+{
+    size_t size;
+    getsectiondata(&_mh_execute_header, "__DATA", "__blocks_dat", &size);
+    return size;
+}
+extern const unsigned char _binary_blocks_start[];
+extern const unsigned char _binary_blocks_end[];
+const unsigned char *get_blocks_dat_start(void)
+{
+   return _binary_blocks_start;
+}
+size_t get_blocks_dat_size(void)
+{
+   return (size_t) (_binary_blocks_end - _binary_blocks_start);
+}
diff --git a/src/blocks/blocks.dat b/src/blocks/blocks.dat
new file mode 100644
index 0000000..e69de29
diff --git a/src/blocks/blocks.h b/src/blocks/blocks.h
new file mode 100644
index 0000000..76a08c8
+++ b/src/blocks/blocks.h
@@ -0,0 +1,16 @@
+extern "C" {
+const unsigned char *get_blocks_dat_start();
+size_t get_blocks_dat_size();
+}
diff --git a/src/blocks/checkpoints.dat b/src/blocks/checkpoints.dat
new file mode 100644
index 0000000..249956e
Binary files /dev/null and b/src/blocks/checkpoints.dat differ
diff --git a/src/common/CMakeLists.txt b/src/common/CMakeLists.txt
new file mode 100644
index 0000000..b1db006
+++ b/src/common/CMakeLists.txt
@@ -0,0 +1,69 @@
+set(common_sources
+  base58.cpp
+  command_line.cpp
+  dns_utils.cpp
+  util.cpp
+  i18n.cpp)
+set(common_headers)
+set(common_private_headers
+  base58.h
+  boost_serialization_helper.h
+  command_line.h
+  dns_utils.h
+  http_connection.h
+  int-util.h
+  pod-class.h
+  rpc_client.h
+  scoped_message_writer.h
+  unordered_containers_boost_serialization.h
+  util.h
+  varint.h
+  i18n.h)
+bitmonero_private_headers(common
+  ${common_private_headers})
+bitmonero_add_library(common
+  ${common_sources}
+  ${common_headers}
+  ${common_private_headers})
+target_link_libraries(common
+  LINK_PRIVATE
+    crypto
+    ${UNBOUND_LIBRARY}
+    ${Boost_DATE_TIME_LIBRARY}
+    ${Boost_FILESYSTEM_LIBRARY}
+    ${Boost_SYSTEM_LIBRARY}
+    ${EXTRA_LIBRARIES})
diff --git a/src/common/base58.cpp b/src/common/base58.cpp
index 454c0db..6adc46c 100644
+++ b/src/common/base58.cpp
@@ -1,7 +1,32 @@
 #include "base58.h"
@@ -110,7 +135,7 @@ namespace tools
       void encode_block(const char* block, size_t size, char* res)
       {
+        assert(1 <= size && size <= full_block_size);
         uint64_t num = uint_8be_to_64(reinterpret_cast(block), size);
         int i = static_cast(encoded_block_sizes[size]) - 1;
diff --git a/src/common/base58.h b/src/common/base58.h
index 4055f62..1a9ad1a 100644
+++ b/src/common/base58.h
@@ -1,6 +1,32 @@
 #pragma once
diff --git a/src/common/boost_serialization_helper.h b/src/common/boost_serialization_helper.h
index 74016ae..c108c52 100644
+++ b/src/common/boost_serialization_helper.h
@@ -1,6 +1,32 @@
 #pragma once
@@ -14,15 +40,55 @@ namespace tools
   bool serialize_obj_to_file(t_object& obj, const std::string& file_path)
   {
     TRY_ENTRY();
+    // Need to know HANDLE of file to call FlushFileBuffers
+    HANDLE data_file_handle = ::CreateFile(file_path.c_str(), GENERIC_WRITE, 0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
+    if (INVALID_HANDLE_VALUE == data_file_handle)
+      return false;
+    int data_file_descriptor = _open_osfhandle((intptr_t)data_file_handle, 0);
+    if (-1 == data_file_descriptor)
+    {
+      ::CloseHandle(data_file_handle);
+      return false;
+    }
+    FILE* data_file_file = _fdopen(data_file_descriptor, "wb");
+    if (0 == data_file_file)
+    {
+      // Call CloseHandle is not necessary
+      _close(data_file_descriptor);
+      return false;
+    }
+    // HACK: undocumented constructor, this code may not compile
+    std::ofstream data_file(data_file_file);
+    if (data_file.fail())
+    {
+      // Call CloseHandle and _close are not necessary
+      fclose(data_file_file);
+      return false;
+    }
     std::ofstream data_file;
+    data_file.open(file_path , std::ios_base::binary | std::ios_base::out| std::ios::trunc);
+    if (data_file.fail())
       return false;
     boost::archive::binary_oarchive a(data_file);
     a << obj;
+    if (data_file.fail())
+      return false;
+    data_file.flush();
+    // To make sure the file is fully stored on disk
+    ::FlushFileBuffers(data_file_handle);
+    fclose(data_file_file);
+    return true;
     CATCH_ENTRY_L0("serialize_obj_to_file", false);
   }
diff --git a/src/common/command_line.cpp b/src/common/command_line.cpp
index 0b90345..d2cd75e 100644
+++ b/src/common/command_line.cpp
@@ -1,12 +1,54 @@
 #include "command_line.h"
 namespace command_line
 {
+  std::string input_line(const std::string& prompt)
+  {
+    std::cout << prompt;
+    std::string buf;
+    std::getline(std::cin, buf);
+    return epee::string_tools::trim(buf);
+  }
   const arg_descriptor arg_help = {"help", "Produce help message"};
   const arg_descriptor arg_version = {"version", "Output version information"};
   const arg_descriptor arg_data_dir = {"data-dir", "Specify data directory"};
+  const arg_descriptor arg_testnet_data_dir = {"testnet-data-dir", "Specify testnet data directory"};
+  const arg_descriptor      arg_test_drop_download        = {"test-drop-download", "For net tests: in download, discard ALL blocks instead checking/saving them (very fast)"};
+  const arg_descriptor   arg_test_drop_download_height     = {"test-drop-download-height", "Like test-drop-download but disards only after around certain height", 0};
+  const arg_descriptor       arg_test_dbg_lock_sleep = {"test-dbg-lock-sleep", "Sleep time in ms, defaults to 0 (off), used to debug before/after locking mutex. Values 100 to 1000 are good for tests."};
 }
diff --git a/src/common/command_line.h b/src/common/command_line.h
index 8606537..ae79f0a 100644
+++ b/src/common/command_line.h
@@ -1,6 +1,32 @@
 #pragma once
@@ -14,6 +40,9 @@
 namespace command_line
 {
+  std::string input_line(const std::string& prompt);
   template
   struct arg_descriptor;
@@ -174,4 +203,8 @@ namespace command_line
   extern const arg_descriptor arg_help;
   extern const arg_descriptor arg_version;
   extern const arg_descriptor arg_data_dir;
+  extern const arg_descriptor arg_testnet_data_dir;
+  extern const arg_descriptor      arg_test_drop_download;
+  extern const arg_descriptor   arg_test_drop_download_height;
+  extern const arg_descriptor       arg_test_dbg_lock_sleep;
 }
diff --git a/src/common/dns_utils.cpp b/src/common/dns_utils.cpp
new file mode 100644
index 0000000..e442d3d
+++ b/src/common/dns_utils.cpp
@@ -0,0 +1,347 @@
+using namespace epee;
+namespace bf = boost::filesystem;
+namespace
+{
+/*
+ * The following two functions were taken from unbound-anchor.c, from
+ * the unbound library packaged with this source.  The license and source
+ * can be found in $PROJECT_ROOT/external/unbound
+ */
+/* Cert builtin commented out until it's used, as the compiler complains
+static const char*
+get_builtin_cert(void)
+{
+   return
+"-----BEGIN CERTIFICATE-----\n"
+"MIIDdzCCAl+gAwIBAgIBATANBgkqhkiG9w0BAQsFADBdMQ4wDAYDVQQKEwVJQ0FO\n"
+"TjEmMCQGA1UECxMdSUNBTk4gQ2VydGlmaWNhdGlvbiBBdXRob3JpdHkxFjAUBgNV\n"
+"BAMTDUlDQU5OIFJvb3QgQ0ExCzAJBgNVBAYTAlVTMB4XDTA5MTIyMzA0MTkxMloX\n"
+"DTI5MTIxODA0MTkxMlowXTEOMAwGA1UEChMFSUNBTk4xJjAkBgNVBAsTHUlDQU5O\n"
+"IENlcnRpZmljYXRpb24gQXV0aG9yaXR5MRYwFAYDVQQDEw1JQ0FOTiBSb290IENB\n"
+"MQswCQYDVQQGEwJVUzCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoCggEBAKDb\n"
+"cLhPNNqc1NB+u+oVvOnJESofYS9qub0/PXagmgr37pNublVThIzyLPGCJ8gPms9S\n"
+"G1TaKNIsMI7d+5IgMy3WyPEOECGIcfqEIktdR1YWfJufXcMReZwU4v/AdKzdOdfg\n"
+"ONiwc6r70duEr1IiqPbVm5T05l1e6D+HkAvHGnf1LtOPGs4CHQdpIUcy2kauAEy2\n"
+"paKcOcHASvbTHK7TbbvHGPB+7faAztABLoneErruEcumetcNfPMIjXKdv1V1E3C7\n"
+"MSJKy+jAqqQJqjZoQGB0necZgUMiUv7JK1IPQRM2CXJllcyJrm9WFxY0c1KjBO29\n"
+"iIKK69fcglKcBuFShUECAwEAAaNCMEAwDwYDVR0TAQH/BAUwAwEB/zAOBgNVHQ8B\n"
+"Af8EBAMCAf4wHQYDVR0OBBYEFLpS6UmDJIZSL8eZzfyNa2kITcBQMA0GCSqGSIb3\n"
+"DQEBCwUAA4IBAQAP8emCogqHny2UYFqywEuhLys7R9UKmYY4suzGO4nkbgfPFMfH\n"
+"6M+Zj6owwxlwueZt1j/IaCayoKU3QsrYYoDRolpILh+FPwx7wseUEV8ZKpWsoDoD\n"
+"2JFbLg2cfB8u/OlE4RYmcxxFSmXBg0yQ8/IoQt/bxOcEEhhiQ168H2yE5rxJMt9h\n"
+"15nu5JBSewrCkYqYYmaxyOC3WrVGfHZxVI7MpIFcGdvSb2a1uyuua8l0BKgk3ujF\n"
+"0/wsHNeP22qNyVO+XVBzrM8fk8BSUFuiT/6tZTYXRtEt5aKQZgXbKU5dUF3jT9qg\n"
+"j/Br5BZw3X/zd325TvnswzMC1+ljLzHnQGGk\n"
+"-----END CERTIFICATE-----\n"
+      ;
+}
+*/
+/** return the built in root DS trust anchor */
+static const char*
+get_builtin_ds(void)
+{
+   return
+". IN DS 19036 8 2 49AAC11D7B6F6446702E54A1607371607A1A41855200FD2CE1CDDE32F24E8FB5\n";
+}
+/************************************************************
+ ************************************************************
+ ***********************************************************/
+} // anonymous namespace
+namespace tools
+{
+std::string ipv4_to_string(const char* src)
+{
+  std::stringstream ss;
+  unsigned int bytes[4];
+  for (int i = 0; i < 4; i++)
+  {
+    unsigned char a = src[i];
+    bytes[i] = a;
+  }
+  ss << bytes[0] << "."
+     << bytes[1] << "."
+     << bytes[2] << "."
+     << bytes[3];
+  return ss.str();
+}
+std::string ipv6_to_string(const char* src)
+{
+  std::stringstream ss;
+  unsigned int bytes[8];
+  for (int i = 0; i < 8; i++)
+  {
+    unsigned char a = src[i];
+    bytes[i] = a;
+  }
+  ss << bytes[0] << ":"
+     << bytes[1] << ":"
+     << bytes[2] << ":"
+     << bytes[3] << ":"
+     << bytes[4] << ":"
+     << bytes[5] << ":"
+     << bytes[6] << ":"
+     << bytes[7];
+  return ss.str();
+}
+template<typename type, void (*freefunc)(type*)>
+class scoped_ptr
+{
+public:
+  scoped_ptr():
+    ptr(nullptr)
+  {
+  }
+  scoped_ptr(type *p):
+    ptr(p)
+  {
+  }
+  ~scoped_ptr()
+  {
+    freefunc(ptr);
+  }
+  operator type *() { return ptr; }
+  type **operator &() { return &ptr; }
+  type *operator->() { return ptr; }
+  operator const type*() const { return &ptr; }
+private:
+  type* ptr;
+};
+typedef class scoped_ptr<ub_result, ub_resolve_free> ub_result_ptr;
+static void freestring(char *ptr) { free(ptr); }
+typedef class scoped_ptr<char, freestring> string_ptr;
+struct DNSResolverData
+{
+  ub_ctx* m_ub_context;
+};
+DNSResolver::DNSResolver() : m_data(new DNSResolverData())
+{
+  // init libunbound context
+  m_data->m_ub_context = ub_ctx_create();
+  // look for "/etc/resolv.conf" and "/etc/hosts" or platform equivalent
+  ub_ctx_resolvconf(m_data->m_ub_context, NULL);
+  ub_ctx_hosts(m_data->m_ub_context, NULL);
+   #ifdef DEVELOPER_LIBUNBOUND_OLD
+      #pragma message "Using the work around for old libunbound"
+      { // work around for bug https://www.nlnetlabs.nl/bugs-script/show_bug.cgi?id=515 needed for it to compile on e.g. Debian 7
+         char * ds_copy = NULL; // this will be the writable copy of string that bugged version of libunbound requires
+         try {
+            char * ds_copy = strdup( ::get_builtin_ds() );
+            ub_ctx_add_ta(m_data->m_ub_context, ds_copy);
+         } catch(...) { // probably not needed but to work correctly in every case...
+            if (ds_copy) { free(ds_copy); ds_copy=NULL; } // for the strdup
+            throw ;
+         }
+         if (ds_copy) { free(ds_copy); ds_copy=NULL; } // for the strdup
+      }
+   #else
+      // normal version for fixed libunbound
+      ub_ctx_add_ta(m_data->m_ub_context, ::get_builtin_ds() );
+   #endif
+}
+DNSResolver::~DNSResolver()
+{
+  if (m_data)
+  {
+    if (m_data->m_ub_context != NULL)
+    {
+      ub_ctx_delete(m_data->m_ub_context);
+    }
+    delete m_data;
+  }
+}
+std::vector<std::string> DNSResolver::get_ipv4(const std::string& url, bool& dnssec_available, bool& dnssec_valid)
+{
+  std::vector<std::string> addresses;
+  dnssec_available = false;
+  dnssec_valid = false;
+  string_ptr urlC(strdup(url.c_str()));
+  if (!check_address_syntax(urlC))
+  {
+    return addresses;
+  }
+  // destructor takes care of cleanup
+  ub_result_ptr result;
+  // call DNS resolver, blocking.  if return value not zero, something went wrong
+  if (!ub_resolve(m_data->m_ub_context, urlC, DNS_TYPE_A, DNS_CLASS_IN, &result))
+  {
+    dnssec_available = (result->secure || (!result->secure && result->bogus));
+    dnssec_valid = result->secure && !result->bogus;
+    if (result->havedata)
+    {
+      for (size_t i=0; result->data[i] != NULL; i++)
+      {
+        addresses.push_back(ipv4_to_string(result->data[i]));
+      }
+    }
+  }
+  return addresses;
+}
+std::vector<std::string> DNSResolver::get_ipv6(const std::string& url, bool& dnssec_available, bool& dnssec_valid)
+{
+  std::vector<std::string> addresses;
+  dnssec_available = false;
+  dnssec_valid = false;
+  string_ptr urlC(strdup(url.c_str()));
+  if (!check_address_syntax(urlC))
+  {
+    return addresses;
+  }
+  ub_result_ptr result;
+  // call DNS resolver, blocking.  if return value not zero, something went wrong
+  if (!ub_resolve(m_data->m_ub_context, urlC, DNS_TYPE_AAAA, DNS_CLASS_IN, &result))
+  {
+    dnssec_available = (result->secure || (!result->secure && result->bogus));
+    dnssec_valid = result->secure && !result->bogus;
+    if (result->havedata)
+    {
+      for (size_t i=0; result->data[i] != NULL; i++)
+      {
+        addresses.push_back(ipv6_to_string(result->data[i]));
+      }
+    }
+  }
+  return addresses;
+}
+std::vector<std::string> DNSResolver::get_txt_record(const std::string& url, bool& dnssec_available, bool& dnssec_valid)
+{
+  std::vector<std::string> records;
+  dnssec_available = false;
+  dnssec_valid = false;
+  string_ptr urlC(strdup(url.c_str()));
+  if (!check_address_syntax(urlC))
+  {
+    return records;
+  }
+  ub_result_ptr result;
+  // call DNS resolver, blocking.  if return value not zero, something went wrong
+  if (!ub_resolve(m_data->m_ub_context, urlC, DNS_TYPE_TXT, DNS_CLASS_IN, &result))
+  {
+    dnssec_available = (result->secure || (!result->secure && result->bogus));
+    dnssec_valid = result->secure && !result->bogus;
+    if (result->havedata)
+    {
+      for (size_t i=0; result->data[i] != NULL; i++)
+      {
+         // plz fix this, but this does NOT work and spills over into parts of memory it shouldn't: records.push_back(result.ptr->data);
+        char *restxt;
+        restxt = (char*) calloc(result->len[i]+1, 1);
+        memcpy(restxt, result->data[i]+1, result->len[i]-1);
+        records.push_back(restxt);
+      }
+    }
+  }
+  return records;
+}
+std::string DNSResolver::get_dns_format_from_oa_address(const std::string& oa_addr)
+{
+  std::string addr(oa_addr);
+  auto first_at = addr.find("@");
+  if (first_at == std::string::npos)
+    return addr;
+  // convert name@domain.tld to name.domain.tld
+  addr.replace(first_at, 1, ".");
+  return addr;
+}
+DNSResolver& DNSResolver::instance()
+{
+  static DNSResolver* staticInstance = NULL;
+  if (staticInstance == NULL)
+  {
+    staticInstance = new DNSResolver();
+  }
+  return *staticInstance;
+}
+bool DNSResolver::check_address_syntax(const char *addr)
+{
+  // if string doesn't contain a dot, we won't consider it a url for now.
+  if (strchr(addr,'.') == NULL)
+  {
+    return false;
+  }
+  return true;
+}
+}  // namespace tools
diff --git a/src/common/dns_utils.h b/src/common/dns_utils.h
new file mode 100644
index 0000000..1e726c8
+++ b/src/common/dns_utils.h
@@ -0,0 +1,136 @@
+namespace tools
+{
+const static int DNS_CLASS_IN  = 1;
+const static int DNS_TYPE_A    = 1;
+const static int DNS_TYPE_TXT  = 16;
+const static int DNS_TYPE_AAAA = 8;
+struct DNSResolverData;
+/**
+ * @brief Provides high-level access to DNS resolution
+ *
+ * This class is designed to provide a high-level abstraction to DNS resolution
+ * functionality, including access to TXT records and such.  It will also
+ * handle DNSSEC validation of the results.
+ */
+class DNSResolver
+{
+public:
+  /**
+   * @brief Constructs an instance of DNSResolver
+   *
+   * Constructs a class instance and does setup stuff for the backend resolver.
+   */
+  DNSResolver();
+  /**
+   * @brief takes care of freeing C pointers and such
+   */
+  ~DNSResolver();
+  /**
+   * @brief gets ipv4 addresses from DNS query of a URL
+   *
+   * returns a vector of all IPv4 "A" records for given URL.
+   * If no "A" records found, returns an empty vector.
+   *
+   * @param url A string containing a URL to query for
+   *
+   * @param dnssec_available
+   *
+   * @return vector of strings containing ipv4 addresses
+   */
+  std::vector<std::string> get_ipv4(const std::string& url, bool& dnssec_available, bool& dnssec_valid);
+  /**
+   * @brief gets ipv6 addresses from DNS query
+   *
+   * returns a vector of all IPv6 "A" records for given URL.
+   * If no "A" records found, returns an empty vector.
+   *
+   * @param url A string containing a URL to query for
+   *
+   * @return vector of strings containing ipv6 addresses
+   */
+   std::vector<std::string> get_ipv6(const std::string& url, bool& dnssec_available, bool& dnssec_valid);
+  /**
+   * @brief gets all TXT records from a DNS query for the supplied URL;
+   * if no TXT record present returns an empty vector.
+   *
+   * @param url A string containing a URL to query for
+   *
+   * @return A vector of strings containing a TXT record; or an empty vector
+   */
+  // TODO: modify this to accomodate DNSSEC
+   std::vector<std::string> get_txt_record(const std::string& url, bool& dnssec_available, bool& dnssec_valid);
+  /**
+   * @brief Gets a DNS address from OpenAlias format
+   *
+   * If the address looks good, but contains one @ symbol, replace that with a .
+   * e.g. donate@getmonero.org becomes donate.getmonero.org
+   *
+   * @param oa_addr  OpenAlias address
+   *
+   * @return dns_addr  DNS address
+   */
+  std::string get_dns_format_from_oa_address(const std::string& oa_addr);
+  /**
+   * @brief Gets the singleton instance of DNSResolver
+   *
+   * @return returns a pointer to the singleton
+   */
+  static DNSResolver& instance();
+private:
+  /**
+   * @brief Checks a string to see if it looks like a URL
+   *
+   * @param addr the string to be checked
+   *
+   * @return true if it looks enough like a URL, false if not
+   */
+  bool check_address_syntax(const char *addr);
+  DNSResolverData *m_data;
+}; // class DNSResolver
+}  // namespace tools
diff --git a/src/common/http_connection.h b/src/common/http_connection.h
new file mode 100644
index 0000000..c192e79
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
Continuation of: Git diff from commit e940386f9a8765423ab3dd9e3aabe19a68cba9f9 (thankful_for_today's last commit) to current HEAD, excluding changes in /external so as to exclude libraries and submoduled code, excluding contrib/ so as to exclude changes made to epee, excluding most comments (because apparently nobody who writes code for BCN knows what a comment is anyway), excluding removed lines, and excluding empty lines
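(If anyone wants to reproduce a similar filtered diff themselves, something along these lines should get close. The exact pipeline isn't shown anywhere in this thread, so treat the pathspec excludes and grep filters below as an approximation, with the comment-stripping as a further pass on top:)

# Rough reconstruction only -- not the exact command used for this post.
# Diff from thankful_for_today's last commit to the current HEAD, skipping external/ and contrib/,
# then dropping removed lines (which also drops the "--- a/" file headers) and empty added lines.
$ git diff e940386f9a8765423ab3dd9e3aabe19a68cba9f9..HEAD -- . ':!external' ':!contrib' | grep -v '^-' | grep -v '^+[[:space:]]*$' > filtered.diff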


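As a quick orientation before the quoted README text below: the lower-RAM route it points at is to export the existing in-memory-format blockchain first and then import it into LMDB, rather than converting inside a single process. A minimal sketch of that cycle, assuming the tool names, flags and default paths exactly as quoted below (not verified beyond that), would be:

# Sketch only -- tool names, flags and paths are assumed from the quoted README text.
$ blockchain_export
# writes $MONERO_DATA_DIR/export/blockchain.raw, then:
$ blockchain_import --batch on --batch-size 20000 --verify on
# safe to re-run after an interruption; with --resume (default on) it continues from the current height.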
+This imports blocks from `$MONERO_DATA_DIR/export/blockchain.raw` into the current database.
+Defaults: `--batch on`, `--batch-size 20000`, `--verify on`
+Batch size refers to number of blocks and can be adjusted for performance based on available RAM.
+Verification should only be turned off if importing from a trusted blockchain.
+If you encounter an error like "resizing not supported in batch mode", you can simply run
+the `blockchain_import` command again and it will resume from where it left off.
+```bash
+$ blockchain_import
+$ blockchain_import --batch-size 100000 --verify off
+$ blockchain_import --database lmdb#nosync
+$ blockchain_import --database lmdb#nosync,nometasync
+```
+`blockchain_converter` has also been updated and includes batching for faster writes. However, on lower RAM systems, this will be slower than using the exporter and importer utilities. The converter needs to keep the blockchain in memory for the duration of the conversion, like the original bitmonerod, thus leaving less memory available to the destination database to operate.
+```bash
+$ blockchain_converter --batch on --batch-size 20000
+```
diff --git a/src/blockchain_utilities/blockchain_converter.cpp b/src/blockchain_utilities/blockchain_converter.cpp
new file mode 100644
index 0000000..d18ce87
+++ b/src/blockchain_utilities/blockchain_converter.cpp
@@ -0,0 +1,308 @@
+unsigned int epee::g_test_dbg_lock_sleep = 0;
+namespace
+{
+bool opt_batch   = true;
+bool opt_resume  = true;
+bool opt_testnet = false;
+uint64_t db_batch_size_verify = 5000;
+uint64_t db_batch_size_verify = 1000;
+uint64_t db_batch_size = db_batch_size_verify;
+}
+namespace po = boost::program_options;
+using namespace cryptonote;
+using namespace epee;
+struct fake_core
+{
+  Blockchain dummy;
+  tx_memory_pool m_pool;
+  blockchain_storage m_storage;
+  // for multi_db_runtime:
+  fake_core(const boost::filesystem::path &path, const bool use_testnet) : dummy(m_pool), m_pool(&dummy), m_storage(m_pool)
+  // for multi_db_compile:
+  fake_core(const boost::filesystem::path &path, const bool use_testnet) : dummy(m_pool), m_pool(dummy), m_storage(&m_pool)
+  {
+    m_pool.init(path.string());
+    m_storage.init(path.string(), use_testnet);
+  }
+};
+int main(int argc, char* argv[])
+{
+  uint64_t height = 0;
+  uint64_t start_block = 0;
+  uint64_t end_block = 0;
+  uint64_t num_blocks = 0;
+  boost::filesystem::path default_data_path {tools::get_default_data_dir()};
+  boost::filesystem::path default_testnet_data_path {default_data_path / "testnet"};
+  po::options_description desc_cmd_only("Command line options");
+  po::options_description desc_cmd_sett("Command line options and settings options");
+  const command_line::arg_descriptor arg_log_level   =  {"log-level", "", LOG_LEVEL_0};
+  const command_line::arg_descriptor arg_batch_size  =  {"batch-size", "", db_batch_size};
+  const command_line::arg_descriptor     arg_testnet_on  = {
+    "testnet"
+      , "Run on testnet."
+      , opt_testnet
+  };
+  const command_line::arg_descriptor arg_block_number =
+  {"block-number", "Number of blocks (default: use entire source blockchain)",
+    0};
+  command_line::add_arg(desc_cmd_sett, command_line::arg_data_dir, default_data_path.string());
+  command_line::add_arg(desc_cmd_sett, command_line::arg_testnet_data_dir, default_testnet_data_path.string());
+  command_line::add_arg(desc_cmd_sett, arg_log_level);
+  command_line::add_arg(desc_cmd_sett, arg_batch_size);
+  command_line::add_arg(desc_cmd_sett, arg_testnet_on);
+  command_line::add_arg(desc_cmd_sett, arg_block_number);
+  command_line::add_arg(desc_cmd_only, command_line::arg_help);
+  const command_line::arg_descriptor arg_batch  =  {"batch",
+    "Batch transactions for faster import", true};
+  const command_line::arg_descriptor arg_resume =  {"resume",
+        "Resume from current height if output database already exists", true};
+  // call add_options() directly for these arguments since command_line helpers
+  // support only boolean switch, not boolean argument
+  desc_cmd_sett.add_options()
+    (arg_batch.name,  make_semantic(arg_batch),  arg_batch.description)
+    (arg_resume.name, make_semantic(arg_resume), arg_resume.description)
+    ;
+  po::options_description desc_options("Allowed options");
+  desc_options.add(desc_cmd_only).add(desc_cmd_sett);
+  po::variables_map vm;
+  bool r = command_line::handle_error_helper(desc_options, [&]()
+  {
+    po::store(po::parse_command_line(argc, argv, desc_options), vm);
+    po::notify(vm);
+    return true;
+  });
+  if (!r)
+    return 1;
+  int log_level = command_line::get_arg(vm, arg_log_level);
+  opt_batch     = command_line::get_arg(vm, arg_batch);
+  opt_resume    = command_line::get_arg(vm, arg_resume);
+  db_batch_size = command_line::get_arg(vm, arg_batch_size);
+  if (command_line::get_arg(vm, command_line::arg_help))
+  {
+    std::cout << CRYPTONOTE_NAME << " v" << MONERO_VERSION_FULL << ENDL << ENDL;
+    std::cout << desc_options << std::endl;
+    return 1;
+  }
+  if (! opt_batch && ! vm["batch-size"].defaulted())
+  {
+    std::cerr << "Error: batch-size set, but batch option not enabled" << ENDL;
+    return 1;
+  }
+  if (! db_batch_size)
+  {
+    std::cerr << "Error: batch-size must be > 0" << ENDL;
+    return 1;
+  }
+  log_space::get_set_log_detalisation_level(true, log_level);
+  log_space::log_singletone::add_logger(LOGGER_CONSOLE, NULL, NULL);
+  LOG_PRINT_L0("Starting...");
+  std::string src_folder;
+  opt_testnet = command_line::get_arg(vm, arg_testnet_on);
+  auto data_dir_arg = opt_testnet ? command_line::arg_testnet_data_dir : command_line::arg_data_dir;
+  src_folder = command_line::get_arg(vm, data_dir_arg);
+  boost::filesystem::path dest_folder(src_folder);
+  num_blocks = command_line::get_arg(vm, arg_block_number);
+  if (opt_batch)
+  {
+    LOG_PRINT_L0("batch:   " << std::boolalpha << opt_batch << std::noboolalpha
+        << "  batch size: " << db_batch_size);
+  }
+  else
+  {
+    LOG_PRINT_L0("batch:   " << std::boolalpha << opt_batch << std::noboolalpha);
+  }
+  LOG_PRINT_L0("resume:  " << std::boolalpha << opt_resume  << std::noboolalpha);
+  LOG_PRINT_L0("testnet: " << std::boolalpha << opt_testnet << std::noboolalpha);
+  fake_core c(src_folder, opt_testnet);
+  height = c.m_storage.get_current_blockchain_height();
+  BlockchainDB *blockchain;
+  blockchain = new BlockchainLMDB(opt_batch);
+  dest_folder /= blockchain->get_db_name();
+  LOG_PRINT_L0("Source blockchain: " << src_folder);
+  LOG_PRINT_L0("Dest blockchain:   " << dest_folder.string());
+  LOG_PRINT_L0("Opening dest blockchain (BlockchainDB " << blockchain->get_db_name() << ")");
+  blockchain->open(dest_folder.string());
+  LOG_PRINT_L0("Source blockchain height: " << height);
+  LOG_PRINT_L0("Dest blockchain height:   " << blockchain->height());
+  if (opt_resume)
+    // next block number to add is same as current height
+    start_block = blockchain->height();
+  if (! num_blocks || (start_block + num_blocks > height))
+    end_block = height - 1;
+  else
+    end_block = start_block + num_blocks - 1;
+  LOG_PRINT_L0("start height: " << start_block+1 << "  stop height: " <<
+            end_block+1);
+  if (start_block > end_block)
+  {
+    LOG_PRINT_L0("Finished: no blocks to add");
+    delete blockchain;
+    return 0;
+  }
+  if (opt_batch)
+    blockchain->batch_start(db_batch_size);
+  uint64_t i = 0;
+  for (i = start_block; i < end_block + 1; ++i)
+  {
+    // block: i  height: i+1  end height: end_block + 1
+    if ((i+1) % 10 == 0)
+    {
+      std::cout << "\r                   \r" << "height " << i+1 << "/" <<
+        end_block+1 << " (" << (i+1)*100/(end_block+1)<< "%)" << std::flush;
+    }
+    // for debugging:
+    // std::cout << "height " << i+1 << "/" << end_block+1
+    //   << " ((" << i+1 << ")*100/(end_block+1))" << "%)" << ENDL;
+    block b = c.m_storage.get_block(i);
+    size_t bsize = c.m_storage.get_block_size(i);
+    difficulty_type bdiff = c.m_storage.get_block_cumulative_difficulty(i);
+    uint64_t bcoins = c.m_storage.get_block_coins_generated(i);
+    std::vector<transaction> txs;
+    std::vector<crypto::hash> missed;
+    c.m_storage.get_transactions(b.tx_hashes, txs, missed);
+    if (missed.size())
+    {
+      std::cout << ENDL;
+      std::cerr << "Missed transaction(s) for block at height " << i + 1 << ", exiting" << ENDL;
+      delete blockchain;
+      return 1;
+    }
+    try
+    {
+      blockchain->add_block(b, bsize, bdiff, bcoins, txs);
+      if (opt_batch)
+      {
+        if ((i < end_block) && ((i + 1) % db_batch_size == 0))
+        {
+          std::cout << "\r                   \r";
+          std::cout << "[- batch commit at height " << i + 1 << " -]" << ENDL;
+          blockchain->batch_stop();
+          blockchain->batch_start(db_batch_size);
+          std::cout << ENDL;
+          blockchain->show_stats();
+        }
+      }
+    }
+    catch (const std::exception& e)
+    {
+      std::cout << ENDL;
+      std::cerr << "Error adding block " << i << " to new blockchain: " << e.what() << ENDL;
+      delete blockchain;
+      return 2;
+    }
+  }
+  if (opt_batch)
+  {
+    std::cout << "\r                   \r" << "height " << i << "/" <<
+      end_block+1 << " (" << (i)*100/(end_block+1)<< "%)" << std::flush;
+    std::cout << ENDL;
+    std::cout << "[- batch commit at height " << i << " -]" << ENDL;
+    blockchain->batch_stop();
+  }
+  std::cout << ENDL;
+  blockchain->show_stats();
+  std::cout << "Finished at height: " << i << "  block: " << i-1 << ENDL;
+  delete blockchain;
+  return 0;
+}
diff --git a/src/blockchain_utilities/blockchain_export.cpp b/src/blockchain_utilities/blockchain_export.cpp
new file mode 100644
index 0000000..ec885ea
+++ b/src/blockchain_utilities/blockchain_export.cpp
@@ -0,0 +1,157 @@
+unsigned int epee::g_test_dbg_lock_sleep = 0;
+namespace po = boost::program_options;
+using namespace epee; // log_space
+int main(int argc, char* argv[])
+{
+  uint32_t log_level = 0;
+  uint64_t block_stop = 0;
+  std::string import_filename = BLOCKCHAIN_RAW;
+  boost::filesystem::path default_data_path {tools::get_default_data_dir()};
+  boost::filesystem::path default_testnet_data_path {default_data_path / "testnet"};
+  po::options_description desc_cmd_only("Command line options");
+  po::options_description desc_cmd_sett("Command line options and settings options");
+  const command_line::arg_descriptor arg_log_level  = {"log-level",  "", log_level};
+  const command_line::arg_descriptor arg_block_stop = {"block-stop", "Stop at block number", block_stop};
+  const command_line::arg_descriptor     arg_testnet_on = {
+    "testnet"
+      , "Run on testnet."
+      , false
+  };
+  command_line::add_arg(desc_cmd_sett, command_line::arg_data_dir, default_data_path.string());
+  command_line::add_arg(desc_cmd_sett, command_line::arg_testnet_data_dir, default_testnet_data_path.string());
+  command_line::add_arg(desc_cmd_sett, arg_testnet_on);
+  command_line::add_arg(desc_cmd_sett, arg_log_level);
+  command_line::add_arg(desc_cmd_sett, arg_block_stop);
+  command_line::add_arg(desc_cmd_only, command_line::arg_help);
+  po::options_description desc_options("Allowed options");
+  desc_options.add(desc_cmd_only).add(desc_cmd_sett);
+  po::variables_map vm;
+  bool r = command_line::handle_error_helper(desc_options, [&]()
+  {
+    po::store(po::parse_command_line(argc, argv, desc_options), vm);
+    po::notify(vm);
+    return true;
+  });
+  if (! r)
+    return 1;
+  if (command_line::get_arg(vm, command_line::arg_help))
+  {
+    std::cout << CRYPTONOTE_NAME << " v" << MONERO_VERSION_FULL << ENDL << ENDL;
+    std::cout << desc_options << std::endl;
+    return 1;
+  }
+  log_level    = command_line::get_arg(vm, arg_log_level);
+  block_stop = command_line::get_arg(vm, arg_block_stop);
+  log_space::get_set_log_detalisation_level(true, log_level);
+  log_space::log_singletone::add_logger(LOGGER_CONSOLE, NULL, NULL);
+  LOG_PRINT_L0("Starting...");
+  LOG_PRINT_L0("Setting log level = " << log_level);
+  bool opt_testnet = command_line::get_arg(vm, arg_testnet_on);
+  std::string m_config_folder;
+  auto data_dir_arg = opt_testnet ? command_line::arg_testnet_data_dir : command_line::arg_data_dir;
+  m_config_folder = command_line::get_arg(vm, data_dir_arg);
+  boost::filesystem::path output_dir {m_config_folder};
+  output_dir /= "export";
+  LOG_PRINT_L0("Export directory: " << output_dir.string());
+  // If we wanted to use the memory pool, we would set up a fake_core.
+  // blockchain_storage* core_storage = NULL;
+  // tx_memory_pool m_mempool(*core_storage); // is this fake anyway? just passing in NULL! so m_mempool can't be used anyway, right?
+  // core_storage = new blockchain_storage(&m_mempool);
+  blockchain_storage* core_storage = new blockchain_storage(NULL);
+  LOG_PRINT_L0("Initializing source blockchain (in-memory database)");
+  r = core_storage->init(m_config_folder, opt_testnet);
+  // Use Blockchain instead of lower-level BlockchainDB for two reasons:
+  // 1. Blockchain has the init() method for easy setup
+  // 2. exporter needs to use get_current_blockchain_height(), get_block_id_by_height(), get_block_by_hash()
+  //
+  // cannot match blockchain_storage setup above with just one line,
+  // e.g.
+  //   Blockchain* core_storage = new Blockchain(NULL);
+  // because unlike blockchain_storage constructor, which takes a pointer to
+  // tx_memory_pool, Blockchain's constructor takes tx_memory_pool object.
+  LOG_PRINT_L0("Initializing source blockchain (BlockchainDB)");
+  Blockchain* core_storage = NULL;
+  tx_memory_pool m_mempool(*core_storage);
+  core_storage = new Blockchain(m_mempool);
+  BlockchainDB* db = new BlockchainLMDB();
+  boost::filesystem::path folder(m_config_folder);
+  folder /= db->get_db_name();
+  int lmdb_flags = 0;
+  lmdb_flags |= MDB_RDONLY;
+  const std::string filename = folder.string();
+  LOG_PRINT_L0("Loading blockchain from folder " << filename << " ...");
+  try
+  {
+    db->open(filename, lmdb_flags);
+  }
+  catch (const std::exception& e)
+  {
+    LOG_PRINT_L0("Error opening database: " << e.what());
+    throw;
+  }
+  r = core_storage->init(db, opt_testnet);
+  CHECK_AND_ASSERT_MES(r, false, "Failed to initialize source blockchain storage");
+  LOG_PRINT_L0("Source blockchain storage initialized OK");
+  LOG_PRINT_L0("Exporting blockchain raw data...");
+  BootstrapFile bootstrap;
+  r = bootstrap.store_blockchain_raw(core_storage, NULL, output_dir, block_stop);
+  CHECK_AND_ASSERT_MES(r, false, "Failed to export blockchain raw data");
+  LOG_PRINT_L0("Blockchain raw data exported OK");
+}
diff --git a/src/blockchain_utilities/blockchain_import.cpp b/src/blockchain_utilities/blockchain_import.cpp
new file mode 100644
index 0000000..924b46d
+++ b/src/blockchain_utilities/blockchain_import.cpp
@@ -0,0 +1,770 @@
+unsigned int epee::g_test_dbg_lock_sleep = 0;
+namespace
+{
+bool opt_batch   = true;
+bool opt_verify  = true; // use add_new_block, which does verification before calling add_block
+bool opt_resume  = true;
+bool opt_testnet = true;
+uint64_t db_batch_size = 20000;
+uint64_t db_batch_size = 1000;
+uint64_t db_batch_size_verify = 5000;
+std::string refresh_string = "\r                                    \r";
+}
+namespace po = boost::program_options;
+using namespace cryptonote;
+using namespace epee;
+int parse_db_arguments(const std::string& db_arg_str, std::string& db_engine, int& mdb_flags)
+{
+  std::vector db_args;
+  std::vector<std::string> db_args;
+  db_engine = db_args.front();
+  boost::algorithm::trim(db_engine);
+  if (db_args.size() == 1)
+  {
+    return 0;
+  }
+  else if (db_args.size() > 2)
+  {
+    std::cerr << "unrecognized database argument format: " << db_arg_str << ENDL;
+    return 1;
+  }
+  std::string db_arg_str2 = db_args[1];
+  boost::split(db_args, db_arg_str2, boost::is_any_of(","));
+  for (auto& it : db_args)
+  {
+    boost::algorithm::trim(it);
+    if (it.empty())
+      continue;
+    LOG_PRINT_L1("LMDB flag: " << it);
+    if (it == "nosync")
+      mdb_flags |= MDB_NOSYNC;
+    else if (it == "nometasync")
+      mdb_flags |= MDB_NOMETASYNC;
+    else if (it == "writemap")
+      mdb_flags |= MDB_WRITEMAP;
+    else if (it == "mapasync")
+      mdb_flags |= MDB_MAPASYNC;
+    else if (it == "nordahead")
+      mdb_flags |= MDB_NORDAHEAD;
+    else
+    {
+      std::cerr << "unrecognized database flag: " << it << ENDL;
+      return 1;
+    }
+  }
+  return 0;
+}
+template <typename FakeCore>
+int pop_blocks(FakeCore& simple_core, int num_blocks)
+{
+  bool use_batch = false;
+  if (opt_batch)
+  {
+    if (simple_core.support_batch)
+      use_batch = true;
+    else
+      LOG_PRINT_L0("WARNING: batch transactions enabled but unsupported or unnecessary for this database engine - ignoring");
+  }
+  if (use_batch)
+    simple_core.batch_start();
+  int quit = 0;
+  block popped_block;
+  std::vector<transaction> popped_txs;
+  for (int i=0; i < num_blocks; ++i)
+  {
+    simple_core.m_storage.debug_pop_block_from_blockchain();
+    // simple_core.m_storage.pop_block_from_blockchain() is private, so call directly through db
+    simple_core.m_storage.get_db().pop_block(popped_block, popped_txs);
+    quit = 1;
+  }
+  if (use_batch)
+  {
+    if (quit > 1)
+    {
+      // There was an error, so don't commit pending data.
+      // Destructor will abort write txn.
+    }
+    else
+    {
+      simple_core.batch_stop();
+    }
+    simple_core.m_storage.get_db().show_stats();
+  }
+  return num_blocks;
+}
+template <typename FakeCore>
+int import_from_file(FakeCore& simple_core, std::string& import_file_path, uint64_t block_stop=0)
+{
+  static_assert(std::is_same<fake_core_memory, FakeCore>::value || std::is_same<fake_core_lmdb, FakeCore>::value,
+      "FakeCore constraint error");
+  if (std::is_same<fake_core_lmdb, FakeCore>::value)
+  {
+    // Reset stats, in case we're using newly created db, accumulating stats
+    // from addition of genesis block.
+    // This aligns internal db counts with importer counts.
+    simple_core.m_storage.get_db().reset_stats();
+  }
+  boost::filesystem::path raw_file_path(import_file_path);
+  boost::system::error_code ec;
+  if (!boost::filesystem::exists(raw_file_path, ec))
+  {
+    LOG_PRINT_L0("bootstrap file not found: " << raw_file_path);
+    return false;
+  }
+  BootstrapFile bootstrap;
+  // BootstrapFile bootstrap(import_file_path);
+  uint64_t total_source_blocks = bootstrap.count_blocks(import_file_path);
+  LOG_PRINT_L0("bootstrap file last block number: " << total_source_blocks-1 << " (zero-based height)  total blocks: " << total_source_blocks);
+  std::cout << ENDL;
+  std::cout << "Preparing to read blocks..." << ENDL;
+  std::cout << ENDL;
+  std::ifstream import_file;
+  import_file.open(import_file_path, std::ios_base::binary | std::ifstream::in);
+  uint64_t h = 0;
+  uint64_t num_imported = 0;
+  if (import_file.fail())
+  {
+    LOG_PRINT_L0("import_file.open() fail");
+    return false;
+  }
+  // 4 byte magic + (currently) 1024 byte header structures
+  bootstrap.seek_to_first_chunk(import_file);
+  std::string str1;
+  char buffer1[1024];
+  char buffer_block[BUFFER_SIZE];
+  block b;
+  transaction tx;
+  int quit = 0;
+  uint64_t bytes_read = 0;
+  uint64_t start_height = 1;
+  if (opt_resume)
+    start_height = simple_core.m_storage.get_current_blockchain_height();
+  // Note that a new blockchain will start with block number 0 (total blocks: 1)
+  // due to genesis block being added at initialization.
+  if (! block_stop)
+  {
+    block_stop = total_source_blocks - 1;
+  }
+  // These are what we'll try to use, and they don't have to be a determination
+  // from source and destination blockchains, but those are the defaults.
+  LOG_PRINT_L0("start block: " << start_height << "  stop block: " <<
+      block_stop);
+  bool use_batch = false;
+  if (opt_batch)
+  {
+    if (simple_core.support_batch)
+      use_batch = true;
+    else
+      LOG_PRINT_L0("WARNING: batch transactions enabled but unsupported or unnecessary for this database engine - ignoring");
+  }
+  if (use_batch)
+    simple_core.batch_start(db_batch_size);
+  LOG_PRINT_L0("Reading blockchain from bootstrap file...");
+  std::cout << ENDL;
+  // Within the loop, we skip to start_height before we start adding.
+  // TODO: Not a bottleneck, but we can use what's done in count_blocks() and
+  // only do the chunk size reads, skipping the chunk content reads until we're
+  // at start_height.
+  while (! quit)
+  {
+    uint32_t chunk_size;
+    import_file.read(buffer1, sizeof(chunk_size));
+    // TODO: bootstrap.read_chunk();
+    if (! import_file) {
+      std::cout << refresh_string;
+      LOG_PRINT_L0("End of file reached");
+      quit = 1;
+      break;
+    }
+    bytes_read += sizeof(chunk_size);
+    str1.assign(buffer1, sizeof(chunk_size));
+    if (! ::serialization::parse_binary(str1, chunk_size))
+    {
+      throw std::runtime_error("Error in deserialization of chunk size");
+    }
+    LOG_PRINT_L3("chunk_size: " << chunk_size);
+    if (chunk_size > BUFFER_SIZE)
+    {
+      LOG_PRINT_L0("WARNING: chunk_size " << chunk_size << " > BUFFER_SIZE " << BUFFER_SIZE);
+      throw std::runtime_error("Aborting: chunk size exceeds buffer size");
+    }
+    if (chunk_size > 100000)
+    {
+      LOG_PRINT_L0("NOTE: chunk_size " << chunk_size << " > 100000");
+    }
+    else if (chunk_size == 0) {
+      LOG_PRINT_L0("ERROR: chunk_size == 0");
+      return 2;
+    }
+    import_file.read(buffer_block, chunk_size);
+    if (! import_file) {
+      LOG_PRINT_L0("ERROR: unexpected end of file: bytes read before error: "
+          << import_file.gcount() << " of chunk_size " << chunk_size);
+      return 2;
+    }
+    bytes_read += chunk_size;
+    LOG_PRINT_L3("Total bytes read: " << bytes_read);
+    if (h + NUM_BLOCKS_PER_CHUNK < start_height + 1)
+    {
+      h += NUM_BLOCKS_PER_CHUNK;
+      continue;
+    }
+    if (h > block_stop)
+    {
+      std::cout << refresh_string << "block " << h-1
+        << " / " << block_stop
+        << std::flush;
+      std::cout << ENDL << ENDL;
+      LOG_PRINT_L0("Specified block number reached - stopping.  block: " << h-1 << "  total blocks: " << h);
+      quit = 1;
+      break;
+    }
+    try
+    {
+      str1.assign(buffer_block, chunk_size);
+      bootstrap::block_package bp;
+      if (! ::serialization::parse_binary(str1, bp))
+        throw std::runtime_error("Error in deserialization of chunk");
+      int display_interval = 1000;
+      int progress_interval = 10;
+      // NOTE: use of NUM_BLOCKS_PER_CHUNK is a placeholder in case multi-block chunks are later supported.
+      for (int chunk_ind = 0; chunk_ind < NUM_BLOCKS_PER_CHUNK; ++chunk_ind)
+      {
+        ++h;
+        if ((h-1) % display_interval == 0)
+        {
+          std::cout << refresh_string;
+          LOG_PRINT_L0("loading block number " << h-1);
+        }
+        else
+        {
+          LOG_PRINT_L3("loading block number " << h-1);
+        }
+        b = bp.block;
+        LOG_PRINT_L2("block prev_id: " << b.prev_id << ENDL);
+        if ((h-1) % progress_interval == 0)
+        {
+          std::cout << refresh_string << "block " << h-1
+            << " / " << block_stop
+            << std::flush;
+        }
+        std::vector<transaction> txs;
+        std::vector<transaction> archived_txs;
+        archived_txs = bp.txs;
+        // std::cout << refresh_string;
+        // LOG_PRINT_L1("txs: " << archived_txs.size());
+        // if archived_txs is invalid
+        // {
+        //   std::cout << refresh_string;
+        //   LOG_PRINT_RED_L0("exception while de-archiving txs, height=" << h);
+        //   quit = 1;
+        //   break;
+        // }
+        // tx number 1: coinbase tx
+        // tx number 2 onwards: archived_txs
+        unsigned int tx_num = 1;
+        for (const transaction& tx : archived_txs)
+        {
+          ++tx_num;
+          // if tx is invalid
+          // {
+          //   LOG_PRINT_RED_L0("exception while indexing tx from txs, height=" << h <<", tx_num=" << tx_num);
+          //   quit = 1;
+          //   break;
+          // }
+          // std::cout << refresh_string;
+          // LOG_PRINT_L1("tx hash: " << get_transaction_hash(tx));
+          // crypto::hash hsh = null_hash;
+          // size_t blob_size = 0;
+          // NOTE: all tx hashes except for coinbase tx are available in the block data
+          // get_transaction_hash(tx, hsh, blob_size);
+          // LOG_PRINT_L0("tx " << tx_num << "  " << hsh << " : " << ENDL);
+          // LOG_PRINT_L0(obj_to_json_str(tx) << ENDL);
+          // add blocks with verification.
+          // for Blockchain and blockchain_storage add_new_block().
+          if (opt_verify)
+          {
+            // crypto::hash hsh = null_hash;
+            // size_t blob_size = 0;
+            // get_transaction_hash(tx, hsh, blob_size);
+            tx_verification_context tvc = AUTO_VAL_INIT(tvc);
+            bool r = true;
+            r = simple_core.m_pool.add_tx(tx, tvc, true);
+            if (!r)
+            {
+              LOG_PRINT_RED_L0("failed to add transaction to transaction pool, height=" << h <<", tx_num=" << tx_num);
+              quit = 1;
+              break;
+            }
+          }
+          else
+          {
+            // for add_block() method, without (much) processing.
+            // don't add coinbase transaction to txs.
+            //
+            // because add_block() calls
+            // add_transaction(blk_hash, blk.miner_tx) first, and
+            // then a for loop for the transactions in txs.
+            txs.push_back(tx);
+          }
+        }
+        if (opt_verify)
+        {
+          block_verification_context bvc = boost::value_initialized<block_verification_context>();
+          simple_core.m_storage.add_new_block(b, bvc);
+          if (bvc.m_verifivation_failed)
+          {
+            LOG_PRINT_L0("Failed to add block to blockchain, verification failed, height = " << h);
+            LOG_PRINT_L0("skipping rest of file");
+            // ok to commit previously batched data because it failed only in
+            // verification of potential new block with nothing added to batch
+            // yet
+            quit = 1;
+            break;
+          }
+          if (! bvc.m_added_to_main_chain)
+          {
+            LOG_PRINT_L0("Failed to add block to blockchain, height = " << h);
+            LOG_PRINT_L0("skipping rest of file");
+            // make sure we don't commit partial block data
+            quit = 2;
+            break;
+          }
+        }
+        else
+        {
+          size_t block_size;
+          difficulty_type cumulative_difficulty;
+          uint64_t coins_generated;
+          block_size = bp.block_size;
+          cumulative_difficulty = bp.cumulative_difficulty;
+          coins_generated = bp.coins_generated;
+          // std::cout << refresh_string;
+          // LOG_PRINT_L2("block_size: " << block_size);
+          // LOG_PRINT_L2("cumulative_difficulty: " << cumulative_difficulty);
+          // LOG_PRINT_L2("coins_generated: " << coins_generated);
+          try
+          {
+            simple_core.add_block(b, block_size, cumulative_difficulty, coins_generated, txs);
+          }
+          catch (const std::exception& e)
+          {
+            std::cout << refresh_string;
+            LOG_PRINT_RED_L0("Error adding block to blockchain: " << e.what());
+            quit = 2; // make sure we don't commit partial block data
+            break;
+          }
+        }
+        ++num_imported;
+        if (use_batch)
+        {
+          if ((h-1) % db_batch_size == 0)
+          {
+            std::cout << refresh_string;
+            // zero-based height
+            std::cout << ENDL << "[- batch commit at height " << h-1 << " -]" << ENDL;
+            simple_core.batch_stop();
+            simple_core.batch_start(db_batch_size);
+            std::cout << ENDL;
+            simple_core.m_storage.get_db().show_stats();
+          }
+        }
+      }
+    }
+    catch (const std::exception& e)
+    {
+      std::cout << refresh_string;
+      LOG_PRINT_RED_L0("exception while reading from file, height=" << h);
+      return 2;
+    }
+  } // while
+  import_file.close();
+  if (use_batch)
+  {
+    if (quit > 1)
+    {
+      // There was an error, so don't commit pending data.
+      // Destructor will abort write txn.
+    }
+    else
+    {
+      simple_core.batch_stop();
+    }
+    simple_core.m_storage.get_db().show_stats();
+    LOG_PRINT_L0("Number of blocks imported: " << num_imported)
+    if (h > 0)
+      // TODO: if there was an error, the last added block is probably at zero-based height h-2
+      LOG_PRINT_L0("Finished at block: " << h-1 << "  total blocks: " << h);
+  }
+  std::cout << ENDL;
+  return 0;
+}
+int main(int argc, char* argv[])
+{
+  std::string import_filename = BLOCKCHAIN_RAW;
+  std::string default_db_engine = "memory";
+  std::string default_db_engine = "lmdb";
+  uint32_t log_level = LOG_LEVEL_0;
+  uint64_t num_blocks = 0;
+  uint64_t block_stop = 0;
+  std::string dirname;
+  std::string db_arg_str;
+  boost::filesystem::path default_data_path {tools::get_default_data_dir()};
+  boost::filesystem::path default_testnet_data_path {default_data_path / "testnet"};
+  po::options_description desc_cmd_only("Command line options");
+  po::options_description desc_cmd_sett("Command line options and settings options");
+  const command_line::arg_descriptor arg_log_level   = {"log-level",  "", log_level};
+  const command_line::arg_descriptor arg_block_stop  = {"block-stop", "Stop at block number", block_stop};
+  const command_line::arg_descriptor arg_batch_size  = {"batch-size", "", db_batch_size};
+  const command_line::arg_descriptor arg_pop_blocks  = {"pop-blocks", "Remove blocks from end of blockchain", num_blocks};
+  const command_line::arg_descriptor     arg_testnet_on  = {
+    "testnet"
+      , "Run on testnet."
+      , false
+  };
+  const command_line::arg_descriptor     arg_count_blocks = {
+    "count-blocks"
+      , "Count blocks in bootstrap file and exit"
+      , false
+  };
+  const command_line::arg_descriptor arg_database = {
+    "database", "available: memory, lmdb"
+      , default_db_engine
+  };
+  const command_line::arg_descriptor arg_verify =  {"verify",
+    "Verify blocks and transactions during import", true};
+  const command_line::arg_descriptor arg_batch  =  {"batch",
+    "Batch transactions for faster import", true};
+  const command_line::arg_descriptor arg_resume =  {"resume",
+    "Resume from current height if output database already exists", true};
+  command_line::add_arg(desc_cmd_sett, command_line::arg_data_dir, default_data_path.string());
+  command_line::add_arg(desc_cmd_sett, command_line::arg_testnet_data_dir, default_testnet_data_path.string());
+  command_line::add_arg(desc_cmd_sett, arg_testnet_on);
+  command_line::add_arg(desc_cmd_sett, arg_log_level);
+  command_line::add_arg(desc_cmd_sett, arg_database);
+  command_line::add_arg(desc_cmd_sett, arg_batch_size);
+  command_line::add_arg(desc_cmd_sett, arg_block_stop);
+  command_line::add_arg(desc_cmd_only, arg_count_blocks);
+  command_line::add_arg(desc_cmd_only, arg_pop_blocks);
+  command_line::add_arg(desc_cmd_only, command_line::arg_help);
+  // call add_options() directly for these arguments since
+  // command_line helpers support only boolean switch, not boolean argument
+  desc_cmd_sett.add_options()
+    (arg_verify.name, make_semantic(arg_verify), arg_verify.description)
+    (arg_batch.name,  make_semantic(arg_batch),  arg_batch.description)
+    (arg_resume.name, make_semantic(arg_resume), arg_resume.description)
+    ;
+  po::options_description desc_options("Allowed options");
+  desc_options.add(desc_cmd_only).add(desc_cmd_sett);
+  po::variables_map vm;
+  bool r = command_line::handle_error_helper(desc_options, [&]()
+  {
+    po::store(po::parse_command_line(argc, argv, desc_options), vm);
+    po::notify(vm);
+    return true;
+  });
+  if (! r)
+    return 1;
+  log_level     = command_line::get_arg(vm, arg_log_level);
+  opt_verify    = command_line::get_arg(vm, arg_verify);
+  opt_batch     = command_line::get_arg(vm, arg_batch);
+  opt_resume    = command_line::get_arg(vm, arg_resume);
+  block_stop    = command_line::get_arg(vm, arg_block_stop);
+  db_batch_size = command_line::get_arg(vm, arg_batch_size);
+  if (command_line::get_arg(vm, command_line::arg_help))
+  {
+    std::cout << CRYPTONOTE_NAME << " v" << MONERO_VERSION_FULL << ENDL << ENDL;
+    std::cout << desc_options << std::endl;
+    return 1;
+  }
+  if (! opt_batch && ! vm["batch-size"].defaulted())
+  {
+    std::cerr << "Error: batch-size set, but batch option not enabled" << ENDL;
+    return 1;
+  }
+  if (! db_batch_size)
+  {
+    std::cerr << "Error: batch-size must be > 0" << ENDL;
+    return 1;
+  }
+  if (opt_verify && vm["batch-size"].defaulted())
+  {
+    // usually want batch size default lower if verify on, so progress can be
+    // frequently saved.
+    //
+    // currently, with Windows, default batch size is low, so ignore
+    // default db_batch_size_verify unless it's even lower
+    if (db_batch_size > db_batch_size_verify)
+    {
+      db_batch_size = db_batch_size_verify;
+    }
+  }
+  std::vector<std::string> db_engines {"memory", "lmdb"};
+  opt_testnet = command_line::get_arg(vm, arg_testnet_on);
+  auto data_dir_arg = opt_testnet ? command_line::arg_testnet_data_dir : command_line::arg_data_dir;
+  dirname = command_line::get_arg(vm, data_dir_arg);
+  db_arg_str = command_line::get_arg(vm, arg_database);
+  log_space::get_set_log_detalisation_level(true, log_level);
+  log_space::log_singletone::add_logger(LOGGER_CONSOLE, NULL, NULL);
+  LOG_PRINT_L0("Starting...");
+  LOG_PRINT_L0("Setting log level = " << log_level);
+  boost::filesystem::path file_path {dirname};
+  std::string import_file_path;
+  import_file_path = (file_path / "export" / import_filename).string();
+  if (command_line::has_arg(vm, arg_count_blocks))
+  {
+    BootstrapFile bootstrap;
+    bootstrap.count_blocks(import_file_path);
+    return 0;
+  }
+  std::string db_engine;
+  int mdb_flags = 0;
+  int res = 0;
+  res = parse_db_arguments(db_arg_str, db_engine, mdb_flags);
+  if (res)
+  {
+    std::cerr << "Error parsing database argument(s)" << ENDL;
+    return 1;
+  }
+  if (std::find(db_engines.begin(), db_engines.end(), db_engine) == db_engines.end())
+  {
+    std::cerr << "Invalid database engine: " << db_engine << std::endl;
+    return 1;
+  }
+  LOG_PRINT_L0("database: " << db_engine);
+  LOG_PRINT_L0("verify:  " << std::boolalpha << opt_verify << std::noboolalpha);
+  if (opt_batch)
+  {
+    LOG_PRINT_L0("batch:   " << std::boolalpha << opt_batch << std::noboolalpha
+        << "  batch size: " << db_batch_size);
+  }
+  else
+  {
+    LOG_PRINT_L0("batch:   " << std::boolalpha << opt_batch << std::noboolalpha);
+  }
+  LOG_PRINT_L0("resume:  " << std::boolalpha << opt_resume  << std::noboolalpha);
+  LOG_PRINT_L0("testnet: " << std::boolalpha << opt_testnet << std::noboolalpha);
+  LOG_PRINT_L0("bootstrap file path: " << import_file_path);
+  LOG_PRINT_L0("database path:       " << file_path.string());
+  try
+  {
+  // fake_core needed for verification to work when enabled.
+  //
+  // NOTE: don't need fake_core method of doing things when we're going to call
+  // BlockchainDB add_block() directly and have available the 3 block
+  // properties to do so. Both ways work, but fake core isn't necessary in that
+  // circumstance.
+  // for multi_db_runtime:
+  if (db_engine == "lmdb")
+  {
+    fake_core_lmdb simple_core(dirname, opt_testnet, opt_batch, mdb_flags);
+    import_from_file(simple_core, import_file_path, block_stop);
+  }
+  else if (db_engine == "memory")
+  {
+    fake_core_memory simple_core(dirname, opt_testnet);
+    import_from_file(simple_core, import_file_path, block_stop);
+  }
+  else
+  {
+    std::cerr << "database engine unrecognized" << ENDL;
+    return 1;
+  }
+  // for multi_db_compile:
+  if (db_engine != default_db_engine)
+  {
+    std::cerr << "Invalid database engine for compiled version: " << db_engine << std::endl;
+    return 1;
+  }
+  fake_core_lmdb simple_core(dirname, opt_testnet, opt_batch, mdb_flags);
+  fake_core_memory simple_core(dirname, opt_testnet);
+  if (! vm["pop-blocks"].defaulted())
+  {
+    num_blocks = command_line::get_arg(vm, arg_pop_blocks);
+    LOG_PRINT_L0("height: " << simple_core.m_storage.get_current_blockchain_height());
+    pop_blocks(simple_core, num_blocks);
+    LOG_PRINT_L0("height: " << simple_core.m_storage.get_current_blockchain_height());
+    return 0;
+  }
+  import_from_file(simple_core, import_file_path, block_stop);
+  }
+  catch (const DB_ERROR& e)
+  {
+    std::cout << std::string("Error loading blockchain db: ") + e.what() + " -- shutting down now" << ENDL;
+    return 1;
+  }
+  // destructors called at exit:
+  //
+  // ensure db closed
+  //   - transactions properly checked and handled
+  //   - disk sync if needed
+  //
+  // fake_core object's destructor is called when it goes out of scope. For an
+  // LMDB fake_core, it calls Blockchain::deinit() on its object, which in turn
+  // calls delete on its BlockchainDB derived class' object, which closes its
+  // files.
+  return 0;
+}
diff --git a/src/blockchain_utilities/bootstrap_file.cpp b/src/blockchain_utilities/bootstrap_file.cpp
new file mode 100644
index 0000000..573cb15
+++ b/src/blockchain_utilities/bootstrap_file.cpp
@@ -0,0 +1,505 @@
+namespace po = boost::program_options;
+using namespace cryptonote;
+using namespace epee;
+namespace
+{
+  // This number was picked by taking the leading 4 bytes from this output:
+  // echo Monero bootstrap file | sha1sum
+  const uint32_t blockchain_raw_magic = 0x28721586;
+  const uint32_t header_size = 1024;
+  std::string refresh_string = "\r                                    \r";
+}
+bool BootstrapFile::open_writer(const boost::filesystem::path& dir_path)
+{
+  if (boost::filesystem::exists(dir_path))
+  {
+    if (!boost::filesystem::is_directory(dir_path))
+    {
+      LOG_PRINT_RED_L0("export directory path is a file: " << dir_path);
+      return false;
+    }
+  }
+  else
+  {
+    if (!boost::filesystem::create_directory(dir_path))
+    {
+      LOG_PRINT_RED_L0("Failed to create directory " << dir_path);
+      return false;
+    }
+  }
+  std::string file_path = (dir_path / BLOCKCHAIN_RAW).string();
+  m_raw_data_file = new std::ofstream();
+  bool do_initialize_file = false;
+  uint64_t num_blocks = 0;
+  if (! boost::filesystem::exists(file_path))
+  {
+    LOG_PRINT_L0("creating file");
+    do_initialize_file = true;
+    num_blocks = 0;
+  }
+  else
+  {
+    num_blocks = count_blocks(file_path);
+    LOG_PRINT_L0("appending to existing file with height: " << num_blocks-1 << "  total blocks: " << num_blocks);
+  }
+  m_height = num_blocks;
+  if (do_initialize_file)
+    m_raw_data_file->open(file_path, std::ios_base::binary | std::ios_base::out | std::ios::trunc);
+  else
+    m_raw_data_file->open(file_path, std::ios_base::binary | std::ios_base::out | std::ios::app | std::ios::ate);
+  if (m_raw_data_file->fail())
+    return false;
+  m_output_stream = new boost::iostreams::stream<boost::iostreams::back_insert_device<buffer_type>>(m_buffer);
+  if (m_output_stream == nullptr)
+    return false;
+  if (do_initialize_file)
+    initialize_file();
+  return true;
+}
donator
Activity: 1274
Merit: 1060
GetMonero.org / MyMonero.com
Continuation of: Git diff from commit e940386f9a8765423ab3dd9e3aabe19a68cba9f9 (thankful_for_today's last commit) to current HEAD, excluding changes in /external so as to exclude libraries and submoduled code, excluding contrib/ so as to exclude changes made to epee, excluding most comments (because apparently nobody who writes code for BCN knows what a comment is anyway), excluding removed lines, and excluding empty lines


+uint64_t BlockchainLMDB::get_block_already_generated_coins(const uint64_t& height) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  mdb_txn_safe* txn_ptr = &txn;
+  if (m_batch_active)
+    txn_ptr = m_write_txn;
+  else
+  {
+    if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+      throw0(DB_ERROR("Failed to create a transaction for the db"));
+  }
+  MDB_val_copy<uint64_t> key(height);
+  MDB_val result;
+  auto get_result = mdb_get(*txn_ptr, m_block_coins, &key, &result);
+  if (get_result == MDB_NOTFOUND)
+  {
+    throw0(DB_ERROR(std::string("Attempt to get generated coins from height ").append(boost::lexical_cast(height)).append(" failed -- block size not in db").c_str()));
+  }
+  else if (get_result)
+    throw0(DB_ERROR("Error attempting to retrieve a total generated coins from the db"));
+  if (! m_batch_active)
+    txn.commit();
+  return *(const uint64_t*)result.mv_data;
+}
+crypto::hash BlockchainLMDB::get_block_hash_from_height(const uint64_t& height) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  mdb_txn_safe* txn_ptr = &txn;
+  if (m_batch_active)
+    txn_ptr = m_write_txn;
+  else
+  {
+    if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+      throw0(DB_ERROR("Failed to create a transaction for the db"));
+  }
+  MDB_val_copy<uint64_t> key(height);
+  MDB_val result;
+  auto get_result = mdb_get(*txn_ptr, m_block_hashes, &key, &result);
+  if (get_result == MDB_NOTFOUND)
+  {
+    throw0(BLOCK_DNE(std::string("Attempt to get hash from height ").append(boost::lexical_cast<std::string>(height)).append(" failed -- hash not in db").c_str()));
+  }
+  else if (get_result)
+    throw0(DB_ERROR(std::string("Error attempting to retrieve a block hash from the db: ").
+          append(mdb_strerror(get_result)).c_str()));
+  if (! m_batch_active)
+    txn.commit();
+  return *(crypto::hash*)result.mv_data;
+}
+std::vector<block> BlockchainLMDB::get_blocks_range(const uint64_t& h1, const uint64_t& h2) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  std::vector<block> v;
+  for (uint64_t height = h1; height <= h2; ++height)
+  {
+    v.push_back(get_block_from_height(height));
+  }
+  return v;
+}
+std::vector<crypto::hash> BlockchainLMDB::get_hashes_range(const uint64_t& h1, const uint64_t& h2) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  std::vector<crypto::hash> v;
+  for (uint64_t height = h1; height <= h2; ++height)
+  {
+    v.push_back(get_block_hash_from_height(height));
+  }
+  return v;
+}
+crypto::hash BlockchainLMDB::top_block_hash() const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  if (m_height != 0)
+  {
+    return get_block_hash_from_height(m_height - 1);
+  }
+  return null_hash;
+}
+block BlockchainLMDB::get_top_block() const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  if (m_height != 0)
+  {
+    return get_block_from_height(m_height - 1);
+  }
+  block b;
+  return b;
+}
+uint64_t BlockchainLMDB::height() const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  return m_height;
+}
+bool BlockchainLMDB::tx_exists(const crypto::hash& h) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  mdb_txn_safe* txn_ptr = &txn;
+  if (m_batch_active)
+    txn_ptr = m_write_txn;
+  else
+  {
+    if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+      throw0(DB_ERROR("Failed to create a transaction for the db"));
+  }
+  MDB_val_copy<crypto::hash> key(h);
+  MDB_val result;
+  TIME_MEASURE_START(time1);
+  auto get_result = mdb_get(*txn_ptr, m_txs, &key, &result);
+  TIME_MEASURE_FINISH(time1);
+  time_tx_exists += time1;
+  if (get_result == MDB_NOTFOUND)
+  {
+    if (! m_batch_active)
+      txn.commit();
+    LOG_PRINT_L1("transaction with hash " << epee::string_tools::pod_to_hex(h) << " not found in db");
+    return false;
+  }
+  else if (get_result)
+    throw0(DB_ERROR("DB error attempting to fetch transaction from hash"));
+  return true;
+}
+uint64_t BlockchainLMDB::get_tx_unlock_time(const crypto::hash& h) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+  MDB_val_copy<crypto::hash> key(h);
+  MDB_val result;
+  auto get_result = mdb_get(txn, m_tx_unlocks, &key, &result);
+  if (get_result == MDB_NOTFOUND)
+    throw1(TX_DNE(std::string("tx unlock time with hash ").append(epee::string_tools::pod_to_hex(h)).append(" not found in db").c_str()));
+  else if (get_result)
+    throw0(DB_ERROR("DB error attempting to fetch tx unlock time from hash"));
+  return *(const uint64_t*)result.mv_data;
+}
+transaction BlockchainLMDB::get_tx(const crypto::hash& h) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  mdb_txn_safe* txn_ptr = &txn;
+  if (m_batch_active)
+    txn_ptr = m_write_txn;
+  else
+  {
+    if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+      throw0(DB_ERROR("Failed to create a transaction for the db"));
+  }
+  MDB_val_copy<crypto::hash> key(h);
+  MDB_val result;
+  auto get_result = mdb_get(*txn_ptr, m_txs, &key, &result);
+  if (get_result == MDB_NOTFOUND)
+    throw1(TX_DNE(std::string("tx with hash ").append(epee::string_tools::pod_to_hex(h)).append(" not found in db").c_str()));
+  else if (get_result)
+    throw0(DB_ERROR("DB error attempting to fetch tx from hash"));
+  blobdata bd;
+  bd.assign(reinterpret_cast<const char*>(result.mv_data), result.mv_size);
+  transaction tx;
+  if (!parse_and_validate_tx_from_blob(bd, tx))
+    throw0(DB_ERROR("Failed to parse tx from blob retrieved from the db"));
+  if (! m_batch_active)
+    txn.commit();
+  return tx;
+}
+uint64_t BlockchainLMDB::get_tx_count() const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+  MDB_stat db_stats;
+  if (mdb_stat(txn, m_txs, &db_stats))
+    throw0(DB_ERROR("Failed to query m_txs"));
+  txn.commit();
+  return db_stats.ms_entries;
+}
+std::vector<transaction> BlockchainLMDB::get_tx_list(const std::vector<crypto::hash>& hlist) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  std::vector<transaction> v;
+  for (auto& h : hlist)
+  {
+    v.push_back(get_tx(h));
+  }
+  return v;
+}
+uint64_t BlockchainLMDB::get_tx_block_height(const crypto::hash& h) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  mdb_txn_safe* txn_ptr = &txn;
+  if (m_batch_active)
+    txn_ptr = m_write_txn;
+  else
+  {
+    if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+      throw0(DB_ERROR("Failed to create a transaction for the db"));
+  }
+  MDB_val_copy<crypto::hash> key(h);
+  MDB_val result;
+  auto get_result = mdb_get(*txn_ptr, m_tx_heights, &key, &result);
+  if (get_result == MDB_NOTFOUND)
+  {
+    throw1(TX_DNE(std::string("tx height with hash ").append(epee::string_tools::pod_to_hex(h)).append(" not found in db").c_str()));
+  }
+  else if (get_result)
+    throw0(DB_ERROR("DB error attempting to fetch tx height from hash"));
+  if (! m_batch_active)
+    txn.commit();
+  return *(const uint64_t*)result.mv_data;
+}
+uint64_t BlockchainLMDB::get_random_output(const uint64_t& amount) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  uint64_t num_outputs = get_num_outputs(amount);
+  if (num_outputs == 0)
+    throw1(OUTPUT_DNE("Attempting to get a random output for an amount, but none exist"));
+  return crypto::rand<uint64_t>() % num_outputs;
+}
+uint64_t BlockchainLMDB::get_num_outputs(const uint64_t& amount) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+  lmdb_cur cur(txn, m_output_amounts);
+  MDB_val_copy<uint64_t> k(amount);
+  MDB_val v;
+  auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
+  if (result == MDB_NOTFOUND)
+  {
+    return 0;
+  }
+  else if (result)
+    throw0(DB_ERROR("DB error attempting to get number of outputs of an amount"));
+  size_t num_elems = 0;
+  mdb_cursor_count(cur, &num_elems);
+  txn.commit();
+  return num_elems;
+}
+output_data_t BlockchainLMDB::get_output_key(const uint64_t &global_index) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+   MDB_val_copy<uint64_t> k(global_index);
+  MDB_val v;
+  auto get_result = mdb_get(txn, m_output_keys, &k, &v);
+  if (get_result == MDB_NOTFOUND)
+    throw0(DB_ERROR("Attempting to get output pubkey by global index, but key does not exist"));
+  else if (get_result)
+    throw0(DB_ERROR("Error attempting to retrieve an output pubkey from the db"));
+   txn.commit();
+   return *(output_data_t *) v.mv_data;
+}
+output_data_t BlockchainLMDB::get_output_key(const uint64_t& amount, const uint64_t& index)
+{
+   LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+   check_open();
+   uint64_t glob_index = get_output_global_index(amount, index);
+   return get_output_key(glob_index);
+}
+tx_out BlockchainLMDB::get_output(const crypto::hash& h, const uint64_t& index) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+  lmdb_cur cur(txn, m_tx_outputs);
+  MDB_val_copy<crypto::hash> k(h);
+  MDB_val v;
+  auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
+  if (result == MDB_NOTFOUND)
+    throw1(OUTPUT_DNE("Attempting to get an output by tx hash and tx index, but output not found"));
+  else if (result)
+    throw0(DB_ERROR("DB error attempting to get an output"));
+  size_t num_elems = 0;
+  mdb_cursor_count(cur, &num_elems);
+  if (num_elems <= index)
+    throw1(OUTPUT_DNE("Attempting to get an output by tx hash and tx index, but output not found"));
+  mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
+  for (uint64_t i = 0; i < index; ++i)
+  {
+    mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
+  }
+  mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
+  blobdata b;
+  b = *(blobdata*)v.mv_data;
+  cur.close();
+  txn.commit();
+  return output_from_blob(b);
+}
+tx_out BlockchainLMDB::get_output(const uint64_t& index) const
+{
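+  // stubbed out: returns a default-constructed tx_out right away, so the lookup code below is never reached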
+  return tx_out();
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+  MDB_val_copy<uint64_t> k(index);
+  MDB_val v;
+  auto get_result = mdb_get(txn, m_outputs, &k, &v);
+  if (get_result == MDB_NOTFOUND)
+  {
+    throw OUTPUT_DNE("Attempting to get output by global index, but output does not exist");
+  }
+  else if (get_result)
+    throw0(DB_ERROR("Error attempting to retrieve an output from the db"));
+  blobdata b = *(blobdata*)v.mv_data;
+  return output_from_blob(b);
+}
+tx_out_index BlockchainLMDB::get_output_tx_and_index_from_global(const uint64_t& index) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  mdb_txn_safe* txn_ptr = &txn;
+  if (m_batch_active)
+    txn_ptr = m_write_txn;
+  else
+  {
+    if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+      throw0(DB_ERROR("Failed to create a transaction for the db"));
+  }
+  MDB_val_copy<uint64_t> k(index);
+  MDB_val v;
+  auto get_result = mdb_get(*txn_ptr, m_output_txs, &k, &v);
+  if (get_result == MDB_NOTFOUND)
+    throw1(OUTPUT_DNE("output with given index not in db"));
+  else if (get_result)
+    throw0(DB_ERROR("DB error attempting to fetch output tx hash"));
+  crypto::hash tx_hash = *(crypto::hash*)v.mv_data;
+  get_result = mdb_get(*txn_ptr, m_output_indices, &k, &v);
+  if (get_result == MDB_NOTFOUND)
+    throw1(OUTPUT_DNE("output with given index not in db"));
+  else if (get_result)
+    throw0(DB_ERROR("DB error attempting to fetch output tx index"));
+  if (! m_batch_active)
+    txn.commit();
+  return tx_out_index(tx_hash, *(const uint64_t *)v.mv_data);
+}
+tx_out_index BlockchainLMDB::get_output_tx_and_index(const uint64_t& amount, const uint64_t& index)
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+   std::vector < uint64_t > offsets;
+   std::vector<tx_out_index> indices;
+   offsets.push_back(index);
+   get_output_tx_and_index(amount, offsets, indices);
+   if (!indices.size())
+    throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but amount not found"));
+   return indices[0];
+}
+std::vector<uint64_t> BlockchainLMDB::get_tx_output_indices(const crypto::hash& h) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  std::vector<uint64_t> index_vec;
+  mdb_txn_safe txn;
+  if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+  lmdb_cur cur(txn, m_tx_outputs);
+  MDB_val_copy<crypto::hash> k(h);
+  MDB_val v;
+  auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
+  if (result == MDB_NOTFOUND)
+    throw1(OUTPUT_DNE("Attempting to get an output by tx hash and tx index, but output not found"));
+  else if (result)
+    throw0(DB_ERROR("DB error attempting to get an output"));
+  size_t num_elems = 0;
+  mdb_cursor_count(cur, &num_elems);
+  mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
+  for (uint64_t i = 0; i < num_elems; ++i)
+  {
+    mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
+    index_vec.push_back(*(const uint64_t *)v.mv_data);
+    mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
+  }
+  cur.close();
+  txn.commit();
+  return index_vec;
+}
+std::vector<uint64_t> BlockchainLMDB::get_tx_amount_output_indices(const crypto::hash& h) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  std::vector<uint64_t> index_vec;
+  std::vector<uint64_t> index_vec2;
+  // get the transaction's global output indices first
+  index_vec = get_tx_output_indices(h);
+  // these are next used to obtain the amount output indices
+  transaction tx = get_tx(h);
+  mdb_txn_safe txn;
+  if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+  uint64_t i = 0;
+  uint64_t global_index;
+  BOOST_FOREACH(const auto& vout, tx.vout)
+  {
+    uint64_t amount =  vout.amount;
+    global_index = index_vec[i];
+    lmdb_cur cur(txn, m_output_amounts);
+    MDB_val_copy<uint64_t> k(amount);
+    MDB_val v;
+    auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
+    if (result == MDB_NOTFOUND)
+      throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but amount not found"));
+    else if (result)
+      throw0(DB_ERROR("DB error attempting to get an output"));
+    size_t num_elems = 0;
+    mdb_cursor_count(cur, &num_elems);
+    mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
+    uint64_t amount_output_index = 0;
+    uint64_t output_index = 0;
+    bool found_index = false;
+    for (uint64_t j = 0; j < num_elems; ++j)
+    {
+      mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
+      output_index = *(const uint64_t *)v.mv_data;
+      if (output_index == global_index)
+      {
+        amount_output_index = j;
+        found_index = true;
+        break;
+      }
+      mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
+    }
+    if (found_index)
+    {
+      index_vec2.push_back(amount_output_index);
+    }
+    else
+    {
+      // not found
+      cur.close();
+      txn.commit();
+      throw1(OUTPUT_DNE("specified output not found in db"));
+    }
+    cur.close();
+    ++i;
+  }
+  txn.commit();
+  return index_vec2;
+}
+bool BlockchainLMDB::has_key_image(const crypto::key_image& img) const
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+  MDB_val_copy<crypto::key_image> val_key(img);
+  MDB_val unused;
+  if (mdb_get(txn, m_spent_keys, &val_key, &unused) == 0)
+  {
+    txn.commit();
+    return true;
+  }
+  txn.commit();
+  return false;
+}
+void BlockchainLMDB::batch_start(uint64_t batch_num_blocks)
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  if (! m_batch_transactions)
+    throw0(DB_ERROR("batch transactions not enabled"));
+  if (m_batch_active)
+    throw0(DB_ERROR("batch transaction already in progress"));
+  if (m_write_batch_txn != nullptr)
+    throw0(DB_ERROR("batch transaction already in progress"));
+  if (m_write_txn)
+    throw0(DB_ERROR("batch transaction attempted, but m_write_txn already in use"));
+  check_open();
+  check_and_resize_for_batch(batch_num_blocks);
+  m_write_batch_txn = new mdb_txn_safe();
+  // NOTE: need to make sure it's destroyed properly when done
+  if (mdb_txn_begin(m_env, NULL, 0, *m_write_batch_txn))
+    throw0(DB_ERROR("Failed to create a transaction for the db"));
+  // indicates this transaction is for batch transactions, but not whether it's
+  // active
+  m_write_batch_txn->m_batch_txn = true;
+  m_write_txn = m_write_batch_txn;
+  m_batch_active = true;
+  LOG_PRINT_L3("batch transaction: begin");
+}
+void BlockchainLMDB::batch_commit()
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  if (! m_batch_transactions)
+    throw0(DB_ERROR("batch transactions not enabled"));
+  if (! m_batch_active)
+    throw0(DB_ERROR("batch transaction not in progress"));
+  if (m_write_batch_txn == nullptr)
+    throw0(DB_ERROR("batch transaction not in progress"));
+  check_open();
+  LOG_PRINT_L3("batch transaction: committing...");
+  TIME_MEASURE_START(time1);
+  m_write_txn->commit();
+  TIME_MEASURE_FINISH(time1);
+  time_commit1 += time1;
+  LOG_PRINT_L3("batch transaction: committed");
+  m_write_txn = nullptr;
+  delete m_write_batch_txn;
+}
+void BlockchainLMDB::batch_stop()
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  if (! m_batch_transactions)
+    throw0(DB_ERROR("batch transactions not enabled"));
+  if (! m_batch_active)
+    throw0(DB_ERROR("batch transaction not in progress"));
+  if (m_write_batch_txn == nullptr)
+    throw0(DB_ERROR("batch transaction not in progress"));
+  check_open();
+  LOG_PRINT_L3("batch transaction: committing...");
+  TIME_MEASURE_START(time1);
+  m_write_txn->commit();
+  TIME_MEASURE_FINISH(time1);
+  time_commit1 += time1;
+  // for destruction of batch transaction
+  m_write_txn = nullptr;
+  delete m_write_batch_txn;
+  m_write_batch_txn = nullptr;
+  m_batch_active = false;
+  LOG_PRINT_L3("batch transaction: end");
+}
+void BlockchainLMDB::batch_abort()
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  if (! m_batch_transactions)
+    throw0(DB_ERROR("batch transactions not enabled"));
+  if (! m_batch_active)
+    throw0(DB_ERROR("batch transaction not in progress"));
+  check_open();
+  // for destruction of batch transaction
+  m_write_txn = nullptr;
+  // call abort() explicitly in case mdb_env_close() (via BlockchainLMDB::close()) is called before the BlockchainLMDB destructor runs.
+  m_write_batch_txn->abort();
+  m_batch_active = false;
+  m_write_batch_txn = nullptr;
+  LOG_PRINT_L3("batch transaction: aborted");
+}
+void BlockchainLMDB::set_batch_transactions(bool batch_transactions)
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  m_batch_transactions = batch_transactions;
+  LOG_PRINT_L3("batch transactions " << (m_batch_transactions ? "enabled" : "disabled"));
+}
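For orientation, the batch functions above are meant to bracket a run of add_block() calls from the import tool; the driver code is not quoted here, so the following is only a sketch of the intended call pattern, assuming a BlockchainLMDB constructed with batch support:

    // Sketch of the batch call pattern (illustrative, not the import tool's code).
    void import_chunk_sketch(cryptonote::BlockchainLMDB& db, uint64_t batch_size)
    {
      db.set_batch_transactions(true);
      db.batch_start(batch_size);   // checks and, if needed, resizes the LMDB map up front
      try
      {
        // ... up to batch_size calls to db.add_block(...) go here ...
        db.batch_stop();            // commits the batch write transaction
      }
      catch (...)
      {
        db.batch_abort();           // must be aborted before the environment is closed
        throw;
      }
    }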
+uint64_t BlockchainLMDB::add_block(const block& blk, const size_t& block_size, const difficulty_type& cumulative_difficulty, const uint64_t& coins_generated,
+      const std::vector<transaction>& txs)
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  if (m_height % 1000 == 0)
+  {
+    // for batch mode, DB resize check is done at start of batch transaction
+    if (! m_batch_active && need_resize())
+    {
+      LOG_PRINT_L0("LMDB memory map needs resized, doing that now.");
+      do_resize();
+    }
+  }
+  mdb_txn_safe txn;
+  if (! m_batch_active)
+  {
+    if (mdb_txn_begin(m_env, NULL, 0, txn))
+      throw0(DB_ERROR("Failed to create a transaction for the db"));
+    m_write_txn = &txn;
+  }
+  uint64_t num_outputs = m_num_outputs;
+  try
+  {
+    BlockchainDB::add_block(blk, block_size, cumulative_difficulty, coins_generated, txs);
+    if (! m_batch_active)
+    {
+      m_write_txn = NULL;
+      TIME_MEASURE_START(time1);
+      txn.commit();
+      TIME_MEASURE_FINISH(time1);
+      time_commit1 += time1;
+    }
+  }
+  catch (...)
+  {
+    m_num_outputs = num_outputs;
+    if (! m_batch_active)
+      m_write_txn = NULL;
+    throw;
+  }
+  return ++m_height;
+}
+void BlockchainLMDB::pop_block(block& blk, std::vector<transaction>& txs)
+{
+  LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+  check_open();
+  mdb_txn_safe txn;
+  if (! m_batch_active)
+  {
+    if (mdb_txn_begin(m_env, NULL, 0, txn))
+      throw0(DB_ERROR("Failed to create a transaction for the db"));
+    m_write_txn = &txn;
+  }
+  uint64_t num_outputs = m_num_outputs;
+  try
+  {
+    BlockchainDB::pop_block(blk, txs);
+    if (! m_batch_active)
+    {
+      m_write_txn = NULL;
+      txn.commit();
+    }
+  }
+  catch (...)
+  {
+    m_num_outputs = num_outputs;
+    m_write_txn = NULL;
+    throw;
+  }
+  --m_height;
+}
+void BlockchainLMDB::get_output_tx_and_index_from_global(const std::vector<uint64_t> &global_indices,
+      std::vector<tx_out_index> &tx_out_indices) const
+{
+   LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+   check_open();
+   tx_out_indices.clear();
+   mdb_txn_safe txn;
+   mdb_txn_safe* txn_ptr = &txn;
+   if (m_batch_active)
+      txn_ptr = m_write_txn;
+   else
+   {
+      if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+         throw0(DB_ERROR("Failed to create a transaction for the db"));
+   }
+   for (const uint64_t &index : global_indices)
+   {
+      MDB_val_copy<uint64_t> k(index);
+      MDB_val v;
+      auto get_result = mdb_get(*txn_ptr, m_output_txs, &k, &v);
+      if (get_result == MDB_NOTFOUND)
+         throw1(OUTPUT_DNE("output with given index not in db"));
+      else if (get_result)
+         throw0(DB_ERROR("DB error attempting to fetch output tx hash"));
+      crypto::hash tx_hash = *(crypto::hash*) v.mv_data;
+      get_result = mdb_get(*txn_ptr, m_output_indices, &k, &v);
+      if (get_result == MDB_NOTFOUND)
+         throw1(OUTPUT_DNE("output with given index not in db"));
+      else if (get_result)
+         throw0(DB_ERROR("DB error attempting to fetch output tx index"));
+      auto result = tx_out_index(tx_hash, *(const uint64_t *) v.mv_data);
+      tx_out_indices.push_back(result);
+   }
+   if (!m_batch_active)
+      txn.commit();
+}
+void BlockchainLMDB::get_output_global_indices(const uint64_t& amount, const std::vector<uint64_t> &offsets,
+      std::vector<uint64_t> &global_indices)
+{
+   LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+   TIME_MEASURE_START(txx);
+   check_open();
+   global_indices.clear();
+   uint64_t max = 0;
+   for (const uint64_t &index : offsets)
+   {
+      if (index > max)
+         max = index;
+   }
+   mdb_txn_safe txn;
+   mdb_txn_safe* txn_ptr = &txn;
+   if(m_batch_active)
+      txn_ptr = m_write_txn;
+   else
+   {
+      if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+         throw0(DB_ERROR("Failed to create a transaction for the db"));
+   }
+   lmdb_cur cur(*txn_ptr, m_output_amounts);
+   MDB_val_copy<uint64_t> k(amount);
+   MDB_val v;
+   auto result = mdb_cursor_get(cur, &k, &v, MDB_SET);
+   if (result == MDB_NOTFOUND)
+      throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but amount not found"));
+   else if (result)
+      throw0(DB_ERROR("DB error attempting to get an output"));
+   size_t num_elems = 0;
+   mdb_cursor_count(cur, &num_elems);
+   if (max <= 1 && num_elems <= max)
+      throw1(OUTPUT_DNE("Attempting to get an output index by amount and amount index, but output not found"));
+   uint64_t t_dbmul = 0;
+   uint64_t t_dbscan = 0;
+   if (max <= 1)
+   {
+      for (const uint64_t& index : offsets)
+      {
+         mdb_cursor_get(cur, &k, &v, MDB_FIRST_DUP);
+         for (uint64_t i = 0; i < index; ++i)
+         {
+            mdb_cursor_get(cur, &k, &v, MDB_NEXT_DUP);
+         }
+         mdb_cursor_get(cur, &k, &v, MDB_GET_CURRENT);
+         uint64_t glob_index = *(const uint64_t*) v.mv_data;
+         LOG_PRINT_L3("Amount: " << amount << " M0->v: " << glob_index);
+         global_indices.push_back(glob_index);
+      }
+   }
+   else
+   {
+      uint32_t curcount = 0;
+      uint32_t blockstart = 0;
+      for (const uint64_t& index : offsets)
+      {
+         if (index >= num_elems)
+         {
+            LOG_PRINT_L1("Index: " << index << " Elems: " << num_elems << " partial results found for get_output_tx_and_index");
+            break;
+         }
+         while (index >= curcount)
+         {
+            TIME_MEASURE_START(db1);
+            if (mdb_cursor_get(cur, &k, &v, curcount == 0 ? MDB_GET_MULTIPLE : MDB_NEXT_MULTIPLE) != 0)
+            {
+               // allow partial results
+               result = false;
+               break;
+            }
+            int count = v.mv_size / sizeof(uint64_t);
+            blockstart = curcount;
+            curcount += count;
+            TIME_MEASURE_FINISH(db1);
+            t_dbmul += db1;
+         }
+         LOG_PRINT_L3("Records returned: " << curcount << " Index: " << index);
+         TIME_MEASURE_START(db2);
+         uint64_t actual_index = index - blockstart;
+         uint64_t glob_index = ((const uint64_t*) v.mv_data)[actual_index];
+         LOG_PRINT_L3("Amount: " << amount << " M1->v: " << glob_index);
+         global_indices.push_back(glob_index);
+         TIME_MEASURE_FINISH(db2);
+         t_dbscan += db2;
+      }
+   }
+   cur.close();
+   if(!m_batch_active)
+      txn.commit();
+   TIME_MEASURE_FINISH(txx);
+   LOG_PRINT_L3("txx: " << txx << " db1: " << t_dbmul << " db2: " << t_dbscan);
+}
+void BlockchainLMDB::get_output_key(const uint64_t &amount, const std::vector<uint64_t> &offsets, std::vector<output_data_t> &outputs)
+{
+   LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+   TIME_MEASURE_START(db3);
+   check_open();
+   outputs.clear();
+   std::vector<uint64_t> global_indices;
+   get_output_global_indices(amount, offsets, global_indices);
+   if (global_indices.size() > 0)
+   {
+    mdb_txn_safe txn;
+    mdb_txn_safe* txn_ptr = &txn;
+    if (m_batch_active)
+      txn_ptr = m_write_txn;
+    else
+    {
+      if (mdb_txn_begin(m_env, NULL, MDB_RDONLY, txn))
+        throw0(DB_ERROR("Failed to create a transaction for the db"));
+    }
+      for (const uint64_t &index : global_indices)
+      {
+         MDB_val_copy<uint64_t> k(index);
+         MDB_val v;
+         auto get_result = mdb_get(*txn_ptr, m_output_keys, &k, &v);
+         if (get_result == MDB_NOTFOUND)
+            throw0(DB_ERROR("Attempting to get output pubkey by global index, but key does not exist"));
+         else if (get_result)
+            throw0(DB_ERROR("Error attempting to retrieve an output pubkey from the db"));
+         output_data_t data = *(output_data_t *) v.mv_data;
+         outputs.push_back(data);
+      }
+    if (!m_batch_active)
+      txn.commit();
+   }
+   TIME_MEASURE_FINISH(db3);
+   LOG_PRINT_L3("db3: " << db3);
+}
+void BlockchainLMDB::get_output_tx_and_index(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices)
+{
+   LOG_PRINT_L3("BlockchainLMDB::" << __func__);
+   check_open();
+   indices.clear();
+   std::vector<uint64_t> global_indices;
+   get_output_global_indices(amount, offsets, global_indices);
+   TIME_MEASURE_START(db3);
+   if(global_indices.size() > 0)
+   {
+      get_output_tx_and_index_from_global(global_indices, indices);
+   }
+   TIME_MEASURE_FINISH(db3);
+   LOG_PRINT_L3("db3: " << db3);
+}
+}  // namespace cryptonote
diff --git a/src/blockchain_db/lmdb/db_lmdb.h b/src/blockchain_db/lmdb/db_lmdb.h
new file mode 100644
index 0000000..0facb87
+++ b/src/blockchain_db/lmdb/db_lmdb.h
@@ -0,0 +1,320 @@
+namespace cryptonote
+{
+struct mdb_txn_safe
+{
+  mdb_txn_safe();
+  ~mdb_txn_safe();
+  void commit(std::string message = "");
+  // This should only be needed for a batch transaction, which must be aborted
+  // before mdb_env_close, not after. So we can't rely on the BlockchainLMDB
+  // destructor to invoke the mdb_txn_safe destructor, as that's too late to
+  // abort properly, since mdb_env_close would already have been called.
+  void abort();
+  operator MDB_txn*()
+  {
+    return m_txn;
+  }
+  operator MDB_txn**()
+  {
+    return &m_txn;
+  }
+  uint64_t num_active_tx() const;
+  static void prevent_new_txns();
+  static void wait_no_active_txns();
+  static void allow_new_txns();
+  MDB_txn* m_txn;
+  bool m_batch_txn = false;
+  static std::atomic<uint64_t> num_active_txns;
+  // could use a mutex here, but this should be sufficient.
+  static std::atomic_flag creation_gate;
+};
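Nearly every accessor earlier in db_lmdb.cpp opens one of these wrappers around a single mdb_get(). A minimal sketch of that idiom, using the two conversion operators declared above (the function name is illustrative; env and dbi are supplied by the caller):

    // Sketch of the read-transaction idiom used throughout BlockchainLMDB.
    // The mdb_txn_safe destructor is expected to abort the txn if commit() was never reached.
    bool key_exists_sketch(MDB_env* env, MDB_dbi dbi, MDB_val key)
    {
      cryptonote::mdb_txn_safe txn;
      if (mdb_txn_begin(env, NULL, MDB_RDONLY, txn))            // operator MDB_txn**() passes &m_txn
        return false;
      MDB_val val;
      const bool found = (mdb_get(txn, dbi, &key, &val) == 0);  // operator MDB_txn*() passes m_txn
      txn.commit();                                             // read txns simply release here
      return found;
    }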
+class BlockchainLMDB : public BlockchainDB
+{
+public:
+  BlockchainLMDB(bool batch_transactions=false);
+  ~BlockchainLMDB();
+  virtual void open(const std::string& filename, const int mdb_flags=0);
+  virtual void close();
+  virtual void sync();
+  virtual void reset();
+  virtual std::vector<std::string> get_filenames() const;
+  virtual std::string get_db_name() const;
+  virtual bool lock();
+  virtual void unlock();
+  virtual bool block_exists(const crypto::hash& h) const;
+  virtual block get_block(const crypto::hash& h) const;
+  virtual uint64_t get_block_height(const crypto::hash& h) const;
+  virtual block_header get_block_header(const crypto::hash& h) const;
+  virtual block get_block_from_height(const uint64_t& height) const;
+  virtual uint64_t get_block_timestamp(const uint64_t& height) const;
+  virtual uint64_t get_top_block_timestamp() const;
+  virtual size_t get_block_size(const uint64_t& height) const;
+  virtual difficulty_type get_block_cumulative_difficulty(const uint64_t& height) const;
+  virtual difficulty_type get_block_difficulty(const uint64_t& height) const;
+  virtual uint64_t get_block_already_generated_coins(const uint64_t& height) const;
+  virtual crypto::hash get_block_hash_from_height(const uint64_t& height) const;
+  virtual std::vector<block> get_blocks_range(const uint64_t& h1, const uint64_t& h2) const;
+  virtual std::vector<crypto::hash> get_hashes_range(const uint64_t& h1, const uint64_t& h2) const;
+  virtual crypto::hash top_block_hash() const;
+  virtual block get_top_block() const;
+  virtual uint64_t height() const;
+  virtual bool tx_exists(const crypto::hash& h) const;
+  virtual uint64_t get_tx_unlock_time(const crypto::hash& h) const;
+  virtual transaction get_tx(const crypto::hash& h) const;
+  virtual uint64_t get_tx_count() const;
+  virtual std::vector<transaction> get_tx_list(const std::vector<crypto::hash>& hlist) const;
+  virtual uint64_t get_tx_block_height(const crypto::hash& h) const;
+  virtual uint64_t get_random_output(const uint64_t& amount) const;
+  virtual uint64_t get_num_outputs(const uint64_t& amount) const;
+  virtual output_data_t get_output_key(const uint64_t& amount, const uint64_t& index);
+  virtual output_data_t get_output_key(const uint64_t& global_index) const;
+  virtual void get_output_key(const uint64_t &amount, const std::vector<uint64_t> &offsets, std::vector<output_data_t> &outputs);
+  virtual tx_out get_output(const crypto::hash& h, const uint64_t& index) const;
+  /**
+   * @brief get an output from its global index
+   *
+   * @param index global index of the output desired
+   *
+   * @return the output associated with the index.
+   * Will throw OUTPUT_DNE if no output has that global index.
+   * Will throw DB_ERROR if there is a non-specific LMDB error in fetching
+   */
+  tx_out get_output(const uint64_t& index) const;
+  virtual tx_out_index get_output_tx_and_index_from_global(const uint64_t& index) const;
+  virtual void get_output_tx_and_index_from_global(const std::vector<uint64_t> &global_indices,
+        std::vector<tx_out_index> &tx_out_indices) const;
+  virtual tx_out_index get_output_tx_and_index(const uint64_t& amount, const uint64_t& index);
+  virtual void get_output_tx_and_index(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<tx_out_index> &indices);
+  virtual void get_output_global_indices(const uint64_t& amount, const std::vector<uint64_t> &offsets, std::vector<uint64_t> &indices);
+  virtual std::vector<uint64_t> get_tx_output_indices(const crypto::hash& h) const;
+  virtual std::vector<uint64_t> get_tx_amount_output_indices(const crypto::hash& h) const;
+  virtual bool has_key_image(const crypto::key_image& img) const;
+  virtual uint64_t add_block( const block& blk
+                            , const size_t& block_size
+                            , const difficulty_type& cumulative_difficulty
+                            , const uint64_t& coins_generated
+                            , const std::vector<transaction>& txs
+                            );
+  virtual void set_batch_transactions(bool batch_transactions);
+  virtual void batch_start(uint64_t batch_num_blocks=0);
+  virtual void batch_commit();
+  virtual void batch_stop();
+  virtual void batch_abort();
+  virtual void pop_block(block& blk, std::vector<transaction>& txs);
+  virtual bool can_thread_bulk_indices() const { return true; }
+private:
+  void do_resize(uint64_t size_increase=0);
+  bool need_resize(uint64_t threshold_size=0) const;
+  void check_and_resize_for_batch(uint64_t batch_num_blocks);
+  uint64_t get_estimated_batch_size(uint64_t batch_num_blocks) const;
+  virtual void add_block( const block& blk
+                , const size_t& block_size
+                , const difficulty_type& cumulative_difficulty
+                , const uint64_t& coins_generated
+                , const crypto::hash& block_hash
+                );
+  virtual void remove_block();
+  virtual void add_transaction_data(const crypto::hash& blk_hash, const transaction& tx, const crypto::hash& tx_hash);
+  virtual void remove_transaction_data(const crypto::hash& tx_hash, const transaction& tx);
+  virtual void add_output(const crypto::hash& tx_hash, const tx_out& tx_output, const uint64_t& local_index, const uint64_t unlock_time);
+  virtual void remove_output(const tx_out& tx_output);
+  void remove_tx_outputs(const crypto::hash& tx_hash, const transaction& tx);
+  void remove_output(const uint64_t& out_index, const uint64_t amount);
+  void remove_amount_output_index(const uint64_t amount, const uint64_t global_output_index);
+  virtual void add_spent_key(const crypto::key_image& k_image);
+  virtual void remove_spent_key(const crypto::key_image& k_image);
+  /**
+   * @brief convert a tx output to a blob for storage
+   *
+   * @param output the output to convert
+   *
+   * @return the resultant blob
+   */
+  blobdata output_to_blob(const tx_out& output) const;
+  /**
+   * @brief convert a tx output blob to a tx output
+   *
+   * @param blob the blob to convert
+   *
+   * @return the resultant tx output
+   */
+  tx_out output_from_blob(const blobdata& blob) const;
+  /**
+   * @brief get the global index of the index-th output of the given amount
+   *
+   * @param amount the output amount
+   * @param index the index into the set of outputs of that amount
+   *
+   * @return the global index of the desired output
+   */
+  uint64_t get_output_global_index(const uint64_t& amount, const uint64_t& index);
+  void check_open() const;
+  MDB_env* m_env;
+  MDB_dbi m_blocks;
+  MDB_dbi m_block_heights;
+  MDB_dbi m_block_hashes;
+  MDB_dbi m_block_timestamps;
+  MDB_dbi m_block_sizes;
+  MDB_dbi m_block_diffs;
+  MDB_dbi m_block_coins;
+  MDB_dbi m_txs;
+  MDB_dbi m_tx_unlocks;
+  MDB_dbi m_tx_heights;
+  MDB_dbi m_tx_outputs;
+  MDB_dbi m_output_txs;
+  MDB_dbi m_output_indices;
+  MDB_dbi m_output_gindices;
+  MDB_dbi m_output_amounts;
+  MDB_dbi m_output_keys;
+  MDB_dbi m_outputs;
+  MDB_dbi m_spent_keys;
+  uint64_t m_height;
+  uint64_t m_num_outputs;
+  std::string m_folder;
+  mdb_txn_safe* m_write_txn; // may point to either a short-lived txn or a batch txn
+  mdb_txn_safe* m_write_batch_txn; // persist batch txn outside of BlockchainLMDB
+  bool m_batch_transactions; // support for batch transactions
+  bool m_batch_active; // whether batch transaction is in progress
+  // platform/configuration-dependent defaults for the LMDB map size (the build
+  // picks exactly one of these); force a value so it can compile with 32-bit ARM
+  constexpr static uint64_t DEFAULT_MAPSIZE = 1LL << 31;
+  constexpr static uint64_t DEFAULT_MAPSIZE = 1LL << 30;
+  constexpr static uint64_t DEFAULT_MAPSIZE = 1LL << 33;
+  constexpr static float RESIZE_PERCENT = 0.8f;
+};
+}  // namespace cryptonote
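For context on the last two constants: the resize check that add_block() runs every 1000 blocks (and that check_and_resize_for_batch() runs at the start of a batch) boils down to comparing the space currently used against RESIZE_PERCENT of the current map size. The bodies of need_resize()/do_resize() are not quoted above, so this is only a sketch of that comparison using the standard LMDB stat/info calls:

    // Illustrative sketch of a need_resize()-style check against RESIZE_PERCENT.
    #include <lmdb.h>
    #include <cstdint>

    bool need_resize_sketch(MDB_env* env, float resize_percent)  // e.g. 0.8f
    {
      MDB_envinfo mei;
      MDB_stat mst;
      mdb_env_info(env, &mei);
      mdb_env_stat(env, &mst);
      const uint64_t size_used = (uint64_t)mst.ms_psize * mei.me_last_pgno;  // bytes occupied
      return size_used > resize_percent * (double)mei.me_mapsize;
    }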
diff --git a/src/blockchain_utilities/CMakeLists.txt b/src/blockchain_utilities/CMakeLists.txt
new file mode 100644
index 0000000..5be37c4
+++ b/src/blockchain_utilities/CMakeLists.txt
@@ -0,0 +1,118 @@
+set(blockchain_converter_sources
+  blockchain_converter.cpp
+  )
+set(blockchain_converter_private_headers)
+bitmonero_private_headers(blockchain_converter
+     ${blockchain_converter_private_headers})
+set(blockchain_import_sources
+  blockchain_import.cpp
+  bootstrap_file.cpp
+  )
+set(blockchain_import_private_headers
+  fake_core.h
+  bootstrap_file.h
+  bootstrap_serialization.h
+  )
+bitmonero_private_headers(blockchain_import
+     ${blockchain_import_private_headers})
+set(blockchain_export_sources
+  blockchain_export.cpp
+  bootstrap_file.cpp
+  )
+set(blockchain_export_private_headers
+  bootstrap_file.h
+  bootstrap_serialization.h
+  )
+bitmonero_private_headers(blockchain_export
+     ${blockchain_export_private_headers})
+if (BLOCKCHAIN_DB STREQUAL DB_LMDB)
+bitmonero_add_executable(blockchain_converter
+  ${blockchain_converter_sources}
+  ${blockchain_converter_private_headers})
+target_link_libraries(blockchain_converter
+  LINK_PRIVATE
+    cryptonote_core
+   p2p
+   blockchain_db
+    ${CMAKE_THREAD_LIBS_INIT})
+add_dependencies(blockchain_converter
+   version)
+set_property(TARGET blockchain_converter
+   PROPERTY
+   OUTPUT_NAME "blockchain_converter")
+endif ()
+bitmonero_add_executable(blockchain_import
+  ${blockchain_import_sources}
+  ${blockchain_import_private_headers})
+target_link_libraries(blockchain_import
+  LINK_PRIVATE
+    cryptonote_core
+   blockchain_db
+   p2p
+    ${CMAKE_THREAD_LIBS_INIT})
+add_dependencies(blockchain_import
+   version)
+set_property(TARGET blockchain_import
+   PROPERTY
+   OUTPUT_NAME "blockchain_import")
+bitmonero_add_executable(blockchain_export
+  ${blockchain_export_sources}
+  ${blockchain_export_private_headers})
+target_link_libraries(blockchain_export
+  LINK_PRIVATE
+    cryptonote_core
+   p2p
+   blockchain_db
+    ${CMAKE_THREAD_LIBS_INIT})
+add_dependencies(blockchain_export
+   version)
+set_property(TARGET blockchain_export
+   PROPERTY
+   OUTPUT_NAME "blockchain_export")
diff --git a/src/blockchain_utilities/README.md b/src/blockchain_utilities/README.md
new file mode 100644
index 0000000..9d24b9a
+++ b/src/blockchain_utilities/README.md
@@ -0,0 +1,68 @@
+Copyright (c) 2014-2015, The Monero Project
+The blockchain utilities allow one to convert an old style blockchain.bin file
+to a new style database. There are two ways to upgrade an old style blockchain:
+The recommended way is to run a `blockchain_export`, then `blockchain_import`.
+The other way is to run `blockchain_converter`. In both cases, you will be left
+with a new style blockchain.
+For importing into the LMDB database, compile with `DATABASE=lmdb`
+e.g.
+`DATABASE=lmdb make release`
+This is also the default compile setting on the master branch.
+By default, the exporter will use the original in-memory database (blockchain.bin) as its source.
+This default is to make migrating to an LMDB database easy, without having to recompile anything.
+To change the source, adjust `SOURCE_DB` in `src/blockchain_utilities/bootstrap_file.h` according to the comments.
+See also each utility's "--help" option.
+`$ blockchain_export`
+This loads the existing blockchain, for whichever database type it was compiled for, and exports it to `$MONERO_DATA_DIR/export/blockchain.raw`
+`$ blockchain_import`
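This reads the exported `blockchain.raw` back in and imports its blocks into whichever database type the tool was compiled for; see its `--help` output for the available options.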