The extracted keys or addresses alone are roughly 750 MB each; the rest is "overhead" from the generator.
I think I will create a query on bitcoin-wall.com/check/ querying each address :/
Probably still going to be slow as heck, but I could split it across multiple threads...
Won't have any hits anyway, so you guys are probably right that I shouldn't import them if I just want to check whether they have a balance...
But I was more interested in owning them "forever" in the first place.
Are there any solutions (maybe bitcoind?) that can handle a larger number of addresses than the desktop wallet?
It's unrealistic to expect that a service would allow 20 million requests at any reasonable rate. You should run your own full node and check the addresses yourself. Unfortunately, there seems to be no easy way to do it, which means you'll have to write your own code that works with the raw blocks or the UTXO set (see the sketch below).
Or maybe you should stick with your method, but modify it to periodically flush the empty addresses to free memory and save resources, e.g. delete all empty addresses every 1,000 addresses.
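For the UTXO-set route, here is a minimal sketch of how one could ask a local Bitcoin Core node over JSON-RPC. This is an assumption-laden illustration, not a finished tool: it presumes a fully synced node with RPC enabled, uses the scantxoutset call (available in Bitcoin Core 0.17+), and the credentials and address are placeholders.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class UtxoScan
{
    static async Task Main()
    {
        // Placeholder RPC credentials; must match rpcuser/rpcpassword in bitcoin.conf.
        var client = new HttpClient();
        var auth = Convert.ToBase64String(Encoding.ASCII.GetBytes("rpcuser:rpcpassword"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", auth);

        // scantxoutset takes "start" plus an array of output descriptors such as addr(...).
        // Many descriptors can be passed per call, which matters because every call
        // scans the whole UTXO set.
        string body = "{\"jsonrpc\":\"1.0\",\"id\":\"scan\",\"method\":\"scantxoutset\","
                    + "\"params\":[\"start\",[\"addr(1BitcoinEaterAddressDontSendf59kuE)\"]]}";

        var response = await client.PostAsync("http://127.0.0.1:8332/",
            new StringContent(body, Encoding.UTF8, "application/json"));

        // The result JSON contains "unspents" and "total_amount" for the scanned addresses.
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}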
Yeah, 20 million single requests won't be good, I guess.
I will partition the addresses into blocks of 1,000 and query each 1k block on its own (roughly as in the sketch below).
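A minimal sketch of that partitioning, assuming the checking service accepts a newline-separated batch of addresses per POST; the endpoint and request format are made up for illustration, not a real API:

using System;
using System.Collections.Generic;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class BatchQuery
{
    static async Task Main()
    {
        var client = new HttpClient();
        var batch = new List<string>(1000);

        // Stream the 750 MB address file line by line instead of loading it whole.
        foreach (string address in File.ReadLines("addresses.txt"))
        {
            batch.Add(address);
            if (batch.Count == 1000)
            {
                await QueryBatch(client, batch);
                batch.Clear();
            }
        }
        if (batch.Count > 0) await QueryBatch(client, batch); // final partial block
    }

    static async Task QueryBatch(HttpClient client, List<string> batch)
    {
        // Hypothetical endpoint taking one address per line; adapt to the real service.
        var content = new StringContent(string.Join("\n", batch), Encoding.UTF8, "text/plain");
        var response = await client.PostAsync("https://example.com/check", content);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}

Splitting it across threads, as mentioned above, would just mean running several of these batch loops in parallel over different slices of the file.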
Oh, and... I'm not having any RAM issues while processing the addresses/keys.
Processing the 2 GB of raw content to extract the 20 million addresses or private keys takes barely 33 MB of RAM while the program is working, and still less than 60 seconds to run.
Stream-reading one line at a time -> writing every third line to the output.
The output is flushed to the HDD every 100k lines.
See the C# code for yourself:
using System;
using System.IO;
using System.Text;

class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine("Started.");
        string path = @"[MYDIRECTORYPATH]",
               file = "[INPUTFILENAME]",
               outputFile = "[OUTPUTFILENAME]";
        long readLines = 0;
        StringBuilder storage = new StringBuilder();
        StreamReader reader = new StreamReader(Path.Combine(path, file));
        try
        {
            using (reader)
            {
                string line;
                // Read lines from the file until the end is reached.
                while ((line = reader.ReadLine()) != null)
                {
                    readLines++;
                    // Keep only the second token of matching lines, i.e. the address itself.
                    if (line.Contains("Address"))
                    {
                        storage.Append(line.Split(' ')[1] + Environment.NewLine);
                    }
                    //if (line.Contains("Privkey"))
                    //{
                    //    storage.Append(line.Split(' ')[1] + " DATE-UNKNOWN" + Environment.NewLine);
                    //}
                    // Flush the buffer to disk every 100k input lines to keep RAM usage flat.
                    if (readLines % 100000 == 0)
                    {
                        Console.WriteLine("Processing line {0}", AddDotsToLong(readLines));
                        File.AppendAllText(Path.Combine(path, outputFile), storage.ToString());
                        storage.Clear();
                    }
                }
            }
        }
        catch (Exception e)
        {
            Console.WriteLine("An error occurred:");
            Console.WriteLine(e);
        }
        finally
        {
            // Write out whatever is still buffered. The using block above already
            // disposed the reader, so no extra Close()/Dispose() calls are needed.
            File.AppendAllText(Path.Combine(path, outputFile), storage.ToString());
            storage.Clear();
            Console.WriteLine("Finished with line: {0}", AddDotsToLong(readLines));
            Console.WriteLine("Output file name: " + outputFile);
            // Deletion of input file commented out for security reasons
            //File.Delete(Path.Combine(path, file));
            Console.ReadLine();
        }
    }

    // Inserts "." as a thousands separator, e.g. 1234567 -> "1.234.567".
    public static string AddDotsToLong(long number)
    {
        string output = number.ToString();
        int length = output.Length; // digit count of the original number
        int counter = 3;
        if (length > 3)
        {
            // Walk from the right; insert a dot after every third digit.
            for (int i = length; i > 0; i--)
            {
                if (counter++ % 3 == 0 && i < length) output = output.Insert(i, ".");
            }
        }
        return output;
    }
}
This could probably be improved even further by using a StreamWriter rather than File.AppendAllText, and by getting rid of the String.Split call ^^ (a sketch of that variant is below).
Change if (line.Contains("Address")) to if (line.Contains("Privkey")) depending on what you want to extract at that moment.
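A minimal sketch of the StreamWriter idea, assuming the same input format (a single space before the value, as the Split(' ')[1] version implies); file names are placeholders:

using System;
using System.IO;

class Extract
{
    static void Main()
    {
        // Keeping one writer open avoids File.AppendAllText reopening the file on every flush;
        // the StreamWriter buffers internally, so no manual 100k-line batching is needed.
        using (var reader = new StreamReader("[INPUTFILENAME]"))
        using (var writer = new StreamWriter("[OUTPUTFILENAME]"))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                if (line.Contains("Address"))
                {
                    // Take everything after the first space instead of allocating a token array.
                    writer.WriteLine(line.Substring(line.IndexOf(' ') + 1));
                }
            }
        } // both streams are flushed and closed here
    }
}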
This is only a development version for myself, definitely not ready for everyone to use!
(Test data can easily be created in large amounts using vanitygen.)
(Sidenote: To view the data without loading 2 GB into RAM, and to keep RAM usage low in general, you can simply create/use a paginated viewer - a quick sketch below.)
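A minimal sketch of such a paginated viewer, assuming a plain text file with one address per line; page size and file name are placeholders:

using System;
using System.IO;

class PagedViewer
{
    const int PageSize = 50; // lines shown per page

    static void Main()
    {
        long page = 0;
        while (true)
        {
            Console.Clear();
            // Stream up to the end of the requested page, so only one page is printed
            // and the file is never loaded into memory as a whole.
            using (var reader = new StreamReader("[OUTPUTFILENAME]"))
            {
                string line;
                long index = 0;
                while ((line = reader.ReadLine()) != null && index < (page + 1) * PageSize)
                {
                    if (index >= page * PageSize) Console.WriteLine(line);
                    index++;
                }
            }
            Console.WriteLine("Page {0} - [n]ext, [p]revious, [q]uit", page);
            char key = Console.ReadKey(true).KeyChar;
            if (key == 'q') break;
            if (key == 'n') page++;
            if (key == 'p' && page > 0) page--;
        }
    }
}

Re-reading from the start on every page change is slow for pages deep in the file; remembering the byte offset of each visited page would fix that, but this keeps the sketch short.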
My problem is definitely the import of the data into the wallet.
Basically, this is a private feasibility experiment.