But this was only an example of how the encoder works, not its compression value. I told you before, this is for files larger than 500k. You don't seem to understand, no matter how many times I say the same thing in different ways, that this one code represents whatever size file you are working with. If the file is 1024 KB or 1024 MB, it's still the same one crypto key. Whether it's 1 MB or 100 MB or 1000 MB, it's just one 4K file. You aren't listening. Your only job appears to be to find ways to break my theory using small data sets of 3 bytes, when I clearly said many times this won't be used for data that small.
You don't seem to understand when I try to explain very clearly why your process will not work.
You've even seen a joke page created by someone else with broadly the same process, and another page explaining why it will not work.
Let's say I encode Apple's iCloud Installer, which is just under 50 MB in size (according to Google). For every 1 byte of data, I need 100 to 150 indexes into Pi. I am not converting anything or recording anything to do this movement. I am simply moving forward according to some rules I figured out for creating a unique pathway through Pi. So I would need (100*50,000) or 500,000 indexes into Pi at least. Let's say that the last bit of data I find is at index location 501,500 into Pi. Well, here is my crypto key:
[0.501500.8.5250] I didn't have to record any other data than that. Now that's placed into the 4K file that is used to tell the software how to reconstitute the data.
a) 50,000 bytes is 50 KB, not 50 MB. 50 MB is 50*1024*1024 bytes, not 50*1000. (The arithmetic is redone in the sketch after point b.)
b) And what if you don't find the last bit of data until index location 501,500,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000...(repeat up until 50MB)?
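To make point a) concrete, here is the arithmetic from the quoted example redone in a few lines of Python, taking the post's own lower bound of 100 indexes per byte at face value:

```python
# Redoing the quoted example's arithmetic with the post's own
# lower bound of 100 Pi indexes per byte of input.
size_bytes = 50 * 1024 * 1024          # 50 MB = 52,428,800 bytes, not 50,000
indexes_per_byte = 100
total_hops = size_bytes * indexes_per_byte
print(f"{total_hops:,}")               # 5,242,880,000 hops, not 500,000
```

So even on the scheme's own terms, the walk is over five billion hops into Pi, not five hundred thousand.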
Your entire process only works if you assume you can find all possible index locations in less space than the original file took. You cannot do this.
This is ALL the data there is in my finished file, plain and simple. There is no hashing chunks of data as you describe, nothing like that. I've created a path through Pi like taking footsteps on certain numbers. The footsteps taken are irrelevant; what is relevant are the changes between those steps.

For 0s, we hop from the first 4 in Pi to every other 4 in Pi only. But if we encounter a 1 in our binary data, then the Hunt Value increments +1, so now we are only hopping on every 5 in Pi. This is what keeps the record of our original bits. All the other numbers in Pi are useless as we are encoding.

Here is an example with the bits 001: we would look for the first 4 in Pi, then the 2nd 4 in Pi, and now we must increment +1 and hop to the next 5 in Pi. We keep hopping on 5s as long as our data is 0s, but if we encounter another 1, we increment and begin hopping along 6s. In this way, our pathway is totally unique to the data we are feeding into it, and thus our arrival point (end point) lets us figure out the path taken to reach it: knowing how many 1s were used, we solve backwards toward the decimal point using the precise number of steps it took to encode, with the original file size recorded in bits.
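For what it's worth, the hopping rules as described are simple enough to implement. Below is a minimal Python sketch under one reading of those rules; the mpmath dependency, the 0-based indexing, and wrapping the Hunt Value past 9 back to 0 are my own assumptions, since the post never specifies them:

```python
from mpmath import mp  # assumption: using mpmath to generate digits of Pi

def pi_digits(n):
    """Return the first n decimal digits of Pi after the decimal point."""
    mp.dps = n + 5
    return mp.nstr(mp.pi, n + 1)[2:]  # drop the leading "3."
                                      # (last digit may be rounded; fine for a demo)

def encode(bits, digits):
    """Walk the hop path described above.

    The Hunt Value starts at 4. A 0 bit hops to the next occurrence of
    the current hunt digit; a 1 bit increments the Hunt Value first and
    then hops. Wrapping past 9 is an assumption.
    Returns (final_index, final_hunt_value, count_of_1s).
    """
    hunt, idx, ones = 4, -1, 0
    for b in bits:
        if b == 1:
            hunt = (hunt + 1) % 10
            ones += 1
        idx = digits.index(str(hunt), idx + 1)  # hop forward to the hunt digit
    return idx, hunt, ones

digits = pi_digits(10_000)
print(encode([0, 0, 1], digits))  # the post's "001" example
```

On the 001 example this prints (30, 5, 1): the walk ends at index 30 of Pi's decimals with a final Hunt Value of 5 after one 1 bit, which is exactly the kind of triple the proposed key records.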
And what people keep telling you, but you refuse to accept, is that on average, the index required to store the final position will be as large as the file you are trying to store.
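The counting argument behind this is short enough to check. A fixed-size 4 KB key can take at most 2^(4096*8) distinct values, while there are 2^(50*1024*1024*8) distinct 50 MB files, so an astronomical number of different files must share each key:

```python
import math

KEY_BITS  = 4 * 1024 * 8          # a 4 KB key: at most 2**32768 distinct values
FILE_BITS = 50 * 1024 * 1024 * 8  # a 50 MB file: 2**419430400 possibilities

# Average number of distinct 50 MB files forced to share each key:
collisions_exp10 = (FILE_BITS - KEY_BITS) * math.log10(2)
print(f"about 10**{collisions_exp10:,.0f} files per key")
```

By the pigeonhole principle this holds no matter how clever the path through Pi is: a key smaller than the file cannot single out the file, so a decoder given only the key cannot know which of those colliding files to reconstruct.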