BYTE EXAMPLE:  0000001    0001000    100000
Pi Index:      (57)       (85)       (103)
This is what we've been trying to tell you, and your own example has shown it to be true.
But this was only an example of how the encoder works, not its compression value. I told you before, this is for files larger than 500 KB. You don't seem to understand, no matter how many times I say the same thing in different ways, that this one code represents whatever size file you are working with. Whether the file is 1024 KB or 1024 MB, it's still the same single crypto key. Whether it's 1 MB or 100 MB or 1000 MB, it's just one 4 KB file. You aren't listening. Your only job appears to be finding ways to break my theory using small data sets of 3 bytes, when I clearly said many times this won't be used for data that small.
Let's say I encode Apple's iCloud Installer, which is just under 50 MB in size (according to Google). For every 1 byte of data, I need to step through 100 to 150 indexes into Pi. I am not converting anything or recording anything to do this movement; I am simply moving forward according to some rules I figured out for creating a unique pathway through Pi. So I would need (100 * 50,000), or 500,000, indexes into Pi at least. Let's say that the last bit of data I find is at index location 501,500 into Pi. Well, here is my crypto key:
[0.501500.8.5250] I didn't have to record any other data than that. That key is placed into the 4 KB file that tells the software how to reconstitute the data. This is ALL the data there is in my finished file, plain and simple. There is no hashing of chunks of data as you describe, nothing like that.

I've created a path through Pi, like taking footsteps on certain numbers. The footsteps taken are irrelevant; what is relevant are the changes between those steps. For 0s, we hop from the first 4 in Pi to the next 4 in Pi, and so on, landing only on 4s. But if we encounter a 1 in our binary data, then the Hunt Value increments by +1, so now we are only hopping on every 5 in Pi. This is what keeps the record of our original bits. All the other numbers in Pi are useless while we are encoding.

Here is an example: 001. We would look for the first 4 in Pi, then the 2nd 4 in Pi, and now we must increment +1 and hop to the next 5 in Pi. We keep hopping on 5s as long as our data is 0s, but if we encounter another 1, we increment and begin hopping along 6s. In this way, our pathway is totally unique to the data we feed into it, and thus our arrival point (end point) can let us figure out the path taken to reach it, by knowing how many 1s were used and then solving backwards toward the decimal point using the precise number of steps it took to encode it, the original file size recorded in bits.
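If it helps, here is a rough Python sketch of that hopping walk, purely as a toy demo. It only uses the first 100 decimal digits of Pi, and it assumes the Hunt Value wraps back to 0 after 9, which is my own placeholder since the rules above never go that high:

```python
# Toy demo of the "hop along Pi" walk described above.
# Only the first 100 fractional digits of Pi are included here, and the
# wrap-around of the Hunt Value past 9 is an assumption, not a stated rule.
PI_DIGITS = ("14159265358979323846264338327950288419716939937510"
             "58209749445923078164062862089986280348253421170679")

def encode(bits, digits=PI_DIGITS):
    """Walk the digits of Pi: for each 0 bit, hop to the next occurrence of
    the current Hunt Value; for each 1 bit, bump the Hunt Value first and
    then hop. Returns the final 1-based index and the final Hunt Value."""
    hunt = 4        # we start out hopping on 4s
    pos = -1        # position within the fractional digits of Pi
    for bit in bits:
        if bit == "1":
            hunt = (hunt + 1) % 10              # assumed wrap-around past 9
        pos = digits.index(str(hunt), pos + 1)  # hop to the next occurrence
    return pos + 1, hunt

print(encode("001"))   # -> (31, 5): 1st 4, 2nd 4, then the next 5
```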
Also, I'm not sure I should be saying 8 bits per byte; my friend taught me 7 bits per byte when working with ASCII binary. Was I misinformed about that? Remember, the theory works by looking at the data in a file as characters in a book, in ASCII format, and thus it needs to encode precisely the same number of bits for every character. But if you look at a hex/decimal/binary converter online and type in just one letter, you only get 3 bits or 4 bits or 2 bits, some erratic bit size. Every character must have the same bit size, so I want to translate the data into ASCII binary.
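For what it's worth, here is a quick Python snippet showing what I mean about fixed widths. The erratic sizes come from the online converters dropping leading zeros; padding every character to the same width fixes that (7 bits covers plain ASCII, 8 bits covers a full byte):

```python
def to_fixed_bits(text, width=8):
    # Zero-pad every character to the same width so each one takes exactly
    # `width` bits instead of the variable sizes the online converters show.
    return " ".join(format(ord(ch), f"0{width}b") for ch in text)

print(to_fixed_bits("A", 7))   # -> 1000001   (7-bit ASCII)
print(to_fixed_bits("A", 8))   # -> 01000001  (8-bit byte)
```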