
Topic: Crypto Compression Concept Worth Big Money - I Did It! - page 12.

sr. member
Activity: 476
Merit: 250
Bottom line, none of you really understand my theory well enough to be able to discredit it so quickly; you have no right. You are not God.

No, I am a mathematician.
We don't need to understand exactly how your 'compression' algorithm is meant to work.
You cannot compress gigabyte files into a couple of dozen characters. Can't be done.
N digits of alphanumeric index can only index 62^N maximum possible unique files. Fact.
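To put numbers on that fact, here is a quick Python sketch of the counting argument (purely illustrative; only the size of the gap matters):

Code:
from math import log10

# How many things can a 24-character alphanumeric code name, versus how many
# distinct 1 GiB files exist?  We print only orders of magnitude, since the
# second number has billions of digits.
code_magnitude = 24 * log10(62)           # 62^24 possible 24-char codes
file_magnitude = (8 * 2**30) * log10(2)   # 2^(8 * 2^30) possible 1 GiB files

print(f"24-char codes: about 10^{code_magnitude:.0f}")   # ~10^43
print(f"1 GiB files:   about 10^{file_magnitude:.0f}")   # ~10^2585827973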
newbie
Activity: 28
Merit: 0
Come on, are you saying this isn't a valid idea in the first place?  Now I am beginning to question if you are for real.  Maybe you are just heckling me for the fun of it.  In that case, let this thread die and go back to your normal lives.  Jeesh.  People are unbelievable.

At long last the penny has finally dropped!

Oh well, another million dollar idea collapses. Back to the drawing board....

Maybe for you, but not for me. I've spent years on this, and about 1 year testing it by hand. I had a computer-programmer friend helping me for a time, but he had some work issues and couldn't stay with it for long, so I got another friend to help me. I taught him the rules and got him to send me Crypto Codes of 3 bytes, or 6 bytes, at a time following my rules. It would take all day to do just one of them by hand, but eventually I would return the answer, and my friend was totally amazed when I got each one right. There was only ever one possible answer to be found.

The math itself doesn't lend any credence to the idea that just because there are 4.29 billion combinations, all 4.29 billion combinations are orderly arrangements that would even return a working file to begin with. Nature probably has some kind of limits built into it inherently, whereby only 10-20% of all the possibilities will ever be arranged into an orderly thing which, on a computer medium, would be called a working file. The chance that the Pi Index would actually require every Pi index to be the end of a working file seems ludicrous; Nature would then be shown not to be random or chaos-filled at all, but totally derivative and pre-programmed, where even static or noise would mean something if you knew how to organize it. We are getting into philosophy here, and I don't wish to do that.

Bottom line, none of you really understand my theory well enough to be able to discredit it so quickly; you have no right. You are not God. You may be smart, but you also don't see all ends; as history has shown among scientists, most create something that goes on to become the rope for the world to hang from. I guess by trying to create this technique I am jumping in the scientific boat, but still ... By the way, I'm not blaming anyone for not understanding my theory; I haven't released all the rules or explained everything, so how could you? You are only guessing at this point. Of you all, at least BurtW seems to understand, by the way he asks questions; I believe he is sincere and trying to understand, and even offers some good feedback. Thanks to BurtW for making my day; even when you disagree, you do so with respect in your tone. I appreciate that.

legendary
Activity: 1652
Merit: 1016
Come on, are you saying this isn't a valid idea in the first place?  Now I am beginning to question if you are for real.  Maybe you are just heckling me for the fun of it.  In that case, let this thread die and go back to your normal lives.  Jeesh.  People are unbelievable.

At long last the penny has finally dropped!

Oh well, another million dollar idea collapses. Back to the drawing board....
sr. member
Activity: 476
Merit: 250
murraypaul, I understand that you are very smart and that you know how logical math works, but are you going to sit here and tell me that every one of those possible combinations of 64-bit files are arranged into meaningful orderly arrangements?  No.  Most of the arrangements are going to be nothing but white noise.

Which is why compression programs like zip and rar generally do produce smaller files, because they target particular patterns of data which tend to be seen.
General purpose compression routines can only get you so far though, which is why there are specialist audio and video compression routines, which target the common types of data seen in those arenas.
But you started off with a general claim about being able to compress all files. I think you now understand that you can't.
So now that you realise you haven't got something magical, the next question is: What makes you think your method is better than those which already exist, and have been comprehensively studied for years or even decades?
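To see that difference concretely, here is a short Python sketch using the standard library's zlib (the same DEFLATE family of compressor used by zip): patterned data shrinks dramatically, random "white noise" does not shrink at all.

Code:
import os
import zlib

patterned = b"ABCD" * 25_000          # 100 KB of an obvious repeating pattern
noise = os.urandom(100_000)           # 100 KB of random bytes ("white noise")

print(len(zlib.compress(patterned)))  # a few hundred bytes: patterns compress
print(len(zlib.compress(noise)))      # slightly MORE than 100,000: noise doesn't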

What would you call being able to back up all of your movies in full Blu-ray or HD quality to your USB thumb drive and being able to carry them in your pocket for use at a friend's house anytime you wanted? That would be coolness.

[...]

Come on, are you saying this isn't a valid idea in the first place?  Now I am beginning to question if you are for real.  Maybe you are just heckling me for the fun of it.  In that case, let this thread die and go back to your normal lives.  Jeesh.  People are unbelievable.

Yes, that is what we have been saying right from the beginning.
Your claim that you have a compression method that will shrink all files is simply not possible.
We've shown you the maths proving this.
The fact that it would be really cool if it were true doesn't make it true.
sr. member
Activity: 441
Merit: 250
What would you call being able to back up all of your movies in full Blu-ray or HD quality to your USB thumb drive and being able to carry them in your pocket for use at a friend's house anytime you wanted? That would be coolness.

Well, what would you call being able, as the CEO of a company for example, to add a whole bunch of important files that you need for a meeting into a rar file, files which are top secret while there are spies everywhere trying to get your data, and then to put it in a container in Nature (more aptly put, to pull out of Nature a name for that file, which is its thumbprint), take that code with you to the meeting in your own mind, and generate the data on the target computer without the use of any USB drives, cloud caches, internal servers, etc.? I would call that better security than most, because chances are people would not even know you were carrying anything with you in the first place. And you could leave your sensitive laptop and TrueCrypt drives behind.

What would you call being able to send a huge videogame over the internet in moments and have it install itself without the need to host huge expensive servers? I would call that a huge windfall for businesses. No more huge server farms to send huge files or do downloads or updates. Files are compressed under this theory and sent in 4k packets. You could use a standard dial-up modem and still achieve downloads of what would amount to gigabytes in mere moments on the client end of things.

It goes on and on.  

Come on, are you saying this isn't a valid idea in the first place?  Now I am beginning to question if you are for real.  Maybe you are just heckling me for the fun of it.  In that case, let this thread die and go back to your normal lives.  Jeesh.  People are unbelievable.

You tried to create a "magic" algorithm to compress and encrypt data at the same time. How could one do that without any knowledge of information theory, without the ability to program, and without even knowing what is already out there? I call it blissful ignorance, and yes, your idea to compress a huge amount of data into a simple reference is stupid. I called your idea stupid because it is based on ignorance and a lack of knowledge, and on your refusal to educate yourself before trying to find investors for your pipe dream.



sr. member
Activity: 476
Merit: 250
"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.

Arthur Schopenhauer, German philosopher (1788 – 1860)"

"The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown."
Carl Sagan
sr. member
Activity: 476
Merit: 250
Isn't it possible we could work with a large enough index key that the N bits you are referring to are larger than the number of files in existence at present or any time in the near future?  How many total files are there in the world at this point?  Does anyone know?

You could pick some arbitrarily large value of N, sure.
But the second shoe, which hasn't dropped yet, is that your index key will (on average) be the same size as the files you are trying to compress.
So it takes up just as much space (on average) to store the index key as it would to store the file itself.
To compress 1MB files, you need a 1MB index key.
To compress 100GB files, you need a 100GB index key.
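The pigeonhole arithmetic behind this is easy to check for a small case (illustrative Python, with 3-byte files standing in for 1MB ones):

Code:
# Every possible 3-byte (24-bit) file, versus every possible index that is
# shorter than 24 bits.
n_bits = 24
files = 2 ** n_bits
shorter = sum(2 ** k for k in range(n_bits))  # all bit-strings under 24 bits

print(files)    # 16777216 possible files
print(shorter)  # 16777215 shorter bit-strings: one too few already, and the
                # shortfall only gets worse as files grow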

But where is that space? It's in memory, not hard drive space. The key gets expanded to the 1GB or 2GB size limit, sure.

No, the key is that size.
You cannot (on average across all files of that size) compress a 1MB file to an index smaller than 1MB.
hero member
Activity: 632
Merit: 500
"All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.

Arthur Schopenhauer, German philosopher (1788 – 1860)"

Kinda reminds me of the feedback I got when explaining a decentralized currency that would eventually take down central banks (pre-Bitcoin), though I lacked any technical ability to make it a reality (thank God for Bitcoin).

Don't let them get you down.

If you believe in your idea, consider this approach... what's the absolute minimum viable proof of concept? What would it take to create it? Then find a way to get that proof of concept working.
newbie
Activity: 28
Merit: 0
murraypaul, I understand that you are very smart and that you know how logical math works, but are you going to sit here and tell me that every one of those possible combinations of 64-bit files are arranged into meaningful orderly arrangements?  No.  Most of the arrangements are going to be nothing but white noise.

Which is why compression programs like zip and rar generally do produce smaller files, because they target particular patterns of data which tend to be seen.
General purpose compression routines can only get you so far though, which is why there are specialist audio and video compression routines, which target the common types of data seen in those arenas.
But you started off with a general claim about being able to compress all files. I think you now understand that you can't.
So now that you realise you haven't got something magical, the next question is: What makes you think your method is better than those which already exist, and have been comprehensively studied for years or even decades?

What would you call being able to back up all of your movies in full Blu-ray or HD quality to your USB thumb drive and being able to carry them in your pocket for use at a friend's house anytime you wanted? That would be coolness.

Well, what would you call being able, as the CEO of a company for example, to add a whole bunch of important files that you need for a meeting into a rar file, files which are top secret while there are spies everywhere trying to get your data, and then to put it in a container in Nature (more aptly put, to pull out of Nature a name for that file, which is its thumbprint), take that code with you to the meeting in your own mind, and generate the data on the target computer without the use of any USB drives, cloud caches, internal servers, etc.? I would call that better security than most, because chances are people would not even know you were carrying anything with you in the first place. And you could leave your sensitive laptop and TrueCrypt drives behind.

What would you call being able to send a huge videogame over the internet in moments and have it install itself without the need to host huge expensive servers? I would call that a huge windfall for businesses. No more huge server farms to send huge files or do downloads or updates. Files are compressed under this theory and sent in 4k packets. You could use a standard dial-up modem and still achieve downloads of what would amount to gigabytes in mere moments on the client end of things.

It goes on and on. 

Come on, are you saying this isn't a valid idea in the first place?  Now I am beginning to question if you are for real.  Maybe you are just heckling me for the fun of it.  In that case, let this thread die and go back to your normal lives.  Jeesh.  People are unbelievable.
sr. member
Activity: 441
Merit: 250
murraypaul, I understand that you are very smart and that you know how logical math works, but are you going to sit here and tell me that every one of those possible combinations of 64-bit files are arranged into meaningful orderly arrangements?  No.  Most of the arrangements are going to be nothing but white noise.  If I go and create an orderly 64-bit image, I'm not sure that image wasn't already ordered in Pi to begin with.  There might be 4.29 billion unique 32-bit files possible, but are there actually 4.29 billion such files in existence now?  Could there even be?  I doubt it.  How many of the 4.29 billion unique combinations did the Nintendo era actually use in creating its games?  Probably 1%.  And then we moved on to 64-bit games.  And then we moved on again.  We never sat too long in any one size before expanding the sizes we needed and creating vastly more incredible works.

Forget about 8/16/32/64-bit games or programs, because those numbers refer to the sizes of the instructions and registers computers use to execute instructions, and they have nothing to do with the topic at hand. I gave you 64 bits just to illustrate how many possible combinations a file of 8 bytes can contain; I could have given you a file of 72 bits, or of 1 kilobyte, as an example. Do you know how many possible files could be created with a size of exactly 1 KB? 2^8192. Do you know how big that number is? How are you going to represent all those files with an index which is less than 1 KB in size? You are right that most of those files do not even exist, but how would you know, once you use your "method" to "decompress" a file, that the file you have found is the right one?
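For a sense of scale, a two-line Python check (illustrative only):

Code:
n = 2 ** 8192        # the number of distinct 1 KB (8192-bit) files
print(len(str(n)))   # 2467: a number 2,467 decimal digits long, versus the
                     # roughly 10^80 atoms in the observable universe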
newbie
Activity: 28
Merit: 0
Isn't it possible we could work with a large enough index key that the N bits you are referring to are larger than the number of files in existence at present or any time in the near future?  How many total files are there in the world at this point?  Does anyone know?

You could pick some arbitrarily large value of N, sure.
But the second shoe, which hasn't dropped yet, is that your index key will (on average) be the same size as the files you are trying to compress.
So it takes up just as much space (on average) to store the index key as it would to store the file itself.
To compress 1MB files, you need a 1MB index key.
To compress 100GB files, you need a 100GB index key.

But where is that space? It's in memory, not hard drive space. The key gets expanded to the 1GB or 2GB size limit, sure. Then you encode your file forward in memory, and once the final Crypto Key is obtained, you dump the program, which dumps the Pi Index file too. Now you send your friend the key via email. He opens the software on the other end, which loads Pi into memory again, sure, but the point is that the internet never saw the data. The data did not choke up the middle ground. No data had to actually travel over the servers, no file analysis was done on it by the NSA, and a host of other cool benefits.

Also, as I mentioned before, I am not going to be attacking 100GB files directly. We can re-use the same 1GB or 2GB Pi index, cut the program into chunks, and direct each chunk through the same pipeline, so that the final file looks like this:

[OPENFILE]
[filesize=100000000000000_&_Name="100GBofStuffForWhomever.avi"]
[MChunks=50_&_REM:  Begin Chunk(s) on Next Line! ]
[1,w, xxxxxx, y, zzzz]
[2,w, xxxxxx, y, zzzz]
[3,w, xxxxxx, y, zzzz]
[4,w, xxxxxx, y, zzzz]
[5,w, xxxxxx, y, zzzz]
[6,w, xxxxxx, y, zzzz]
[7,w, xxxxxx, y, zzzz]
[8,w, xxxxxx, y, zzzz]
... etc
[CLOSEFILE]

which then get stitched back together onto the hard drive in their original form at decompression time.
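As a sketch of what writing such a manifest might look like, here is a toy Python serializer; the field names and the [i, w, xxxxxx, y, zzzz] record layout are hypothetical, taken from the example above rather than from any real tool.

Code:
# Toy serializer for the chunk manifest sketched above (hypothetical format).
def write_manifest(name, filesize, chunks):
    lines = ["[OPENFILE]",
             f'[filesize={filesize}_&_Name="{name}"]',
             f"[MChunks={len(chunks)}_&_REM: Begin Chunk(s) on Next Line! ]"]
    for i, (w, x, y, z) in enumerate(chunks, start=1):
        lines.append(f"[{i},{w}, {x}, {y}, {z}]")
    lines.append("[CLOSEFILE]")
    return "\n".join(lines)

print(write_manifest("100GBofStuffForWhomever.avi", 10**14,
                     [("w", "xxxxxx", "y", "zzzz")] * 3))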
sr. member
Activity: 476
Merit: 250
murraypaul, I understand that you are very smart and that you know how logical math works, but are you going to sit here and tell me that every one of those possible combinations of 64-bit files are arranged into meaningful orderly arrangements?  No.  Most of the arrangements are going to be nothing but white noise.

Which is why compression programs like zip and rar generally do produce smaller files, because they target particular patterns of data which tend to be seen.
General purpose compression routines can only get you so far though, which is why there are specialist audio and video compression routines, which target the common types of data seen in those arenas.
But you started off with a general claim about being able to compress all files. I think you now understand that you can't.
So now that you realise you haven't got something magical, the next question is: What makes you think your method is better than those which already exist, and have been comprehensively studied for years or even decades?
newbie
Activity: 28
Merit: 0
Here is a very simple example.

We know pi out to who knows how many digits - for the sake of this example let's call it "enough".

Your phone number is 123-456-7890

You find your phone number and it happens to be at location 987,654 in the number pi.

You can say "My phone number is 123-456-7890".  This takes 10 digits.

>>snip



BurtW, come on man.  This is funny but only if you are not really trying to mock me.  I get the humor, but still. 

I've said before, and will say again, this is not for small compression of under 1 megabyte of data. Since I am also assigning one Crypto Key per set amount of data (for example, 500 megabytes, unless that doesn't work so well), each file larger than the set size would have multiple Crypto Keys, meaning combinations of Crypto Keys, further adding to the complexity of the overall N bits of index, since then we would be combining Crypto Keys into new patterns as well as relying on a singular Pi index. I am beginning to understand what you are telling me about possibilities, but you also must know that not every possibility contains a valid thought-organized answer; some would always be random noise. Perhaps Nature has a set limit on the amount of creativity possible, which is why evolution causes us to change in the first place, so as to avoid a buffer overflow error of some sort. (Okay, that's my own attempt at a joke.)
newbie
Activity: 28
Merit: 0
BurtW, no.... I am not trying to locate sequences of digits in Pi.  I've heard of that in my own research and felt that was not the way to go.  I was told hunting for strings in Pi couldn't possibly work, and it also would have required math that I am not capable of in the first place.

Pi is an Index that we move through as we are encoding. Movement is based on whether the digits are 0s or 1s. Movement occurs in 8-bit increments (one character at a time), therefore all data must be read in its native format, and if it is not in binary format (hexadecimal code, for example, is not), it must be converted to ASCII binary. Starting from the decimal point, we read forward 8 bits per character. I have tested this, and the general rule is 100 indexes per byte.
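The actual rules have never been published, so any code can only guess at them. Purely to fix ideas, here is a toy Python sketch of one possible "walk" through Pi; the movement rule below is invented for illustration and is not the method described in the post (it needs the third-party mpmath package for Pi's digits).

Code:
from mpmath import mp

mp.dps = 10_000              # 10,000 decimal digits of Pi
PI = str(mp.pi)[2:]          # digits after the decimal point

def toy_pi_index(bits):
    """Invented rule: for each input bit, advance to the next digit of Pi
    whose parity (even/odd) matches the bit, then step past it."""
    pos = 0
    for b in bits:
        want = int(b)
        while int(PI[pos]) % 2 != want:
            pos += 1
        pos += 1
    return pos

print(toy_pi_index("0000001"), toy_pi_index("0001000"), toy_pi_index("1000000"))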

murraypaul, I understand that you are very smart and that you know how logical math works, but are you going to sit here and tell me that every one of those possible combinations of 64-bit files are arranged into meaningful orderly arrangements?  No.  Most of the arrangements are going to be nothing but white noise.  If I go and create an orderly 64-bit image, I'm not sure that image wasn't already ordered in Pi to begin with.  There might be 4.29 billion unique 32-bit files possible, but are there actually 4.29 billion such files in existence now?  Could there even be?  I doubt it.  How many of the 4.29 billion unique combinations did the Nintendo era actually use in creating its games?  Probably 1%.  And then we moved on to 64-bit games.  And then we moved on again.  We never sat too long in any one size before expanding the sizes we needed and creating vastly more incredible works.

Besides, I know my theory won't work with one long file, because I believe it would take 100 years for the CPU/GPU, whatever, to actually find the sequence going from the Index End point in Pi backwards to the decimal point, verifying that it's the one unique path to the start of Pi after the decimal point. So I would be working with 500 meg chunks, called Mega Chunks. For every 2 GB of data there would be 4 Mega Chunks, for example, with Crypto Codes for each, as my example above for the video file shows.

Vladamir, what is your trip?  I swear, sometimes you programmers can be so negative about everything and just shoot everything down without caring that there is a person on the end of your scathing words that is trying to do something cool.  I don't need such nonsense here.  Help me or stay out of the aisleway.  I'm working here.


sr. member
Activity: 476
Merit: 250
Isn't it possible we could work with a large enough index key that the N bits you are referring to are larger than the number of files in existence at present or any time in the near future?  How many total files are there in the world at this point?  Does anyone know?

You could pick some arbitrarily large value of N, sure.
But the second shoe, which hasn't dropped yet, is that your index key will (on average) be the same size as the files you are trying to compress.
So it takes up just as much space (on average) to store the index key as it would to store the file itself.
To compress 1MB files, you need a 1MB index key.
To compress 100GB files, you need a 100GB index key.
legendary
Activity: 2646
Merit: 1137
Here is a very simple example.

We know pi out to who knows how many digits - for the sake of this example let's call it "enough".

Your phone number is 123-456-7890

You find your phone number and it happens to be at location 987,654 in the number pi.

You can say "My phone number is 123-456-7890".  This takes 10 digits.

OR

You can say my phone number is at the 987,654th digit of pi.  Cool!

BUT

What if you find your phone number at location 987,654,321,987,654,321 in the number pi.

You can say "My phone number is 123-456-7890".  This takes 10 digits.

OR

You can say my phone number is at the 987,654,321,987,654,321st digit of pi - but this position actually takes more digits to write down (18) than the phone number itself (10).

Have you solved this problem?
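For anyone who wants to try it, here is a minimal Python sketch of BurtW's experiment (it assumes the third-party mpmath package to generate digits of pi):

Code:
from mpmath import mp

mp.dps = 100_000             # 100,000 decimal digits of pi
digits = str(mp.pi)[2:]      # drop the leading "3."

for number in ("265", "358979", "1234567890"):
    print(number, "->", digits.find(number))
# A random d-digit string first appears around position 10^d on average, so a
# 10-digit phone number usually isn't even in the first 100,000 digits
# (find() returns -1), and when it is found, its position takes about as many
# digits to write down as the number itself.  No saving either way.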
sr. member
Activity: 441
Merit: 250

Quote
It doesn't matter.
If your index key is only N bits long, you cannot index more than 2^N files. No matter how much more you post, this will continue to be true.


Doh!  I finally understood your meaning, sorry.  I do see your point at last.  But that only helps me in the long run, because now I can do something to implement an offset function, meaning we can reset where to start from in Pi if and when we do run into such an error during the research phase of programming this.

I am working with Pi, but I could just as easily work with other irrational numbers, too.  This would serve as a way to encrypt each file by offering hundreds of possible irrational number sets so someone couldn't easily find your file if they somehow got a hold of your Crypto Key.

Isn't it possible we could work with a large enough index key that the N bits you are referring to are larger than the number of files in existence at present or any time in the near future?  How many total files are there in the world at this point?  Does anyone know?

You still don't get it, do you? If you take all 64-bit files in the universe (8 bytes each), there are 2^64 possible files which could be built with those 8 bytes (only 8 bytes). How are you going to represent all those 18 quintillion files with an index which is less than 8 bytes?
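Concretely, a two-line Python check:

Code:
print(2 ** 64)   # 18446744073709551616 distinct 8-byte files
print(2 ** 56)   # 72057594037927936: the most a 7-byte index could ever name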

legendary
Activity: 2646
Merit: 1137
So if the sequence of bits you are looking for is very long:

   01000010001001000...101001000111

Let's say 1 million bits. Are you trying to locate this 1-million-bit sequence in pi?
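Worth noting how large even the answer's address would be. A back-of-envelope Python check (illustrative):

Code:
from math import log10

bits = 1_000_000
# A specific million-bit pattern first appears, on average, near position
# 2**1000000 in a random digit stream.  Writing that position down takes:
print(int(bits * log10(2)) + 1)   # 301030 decimal digits, i.e. about a
                                  # million bits: as big as the data itself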
newbie
Activity: 28
Merit: 0

Quote
It doesn't matter.
If your index key is only N bits long, you cannot index more than 2^N files. No matter how much more you post, this will continue to be true.


Doh!  I finally understood your meaning, sorry.  I do see your point at last.  But that only helps me in the long run, because now I can do something to implement an offset function, meaning we can reset where to start from in Pi if and when we do run into such an error during the research phase of programming this.

I am working with Pi, but I could just as easily work with other irrational numbers, too.  This would serve as a way to encrypt each file by offering hundreds of possible irrational number sets so someone couldn't easily find your file if they somehow got a hold of your Crypto Key.

Isn't it possible we could work with a large enough index key that the N bits you are referring to are larger than the number of files in existence at present or any time in the near future?  How many total files are there in the world at this point?  Does anyone know?
sr. member
Activity: 476
Merit: 250
It doesn't matter.
If your index key is only N bits long, you cannot index more than 2^N files. No matter how much more you post, this will continue to be true.

Pi is an irrational number, which means there is no fixed sequence of recurring digits which continues indefinitely.
It does not mean that every sequence of Y digits is guaranteed to be unique.
In fact, even a few moments of thought will tell you that there must be repeating sequences: there are only 10^Y possible sequences of Y digits, so among any 10^Y + 1 windows at least two must match. If no sequence ever repeated, Pi would be representable by a finite string of digits, which it isn't.

So there is a possible way to end up with the same index key from two different files, and that is how you end up with only 2^N possible answers.

I appreciate you taking the time to explain matters, but I haven't your background, and I'm not sure what you mean by 2^N, so I can't actually formulate any response to your critique that will make sense to you.

Every unique file is, itself, made up of repeating strings of data, but what makes the data unique from other files is the arrangement of the data, how the 0s and 1s are arranged. That's true for almost everything. Some popular music differs from other songs only by a few notes and the main singer, but we view almost all of it as unique; the small differences do a lot, even if it's almost the same.

What I've created is a way for every bit (0 or 1) to affect the path taken to reach the index in Pi. Therefore, it's impossible for any index in Pi to hold more than one outcome.

Take 0000001 and 0001000 and 1000000, for example. The index for each is, respectively:

BYTE EXAMPLE:    0000001:    0001000:    1000000:
    Pi Index:      (57)        (85)        (103)

And that's just with one byte of data.  The larger the file size, the more unique it becomes.   You could have billions of 1 megabyte files, and if even one bit in their internal structure was different, the outcome would be different.  This is truly the butterfly effect at work.

Let's go in stages:
a) If you have billions of unique files, they must map to billions of unique Pi indexes, otherwise two files would map to the same index, and you would not be able to determine which of the two files was the original. Do you agree?
b) If you had hundreds of billions of unique files, they must map to hundreds of billions of unique Pi indexes, otherwise two files... Do you agree?
c) Therefore the number of possible indexes increases as the number of possible input files increases. Do you agree?
d) Therefore the number of possible indexes increases as the size of the input files increases. Do you agree?
e) Therefore there is no fixed length of index string which can possibly represent all of the possible indexes for all possible files. Do you agree?
If you have got all the way through to e) and still agree, then you should see that your scheme as presented, with a fixed index length which represents all possible files, is impossible.

To take a small example, let's look at all files of 32 bits in length.
There are 2^32 of them (2*2*2*... with 32 twos).
That is 4294967296 in decimal.
So there are 4294967296 possible input files.
They each need to have a unique Pi index.
Therefore there must be 4294967296 Pi indexes for them.
That is 2^32 Pi indexes.
Therefore you need 32 bits to represent the possible Pi indexes.
Therefore, as explained before, the average file size after compression will be the same as (or greater than, with a poor compression scheme) the average input file size.
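The whole worked example fits in a few lines of Python (illustrative):

Code:
files = 2 ** 32
print(files)                     # 4294967296 possible 32-bit files
print((files - 1).bit_length())  # 32: bits needed to give each its own index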