
Topic: Crypto Compression Concept Worth Big Money - I Did It! - page 8. (Read 13895 times)

newbie
Activity: 28
Merit: 0
DECODING:
Start from the Ending Index Location.  Begin hunting back through Pi, trying combinations of 1s and 0s in some systematic or mathematical way (with as much efficiency and intelligence as we can bring to making it fast), to find the one path through Pi that ends at our Ending Point in exactly the same number of steps, using the Flip Count information to calculate the number of 1s encoded along the way (so we don't waste time searching paths that have more 1s than the timeline we've created even contains).  Since any fork in the road will shrink and compress a given file, the overall chance of another file having the same 64-bit Base Key, the same number of steps to reach the end goal, and the same number of 1s would be significantly reduced.  Add to that, since we are not encoding the whole file into Pi under one single MetaData Key, but splitting the file up into more and more MetaData Keys, the chances of the file's uniqueness keep going up with more divisions, while the MetaData Keys added to our final output make it a larger and larger file.

It is my goal to make this work on files from 500 KB up to 5 TB, if the system can be made to work.

Like I said before, this changes nothing.  There are several factors that doom this to fail.  Even if you could solve the uniqueness problem, which you can't, there are still others.  Your "chunks" are good if you want to shorten the jump time; you need a chunk every time you reach a certain value, say 33 for the heck of it, so pretty much every 33rd bit you need to write the corresponding chunk into your file.  The higher the value, the longer you have to search through pi.
So, now here is the problem: you want to continuously iterate through pi, which is fine by itself - BUT you have to store all of that in memory.  You could use certain formulas which let you jump to a specific place in pi, to shorten the time needed and the memory used, but we are talking about a supercomputer here to effectively iterate through pi, get the corresponding values, write them to a file, remember where you were / use a formula to get there again, and do that over and over.

Don't misunderstand me - you had a really nice idea.  But you are not the first to try it.  The main factor for an encoder/decoder is speed, and you don't have any.

Well, if you mean the time needed to load 2 GB of the Pi Index into memory, I assume it would take a minute or so, upon starting the software, to generate a Pi Index of that size.  Who cares if it's in memory?  I don't.  Besides, who knows if I will end up using 2 GB; it could end up being 500 MB of Pi, with smaller chunks.  First we need to get a basic 2 MB version running, try to encode files between 500 KB and 1 MB, and just see what happens.

But if I get this working, imagine the time saved over the internet when sending your friends or family the 20 GB of videos taken on your vacation.  You would be sending a small file, 4 to 64 KB, to their email in moments.  Then they'd decompress the videos overnight, while sleeping perhaps.  Wake up, the videos are there.  The internet did not need to be congested with all of those 0s and 1s.  And if a lot of people were using it, the internet would work more and more smoothly all the time.  Think of the difference!

Another thing is, you still can't convince me that just because it's possible to have 2^N files to encode, there are that many unique files.  For all we know, our research into this could reveal a fundamental characteristic of Nature that only allows organized patterns to assemble out of random data at a given ratio, like the Golden Mean.  Just trying to solve this problem could itself lead to a breakthrough.  What if there are only (2^N)/10 files in existence of each size, and they already happen to be written in Nature?  It would mean all of our ideas already exist in the Global Intelligence of Nature, and that our brains are receiving the information via DNA alterations that come from eating plants.  Some recent science suggests that eating certain plants can alter our DNA, and some hypothesize that new ideas come from this phenomenon.  If Nature is the one providing original ideas, it stands to reason the answer is already in Nature for every created thing.

I don't wish to go too deeply into philosophy here, though.  Just saying, you don't know for sure that, even though 2^N files are possible, all of those combinations will ever be used to organize intelligent data into a file that ends up on someone's computer and could be put through my system.  In that case, there might be all the uniqueness I'll ever need.  We won't know until we try it and watch it either get busted or work as conceived.
legendary
Activity: 1176
Merit: 1011
Yeah, exactly.  Do you know Gustav Whitehead?  No?  Nobody does, but some people think he invented portions of the airplane before the Wright Brothers, who were credited with inventing the airplane when actually they only improved upon a number of inventions by others, by putting it all together.  Nobody remembers Whitehead, but he and others like him paved the way.  The problem is, they gave up ... and the Wright Brothers didn't.  And the rest ... is history.
The stuff that Whitehead invented was not theoretically, fundamentally, logically impossible.  It just couldn't be done in practice in his time, because human technology was not advanced enough yet.

Your idea, on the other hand, can be mathematically proven to be wrong, no matter what kind of smart thinking, advanced technology, quantum computers, zero-point energy, or other hypothetical fancy stuff we'll have at hand in the future.  Math doesn't lie.
legendary
Activity: 1176
Merit: 1011
All I did was invent the process that needs to be taken and implemented.
I hate to destroy your dreams, but your idea is fundamentally flawed. As has been pointed out repeatedly in this topic.

In your first post you wrote:
my solution for compressing 99.8% of all data out of a file, leaving only a basic crypto key containing the thread of how to re-create the entire file from scratch.
This is simply not possible.  It is not a matter of requiring faster computers, more memory, or better technology; it is theoretically, mathematically provably, fundamentally, logically impossible.

You keep beating around the bush and getting lost in irrelevant details, thus distracting others (and possibly yourself too) from the very clear, simple fact that you are missing a crucial point.

C) The index isn't included in the file we save, that would be stupid, since it would do just as you say and include a lot of data we don't even need.  The index is Pi, a known number anyone who programs can generate in moments, in less than 51 bytes I'm told.  So our co/deco would have it built in, to generate our index in RAM.  The part you're not getting (and again, I don't blame you, I WANT you to understand, because that would be awesome!) is that we don't include any index in our output file, THAT'S WHY IT CAN BE 4K.  The fact that we used an index into Pi that was 16 million digits long has nothing to do with our final file size, because we are only writing down the index point itself.
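(For the curious: the "program it in moments" claim is real, if not quite 51 bytes.  A minimal Python sketch of the well-known unbounded spigot algorithm (Rabinowitz-Wagon, in Gibbons' streaming form), which yields the decimal digits of Pi one at a time with no precision limit:)

Code:
def pi_digits():
    # Gibbons' unbounded spigot: streams the decimal digits of pi
    # using exact integer arithmetic, forever.
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < m * t:
            yield m
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)

g = pi_digits()
print("".join(str(next(g)) for _ in range(20)))  # 31415926535897932384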
Misuse of terminology causes some confusion here: what you call 'index point' is what others call 'index'.

Using your terminology, let me rephrase the essential flaw in your approach: for pretty much ANY file (aside from 0.00000000001% of very lucky cases), the index point (or 'crypto key', or whatever you call it) required to re-create the original file will itself require MORE data than the original file.

The fact that you can re-create Pi (or other irrational numbers or infinite data sets that contain any possible file in existence) in just a few bytes, does not change this fact in any way.
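To see why, it is enough to count.  A plain-Python sanity check, using the 1 MB files and 64k keys discussed in this thread: decoding maps each key to exactly one output, so the number of distinct keys is a hard ceiling on the number of distinct files that can ever be recovered.

Code:
FILE_BITS = 1_048_576 * 8   # bits in a 1 MB file
KEY_BITS = 65_536 * 8       # bits in a 64 KB metadata key (the generous case)

# At most 2**KEY_BITS files are recoverable out of 2**FILE_BITS possible ones.
print(f"files: 2**{FILE_BITS}   keys: 2**{KEY_BITS}")
print(f"fraction of files any key can name: 2**-{FILE_BITS - KEY_BITS}")

That fraction, 2^-7864320, is the "0.00000000001% very lucky cases" above, and is in fact unimaginably smaller still.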
newbie
Activity: 28
Merit: 0
Teleportation Concept Worth Big Money - I Did It!

I am a normal guy etc etc and I thought of a way for teleportation. I can't prove it works since I can't code, but the idea seems to work in theory!
This brilliant idea can change the world! It can be used for space travel, transportation and so much more.

1. Scan human
2. Convert atoms into code
3. Compress using B(asic)Miner's method, since there is too much human code
4. Send 1 line of code to the receiving end
5. Decompress and rebuild human

PLEASE GIVE ME A CHANCE, I JUST NEED SOMEONE TO BELIEVE IN ME

Okay, guys, you got me.  Good one.  Although I did say in my first post (jokingly) that this could work for that one day, when the technology reached a high enough level.  And how do you think we are ever going to get there, if we don't at least dream and attempt it?  Maybe my idea seems ludicrous to you, but I'm sincere, I'm trying to do something cool, and it's the best I can do.  I'm sorry I'm not Bill Gates.  Oh, sorry, I forgot, he was only a businessman who ripped off everyone else and gave himself credit for it.  But I admit he's a lot better than me.  In fact, I have to sit here and use the computer his company made and the OS his company made to write all of this, so that pretty much settles it, right?

I don't deserve to even try, is that it?  I should live in China teaching children my whole life, making $1400 a month, and never dream of anything, just stare into space my whole life.  At least a few of you seem to care about what I'm trying to do.  I appreciate that.  It's hard having one's best efforts shot down.  I never said I was smarter than, or as smart as, anyone here.  I just want to try.  TRY, damn it all.  It's so fun, and it might make a difference one day.  Which means my life might amount to something.  It's not my fault I was poor all my life, and taught to hate school until I was almost 35 years old; by then it was too late.

Life is harsh.

You are right, life is harsh. Nobody will believe people like you and me. They think we are fools, or clowns.
We must follow the examples of great pioneers like the Wright brothers or Thomas Edison. We will change the future, and we will be remembered.

Fall seven times, stand up eight.  ~Japanese Proverb

Yeah, exactly.  Do you know Gustav Whitehead?  No?  Nobody does, but some people think he invented portions of the airplane before the Wright Brothers, who were credited with inventing the airplane when actually they only improved upon a number of inventions by others, by putting it all together.  Nobody remembers Whitehead, but he and others like him paved the way.  The problem is, they gave up ... and the Wright Brothers didn't.  And the rest ... is history.
legendary
Activity: 804
Merit: 1002

To get you to understand this is pretty hard; you don't seem to read the numbers correctly.
I told you that 500 MB is a string of 4*10^9 bits.  5 TB?!?
That is a string of 4*10^13 bits.  You would need to iterate through pi at least forty TRILLION times, and that is IF all the hunt values sit right behind each other!
This won't change even if you always start from the top of pi!
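(The bit counts quoted here check out, give or take binary-vs-decimal rounding; a two-line Python check:)

Code:
bits_500mb = 500 * 2**20 * 8   # 500 MB expressed in bits
bits_5tb = 5 * 2**40 * 8       # 5 TB expressed in bits
print(f"{float(bits_500mb):.1e}")  # ~4.2e9,  the 4*10^9 above
print(f"{float(bits_5tb):.1e}")    # ~4.4e13, the 4*10^13 above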
newbie
Activity: 28
Merit: 0


I would LOVE to take your bet, I really would, but I am not a programmer, nor do I have elite math.  

That is exactly your problem.  You are right, you can index everything in pi or any other endlessly continuing, non-periodic number.  BUT, like everyone here told you, it is simply not possible to do that in an efficient manner.
I will give you a little example:
Say you have a file the size of 10 MB: you read out the actual 0s and 1s in it, search pi (or any other such number) for the representing string, and store the starting and ending index in your file.  So far so good; now you have a file which is only the length of the starting and ending index.  That is really small, you are right.  BUT you first have to find the corresponding string in pi, which takes A LOT of time.
AND everyone who wants to recreate the file has to compute pi UNTIL he is at the corresponding string again.  And that is the problem here.

Let's say you have around 500 MB of data!  If you round it a little, you have a corresponding string of 4*10^9 bits.  Now, to make that a little more efficient, we convert it into the decimal system, since a string of 0s and 1s that long would be way too long to search for in a decimal number.
Each decimal digit carries about 3.3 bits, so those 4*10^9 bits still correspond to a decimal string roughly 1.2*10^9 digits long.  Now you need to find this string in an infinite number.  You could use BBP to spread your search over multiple processors, and you would still search a VERY long time for a corresponding number.
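(BBP is the Bailey-Borwein-Plouffe formula, which yields the n-th digit of pi in base 16 without computing any of the earlier digits.  A minimal Python sketch of the standard digit-extraction trick, to make the reference concrete; note it hands you one digit at a chosen offset, it does not search pi for a given string, which is the expensive direction:)

Code:
def pi_hex_digit(n):
    # n-th hexadecimal digit of pi after the point (0-based), via the BBP
    # formula: pi = sum 16^-k (4/(8k+1) - 2/(8k+4) - 1/(8k+5) - 1/(8k+6)).
    def s(j):
        total = 0.0
        for k in range(n + 1):   # left sum; modular exponentiation keeps it small
            total = (total + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k = n + 1
        while True:              # geometric tail for k > n
            term = 16.0 ** (n - k) / (8 * k + j)
            if term < 1e-17:
                break
            total += term
            k += 1
        return total % 1.0
    frac = (4 * s(1) - 2 * s(4) - s(5) - s(6)) % 1.0
    return int(16 * frac)

print(pi_hex_digit(0))  # 2 -- pi in hex is 3.243F6A8885...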

I just saw your last explanation, but it changes pretty much nothing about my statement above.  This is not a new theory, and it is proven to be inefficient in first-year university information technology courses, iirc.

I encourage newcomers to the party, I'm all in.  But you would need to go back and re-read a large portion of this thread, from about page 6 forward until now, and then you will see that what you're talking about is compression, whereas, thanks to BurtW, what I now know this to be called is an Encoder/Decoder that allows you to pull data out of Pi with a MetaData keyfile of between 4k and 64k.


BALANGHAI

Essentially, this is the entire process:

1) Open the file to be processed.  Analyze the data.  Then we open our output file and begin to record the following:
   A) Original Filename.  Size of the Original File.  Size of the Pi Index used (how big each chunk is to be split into).  Size of the last Mega Chunk in bits (if applicable).
   B) BaseKey (the first 64 bits of the original file, giving the program enough room to establish a unique path given a number of hops).
  
2) Begin reading the data one character at a time (converting hex to ASCII binary), all in memory, using the loaded Pi Index up to the Pi Index Size recorded in A above.  Convert the entire file contents to ASCII binary, so every incoming piece of data is exactly 8 bits.  Then begin moving forward through Pi by hopping on a single solitary digit called the "Hunt Value", meaning the digit we are currently hopping on in Pi.  Starting from the decimal point, begin encoding by hopping: hop to the first 4 in Pi if our first bit is a 0; if it's a 1, hop to the first 5 in Pi.  Hop along Pi, encoding 0s and 1s with these rules: 0 = no change to the Hunt Value (if you were searching for a 4, you keep searching for the next 4); 1 = +1 to the Hunt Value.  (A sketch of these rules in code appears after this list.)  I'm sure this can be done in realtime with data; you are just moving along, not having to do any hard math at all.  Computers were made for this kind of math, it's the most basic, so our encoding would be lightning fast.

3) When we reach the size limit for our first chunk and there is more data to be read in, we open our file and write our first MetaData Key:
   C) [1.x.yyyy.z]   (For those who have been following: we no longer need the first bit, since we have added something called the BaseKey, a 64-bit record of the initial string of the file.)
Keep encoding until all the data is complete.  Record the size of the last chunk in bits into the file record, and pad with 1s from that point so the last chunk is exactly equal in size to the other chunks.  During decompression, the program will compare the data split size to the last chunk's size and remove the padding 1s automatically when it comes time to write out the last file.
Close the file and dump the Pi Index from memory, freeing up the computer.
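The sketch promised in step 2, for concreteness: a literal Python reading of the hop rules as stated above.  This is one guess at the intended scheme, not the author's code; in particular the post never says what happens when the Hunt Value passes 9 (wrapping to 0 is assumed here), and mpmath is used only as a convenient source of Pi digits.

Code:
from mpmath import mp

mp.dps = 100_000      # decimal digits of Pi to generate for the "Pi Index"
PI = str(mp.pi)[2:]   # digit string after the leading "3."

def encode(data: bytes, pi_digits: str = PI):
    # Hop rules as stated: start hunting for a 4; each 1-bit adds one to the
    # Hunt Value (wrap 9 -> 0 is an assumption) and counts as a "flip"; each
    # 0-bit leaves it alone; every bit hops to the next occurrence of the
    # current Hunt Value in Pi.
    hunt, pos, flips = 4, -1, 0
    for byte in data:
        for i in range(7, -1, -1):   # 8 bits per byte, most significant first
            if (byte >> i) & 1:
                hunt = (hunt + 1) % 10
                flips += 1
            pos = pi_digits.index(str(hunt), pos + 1)
    return pos, flips   # the "Ending Index Location" and the "Flip Count"

print(encode(b"Hi"))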


DECODING:
Start from the Ending Index Location.  Begin hunting back through Pi, trying combinations of 1s and 0s in some systematic or mathematical way (with as much efficiency and intelligence as we can bring to making it fast), to find the one path through Pi that ends at our Ending Point in exactly the same number of steps, using the Flip Count information to calculate the number of 1s encoded along the way (so we don't waste time searching paths that have more 1s than the timeline we've created even contains).  Since any fork in the road will shrink and compress a given file, the overall chance of another file having the same 64-bit Base Key, the same number of steps to reach the end goal, and the same number of 1s would be significantly reduced.  Add to that, since we are not encoding the whole file into Pi under one single MetaData Key, but splitting the file up into more and more MetaData Keys, the chances of the file's uniqueness keep going up with more divisions, while the MetaData Keys added to our final output make it a larger and larger file.

It is my goal to make this work on files from 500 KB up to 5 TB, if the system can be made to work.
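Continuing the sketch from the encoding steps above, here is what "hunting through Pi using all combinations of 1s and 0s" costs: the only general decoder for such a key is a brute-force search over every possible bit pattern, i.e. 2^n trial encodings for an n-bit chunk (this reuses encode() and PI from the sketch after step 3's list, with the same assumptions).

Code:
from itertools import product

def decode_candidates(end_pos, flips, nbits):
    # Try every nbits-long pattern and keep those that reproduce the recorded
    # Ending Index Location and Flip Count.  That is 2**nbits encodings.
    hits = []
    for bits in product((0, 1), repeat=nbits):
        hunt, pos, f = 4, -1, 0
        for bit in bits:
            if bit:
                hunt = (hunt + 1) % 10
                f += 1
            pos = PI.index(str(hunt), pos + 1)
        if pos == end_pos and f == flips:
            hits.append(bits)
    return hits

# Even 20 bits is already a million trial encodings; a 500 KB chunk would be
# 2**4096000 of them.  Worse, the list usually comes back holding more than
# one pattern, so the key does not pin down the file.
print(len(decode_candidates(*encode(bytes([0x0F])), nbits=8)))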
legendary
Activity: 804
Merit: 1002

You made my day. Thank you.
legendary
Activity: 1039
Merit: 1005
But the problem does not go away when more bits are "encoded", it only gets worse!
After just 4 bits, the encoding machinery is in the same state for some bit patterns, and it has therefore lost all information about which of these patterns it had encoded. The same is true for any 4-bit (or longer) subsequence in the input file. If you have two files that are identical up to that sequence, the encoder will be in the same state with both of them before encoding that sequence, and there will be groups of possible bit values that will all leave it in the same ending state after the sequence.
Going further into Pi before starting the process does not help at all, since the distribution of decimal digits is uniform.
Adding the first 64 (or 64k, what's a factor of 1024 between friends?) bytes to the encoded file does not help either: it just shifts the point where the non-uniqueness problem appears 64 or 64k bytes into the file.
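This is easy to verify by experiment.  Under the literal reading of the hop rules sketched earlier in the thread (same assumptions, same PI digit string), the encoder's entire state is the pair (position in Pi, current Hunt Value), and enumerating all 4-bit inputs already produces distinct patterns that finish in the identical state:

Code:
from itertools import product

states = {}
for bits in product((0, 1), repeat=4):
    hunt, pos = 4, -1
    for bit in bits:
        if bit:
            hunt = (hunt + 1) % 10
        pos = PI.index(str(hunt), pos + 1)   # PI from the earlier sketch
    states.setdefault((pos, hunt), []).append(bits)

for state, inputs in states.items():
    if len(inputs) > 1:
        print(state, inputs)   # different inputs, indistinguishable from here on

With the first digits of Pi this prints, among others, (0,1,0,1) and (1,0,0,1) landing on the same digit with the same Hunt Value, hence the same flip count: nothing recorded in the key can tell them apart from that point on.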

Please realize that wishful thinking does not heal a scheme that's fundamentally broken.

Onkel Paul

(I know I'm too old to be trolled, but at least it'll increase my activity, which is at least something...)
b!z
legendary
Activity: 1582
Merit: 1010
Teleportation Concept Worth Big Money - I Did It!

I am a normal guy etc etc and I thought of a way for teleportation. I can't prove it works since I can't code, but the idea seems to work in theory!
This brilliant idea can change the world! It can be used for space travel, transportation and so much more.

1. Scan human
2. Convert atoms into code
3. Compress using B(asic)Miner's method, since there is too much human code
4. Send 1 line of code to the receiving end
5. Decompress and rebuild human

PLEASE GIVE ME A CHANCE, I JUST NEED SOMEONE TO BELIEVE IN ME
newbie
Activity: 28
Merit: 0
legendary
Activity: 804
Merit: 1002


I would LOVE to take your bet, I really would, but I am not a programmer, nor do I have elite math.  

That is exactly your problem. You are right, you can just index everything in pi or any other endless continuing non periodic number. BUT like everyone here told you, it is simply not possible to do that in an efficient manner.
I will give you a little example:
Say you have a file the size of 10 MB: now you read out the actual 0's and 1's in it and search pi (or any other) for the representing string and store the starting and end index in your file. So far so good, now you have a file which is only the length of the starting and end index. That is really small, you are right. BUT you have to get the corresponding string in pi, which takes A LOT of time to find first.
AND everyone who wants to recreate the file has to compute pi UNTIL he is at the corresponding string again. And that is the problem here.

Let's say you have around 500mb of data!  If you round it a little you have a corresponding string of 4*10^9 bits length.  Now, to make that a little more efficient we convert it into the dezimal system, since a string of 0's and 1's that long would be way too long to search in a dezimal number.
To make it easier say 10^9 binary is 10^3 in dezimal. so now you have a string of numbers which has a length of 4000. Now, you need to find this string in an infinite number. Now you could use BBP to fasten your search over multiple processors and you would still search a VERY long time for a corresponding number.  

I just saw your last explanation. But that changes pretty much nothin on my statement above.That is not a new theory and it is proven to be not efficient in first year university information technologies iirc.
sr. member
Activity: 364
Merit: 253
Now that I have a programmer on my team, I just wonder: what is the workflow of the compression (reduction), or encoding, or whatever term is appropriate?

Can you give us a simple workflow like in this format:

Begin -> Analyze -> Encode -> etc. -> etc. -> End


Best Regards,
Balanghai
newbie
Activity: 28
Merit: 0
Oh God, please just give OP a bitch-slap!

I NEED a good bitch slap to calm down after arguing with you all for so long!  Hahaha ....   
newbie
Activity: 28
Merit: 0
That is why I called it a "very large out of band reproducible random data set (like pi)"

That is why it is an encoding/decoding system, not a compression system.

Very large amounts of information can be encoded in very small metadata:  ISBN numbers, URLs, etc.

He is not talking about data compression at all.  He just does/did not know the proper terminology is all.


Yes!

And Yes!!

And YEEESS!

AND YEEEESSSS!

God, I love you BurtW!  Thank God for you.  You get it.  You know I am not joking around here and you see it more and more clearly every day.
newbie
Activity: 28
Merit: 0
B(asic)Miner, how about this: wanna bet? Walk your talk? Put your money where your mouth is? After all, money talks and bullshit walks :)

I give you a few 1 MB files (1048576 bytes). If you can compress them to 950 KB (972800 bytes) or less, I pay you 100 BTC. If you are unable to pull this off within a reasonable time limit, let's say one or two months from now, you pay 1 BTC to any charity of your choice (must post tx here ;))

Deal?

 I thought this was possible?  Aren't we able to rip games and all?  I don't get it, why can't he compress 1 MB into 950 KB?

 This is not an 'of course it can be done' message; it is more like an 'I really don't know if we are able to do this or not, I thought we were, but I might be wrong' type of message.
Oh, he'll probably be able to compress some 1MB files to 950KB. But not the ones I'm supplying :)

See, compression is all about information density.  Quite a few everyday files of 1MB contain much less than 1MB of actual information.  A text file, for example, or an image with lots of 'empty' space, will not have a very high information density, meaning it uses 1MB of disk space to store a smaller amount of actual data.

But if a 1MB file actually holds (close to) 1MB of information, there's no way to represent that same information in significantly less disk space, brilliant compression algorithm or crypto key magic or not.  And to get the proper perspective: the vast majority (like 99.9999%) of all 1MB files actually do hold pretty close to 1MB of information.
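That intuition is easy to put a number on.  A minimal sketch: estimate a file's information density from the Shannon entropy of its byte histogram.  This is a crude, optimistic bound (real compressors also exploit ordering), but it shows the difference starkly: random data sits at ~8 bits per byte, i.e. incompressible, while repetitive text sits far lower.

Code:
import math
import os
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    # Shannon entropy of the byte distribution: a rough ceiling on how much
    # a general-purpose compressor can shave off per byte.
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

print(bits_per_byte(os.urandom(1_048_576)))           # ~8.0: nothing to squeeze
print(bits_per_byte(b"hello hello hello " * 10_000))  # far below 8: compressible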

Either way, I have enough faith in these principles to set a 100 BTC bounty to be proven otherwise. Hope to hear from B(asic)Miner soon. :)

I would LOVE to take your bet, I really would, but I am not a programmer, nor do I have elite math.  I have no hope of programming this thing myself.  All I did was invent the process that needs to be taken and implemented.  If anyone on this forum wants to program my idea for me, then I'll give you anything you want (within fairness), as long as I get some money when we strike it rich, and as long as you keep my name on it as co-inventor.  I say co-inventor (not full inventor) because that programmer is going to need to figure out how to read from my index point backward and solve my timeline to decode the bits out of Pi again.  If they can't do that, it's all for naught, and I'm sure it's going to take some hard math and genius of his own to accomplish that.  I may not be useful for that portion.

Then we (those of us working on the software) will take your bet, and we will take the Hutter Prize too, because when we encode that 100 MB Wikipedia document down to 4k, they are going to owe us the full 50,000 Euros.
newbie
Activity: 28
Merit: 0
sr. member
Activity: 434
Merit: 250
I do not understand your propositions, either of them.  Am I allowed to try to .rar whatever you send me?
Sure, go ahead.  I haven't tried it, but I'm pretty sure Rar won't reduce them to 950K or less (in fact, I think Rar won't shave even 1 bit off them).

See link above for 1 MB test files :)

 I am at work now; I will try and respond in about 5 hours (more or less).

 I will simply .rar these and check if that works; if it doesn't, I just lost :D

 So you can basically .rar these yourself if you want and not wait 5 hours for me; if they don't compress, the answer will be in your hands without waiting for me :D
sr. member
Activity: 476
Merit: 250
For the 5-letter input file "Hello", I cancelled the program early in the search for duplicate encodings, but it had already found almost 2 million five-letter inputs which gave the same Pi index (305).  There are 227k inputs starting with a 0 which do.
Hopefully it is clear that this process does not work, as it assigns the same output value to multiple different input files, and therefore there is no way to reconstruct the original input file from just the output index.
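The same effect shows up instantly with random sampling instead of exhaustive search.  A sketch, reusing the toy encode() (and its stated assumptions) from earlier in the thread:

Code:
import os
from collections import Counter

# Tally the (Ending Index, Flip Count) keys of random 5-byte inputs.  Any key
# hit more than once is two different "files" the scheme cannot tell apart.
tally = Counter(encode(os.urandom(5)) for _ in range(50_000))
repeats = sum(n for n in tally.values() if n > 1)
print(f"{repeats} of 50000 samples share their key with another sample")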