The entropy of this benchmark, like that of all compression benchmarks, is unknown. Unfortunately, there is no direct way to compute it.
In the absence of a known probability distribution, we may define the information content, or Kolmogorov complexity K(s), of a string s as the length of the shortest program that outputs s [11].
K(s) is independent of the language used to write the program, up to a constant independent of s, because any program written in language L1 can be rewritten in L2 by appending a compiler for L1 written in L2.
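In symbols, K_L2(s) <= K_L1(s) + c, where the constant c is the length of the compiler for L1 written in L2; by the symmetric argument, |K_L1(s) - K_L2(s)| is bounded by a constant that depends only on the two languages, not on s.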
Kolmogorov also proved that K(s) is not computable. The proof is simple. Suppose K(s) were computable. Then, for any n, you could write a program to find the first string s whose complexity is at least n bits, as follows:
s := ""
while K(s) < n do
  s := next(s)  // next string in shortlex order: "", "0", "1", "00", "01", ...
output s
Now let n = 10000. The above program is much shorter than n bits: it only has to encode the number 10000 plus a constant amount of code. But it outputs a string s with K(s) >= 10000, and since the program itself outputs s, K(s) can be no larger than the program's length, which is less than 10000 bits. This is a contradiction. The proof applies in any language by making n sufficiently large.
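To make the shape of this program concrete, here is a minimal sketch in Python. Since K(s) is not computable, the sketch substitutes a computable stand-in: the zlib-compressed size of s, which is only an upper bound on K(s). The function names and the zlib stand-in are illustrative choices, not part of the proof; if the true K(s) could be plugged in where the bound is used, this short program would be exactly the contradictory one described above.

import itertools
import zlib

def shortlex_strings():
    # Enumerate all binary strings in shortlex order:
    # "", "0", "1", "00", "01", "10", "11", "000", ...
    length = 0
    while True:
        for bits in itertools.product(b"01", repeat=length):
            yield bytes(bits)
        length += 1

def K_upper_bound(s):
    # Computable stand-in for K(s): the zlib-compressed size of s in bits.
    # This is only an upper bound (plus container overhead); the true K(s)
    # cannot be computed by any program.
    return 8 * len(zlib.compress(s, 9))

def first_complex_string(n):
    # The program from the proof: return the first string whose complexity
    # (here, its computable upper bound) is at least n bits. With the true
    # K in place of the bound, this program, far shorter than n bits for
    # large n, would output a string s with K(s) >= n -- the contradiction.
    for s in shortlex_strings():
        if K_upper_bound(s) >= n:
            return s

print(first_complex_string(128))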