I partially disagree, because I don't think they are the same language.
Yes. They are. They are both just numbers.
The value of the dice is just raw information, say "61235" after a roll of 5 dice, which in its basic binary form is:
110110 110001 110010 110011 110101
I highly doubt this sequence of information must be inside the binary form of the picture of the dice, because it's not the same language.
It doesn't need to be. You just need the same amount of entropy. It doesn't need to be the same value.
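For the record, those bit groups in the example above are nothing mysterious: they are just the binary ASCII codes of the digit characters. A quick Python sketch to check, using the "61235" roll from above:

# Each digit '0'-'9' has ASCII code 48-57, so its binary form fits in 6 bits.
roll = "61235"
print(" ".join(format(ord(ch), "b") for ch in roll))
# -> 110110 110001 110010 110011 110101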
We just interpret the output of the dice as being a 6, but the computer only sees the pixel values of the dice, not the "6" itself.
Actually, it does.
Let's look at the simplest example.
Let's imagine that we have a black-and-white bitmap picture of only 1 die, with an image resolution of 7x7 pixels. In this bitmap, 1 will represent white and 0 will represent black.
Here is a perfectly centered picture of the '6':
1111111
1101011
1111111
1101011
1111111
1101011
1111111
And here is the picture of the '1':
1111111
1111111
1111111
1110111
1111111
1111111
1111111
Hopefully, you can see that in this overly simplified photo system, there are exactly 6 possible arrangements of the 49 pixels, one per die face. Each of the 6 arrangements has equal probability (assuming you used a perfectly fair die).
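Here is a minimal Python sketch of this toy photo system (the pip layout for faces 2 through 5 is my own assumption, chosen to match the '6' and '1' bitmaps above). It renders all 6 faces, confirms that each face produces a unique bitmap, and computes the entropy of a single die:

import math

# Pip positions on a 3x3 grid (row, col) for each face. The mapping to
# pixels (grid row r -> pixel row 1+2r, grid col c -> pixel col 2+c)
# reproduces the 7x7 bitmaps shown above for '6' and '1'.
PIPS = {
    1: {(1, 1)},
    2: {(0, 0), (2, 2)},
    3: {(0, 0), (1, 1), (2, 2)},
    4: {(0, 0), (0, 2), (2, 0), (2, 2)},
    5: {(0, 0), (0, 2), (1, 1), (2, 0), (2, 2)},
    6: {(0, 0), (1, 0), (2, 0), (0, 2), (1, 2), (2, 2)},
}

def face_bitmap(face):
    # 7x7 bitmap of one die face: 1 = white, 0 = black pip.
    pixels = [[1] * 7 for _ in range(7)]
    for r, c in PIPS[face]:
        pixels[1 + 2 * r][2 + c] = 0
    return tuple("".join(map(str, row)) for row in pixels)

bitmaps = {face: face_bitmap(face) for face in range(1, 7)}
assert len(set(bitmaps.values())) == 6   # all 6 faces give unique bitmaps
print(math.log2(6))                      # ~2.585 bits of entropy per die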
In a larger image (more pixels), you could fit more dice. If the dice were all perfectly aligned with the rows of the bitmap, and each die were 7 pixels wide in the larger bitmap, then each 49-pixel square representing a die would have as much entropy as the die itself: 6 unique possibilities, each with equal probability.
So, at the bare minimum, the picture is guaranteed to have at least as much entropy as the dice. As the resolution of the picture increases, and the pixel values go from black-and-white to full color, the orientation, brightness, and color of the "dice pixels" push the entropy significantly above the minimum provided by the dice values.
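To put numbers on that minimum: each fair die contributes log2(6) ≈ 2.585 bits, so the entropy floor grows linearly with the number of dice. A short sketch:

import math

# Entropy floor contributed by n fair dice, with everything else in the
# photo held constant: n * log2(6) bits, i.e. 6^n distinct pictures.
for n in (1, 2, 5, 62):
    print(n, "dice:", round(n * math.log2(6), 1), "bits minimum")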
the picture itself is only as good a source of randomness as the pixels inside it are random,
And the randomness of the pixels inside the picture is partly due to the randomness of the values on the dice. That means the pixels overall are even more random than the values on the dice alone.
but they probably have nothing to do with the informational content that it carries for humans.
Correct. This has nothing to do with informational content for humans. It has to do with the fact that each die value will create a unique set of pixel values that will ALWAYS be different from the pixels of ANYTHING that is not a die showing that value (because if the pixels were exactly identical to those of a die value, then it would be a picture of an identical-looking die face).
Now maybe the picture has a higher entropy value than the roll information itself, but that is another issue.
It does. But what we are trying to get you to understand is that the fact that the dice are in the picture GUARANTEES that the entropy will NOT be any lower than that of the rolled dice themselves.
A picture of a perfectly white surface, perfectly lit so that every pixel has EXACTLY the same value, will have NO entropy. Every time you take that picture, you will end up with the EXACT same series of pixel values. BUT if you roll a die, perfectly centered on the surface, just before taking the picture, you'll ALWAYS have at least 6 different series of pixel values for the picture (since the die will change the pixel values depending on the value of the die). Roll 2 dice before taking the picture, and you'll ALWAYS have at least 36 different series of pixel values for the picture. Roll 62 dice, and you'll ALWAYS have at least 6^62 different series of pixel values for the picture.
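As a sanity check of those counts, here is a sketch reusing the toy 7x7 faces from earlier (same perfectly-aligned, perfectly-lit assumptions): all 36 two-dice rolls really do produce 36 distinct pictures, and 62 dice guarantee 6^62 distinct pictures, about 160 bits of minimum entropy.

import math
from itertools import product

PIPS = {   # same assumed pip layout as in the earlier sketch
    1: {(1, 1)}, 2: {(0, 0), (2, 2)}, 3: {(0, 0), (1, 1), (2, 2)},
    4: {(0, 0), (0, 2), (2, 0), (2, 2)},
    5: {(0, 0), (0, 2), (1, 1), (2, 0), (2, 2)},
    6: {(0, 0), (1, 0), (2, 0), (0, 2), (1, 2), (2, 2)},
}

def face_bitmap(face):
    # 7x7 bitmap of one die face: 1 = white, 0 = black pip.
    pixels = [[1] * 7 for _ in range(7)]
    for r, c in PIPS[face]:
        pixels[1 + 2 * r][2 + c] = 0
    return tuple("".join(map(str, row)) for row in pixels)

def picture(roll):
    # One 7-pixel-wide face per die, side by side on the white surface.
    faces = [face_bitmap(f) for f in roll]
    return tuple("".join(f[row] for f in faces) for row in range(7))

# Every one of the 36 possible two-dice rolls yields a distinct picture:
assert len({picture(r) for r in product(range(1, 7), repeat=2)}) == 36
print(62 * math.log2(6))   # 6^62 pictures ~ 160.3 bits of minimum entropy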