If by "programming perspective" you mean the data type being used -- currently Bitcoin uses 64-bit signed integers for transaction amounts.
Since one bit is used for the sign (+/-), the largest number this data type can hold is 2^63 - 1 = 9,223,372,036,854,775,807 (≈ 9.22e+18).
So yeah, from a data type perspective it could be "something like 100000000000000000" (= 1.0e+17).
But no, from a protocol perspective such a transaction would be invalid.
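For reference, here is a minimal sketch of how the relevant constants look in Bitcoin Core's amount header (paraphrased; the exact file layout and version may differ):

```cpp
#include <cstdint>

// Amounts are signed 64-bit integers counted in satoshis.
typedef int64_t CAmount;

static const CAmount COIN = 100000000;             // 1 BTC = 10^8 satoshis
static const CAmount MAX_MONEY = 21000000 * COIN;  // 2,100,000,000,000,000 satoshis

// Consensus-level sanity check: any amount outside [0, MAX_MONEY] is invalid,
// even though an int64_t could technically hold values up to 2^63 - 1.
inline bool MoneyRange(const CAmount& nValue)
{
    return (nValue >= 0 && nValue <= MAX_MONEY);
}
```

So an output of 1.0e+17 satoshis fits in the data type, but fails the MoneyRange check and is rejected by the protocol.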
So then, Bitcoin's code uses 21,000,000 * COIN, where the COIN constant is the number of base units (so-called satoshis) in one coin, i.e. 8 decimal places, so Bitcoin's program actually handles 21,000,000 * 100,000,000 = 2,100,000,000,000,000 satoshis?
So this falls within the range you mentioned above, 9,223,372,036,854,775,807.
Then a new altcoin's maximum possible value should also be within that range: 9,223,372,036,854,775,807 / 100,000,000 (the COIN constant, satoshis per coin) ≈ 92,233,720,368.
So the total supply of an altcoin should be below 92,233,720,368 coins?
Unless you ALSO change the value of COIN, or change the code to use something larger than 64-bit integers. If you do that then you can have ANY VALUE YOU WANT.
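To make the headroom concrete, here is a rough sketch (plain C++, just integer division, nothing Bitcoin-specific) of how the maximum whole-coin supply depends on the COIN value while amounts stay in an int64_t:

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main()
{
    const int64_t max_amount = std::numeric_limits<int64_t>::max(); // 9,223,372,036,854,775,807

    // Bitcoin-style precision: 8 decimal places, COIN = 10^8.
    const int64_t coin_8dp = 100000000;
    std::cout << "Max whole coins with 8 decimals: "
              << max_amount / coin_8dp << '\n';     // 92,233,720,368

    // Fewer decimal places leave room for a larger whole-coin supply,
    // e.g. 4 decimal places, COIN = 10^4.
    const int64_t coin_4dp = 10000;
    std::cout << "Max whole coins with 4 decimals: "
              << max_amount / coin_4dp << '\n';     // 922,337,203,685,477

    return 0;
}
```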
At this point I am speaking hypothetically, but depending on the data type you use (in this case int64 or wider), it could also affect the network protocol: the bigger the data type, the more data is needed for every amount in every transaction. To be fair, the difference only matters at extreme scale, but it is still worth considering for an altcoin intended as a smaller, more private coin that is not used by many people and does not need a large supply. In such a case one could use an int32, which occupies only half the space, albeit it would be limited to around 2.1 billion minimum-denomination units (satoshi equivalents), or around 21 whole coins if the same 8 decimal places as Bitcoin were used. There are definitely useful use cases for this, although maybe not so much for, say, a 128-bit data type.
He could double the range by using a uint32 instead of an int32. Why use int32 anyway? Are negative numbers used anywhere?
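For comparison, a quick sketch (again just standard C++, assuming the amount type is the only thing changed) of what 32-bit amounts would allow at Bitcoin's 8 decimal places:

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main()
{
    const int64_t coin = 100000000; // 10^8 base units per coin, as in Bitcoin

    const int32_t  i32_max = std::numeric_limits<int32_t>::max();  // 2,147,483,647
    const uint32_t u32_max = std::numeric_limits<uint32_t>::max(); // 4,294,967,295

    // A signed 32-bit amount tops out around 21 whole coins at 8 decimals;
    // going unsigned roughly doubles that to around 42 coins.
    std::cout << "int32:  " << i32_max << " base units ~ "
              << static_cast<double>(i32_max) / coin << " coins\n";
    std::cout << "uint32: " << u32_max << " base units ~ "
              << static_cast<double>(u32_max) / coin << " coins\n";

    return 0;
}
```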
There are a few weird decisions made long ago that seem trivial but add complexity where it isn't really needed; I guess this was the programming style of the original programmer.