Yes, it's grid. I've been in the business for many years.
Many of the problems being run on dedicated supercomputers could be run on a distributed grid - you only need a high-speed interconnect when you have to move massive amounts of data between nodes. Many simulations use as many cores as they can get allocated, for as many hours as they can get them, because the calculations are relatively independent of one another, require only small packets of data in and out, and can be processed in parallel, combined, and sent back out as new calculations.
They typically aren't moving around "big data" between processing nodes, although they may be generating a bit of it as output. Granted, this is not always the case...
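To make that concrete, here's a toy sketch (Python, all names and numbers made up) of the pattern I mean - each work unit takes a few bytes of parameters, burns CPU for a long time, and sends back a small result, with no node-to-node traffic at all:

```python
# Toy sketch of the grid work-unit pattern (everything here is illustrative):
# each node gets a tiny input, grinds on it, and returns a tiny output.
from multiprocessing import Pool

def simulate_work_unit(params):
    """Stand-in for hours of CPU work driven by a few bytes of parameters."""
    seed, steps = params
    x = seed
    for _ in range(steps):
        x = (x * 6364136223846793005 + 1442695040888963407) % 2**64
    return x  # small output packet

if __name__ == "__main__":
    # The "server" hands out independent work units...
    work_units = [(seed, 1_000_000) for seed in range(32)]
    with Pool() as pool:                      # volunteers' spare cores, in miniature
        results = pool.map(simulate_work_unit, work_units)
    # ...then combines the small results and could issue a new batch.
    print("combined result:", sum(results) % 2**64)
```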
EDIT: There are more than just a few examples - there are literally hundreds of grid applications. They're run this way to save the projects money, with the time and resources donated by the volunteers running them. But suppose an app came along that DID pay some trivial amount of money per calculation - like Bitcoin does - do you not think significant spare computing capacity would tend to gravitate to that app?
I 100% agree that a fair number of applications running on HPC systems could very easily be run on a grid, but we tend to regard those as apps that shouldn't really be running on an HPC in the first place. Users running grid-style applications on a true high-speed HPC system are really wasting their funding by purchasing time on such a machine.
Now where I disagree is on getting the data to the nodes. On most internet connections this would take a fairly large amount of time, unless each machine is working on a very small data subset - and if the subsets are that small, I'd imagine the network would be inefficient even for most grid computing tasks. Even grid clusters tend to have a decent data backbone for loading up the RAM on each node before running computations. This is why data-center-based clouds, such as Amazon's, would be a much more viable candidate for this sort of computing: they have the network connections capable of receiving the datasets in a reasonable amount of time. I just can't see how paying a trifling amount per calculation would be more cost effective than buying time on a high-performance (or even low-performance) cloud. The I/O is a major PITA.
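For a sense of scale, here's a quick back-of-envelope calculation (the dataset size and link speeds are purely illustrative) of how long it takes just to ship one input set to a node over different connections:

```python
# Rough numbers for why data movement hurts on volunteer-grade links:
# time to deliver one input set to a node at various bandwidths.
def transfer_hours(dataset_gb, link_mbps):
    bits = dataset_gb * 8e9              # GB -> bits (decimal GB for simplicity)
    return bits / (link_mbps * 1e6) / 3600

for label, mbps in [("volunteer DSL (10 Mb/s)", 10),
                    ("good home fibre (100 Mb/s)", 100),
                    ("cloud/data-center (10 Gb/s)", 10_000)]:
    print(f"{label:28s}: {transfer_hours(50, mbps):6.2f} h to ship a 50 GB input")
```

At volunteer-grade bandwidth the transfer alone can eat more wall-clock time than a work unit is worth, which is exactly the problem.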
Now if you decide to produce and store all the data locally, you still have to keep track of the metadata so that you can stitch the results back together. This is a serious issue in and of itself; in traditional HPC, the data is simply stored to disk. You will have to verify the data and restitch it, which could require its own large HPC just for that, and the data has to be collected somewhere. Check out the research on metadata trees - there are people making careers out of just this; look at the folding@home methods in particular. It's a huge issue for your model.
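Just to illustrate the bookkeeping involved, here's a rough sketch (field names are hypothetical, not any project's actual schema) of the per-work-unit metadata you'd need before you could verify and restitch anything:

```python
# Minimal sketch of the bookkeeping problem: every result that comes back from
# a volunteer needs enough metadata to verify it and place it in the whole.
# Field names are made up for illustration only.
import hashlib, json

def make_record(unit_id, params, payload: bytes, worker_id):
    return {
        "unit_id": unit_id,                # where this piece fits in the global job
        "params": params,                  # what was computed
        "sha256": hashlib.sha256(payload).hexdigest(),  # catch corrupt/forged results
        "worker": worker_id,               # needed for redundancy / cross-checking
        "bytes": len(payload),
    }

manifest = [make_record(i, {"seed": i}, f"result-{i}".encode(), f"host{i % 3}")
            for i in range(5)]
# The manifest itself has to live somewhere central and be queried to
# re-assemble the output in order - that part doesn't come for free.
print(json.dumps(manifest[:2], indent=2))
```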
What you want already exists - it's called Condor. It's fairly slow and has been around for quite some time. If you want to test your applications on something similar to what you propose, go run some of them on Condor and see how they do. It's a major PITA though, because of all the non-traditional checkpointing that has to be done.
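And to be clear about what that checkpointing means in practice: on scavenged cycles your job can be evicted at any moment, so the application itself has to save and restore its own state. A minimal sketch, with made-up filenames and a stand-in workload:

```python
# Rough illustration of application-level checkpointing: the job must be able
# to die at any moment and resume from its last saved state on another node.
import json, os

CHECKPOINT = "state.json"

def load_state():
    if os.path.exists(CHECKPOINT):
        with open(CHECKPOINT) as f:
            return json.load(f)
    return {"step": 0, "acc": 0.0}

def save_state(state):
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, CHECKPOINT)          # atomic, so a kill mid-write can't corrupt it

state = load_state()
for step in range(state["step"], 1_000_000):
    state["acc"] += step * 1e-6          # stand-in for real work
    state["step"] = step + 1
    if step % 10_000 == 0:
        save_state(state)                # survive eviction from the node
save_state(state)
print("done:", state["acc"])
```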