Interesting project, to say the least. Looks like Golem has a competitor. From what I have studied and tested, GPU PCIe bandwidth isn't an issue for rendering the way it is for AI training. I got the following results.
Vray Benchmark:
https://benchmark.chaosgroup.com/gpu/details?hw=Intel%28R%29+Core%28TM%29+i5-2300+CPU+%40+2.80GHz+x4%2C+GeForce+GTX+1070+8192MB+x4&id=16244
System specs: i5 2300
RAM: 10 GB
GPUs: 4 x GTX 1070
The difference is +/- 2 seconds. Keep in mind I used a 4-way PCIe splitter, so these results are from a single PCIe lane.
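To put rough numbers on why a one-time scene upload over a narrow link barely matters (my own back-of-the-envelope in Python; the scene size and render time below are assumptions, not measurements):

```python
# Back-of-the-envelope: why a single PCIe lane barely hurts GPU rendering.
# Assumed numbers, not measurements: PCIe 2.0 x1 ~500 MB/s effective,
# scene uploaded once per job, render time dominated by on-GPU compute.

PCIE_X1_MBS = 500      # approx. effective PCIe 2.0 x1 bandwidth, MB/s
PCIE_X16_MBS = 8000    # approx. effective PCIe 2.0 x16 bandwidth, MB/s

scene_mb = 4000        # hypothetical 4 GB scene (geometry + textures)
render_s = 300         # hypothetical 5-minute frame

upload_x1 = scene_mb / PCIE_X1_MBS     # ~8 s one-time transfer
upload_x16 = scene_mb / PCIE_X16_MBS   # ~0.5 s one-time transfer

extra = (upload_x1 - upload_x16) / render_s
print(f"x1: {upload_x1:.1f}s  x16: {upload_x16:.1f}s  overhead: {extra:.1%}")
```

Training workloads, by contrast, stream batches across the bus for the entire run, which is why lane count hurts them far more.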
Here is another result, from OctaneBench:
I saw a ~10% performance loss while using the splitter. Still not that much, IMO.
What I am more interested in is why they are worried about latency and how Golem will tackle it. From what I have researched, there are already plugins available for connecting multiple machines for rendering in a LAN environment, roughly like the sketch below. Are you guys using one of these? That's the only logical explanation I can see for not accepting providers with fewer than 100 GPUs: their software isn't made to render at global scale, i.e., to split up a task and send it around the globe to average users/miners.
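For reference, this is roughly what those LAN plugins do (a minimal Python sketch; the hostnames and the submit_frame helper are made up, not any specific plugin's API):

```python
# Minimal sketch of how typical LAN render plugins distribute work:
# whole frames go to each node, round-robin; no node ever works on a
# piece of another node's frame. Hostnames and submit_frame() are
# hypothetical placeholders.
from itertools import cycle

nodes = ["node-01.lan", "node-02.lan", "node-03.lan"]
frames = range(1, 101)  # frames 1..100

def submit_frame(node: str, frame: int) -> None:
    # Placeholder: a real plugin would RPC into the renderer on `node`.
    print(f"{node} <- frame {frame:03d}")

for node, frame in zip(cycle(nodes), frames):
    submit_frame(node, frame)
```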
Also, what will happen with GPUs that sit idle most of the time? Are you going to support mining as a fallback for them? I can't imagine these GPUs being used 24/7 unless you have managed to land every single animation studio on the planet, which I highly doubt at this stage.
Hi,
To answer your question, we have to step back to the main purpose of Leonardo and the way our system approaches the rendering task.
Simple case: 1 computer with 1 GPU. This computer can run a render on its single GPU. Fine.
2 computers with 1 GPU each. Each computer can run a portion of the render: you install the 3D software twice (once on each computer), then each machine renders a piece of the task that the user has to assemble together somehow. (Not ideal at all for designers.)
What if you want both GPUs, sitting in two separate computers, to work on the same frame? How do you combine them?
If you know a solution to this problem, I'll be more than happy to try it. But as far as we know, there is none, unfortunately.
Leonardo can do it. Our solution can allocate a nearly unlimited number of GPUs to work on a single frame. This is GPU virtualization.
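To give a rough idea of the concept only (this toy Python sketch is purely illustrative and is not our implementation, which is proprietary): split the frame into bands, render one band per GPU, then stitch the results.

```python
# Toy illustration of the concept only - NOT our virtualization layer.
# One frame is split into horizontal bands, one band per GPU, and the
# finished bands are stitched back into a single image.
# render_band() is a stand-in for the real per-GPU work.

WIDTH, HEIGHT, GPUS = 192, 108, 4  # tiny resolution for the example

def render_band(gpu_id: int, y0: int, y1: int) -> list:
    # Stand-in: pretend this GPU returns its rows of shaded pixels.
    return [[(gpu_id, x, y) for x in range(WIDTH)] for y in range(y0, y1)]

band = HEIGHT // GPUS
frame = []
for gpu in range(GPUS):
    y0 = gpu * band
    y1 = HEIGHT if gpu == GPUS - 1 else y0 + band
    frame.extend(render_band(gpu, y0, y1))  # stitch bands in row order

assert len(frame) == HEIGHT  # one complete frame assembled from 4 GPUs
```

The hard part, of course, is making this transparent to the renderer instead of hand-splitting one toy loop.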
In order to work, all the GPUs must be wired inside the same network. For this reason, our ideal provider has 50+ GPUs, so each render client can connect to the infrastructure and render without queues or GPU shortages.
Latency is not an issue in Leonardo; it is simply not a problem for us.
We ran high-speed tests from China against US infrastructure. Not a single second of delay.
Try it and see for yourself.
Regarding your point about existing plugins: I am sure there are some plugins around that can dispatch each frame to a single GPU node (which doesn't mean you can combine GPUs on a single frame), and Blender's Network Render can do it too; it's a free add-on. Not crazy magic. You can even script it yourself, as in the sketch below.
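Something like this hypothetical Python wrapper (the hosts are made up; the Blender command-line flags -b, -s, -e and -a are real background-render options):

```python
# Hypothetical wrapper for plain per-frame splitting: carve a frame
# range into chunks and hand each chunk to a separate Blender instance.
# The hosts are placeholders; the Blender flags are real
# (-b background, -s/-e start/end frame, -a render the animation).

hosts = ["render-a", "render-b"]  # hypothetical LAN machines
total_frames, blend = 100, "shot.blend"

chunk = total_frames // len(hosts)
for i, host in enumerate(hosts):
    start = i * chunk + 1
    end = total_frames if i == len(hosts) - 1 else (i + 1) * chunk
    cmd = f"blender -b {blend} -s {start} -e {end} -a"
    # A real setup would dispatch this over ssh or a queue manager.
    print(f"{host}: {cmd}")
```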
My question is: if our solution is not so innovative, why is there no market leader in cloud rendering? I don't know of any render farm that can count on 250,000 GPUs around the world, proprietary GPU virtualization software, and a platform that eliminates queues.
All of this at a price 30-40% cheaper than any competitor.
Including an open beta test delivered before the ICO.
I hope you will give our software a try and let us know.
Thanks,
Marco