I can maybe add a bit of context to this. I worked on Moonray/Arras at DWA about 8-9 years ago.
Arras was designed to let multiple machines work on a single frame in parallel. Film renderers still very much leverage the CPU for a lot of reasons, and letting a render run to completion on a single workstation could take hours. Normally this isn’t a problem for batch rendering, which typically happens overnight, for shots that will get reviewed the next day.
But sometimes it’s really nice to have a very immediate, interactive workflow at your desk. Typically that means using a different renderer designed with a more real-time architecture in mind, and often its shaders don’t match the final renderer’s, so it’s not an ideal workflow.
Arras was designed to give you the best of both worlds. Moonray is perfectly happy to render frames in batch mode, but it can also use Arras to connect dozens of workstations together and have them all work on the same frame in parallel. This basically gives you a film-quality interactive lighting session at your desk, where the final render will match what you see pixel for pixel because ultimately you’re using the same renderer and the same shaders.
Neat! Parallelizing a single frame across multiple machines was something I'd wanted to try back when I was working on RenderMan. It used to be able to do it back in the REYES days via netrender, but was something we lost with the move to pathtracing on the RIS architecture.
Could you go into a bit more detail on how the work is distributed? Is it per tile (or some other screen-space division like macro-tiles or scan-lines)? Per sample pass? (Surely it's not scene distribution like the old Kilauea renderer from Square!) Dynamic or static scheduling? Sorry, so many questions. :-)
My knowledge is probably outdated at this point (the now open-source code is probably a better reference than my memory!) but at the time it was exactly as you described. Each workstation loaded the scene independently, work was distributed as screen-space tiles, and final assembly of the tiles was done on the client. I can’t remember if we implemented a work-stealing queue to load balance the tile queue or not… my brain may be inventing details on that part. :)
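For the curious, here's a toy sketch of what that kind of scheme looks like: split the frame into screen-space tiles, deal them out to per-worker queues, and let idle workers steal from busier ones. This is purely illustrative (names, tile size, and the stand-in "render" are mine, not Arras code):

```python
# Toy sketch of screen-space tile distribution with work stealing.
# NOT the Arras implementation -- just an illustration of the idea.
import threading
from collections import deque

TILE = 64  # hypothetical tile size in pixels

def make_tiles(width, height, tile=TILE):
    """Split the frame into screen-space tiles (x, y, w, h)."""
    return [(x, y, min(tile, width - x), min(tile, height - y))
            for y in range(0, height, tile)
            for x in range(0, width, tile)]

class WorkStealingScheduler:
    def __init__(self, tiles, n_workers):
        # Statically deal tiles round-robin, then let idle workers steal.
        self.queues = [deque() for _ in range(n_workers)]
        for i, t in enumerate(tiles):
            self.queues[i % n_workers].append(t)
        self.lock = threading.Lock()

    def next_tile(self, worker_id):
        with self.lock:
            if self.queues[worker_id]:
                return self.queues[worker_id].popleft()
            # Idle: steal from the back of the longest remaining queue.
            victim = max(self.queues, key=len)
            if victim:
                return victim.pop()
            return None  # frame is done

def render_frame(width, height, n_workers=4):
    tiles = make_tiles(width, height)
    sched = WorkStealingScheduler(tiles, n_workers)
    done, done_lock = [], threading.Lock()  # final assembly on the "client"

    def worker(wid):
        while True:
            tile = sched.next_tile(wid)
            if tile is None:
                return
            x, y, w, h = tile
            pixels = [[0.5] * w for _ in range(h)]  # stand-in for path tracing
            with done_lock:
                done.append((tile, pixels))

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return done
```

In a real distributed setup the workers would be separate machines and the stealing would go through the client or a coordinator rather than a shared lock, but the scheduling idea is the same.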
I built a scene distribution renderer similar to Kilauea for my masters thesis in school, except with a feed forward shader design which exploited the linear color space to never send the results of computations back up the call stack… kind of neat but yeah, all sorts of reasons why that kind of design would not work well under production workloads. And RAM has gotten so stinking cheap!
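To make the linearity trick concrete: because light transport is linear, a path tracer doesn't have to return radiance back up a recursive call stack; it can carry the accumulated throughput forward and add each bounce's contribution directly into a sink. A tiny sketch (my own simplification, not my thesis code) showing the two styles computing the same thing:

```python
# Sketch: recursive vs. feed-forward radiance accumulation.
# Linearity of light transport means contribution at bounce d is just
# (product of albedos along the path so far) * (emission at d),
# so nothing ever needs to travel back up the call stack.

def radiance_recursive(depth, emitted, albedo, max_depth=3):
    """Conventional style: each bounce returns radiance to its caller."""
    if depth == max_depth:
        return 0.0
    deeper = radiance_recursive(depth + 1, emitted, albedo, max_depth)
    return emitted[depth] + albedo[depth] * deeper

def radiance_feed_forward(emitted, albedo, max_depth=3):
    """Feed-forward style: push throughput along and accumulate as you go."""
    total, throughput = 0.0, 1.0
    for depth in range(max_depth):
        total += throughput * emitted[depth]   # contribution lands here...
        throughput *= albedo[depth]            # ...and only throughput moves on
    return total
```

In a distributed scene renderer that's exactly what lets a shading request "fire and forget": the machine that owns the geometry adds its weighted contribution to the image and never has to reply to the machine that sent the ray.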