This article is written assuming you have arrived with more questions than answers. Expert readers are welcome to offer a critique or, preferably, helpful suggestions.

So, you already know a render farm harnesses greater processing capacity to enable your artists to create multiple creative iterations, get jobs out faster and achieve photo-realistic output where desired.

You get it and have already tried…

So why doesn't mine work, and how do I make it better?

Many applications, such as SketchUp, are primarily linear modelling applications and use single-threaded processing for the majority of tasks. Most applications, at least on workstations, rely heavily on OpenGL, which provides robust 3D APIs and raster pipelines but was designed decades ago without multi-core considerations. You may have invested in high core count (12-24+) processors when you could have invested in lower core counts at higher clock speeds (3.5GHz+) for the artists and put the difference towards a 3rd party render solution.

1. Place the render solution where it will be correctly utilised

You might want to look at 3rd party render plug-ins such as V-Ray for Revit or SketchUp, or look at exporting to a 3D program that offers genuine multi-threaded rendering, such as 3ds Max or Cinema 4D. Obviously there are workflow considerations here.

Similarly, ensure the specific component of the program will benefit from additional processing power. For example, only encoding and transcoding in Adobe After Effects and Avid Media Composer take advantage of multiple cores; otherwise these applications simply use a single core, and not necessarily efficiently. Consider dedicating a specific machine to particular jobs.

2. Start locally to free up some time for the artists

  • Add RAM to any workstation. This is the most economical improvement you can make and may alleviate render failures immediately.
  • Update to the latest software release if the vendor has made improvements to the renderer.
  • Upgrade the GPU to take advantage of CUDA / OpenCL.
  • Add a 3rd party plug-in renderer for your application.
  • Purchase new machines, specifically with the creatives in mind! Single-threaded performance scales nearly linearly with clock speed: an artist currently using a 2.2GHz machine will increase output and save time by approximately 60% by swapping to a 3.5GHz machine, and will feel far less frustrated (see the quick calculation after this list).
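
As a sanity check on that last point, here is a minimal Python sketch of the clock-speed arithmetic. It assumes near-linear scaling, which only holds for single-threaded, CPU-bound work such as the modelling tasks discussed earlier; it is an illustration, not a benchmark.

    # Rough estimate of single-threaded speed-up from a clock-speed bump.
    # Assumes near-linear scaling: single-threaded, CPU-bound work on a
    # comparable architecture, with no memory or IO bottleneck.

    def speedup_percent(old_ghz: float, new_ghz: float) -> float:
        """Approximate throughput gain, in percent, from old_ghz to new_ghz."""
        return (new_ghz / old_ghz - 1.0) * 100.0

    if __name__ == "__main__":
        print(f"2.2GHz -> 3.5GHz: ~{speedup_percent(2.2, 3.5):.0f}% more throughput")
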
3. Check network topology for bottleneck issues

  • The render library needs to live on fast storage behind a high-IOPS server. Renders pull small amounts of data often; this does not require 10Gb networking, but any issues with switches, client connectivity, etc. can cause a render to fail.
  • Ensure software versioning is replicated across client machines and render nodes. Lock versions down and schedule synchronised upgrades across the whole business, as appropriate, with software updates.
  • Investigate queue management such as Thinkbox Deadline, which can report failed renders centrally and automate resubmission when conditions are rectified (a minimal sketch of that retry logic follows).
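
To illustrate what a queue manager automates for you, here is a minimal sketch of resubmit-on-failure logic. The submit_render stand-in is hypothetical (simulated here with a coin flip), not Deadline's actual API; Deadline handles dispatch, failure reporting and resubmission through its own job and event system.

    import random
    import time

    MAX_ATTEMPTS = 3
    RETRY_DELAY_SECONDS = 1  # a real farm would typically wait minutes

    def submit_render(frame: int) -> bool:
        """Hypothetical stand-in for a queue manager's dispatch call.

        Simulated with a coin flip; a real manager such as Deadline would
        dispatch the frame to a node and report the result centrally.
        """
        return random.random() > 0.5

    def render_with_retries(frame: int) -> bool:
        """Resubmit a failed frame a few times before flagging it for a human."""
        for attempt in range(1, MAX_ATTEMPTS + 1):
            if submit_render(frame):
                print(f"frame {frame}: rendered on attempt {attempt}")
                return True
            if attempt < MAX_ATTEMPTS:
                print(f"frame {frame}: attempt {attempt} failed, retrying")
                time.sleep(RETRY_DELAY_SECONDS)  # give conditions time to be rectified
        print(f"frame {frame}: giving up after {MAX_ATTEMPTS} attempts")
        return False

    if __name__ == "__main__":
        failed = [f for f in range(1, 11) if not render_with_retries(f)]
        print(f"frames needing attention: {failed or 'none'}")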

Workflow efficiencies matter too: do artists need training to ensure they have the final output in mind when creating imagery?

But of course, once we start, we always want more…

Cloud render solutions are wonderful for elasticity and immediacy, and are without doubt the preferred option for many; the issue, however, is budget. The general rule of thumb is that cloud solutions cost circa 300% more than tin-on-the-ground over three years, so let's address on-premise options first.
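
To make that rule of thumb concrete, here is a back-of-envelope sketch. Every dollar figure below is an invented placeholder, and the 3x multiplier is simply a loose reading of "circa 300% more", not a quote.

    # Back-of-envelope 3-year comparison using the circa-300% rule of thumb.
    # All dollar figures are invented placeholders for illustration only.

    ON_PREM_HARDWARE = 60_000        # hypothetical render-node purchase
    ON_PREM_ANNUAL_RUNNING = 6_000   # hypothetical power/cooling/admin per year
    CLOUD_MULTIPLIER = 3.0           # "circa 300% more than tin-on-the-ground"
    YEARS = 3

    on_prem_total = ON_PREM_HARDWARE + ON_PREM_ANNUAL_RUNNING * YEARS
    cloud_total = on_prem_total * CLOUD_MULTIPLIER

    print(f"on-prem over {YEARS} years: ${on_prem_total:,}")
    print(f"cloud (rule of thumb):     ${cloud_total:,.0f}")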

Rendering is all about simplicity, economy and efficiency. So, based on the price/performance/power ratio, how do I decide?

You know you need to maximise cores (or GPUs) in the available space, and now I hear folks say "of course we buy blades"… mainly, I think, because Tier 1 vendors suggest this for dense clusters.

However, this is not always the most economical option; we find many clients have available rack space that may not justify the expense of blade clusters. We suggest looking at alternative options for these reasons:

  • Initial investment is comparatively high, as you need to purchase the complete enclosure (often not fully populated due to budget), locking you into that backplane for future blades and leaving no accommodation for future technology.
  • Blades often require Windows Server, an additional administration overhead where a standard OS is all that render nodes require.
  • Components are costly, limited to vendor agreements, and not necessarily aimed at the Media & Entertainment industry.
  • Blade enclosures often don't accommodate GPU-based rendering.

Some of you might recall the RenderBOXX of old from BOXX Technologies.

Fig 1: BOXX Technologies RenderBOXX 10100 Series: an awesome device with
5x RenderBOXX modules (10 nodes, 80 cores) that fit into 4RU.

Since 2006 we have been installing 1U Twin devices (two motherboards in 1RU) from Supermicro, letting us use standard server components for durability and reliability while achieving volume-based pricing. More commonly now we start with a 4-node appliance in 2RU (see Fig 2).

Fig 2: Supermicro SuperServer 6028TR-HTR4:
4 nodes, 8 processors, configurable up to 224 physical cores in 2RU.

Now we have more flexibility in terms of initial cost, performance requirements and rack space.
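
A quick back-of-envelope comparison using the two figures above shows the density gain. These are simply the maximum configurations quoted in the captions, not measured render throughput.

    # Cores-per-rack-unit comparison, using the maximums quoted in Fig 1
    # and Fig 2 above. Density is not the same as render throughput, but
    # it is the constraint when rack space is fixed.

    systems = {
        "RenderBOXX 10100 (Fig 1)": (80, 4),        # (cores, rack units)
        "Supermicro 6028TR-HTR4 (Fig 2)": (224, 2),
    }

    for name, (cores, ru) in systems.items():
        print(f"{name}: {cores / ru:.0f} cores per RU")
    # -> 20 cores/RU vs 112 cores/RU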

CPU vs. GPU

CPU rendering has been the default for decades and remains predominant in many rendering solutions, but in recent years GPU rendering has become particularly relevant for improving render performance. GPUs are already a requirement in any 3D/modelling/CAD application to accelerate the graphical user interface (GUI) and provide high-performance previews; with today's matured GPU renderers such as V-Ray, Octane, Redshift and many others, we find the features and quality are on par with CPU renderers, along with a massive increase in render performance of 2-10X.

GPUs still have a memory limitation that needs to be considered in your rendering pipeline. Complex scenes with many high-resolution textures need to be broken down and optimised for GPU rendering, which can negate the performance gains if significant extra time is required to optimise the scene.
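
As a rough illustration of that constraint, here is a minimal sketch that totals a scene's uncompressed texture footprint against a VRAM budget before committing it to the GPU queue. The 24GB card, the safety margin and the texture counts are all assumptions; real GPU renderers also need memory for geometry, acceleration structures and frame buffers.

    # Rough pre-flight check: will a scene's textures fit in GPU memory?
    # Treat this as a floor, not a guarantee: geometry, acceleration
    # structures and frame buffers also consume VRAM.

    GPU_VRAM_BYTES = 24 * 1024**3   # assumed 24GB card
    SAFETY_MARGIN = 0.8             # headroom for geometry and buffers

    def texture_bytes(width: int, height: int, channels: int = 4,
                      bytes_per_channel: int = 1) -> int:
        """Uncompressed in-memory size of one texture."""
        return width * height * channels * bytes_per_channel

    # Hypothetical scene: eighty 8K textures plus two hundred 2K textures.
    total = 80 * texture_bytes(8192, 8192) + 200 * texture_bytes(2048, 2048)
    budget = GPU_VRAM_BYTES * SAFETY_MARGIN

    print(f"textures: {total / 1024**3:.1f} GB, budget: {budget / 1024**3:.1f} GB")
    if total > budget:
        print("scene likely needs optimisation: smaller, tiled or compressed textures")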

Why do renders fail? There are many reasons a render can fail:

  • Poor/unreliable Networking – paths to assets, scenes and render outputs
    It may or may not come as a surprise, but rendering on a render farm, small or large, requires a reliable network. It is an absolute must that all render nodes have reliable network access to every component of the farm, including the storage systems holding assets and repositories, the render manager database and the manager software. If nodes are unable to access all components, a render may fail, particularly if no resiliency has been built into the farm (a minimal preflight sketch follows this list).
  • Poor/unreliable Storage IO – read and write performance to scenes and assets
    Equally, if access to the shared assets and repositories becomes unavailable because the storage solution is offline or running at its capacity limits, you will find renders failing across the farm.
  • Complex/Poorly created Scenes/Models – Scene optimisation
    Scene/model optimisation was mentioned above in terms of GPU memory limitations, but it is also a common source of render issues and failures more generally, and is often overlooked in ArchViz and 3D rendering. Scene optimisation is a topic of its own and often requires expert knowledge and experience; beginner and intermediate operators may want to look at training courses that cover scene/modelling optimisation.
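
Tying the first two failure modes together, here is a minimal preflight sketch a node could run before accepting work: it verifies that asset and output paths are reachable and that locked software versions have not drifted. All paths, package names and version numbers below are hypothetical placeholders; adapt them to your own farm layout.

    # Minimal preflight sketch for a render node. Paths and versions are
    # hypothetical; a real farm would query its render manager for these.

    import os
    import sys

    REQUIRED_PATHS = [
        "/mnt/projects/scenes",   # scene files
        "/mnt/projects/assets",   # textures and references
        "/mnt/renders/output",    # render output target
    ]

    EXPECTED_VERSIONS = {"renderer": "5.2", "plugin": "1.9"}  # locked versions

    def check_paths() -> list:
        """Report any required mount this node cannot reach."""
        return [p for p in REQUIRED_PATHS if not os.path.isdir(p)]

    def check_versions(installed: dict) -> list:
        """Report any package that drifted from the locked version."""
        return [name for name, want in EXPECTED_VERSIONS.items()
                if installed.get(name) != want]

    if __name__ == "__main__":
        installed = {"renderer": "5.2", "plugin": "1.8"}  # stand-in for a real query
        problems = [f"missing path: {p}" for p in check_paths()]
        problems += [f"version drift: {n}" for n in check_versions(installed)]
        for issue in problems:
            print(issue, file=sys.stderr)
        sys.exit(1 if problems else 0)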

As a final note, remember to allow for any additional software licence requirements. Some cloud instances require per-core licensing, and this will need to be factored into your budgeting. We suggest you contact Digistor to investigate a particular vendor if this is unclear.
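
As a simple illustration of that budgeting step, assuming an invented per-core rate (the $25/core/month figure is a placeholder, not any vendor's pricing):

    # Per-core licence budgeting for a hypothetical renderer.
    # The $25/core/month rate is invented for illustration only.

    NODES = 4
    CORES_PER_NODE = 56           # e.g. the 2RU Twin above: 224 cores / 4 nodes
    RATE_PER_CORE_MONTH = 25.0    # hypothetical licence rate

    monthly = NODES * CORES_PER_NODE * RATE_PER_CORE_MONTH
    print(f"{NODES * CORES_PER_NODE} licensed cores -> ${monthly:,.0f} per month")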