

Principles

Sample based: When using samples, the artist can switch between speed and quality at any given moment. When speed is needed, the artist can lower the number of samples for fast feedback; when actually rendering, the number of samples can be increased for a better result.

Relative: Currently, changing the resolution (or render percentage) affects the behaviour of several nodes; even the default settings of the Blur node need to be adjusted. With relative parameters, every parameter is aware of the resolution it is calculated in (see the sketch after this list).

PixelSize aware: Currently the compositor is fixed to the perspective and orthographic camera models. When using panorama cameras the blurs are not accurate, and in cases like dome rendering and VR/AR a lot of trickery and workarounds are needed in order to composite correctly. When the compositor takes the actual camera data of the scene (or image) into account, it can calculate these cases more accurately.

Canvas: Being able to put images in the compositor and align/transform them visually.

GPU support: The current compositor supports the GPU for only a certain number of nodes; the other nodes are calculated on the CPU, and loading/unloading the huge amounts of data between the two takes a lot of resources. The new design should be able to run fully on the GPU.
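The "Relative" principle can be illustrated with a minimal sketch. This is not the actual design: the function and parameter names are invented, and it assumes a node parameter stored as a fraction of the render height.

```python
# Hypothetical illustration of resolution-relative parameters: a blur radius
# stored as a fraction of the render height keeps the same visual size when
# the resolution or the render percentage changes.

def blur_radius_pixels(relative_radius: float, render_height: int,
                       percentage: float = 100.0) -> float:
    """Convert a resolution-relative radius to pixels (invented helper)."""
    return relative_radius * render_height * (percentage / 100.0)

# The same node setup at preview and at final resolution:
print(blur_radius_pixels(0.02, 1080, percentage=50.0))   # preview -> 10.8 px
print(blur_radius_pixels(0.02, 1080, percentage=100.0))  # final   -> 21.6 px
```

Because the stored value never changes, switching the render percentage would no longer require retuning the node.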

The new design

In the sample-based compositor, the X,Y coordinate of the output image is transformed into a ray (position, direction, up-vector, sample size). When the ray arrives at an input node (Image, RenderLayer), the ray is transformed into that image's space, the image is sampled to a result color/value, and the result is passed back to the node that requested it. Nodes will be able to alter these rays, and to select which input socket receives which ray; for example, a blur node can 'bend' the incoming ray (or change its sample size). As samples are slightly randomized, every sample hits a different part of the same pixel, which leads to sub-pixel sampling. This results in very crisp images compared to the current compositor. (A sketch of this evaluation loop follows at the end of this section.)

At all buffers (input/output images, renderlayer nodes) the user will be able to select a viewport, which identifies where the image is in the scene and with what kind of camera it was created (e.g. planar vs spherical). These viewports can be added in the 3D scene, but for canvas compositing we will also allow them to be visible in the backdrop of the node editor. Using viewports it will be easier to composite planes into your scene when the camera is moving. Also, at all input image/renderlayer nodes, the filter (nearest, linear, cubic, smart and others) and the clipping mode (Clip, Extend, Repeat) to be used when sampling the image can be selected.

Normally compositors are image based, and many algorithms are also designed for image-based compositors. As this design is totally different, there are risks: are we able to implement all current features in this new architecture? IMO we will get to 95% of the old features, but some features are very hard to implement (or will come with huge penalties).
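To make the ray-based evaluation described above concrete, here is a minimal sketch of the idea, flattened to 2D. Every name in it (Ray, ImageNode, BlurNode, composite) is invented for illustration and does not reflect actual compositor code; the real design also carries a direction and up-vector on each ray so that camera models such as spherical panoramas can be handled, and it offers more filters than the nearest-neighbour lookup used here.

```python
import random
from dataclasses import dataclass

@dataclass
class Ray:
    x: float            # position in normalized output space (0..1)
    y: float
    sample_size: float  # footprint of one sample, in normalized units

def wrap(i: int, n: int, mode: str) -> int:
    """Clipping modes from the text: Clip, Extend, Repeat."""
    if mode == "Repeat":
        return i % n
    if mode == "Extend":
        return min(max(i, 0), n - 1)
    return i  # "Clip": out-of-range is handled by the caller

class ImageNode:
    """Leaf node: transforms the ray into image space and samples the pixels."""
    def __init__(self, pixels, width, height, clip="Clip"):
        self.pixels, self.width, self.height, self.clip = pixels, width, height, clip

    def sample(self, ray: Ray) -> float:
        ix, iy = int(ray.x * self.width), int(ray.y * self.height)  # nearest filter
        if self.clip == "Clip" and not (0 <= ix < self.width and 0 <= iy < self.height):
            return 0.0  # outside the image: treated as black/transparent
        return self.pixels[wrap(iy, self.height, self.clip)][wrap(ix, self.width, self.clip)]

class BlurNode:
    """Interior node: 'bends' the incoming ray before passing it upstream."""
    def __init__(self, input_node, radius: float):
        self.input_node, self.radius = input_node, radius

    def sample(self, ray: Ray) -> float:
        bent = Ray(ray.x + random.uniform(-self.radius, self.radius),
                   ray.y + random.uniform(-self.radius, self.radius),
                   ray.sample_size)
        return self.input_node.sample(bent)

def composite(node, width, height, samples_per_pixel):
    """Shoot slightly randomized rays per output pixel and average the results.
    The jitter makes each sample hit a different sub-pixel location."""
    out = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            acc = 0.0
            for _ in range(samples_per_pixel):
                ray = Ray((x + random.random()) / width,   # sub-pixel jitter
                          (y + random.random()) / height,
                          sample_size=1.0 / width)
                acc += node.sample(ray)
            out[y][x] = acc / samples_per_pixel
    return out

# Usage: blur a tiny checkerboard; raising samples_per_pixel trades speed
# for quality, exactly the knob described under "Sample based" above.
img = ImageNode([[float((ix + iy) % 2) for ix in range(8)] for iy in range(8)],
                8, 8, clip="Repeat")
result = composite(BlurNode(img, radius=0.05), 8, 8, samples_per_pixel=16)
```

Note the design choice this sketch illustrates: there are no intermediate buffers between nodes. A node only answers "what is the value along this ray?", which is part of what would let the whole graph run fully on the GPU.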
