People on Twitter have been asking about a BEER node setup.
The bad news is, we don’t have a node design for GLSL yet.
There are a few commercial GLSL node designs, but none aimed at non-technical (non-programmer) artists.

OK, a few thoughts on UI/UX and architecture while I’m still collaring every game developer I know to find out how the OpenGL pipeline actually works. Your patience is appreciated.

BEER is just semi-intelligent sugar for GLSL. The BEER interface is ultimately a visual programming interface for GLSL. Any given node is effectively a stand-in for a function with inputs and outputs. Before we can turn the nodetree into GLSL there’ll be check time (making sure the node tree is sane and reporting to user if it isn’t), compile time (turning the node tree into GLSL) and run-time (running the GLSL). The node system needs to be smart enough not to try to compile something that will fail, but not so locked down that it prevents people from experimenting randomly.
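For instance, the check pass could be as simple as walking the links and flagging anything that can't compile. A minimal hypothetical sketch (node/socket layout invented for illustration):

```python
# Hypothetical sketch: validate a node tree before attempting GLSL generation.
# nodes: {name: {"in": {socket: type}, "out": {socket: type}}}
# links: [(src_node, out_socket, dst_node, in_socket)]
def check_tree(nodes, links):
    errors = []
    for src, out_sock, dst, in_sock in links:
        if out_sock not in nodes[src]["out"]:
            errors.append(f"{src} has no output '{out_sock}'")
        elif in_sock not in nodes[dst]["in"]:
            errors.append(f"{dst} has no input '{in_sock}'")
        elif nodes[src]["out"][out_sock] != nodes[dst]["in"][in_sock]:
            errors.append(f"type mismatch on {src}.{out_sock} -> {dst}.{in_sock}")
    return errors  # empty list means the tree is sane enough to compile
```

Reporting a list of errors (rather than refusing to edit) keeps the "experiment randomly" part intact: the user can leave the tree broken and just gets told why it won't compile yet.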

Instant feedback is a must. Any such system (nodal or layer-based or whatever) should offer instant-as-possible feedback from user action to end result… Existing examples of this in Blender are Cycles preview render with the material node editor open in another panel, BI’s preview pane, and the Viewer node/Backdrop in compositor. These are all existing UI patterns within Blender which allow for easy experimentation and troubleshooting.

A running GLSL shader isn’t completely static. In GLSL it’s possible to send new information to the shader while it’s running and after it has compiled - info like colours, specular hardness, mix values, etc. This aligns somewhat with being able to drag particular values to input when using a node group. Obviously whenever the code structure changes, the shader will have to recompile. But this should allow realtime feedback for certain things.
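A rough sketch of how a host application might split those two cases, where graph edits trigger a recompile and value edits become cheap uniform-style updates. All names here are hypothetical:

```python
# Hypothetical sketch: only structural changes to the node graph force a
# recompile; plain value edits are pushed as (glUniform-style) updates.
class ShaderSession:
    def __init__(self):
        self.compiled_hash = None
        self.uniforms = {}
        self.recompiles = 0

    def structure_hash(self, tree):
        # Fingerprint of nodes and links only, deliberately ignoring values.
        return hash((frozenset(tree["nodes"]), frozenset(tree["links"])))

    def update(self, tree, values):
        h = self.structure_hash(tree)
        if h != self.compiled_hash:   # graph changed: rebuild and recompile GLSL
            self.compiled_hash = h
            self.recompiles += 1
        self.uniforms.update(values)  # cheap path: no recompile needed
```

So dragging a colour or mix slider stays realtime, and only rewiring the tree pays the compile cost.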

A good interface ultimately describes what the user is thinking, not the underlying code. Artists who are used to Photoshop are accustomed to building stuff up in layers and applying effects over the top. Even in Blender’s compositor, something like Gaussian Blur is a processing operation. Doing a blur in GLSL may not be what you’d expect - for instance, performing a Gaussian blur involves processing the input geometry through the vertex shader and the fragment shader. Good UX should abstract that quirkiness away so that the user doesn’t need to deal with it unless they want to. Something like blur should be presented to the user as a processing step, regardless of whether it’s a combined effort between the vertex shader and fragment shader, because as far as the user is concerned blur is a processing step. Ditto stuff like texture mapping - it should “just work”.

Power to those who can use it, managed simplicity for those who don’t, tweakability for in-between. There’s a world of complexity in GLSL which an artist doesn’t necessarily want to deal with for every single shader they build. On the other hand, you don’t want to take power away from people who know how to wield it (or people who are interested in finding out). So ready-made ubershaders as well as a power-user GLSL node (analogous to the Script/OSL node in Cycles) should both be a given from the outset. For the sake of user-friendliness, all ready-made nodes should have implicit defaults for things like normals - unless there’s something explicitly connected to a node input, they use something sensible at compile time. If a user needs to tweak a ready-made, they should be able to drill down a level - whether they’re drilling down to a node group or directly into GLSL. If the user wants to go all the way and “bake” a node group to a GLSL script in order to optimise the code by hand, warn that there’s no going back then let 'em.
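A tiny sketch of the implicit-defaults idea, assuming node inputs resolve to GLSL expression strings at compile time (the default expressions here are invented for illustration):

```python
# Hypothetical sketch: an unconnected input falls back to a sensible
# built-in default expression at compile time.
DEFAULTS = {
    "normal": "normalize(var_normal)",       # invented default expressions
    "color": "vec4(0.8, 0.8, 0.8, 1.0)",
}

def resolve_input(name, links):
    """links maps input names to GLSL expressions from upstream nodes."""
    return links.get(name, DEFAULTS[name])
```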

Optimised GLSL will almost always outperform GLSL constructed from prefabs. Until it’s been through optimisation of some sort (hand-tooling, compiler optimisation, etc), the shader’s probably not going to run as quick.

GLSL has its limits. Given we don’t always want to do stuff like glow and blur the hard way through GLSL, it would be absolutely awesome if the BEER system could output to multilayer OpenEXRs with info like Z buffer, alpha channels, movement vectors and other useful stuff as well as the usual RGB and Freestyle layers. (I tweeted a question to psy-fi about the feasibility of this and he says it’s doable.)

Architecturally, I’m not as certain how this all works but I’m learning.

I’m still finding out about it from my game dev friends but it appears that most of what BEER wants is accomplished in the fragment shader, and some of it comes from the vertex shader in a way that mostly doesn’t need to be presented explicitly to the user, e.g. blurs and glows.

One thing I’m curious about is how the different shaders interact - at some point they all have to combine together to make an image, so I’m guessing there’s a main() function which calls the material shaders, asks each in turn “what happens at these pixel coordinates?” for each coordinate on the screen, then combines the results accordingly. Let’s call this the composite function.
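A toy, single-channel sketch of that guessed composite function; this is a mental model for the guess above, not how a GPU actually composites:

```python
# Hypothetical sketch: ask each material function what it produces at a
# pixel, then combine the answers (here with a plain 'over' blend).
def composite(width, height, materials):
    image = {}
    for y in range(height):
        for x in range(width):
            color, alpha = 0.0, 0.0
            for shade in materials:       # each returns (color, alpha)
                c, a = shade(x, y)
                color = c * a + color * (1 - a)
                alpha = a + alpha * (1 - a)
            image[(x, y)] = (color, alpha)
    return image
```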

When they’re called, the material shaders in turn ask the vertex shaders about vertex information, normals, UV coordinates, etc., and ask the scene about lighting and other variables of interest. So we want to be able to grab input from the vertex shader as well as from the scene. (We also might want to ask Blender about what other objects are doing, something we can’t currently do within materials until composite time. If we want to do a glowing object which gets occluded by a non-glowing object, all within GLSL… yeah.)

Some stuff can be pre-computed - info from the vertex shader that represents what the camera can see, for instance.

For flexibility, I’d want to be able to output not one but multiple different image outputs per material which contain RGB, alpha, z-buffer or whatever else we want to send to the composite function. When an image output is created in a material, maybe under the hood it goes into a registry of functions which can be used for final composite. The functions would need to present some sort of signature so the compositor knows what information it can extract. Once you hit a composite node tree, all the image outputs are there as nodes and you can combine them together how you like. This aligns with Blender’s material -> composite workflow pretty well.
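The registry idea might look like this hypothetical sketch, where each image output declares which channels it can supply so the compositor knows what it can extract:

```python
# Hypothetical sketch: materials register their image outputs with a
# signature (the set of channels they carry) for use at composite time.
class OutputRegistry:
    def __init__(self):
        self._outputs = {}

    def register(self, name, channels, fn):
        """fn would be the GLSL function producing this output."""
        self._outputs[name] = {"channels": channels, "fn": fn}

    def available(self, channel):
        """List outputs that can supply a given channel (e.g. 'z')."""
        return [n for n, o in self._outputs.items() if channel in o["channels"]]
```

The composite node tree would then expose everything in the registry as ready-made input nodes.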

When it comes time to actually compile the GLSL, the naive version of the algorithm doesn’t seem to be super-difficult. Every single node represents a GLSL function. If a node takes input from another node, it calls the function of that input node. If a node has an input value, we can treat that either as a constant or a variable. Start at composite, walk back dropping more and more functions into the GLSL code with every new node we need information from. Whatever isn’t connected back to the composite node along some path doesn’t get included. Then compile, run, cross fingers. Obviously there are more optimal ways to do this, but optimisation comes later. And effects like glow and blur need to get trickier - possibly by following the node chain back to where it talks to the vertex shader and quietly putting the appropriate calls in.
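The naive walk described above could look something like this sketch, assuming each node carries its upstream links and a ready-made GLSL function string (all names and structures hypothetical):

```python
# Hypothetical sketch of the naive compile walk: start at the composite
# node, follow input links backwards, and emit one GLSL function per
# reachable node. Nodes not connected back to composite are never emitted.
def emit_glsl(nodes, start):
    """nodes: {name: {"inputs": [upstream names], "code": glsl_func_src}}."""
    emitted, order = set(), []

    def walk(name):
        if name in emitted:
            return
        emitted.add(name)
        for dep in nodes[name]["inputs"]:  # emit dependencies first, so a
            walk(dep)                      # function is defined before use
        order.append(nodes[name]["code"])

    walk(start)
    return "\n".join(order)
```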

Something my gamedev friend pointed out is that the shading system needs to draw stuff in a specific order for stuff like reflections, object-as-light-source, etc. That’s probably beyond the scope of BEER for now, going off the primitives you’ve listed in that other thread.

The idea for BEER is to be able to add or discard unneeded parts of the rendering pipeline.

For example:
A shadeless material without a texture: you only calculate clipping in the vertex shader (no normals etc.), and in the fragment shader you only need colour data, with no lighting information.

If we drill down to this level of shader primitives, we can make very efficient materials (aka a superfast renderer) and also design the node breakdown.
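One way to picture that drill-down is a hypothetical sketch where each material is assembled only from the stage snippets it actually needs (the snippet contents are invented):

```python
# Hypothetical sketch: build only the pipeline stages a material needs.
# A shadeless, untextured material skips normals and lighting entirely.
VERTEX_SNIPPETS = {
    "clip": "gl_Position = mvp * position;",
    "normals": "var_normal = normal_matrix * normal;",
}
FRAGMENT_SNIPPETS = {
    "flat_color": "frag = base_color;",
    "lighting": "frag.rgb *= dot(var_normal, light_dir);",
}

def build_stages(features):
    vert = [VERTEX_SNIPPETS[f] for f in features if f in VERTEX_SNIPPETS]
    frag = [FRAGMENT_SNIPPETS[f] for f in features if f in FRAGMENT_SNIPPETS]
    return vert, frag
```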

GLSL wasn’t designed as an artist tool, that’s why nodes have to work as an abstraction on top of it.

The vertex shader part could be used like the texture coordinate/geometry nodes in Cycles (for input), and maybe for a displacement output. The normal and vector math nodes from Cycles could also be useful to tweak normals.

Common fragment shaders like SSAO, shadeless, glossy, etc. could be individual nodes. I think as long as they output color values they can all be mixed easily.
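Assuming every fragment node outputs a colour, mixing really is just a weighted blend, as in this hypothetical sketch:

```python
# Hypothetical sketch: blend any two colour outputs with a factor,
# the same shape as a Cycles-style Mix node.
def mix(a, b, fac):
    return tuple(x * (1 - fac) + y * fac for x, y in zip(a, b))
```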

Users who want to use more exotic shaders, or get better performance, can use a script node as @quollism suggests.

quollism:

Given we don’t always want to do stuff like glow and blur the hard way through GLSL

Agree. There’s no need to implement “post-process” stuff since they are already in the compositor.


Agree with that part, but it would be speedier if the viewport could have some screen-space post-processing like in game engines. That’s the original purpose of world effects.

As for focusing GLSL on the fragment shader only, I would be OK with that if Blender actually handled normals well. The current normal in BI shaders (and when any modifier is on) is a product of the already-modified normal from the vertex shader. No matter what normals you hook into the material node input, they always get translated badly, rendering vertex normal editing useless and full of artifacts.

I looked into the cause. Blender has been using the fixed pipeline for a long time. There can only be one normal: the one going into the fragment shader, pre-computed to lower GPU load, and that is the normal we have been using all this time. [sorry, off ranting like that ]

Normals should behave like shape keys, UV coordinates, etc. (vertex shader stuff). You should be able to map many versions of them.

Back to topic:

I guess the new node system should be designed from the user’s point of view, not from programming GLSL. I like the idea, which isn’t far off from the layer-stack paradigm.

I’m not sure that adding GLSL nodes is necessarily a good idea for BEER. BEER is all about simplicity and flexibility.

The artist shouldn’t be bothered with fragments and dot products of light and normal vectors. That’s our job. The stacking system should be able to perform in any imaginable artistic situation, thereby making nodes redundant.

The normals issue sounds pretty involved, have you asked Campbell about it? I don’t think it’s something that will be addressed soon (unless the BF starts making an NPR short film )

I’ve been reading a little more and it looks like you can only have one active fragment shader at a time. Randomly combining them would probably involve either building the string at runtime, or rendering in multiple “passes” (where the result of the previous shader is baked to a texture and fed into the next).
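A toy sketch of that multi-pass idea, where each “pass” gets to sample the whole previous buffer (standing in for the result baked to a texture); a 1-D buffer and a box blur are used purely for illustration:

```python
# Hypothetical sketch: chain fragment "passes", each reading the whole
# buffer produced by the previous one (as if baked to a texture).
def run_passes(pixels, passes):
    for shade in passes:  # shade: (previous buffer, index) -> new value
        pixels = [shade(pixels, i) for i in range(len(pixels))]
    return pixels

def blur(buf, i):
    """3-tap box blur sampling the previous pass's buffer."""
    lo, hi = max(i - 1, 0), min(i + 1, len(buf) - 1)
    return (buf[lo] + buf[i] + buf[hi]) / 3
```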

Some people have done this apparently, I’ll see if I can find some code.


Need to weigh more pros and cons. In my last communication with Mike Erwin (the BF hired him to optimise the new viewport code, mostly the geometry streaming part), he said nodes are hard to do (remember the crashing compositor nodes in pre-2.49 days?), and our primitives via stacking are easier (too easy, even).

Januz:

The normals issue sounds pretty involved, have you asked Campbell about it? I don’t think it’s something that will be addressed soon (unless the BF starts making an NPR short film )

Bastien Montagne is the guy responsible for it, and it’s a half-baked job at the moment (half-cooked meat is still OK, but a half-baked cake is KO). Our guys have been talking to him a lot, but he has no plan to proceed further, and there’s no signal that the BF will help.

Januz:

I’ve been reading a little more and it looks like you can only have one active fragment shader at a time. Randomly combining them would probably involve either building the string at runtime, or rendering in multiple “passes” (where the result of the previous shader is baked to a texture and fed into the next).

Yup, rendering in passes is the way, but it may be bad for interactivity with heavy effects. We need to learn from the game guys now.

I see this in 2 ways:

1. Interactive preview (at least 30 fps): needs speed, effects can be turned off. Mostly we need to check the character and the few objects the character interacts with.

2. Rendering (more meshes with subdiv, full-quality textures, post effects): this will be heavy. 30s per frame is possible, but still faster than BI in most cases.

You can render multi-pass, as the GPU will stream each pass to a framebuffer that is later combined in a final combining pass.
The above is for the viewport.