I’m attempting to implement an SDF (signed distance field) shader in Maya, using the GLSL Shader plugin as a front end. However, rendering SDFs depends on being able to cast rays consistently from a perspective viewpoint.
As far as I am aware, there is no way to determine the normalized screen-space position of a fragment without knowing the viewport’s resolution. Resolution is also essential for effects like edge detection, blurring, etc.
A trivial example can be found in the first line of this Shadertoy: https://www.shadertoy.com/view/4dcGW2
" vec2 uv = fragCoord.xy / iResolution.xy; "
- where fragCoord is the pixel coordinate of the fragment (which I can get), and iResolution is the pixel resolution of the viewport (which I cannot).
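For context, here is roughly the fragment-shader setup I’m trying to reproduce in Maya. The `viewportSize` uniform is the missing piece; the `ViewportPixelSize` semantic in the comment is only my guess at what such a binding might be called, not something I’ve confirmed the GLSL Shader plugin supports:

```glsl
#version 330

// Hypothetical uniform: I need Maya to fill this with the viewport's
// pixel resolution (the equivalent of Shadertoy's iResolution), e.g. via
// something like a "ViewportPixelSize" semantic, if one exists.
uniform vec2 viewportSize;

out vec4 outColor;

void main()
{
    // gl_FragCoord.xy is the fragment's pixel index, i.e. Shadertoy's fragCoord.
    vec2 uv = gl_FragCoord.xy / viewportSize;

    // Remap to [-1, 1] and correct for aspect ratio so perspective rays
    // aren't stretched; this is the step that breaks without the resolution.
    vec2 p = uv * 2.0 - 1.0;
    p.x *= viewportSize.x / viewportSize.y;

    // Placeholder output; the real shader would raymarch the SDF from here.
    outColor = vec4(uv, 0.0, 1.0);
}
```

Everything else (the per-pixel coordinate, the raymarch itself) I can do; it’s only `viewportSize` I can’t find a source for.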
I can’t see any suitable semantic in the docs https://knowledge.autodesk.com/support/maya/learn-explore/caas/CloudHelp/cloudhelp/2018/ENU/Maya-LightingShading/files/GUID-13229A83-B6A0-4280-840C-F9C6F40BB13D-htm.html or in the provided ubershader implementation. I’ve also tried resizing an actual polygon in Maya, assigning it a colour ramp texture, and passing that into the shader, but that hasn’t worked either.
Apologies if this is a simple question; I’m still a total shader baby. If there is a solution to this, or an alternative approach entirely, please correct me.
Thanks very much.