Path Tracing with WebGPU - Attempt #2 [No Audio]
Ghadeer Abou-Saleh | 2022-06-24

For context, please read the description of my first modest attempt at implementing close-to-real-time path tracing (of very simple geometry) with WebGPU. It can be found here: https://youtu.be/ryyuQ10hblA
This second attempt adds two enhancements that slightly improved the overall animation quality:
- Importance Sampling: this is the process of skewing the scattered rays towards light surfaces, which makes the Monte Carlo integration converge faster. However, since there are many light/emissive surfaces in the scene, and to maintain acceptable performance, each surface or tile in the scene scatters rays towards a maximum of 4 light sources. At first, these surfaces scatter rays in all directions (i.e. basic diffuse scattering). But when these rays hit light/emissive surfaces, the originating surface or tile "remembers" those light sources, so that subsequent ray samples are skewed towards them (see the first sketch after this list). The downside of this approach is that adjacent tiles may randomly favor different light sources, causing artifacts in the form of contrast between these tiles (as can be clearly seen towards the end of the video).
- Denoising: importance sampling helps reduce noise, but to reduce it even further I applied an "improvised" filter as a post-processing step for each frame (see the second sketch after this list). It is not exactly a low-pass filter (which would cause lots of blurring). Instead, each pixel is mixed with a weighted average of the surrounding pixels, as a function of the difference in brightness between the pixel in question and the surrounding ones. If the difference is small, the pixel remains largely intact. If the difference is significant, the surrounding pixels are mixed heavily into the current pixel, almost replacing it. The weighted average of the surrounding pixels is also a function of the geometry: pixels that are not projected from the same plane as the current pixel are not included in the average. To do that, the filter needs to know the depth and the normal corresponding to each pixel. This keeps the edges sharp.
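For illustration, here is a minimal TypeScript sketch of the per-tile light memory described in the importance-sampling point above. Everything here (the Tile shape, the 50/50 mix of diffuse and light-directed samples, the stand-in helpers) is an assumption made for the sketch, not the actual WGSL implementation:

    // Hypothetical sketch of the per-tile light "memory": each tile remembers
    // up to 4 emissive surfaces that its rays happened to hit, and later
    // samples are skewed towards one of them.

    type Vec3 = [number, number, number];

    const MAX_LIGHTS_PER_TILE = 4;

    interface Tile {
        knownLights: number[];  // ids of emissive surfaces discovered so far
    }

    // Stand-in for cosine-weighted diffuse scattering around the normal.
    function diffuseDirection(normal: Vec3): Vec3 {
        return normal;
    }

    // Stand-in for a direction towards the given light's surface.
    function directionTowardsLight(lightId: number): Vec3 {
        return [0, 1, 0];
    }

    // Called when a scattered ray from this tile hits an emissive surface.
    function rememberLight(tile: Tile, lightId: number): void {
        if (tile.knownLights.length < MAX_LIGHTS_PER_TILE &&
            !tile.knownLights.includes(lightId)) {
            tile.knownLights.push(lightId);
        }
    }

    // Mixes plain diffuse scattering with samples skewed towards known lights.
    function scatterDirection(tile: Tile, normal: Vec3): Vec3 {
        if (tile.knownLights.length === 0 || Math.random() < 0.5) {
            return diffuseDirection(normal);
        }
        const i = Math.floor(Math.random() * tile.knownLights.length);
        return directionTowardsLight(tile.knownLights[i]);
    }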
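And a sketch of the kind of edge-aware filter described in the denoising point, again in illustrative TypeScript rather than the actual shader code; the thresholds and the luminance-based weighting are assumptions:

    // Sketch of the edge-aware filter: a pixel is blended with a weighted
    // average of its neighbours, excluding neighbours that do not lie on the
    // same plane (judged by depth and normal).

    type Vec3 = [number, number, number];
    type Pixel = { color: Vec3; depth: number; normal: Vec3 };

    function luminance([r, g, b]: Vec3): number {
        return 0.2126 * r + 0.7152 * g + 0.0722 * b;
    }

    function dot(a: Vec3, b: Vec3): number {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    // Illustrative same-plane test; the real filter derives this from the
    // per-pixel depths and normals rendered alongside the color.
    function samePlane(p: Pixel, q: Pixel): boolean {
        return Math.abs(p.depth - q.depth) < 0.01 && dot(p.normal, q.normal) > 0.9;
    }

    function denoise(center: Pixel, neighbours: Pixel[]): Vec3 {
        const kept = neighbours.filter(n => samePlane(center, n));
        if (kept.length === 0) {
            return center.color;
        }
        const sum = kept.reduce<Vec3>(
            (s, n) => [s[0] + n.color[0], s[1] + n.color[1], s[2] + n.color[2]],
            [0, 0, 0]
        );
        const avg: Vec3 = [sum[0] / kept.length, sum[1] / kept.length, sum[2] / kept.length];
        // Small brightness differences leave the pixel largely intact; large
        // differences mix the neighbourhood average in heavily.
        const w = Math.min(1, Math.abs(luminance(center.color) - luminance(avg)));
        return [
            center.color[0] * (1 - w) + avg[0] * w,
            center.color[1] * (1 - w) + avg[1] * w,
            center.color[2] * (1 - w) + avg[2] * w,
        ];
    }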
Despite all the above, the image quality is still not that good, and the noise is still significant. But I am happy with the overall result, given the modest GPU this runs on (an Nvidia Quadro T1000) and how computationally expensive path tracing is.
Cheers!

Scalar Field Isosurface Ray Casting with WebGPU [No Audio][HD]
Ghadeer Abou-Saleh | 2023-12-31

This toy explores rendering scalar field isosurfaces with WebGPU, using a method that can be thought of as a hybrid of Ray Marching and Marching Cubes.
Unlike Ray Marching, which usually uses Signed Distance Fields (SDFs), this method uses the scalar field itself and its gradients to determine the sizes of the steps by which the tracing rays should march until they hit the selected isosurface. All the ray marching happens in the fragment shader.
And unlike Marching Cubes, this method does not polygonize/tessellate the isosurface, as it uses Ray Casting rather than conventional rasterization-based rendering. However, it does sample the scalar field and its gradient in a separate compute shader at the points of a 3D grid, and stores the samples in a 3D texture that can be sampled using linear interpolation.
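To make the stepping scheme concrete, here is a rough TypeScript sketch of the gradient-driven marching, using a stand-in spherical field; the clamping constants and iteration limits are illustrative assumptions, and the real version runs in the fragment shader against the precomputed 3D texture:

    // Sketch of the gradient-driven marching, with a stand-in field
    // f(p) = |p| whose gradient is p/|p|; its isosurfaces are spheres.

    type Vec3 = [number, number, number];

    function fieldAt(p: Vec3): { value: number; gradient: Vec3 } {
        const len = Math.hypot(p[0], p[1], p[2]) || 1e-6;
        return { value: len, gradient: [p[0] / len, p[1] / len, p[2] / len] };
    }

    function marchRay(origin: Vec3, dir: Vec3, isoValue: number): Vec3 | null {
        let t = 0;
        for (let i = 0; i < 256 && t < 100; i++) {
            const p: Vec3 = [
                origin[0] + t * dir[0],
                origin[1] + t * dir[1],
                origin[2] + t * dir[2],
            ];
            const { value, gradient } = fieldAt(p);
            const remaining = isoValue - value;
            if (Math.abs(remaining) < 1e-4) {
                return p;  // close enough: the ray hit the isosurface
            }
            // Newton-like step towards the isosurface along the ray, clamped
            // to stay stable where the directional derivative is small.
            const slope =
                gradient[0] * dir[0] + gradient[1] * dir[1] + gradient[2] * dir[2];
            t += Math.min(Math.max(remaining / (slope || 1e-6), 0.001), 0.25);
        }
        return null;  // no hit within the marching budget
    }

    // Example: a ray from z = -3 towards the origin hits the unit sphere.
    console.log(marchRay([0, 0, -3], [0, 0, 1], 1));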
The scalar field of choice is actually a noise field, generated by starting from a simple base scalar field and then recursively superposing a few translated, scaled, and rotated copies of that field onto itself.
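A sketch of that construction, with an arbitrary base field and arbitrary transforms (both are assumptions; the actual base field and transform parameters differ):

    // Sketch of the noise construction: a simple base field plus a few
    // translated, scaled, and rotated copies of itself, applied recursively.

    type Vec3 = [number, number, number];

    function baseField(p: Vec3): number {
        return Math.cos(p[0]) * Math.cos(p[1]) * Math.cos(p[2]);
    }

    function rotateZ([x, y, z]: Vec3, angle: number): Vec3 {
        const c = Math.cos(angle);
        const s = Math.sin(angle);
        return [x * c - y * s, x * s + y * c, z];
    }

    function noiseField(p: Vec3, depth: number): number {
        let value = baseField(p);
        if (depth === 0) {
            return value;
        }
        for (let i = 1; i <= 3; i++) {
            // Each copy is translated, scaled, and rotated before superposing.
            const q = rotateZ([p[0] * 2 + i, p[1] * 2, p[2] * 2 + i], i);
            value += noiseField(q, depth - 1) / (2 * i);
        }
        return value;
    }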
Using keyboard shortcuts, you can control some viewing aspects, as well as some of the parameters used to generate the scalar field. This is what you see happening in the video as the mouse is dragged on the canvas: it is basically changing the scaling and rotation parameters used in generating the noise field, as well as rotating the model and selecting different isosurfaces.
Cheers!

Gravity Simulation with WebGPU + Bloom [No Audio][HD]
Ghadeer Abou-Saleh | 2023-06-16

This is the same toy I developed a while back, but I added a bloom effect to it. It still needs lots of tweaking, for both the artistic and performance aspects.
P.S. I did upload another video about this toy recently, but it ended up in standard definition only, instead of high definition.

Gravity Simulation with WebGPU + Bloom [No Audio][SD]
Ghadeer Abou-Saleh | 2023-06-12

This is the same toy I developed a while back, but I added a bloom effect to it. It still needs lots of tweaking, for both the artistic and performance aspects.
Cheers!

glTF Renderer (Wire Frames) [No Audio]
Ghadeer Abou-Saleh | 2022-11-26

This is a clone of my other amateurish glTF rendering toy (see https://ghadeeras.github.io/pages/gltf.html). The only difference is that it renders wire frames of the glTF models.
The idea is to keep hidden face removal working, and to remove lines between primitive shapes that are on the same plane.
This is done by performing two passes:
- Render normals and depths to a texture.
- Use that texture to render the wire frames, giving a dark shade to fragments adjacent to fragments with significantly different normals/depths.
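A sketch of the second pass's edge test, in illustrative TypeScript; the thresholds and the texel shape are assumptions:

    // Darken a fragment when any neighbour has a significantly different
    // normal or depth, marking silhouette and crease edges while leaving
    // coplanar interiors clean.

    type Vec3 = [number, number, number];
    type GBufferTexel = { normal: Vec3; depth: number };

    function dot(a: Vec3, b: Vec3): number {
        return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
    }

    function edgeShade(center: GBufferTexel, neighbours: GBufferTexel[]): number {
        for (const n of neighbours) {
            const normalBreak = dot(center.normal, n.normal) < 0.95;
            const depthBreak = Math.abs(center.depth - n.depth) > 0.005;
            if (normalBreak || depthBreak) {
                return 0.1;  // dark shade: this fragment lies on a wire-frame line
            }
        }
        return 1.0;  // coplanar interior: leave it unshaded
    }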
The main change in this iteration is the way "stacking" is done. Previously, to generate a new frame, the few previous frames and the new raw frame were stacked together by averaging the colors of corresponding pixels from each frame to produce the pixel colors of the final frame. This caused a bit of blurring when the "Eye" was moving through the scene.
Now, the Eye's position, orientation, and perspective in each frame are taken into account when stacking (see the sketch below). So, a pixel with coordinates x and y is not necessarily averaged with pixels at the same coordinates in previous frames. This almost eliminates blurring and renders edges sharper. It reduces overall noise, except for mirror reflections, where I think noise unfortunately increased.
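Roughly, the reprojection amounts to the following sketch (illustrative TypeScript; the matrix layout and helper names are assumptions):

    // Reconstruct the world point a pixel shows (from its depth), then
    // project it through the previous frame's view-projection matrix to find
    // where that point used to be on screen.

    type Vec3 = [number, number, number];
    type Mat4 = number[];  // 16 numbers, column-major (an assumption)

    function project(m: Mat4, [x, y, z]: Vec3): Vec3 {
        const cx = m[0] * x + m[4] * y + m[8] * z + m[12];
        const cy = m[1] * x + m[5] * y + m[9] * z + m[13];
        const cz = m[2] * x + m[6] * y + m[10] * z + m[14];
        const cw = m[3] * x + m[7] * y + m[11] * z + m[15];
        return [cx / cw, cy / cw, cz / cw];  // normalized device coordinates
    }

    function previousPixelOf(
        worldPoint: Vec3,          // reconstructed from the current pixel's depth
        prevViewProjection: Mat4,  // the Eye's previous position/orientation
        width: number,
        height: number
    ): [number, number] {
        const ndc = project(prevViewProjection, worldPoint);
        // Map NDC ([-1, 1] on both axes, y up) to pixel coordinates (y down).
        return [(ndc[0] * 0.5 + 0.5) * width, (0.5 - ndc[1] * 0.5) * height];
    }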
Cheers!

Path Tracing with WebGPU - The Floating Eyeball! :-) [No Audio]
Ghadeer Abou-Saleh | 2022-06-26

For context, please check out my earlier two videos:
There is nothing remarkably new in this round of toying with path tracing, except that the "viewer", the "navigator", or the "first person" is no longer invisible/transparent. Instead, the viewer is represented by a small floating eyeball that can cast shadows and can see itself in mirrors.
I should probably experiment with small spot lights instead of the big flat ones, to see whether this toy would render sharper shadows.
Cheers!

Path Tracing with WebGPU - Attempt #1 [No Audio]
Ghadeer Abou-Saleh | 2022-06-05

This is my very first attempt at performing close-to-real-time path tracing with WebGPU.
To simplify things, I stuck to axis-aligned boxes placed in an axis-aligned 3D grid, of which each cell can hold (or intersect with) up to 8 boxes of arbitrary sizes. The ray/box "hit" function is easy to implement in this case (see the sketch below), and the axis-aligned grid is easy to traverse, though it is likely not the most optimal solution.
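The hit function in question is essentially the classic "slab" test for axis-aligned boxes. A TypeScript sketch (the real one is in the shader; this sketch assumes non-zero ray direction components, which the real code would guard against):

    // The classic "slab" test for a ray against an axis-aligned box.

    type Vec3 = [number, number, number];

    function hitBox(origin: Vec3, dir: Vec3, boxMin: Vec3, boxMax: Vec3): number | null {
        let tNear = -Infinity;
        let tFar = Infinity;
        for (let axis = 0; axis < 3; axis++) {
            const t1 = (boxMin[axis] - origin[axis]) / dir[axis];
            const t2 = (boxMax[axis] - origin[axis]) / dir[axis];
            tNear = Math.max(tNear, Math.min(t1, t2));
            tFar = Math.min(tFar, Math.max(t1, t2));
        }
        // The ray hits the box if the slab intervals overlap ahead of the origin.
        return tNear <= tFar && tFar >= 0 ? Math.max(tNear, 0) : null;
    }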
Three kinds of materials are currently supported: diffuse/matte, reflective, and emissive (for lights). Each face of each box can be assigned its own material.
There is almost no rasterization in this demo, except for one inevitable big square covering the canvas. Ray tracing here starts from each pixel. Monte Carlo integration is performed to estimate the color arriving at each pixel by averaging multiple samples per pixel. This averaging is done in two ways: once in the fragment shader of the tracer, and once by stacking up to 256 frames.
The number of samples/frames averaged decreases significantly as you move around in the procedurally constructed maze, and increases when you stop moving. This is why the rendering becomes too noisy while navigating the maze.
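The stacking average itself can be kept incrementally, along these lines (an illustrative TypeScript sketch; the cap and naming are assumptions):

    // The running average of stacked frames can be kept incrementally: each
    // new frame is mixed in with weight 1/n. Capping n at 256 turns the exact
    // average into a moving one once the cap is reached.

    const MAX_STACKED_FRAMES = 256;

    function stack(accumulated: number, fresh: number, frameCount: number): number {
        const n = Math.min(frameCount, MAX_STACKED_FRAMES);
        return accumulated + (fresh - accumulated) / n;  // per color channel
    }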
Clearly, this attempt did not meet the sought and hoped-for objective (i.e. real-time rendering), but it is a step in the right direction. I will need to try a few other tricks, like: a hybrid of rasterization and path tracing, breaking down each render pass into smaller, more efficient render passes, denoising techniques, and smarter averaging of frames that accounts for movement.
Cheers!

Sculpting Using Marching Cubes [No Audio]
Ghadeer Abou-Saleh | 2021-12-19

This demo is meant to experiment with the idea of using the marching cubes algorithm / scalar field tessellation as a means of creating a simple sculpting tool.
The idea is to start from a simple scalar field which, when tessellated for some contour value, would give a base geometric shape, like a sphere. This can be thought of as the base "stone" from which the sculpture would be carved or modelled.
The act of carving/modelling is done by adding/subtracting another scalar field, representing the shape of the sculpting tool, to/from the current field around the point where the action is performed (see the sketch below). The magnitude of carving/modelling is determined by how much the user drags the pointer vertically (height/depth) and horizontally (breadth). The change is aligned with the surface normal at the action point.
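In sketch form (illustrative TypeScript; the bell-shaped tool field and its parameters are assumptions, not the actual tool field):

    // The tool's field, centred at the action point, is added to (modelling)
    // or subtracted from (carving) the sculpture's field.

    type Vec3 = [number, number, number];
    type Field = (p: Vec3) => number;

    // Hypothetical bell-shaped tool field centred at the action point.
    function toolField(p: Vec3, center: Vec3, radius: number): number {
        const d2 =
            (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2 + (p[2] - center[2]) ** 2;
        return Math.exp(-d2 / (radius * radius));
    }

    // Positive magnitude models material onto the sculpture; negative carves.
    function applyTool(field: Field, center: Vec3, radius: number, magnitude: number): Field {
        return p => field(p) + magnitude * toolField(p, center, radius);
    }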
A few interesting challenges were encountered in this experiment. For example, the scalar field of the sculpting tool should be sampled at different resolutions, while applying appropriate filtering at each resolution to avoid aliasing. Lower resolutions of the carving tool are used when the magnitude of the carving is small. Another challenge is how to deform the scalar field of the carving tool depending on the desired height/depth and breadth, which could be controlled independently, while keeping the deformation perpendicular to the sculpture surface.
The end result is fun to play with, but I think it suffers from a few drawbacks that limit its practicality. For instance, the use of the marching cubes algorithm means one has only a limited space to work in. Once one grows the sculpture beyond that limit, its surface appears to rupture. In addition, it is at times hard to predict how the surface will change, because there are no visual cues about the scalar field values outside the surface of the sculpture. So, modelling the sculpture in some places could cause surfaces to appear unexpectedly in surrounding spaces.
Cheers!

Gravity Simulation with WebGPU [No Audio]
Ghadeer Abou-Saleh | 2021-10-14

This is an example of a particle system implemented using WebGPU. I used this example merely to experiment/toy with the compute shader of WebGPU, not for any serious scientific purposes.
This demo simulates gravity between 16384 "heavenly bodies" of random masses and sizes, using an arithmetically safe variation of Newton's law of gravitation; safe in the sense that no division by zero can occur. This is done by adding a safety margin to the distance between any pair of bodies when applying the law (see the sketch below). This provides an adequate but rough approximation of how the gravitational field changes near or below/inside the surface of these bodies. No collision handling is performed, i.e. these bodies can move right through each other.
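This is essentially the standard "softening" trick. A sketch of the per-pair acceleration (illustrative TypeScript; the constants are assumptions, and the real version runs in the compute shader):

    // A softening margin added to the squared distance keeps the force finite
    // even when two bodies coincide.

    type Vec3 = [number, number, number];

    const G = 6.674e-11;
    const SOFTENING = 0.1;  // the safety margin added to the distance

    function accelerationOn(body: { pos: Vec3 }, other: { pos: Vec3; mass: number }): Vec3 {
        const dx = other.pos[0] - body.pos[0];
        const dy = other.pos[1] - body.pos[1];
        const dz = other.pos[2] - body.pos[2];
        const d2 = dx * dx + dy * dy + dz * dz + SOFTENING * SOFTENING;
        const invD = 1 / Math.sqrt(d2);
        const a = G * other.mass * invD * invD;  // |a| = G * m / (r^2 + eps^2)
        // The direction is normalized with the same softened distance.
        return [a * dx * invD, a * dy * invD, a * dz * invD];
    }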
At the moment of uploading this video, the demo can only run on Chrome version 94 or higher, with WebGPU enabled as an experimental feature. I have not tested it on various platforms. I suppose it requires a decent graphics card. It works fine on my system, which is a laptop with the following specifications: