OOP-Hw: LuisaCompute Code Analysis
This is the report for "Code Analysis," the big assignment of this semester's Object-Oriented Programming course. For the first time, I stood face to face with a marvel of engineering.
The mountain that once towered in my heart has finally had a corner shaken loose; those distant lights now feel within arm's reach.
When did I last write an assignment for this course, again?
Every time I think I will never write rendering code again, yet rendering always follows me like a shadow.
Team Members: Xuanye Chen, Zhenzhe Li, Ruijie Jian, Huasen Xi
In this project, we aim to implement an extensible and efficient GPU fluid simulation system. We will also implement a high-performance wavefront path tracing renderer to render the fluid data created by the simulation system.
Fluid simulation is a fundamental and highly mature field in computer graphics. Physics-based fluid simulation, however, requires expensive computation on particles and grids to produce realistic results. Since the work is highly parallelizable, it is a perfect candidate for GPU acceleration; achieving high performance on the GPU nevertheless requires careful design and optimization of the simulation algorithm, and once that is done, we can create fluid animations much faster than CPU-based simulation.
Path tracing is likewise a fundamental rendering technique in computer graphics. Like fluid simulation, it is parallelizable and can be accelerated on the GPU, but to exploit the GPU architecture we need a technique called "wavefront path tracing" to reduce thread divergence. Once that is in place, we can render the fluid data created by the simulation system much faster than a CPU-based renderer. To render the volume data produced by smoke simulation, we will likely also need volumetric path tracing, which is considerably more complex than surface path tracing.
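To sketch the idea (a minimal, CPU-side illustration under our own simplifications; the two-stage split, queue layout, and names here are assumptions for exposition, not our actual renderer): a megakernel traces a whole path per thread, so threads diverge at every bounce, whereas a wavefront tracer runs one stage at a time over compact ray queues, so a batch of threads executes the same code.

```cpp
#include <vector>

// Minimal sketch of the wavefront idea (hypothetical names, CPU-side for
// clarity). A megakernel would run the whole bounce loop per ray in one
// thread; a wavefront tracer instead runs each stage over a compact queue,
// so all lanes in a batch execute the same stage at the same time.
struct Ray { float o[3], d[3]; int pixel; };

struct Queues {
    std::vector<Ray> extend;  // rays waiting for intersection
    std::vector<Ray> shade;   // rays with hits, waiting for shading
};

// Stage 1: intersect every queued ray (maps to one GPU kernel launch).
void extend_stage(Queues& q) {
    for (const Ray& r : q.extend) {
        // ... trace r against the scene; here every ray "hits" ...
        q.shade.push_back(r);
    }
    q.extend.clear();
}

// Stage 2: shade every hit, possibly emitting continuation rays.
void shade_stage(Queues& q, int depth, int max_depth) {
    for (const Ray& r : q.shade) {
        // ... evaluate BSDF, accumulate radiance ...
        if (depth + 1 < max_depth)
            q.extend.push_back(r);  // continuation ray for the next bounce
    }
    q.shade.clear();
}

void render(Queues& q, int max_depth) {
    for (int depth = 0; depth < max_depth && !q.extend.empty(); ++depth) {
        extend_stage(q);  // one kernel launch per stage on the GPU
        shade_stage(q, depth, max_depth);
    }
}

int main() {
    Queues q;
    q.extend.push_back(Ray{{0, 0, 0}, {0, 0, 1}, 0});  // one camera ray
    render(q, /*max_depth=*/4);
}
```

On a real GPU, each stage would be a kernel launch over a compacted queue, which is exactly the restructuring the "Megakernels Considered Harmful" paper advocates.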
There are several challenges in this project. To preserve the volume of incompressible fluid and reduce numerical dissipation, we need a good advection solver. To solve the pressure-projection Poisson equation efficiently on the GPU, we need a parallelizable preconditioner for the conjugate gradient method. Another difficulty is implementing a wavefront path tracing renderer that can handle both the fluid and smoke data. The most challenging part, however, lies in the performance optimization of both systems: porting a fluid simulation algorithm from CPU to GPU and making it run is not hard, but making it run fast is a different story. The same goes for the path tracing renderer.
We will use the PIC/FLIP method for advection, and a multigrid preconditioner for the conjugate gradient method. For volumetric wavefront path tracing, we will use the null-scattering formulation to render heterogeneous media.
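As a concrete illustration of the PIC/FLIP blend (a minimal 1-D sketch under our own simplifications; the blend weight and grid layout are illustrative, not our actual solver): PIC sets a particle's velocity to the newly solved grid velocity, which is stable but dissipative, while FLIP adds only the grid velocity change, which preserves detail at the cost of noise; a weighted blend trades one against the other.

```cpp
#include <cstddef>
#include <vector>

// Minimal 1-D sketch of the PIC/FLIP velocity update (illustrative only;
// a real solver works on a 3-D staggered grid with proper interpolation).
// v_new = alpha * v_pic + (1 - alpha) * (v_p + (v_grid_new - v_grid_old))
// alpha = 1 is pure PIC (stable, dissipative); alpha = 0 is pure FLIP.
struct Particle { float x, v; };

float lerp_grid(const std::vector<float>& grid, float x, float dx) {
    std::size_t i = static_cast<std::size_t>(x / dx);
    float t = x / dx - static_cast<float>(i);
    return (1.0f - t) * grid[i] + t * grid[i + 1];
}

void pic_flip_update(std::vector<Particle>& particles,
                     const std::vector<float>& grid_old,
                     const std::vector<float>& grid_new,
                     float dx, float alpha) {
    for (Particle& p : particles) {
        float v_pic  = lerp_grid(grid_new, p.x, dx);
        float v_flip = p.v + (lerp_grid(grid_new, p.x, dx) -
                              lerp_grid(grid_old, p.x, dx));
        p.v = alpha * v_pic + (1.0f - alpha) * v_flip;
    }
}

int main() {
    std::vector<float> g_old = {0.0f, 0.0f, 0.0f};
    std::vector<float> g_new = {0.0f, 1.0f, 0.0f};  // pressure solve changed the grid
    std::vector<Particle> ps = {{0.75f, 2.0f}};     // one particle between nodes 0 and 1
    pic_flip_update(ps, g_old, g_new, /*dx=*/1.0f, /*alpha=*/0.05f);
}
```

A small alpha (here 0.05) is a common choice in practice: mostly FLIP for liveliness, with a little PIC mixed in to damp noise.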
Our final project consists of two sub-projects: a fluid simulation system and a wavefront path tracing renderer. We will use them to produce a video showing a fluid surface and a cloud rendered by the path tracer. All computation is done on the GPU, implemented in modern C++ and CUDA.
The final deliverables do not need to be fully real-time, but we will push performance as far as we can. One member of our team has already implemented his own fluid simulation system and volumetric path tracer on the CPU, and we will use them as references to evaluate the performance of our GPU-based systems.
As stated above, we plan to deliver two systems with the basic functionality of fluid simulation and path tracing. This much is not that hard, but we might not manage to implement the multigrid preconditioner, and porting volumetric path tracing into the wavefront framework may prove hard to parallelize efficiently. Still, we hope the fluid simulation can run at 64³ resolution in real time when rendering is excluded, and that the path tracing renderer can render the scene much faster than a CPU-based renderer while guaranteeing the physical correctness of the result. If things go smoothly, we may finish both systems with enough time left for performance engineering and analysis.
Week 1: Xuanye Chen will be responsible for the wavefront path tracing renderer on his own, and will also help Zhenzhe Li port the fluid simulation to the GPU. Huasen Xi will be responsible for the prototype of the multigrid preconditioner, and Ruijie Jian will test the prototype's correctness.
Week 2: Xuanye Chen will start on volumetric path tracing on the GPU. Zhenzhe Li and Xuanye Chen should finish the GPU fluid simulation framework. Huasen Xi and Ruijie Jian should finish the multigrid preconditioner prototype.
Week 3: Xuanye Chen should finish volumetric path tracing on the GPU. Zhenzhe Li and Xuanye Chen should finish integrating the multigrid preconditioner into the fluid simulation system. Ruijie Jian should test the correctness of the whole system.
Week 4: Performance optimization and video production. Xuanye Chen and Zhenzhe Li will analyze the performance of the fluid simulation system and the renderer. Huasen Xi and Ruijie Jian will run the CPU-based fluid simulation and path tracing on a single scene for comparison. All four of us will work together to produce the final fluid animation.
Rendering:
Megakernels Considered Harmful: Wavefront Path Tracing on GPUs
A Null-Scattering Path Integral Formulation for Light Transport
Physically Based Rendering: From Theory to Implementation, Third Edition
Physically Based Rendering: From Theory to Implementation, Fourth Edition
Fluid Simulation:
Fluid Simulation for Computer Graphics, by Robert Bridson
A Parallel Multigrid Poisson Solver for Fluids Simulation on Large Grids
A Cut-Cell Geometric Multigrid Poisson Solver for Fluid Simulation
Course: UofT-CSC417-Physically Based Animation, by David Levin
Course: CMU-15-769: Physically-based Animation of Solids and Fluids, by Minchen Li
Others:
A Multigrid Tutorial, Second Edition, by William L. Briggs, Van Emden Henson, and Steve F. McCormick
We will mainly use Linux as our development environment. For the CPU version of the fluid simulation and path tracer, we used TBB for parallelization and glm for basic vector and matrix operations. The GPU version will use CUDA, which requires an NVIDIA GPU with CUDA support. Beyond the libraries above, we will not use any other third-party libraries in the core computation part of the project.
This experiment failed, but I am putting it on record anyway; if I get the chance later, I still want to solve it.
Back then, I still had not reproduced it by the time the deadline passed, and only when I listened to the presentations did I realize I was the only fool in the class trying to reproduce this one…
First, run the reference implementation and capture packets, listening on the NAT's two ports to analyze how it works.
First, SNAT. The process as analyzed: h1 issues a wget request from port 12345 (italk). After NAT translation, the outgoing packet's source IP address is rewritten to the public address and its source port to an allocated port, 40580.
The public host h3 then takes this as h1's IP address and port and replies to it as-is, so the returned packet's destination is that address, with port 40580. When the NAT receives the packet, it translates the IP address back to h1's and the port back to 12345, then delivers it to h1.
Next, DNAT. The process as analyzed: h3 issues a wget request using public IPs and ports; the destination is the public IP with public port 8001. The NAT translates that IP and port into the corresponding private IP (h2) and port (8000), and forwards the packet to that port on h2.
Putting it all together, one principle always holds: packets leaving the NAT and packets arriving at the NAT both carry public IPs and public ports, and the public ports are allocated and managed by the NAT itself. In other words, the NAT hides every detail of the private network from the public one.
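To make this principle concrete, here is a minimal C++ sketch of an SNAT translation table (entirely illustrative: the class, port range, and addresses are my assumptions, not the lab's reference implementation). It maps each private endpoint to a NAT-allocated public port on the way out, and back again on the way in:

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <tuple>

// Illustrative SNAT table sketch; types, port range, and addresses are
// assumptions for exposition, not the reference implementation.
struct Endpoint {
    uint32_t ip;    // IPv4 address in host byte order
    uint16_t port;
    bool operator<(const Endpoint& o) const {
        return std::tie(ip, port) < std::tie(o.ip, o.port);
    }
};

class Snat {
public:
    explicit Snat(uint32_t public_ip) : public_ip_(public_ip) {}

    // Outbound: rewrite (private ip, port) -> (public ip, allocated port).
    Endpoint translate_out(Endpoint priv) {
        auto it = out_.find(priv);
        if (it == out_.end()) {
            uint16_t pub_port = next_port_++;  // NAT allocates/manages public ports
            it = out_.emplace(priv, pub_port).first;
            in_.emplace(pub_port, priv);
        }
        return {public_ip_, it->second};
    }

    // Inbound reply: rewrite (public ip, public port) back to the private endpoint.
    Endpoint translate_in(uint16_t pub_port) const {
        return in_.at(pub_port);  // no mapping -> exception (a real NAT drops the packet)
    }

private:
    uint32_t public_ip_;
    uint16_t next_port_ = 40000;        // assumed dynamic port range start
    std::map<Endpoint, uint16_t> out_;  // private endpoint -> public port
    std::map<uint16_t, Endpoint> in_;   // public port -> private endpoint
};

int main() {
    Snat nat(0x0A000001);                        // pretend public IP
    Endpoint h1{0xC0A80002, 12345};              // h1's private endpoint ("italk")
    Endpoint pub  = nat.translate_out(h1);       // outgoing packet shows public ip:port
    Endpoint back = nat.translate_in(pub.port);  // reply to the public port maps back
    std::cout << "public port " << pub.port
              << " maps back to private port " << back.port << "\n";
}
```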
The trouble started at noon on December 31. After I applied a partial package update, the entire graphical interface died. Out of habit I force-rebooted, and, predictably, boot stalled after the disk check without ever reaching the graphical interface.
Working from the text console, I found that the vast majority of commands no longer worked. The programs under `/usr/bin` were still intact and runnable, though; checking the `PATH` variable showed that `/usr/bin` was missing from it, so I added it back by hand, after which commands like `sudo` and `pacman` worked again.
Fortunately the network was still up, so the next step was to attempt a full system upgrade with `pacman`. However, the crypto engine had crashed as well, so `pacman` could not install anything at all.
Meanwhile, since the partial update had broken dependencies, `vim` was unusable too; I could only use `sed` to forcibly edit the `pacman` configuration file and turn off its security checks.
Then came `pacman -Syu`, which updated a pile of packages. Things got much better and a lot of software worked again, but `KDE Plasma` had inexplicably vanished.
So I reinstalled `KDE Plasma` and restarted the service. After a reboot, the GUI was back, completely intact, and miraculously kept my previous configuration; the environment variables had also somehow fixed themselves.
The remaining problem is that the cryptography library is still broken, so `pacman` still cannot install new packages. That will have to wait for a later fix.
The torment started here, and so did the work of digging myself out.