
2024.6.18

These past few days I've been in a mood where I can't be bothered to write math and only want to tinker with low-level engineering.

I built a new OpenGL wrapper for SimCraft; as a small GUI toolkit I personally think it's now fairly usable.

I rewrote the camera class, separating the camera's parameters from its controls, and designed a CRTP-based camera controller class, CameraController.

Today I discovered that GLFW can bind an arbitrary pointer to a window via glfwSetWindowUserPointer and glfwGetWindowUserPointer, which makes it possible to reach one's own class from inside GLFW's callback functions. Since these two functions are not type-safe, they pair well with CRTP: the template base class defines static functions that static_cast the stored pointer, and each derived class instantiates those static functions at its own type, so they can directly return a pointer of the derived type. Inside the callbacks there is then no need to worry about type safety.
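A minimal sketch of the pattern (names like `WindowBase`, `Self`, and `OnKey` are illustrative, not SimCraft's actual classes):

```cpp
#include <GLFW/glfw3.h>

// CRTP base: stores the derived object in the window's user pointer and
// provides static callbacks that recover the derived type safely.
template <typename Derived>
class WindowBase {
 public:
  explicit WindowBase(GLFWwindow* window) : window_(window) {
    glfwSetWindowUserPointer(window_, static_cast<Derived*>(this));
    glfwSetKeyCallback(window_, &WindowBase::KeyCallback);
  }

 protected:
  // The untyped user pointer is cast back to the derived type; since the
  // constructor stored exactly that type, the cast is safe.
  static Derived* Self(GLFWwindow* window) {
    return static_cast<Derived*>(glfwGetWindowUserPointer(window));
  }

  // Static trampoline with the C callback signature; dispatches to the
  // derived class's member function.
  static void KeyCallback(GLFWwindow* window, int key, int scancode,
                          int action, int mods) {
    Self(window)->OnKey(key, scancode, action, mods);
  }

  GLFWwindow* window_;
};

class MyWindow : public WindowBase<MyWindow> {
 public:
  using WindowBase::WindowBase;
  void OnKey(int key, int /*scancode*/, int action, int /*mods*/) {
    if (key == GLFW_KEY_ESCAPE && action == GLFW_PRESS)
      glfwSetWindowShouldClose(window_, GLFW_TRUE);
  }
};

int main() {
  if (!glfwInit()) return 1;
  GLFWwindow* w = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
  if (!w) return 1;
  MyWindow window(w);
  while (!glfwWindowShouldClose(w)) glfwPollEvents();
  glfwDestroyWindow(w);
  glfwTerminate();
}
```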

The problem is that this brings in templates, which in turn brings in huge headers; I don't yet know how to solve that.

Team Members: Xuanye Chen, Zhenzhe Li, Ruijie Jian, Huasen Xi

1. Summary

In this project, we aim to implement an extensible and efficient fluid simulation system on the GPU. We will also implement a high-performance wavefront path tracing renderer to render the fluid data produced by the simulation system.

2. Problem Description

Fluid simulation is a fundamental and highly mature field in computer graphics. However, physics-based fluid simulation requires expensive computation on particles and grids to produce realistic results. Since fluid simulation is highly parallelizable, it is a perfect candidate for GPU acceleration. Achieving high performance on the GPU, however, requires careful design and optimization of the simulation algorithm; once that is done, we can create fluid animation much faster than CPU-based simulation.

Path tracing is likewise a fundamental rendering technique in computer graphics. Like fluid simulation, it is parallelizable and can be accelerated on the GPU, but to exploit the GPU architecture we need a method called "wavefront path tracing" to reduce thread divergence. Once this is done, we can render the fluid data produced by the simulation system much faster than CPU-based rendering. To render the volume data produced by smoke simulation, we will likely also need to implement a volumetric path tracing algorithm, which is far more complex than surface path tracing.
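To illustrate what "wavefront" means here, below is a toy, CPU-only sketch of the organization (all names hypothetical): instead of one megakernel that follows each ray to termination, rays live in a queue, each stage processes the whole queue, and dead rays are compacted away so later stages stay dense.

```cpp
#include <cstdio>
#include <vector>

// Each queue entry carries the per-ray state a real renderer would need
// (origin, direction, throughput, ...); here just a pixel id and bounce count.
struct Ray {
  int pixel;
  int depth;
  bool alive;
};

int main() {
  std::vector<Ray> queue;
  for (int p = 0; p < 8; ++p) queue.push_back({p, 0, true});  // primary rays

  const int max_depth = 3;
  while (!queue.empty()) {
    // Stage: process every ray in the queue in lockstep. A real renderer
    // would intersect and shade here; this toy just bounces and terminates.
    for (Ray& r : queue) {
      ++r.depth;
      if (r.depth >= max_depth) r.alive = false;
    }
    // Stream compaction: drop terminated rays so the next stage runs over a
    // dense queue, which is what keeps GPU warps coherent.
    std::vector<Ray> next;
    for (const Ray& r : queue)
      if (r.alive) next.push_back(r);
    queue.swap(next);
  }
  std::printf("all rays terminated\n");
  return 0;
}
```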

There are several challenges in this project. To better preserve the volume of incompressible fluid and reduce numerical dissipation, we need a good advection solver. To solve the projection Poisson equation efficiently on the GPU, we need a parallelizable preconditioner for the conjugate gradient method. Another difficulty is implementing a wavefront path tracing renderer that can handle both the fluid data and the smoke data. The most challenging part, however, lies in the performance optimization of both systems: transplanting a fluid simulation algorithm from CPU to GPU and making it run is not hard, but making it run fast is a different story. The same goes for the path tracing renderer.
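For reference, here is a minimal CPU sketch of the preconditioned conjugate gradient loop we would be porting; `apply_A` stands for the Poisson operator and `apply_M` for the preconditioner application, and both names are illustrative rather than existing code.

```cpp
#include <cmath>
#include <cstdio>
#include <functional>
#include <numeric>
#include <vector>

using Vec = std::vector<double>;
using Op = std::function<void(const Vec&, Vec&)>;

double Dot(const Vec& a, const Vec& b) {
  return std::inner_product(a.begin(), a.end(), b.begin(), 0.0);
}

void PCG(const Op& apply_A, const Op& apply_M, const Vec& b, Vec& x,
         int max_iter, double tol) {
  const size_t n = b.size();
  Vec r(n), z(n), p(n), Ap(n);
  apply_A(x, Ap);
  for (size_t i = 0; i < n; ++i) r[i] = b[i] - Ap[i];  // r = b - A x
  apply_M(r, z);                                       // z = M^{-1} r
  p = z;
  double rz = Dot(r, z);
  for (int k = 0; k < max_iter; ++k) {
    apply_A(p, Ap);
    double alpha = rz / Dot(p, Ap);
    for (size_t i = 0; i < n; ++i) {
      x[i] += alpha * p[i];
      r[i] -= alpha * Ap[i];
    }
    if (std::sqrt(Dot(r, r)) < tol) return;  // converged
    apply_M(r, z);
    double rz_new = Dot(r, z);               // standard PCG beta update
    for (size_t i = 0; i < n; ++i) p[i] = z[i] + (rz_new / rz) * p[i];
    rz = rz_new;
  }
}

int main() {
  // Tiny smoke test: A = I and M = I, so PCG returns x = b in one step.
  auto identity = [](const Vec& in, Vec& out) { out = in; };
  Vec b{1.0, 2.0, 3.0}, x(3, 0.0);
  PCG(identity, identity, b, x, 10, 1e-8);
  std::printf("x = %f %f %f\n", x[0], x[1], x[2]);
}
```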

We will use the PIC/FLIP method for advection, and we will use a multigrid method to precondition the conjugate gradient solver. As for volumetric wavefront path tracing, we will use the null-scattering method to render heterogeneous media.
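A minimal sketch of the null-scattering idea (delta-tracking distance sampling), assuming a majorant `sigma_maj` that bounds the true extinction `sigma_t(x)` everywhere; all names are illustrative, not part of our renderer yet.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { float x, y, z; };

// Returns the sampled collision distance along the ray, or t_max if the ray
// leaves the medium without a real collision.
float SampleDistance(const Vec3& o, const Vec3& d, float t_max,
                     float sigma_maj, float (*sigma_t)(const Vec3&),
                     std::mt19937& rng) {
  std::uniform_real_distribution<float> u(0.0f, 1.0f);
  float t = 0.0f;
  for (;;) {
    // Tentative free flight under the homogeneous majorant.
    t -= std::log(1.0f - u(rng)) / sigma_maj;
    if (t >= t_max) return t_max;
    Vec3 p{o.x + t * d.x, o.y + t * d.y, o.z + t * d.z};
    // Accept a real collision with probability sigma_t(p) / sigma_maj;
    // otherwise it is a null collision and we keep marching.
    if (u(rng) < sigma_t(p) / sigma_maj) return t;
  }
}

static float ConstantDensity(const Vec3&) { return 0.5f; }

int main() {
  std::mt19937 rng(42);
  float t = SampleDistance({0, 0, 0}, {0, 0, 1}, /*t_max=*/10.0f,
                           /*sigma_maj=*/1.0f, &ConstantDensity, rng);
  std::printf("sampled distance: %f\n", t);
}
```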

3. Goals and Deliverables

Our final project includes two sub-projects: a fluid simulation system and a wavefront path tracing renderer. We will use them to produce a video showing the fluid surface and clouds rendered by the path tracer. All of the computation is done on the GPU, implemented in modern C++ and CUDA.

The final systems don't need to be fully real-time, but we will push their performance as far as possible. One team member has already implemented his own fluid simulation system and volumetric path tracer on the CPU, and we will use them as a reference to evaluate the performance of our GPU-based systems.

As stated above, we plan to deliver two systems with the basic functionality of fluid simulation and path tracing. This is not that hard, but it is possible that we will not manage to implement the multigrid preconditioner. It is also possible that porting volumetric path tracing into the wavefront framework proves hard and we cannot parallelize it efficiently. Still, we hope the fluid simulation can run at 64^3 resolution in real time if rendering is excluded, and that the path tracer can render the scene much faster than CPU-based rendering while guaranteeing physical correctness of the result. If things go smoothly, we may finish both systems with enough time left for performance engineering and analysis.

4. Schedule

  • Week 1: Finish the basic wavefront path tracing renderer based on HW-3. Port the fluid simulation to the GPU. Implement a prototype of the multigrid preconditioner.

At this stage, Xuanye Chen will be responsible for the wavefront path tracing renderer on his own, and he will also help Zhenzhe Li port the fluid simulation to the GPU. Huasen Xi will be responsible for the multigrid preconditioner prototype, and Ruijie Jian will test the correctness of that prototype.

  • Week 2: Xuanye Chen will start on volumetric path tracing on the GPU. Zhenzhe Li and Xuanye Chen should finish the fluid simulation framework on the GPU. Huasen Xi and Ruijie Jian should finish the multigrid preconditioner prototype.

  • Week 3: Xuanye Chen should finish volumetric path tracing on the GPU. Zhenzhe Li and Xuanye Chen should finish porting the multigrid preconditioner into the fluid simulation system. Ruijie Jian should test the correctness of the whole system.

  • Week 4: Performance optimization and video production. Xuanye Chen and Zhenzhe Li will analyze the performance of the fluid simulation system and the renderer. Huasen Xi and Ruijie Jian will run the CPU-based fluid simulation and path tracing on a single scene for comparison. The four of us will work together to produce the final fluid animation.

5. Resources

Rendering:

Fluid Simulation:

Others:

  • A Multigrid Tutorial, Second Edition, by William L. Briggs, Van Emden Henson, and Steve F. McCormick

  • CUDA Programming Guide

We will mainly use Linux as our development environment. For the CPU versions of the fluid simulation and path tracer, we used TBB for CPU parallelization and glm for basic vector and matrix operations. For the GPU versions, we will use CUDA, which requires an NVIDIA GPU with CUDA support. Beyond the above libraries, we will not use any other third-party libraries in the core computation part of the project.

This experiment failed, but I'm recording it anyway; I should come back and solve it when I get the chance.

1. Analyzing the Reference Implementation

First, run the reference implementation and capture its traffic, listening on the NAT's two ports to analyze how it works.

First, SNAT. The observed process: h1 sends a wget request from port 12345 (italk); after NAT translation, the outgoing packet's source IP is rewritten to the public address and the source port is rewritten to an allocated port, 40580.

The public host h3 then believes this is h1's IP address and port, and replies to it as such, so the returned packet's destination is that address with port 40580. When the NAT receives this packet, it translates the destination IP back to h1's address and the port back to 12345, then forwards it to h1.

Next, DNAT. The observed process: h3 sends a wget request using the public IP and the public port 8001 as the destination; the NAT translates that IP and port into the corresponding private IP (h2) and port (8000), and forwards the packet to h2's port.

In summary, one principle always holds: packets leaving the NAT and packets arriving at the NAT both use the public IP and public ports, and the public ports are allocated and managed by the NAT itself. In other words, the NAT hides every detail of the private network from the public one.
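Writing the observed mappings down as tables (a hypothetical C++ sketch; symbolic host names stand in for the actual addresses, which I didn't record here):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <utility>

using Endpoint = std::pair<std::string, uint16_t>;  // (host, port)

int main() {
  // SNAT: the NAT allocated public port 40580 for h1's connection from
  // port 12345; the reverse map rewrites replies addressed to it.
  std::map<Endpoint, Endpoint> snat_out{{{"h1", 12345}, {"public", 40580}}};
  std::map<Endpoint, Endpoint> snat_in{{{"public", 40580}, {"h1", 12345}}};

  // DNAT: a static rule forwards the public port 8001 to h2's port 8000.
  std::map<Endpoint, Endpoint> dnat{{{"public", 8001}, {"h2", 8000}}};
  return 0;
}
```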

The trouble started at noon on December 31. After I applied a partial package update, the whole graphical environment died. Out of habit I force-rebooted, and on boot, predictably, the system hung after the disk check and never reached the graphical interface.

Working from the text console, I found that most commands no longer worked. The programs under /usr/bin were still intact, though; checking the PATH variable showed that /usr/bin was missing, so I added it back manually, and commands like sudo and pacman became usable again.

Fortunately the network was still up, so the next step was to attempt a full system update with pacman. However, the cryptography engine had also broken, so pacman couldn't install anything.

Meanwhile, because the partial update had broken dependencies, vim was unusable too, so I had to use sed to forcibly edit pacman's config file and turn off the security checks (presumably the SigLevel setting in /etc/pacman.conf).

Then came pacman -Syu, which updated a pile of packages. Things improved a lot and much of the software worked again, but KDE Plasma had inexplicably vanished.

So I reinstalled KDE Plasma and restarted the service. After a reboot the GUI came back, fully intact; miraculously it had kept my previous configuration, and the environment variables had somehow fixed themselves as well.

The remaining problem is that the cryptography library is still broken, so pacman still can't install new packages; that's left for later.