Just wanted to share with you guys how I have fun in Unity.
I wanted to create something physically realistic, and figured out that Unity has a class (ComputeShader) for accessing the GPU and using it like a CPU. A video card is full of slow processor units, but there are a lot of them, so some kinds of calculations benefit greatly from being computed on the GPU.
So, I simulated a ground-like kind of matter with lots of particles. There are around 30K of them. They physically interact with each other and form a soil. And then I explode it. Now that it works, I'm thinking of remaking the old Scorched Earth game with this material.
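For anyone curious what the GPU side looks like: you write a small HLSL kernel in a .compute file and dispatch it from C#. Here's a minimal sketch; the buffer layout, names, and the trivial integration step are just for illustration, not my actual soil simulation:

```hlsl
// Minimal Unity compute kernel sketch (.compute file).
// The Particle layout and the integration step are illustrative only.
#pragma kernel Integrate

struct Particle
{
    float3 position;
    float3 velocity;
};

RWStructuredBuffer<Particle> particles; // bound from C# via ComputeShader.SetBuffer
float deltaTime;                        // set from C# via ComputeShader.SetFloat
uint particleCount;                     // set from C# via ComputeShader.SetInt

[numthreads(64, 1, 1)]
void Integrate(uint3 id : SV_DispatchThreadID)
{
    // the last thread group may overshoot the particle count, so guard it
    if (id.x >= particleCount)
        return;

    Particle p = particles[id.x];
    p.velocity += float3(0, -9.81, 0) * deltaTime; // simple gravity
    p.position += p.velocity * deltaTime;
    particles[id.x] = p;
}
```

On the C# side you fill a ComputeBuffer with the particle structs, bind it with SetBuffer, and call Dispatch with enough thread groups to cover all 30K particles (roughly 30000 / 64 groups of 64 threads); each GPU thread then updates one particle.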
A video card is full of slow processor units, but there are a lot of them.
If I remember right, GPU processors are ultra-fast but "stupid" (a smaller instruction set) and the CPU is slow but "smart" (x86 is a bigger instruction set)?
GPU processors are better when you need to do a ton of simple arithmetic operations. I'm not a pro at computer science, but I assume the larger instruction set means the CPU spends more time processing the incoming data?
Guys, if you have a video card that supports shader model 5.0, you may be interested in checking out this little program I wrote out of curiosity, to measure how fast a Mandelbrot fractal renders on the GPU. It does indeed render pretty fast, so it feels like exploring it in real time. The only problem is that at deeper zooms it reaches the precision limit of double floats, and everything becomes pixelated. But that's a pretty deep zoom; you'll only see the pixels at around the 10^-15 scale.
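The per-pixel loop itself is tiny. Here's roughly what it looks like as an HLSL function using SM 5.0 doubles (names are illustrative, and this is a sketch rather than my exact code):

```hlsl
// Escape-time iteration for one pixel, sketched in HLSL (SM 5.0 doubles).
// 'c' is the complex coordinate of the pixel; names are illustrative.
int MandelbrotIterations(double2 c, int maxIter)
{
    double2 z = double2(0.0, 0.0);
    int i = 0;
    // iterate z = z^2 + c until it escapes |z| > 2 or we give up
    while (i < maxIter && (z.x * z.x + z.y * z.y) < 4.0)
    {
        z = double2(z.x * z.x - z.y * z.y, 2.0 * z.x * z.y) + c;
        i++;
    }
    return i; // the iteration count gets mapped to a color
}
```

A double carries about 15-16 significant decimal digits, which is exactly why the image falls apart once the zoom window shrinks toward the 10^-15 scale.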
Sounds like Unity has a wrapper for OpenCL/CUDA that ends up doing that, or it goes through PhysX.
Yes, and HLSL is used as the language, so in practice there's not much difference from normal C#; I just can't make recursive function calls. How easily the task parallelizes is what's important, and luckily most of the stuff I play with parallelizes perfectly. The most problems I had were with shared access to data, where it's critical which parallel thread reads and writes the data first. But they have methods for atomic operations which fix this issue.
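For anyone curious, those are the Interlocked* intrinsics in HLSL. A toy sketch of the problem and the fix (the bin-counting setup and names are made up for illustration):

```hlsl
// Sketch of a shared-counter race and the atomic fix (illustrative names).
#pragma kernel CountIntoBins

RWStructuredBuffer<uint> counts; // one counter per bin, shared by many threads

[numthreads(64, 1, 1)]
void CountIntoBins(uint3 id : SV_DispatchThreadID)
{
    uint bin = id.x % 16; // many threads land in the same bin

    // BROKEN: two threads can read the same old value, so one increment is lost.
    // counts[bin] = counts[bin] + 1;

    // CORRECT: the read-modify-write happens as one indivisible operation.
    InterlockedAdd(counts[bin], 1);
}
```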
The CPU does have a wider instruction set, but its "smartness" is more about its ability to perform even complex operations in a single cycle. If I remember correctly, one "add" operation takes one cycle, while a naive "multiply" takes a whole series of adds, but a CPU can have dedicated hardware to multiply in about one cycle. Still, the basic picture is that a CPU core runs maybe 10-20 times faster per thread than a GPU core, while the GPU has hundreds or thousands of cores.
The GPU instruction set affects coding habits only slightly. And yes, it's best for massive math.
The primary difference between CPUs and GPUs is their approach to decision making.
CPUs have pipelines, out-of-order execution, and branch prediction, which is great for sequential instructions but spends a lot of silicon per core, so it's terrible for parallelization. GPUs skip most of that machinery and instead run threads in lockstep groups, so when threads in a group disagree about a branch, the hardware has to execute both sides and mask out the inactive threads. As a result, branching basically crushes GPU performance; it drops off a cliff.
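To make that concrete, here's a sketch of the kind of branch that hurts (illustrative kernel; the two math-heavy paths just stand in for real work):

```hlsl
// Sketch of a divergent branch inside a compute kernel (illustrative names).
#pragma kernel Divergent

RWStructuredBuffer<float> results; // illustrative output buffer

[numthreads(64, 1, 1)]
void Divergent(uint3 id : SV_DispatchThreadID)
{
    float x = (float)id.x;
    // Neighboring threads alternate paths, so each lockstep group
    // must execute BOTH sides, masking out the inactive lanes each time.
    if (id.x % 2 == 0)
    {
        // "expensive" path A: the odd lanes sit idle while this runs
        results[id.x] = sin(x) * sqrt(x);
    }
    else
    {
        // "expensive" path B: now the even lanes sit idle
        results[id.x] = cos(x) * log(x + 1.0);
    }
    // worst case, the group pays roughly cost(A) + cost(B), not max(A, B)
}
```

With threads alternating like this, every group pays for both paths instead of just the one its own thread needed, which is where the cliff comes from.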
lol, now if only we could change the purple tank to yellow, we'd have a hint of the old game Tank (or Battle Fortress?) that I played on the Sega Mega Drive as a toddler xD It had a pretty cool environment destruction system for its time.
Ok, after 10 months of rather lazy but still enthusiastic and sometimes obsessive development, I finally released it to Steam. It's an early access version, unfinished and possibly buggy, but at least people can experience what it's like to play an arcade game inside a physical simulation.