Wednesday, November 9, 2011

Success!... Kinda...

Well, after a couple of weeks of kluging, I finally have a working SVO rasterizer in UnrealScript. This is step one of my master plan (Bwa-Ha-Haaa!). Unfortunately, I have a leetle problem.

It's slow. I mean really slow. Slower than snail snot uphill in a headwind slow. <<1 fps slow. Which sucks mercilessly.

The main problem is that constructing an SVO is a time-consuming process, and if at all possible, the best way to do it would be on the GPU, or at least on a pathway that allows rapid switching to the GPU for processing. Unfortunately, UnrealScript does not allow for this, and it additionally runs 20-50 times slower than native C++ anyway. Hence the incredible slowness to which I already alluded. Now, I'm sure there are ways to speed up the process; I can separate static and dynamic mesh generation, streamline what is generated and where, and basically shortcut it to hell and back, but the fact remains it will be slow.
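
Just to make the pain concrete, here's roughly the kind of work involved, sketched in C++ rather than UnrealScript (the names and structure are mine, not my actual code): building the tree means walking from the root down to a leaf for every single voxel sample, allocating child nodes on demand.

    #include <array>
    #include <memory>

    // Sketch of sparse-voxel-octree insertion. Illustrative only.
    struct SVONode {
        std::array<std::unique_ptr<SVONode>, 8> children; // sparse: null until needed
        float colour[3] = {0, 0, 0};                      // what this voxel reflects
    };

    // Insert one voxel sample at a point inside the cube [0, size)^3.
    void insert(SVONode& root, float x, float y, float z,
                const float rgb[3], float size, int maxDepth) {
        SVONode* node = &root;
        float ox = 0, oy = 0, oz = 0; // origin of the current cell
        for (int depth = 0; depth < maxDepth; ++depth) {
            size *= 0.5f;
            // Pick the octant that contains the point.
            int cx = x >= ox + size, cy = y >= oy + size, cz = z >= oz + size;
            int idx = (cx << 2) | (cy << 1) | cz;
            ox += cx * size; oy += cy * size; oz += cz * size;
            if (!node->children[idx])
                node->children[idx] = std::make_unique<SVONode>(); // allocation: the slow bit
            node = node->children[idx].get();
        }
        for (int i = 0; i < 3; ++i) node->colour[i] = rgb[i]; // leaf keeps the sample
    }

Multiply that pointer-chasing and allocation by every voxel of every mesh, every frame, run it in a scripting VM that's 20-50 times slower than native code, and <<1 fps is pretty much what you'd expect.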

So I think, with regret, it's time to shut the door on the dream of achieving this in UDK, and instead simply port my cone-tracer directly to CUDA and connect it to Maya natively.

It's sad, but ultimately this is my best shot at getting this done in anything approaching realtime.

More news as it happens, I guess :(


Sunday, October 30, 2011

On Realtime GI and Other Animals

Don't get me wrong; UDK is awesome. Having looked at the other game engines out there, I'd say UDK is head and shoulders above the rest. Sure, graphically the CryEngine 3 SDK just ekes out the win, but in terms of flexibility, ease of use, stability and available middleware (and before you all shout at me that CryEngine is specifically designed to avoid middleware, I know that, but UDK's middleware is just great, so there!), Unreal Engine is simply the best, period.


So, why should I keep harping on about realtime GI on the UDK forums? Well, simply put, this project needs it.

Most of the scenes will do fine with baked Lightmass lighting; outdoor, daylit environments will be fine with that. But I have several indoor scenes with dynamic lighting that just need that extra pop of realtime dynamic global illumination, and that's all there is to it. Now, I could cheat and rewrite the scenes to avoid that necessity, but that's not how I roll, baby. So I'm just going to have to suck it up and figure it out.

Options, Options...

When it comes to realtime dynamic indirect illumination, there are several options to choose from, all of which are achievable to some degree in UDK. Starting with the simplest, they are:
  1. Screen space effects
  2. Precomputed Radiance Transfer (PRT)
  3. Virtual Point Lights (VPLs)
  4. Reflective Shadow Maps (RSMs)
  5. Full-on realtime traced GI

Screen-space effects are, to my mind, the least satisfactory. I've messed with screen-space indirect illumination (SSII), screen-space colour bleeding (SSCB) and screen-space directional occlusion (SSDO), individually and in combination, but the results always left me flat. Apart from simply looking like a poor-quality paint-over, the noise and difficulty of use meant that I would end up spending half my time tweaking them when I should be simply animating. They also have an annoying flaw: once a surface leaves the screen, it stops contributing. Imagine a red wall that reflects red light onto an object in the scene. It might look great until the camera angle changes to hide the wall, and when that happens, BINGO! No reflected colour. Not terrible in the majority of cases, but in a movie, which has to look good all the time, totally unacceptable. Bleah.
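
To see why the red wall vanishes, you only have to look at the shape of the algorithm. Here's a toy C++ sketch of a screen-space gather (entirely illustrative; real implementations are far more careful about weighting): the loop can only read pixels that made it into the G-buffer, so anything off-screen contributes precisely nothing.

    // Toy sketch of a screen-space gather. Entirely illustrative: a real
    // version weights samples by depth, normal and distance, but shares
    // the same blind spot -- it can only read what the G-buffer contains.
    struct Pixel { float colour[3]; float normal[3]; float depth; };

    struct GBuffer {
        int width, height;
        const Pixel* pixels;
        const Pixel& at(int x, int y) const { return pixels[y * width + x]; }
    };

    // Accumulate "bounced" colour for one pixel from its screen neighbours.
    void gatherSSII(const GBuffer& g, int px, int py, int radius, float out[3]) {
        out[0] = out[1] = out[2] = 0;
        for (int dy = -radius; dy <= radius; ++dy) {
            for (int dx = -radius; dx <= radius; ++dx) {
                int sx = px + dx, sy = py + dy;
                if (sx < 0 || sy < 0 || sx >= g.width || sy >= g.height)
                    continue; // off-screen: the red wall simply doesn't exist here
                const Pixel& s = g.at(sx, sy);
                for (int i = 0; i < 3; ++i) out[i] += s.colour[i];
            }
        }
    }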

PRT, on the other hand, is relatively simple to implement, requires no actual tweaking (beyond that which normal light-based methods require) and is a true 3-D effect that works in the scene. In other words, where screen-space effects will pop in and out as the view changes, PRT will not. Basically, PRT calculates beforehand what light, and how much of it, gets reflected off a surface (the precomputed part of the name), but unlike Lightmass it doesn't actually bake a shadow map. Instead, PRT keeps the information stored elsewhere and modulates it based on the light levels and what is surrounding the object at any moment, then projects that light into the surroundings on the fly. At this point, the only real issues with it are that it only works on diffuse surfaces, which isn't a huge problem but tends to detract from the realism of the scene, and that it doesn't work on dynamic surfaces, which is pretty much a dealbreaker...
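
The runtime half of PRT is almost embarrassingly cheap, which is its whole appeal. A minimal C++ sketch of the modulate step, assuming a spherical-harmonics formulation (coefficient count and names are illustrative):

    #include <cstddef>

    // Sketch of the PRT "modulate" step. Coefficient count and names are
    // illustrative; 16 coefficients corresponds to 4 spherical-harmonic bands.
    constexpr std::size_t kSHCoeffs = 16;

    struct SHVector { float c[kSHCoeffs]; };

    // lightSH  : the scene's current lighting projected into SH (per frame)
    // transfer : the precomputed per-vertex response to that lighting (baked)
    float exitRadiance(const SHVector& lightSH, const SHVector& transfer) {
        float sum = 0.0f;
        for (std::size_t i = 0; i < kSHCoeffs; ++i)
            sum += lightSH.c[i] * transfer.c[i];
        return sum;
    }

The bake is expensive, but per frame it's a handful of multiply-adds per vertex. That's also exactly why dynamic geometry breaks it: move the mesh and the baked transfer vector no longer matches its surroundings.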

VPLs are even more interesting; the idea is that each real dynamic light spawns a random collection of point lights around it, tailored to the environment in such a way as to simulate bounced light from the original source. They provide both diffuse and glossy reflection. However, one can end up with hundreds of the buggers in even a simple scene, and to be perfectly honest, the results are merely okay, and I need more than okay. PRT gives better overall quality, but VPLs work with both dynamic and static objects. I'll mark this as "maybe".
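
The spawning step is conceptually this simple (a hedged C++ sketch; the ray-cast stands in for whatever the engine provides, and every name here is mine):

    #include <cmath>
    #include <functional>
    #include <random>
    #include <vector>

    // Hedged sketch of VPL spawning; every name here is illustrative.
    // The caller supplies whatever ray-cast the engine actually offers.
    struct Vec3 { float x = 0, y = 0, z = 0; };
    struct Hit { Vec3 position, normal, albedo; bool valid = false; };
    struct PointLight { Vec3 position, colour; float intensity = 0; };

    using RayCast = std::function<Hit(const Vec3& origin, const Vec3& dir)>;

    // Uniform random direction on the unit sphere.
    Vec3 randomDirection(std::mt19937& rng) {
        std::normal_distribution<float> n(0.0f, 1.0f);
        Vec3 d{n(rng), n(rng), n(rng)};
        float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        return {d.x / len, d.y / len, d.z / len};
    }

    // Shoot rays out of the real light; each surface they hit becomes a
    // dim point light tinted by the surface colour -- the "bounce".
    std::vector<PointLight> spawnVPLs(const PointLight& src, int count,
                                      const RayCast& trace) {
        std::mt19937 rng(42);
        std::vector<PointLight> vpls;
        for (int i = 0; i < count; ++i) {
            Hit h = trace(src.position, randomDirection(rng));
            if (!h.valid) continue;
            vpls.push_back({h.position,
                            {src.colour.x * h.albedo.x,
                             src.colour.y * h.albedo.y,
                             src.colour.z * h.albedo.z},
                            src.intensity / count}); // split the energy up
        }
        return vpls;
    }

A few hundred rays means up to a few hundred extra lights to shade every frame, which is exactly how scenes end up drowning in the things.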

RSMs are another possibility. The idea here is basically to generate a form of shadow map that contains information about the light itself, which can then be projected back onto a mesh (the technical term is "injected") to simulate bounced light. Again, it's an elegant idea. Crytek's "Light Propagation Volumes" in CryEngine uses a variant of this technique to improve the quality of their solution. The problems with this are twofold. First, it's not that good: CryEngine's realtime GI is, frankly, not great (what people think is good in a game is not necessarily good in a movie) and the bounces are actually wrong in many cases. Second, it will be hard (but not impossible) to implement: UDK doesn't give you access to the shadow code, so I'd have to kluge something involving materials and UnrealScript, and it probably wouldn't look terribly convincing. I'll mark this as "no".
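
For completeness, here's the gathering idea in sketch form (illustrative C++, not UDK or CryEngine code): every texel of the RSM carries world position, normal and reflected flux, and gets treated as a tiny light source when shading a receiving point.

    #include <algorithm>
    #include <cmath>

    // Sketch of gathering bounce light from a Reflective Shadow Map.
    // Each RSM texel stores world position, normal and reflected flux for
    // what the light "sees". Names are illustrative, not engine code.
    struct Vec3 { float x = 0, y = 0, z = 0; };
    struct RSMTexel { Vec3 position, normal, flux; };

    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }

    // One-bounce contribution of a single texel, treated as a tiny light,
    // to a receiving point p with surface normal n.
    Vec3 texelBounce(const RSMTexel& t, const Vec3& p, const Vec3& n) {
        Vec3 d = sub(p, t.position);                    // texel -> receiver
        float d2 = std::max(dot(d, d), 1e-4f);          // clamped square distance
        float inv = 1.0f / std::sqrt(d2);
        Vec3 w{d.x * inv, d.y * inv, d.z * inv};
        // Geometry term: emitter must face the receiver and vice versa.
        float g = std::max(dot(t.normal, w), 0.0f)
                * std::max(-dot(n, w), 0.0f) / d2;
        return {t.flux.x * g, t.flux.y * g, t.flux.z * g};
    }

Notice there's no visibility test between texel and receiver, which is a big part of why the bounces come out wrong: the injected light happily leaks through whatever happens to be in the way.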

Finally, full-on realtime traced GI. This may seem like the most implausible, but in many ways it's actually the simplest idea. The only hindrance to its implementation so far has been speed. Now, thanks to GPU processing power, we are starting to get to the point where this is a real, feasible possibility.
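
Stripped to its bones, it looks like this: at every shading point, fire rays over the hemisphere and ask what lit surfaces they can see. A hedged C++ sketch of a one-bounce gather (trace() and directLight() stand in for the renderer's own machinery; every name is an assumption):

    #include <cmath>
    #include <functional>
    #include <random>

    // One-bounce gather, stripped to its bones. trace() and directLight()
    // stand in for the renderer's own machinery.
    struct Vec3 { float x = 0, y = 0, z = 0; };
    struct Hit { Vec3 position, normal, albedo; bool valid = false; };

    using Trace = std::function<Hit(const Vec3&, const Vec3&)>;
    using Shade = std::function<Vec3(const Hit&)>; // direct light at a hit

    // Cosine-weighted hemisphere direction around a normal.
    Vec3 sampleHemisphere(const Vec3& n, std::mt19937& rng) {
        std::normal_distribution<float> g(0.0f, 1.0f);
        Vec3 s{g(rng), g(rng), g(rng)};
        float sl = std::sqrt(s.x*s.x + s.y*s.y + s.z*s.z);
        // point on the unit sphere + normal = cosine-weighted direction
        Vec3 d{s.x / sl + n.x, s.y / sl + n.y, s.z / sl + n.z};
        float dl = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        return {d.x / dl, d.y / dl, d.z / dl};
    }

    // Indirect light at point p with normal n: fire rays, see what they
    // hit, and pull back the direct light shining on those surfaces.
    Vec3 indirect(const Vec3& p, const Vec3& n, int samples,
                  const Trace& trace, const Shade& directLight,
                  std::mt19937& rng) {
        Vec3 sum;
        for (int i = 0; i < samples; ++i) {
            Hit h = trace(p, sampleHemisphere(n, rng));
            if (!h.valid) continue;
            Vec3 L = directLight(h);
            sum.x += L.x * h.albedo.x;
            sum.y += L.y * h.albedo.y;
            sum.z += L.z * h.albedo.z;
        }
        float inv = samples > 0 ? 1.0f / samples : 0.0f;
        return {sum.x * inv, sum.y * inv, sum.z * inv};
    }

The catch, of course, is doing that for every pixel, every frame, which is where all that GPU processing power comes in.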

So, I'm going to start turning my attention to achieving traced GI in UDK. I have gathered my materials and I will start experimenting. Along the way, I shall post updates, images and code and show you my progress. If this doesn't work, I will go elsewhere for answers, but I want to try.

Hope you'll stick around; things are about to get interesting. And code-y.

Saturday, October 29, 2011

Journeys End, Journeys Begin

Well, here we are with a site, still with that new-blog smell, in which I will detail my attempts to bring to a computer near you something (hopefully) original and fresh.

My goal over the coming months is to create an animated short entirely within the Unreal Development Kit (UDK), using all that the engine has to offer, including AI, PhysX, APEX, advanced shaders, the DX11 tools and post-processing, to hopefully achieve something on a par with (or not too disastrously far from) the Samaritan demo.

Will this work? Honestly, I don't know, but I'll be posting thoughts, experiments and results as I go, so stay tuned...

I will also be cross-posting to both the UDK and Polycount forums, so you can follow along at home. If it works, I will be releasing materials, post effects and even code, so it may be worth dipping in occasionally.

So, let's get started!