Hey NiN, thanks for the info... I'm actually not too concerned with the IREF matrices - aside from hoping that I could gather some info about the coordinate system being used from them. My bigger problem is with the pos/rot/scale info that each bone has (as a base pose). From looking at your code, those values seem to make up what I (and Cinema 4D) call a 'local' matrix (as opposed to the (IREF) global matrix). Local matrices in Cinema 4D are 'relative' to the parent bone/object's matrix... and I see that you are combining the parent matrices, so that all makes sense (I can set either the global or local matrix, so I don't/shouldn't need to worry about the order, or do the multiplication myself).
So far, I'm just not getting anything useful out of the bone pos/rot/scale values as a local matrix, but I'll keep toying with it.
Actually, let me explain my progress a little better/clearer...
I can set up the skeleton using the IREF (global) matrices fine - since all that really matters from those are the translation values - the bones all end up 'positioned' correctly, in a good reference position for skinning the vertices to the bones.
The problem I'm having is when trying to apply the bone pos/rot/scale values as a local matrix (I'm still just looking / working with the initial ones from the bone itself, not any of the animation keyframe data ones yet).
When you start using local matrices (relative to the parent), _then_ the rotation/angles of the bones _are_ important, because each bone's translation/positioning happens along the orientation of its parent bone's axes.
...and that all animations are done starting from the deepest bone in the hierarchy and then moving upward. If you figure out anything let me know.
Hmmm... I have to admit that I didn't/don't quite grok that part of your code... I assumed that you were combining matrices starting from the root and working your way 'down' the chain (ie. from shoulder, to elbow, to hand, to finger...), but it sounds like you are doing the opposite?? If that's the case (starting at the end of the chain and working back up towards the root), then those IREF matrices would in fact come into play.. hmm...
Yes exactly, after a lot of trial and error I found you have to work your way 'up' the chain (finger->hand->elbow->shoulder), starting from the deepest bones first. The screenshot is from the base pose of the zealot after the 'initial' translation/rotation/scale values were applied to the bindpose. I think once you are able to begin applying transformations from the deepest bone first up the hierarchy, it'll all come together and your animations/base pose will start to look fine.
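For what it's worth, here's a minimal sketch (plain Python, a hypothetical 3-bone chain, column-vector convention assumed) of why the traversal order works out: a bone's global matrix is parent-global times local, and since matrix multiplication is associative, accumulating from the deepest bone up yields the same product as walking root-down.

```python
def matmul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """4x4 translation matrix, column-vector convention."""
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

# Hypothetical 3-bone chain: shoulder(0) -> elbow(1) -> hand(2).
parent = [-1, 0, 1]                      # parent index per bone, -1 = root
local = [translation(1, 0, 0),           # each bone's local (relative) matrix
         translation(2, 0, 0),
         translation(3, 0, 0)]

# Root-down pass: a bone's global matrix is parent_global * local.
global_m = []
for i in range(3):
    p = parent[i]
    global_m.append(local[i] if p < 0 else matmul(global_m[p], local[i]))

# Accumulating from the deepest bone up gives the same product,
# because matrix multiplication is associative:
leaf_up = matmul(local[0], matmul(local[1], local[2]))
print(global_m[2][0][3], leaf_up == global_m[2])  # 6 True
```

The hand lands at x = 1 + 2 + 3 = 6 either way; what matters is which side the parent multiplies on, not which end of the chain you start from.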
Be wary of one thing I have discovered with some Blizzard models. Some of them use negative bone scales when 'mirroring' certain bones. You can find this kind of negative scaling in the Immortal and the DarkTemplar models (realising you don't have access to these models, you'll have to take my word for it :P). I don't know if Cinema 4D will be able to process the negative scaling correctly, but it's something to be wary of if you plan to create a complete importer.
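One way to spot those negatively scaled bones in an importer (a sketch, not from either of our scripts): the determinant of the 3x3 rotation/scale part of the matrix goes negative whenever an odd number of axes are mirrored.

```python
def is_mirrored(m):
    """True if the upper-left 3x3 of a 4x4 matrix has a negative
    determinant, i.e. an odd number of axes are scaled negatively."""
    a, b, c = m[0][:3], m[1][:3], m[2][:3]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
         - a[1] * (b[0] * c[2] - b[2] * c[0])
         + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det < 0

# A bone scaled by (-1, 1, 1) mirrors across the YZ plane:
mirrored = [[-1, 0, 0, 0],
            [ 0, 1, 0, 0],
            [ 0, 0, 1, 0],
            [ 0, 0, 0, 1]]
print(is_mirrored(mirrored))  # True
```

Note this can't tell you *which* axis was flipped (flipping two axes cancels out in the determinant), only that the bone is mirrored at all.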
Ahh.. gotcha - thanks for confirming that ordering. I'm still having trouble with the bone pos/rot/scale values (the initial pose), but I think I'm making some progress back on the IREF matrices...
As mentioned earlier, Cinema 4D uses a coordinate system where Positive Y is 'Up' and Positive Z is 'into' the screen (positive X goes to the right). Anyway, to account for this, I had been just rotating the matrices -90 on the X axis, but I finally managed to get my MatrixSwapYZ() code working, so now those IREF matrices actually look like they make sense (ie. the bones end up aligned along an axis). This was something that I was seeing when trying to get the bone values (pos/rot/scale) working, so I think I'm getting close(r) to getting everything to line up :). Once I get that initial pose working, I'll know what to do with all the animation keys - I just need to get over this first hurdle.
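In case it helps anyone else doing the same conversion, here's a sketch of what a Y/Z swap like my MatrixSwapYZ() amounts to (hypothetical code, not my actual C4D plugin): conjugating the matrix by the permutation that exchanges axes 1 and 2, i.e. swapping both the rows and the columns. Doing both keeps the determinant (and thus the handedness) unchanged, unlike swapping rows alone.

```python
def swap_yz(m):
    """Conjugate a 4x4 matrix by the Y<->Z axis swap: exchange
    rows 1 and 2, then columns 1 and 2."""
    out = [row[:] for row in m]
    out[1], out[2] = out[2], out[1]          # swap rows
    for row in out:
        row[1], row[2] = row[2], row[1]      # swap columns
    return out

# A translation of (1, 2, 3) becomes (1, 3, 2) after the swap:
t = [[1, 0, 0, 1],
     [0, 1, 0, 2],
     [0, 0, 1, 3],
     [0, 0, 0, 1]]
print([row[3] for row in swap_yz(t)[:3]])  # [1, 3, 2]
```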
Also, I'm having issues with mirrored UVs and normal maps. It's the age-old issue where one side is concave and the mirrored one becomes convex. I'm not sure if the SC2 engine has internal code that handles this properly, or if Blizzard's exporter has something that does.
Have you guys stumbled into this, or am I doing something wrong that causes the problem stated?
Thanks - I guess I'll know the pose when I (finally) see it, but seeing it ahead of time might help me figure out where my math is going wrong.
re: mirrored UVs...
That's really in the hands of the (UV) artist... it's quite common, as a means of cramming a lot of texture onto a single bitmap, but the problem with doing that is what you describe - if 2 halves of a model both share the same UVs, then the Normal Map (or bump-map/displacement-map, as the case may be) will be 'wrong' (inverted) for one of those 2 sides. There are some programming tricks that could detect this sort of thing, but I kinda doubt any games have code in place to look for it (and even if so, it would have to make certain assumptions - or be driven by some sort of flags/data in the file, and I don't think we've run across anything like that yet).
In short, I don't think it's anything you're doing wrong... the best way to fix it IMO is to re-uv-map the model, so parts aren't mirrored/shared.
Grrr.. hmm.. so, it looks like I've been chasing a wild goose for the past day or 3... I was under the impression that your script was doing the skinning based on the IREF pose and _then_ applying the base pose. I finally went back and tracked it down, and it looks like you don't apply the skin until after the base pose is set (d'oh!).
My bad. I think I'll just move on to the animation keys and see how that goes :).
I'm a little confused about something... in M3I_Animate_Bone() where you are setting up the keyframe animation for each bone, the transform for each bone (that has a parent) relies on and is combined with the parent bone's transform... but the parent bone has not yet been updated for each frame. So... is all of the animation data relative to the 'base-pose' ? (it sure seems like it) and not the current/previous keyframe?
NVMeshMender? That's Nvidia's Mesh Mender. Also, one game (forgot which) does the detection in its engine, so artists could just mirror and symmetry their hearts out.
Anyway, before posting this issue, I explored Blizzard's assets - from game units to cinematics, textures and models. What you'll find is that Blizzard utilizes mirrored UV shells, even for normal maps.
The sm_hydralisk, for example, uses flat textures for its diffuse. By flat I mean no ambient occlusion detailed into the texture. The UV islands are not the full top-down view of the hydra's head or of its body, but only half of it.
There was no flipped issue viewing the model through the game engine.
I understand that one can get away by tweaking the diffuse to trick the lighting and nullify the normal map issue altogether. This however has obvious drawbacks.
From my own limited understanding through recent testing, I have the feeling that:
A. The Blizzard engine does not automatically detect inverted UVs from the UVW data.
B. Their exporter could have added code that detects and processes this information.
I'm no expert at this, but I would argue against what you said about current game engines.
quoted from cgtalk:
"also by the nature of a normal map, It probably will not look perfect ever if you mirror them. Unless there is some process i'm not aware of.
Because (for example)
red= < that direction
green= > that direction
one side of the model will look correct, but the other side will have red green inverted.
then again, there may be a process I'm not aware of. I'll keep an eye on the thread if anyone mentions something.
Some shaders understand this and correct it when displaying.
Unreal 3 has a shader that does this afaik."
As I mentioned, there are programming tricks that can be used (possibly cueing on the 'winding-order' of the uv-polys, for example) - I'm just not aware of anyone doing it (they may well be). There may also be an existing flag in .m3 files that I/we don't know about... I know the format allows for up to 4 sets of UVs, for example - a second set could easily be for decals, but I'd only be guessing at uses for the other 2, so who knows?
Yeah, keep up the good work guys. I think that is causing my "shading" problems - I always get faces with inverted normal maps, and maybe the "shading problem" is caused by this too, because 2 sides of my houses seem fine while the other 2 do not. I think I always did the unwrap selecting two parallel walls of my houses, so I'll see if that was the problem. When I finish my next one, I'll do it wall by wall instead.
Before I forget... while looking for models to test with, I downloaded a bunch of those converted WoW models and... every one of them has 'inverted vertex normals' (well, maybe not exactly inverted, but 'incorrect' - like maybe the x or y axis is flipped the wrong way). I'm guessing that's an issue with whatever conversion script was used, but I haven't had a chance to look into it yet, so this is just a 'heads-up'.
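If anyone wants to experiment with which axis those converted normals have wrong, a trivial (hypothetical) helper like this lets you negate one component at a time and eyeball the result:

```python
def flip_normal_axis(normals, axis):
    """Negate one component (0=x, 1=y, 2=z) of every vertex normal -
    handy for testing which axis a conversion script got wrong."""
    return [tuple(-c if i == axis else c for i, c in enumerate(n))
            for n in normals]

# Flip the y component of a couple of normals:
fixed = flip_normal_axis([(0.0, 1.0, 0.0), (0.6, -0.8, 0.0)], 1)
print(fixed)  # [(0.0, -1.0, 0.0), (0.6, 0.8, 0.0)]
```

Remember to re-normalize afterwards if you do anything fancier than a pure sign flip.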
Just to clarify, 'winding-order' is a term used to describe the direction around a polygon that the vertices are listed (clockwise or counter-clockwise). It's primarily used for polygon Normal determination and thus backface-culling...
Normally (pun intended), when you uv-map a model, the winding-order of the uv-polygons matches the winding-order of the mesh-polygons. When you 'flip' some uv-polygons over to mirror them, the winding-order flips as well, so it no longer matches the mesh-polygons. So a program could (for example) compute what amounts to a 'uv-poly-normal' for each uv-polygon, and if the sign of its Z axis differed from the sign of the Z axis of the associated mesh-polygon, it would know to invert the Normal/Bump/Displacement Map for that polygon.
As mentioned earlier, such a scheme would involve certain 'assumptions'... when you mirror uv-polys left-to-right, you only want the x (and z) axis of the Normal Map flipped - but _not_ the y axis. However, flipping a uv-polygon top-to-bottom is indistinguishable from flipping it left-to-right just by looking at the winding-order. So the program could either assume you need the x axis of the Normal Map flipped, or assume you need the y axis flipped, but it couldn't handle arbitrary flips in either (or both) directions without some other mechanism besides the winding-order.
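The 'uv-poly-normal' check I'm describing boils down to the signed area of each uv-triangle - its sign flips exactly when the winding-order flips. A minimal sketch (hypothetical helper names, triangles only):

```python
def uv_signed_area(uvs):
    """Signed area of a UV triangle (list of three (u, v) pairs).
    Positive for counter-clockwise winding, negative for clockwise."""
    (ax, ay), (bx, by), (cx, cy) = uvs
    return 0.5 * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))

def is_uv_mirrored(uvs):
    """A uv-triangle whose winding is reversed (relative to the
    assumed CCW convention) has been mirrored in UV space."""
    return uv_signed_area(uvs) < 0

# A counter-clockwise triangle vs. its left-right mirror (u -> 1 - u):
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
mirrored = [(1.0, 0.0), (0.0, 0.0), (1.0, 1.0)]
print(is_uv_mirrored(tri), is_uv_mirrored(mirrored))  # False True
```

As noted above, this only tells you *that* the triangle was mirrored, not along which axis - that's exactly where the extra assumption or per-file flag would have to come in.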
Alrik, there shouldn't be shading problems - I don't have them anymore. The only problem I have now is specular maps tearing the texture shading... Everything works fine without specular, but looks dull... I haven't tested anything since then, because I don't have time. But try creating a new box model and testing it in-game, with all the maps.