How would I go about creating a function that takes a UI coordinate and returns the world point in the same location?
Edit: For simplicity, I'm fine with eliminating all cliffs and slopes and just assume the whole world is a flat plane with height equal to the camera target point.
Edit2: A few more specifications. Camera properties that must be taken into account are Pitch (Angle of Attack), Yaw (Rotation), Distance, Field of View and Height Offset. Roll is not important. I'm not sure how these variables relate to each other, so any pointers in that regard are also very helpful.
If you want to get the position of your cursor in world space, then there are already functions for that. Otherwise, I think things might get a bit more complicated. What are you trying to achieve exactly?
I wrote a function like that back in beta when mouse tracking wasn't around. It takes pitch, roll and FoV into account, but had some issues with distance and also required sampling some mouse clicks (it should be fixable so that no sampling is required). The terrain also had to be flat. So it's a bit incomplete and rather complicated, but I'll see if I can dig it up later.
Yes, I know of the functions for mouse world coordinates. I've been using them for a while, but they don't update when the camera is following a unit. I've accounted for that with offsets relative to the unit's position, but even then I'm unable to set it up so the mouse world position doesn't fall slightly behind the mouse UI position.
I'll try to explain what I'm trying to achieve with an example. Let's say you have control of one unit, which the camera follows. When you hold the mouse still in one location, the angle between the unit and the mouse should be constant at all times. When I use Mouse UI coordinates this is trivial (I have a working prototype using only the angle but not the correct world position). With Mouse World Coordinates it's almost impossible. I'm still working to see if I can get the world coordinates to account for all the changes happening in other places of the map, but so far I haven't found a perfect solution.
You're trying to get a real-world point based on X and Y values, right? Just use "Point from XY".
Afaik "Point from XY" just creates a point object with the coordinates x and y. What he is trying to achieve is to unproject screen coordinates to world coordinates.
Here's a mouse tracker I wrote back in beta. When the mouse is moved, the camera Yaw/Pitch changes. The change is detected and a fake cursor is drawn on the UI. The position of the cursor is then projected onto the terrain and indicated with a rally point. They should line up everywhere (except the edges, that's bugged).
There's a lot of junk code in the map, but what you're looking for is in the 'calibration' and 'camera' folder.
Things that need to be considered are: the Camera Target is the result of the unprojection when the UI coordinate is (resolution.x/2, resolution.y/2).
Imagine the screen is a plane in the 3D space. You can get the world coordinate from the screen coordinate as follows:
1. Calculate a vector relative to this plane using the UI coordinate and the FoV.
2. Scale this vector by the distance and rotate it using the pitch and yaw values.
3. Add this vector to the camera's position and you have the world coordinate.
The only problem here is that you don't have the camera position, but the camera's target. You can convert between them, but there should be a way to calculate the world coordinate using the target position directly.
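A minimal numeric sketch of those steps in Python. The function and its conventions are illustrative, not the editor's API: it assumes you already have the camera eye position, and instead of scaling by a fixed distance it intersects the view ray with the ground plane, per the flat-world simplification.

```python
import math

def ui_to_world(ui_x, ui_y, res_x, res_y, fov_deg, pitch_deg, yaw_deg,
                eye_x, eye_y, eye_z):
    """Unproject a UI pixel to where its view ray hits the plane z = 0."""
    # 1. Vector from the eye through the screen pixel, in camera space
    #    (x forward, y left/right, z up/down, before any rotation).
    half = math.tan(math.radians(fov_deg) / 2.0)
    aspect = res_x / res_y
    vx = 1.0
    vy = (2.0 * ui_x / res_x - 1.0) * half * aspect
    vz = (1.0 - 2.0 * ui_y / res_y) * half
    # 2. Rotate by pitch (tilt the forward axis toward -z), then yaw (about z).
    p, yw = math.radians(pitch_deg), math.radians(yaw_deg)
    x1 = vx * math.cos(p) + vz * math.sin(p)
    z1 = -vx * math.sin(p) + vz * math.cos(p)
    y1 = vy
    wx = x1 * math.cos(yw) - y1 * math.sin(yw)
    wy = x1 * math.sin(yw) + y1 * math.cos(yw)
    # 3. Start the ray at the eye and intersect it with the ground z = 0.
    t = -eye_z / z1
    return (eye_x + t * wx, eye_y + t * wy)
```

With pitch 90 (straight down) the screen center lands directly under the eye, and the screen edge lands height * tan(FoV/2) away, as expected.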
EDIT: SexLethal's solution is resolution/FoV independent, but requires a calibration (which is a good idea, as there is no way of getting the user's resolution as far as I know).
I looked at your map. Unfortunately most of the triggers are way over my head. I've decided to go slow and take one step at a time so I can actually understand this, and weed out all errors along the way.
This is the code I have so far. I've limited the camera angle of attack to 90 degrees to avoid any rotations in the conversion. The function accurately converts the UI coordinate to the world coordinate.
I'm unclear on how to include camera pitch and yaw into this. Could you give me a few more pointers?
[NYI] UItoWorldPoint
Options: Function
Return Type: Point
Parameters
Player = 0 <Integer>
UI Point = No Point <Point>
Grammar Text: UItoWorldPoint(Player, UI Point)
Hint Text: (None)
Custom Script Code
Local Variables
ScreenHalfWidth = (ScreenWidth * 0.5) <Real>
ScreenHalfHeight = (ScreenHeight * 0.5) <Real>
AspectRatio = (ScreenWidth / ScreenHeight) <Real>
C.Dist = ((Default game camera) Distance) <Real>
C.FoV = ((Default game camera) Field Of View) <Real>
C.TargetX = (X of (Current camera target of player Player)) <Real>
C.TargetY = (Y of (Current camera target of player Player)) <Real>
dx = ((((X of UI Point) / ScreenHalfWidth) - 1.0) * AspectRatio) <Real>
dy = (1.0 - ((Y of UI Point) / ScreenHalfHeight)) <Real>
dist*Tan(FoV) = (C.Dist * (Tan((C.FoV * 0.5)))) <Real>
World Point = No Point <Point>
Actions
Variable - Set World Point = (Point((dx * dist*Tan(FoV)), (dy * dist*Tan(FoV))))
General - Return (Point((C.TargetX + (X of World Point)), (C.TargetY + (Y of World Point))))
I attached the map in case you want to look at it. Ctrl + Shift + L in the Trigger Editor to view the library.
Your solution doesn't actually work, there's a large error between where the real mouse points (nice comparison tool though. Thumbs up to that) and where you calculate it to be. I've attached a copy of your map that moves a unit to your calculated position to illustrate how large the error is. You need to sort this out before you do the transformations to account for yaw/pitch (this is actually quite easy).
My solution is a bit complicated but it works very accurately and doesn't depend on knowledge of how the camera is initially set up due to the use of sampling. Sampling is a nice tool to use since you don't need to know the underlying mechanics of things. As long as you know how something is related, the proportion constants could all be approximated. For instance the getResolution library needed research on finding precalculated factors for different aspect ratios, but you can do all that with high precision w/o any prior knowledge just by sampling 2 mouse clicks.
This applies here because we don't know exactly where in space the screen is. We do know where the eye of the camera, from which rays project, is: it's defined by the spherical coordinates (distance, pitch, yaw) from the camera target. Rays from this reference point pass through the screen in front of it and strike objects in the world. The following diagram illustrates this.
The challenge is to find where this window is. I do this by taking 1 sample mouse click (well, I take many, but really only 1 is needed) with a camera facing straight down at the terrain. If the ray faces straight forward with respect to the camera (let's call left-right X, up-down Y and forward-backward Z), then I am looking straight at the camera target.
If I want to pick a point that's a bit left of the target, I need to rotate my ray so it faces it. But since I don't know where the screen is, I don't know how much to rotate it. So instead I take a sample click at some distance left/right of the screen center and record both the distance moved in screen coords and in world coords. From the figure below you can see that the distance moved in world coords is proportional to the tangent of the angle. Therefore, by storing the proportion constant, I can now project any point from the camera to the world along the X axis: multiply the UI distance by the proportion constant and take the inverse tangent to get my ray.
Side view of the camera looking down at the world, with screen space indicated in red. The black center line is a ray towards the camera target. The green line is the ray rotated about the Y axis (facing towards you; Z is up/down), moving the ray along the X axis (left-right in the image). A linear relationship exists: tan(angle) = c*d. Sampling calculates the proportion constant c, which depends on both the distance to the camera and the FoV.
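The relation in that caption can be turned into code directly. A sketch: `calibrate` assumes one top-down sample click where both the UI offset from the screen center and the resulting world offset from the target were recorded, plus the camera eye height; the function names are mine, not from the map.

```python
import math

def calibrate(ui_dist, world_dist, cam_height):
    """One sample click: solve tan(angle) = c * ui_dist for c.

    ui_dist:    sample click's distance from the screen center (UI units)
    world_dist: the clicked world point's distance from the camera target
    cam_height: camera eye height above the (flat) terrain
    """
    angle = math.atan(world_dist / cam_height)
    # tan(atan(x)) == x, so this reduces to world_dist / (cam_height * ui_dist)
    return math.tan(angle) / ui_dist

def ui_dist_to_angle(c, ui_dist):
    """Turn any UI distance into a ray rotation angle using the constant."""
    return math.atan(c * ui_dist)
```

As the caption notes, c bakes in both the camera distance and the FoV, which is why one click suffices and no knowledge of the camera setup is needed.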
To handle points anywhere on the Y axis I need another rotation. The first one was about the Y axis (green in the figure below) to move the ray left/right; the second is about the Z axis (red) to swing it in a circle towards my point (X). Since the position of the point wrt the Z rotation only depends on relative X/Y, I don't need to store any constants: the distance of point X from the center is how much I rotate about the Y axis, and the ratio Y/X gives how much I rotate about the Z axis.
View of the screen. To get to point X, two rotations of a ray pointing straight down at the camera target are required. The first (green) about the vertical axis rotates the ray along the X axis. The second about the Z axis (red; the Z axis points away from the screen) swings the ray in a circle.
By applying both rotations to a ray from the camera toward the target (in this case, the ray would be [0,0,-1]), you can reach any point in the world.
The rotations are applied wrt the orientation of the camera, not the world space. Therefore you would run into problems if you tried to rotate an arbitrary ray, as the world space and camera space don't agree. Instead, note that the camera's final orientation is achieved by 3 Euler rotations: roll, yaw and pitch, in that order. In the absence of any rotations, the camera always faces [-1,0,0]. This is convenient as it's aligned with the world axes. Therefore, I apply my Y,Z rotations to a ray facing [-1,0,0], then the three Euler rotations, to get a correctly oriented ray.
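That rotation chain, sketched with plain rotation matrices. This follows the conventions described above (camera faces [-1,0,0] with no rotations, roll omitted); the exact multiplication order for the camera's own yaw/pitch may need to match your editor's conventions, so treat it as a sketch.

```python
import math

def rot_y(a):
    """Rotation matrix about the Y axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    """Rotation matrix about the Z axis."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def screen_ray(screen_angle, screen_spin, yaw, pitch):
    """World-space ray toward a screen point.

    screen_angle: Y rotation toward the point (from the calibration constant)
    screen_spin:  Z rotation swinging the ray in a circle around the center
    yaw, pitch:   the camera's own orientation (roll assumed zero)
    """
    ray = [-1.0, 0.0, 0.0]                   # camera facing with no rotations
    ray = mat_vec(rot_y(screen_angle), ray)  # step toward the screen point
    ray = mat_vec(rot_z(screen_spin), ray)   # spin around the view axis
    ray = mat_vec(rot_y(pitch), ray)         # camera pitch
    ray = mat_vec(rot_z(yaw), ray)           # camera yaw
    return ray
```

With all angles zero the ray is [-1,0,0]; pitching the camera a quarter turn down gives [0,0,-1], the straight-down ray mentioned above.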
And that's how my system works. Not sure if it's the best solution, but it works. :)
Are you sure that's not just because the aspect ratio was wrong? I had it hardcoded to 1920x1080 in that test map. When I test the map you uploaded, the marine is directly under the mouse cursor at all times.
I should of course use a calibration method to get the correct aspect ratio, but I'll worry about that after the conversion function works properly with a predefined aspect ratio.
Thanks for the writeup on how your calibration method works. I've read through it a few times and understand bits of it. I want to focus on one thing at a time though, so I'll come back to the calibration later. I was thinking it should be possible to sample mouse move events and get the aspect ratio from those as well, but I don't know enough about it yet.
You're saying the transformation to account for yaw and pitch is easy. How would I do that exactly?
When I run the map it's off, and the amount it's off by changes near the corners of the map. Make sure the marine is still right under the cursor at the far edges of the map too.
As for the transformation, my ray is represented by a vector that points from the camera eye. I look at where it hits z=0 to find the terrain. To rotate the line into the correct orientation, only the vector needs to be rotated. When the camera has 0 yaw/pitch/roll, it points at [-1,0,0] in world coordinates. To get the facing vector of the camera after yaw/pitch are applied, you multiply it by the correct transformation matrix. In my map it's called something like calibrationEulerMatrixNoRoll (the matrix is generated by multiplying the rotation matrices for Y*Z, in that order).
In other words, if you want to use the transformation matrix in my example map, you need to construct a vector from the dx,dy relative to a camera target that faces [-1,0,0] rather than [0,0,-1] as in your example map. So dx would alter the y component and dy would alter the z component of the unit vector.
I think [-z,dx,dy] would do for the vector (I don't think it needs to be unit length). Multiply that by the EulerMatrixNoRoll as I did somewhere in the latter half of the big camera trigger and see where that vector intersects the ground when it starts from the camera eye.
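That last step, intersecting the rotated ray with the ground starting from the camera eye, is a plain ray-plane test. A sketch under the thread's flat-terrain assumption (z = 0 is the ground):

```python
def ray_ground_hit(eye, ray):
    """Where the ray from the camera eye hits the plane z = 0.

    eye: (x, y, z) camera eye position, z > 0
    ray: (dx, dy, dz) direction; need not be unit length
    Returns (x, y), or None if the ray never reaches the ground.
    """
    if ray[2] >= 0:
        return None                  # parallel to or away from the ground
    t = -eye[2] / ray[2]             # solve eye.z + t * ray.z == 0
    return (eye[0] + t * ray[0], eye[1] + t * ray[1])
```

Because only the ratio of the components matters here, it indeed doesn't matter whether the ray is unit length.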
EDIT: I see the flaw in your approach. It may appear to work because you are looking straight down at the ground and the relationship between UI dx,dy and world dx,dy is close to linear near the center, but everywhere else it's not. It's only truly linear in an orthographic view, not in perspective. Your approach assumes a purely linear relationship right here:
Variable - Set World Point = (Point((dx * dist*Tan(FoV)), (dy * dist*Tan(FoV))))
where dist*Tan(FoV) is a constant
At the edges of the map you'll see the nonlinearities appear, and as soon as you turn the pitch up, that approach won't work at all. I'll see if I can fix up my script and package it into a library.
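The pitch sensitivity is easy to check numerically: compare the linear rule against a true ray-ground intersection. This is a 2D side-view sketch under the flat-terrain assumption, with the camera distance taken as the eye height for the straight-down case; the function names are mine.

```python
import math

def world_offset_true(ui_frac, h, pitch_deg, fov_deg):
    """Ground offset where the view ray for a screen position hits z = 0.

    ui_frac: vertical screen position; 0 = center, 1 = screen edge
    h:       camera eye height; pitch_deg 90 means looking straight down
    """
    pitch = math.radians(pitch_deg)
    half_fov = math.radians(fov_deg) / 2.0
    # Angle of the ray below the horizontal, per-pixel angle included.
    ray_angle = pitch - math.atan(ui_frac * math.tan(half_fov))
    return h / math.tan(ray_angle)

def world_offset_linear(ui_frac, dist, fov_deg):
    """The linear rule from the trigger: offset = ui * dist * tan(FoV/2)."""
    return ui_frac * dist * math.tan(math.radians(fov_deg) / 2.0)
```

Looking straight down (pitch 90) the two agree at every screen position, but at pitch 60 the true offset at the screen edge is several times the linear prediction, which is exactly the failure described above.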
Say we have 2 variables:
Screen_Height = 1150
Screen_Width = 1150
You need those values to convert between UI and world coordinates. A map has the same number of coordinates as its size, so a 168x168 map has 168 coordinates on X and Y. So the other two values we need are your map size's X and Y; if your map is 168x168, that's 168x168. Got it?
Now make 2 new variables
Multiplier_X
Multiplier_Y
They will be:
Multiplier_X = Screen_Width/Map_Size_X
Multiplier_Y = Screen_Height/Map_Size_Y
Now we have a multiplier, which is the ratio between UI and world coordinates. Say you have a unit standing in the middle of the map. To convert that to the middle of the screen (in UI) we compute:
Unit's X * Multiplier_X
Unit's Y * Multiplier_Y
I hope you get what I mean; I wasn't quite sure if this was what you needed. But this can be used for creating custom minimaps and such. I've done it and it's much more awesome than the normal minimap :)
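The scheme above in a couple of lines of Python (a sketch of the world-to-minimap direction; note this is a plain linear scale, so it only matches a fixed top-down overview, not the perspective camera discussed earlier in the thread):

```python
def world_to_minimap(unit_x, unit_y, map_w, map_h, screen_w, screen_h):
    """Scale a world position into minimap pixel coordinates."""
    multiplier_x = screen_w / map_w   # Multiplier_X from the post
    multiplier_y = screen_h / map_h   # Multiplier_Y from the post
    return (unit_x * multiplier_x, unit_y * multiplier_y)
```

With the numbers from the post (1150x1150 UI panel, 168x168 map), a unit in the middle of the map lands in the middle of the panel.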