I'm using many timers and I heard they cause lag... :( It's not clear to me what is causing lag, though. The number of timers, the frequency, or the type?
I have:
- A chase camera trigger that runs every 0.05 seconds. It contains three if/then clauses, which check whether the camera is locked because the player is dead, is manning a static gun, or has disabled the camera-rotate function. The trigger is also turned off when the game ends, so the camera can pan to either base blowing up.
- Three different triggers that run every second and add minerals, tick down the bloodlust counter for infested players and give orders to groups of NPC units. They're separate in order to avoid a long if/then/else chain; if one is not applicable the trigger is turned off and out of the way entirely.
- A few triggers that run every 5 or 10 seconds.
There are, however, only 5 players.
I could replace the chase cam with a loop with a wait in it (or no wait, so it runs at max speed?) but how would I turn it off without another if/then that checks for a boolean? Are the other triggers important to optimise?
Premature optimization is the root of all evil. Are you actually experiencing lag, or are you just worried that it might happen?
The only thing I'd suggest is not having a trigger run every 0.05 seconds. If its sole purpose is to have the camera track a unit then there's a Make Camera Follow Unit function which does the same thing.
I don't know of timers causing lag (unless, of course, you have a huge number of actions on a 0.01-second timer).
What they do, however, is increase the memory load of the game, which can, in turn, lead to lag on weak computers after a while.
The way to deal with that is to remove the periodic timer event and use a loop inside the trigger instead; most triggers can be converted from a periodic-event form to a loop-with-wait form.
That would be especially good for the 0.05-second camera trigger. It's not as important for the 5-10 second triggers. But in general it's always better to use loops instead of periodic events, unless there's something you can't do in a loop.
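For illustration, here is a minimal Python analogy of that loop-with-wait pattern (it also answers the question above about turning the loop off: a boolean flag in the while condition replaces the Turn Trigger Off action). The names and the threading setup are stand-ins for the idea, not actual Galaxy API:

```python
import threading
import time

camera_loop_enabled = True   # hypothetical flag; flip it to end the loop
ticks = 0                    # stand-in for the per-tick camera work

def chase_camera_loop():
    """One long-running routine instead of a trigger re-fired by a
    periodic event every 0.05 seconds."""
    global ticks
    while camera_loop_enabled:   # the flag replaces 'Turn (this trigger) Off'
        ticks += 1               # here the real map would update the camera
        time.sleep(0.05)         # analogous to Wait(0.05, Game Time)

t = threading.Thread(target=chase_camera_loop)
t.start()
time.sleep(0.2)               # let it run a few ticks
camera_loop_enabled = False   # 'turn the trigger off'
t.join()
```

In Galaxy the same shape is a while loop around the actions with a Wait at the bottom; when the flag goes false, the loop simply ends on its own, so no extra if/then block is needed.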
PS: Regarding the post above - there's nothing wrong with a 0.05-second loop. Galaxy is a rather fast scripting language and can deal with it very easily. So if you need it, keep it. Don't increase the wait time just because you're afraid of a problem that might not even exist.
That wasn't my point. I meant that he might be using the trigger to do something that could easily be accomplished another way :)
Can you explain why loops perform better than timed triggers? Just curious :)
I have done some testing and discovered that every time a trigger fires, the memory consumption of SC2 goes up a bit, but never decreases again (unlike with local variables, for example). I think the memory generally increases by 4 bytes with every trigger call. This memory is only cleared once you close the game, or maybe when you close the map, too.
Having a 0.05-period trigger would eat about 5 KB every minute. Now, that isn't very much, but with several low-period timers it adds up.
One shouldn't absolutely panic because of that, but still - it's better to reduce the usage if you don't need it.
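The per-minute figure follows directly from the 4-bytes-per-call estimate; a quick back-of-the-envelope check in Python:

```python
period = 0.05        # seconds between firings of the camera trigger
bytes_per_call = 4   # estimated memory growth per trigger call (from the post above)

calls_per_minute = round(60 / period)                 # 1200 firings a minute
growth_per_minute = calls_per_minute * bytes_per_call

print(growth_per_minute)   # 4800 bytes, i.e. roughly the quoted 5 KB per minute
```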
So if you play the game for 6 hours nonstop, you lose 1.8 MB? Hmm. :D I'd run out of memory after 568 days of playing. Still, the peace of mind of having less network traffic should be worth something. (It should only need to send 'pan the camera', not 'I fired a trigger: pan the camera'.)
I had a concern about waits, though: a timer-driven trigger does its thing every few ticks, then shuts off. A trigger with a wait in it... would it sit there and take up resources in the meantime?
......
I don't have lag yet because I can't get on battle.net! :) Also, lag has traumatised me: I had to throw a half-done WASD-based competitive map into the garbage bin thanks to battle.net. I'd rather not do that twice, not after 3 months of development.
......
Maybe the camera trigger itself can be optimised. It consists of several if/then statements:
Player Group - Pick each player in grpPlayersInGame and do (Actions)
    Actions
        General - If (Conditions) then do (Actions) else do (Actions)
            If
                boolCannotRotateCam[(Picked player)] == false  <-- cinematic mode or player is in a building
            Then
                General - If (Conditions) then do (Actions) else do (Actions)
                    If
                        boolChickenCamOn[(Picked player)] == false  <-- player set a UI toggle to disable chase cam and use a standard isometric cam
                    Then
                        Camera - Apply camera object Rotation (Facing of unitCurrentUnit[(Picked player)]) for player (Picked player) over 0.5 seconds with Existing Velocity% initial velocity and 10% deceleration
                    Else
                        General - If (Conditions) then do (Actions) else do (Actions)
                            If
                                boolIsInfested[(Picked player)] == false  <-- isometric cam can face both ways depending on which direction the player has to go
                            Then
                                Camera - Apply camera object Rotation 90.0 for player (Picked player) over 0.5 seconds with Existing Velocity% initial velocity and 10% deceleration
                            Else
                                Camera - Apply camera object Rotation 270.0 for player (Picked player) over 0.5 seconds with Existing Velocity% initial velocity and 10% deceleration
            Else
                General - If (Conditions) then do (Actions) else do (Actions)
                    If
                        boolCannotPanCam[(Picked player)] == false  <-- cinematic mode
                    Then
                        Camera - Pan the camera for player (Picked player) to (Position of unitCurrentUnit[(Picked player)]) over 0.2 seconds with Existing Velocity% initial velocity, 10% deceleration, and Do Not use smart panning
                    Else
Does anything jump out as being fps/lag unfriendly?
Yea, as I said it isn't much, but keep in mind other applications need memory as well - and SC2 itself eats about 500MB too. So you'd already run out of memory in 1 year D:
(And as I said, multiple of these triggers do add up and could cause high delays when closing the map - probably because SC2 needs to clean the memory).
Anyway, that trigger looks fine, but if you're gambling on getting rid of BNET delay through trigger optimization, I'll have to disappoint you.
Even the most optimized WASD system will have unusually high delay online. It's just like that at the moment. Nothing we can do about it; Blizzard has to take care of that.
So it wasn't your fault that this half-done map didn't work well on BNET :p
How did you test? I just created a test map with 200 copies of a trigger which fires 1000 times per second and contains an if/then/else statement which always sends a text message to all players. I let it run for 5 minutes and couldn't see an increase in memory usage, but it should have increased by:
1000 fires per second * 200 trigger copies * 300 seconds * 4 bytes per fire / 1024 / 1024 = 228 MB
But maybe copies of a trigger are not handled as whole new triggers? Or it isn't possible to fire a trigger 1000 times per second? Otherwise I am a bit surprised by the result. On the test map there is also a shitload of units fighting and I didn't experience any lag at all; I thought something like this would consume more computer resources.
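The arithmetic in that estimate is sound, so any missing growth has to come from the assumptions rather than the math. Checking it in Python:

```python
fires_per_second = 1000   # assumed firing rate per trigger copy
copies = 200
seconds = 5 * 60          # five-minute test run
bytes_per_fire = 4        # estimated memory growth per trigger call

expected_bytes = fires_per_second * copies * seconds * bytes_per_fire
expected_mb = expected_bytes / 1024 / 1024
print(round(expected_mb))  # 229 - matching the ~228 MB figure above
```

If the engine only evaluates periodic triggers once per game loop rather than truly 1000 times per second (which the post itself suspects), the real firing rate - and therefore any growth - would be orders of magnitude lower.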
Increased memory usage != lag.
Lag only appears once your free memory starts to deplete.
Anyway, I made a test map with nothing going on (as little disturbance as possible) and added a trigger with periodic calls. I measured the memory increase with the Task Manager. Not the best tool, but the best I had available. I turned the triggers on/off during runtime, added others, and I think I also tried different events.
I just did some more testing and have to revise my prediction, though.
It seems the memory usage for triggers only increases for a certain time. After a while the memory doesn't increase anymore. So maybe the triggers fill some sort of stack, and once this stack is filled it starts overwriting. The memory only increases by a couple of hundred KB.
I guess I have to thank you for making me check that again. So it seems periodic triggers are less annoying than I believed them to be.
PS: It could also be that they fixed that in the last couple of patches. That would at least explain why I was so horribly wrong in the first place.
Why is it that WASD lags exactly? Is it the timers required? Or something else? And what else would that apply to?
The game client seems to try to keep events in sync across all players; network latency slows it down at that point - it might even have to wait for the other players' game clients to acknowledge the key press.
Galaxy is garbage collected, so memory is going to increase until the next time the gc runs.
Doesn't garbage collection usually kick in once a memory address is orphaned?
Anyway - it doesn't make sense that the memory usage just stops increasing at one point. If the collector periodically cleaned the memory, it would rise and fall in turn, wouldn't it?
That's just one type of garbage collection (reference counting) - a lot of really common GC implementations perform periodic cleanup (most Java stacks, for instance), which is why you sometimes experience random pauses when using Java apps.
Given your evidence, you're probably right that SC2's using some sort of reference counting method, but it could be the case that the game also has a pool of memory that stays allocated and gets reused as needed.
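CPython is a concrete example of exactly this hybrid: reference counting frees most objects the moment they're orphaned, while a separate collector runs periodically just to catch reference cycles. A small demonstration (CPython-specific behaviour, offered only as an analogy - we know nothing about SC2's internals):

```python
import gc

class Node:
    pass

gc.collect()        # start from a clean slate

# Reference counting: freed the instant the last reference goes away.
a = Node()
a = None            # refcount hits zero -> reclaimed immediately, no GC pass needed

# A reference cycle defeats pure refcounting...
b = Node()
b.self_ref = b      # the object now references itself
b = None            # refcount never reaches zero on its own

# ...so the cycle collector has to reclaim it.
unreachable = gc.collect()
print(unreachable >= 1)   # True: the cycle was found and freed
```

The "memory rises, then plateaus" pattern observed earlier in the thread is equally consistent with the pooled-allocator idea: memory that stays allocated to the VM and gets reused rather than returned to the OS.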
Quote from s3rius:
Doesn't garbage collection usually kick in once a memory address is orphaned?
Anyway - it doesn't make sense that the memory usage just stops increasing at one point. If the collector periodically cleaned the memory, it would rise and fall in turn, wouldn't it?
----
No, garbage is collected whenever the collector runs. Though technically reference counting is a form of garbage collection, generally when people talk about GC they mean more than that (which is why C++ is not generally considered a garbage-collected language even when strict RAII is used). And most likely memory allocated to the Galaxy VM is not released until the VM is restarted, which is why the apparent memory use will only go up over the course of a map running. This is NOT a memory leak.
Do we know what style of GC algorithm the engine uses? (Mark & sweep? Generational?)
@Nevir27:
AFAIK they haven't released those kinds of technical details on the language implementation.