Actually it is exactly as I said: leaks cause long exit times. The game has trouble destroying the game state because leaks have made it so complex.
However, what I did not know was that CatalogFieldValueSet used like this causes a leak. Look at the memory used by SC2 under the following test conditions:
CatalogFieldValueSet calls -> memory used -> time to exit
10,000 -> 950 MB -> 3 seconds
100,000 -> 1,000 MB -> 9 seconds
1,000,000 -> 1,500 MB -> too long to measure (>>5 minutes)
The way it degrades like this points towards a cleanup algorithm of O(n^2) or worse complexity. My guess is that the catalog natives are implemented by creating a new entry for the player which takes precedence over the currently used entry. Although the new entry supersedes the existing one, the existing one is never removed, resulting in a leak. When you exit a session, all of these entries have to be cleaned up.
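To make that guess concrete, here is a minimal C++ sketch of what such an append-only override list might look like. All types and names here are invented for illustration; this is not Blizzard's actual implementation.

```cpp
// Hypothetical sketch of the suspected internal behaviour: each set call
// appends a new override that shadows earlier ones instead of replacing them.
#include <string>
#include <vector>

struct FieldOverride {
    std::string fieldPath;  // e.g. a catalog field path
    std::string value;
};

struct PlayerCatalogOverrides {
    std::vector<FieldOverride> entries;

    // Suspected bug: every set appends a new entry and never erases the
    // shadowed one, so repeated CatalogFieldValueSet calls on the same
    // field grow the list without bound.
    void set(const std::string& fieldPath, const std::string& value) {
        entries.push_back({fieldPath, value});
    }

    // Lookup scans from the back so the newest entry takes precedence.
    const std::string* get(const std::string& fieldPath) const {
        for (auto it = entries.rbegin(); it != entries.rend(); ++it) {
            if (it->fieldPath == fieldPath) return &it->value;
        }
        return nullptr;  // not overridden; fall back to base catalog data
    }
};
```

A layout like this would still behave correctly in game, since lookups always find the newest entry first, which would explain why the leak only shows up as memory growth and exit stalls rather than wrong values.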
Why the cleanup has O(n^2) or worse complexity I do not know. My guess is that it destroys the entries in the inverse order of creation using a linear search, possibly to maintain base data integrity for caching purposes. However, this really is a wild guess.
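As a toy illustration of how that guess would go quadratic, assuming the entries sit in a container with no index on creation order:

```cpp
// Reverse-creation-order teardown with a linear search is O(n^2):
// each of the n removals scans up to n entries to find its target.
// This is a guess at the shape of the problem, not the engine's real code.
#include <list>

struct Entry {
    int creationId;  // assigned sequentially as entries are created
};

void teardownInReverseOrder(std::list<Entry>& entries, int nextId) {
    while (!entries.empty()) {
        --nextId;  // destroy the most recently created entry first
        for (auto it = entries.begin(); it != entries.end(); ++it) {
            if (it->creationId == nextId) {
                entries.erase(it);  // O(n) search per removal => O(n^2) total
                break;
            }
        }
    }
}
```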
I would recommend reporting this as a bug to Blizzard; it really should not be leaking like this. I can understand creating data structures for a per-player data entry, but if you change that entry the old data structures should be discarded rather than retained.
Yes, those can leak. However, the amount that can leak should be limited to only a few hundred or thousand before some limit is reached and errors start being printed.
As for memory, you will be after the "Commit size" column in Task Manager. That column might not be visible by default in the processes view and needs to be added manually (right click on the header and choose which columns to show). Commit size represents the amount of committed virtual memory for a process, and hence the amount of data that must be backed somewhere in the system, whether by physical RAM or the page file.
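If you would rather log it programmatically than eyeball Task Manager, the same figure is exposed through the Win32 PSAPI as PrivateUsage. A minimal self-contained sketch below queries its own process; to inspect SC2 you would OpenProcess its PID with query rights and pass that handle instead.

```cpp
// Reads the process commit charge ("Commit size" in Task Manager) via PSAPI.
// Link against psapi.lib.
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <psapi.h>
#include <cstdio>

int main() {
    PROCESS_MEMORY_COUNTERS_EX pmc = {};
    if (GetProcessMemoryInfo(GetCurrentProcess(),
                             reinterpret_cast<PROCESS_MEMORY_COUNTERS*>(&pmc),
                             sizeof(pmc))) {
        // PrivateUsage is the commit charge in bytes.
        std::printf("Commit size: %zu MB\n",
                    static_cast<size_t>(pmc.PrivateUsage) / (1024 * 1024));
    }
    return 0;
}
```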
Before exiting the session, what is the virtual address size of StarCraft II?
Such long unload times would be the result of an excessively complex game state being cleaned up. For example, if a lot of data leaked during the session (was not destroyed when it should have been) and is then flushed on exit, it will cause a massive stall. Warcraft III suffered something similar if you leaked a lot of objects, however its severity was limited because in-game performance degraded and the game ultimately crashed.
The 64-bit build of StarCraft II is pretty much limited only by the amount of virtual memory your system has. If the game state complexity exceeds the available physical memory then paging will occur. If that complexity is not actively used, e.g. leaked data, the paging will not cause a noticeable performance decrease. However, when the complexity is cleaned up on exiting a session, all that data has to be paged in again, which can take a long time on mechanical drives. Worse, page thrashing could occur since the cleanup might not access memory sequentially.
I recommend checking out actors in case there are some logical leaks there. Although situations like orphaned and duplicate actors should throw an error, they might not in some cases. Also, actor leaks might not cause performance degradation, as invisible actors with no assets could persist in the actor system, adding overall complexity while not requiring any computational maintenance.