View in #cornerstone3d on Slack
@Adrien: Hello
I’m trying to optimize resource usage in Cornerstone when displaying large CT/MRI stacks in a volume viewport using the streaming loader. I’ve observed the following:
• once the volume data has been streamed to the WebGL context, we keep the data in CPU memory as well. I can of course purge the volume from the cache, but not while displaying it (otherwise various tools break). At the expense of having to re-load and re-decode pixel data when needed, it feels like we could avoid keeping an ArrayBuffer with the entire volume data in CPU memory, since the individual frames get streamed into the volume texture using texSubImage3D (see the sketch after this list)? Note that my understanding of things is still quite limited, so I might be speaking pure nonsense here
• it seems that the decoder web workers don’t release memory when they’re done? Maybe there’s a missing “hook” between the native code and the wasm runtime? I hack around this by killing idle workers (since they get respawned on demand), but maybe there’s something smarter to do here?
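For context, here is a rough WebGL2-level sketch of the per-frame upload I have in mind (plain WebGL, not the actual Cornerstone/vtk.js code path; the function name, the R16I format, and the signed 16-bit slice type are just assumptions):
```ts
// Rough sketch only: upload one decoded slice into an existing 3D texture and
// then drop the CPU-side copy. The texture is assumed to have been allocated
// up front with gl.texStorage3D(gl.TEXTURE_3D, 1, gl.R16I, width, height, numFrames).
function uploadFrame(
  gl: WebGL2RenderingContext,
  volumeTexture: WebGLTexture,
  frameIndex: number,
  width: number,
  height: number,
  pixels: Int16Array // one decoded slice of signed 16-bit CT values
): void {
  gl.bindTexture(gl.TEXTURE_3D, volumeTexture);
  gl.texSubImage3D(
    gl.TEXTURE_3D,
    0,           // mip level
    0, 0,        // x, y offsets
    frameIndex,  // z offset = slice index
    width, height,
    1,           // depth: a single slice
    gl.RED_INTEGER,
    gl.SHORT,
    pixels
  );
  // After this call the driver owns the data, so `pixels` can be released;
  // no full-volume ArrayBuffer has to stay resident in CPU memory.
}
```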
@Bill_Wallace: There is an entire change being looked at to allocate only WebGL memory and use single-image array buffers in CPU memory, allowing those to be de-allocated to make room for loading other data. Definitely something we want to look at, but funding for this is somewhat limited.
For the web workers, yes, they allocate memory and don’t deallocate it. If you are quite concerned about memory, then it might be worth having a higher peak decoder count and automatically de-allocating a decoder when there isn’t work for it. Would welcome a PR in that area.
@Adrien: Cool, thanks for the detailed answer!
• regarding memory allocation: would the volume object in the cache then only hold onto the gl texture object? But then on mobile the gl context can go away, right? I suppose it also involves changes in vtk.js?
• For workers: a first easy change would be to keep the “idle time” on worker objects (basically a timestamp of the last time the worker went from busy to idle) and expose it with the statistics. With that in place, writing a reliable worker janitor becomes easy; a sketch follows below. If the approach looks sound to you I could try to come up with a PR.
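Something along these lines, just to illustrate the bookkeeping (hypothetical names, not the current worker-manager API):
```ts
// Sketch of the proposed bookkeeping; WorkerEntry, workerPool and workerStats
// are hypothetical names, not the existing Cornerstone worker-manager API.
interface WorkerEntry {
  worker: Worker;
  busy: boolean;
  idleSince: number | null; // timestamp of the last busy -> idle transition
}

const workerPool: WorkerEntry[] = [];

function onTaskStarted(entry: WorkerEntry): void {
  entry.busy = true;
  entry.idleSince = null;
}

function onTaskFinished(entry: WorkerEntry): void {
  entry.busy = false;
  entry.idleSince = Date.now();
}

// Expose idle times alongside the existing statistics so a "janitor" can
// decide which workers are safe to terminate.
function workerStats() {
  return workerPool.map((entry, index) => ({
    index,
    busy: entry.busy,
    idleMs: entry.idleSince === null ? 0 : Date.now() - entry.idleSince,
  }));
}
```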
@Bill_Wallace: The gl context can go away on both mobile and desktop, and losing it would require reloading the image data, but that is the tradeoff one pays to reduce total memory usage.
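For reference, the browser side of that looks roughly like the standard context-loss handling below (plain canvas events, not Cornerstone code; reloadAndReuploadVolume is a placeholder for whatever re-fetch/re-decode path gets used):
```ts
// Sketch of standard WebGL context-loss handling; reloadAndReuploadVolume is a
// hypothetical callback standing in for the re-load + re-decode path.
function watchContextLoss(
  canvas: HTMLCanvasElement,
  reloadAndReuploadVolume: () => Promise<void>
): void {
  canvas.addEventListener('webglcontextlost', (event) => {
    // Prevent the default handling so the browser may restore the context later.
    event.preventDefault();
  });

  canvas.addEventListener('webglcontextrestored', () => {
    // Every texture (including the volume) is gone at this point, so the image
    // data has to be re-loaded and re-decoded; that is the tradeoff.
    void reloadAndReuploadVolume();
  });
}
```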
Yes, that approach with an idle time sounds reasonable. There should probably be settings to control how aggressively to retire web workers.
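Roughly this kind of shape, reusing the idle-time bookkeeping sketched above (the option names and retireIdleWorkers are placeholders, not an existing API):
```ts
// Sketch only: settings and a periodic janitor that retires idle workers.
interface WorkerRetirementSettings {
  maxIdleMs: number;       // terminate a worker idle longer than this
  minWorkersKept: number;  // keep a few warm to avoid respawn latency
  checkIntervalMs: number; // how often the janitor runs
}

const retirementSettings: WorkerRetirementSettings = {
  maxIdleMs: 30_000,
  minWorkersKept: 1,
  checkIntervalMs: 10_000,
};

function retireIdleWorkers(
  pool: { worker: Worker; busy: boolean; idleSince: number | null }[],
  settings: WorkerRetirementSettings
): void {
  const now = Date.now();
  // Walk backwards so removing entries does not skip any.
  for (let i = pool.length - 1; i >= 0 && pool.length > settings.minWorkersKept; i--) {
    const entry = pool[i];
    if (
      !entry.busy &&
      entry.idleSince !== null &&
      now - entry.idleSince > settings.maxIdleMs
    ) {
      entry.worker.terminate(); // frees the worker's wasm heap; it respawns on demand
      pool.splice(i, 1);
    }
  }
}

// e.g. setInterval(() => retireIdleWorkers(workerPool, retirementSettings),
//                  retirementSettings.checkIntervalMs);
```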