I initially found this bug because I realized my deep learning training loop was leaking VRAM: Torch tensors were being kept in memory after Figure.savefig was called.

I was under the impression that the ExitStack inside savefig is meant to work around this issue, but it somehow isn't doing so in my case.

Having to call gc.collect() manually is not acceptable; most users won't be aware of the issue. Given the prevalence of deep learning these days, objects leaking into GPU memory are likely to be a common consequence of this bug.
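Until the leak is fixed, the only mitigation seems to be exactly the one criticized above: an explicit gc.collect() after each save. A minimal sketch of that workaround, using a large bytearray as a stand-in for a GPU tensor and illustrative names/paths not taken from the original code:

```python
import gc
import os
import tempfile

import matplotlib
matplotlib.use("agg")  # headless stand-in; the report used TkAgg
import matplotlib.pyplot as plt


def save_loss_curve(losses, path):
    # Stand-in for a Torch tensor that the leaked frame would pin in (V)RAM.
    big_buffer = bytearray(10**7)
    fig, ax = plt.subplots()
    ax.plot(losses)
    fig.savefig(path)
    plt.close(fig)


path = os.path.join(tempfile.gettempdir(), "loss.png")
for step in range(3):
    save_loss_curve([1.0, 0.5, 0.25], path)
    gc.collect()  # workaround: break the reference cycles savefig left behind
```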
Bug summary
Calling plt.savefig leaks local variables from parent calling functions. These can be found in gc.get_objects() and are cleared by gc.collect().

Code for reproduction
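The original reproduction script did not survive this capture; below is a minimal sketch of the reported pattern. The sentinel class, function name, and output path are stand-ins for whatever the original script used.

```python
import gc
import os
import tempfile

import matplotlib
matplotlib.use("agg")  # headless stand-in; the report used TkAgg
import matplotlib.pyplot as plt


class Leaked:
    """Sentinel type so instances are easy to find in gc.get_objects()."""


def plot_and_save(path):
    obj = Leaked()  # local that should die when this frame exits
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3])
    fig.savefig(path)
    plt.close(fig)


path = os.path.join(tempfile.gettempdir(), "repro.png")
plot_and_save(path)

# On an affected version, savefig keeps the calling frame alive through a
# reference cycle, so the local is still reachable here:
leaked = [o for o in gc.get_objects() if isinstance(o, Leaked)]
if leaked:
    print("Object is leaking")

leaked.clear()
gc.collect()  # breaking the cycles releases the object
```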
Actual outcome
Object is leaking

Expected outcome
Nothing should be printed.
Additional information
The leak also reproduces with the agg backend.

Operating system
Win10 x64
Matplotlib Version
3.7.1
Matplotlib Backend
TkAgg
Python version
3.10.0
Jupyter version
No response
Installation
pip