How to release occupied GPU memory when calling a Keras model under Apache + mod_wsgi + Django?
My server configuration is as follows: Apache 2.4.23 with mod_wsgi 4.5.9. We call a Keras deep learning model through the Django framework running under Apache. After the model is called successfully, it stays resident in GPU memory, and that memory cannot be released except by shutting down the Apache server. So, is there any way to release the GPU memory after calling a Keras model under Apache + mod_wsgi + Django? Thanks!

[Screenshot: runtime GPU memory footprint]
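One direction I have been considering (a sketch only, not yet verified in my setup) is letting mod_wsgi recycle the daemon process after a fixed number of requests, so that the GPU memory held by the worker is freed when the process exits. The process-group name, request count, and paths below are placeholders, not my actual configuration:

```apache
# Run the Django app in a dedicated daemon process group and
# restart each worker after 100 requests; when the worker process
# exits, the CUDA context it holds is destroyed and its GPU
# memory is released.
# "kerasapp" and the paths are placeholders.
WSGIDaemonProcess kerasapp processes=1 threads=1 maximum-requests=100
WSGIProcessGroup kerasapp
WSGIScriptAlias / /path/to/project/wsgi.py
```

The obvious trade-off is that the model must be reloaded onto the GPU after each restart, so the first request served by a fresh worker would be slow.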