If a Python script runs in the test environment but not on the production server, chances are that the environments differ: during testing an extra 'pip install' is easily forgotten. Usually the script complains and points you to the missing package, but this is not always the case.
I found some code to check which packages are installed, and adapted it slightly:
import sys
import pkg_resources

# Interpreter version
print("=== python version:")
print(sys.version_info)

# All packages visible to this interpreter, as "name==version", sorted alphabetically
print("=== python packages:")
dists = [d for d in pkg_resources.working_set]
installed_packages_list = sorted("%s==%s" % (d.key, d.version) for d in dists)
for i, dist in enumerate(installed_packages_list):
    print(str(i) + ' - ' + dist)
It turned out that many more packages were installed in the (MS-SQL) Python environment on the server, and almost none of the packages from the test environment matched: they were either a different version (often newer on the test environment) or not installed on the server at all.
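To spot those differences quickly, the output of the listing above (or of pip freeze) from both machines can be compared with a few lines of Python. This is just a sketch of my own; the file names server.txt and local.txt are placeholders for wherever you saved the two package lists:

# Sketch: compare two "pip freeze"-style dumps, e.g. saved as server.txt and local.txt
def read_freeze(path):
    # Return {package: version}, skipping blank lines and comments
    pkgs = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                pkgs[name.lower()] = version
    return pkgs

server = read_freeze("server.txt")   # placeholder file name
local = read_freeze("local.txt")     # placeholder file name

for name in sorted(set(server) | set(local)):
    s, l = server.get(name, "missing"), local.get(name, "missing")
    if s != l:
        print(name + ": server=" + s + " local=" + l)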
The first AI model ran on both, however; I only ran into problems with newer AI models.
What I didn't know is that pip offers a way to 'sync' environments:
- Run pip freeze > requirements.txt on the remote machine
- Copy that requirements.txt file to your local machine
- Important: if no GPU is present on the target machine, change tensorflow-gpu to tensorflow (see the sketch after this list)
- In your local virtual environment, run pip install -r requirements.txt
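The tensorflow-gpu change from the third step can of course be done in a text editor, but it can also be scripted. A minimal sketch, assuming the file is called requirements.txt and sits in the current directory:

# Sketch: rewrite requirements.txt so that tensorflow-gpu becomes plain tensorflow
with open("requirements.txt") as f:
    lines = f.read().splitlines()

fixed = []
for line in lines:
    if line.startswith("tensorflow-gpu"):
        # keep the pinned version, only change the package name
        fixed.append(line.replace("tensorflow-gpu", "tensorflow", 1))
    else:
        fixed.append(line)

with open("requirements.txt", "w") as f:
    f.write("\n".join(fixed) + "\n")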
The version below is important to avoid errors caused by the absence of a GPU on the server (so I guess this can also be pinned in the requirements.txt file):
python -m pip install --upgrade keras==2.1.3
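As noted above, the same pin can presumably also just go into requirements.txt, so everything comes in with a single pip install -r requirements.txt:

keras==2.1.3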