Since most of our clients already work on Microsoft cloud services (our DMS runs on SharePoint), it seemed natural to host the AI in Azure ML.
I created a resource group dedicated to the AI components, a workspace, and a Jupyter Notebook virtual machine, and started testing. I uploaded my own Python helper files and began testing the main script part by part. I was pleased to see that many of the imports I use in my Python code were already available. To install or update packages from a Jupyter cell you can use:
!pip install --upgrade pip
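A caveat with `!pip` is that it runs whatever `pip` is on the VM's PATH, which is not always the kernel's own environment. A safer variant, sketched here with a hypothetical helper name, installs via the running interpreter:

```python
import subprocess
import sys

def pip_install(package: str) -> None:
    """Install or upgrade a package into the notebook kernel's environment.

    sys.executable points at the interpreter the kernel is running on,
    so the package ends up where `import` will actually find it.
    """
    subprocess.check_call(
        [sys.executable, "-m", "pip", "install", "--upgrade", package]
    )

# Example (hypothetical package name):
# pip_install("pyodbc")
```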
An issue appeared when I tried to use pyodbc: no error on the 'import pyodbc', but when trying to use it I got an exception saying that the driver file could not be found.
After much research it turned out that the VM is in fact a Linux machine, and from the Azure portal one can also start JupyterLab, which has the option to open a Linux terminal. I looked up how to install ODBC on Linux, guessed that it was Ubuntu and, lo and behold, the following lines installed the ODBC driver:
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install msodbcsql17

Apart from a small hiccup (an error when starting training because the number of classes was too small) I was able to run the model.
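On Linux, installed ODBC drivers register themselves in /etc/odbcinst.ini, so a quick stdlib-only check confirms whether the install actually took (falling back to an empty list on machines without ODBC):

```python
# List the ODBC drivers registered on this Linux machine by parsing
# /etc/odbcinst.ini, where msodbcsql17 registers itself after install.
import configparser
import os

def list_odbc_drivers(path="/etc/odbcinst.ini"):
    """Return the section names (driver names) from odbcinst.ini, or []."""
    if not os.path.exists(path):
        return []
    cfg = configparser.ConfigParser()
    cfg.read(path)
    return cfg.sections()

print(list_odbc_drivers())
```

After a successful install you would expect to see an entry like "ODBC Driver 17 for SQL Server" in this list.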
When testing with the full dataset, it became clear that I had chosen the wrong type of virtual machine (STANDARD_D3_V2).
On our AI PC at the office (one NVIDIA RTX 2080), an epoch takes 172 s; on this VM, 4453 s!
Next I tested with the STANDARD_NC6, which seems to be the smallest Azure configuration with a GPU. When I opened it, it turned out that it shares its folders with the first VM.
This time an epoch with the same model takes 735 s. According to the documentation the NC series comes with an NVIDIA Tesla K80 card, but "1 GPU" here means half a K80, since the card contains two GPUs :) OK, this works. The next step would be deploying a web service on the Azure VM. This seems like a good tutorial, although it is somewhat old.
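Before paying for GPU VM hours, it is worth a quick sanity check that the machine actually sees a GPU. This framework-agnostic sketch shells out to nvidia-smi (present whenever the NVIDIA driver is installed) and degrades gracefully on CPU-only VMs:

```python
# Report the GPU visible on this machine via nvidia-smi, or None if the
# tool is absent (CPU-only VM or no NVIDIA driver installed).
import shutil
import subprocess

def gpu_name():
    """Return the GPU name string, or None when no GPU is detected."""
    if shutil.which("nvidia-smi") is None:
        return None
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True,
        text=True,
    )
    return out.stdout.strip() or None

print(gpu_name() or "no GPU detected")
```

On a STANDARD_NC6 this should report a Tesla K80; on a STANDARD_D3_V2 it reports no GPU, which explains the 172 s vs 4453 s epoch times above.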