Tuesday, February 26, 2019

Python microservice

To get rid of the SQL dependency for the production webservice, and to keep the trained model in memory between calls, I tried a Python microservice.

A microservice works as a small and simple webserver. For this, many internet resources suggest Flask, and indeed it turned out to be quite simple to use.

Of course, first Flask has to be installed in the Python environment:
pip install flask

After that, running a Python script with


from flask import Flask
app = Flask(__name__)


@app.route("/")
def hello():
    return "Hello World!"


if __name__ == '__main__':
    app.run()
will launch a process listening on port 5000 (the Flask default). The parameter of app.route can be more specific and have an HTTP method assigned:


@app.route("/email", methods=["POST"])
When JSON is posted, the content can be read through Flask's request object (which also has to be imported from flask):

    from flask import request
    emailFrom = request.json['From']
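
Putting these two pieces together, a minimal sketch of such a POST endpoint (the route name and the 'From' field follow the snippets above; the echo response is just for illustration):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)


@app.route("/email", methods=["POST"])
def email():
    # request.json holds the parsed JSON body of the POST
    emailFrom = request.json['From']
    # echo it back, just to show the round trip works
    return jsonify({'From': emailFrom})
```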


I used Postman to test the service.
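
For scripted checks without Postman, Flask's built-in test client can call a route directly, without starting a real server (shown here against the Hello World route from the start of the post):

```python
from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    return "Hello World!"


# the test client fakes HTTP requests in-process, no server needed
client = app.test_client()
response = client.get("/")
print(response.get_data(as_text=True))  # Hello World!
```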




The returned JSON took some experimentation. In the end it should be a JSON array in which the values are plain strings:


    res = []
    for idx in best5idx:
        t = str(idx)
        res.append({'code': t})

    return jsonify(res)
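
The post does not show where best5idx comes from; as a hedged sketch, the top-5 indices can be taken from the model's prediction scores with numpy's argsort (the scores array here is made up for illustration):

```python
import numpy as np

# example prediction scores for six classes (made-up values)
scores = np.array([0.05, 0.40, 0.10, 0.25, 0.02, 0.18])

# argsort gives ascending order, so reverse it and keep the first five
best5idx = np.argsort(scores)[::-1][:5]
print(best5idx.tolist())  # [1, 3, 5, 2, 0]
```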

A trick is necessary to reuse the trained model, that is, to keep it in memory between requests. It has to do with Tensorflow supporting multiple threads, and you have to make sure you use the correct session:

So at the root of the script you will need:


global graph
graph = tf.get_default_graph()

model = get_model()

Then in the method where you want to do the prediction:


    lmodel = model
    with graph.as_default():
        res = lmodel.predict(docs)
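
The whole keep-it-in-memory pattern, minus Tensorflow, can be sketched as one self-contained service: the model object is built once at import time, and every request handler reuses it instead of reloading (DummyModel and its fixed prediction are stand-ins for the real trained model and get_model):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)


class DummyModel:
    """Stand-in for the trained model; always predicts class index 0."""

    def predict(self, docs):
        return [0 for _ in docs]


def get_model():
    # in the real service this would load the trained model from disk
    return DummyModel()


# loaded once at import time, kept in memory between requests
model = get_model()


@app.route("/email", methods=["POST"])
def email():
    lmodel = model                  # reuse the cached model
    docs = [request.json['From']]   # build the input documents
    res = [{'code': str(i)} for i in lmodel.predict(docs)]
    return jsonify(res)
```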



