A script that accepts a few flags and runs a single inference for demonstration purposes would be useful for a quickstart:
# fetch model(s) first
$ docker run...
$ python run_inference.py --gpu --model=resnet50 ./image.jpg
Dog
It doesn't need to expose all the options the server has (e.g. batch size); a simplified set of arguments that works with a small set of models is enough for demo purposes.
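A minimal sketch of what such a script could look like, assuming `argparse` for the CLI. The model names, the `classify` stub, and the flag set are illustrative assumptions, not the actual server interface:

```python
# Hypothetical run_inference.py: exposes only a few flags (--gpu, --model)
# instead of every option the server supports.
import argparse

# Assumed demo-sized model set; a real script would match the fetched models.
SUPPORTED_MODELS = ("resnet50", "mobilenet_v2")


def build_parser():
    parser = argparse.ArgumentParser(description="Run a single demo inference.")
    parser.add_argument("image", help="path to the input image")
    parser.add_argument("--model", default="resnet50",
                        choices=SUPPORTED_MODELS,
                        help="which demo model to use")
    parser.add_argument("--gpu", action="store_true",
                        help="run on the GPU if one is available")
    return parser


def classify(image_path, model, gpu=False):
    # Placeholder so the sketch runs end-to-end: a real script would load
    # `model` and call the server's inference API with `image_path` here.
    return "<predicted label>"


def main(argv=None):
    args = build_parser().parse_args(argv)
    print(classify(args.image, args.model, gpu=args.gpu))


if __name__ == "__main__":
    main()
```

Keeping the flag surface this small means the quickstart reader only has to pick a model and an image, which is the point of the demo.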