API
The astromlp.api module provides a simple REST API implemented with FastAPI. To run the API locally, for example using uvicorn, clone the repository and run:
$ uvicorn astromlp.api:app
By default the API listens on http://127.0.0.1:8000, and the following endpoints are available:
/infer/<model>/<objid>
: request a prediction for the SDSS object with identifier objid, using the model identified by model
/proc/<pipeline>/<objid>
: request processing of the SDSS object with identifier objid, using the pipeline identified by pipeline
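Once the API is running locally, these endpoints can be queried with any HTTP client. The following is a minimal Python sketch using the requests package; the model identifier i2r and the object identifier are taken from the curl example later in this section, and the pipeline identifier is only a placeholder that needs to be replaced with a valid one.

import requests

BASE_URL = "http://127.0.0.1:8000"  # default local address used by uvicorn

# Illustrative identifiers: 'i2r' and the object id are borrowed from the
# curl example later in this section.
model = "i2r"
objid = "1237648720693755918"

# Request a prediction for the SDSS object using the given model.
resp = requests.get(f"{BASE_URL}/infer/{model}/{objid}", timeout=60)
resp.raise_for_status()
print(resp.json())

# The /proc endpoint follows the same URL pattern; replace 'my-pipeline'
# with a valid pipeline identifier.
pipeline = "my-pipeline"
resp = requests.get(f"{BASE_URL}/proc/{pipeline}/{objid}", timeout=60)
print(resp.status_code)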
Running the API using Docker
A Dockerfile is also available to run the API in a container. To build the Docker image, run from the repository:
$ docker build -t astromlp-api:latest .
And then to run a container:
$ docker run -d --rm -p 8500:8500 astromlp-api
The API is then available at http://127.0.0.1:8500, and the same requests illustrated before can be used to send queries, for example:
$ curl http://127.0.0.1:8500/infer/i2r/1237648720693755918
(...) "output":[0.09091393649578094],"x":["img"],"y":["redshift"]}
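The JSON response can also be consumed programmatically. Below is a minimal Python sketch using the requests package, assuming the response fields shown in the example above (x for the model inputs, y for the model outputs, and output for the predicted values).

import requests

# Same request as the curl example above, sent to the containerized API.
resp = requests.get(
    "http://127.0.0.1:8500/infer/i2r/1237648720693755918", timeout=60
)
resp.raise_for_status()
data = resp.json()

# 'y' names the model outputs and 'output' holds the predicted values,
# e.g. the estimated redshift in this example.
print(dict(zip(data["y"], data["output"])))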
An image is also available from Docker Hub; to pull the image, run:
$ docker pull nunorc/astromlp-api