It is currently unclear what the best packaging strategy is. Neptune does not offer a model registry, which limits scalability, but it is very good at model tracking and storage (and serves as a hedge against single points of failure). In practice, it is also easy to sort through runs and find the ones you are interested in, provided they are named in a logical manner!
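As a minimal sketch of what "named in a logical manner" could mean, here is one hypothetical convention (the field order and separator are assumptions, not an established project standard): names that sort first by model, then dataset, then variant, then timestamp, so lexicographic sorting in the Neptune UI groups related runs together.

```python
from datetime import datetime, timezone

def make_run_name(model: str, dataset: str, variant: str) -> str:
    """Build a sortable run name: <model>/<dataset>/<variant>/<UTC timestamp>.

    Hypothetical convention -- any scheme works as long as related runs
    sort next to each other when listed alphabetically.
    """
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M%S")
    return f"{model}/{dataset}/{variant}/{stamp}"

# Example: runs for the same model/dataset pair cluster together when sorted.
name = make_run_name("resnet50", "cifar10", "baseline")
```

A name like this can be passed to the run's `name` field (or split into tags) so filtering and sorting stay cheap as the number of runs grows.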
We also have a working Docker dev file.
What needs to be done is to explore the available tools and determine the best way to use them for our use case.
- Look at MLflow, Kubernetes, Azure, KServe, BentoML, etc.
- Reads: