For those of us who run llama.cpp as a service on Debian: I got tired of editing the service file and restarting it by hand every time I wanted to change the model, so I put this together to make it easier.
It shows you what is running now and which local models are available, and lets you swap in a new model and restart the service.
Just Python and Flask, nothing fancy. The folder paths are hard-coded, so update them as needed for your environment.
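To give a feel for the approach, here is a minimal sketch of how such a switcher could look, not the script's actual code. The model folder, unit-file location, and service name below are assumptions for your own setup, and the model swap is done with a simple regex on the unit file's ExecStart line:

```python
import re
import subprocess
from pathlib import Path

from flask import Flask, redirect, render_template_string, request, url_for

# Hypothetical paths -- hard-code your own here, as the post says.
MODEL_DIR = Path("/opt/llama/models")                  # where the .gguf files live
UNIT_FILE = Path("/etc/systemd/system/llama.service")  # systemd unit for llama.cpp
SERVICE = "llama.service"

app = Flask(__name__)

PAGE = """
<h1>llama.cpp model switcher</h1>
<p>Running: {{ current or "unknown" }}</p>
<form method="post">
  <select name="model">
    {% for m in models %}<option>{{ m }}</option>{% endfor %}
  </select>
  <button>Switch and restart</button>
</form>
"""

def parse_model_arg(unit_text):
    """Pull the current model path out of the unit's ExecStart line."""
    m = re.search(r"(?:-m|--model)\s+(\S+)", unit_text)
    return m.group(1) if m else None

def replace_model_arg(unit_text, new_path):
    """Swap the -m/--model argument for a new model path."""
    return re.sub(r"((?:-m|--model)\s+)\S+",
                  lambda m: m.group(1) + new_path, unit_text)

@app.route("/", methods=["GET", "POST"])
def index():
    if request.method == "POST":
        # Rewrite the unit file, then reload and restart via systemctl.
        text = UNIT_FILE.read_text()
        UNIT_FILE.write_text(
            replace_model_arg(text, str(MODEL_DIR / request.form["model"])))
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "restart", SERVICE], check=True)
        return redirect(url_for("index"))
    return render_template_string(
        PAGE,
        current=parse_model_arg(UNIT_FILE.read_text()),
        models=sorted(p.name for p in MODEL_DIR.glob("*.gguf")))
```

Something like this would need to run as root (or behind a sudo wrapper) to edit the unit file and call systemctl, e.g. via `flask run`, and since it restarts services from a web form it only belongs on a trusted local network.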