I'm no longer sure we want custom keys like --model.name. Since we don't plan to support another inference backend anytime soon, it's better UX to mirror the vLLM config names exactly. We'll still need to redefine them on our side, though, to support TOML configs.