Could we combine the existing real-time driving model with input from the LLM, either as an improvement to allow understanding of unusual situations, or as a cross-check?
I wasn't intending to say it would be useful today, but pushing back against what I understood to be an argument that, once we do have a model we'd trust, it won't be possible to run it in-car. I think it absolutely would be. The massive GPU compute requirements apply to training, not inference -- especially as we discover that quantization is surprisingly effective.
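To make the quantization point concrete, here is a toy sketch (not tied to any real inference runtime) of symmetric per-tensor int8 quantization, the basic trick behind shrinking inference memory roughly 4x versus float32 while keeping per-weight error bounded:

```python
# Toy illustration: symmetric per-tensor int8 quantization.
# Weights are stored as int8 plus one float scale, cutting storage
# from 4 bytes per float32 weight to ~1 byte per weight.
import random

def quantize_int8(weights):
    """Map floats to int8 values in [-127, 127] plus one float scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

random.seed(0)
w = [random.uniform(-1.0, 1.0) for _ in range(1000)]
q, s = quantize_int8(w)
w_restored = dequantize(q, s)

# Rounding guarantees the per-weight error never exceeds half a step.
max_err = max(abs(a - b) for a, b in zip(w, w_restored))
assert max_err <= s / 2 + 1e-12
print(f"scale={s:.6f} max_err={max_err:.6f}")
```

Real deployments quantize per-channel and often below 8 bits, but the principle is the same: inference tolerates far coarser arithmetic than training, which is why in-car inference hardware needn't look anything like a training cluster.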