If you plan on letting llava-v1.5-7b drive your car, please stay away from me.
More seriously, for safety-critical applications, LLMs have some serious limitations (most obviously hallucinations). Still, I believe they could work in automotive applications assuming: high-quality output (better than the current SoA) and very high token throughput (hundreds or even thousands of tokens/s and more), allowing you to brute-force the problem and run many inferences per second.
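To illustrate what "many inferences per second" could buy you, here's a toy sketch of a majority-vote cross-check over repeated samples. The function names, sample answers, and confidence threshold are all made up for illustration, not any real system's API:

```python
from collections import Counter

def majority_vote(samples):
    # Pick the most common answer across repeated inferences.
    # A low agreement ratio could trigger a fallback to a
    # conventional (non-LLM) controller.
    counts = Counter(samples)
    answer, n = counts.most_common(1)[0]
    confidence = n / len(samples)
    return answer, confidence

# Hypothetical outputs from 5 independent inference runs:
samples = ["brake", "brake", "coast", "brake", "brake"]
answer, confidence = majority_vote(samples)
# answer == "brake", confidence == 0.8
```

The idea is that hallucinations are (hopefully) uncorrelated across runs, so redundancy filters them out, which is exactly why raw token throughput matters.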
Could the existing real-time driving model be combined with input from the LLM, either to improve understanding of unusual situations or as a cross-check?
I wasn't intending to say it would be useful today, but pushing back against what I understood to be an argument that, once we do have a model we'd trust, it won't be possible to run it in-car. I think it absolutely would be. The massive GPU compute requirements apply to training, not inference -- especially as we discover that quantization is surprisingly effective.
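For a sense of why quantization helps in-car inference, here's a minimal sketch of symmetric per-tensor int8 quantization (real deployments use per-channel scales, calibration, etc.; this is just the core idea):

```python
import numpy as np

def quantize_int8(w):
    # Map float weights into [-127, 127] using a single scale
    # derived from the largest absolute value.
    scale = np.abs(w).max() / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

# Storage drops 4x (int8 vs float32); the worst-case rounding
# error is bounded by half the scale.
max_err = np.abs(weights - restored).max()
```

That 4x memory reduction (and the faster integer math that comes with it) is what makes running a large model on embedded automotive hardware plausible at all.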
Clearly we are not there yet.