
How would any insurance company even begin to control costs on this? It seems like a fast-track to insolvency.

AI models hallucinate, and because of their black-box nature no reliable safeguards can be built in, as evidenced by the large body of research on prompt jailbreaking.

AI also inherently operates in a non-deterministic environment, while its computational architecture is constrained by determinism and decidability. The two are foundationally incompatible for reliable operation.
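The non-determinism point can be sketched in a few lines: with temperature-style sampling, the same prompt yields a distribution over answers rather than a fixed reply (a toy illustration with hypothetical token probabilities, not any real model's API):

```python
import random

# Toy next-token distribution for a single prompt (hypothetical
# probabilities, not taken from any real model).
probs = {"Yes": 0.6, "No": 0.3, "Maybe": 0.1}

def sample_token(rng):
    # Draw a token proportionally to its probability, as temperature > 0
    # sampling does: the output is stochastic, not a fixed value.
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok

# The same "prompt" across 50 differently-seeded runs produces more than
# one distinct answer, so byte-identical replies are not guaranteed.
outputs = {sample_token(random.Random(seed)) for seed in range(50)}
print(outputs)
```

Each individual seeded run is reproducible, but a customer-facing service doesn't pin seeds per conversation, so in practice the same question can get different answers.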

Language is also a trouble area, since meaning is not fixed. It seems quite likely that a chatbot will get stuck in an effectively infinite loop (shades of the halting problem), with the paying customer failing to be served and, worse, the company imposing a personal cost on them in the process (frustration and lack of resolution). If the company eliminates all other points of contact, whether by structure or by informal practice, I don't see how you can control costs sufficiently once the lawsuits start piling up.


