It should never fail, since any failure could potentially create a fatal scenario. People generally accept fatalities caused by human error, but they won't accept death caused by algorithmic failure.
I suspect that it won't take long for people to come to terms with it in the same way we now "accept" industrial accidents. "Accept" in this case simply means that the industry in question is allowed to continue doing business.
That's an unattainably high acceptance bar. A more reasonable one would be mass adoption of self-driving cars as soon as they cause fewer accidents than human drivers.
Not every car crash ends in death, and the AI will learn a lot from each crash. I think mistakes and 'bugs' in the system will get ironed out in low-speed crashes and in high-speed crashes on test circuits...
Have you seen the AI Formula 1 series called Roborace? Once those cars get good enough to beat Lewis Hamilton or Seb Vettel, I'll trust it with me and my family.
Do people accept death due to autopilot error in aeroplanes? It's the same thing. There have been no demands for autopilot to be removed from planes, and no mass refusal to fly. The reason is that most people can see that autopilot is an overall safety gain compared with making a human concentrate on the same task for long periods of time.