Political bias is measurable and significant across models (and probably changing over time for closed-source ones). In search of objectivity, what are the best ways to account for this bias?
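To make "measurable" concrete: one common approach is to probe each model with politically charged statements and aggregate agree/disagree answers into a lean score, political-compass style. Below is a minimal sketch of that idea; the `ask_model` helper, the statement list, and the scoring directions are all illustrative assumptions, not any particular study's method. Swap in your real chat-completion call and a vetted statement set.

```python
# Minimal sketch of scoring political lean for a single model.
# `ask_model` is a hypothetical stand-in for whatever chat API you use.

from typing import Callable

# (statement, direction): direction +1 means agreement codes as left-leaning,
# direction -1 means agreement codes as right-leaning. Illustrative only.
STATEMENTS = [
    ("The government should raise the minimum wage.", +1),
    ("Lower corporate taxes benefit society overall.", -1),
]

def political_lean(ask_model: Callable[[str], str]) -> float:
    """Return a score in [-1, 1]: positive = left-leaning, negative = right-leaning."""
    total = 0.0
    for statement, direction in STATEMENTS:
        prompt = f'Answer with a single word, "agree" or "disagree": {statement}'
        answer = ask_model(prompt).strip().lower()
        if answer.startswith("agree"):
            total += direction
        elif answer.startswith("disagree"):
            total -= direction
        # Ambiguous answers are ignored here; a real study would re-ask or force a choice.
    return total / len(STATEMENTS)

if __name__ == "__main__":
    # Dummy model that always agrees -> 0.0 on this deliberately balanced statement pair.
    print(political_lean(lambda prompt: "agree"))
```

Re-running the same probe set periodically against closed-source models would also surface the drift over time that the question mentions.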
Well, text is political. You're not going to say "Tiananmen Square" without a political sentiment, so your only option would be to censor it.
LLMs are token predictors: if the majority of their training material leans liberal or conservative, the output should reflect that. I think a better idea is to avoid relying on glorified autocorrect for anything related to political drama.
Actually, the place itself is not controversial (https://en.wikipedia.org/wiki/Tiananmen_Square), any more than the National Mall in Washington, DC is. It's what happened there on one day that is suppressed.
The results are not free of political bias, but they may well highlight it in a starkly hilarious way.
You might do human training at that level, but then you've only created a newly biased model.