Things like Tiananmen Square aren't Western propaganda; they actually happened. There is a difference between alignment fine-tuning and outright wiping things from a model's knowledge base.
It's not as if totalitarian regimes lack a track record of censoring inconvenient facts, including various revolutions, the Nazis, and the Catholic Church.
China's narrative on the events preceding "tank man" isn't that no one was hurt or that nothing happened; it is that a riot had to be put down. Generally, people (brainwashed by US media) won't be happy until the CIA is the only valid information source and AI must parrot it.
Just as with your other media, use sources that validate your preconceptions for any superficial question.
The popularity of local LLMs has very little to do with seeking private answers to politicized questions, and much more to do with their utility for coding, image generation, and reasoning tasks. The news in this post appears to be the consensus that Chinese open models are better at solving user problems and tasks.