THE WEF PROPOSES TO USE AI TO CENSOR YOUR THOUGHTS, BEFORE YOU WRITE THEM.
The World Economic Forum this month published an article calling for an online censorship system powered by a combination of artificial and human intelligence.
PLEASE LIKE, SUBSCRIBE and SHARE
I'm going to let you read this first before I have my say on it. So at the bottom, I will write my bit.
The World Economic Forum this month published an article calling for an online censorship system powered by a combination of artificial and human intelligence that one critic suggested would “globalize” the “search for wrongthink.” The article warns of a “dark world of online harms” that must be addressed.
The World Economic Forum (WEF) this month published an article calling for a “solution” to “online abuse” that would be powered by artificial intelligence (AI) and human intelligence.
The proposal calls for a system, based on AI, that would automate the censorship of “misinformation” and “hate speech” and work to overcome the spread of “child abuse, extremism, disinformation, hate speech and fraud” online.
According to the author of the article, Inbal Goldberger, human “trust and safety teams” alone are not fully capable of policing such content online.
Goldberger is vice president of ActiveFence Trust & Safety, a technology company based in New York City and Tel Aviv that claims it “automatically collects data from millions of sources and applies contextual AI to power trust and safety operations of any size.”
Instead of relying solely on human moderation teams, Goldberger proposes a system based on “human-curated, multi-language, off-platform intelligence” — in other words, input provided by “expert” human sources that would then create “learning sets” that would train the AI to recognize purportedly harmful or dangerous content.
This “off-platform intelligence” — more machine learning than AI per se, according to Didi Rankovic of ReclaimTheNet.org — would be collected from “millions of sources” and would then be collated and merged before informing “content removal decisions” on the part of “Internet platforms.”
According to Goldberger, the system would supplement “smarter automated detection with human expertise” and would allow for the creation of “AI with human intelligence baked in.”
This, in turn, would provide protection against “increasingly advanced actors misusing platforms in unique ways.”
“A human moderator who is an expert in European white supremacy won’t necessarily be able to recognize harmful content in India or misinformation narratives in Kenya,” Goldberger explained.
However, “By uniquely combining the power of innovative technology, off-platform intelligence collection and the prowess of subject-matter experts who understand how threat actors operate, scaled detection of online abuse can reach near-perfect precision” as these learning sets are “baked in” to the AI over time, Goldberger said.
This would, in turn, enable “trust and safety teams” to “stop threats rising online before they reach users,” she added.
In his analysis of what Goldberger’s proposal might look like in practice, blogger Igor Chudov explained how content policing on social media today occurs on a platform-by-platform basis.
For example, Twitter content moderators look only at content posted to that particular platform, but not at a user’s content posted outside Twitter.
Chudov argued this is why the WEF appears to support a proposal to “move beyond the major Internet platforms, in order to collect intelligence about people and ideas everywhere else.”
“Such an approach,” Chudov wrote, “would allow them to know better what person or idea to censor — on all major platforms at once.”
The “intelligence” collected by the system from its “millions of sources” would, according to Chudov, “detect thoughts that they do not like,” resulting in “content removal decisions handed down to the likes of Twitter, Facebook, and so on … a major change from the status quo of each platform deciding what to do based on messages posted to that specific platform only.”
In this way, “the search for wrongthink becomes globalized,” concludes Chudov.
In response to the WEF proposal, ReclaimTheNet.org pointed out that “one can start discerning the argument here … as simply pressuring social networks to start moving towards ‘preemptive censorship.’”
Chudov posited that the WEF is promoting the proposal because it “is becoming a little concerned” as “unapproved opinions are becoming more popular, and online censors cannot keep up with millions of people becoming more aware and more vocal.”
According to the Daily Caller, “The WEF document did not specify how members of the AI training team would be decided, how they would be held accountable or whether countries could exercise controls over the AI.”
In a disclaimer accompanying Goldberger’s article, the WEF reassured the public that the content expressed in the piece “is the opinion of the author, not the World Economic Forum,” adding that “this article has been shared on websites that routinely misrepresent content and spread misinformation.”
However, the WEF appears to be open to proposals like Goldberger’s. For instance, a May 2022 article on the WEF website proposes Facebook’s “Oversight Board” as an example of a “real-world governance model” that can be applied to governance in the metaverse.
And, as Chudov noted, “AI content moderation slots straight into the AI social credit score system.”
UN, backed by Gates Foundation, also aiming to ‘break chain of misinformation’
The WEF isn’t the only entity calling for more stringent policing of online content and “misinformation.”
For example, UNESCO recently announced a partnership with Twitter, the European Commission and the World Jewish Congress leading to the launch of the #ThinkBeforeSharing campaign, to “stop the spread of conspiracy theories.”
According to UNESCO:
“The COVID-19 pandemic has sparked a worrying rise in disinformation and conspiracy theories.
“Conspiracy theories can be dangerous: they often target and discriminate against vulnerable groups, ignore scientific evidence and polarize society with serious consequences. This needs to stop.”
UNESCO’s director-general, Audrey Azoulay, said:
“Conspiracy theories cause real harm to people, to their health, and also to their physical safety. They amplify and legitimize misconceptions about the pandemic, and reinforce stereotypes which can fuel violence and violent extremist ideologies.”
UNESCO said the partnership with Twitter informs people that events occurring across the world are not “secretly manipulated behind the scenes by powerful forces with negative intent.”
UNESCO issued guidance for what to do in the event one encounters a “conspiracy theorist” online: One must “react” immediately by posting a relevant link to a “fact-checking website” in the comments.
UNESCO also provides advice to the public in the event someone encounters a “conspiracy theorist” in the flesh. In that case, the individual should avoid arguing, as “any argument may be taken as proof that you are part of the conspiracy and reinforce that belief.”
The #ThinkBeforeSharing campaign provides a host of infographics and accompanying materials intended to explain what “conspiracy theories” are, how to identify them, how to report on them and how to react to them more broadly.
According to these materials, conspiracy theories have six things in common, including:
An “alleged, secret plot.”
A “group of conspirators.”
“‘Evidence’ that seems to support the conspiracy theory.”
Suggestions that “falsely” claim “nothing happens by accident and that there are no coincidences,” and that “nothing is as it appears and everything is connected.”
A division of the world into “good” and “bad.”
The scapegoating of people and groups.
UNESCO doesn’t entirely dismiss the existence of “conspiracy theories,” instead admitting that “real conspiracies large and small DO exist.”
However, the organization claims, such “conspiracies” are “more often centered on single self-contained events, or an individual like an assassination or a coup d’état” and are “real” only if “unearthed by the media.”
In addition to the WEF and UNESCO, the United Nations (UN) Human Rights Council earlier this year adopted “a plan of action to tackle disinformation.”
The “plan of action,” sponsored by the U.S., U.K., Ukraine, Japan, Latvia, Lithuania and Poland, emphasizes “the primary role that governments have in countering false narratives,” while expressing concern about:
“The increasing and far-reaching negative impact on the enjoyment and realization of human rights of the deliberate creation and dissemination of false or manipulated information intended to deceive and mislead audiences, either to cause harm or for personal, political or financial gain.”
Even countries that did not officially endorse the Human Rights Council plan expressed concerns about online “disinformation.”
For instance, China identified such “disinformation” as “a common enemy of the international community.”
An earlier UN initiative, in partnership with the WEF, “recruited 110,000 information volunteers” who would, in the words of UN global communications director Melissa Fleming, act as “digital first responders” to “online misinformation.”
The UN’s #PledgeToPause initiative, although recently circulating as a new development on social media, was announced in November 2020, and was described by the UN as “the first global behaviour-change campaign on misinformation.”
The campaign is part of a broader UN initiative, “Verified,” that aims to recruit participants to disseminate “verified content optimized for social sharing,” stemming directly from the UN communications department.
Fleming said at the time that the UN also was “working with social media platforms to recommend changes” to “help break the chain of misinformation.”
Both “Verified” and the #PledgeToPause campaign still appear to be active as of this writing.
The “Verified” initiative is operated in conjunction with Purpose, an activist group that has collaborated with the Bill & Melinda Gates Foundation, the Rockefeller Foundation, Bloomberg Philanthropies, the World Health Organization, the Chan Zuckerberg Initiative, Google and Starbucks.
Since 2019, the UN has been in a strategic partnership with the WEF based on six “areas of focus,” one of which is “digital cooperation.”
Ok, now that you have read this nonsense, I will make a few points and some real truths about this whole idea.
Firstly, this whole misinformation and conspiracy theory war between the elite, pharmaceutical companies and governments on one side and the unvaccinated on the other is just that: a war.
A war for the elite (WEF) to convince the people of the world that everything they have done since the Covid pandemic has been for our health, and to label anyone who doesn't agree with them as a conspiracy theorist, domestic terrorist or anti-vaxxer, so that people come to believe that those who tell them the truth are bad people.
Let's look at the definitions of “misinformation”:
1. Wrong information; false account or intelligence.
2. Untrue or incorrect information.
3. Information that is incorrect.
So let's start with the covid outbreak.
Lie after lie about its origin and the gain-of-function research. (def no 1)
Then there were the masks: “DO NOT STOP TRANSMISSION.” (truth)
Then: masks do stop the spread. (def no 2)
Then: the vaccine is safe and effective. (def no 2)
But then it took a really sinister turn when Boris Johnson said the vaccine was voluntary and only those who want it need to have it. (def no 2)
Now by this time people who had not fallen for the agenda, knew that vaccines were coming before the government had mentioned the idea.
We warned people that there would be mandatory vaccines, and told them that there would be booster after booster and that millions could die. (true)
I remember telling people that they would try to bring in a cashless society, and I was laughed at. (true)
I said that they would try to introduce a social credit system like the CCP's. (true)
All they have done is lie, lie and lie. But I found out there's a reason for the lies. Everything they said was planned. Every lie was meant to happen no matter what you thought. It's part of the process.
They tell lies, and then they deny telling the lies. But because they control the media, you can't see it again, so you start to get confused. You question yourself: did they say it? They did; everyone knows that they did. But they blatantly deny saying it. Then people believe them, and when someone like me says they did say it, the people I'm telling will say, “no they didn't.”
It's called the “Mandela effect” and people think it's a phenomenon. But it's created. TV is the most powerful weapon on the planet.
Now, I've copied this next paragraph because they have decided to censor people BEFORE they say anything.
Pre-emptive censorship. AI that will censor you before you even write it. Ok 🙈
The “intelligence” collected by the system from its “millions of sources” would, according to Chudov, “detect thoughts that they do not like,” resulting in “content removal decisions handed down to the likes of Twitter, Facebook, and so on … a major change from the status quo of each platform deciding what to do based on messages posted to that specific platform only.”
In this way, “the search for wrongthink becomes globalized,” concludes Chudov.
In response to the WEF proposal, ReclaimTheNet.org pointed out that “one can start discerning the argument here … as simply pressuring social networks to start moving towards ‘preemptive censorship.’”
PLEASE LIKE, SUBSCRIBE and SHARE