Joely2024
Explorer
Status: New Idea

It seems to me that when discussing controversial topics on social media, such as Russia-Ukraine, USA gun laws, Israel-Palestine, or LGBTQ+ rights, it becomes very difficult to distinguish fact from opinion, truth from fabrication, and a fact-based claim from a baseless one.

This is because Facebook, Instagram, and other social media platforms allow anyone to comment or post about these topics, stating things as though they are facts regardless of how much genuine knowledge they have. Of course, Meta does try to control what is said by removing content that seems dangerous or overly hateful. These efforts, however, are not enough in my opinion: if someone wanted to know what is really happening in Ukraine, or in Gaza, it would still be very difficult to filter the vast amount of information and find what is true.

In addition, some people claim that the current method of preventing so-called hate speech and dangerous talk goes against the democratic right to freedom of speech. This is, of course, a difficult balance to strike, and therefore I do not blame Meta for failing to satisfy everyone.

Furthermore, this issue also bleeds into other popular topics of discussion on social media, like health or self-help, where misinformation is also very common. In these cases the lack of a true/false filter can lead to people accidentally harming themselves because of incorrect or incomplete information provided by unverified gurus on social media platforms.

However, I do offer a course of action: an alternative to the current practice of blocking accounts and deleting posts (one that can either replace or simply supplement that method). Meta could introduce a new feature into Facebook and Instagram that marks which content is fact-based and reliable, and which may contain false or biased information. The feature could look something like this:

1. When posting anything (post, story, comment, etc.), a user will have the ability to cite the sources upon which they are basing their claims.

2. An AI tool can then verify these sources and check that they are reliable and objective.

3. Once verified as fact-based, the content can be uploaded with a prominent green V (check mark) that serves as the mark of fact-based, reliable content.

4. Any comment, post, or story detected as discussing an easily misunderstood topic (or really any content that provides information) without having earned a green V will be marked with a counter sign: a red X, together with a prominent disclaimer along the lines of: "This [type of content] has been detected as providing information without a basis in verified facts, and may therefore contain false, biased, or incomplete information."

Of course, changes to this framework that may better fit Meta's style are perfectly acceptable. 
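To make the flow of this framework a little more concrete, here is a minimal sketch in Python of how the labeling decision might work. Everything in it is hypothetical: the is_reliable_source and provides_information functions stand in for whatever AI verification and detection tools Meta might actually build, and the label text is only a placeholder.

```python
from dataclasses import dataclass, field

GREEN_V = "V Fact-based: sources cited and verified"
RED_X = ("X This content has been detected as providing information "
         "without a basis in verified facts, and may therefore contain "
         "false, biased, or incomplete information.")

@dataclass
class Post:
    text: str
    cited_sources: list[str] = field(default_factory=list)

def is_reliable_source(url: str) -> bool:
    """Placeholder for the AI verification step (step 2 above).
    A real system would assess a source's reliability and objectivity;
    this stub only accepts a tiny hypothetical allow-list for illustration."""
    trusted = ("reuters.com", "apnews.com", "who.int")
    return any(domain in url for domain in trusted)

def provides_information(post: Post) -> bool:
    """Placeholder for detecting whether a post makes factual claims
    (step 4 above). A real system would use a classifier; this stub
    simply treats every post as informational."""
    return True

def label_post(post: Post) -> str:
    """Steps 3 and 4: attach the green V if all cited sources verify,
    otherwise attach the red X disclaimer to informational content."""
    if post.cited_sources and all(is_reliable_source(s) for s in post.cited_sources):
        return GREEN_V
    if provides_information(post):
        return RED_X
    return ""  # purely personal, non-informational content stays unlabeled

# Example usage (hypothetical posts)
claim = Post("Report on the latest developments", ["reuters.com/world/some-report"])
print(label_post(claim))    # -> green V label
opinion = Post("Here is what I think really happened")
print(label_post(opinion))  # -> red X disclaimer
```

The point of the sketch is only the order of decisions: content whose cited sources verify earns the green V, unverified informational content gets the red X disclaimer, and purely personal content could be left unlabeled.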

I believe that adding these features will not only distinguish Meta as a company that respects truth and stands strong in the fight against fake news, biased propaganda, and the like, but will also restore, to some extent, the meaning of the word "truth" and the ability to have real, respectful, and intelligent discussions on Meta's platforms.

Thank you very much!

P.S. I would like to add that, of course, determining whether a certain source is reliable and objective is highly difficult, and since I do not claim to have enough knowledge on this matter, it is clear to me that it will have to be worked out by developers and thinkers at higher levels. I do hope, and like to believe, that Meta can be trusted to remain loyal to the values of equality, truth, and critical thinking in doing so.