The COVID-19 pandemic has become a global public health emergency.

It is adversely affecting nearly every aspect of daily life.

The spread of false and misleading information about COVID-19 is increasing on Facebook.

Misinformation appears in different forms like text, images, videos, or articles.

It is very difficult for Facebook to detect slightly altered copies of images containing misinformation.

Facebook is a global platform, and any misinformation spreading on it breeds misconceptions across the whole world.

This is why accurate and precise information on this global platform is so necessary.

Facebook recognizes this threat and is working diligently to deploy its most advanced AI models to combat misinformation about the COVID-19 crisis.

Community Standards Enforcement Report:

In May 2020, Facebook released a report about its Community Standards enforcement.

The report shows how a combination of human reviewers and artificial intelligence is used to maintain Community Standards.

Facebook is trying to rely more on AI software than on manpower to speed up progress.

During this COVID-19 pandemic, Facebook is also relying more on technology.

During the pandemic, moderators and other workers have been advised to work from home, but the company cannot grant access to its sensitive data from home computers.

The company has paid $52 million to its moderators as compensation for mental stress during this pandemic.

Still, Facebook is trying to continue this work through modern technology.

The Community Standards Enforcement Report reflects larger trends in enforcement and offending behavior, though the pandemic's full impact is not yet visible in it.

The report only contains data from October 2019 to March 2020.

The report contains metrics about the actions Facebook has taken against Community Standards violations.

The report also shows statistical data about removing content and marking content with warning labels.

All violations of the Community Standards, and how they were addressed, are explained in the report.

Furthermore, the company has said it anticipates that changes made during the COVID-19 pandemic will affect these metrics going forward.

Photo by NeONBRAND

Warning on COVID-19 Related Posts:

Facebook applied more than 50 million warning labels to COVID-19-related posts during April 2020.

Facebook has detailed its ongoing work against COVID-19 misinformation in a separate blog post.

2.5 million pieces of content have been removed by Facebook since March 1st, 2020.

All this content is related to the sale of hand sanitizers, disinfecting wipes, and COVID-19 test kits.

It’s a very difficult task for the company to manage all these challenges with perfection.

The company reports that 95% of the time, people who see one of these warning labels do not click through to view the labeled content.

Hateful content and misinformation often appear in the form of images and videos.

It is harder for the company to keep an eye on images and videos than on text and article links.

The most challenging content is memes, in which text and image are used together, often to attack a particular group.

Image Credit: Facebook

Facebook has published complete information about its hate speech work in a separate article.

Tackling memes and manipulated images is the toughest challenge for artificial intelligence (AI).

The major problem AI software faces when parsing meme images or videos is variation in language and wordplay.

Trained AI models must find duplicates or modified versions of flagged content across Facebook.

To do this, Facebook has built a new AI model named SimSearchNet.

This AI model is trained to recognize both exact copies of original images and near-duplicates that differ by as little as a change of one or two words.

Hate Speech Occurs in Images or Videos on Facebook:

Once an image is determined to contain a false claim about the coronavirus, SimSearchNet recognizes all nearly matching images and content so warning labels can be applied to them.

SimSearchNet is Facebook's main trained AI model for this task, built around an end-to-end image indexing and matching system.

This software parses every image uploaded to Facebook and Instagram and then checks it against specific human-curated databases.

Billions of images are checked through this software daily to detect any misinformation related to COVID-19.
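Facebook has not published SimSearchNet's internals, but the general idea of near-duplicate image detection can be illustrated with a classic perceptual hash such as dHash. This is a simplified stand-in, not Facebook's actual method, and the synthetic "image" below is invented for demonstration:

```python
def dhash(pixels, hash_size=8):
    """Difference hash of a grayscale grid sized hash_size x (hash_size + 1):
    each bit records whether a pixel is brighter than its right neighbor."""
    return [
        1 if pixels[row][col] > pixels[row][col + 1] else 0
        for row in range(hash_size)
        for col in range(hash_size)
    ]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A tiny synthetic "image" and a slightly edited copy of it.
original = [[(r * 9 + c) % 256 for c in range(9)] for r in range(8)]
tweaked = [row[:] for row in original]
tweaked[0][0] += 5  # a small local edit, e.g. a changed word in overlaid text

print(hamming(dhash(original), dhash(tweaked)))  # → 1, a near-duplicate
```

A small Hamming distance between hashes flags the upload as a near-duplicate of a known image, while unrelated images produce large distances.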

One example quoted by the company is a misleading image carrying the text line:

“COVID-19 is found in toilet paper”.

Facebook's AI software recognized this fake claim and applied warning labels.

But there is a complication: the AI software has to be trained to differentiate that original image from a modified image saying:

“COVID-19 isn’t found in toilet paper”.

Image Credit: Facebook

Detection of Near-Exact Duplicates:

AI software can have difficulty distinguishing original images from slightly modified ones.

The main goal of Facebook is to reduce the spread of duplicate images even if they don’t contain misinformation.

Many politically motivated pages and organizations take photographs and images and alter them to change their meaning for their own purposes.

The AI model is trained to differentiate between genuine and duplicate content, labeling one as misinformation and the other as original.

This is a very meaningful step by Facebook: managing the removal of duplicate content whether or not it contains an offending image.

The most important requirement for these similarity systems is that they be accurate and precise.

A small mistake means removing content that doesn't actually violate Facebook's policies.

When a specific piece of misinformation is identified by fact-checkers, millions of duplicate copies of related content may already be on Facebook.

AI detects all possible matches and marks them with a warning label.

This enables the fact-checking partners to focus on catching new instances of misinformation.

Whenever a new kind of misinformation is detected by the company's fact-checking partners, the AI software then detects all possible matches of that misinformation.
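The matching side of this pipeline can be sketched as a lookup against an index of hashes for already-debunked images. The hashes, labels, and threshold below are invented purely for illustration:

```python
def hamming(a, b):
    """Number of differing positions between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical index: hashes of images already debunked by fact-checkers.
flagged_index = {
    "1100101011010010": "false-claim:toilet-paper",
}

def label_upload(upload_hash, index, max_distance=2):
    """Return the warning label of the closest flagged hash, if any match
    falls within the allowed Hamming distance."""
    for flagged_hash, label in index.items():
        if hamming(flagged_hash, upload_hash) <= max_distance:
            return label
    return None

print(label_upload("1100101011010110", flagged_index))  # near-duplicate → labeled
print(label_upload("0011010100101101", flagged_index))  # unrelated → None
```

In production such lookups would use approximate nearest-neighbor indexing rather than a linear scan, but the contract is the same: new uploads are labeled automatically, freeing fact-checkers to examine genuinely new content.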

Sale of COVID-19 Products:

Since the coronavirus crisis began, many people have started selling various products for financial gain.

Facebook is also used as a marketing platform for all these products.

These products include face masks, hand sanitizers, and disinfecting products.

Facebook is trying to detect and remove ads for all these products.

People devise different strategies to slip past the AI-based screening system.

Facebook maintains an object-level database containing the ads for COVID-19-related products.

This allows the company to detect manipulated ads and automatically reject them.

Facebook is also using instance matching for data augmentation.

It allows Facebook to bootstrap its models with only limited data.

Marketplace product photos show a great deal of diversity.

Coping with such diversity is a great challenge for Facebook.

People splice objects into photos and apply modifications like rotation, occlusion, cropping, screenshots, and added noise.

Facebook is trying to detect all these modifications, using instance matching to find all related images of a product.

This is helping the company to remove all such ads precisely.
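Facebook's actual augmentation pipeline isn't public, but the idea of bootstrapping a model from limited data by generating modified variants can be sketched in a few lines of pure Python. The function name and the tiny grid "image" are illustrative only:

```python
import random

def augment(grid, seed=0):
    """Produce simple modified copies of a small grayscale 'image'
    (rotation, crop, horizontal flip, noise) to expand training data."""
    rng = random.Random(seed)
    rotated = [list(row) for row in zip(*grid[::-1])]   # 90-degree rotation
    cropped = [row[1:-1] for row in grid[1:-1]]         # center crop
    flipped = [row[::-1] for row in grid]               # horizontal flip
    noisy = [[min(255, max(0, v + rng.randint(-8, 8))) for v in row]
             for row in grid]
    return [rotated, cropped, flipped, noisy]

base = [[(r * 16 + c * 4) % 256 for c in range(4)] for r in range(4)]
variants = augment(base)
print(len(variants))  # → 4 augmented training examples from one image
```

Training a matcher on such variants teaches it that a rotated, cropped, or noisy copy of a prohibited product photo is still the same product.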

During this coronavirus crisis, Facebook is trying to discourage those seeking financial gain by selling such products through the platform.

AI models are very helpful for accurately and precisely detecting all these products.

Facebook is using an ads-level classifier to prevent the distribution of these products and policy-violating ads.
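Facebook's real ads-level classifier is a learned model; a toy rule-based scorer conveys the interface, though the terms, weights, and threshold below are entirely invented:

```python
# Illustrative sketch only: score an ad's text against prohibited terms
# and reject it when the combined score crosses a threshold.
BANNED_TERMS = {
    "hand sanitizer": 0.6,
    "face mask": 0.6,
    "covid-19 test kit": 0.9,
    "disinfecting wipes": 0.6,
}

def score_ad(text):
    """Sum the weights of banned terms found in the ad text, capped at 1.0."""
    text = text.lower()
    return min(1.0, sum(w for term, w in BANNED_TERMS.items() if term in text))

def should_reject(text, threshold=0.5):
    return score_ad(text) >= threshold

print(should_reject("Brand new COVID-19 test kit, ships today"))  # → True
print(should_reject("Vintage coffee table, lightly used"))        # → False
```

A production classifier would replace the keyword scores with model probabilities, but the reject-above-threshold decision at the ad level is the same shape.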

Photo by Erik Mclean 

Training Vision Models for Marketplace:

Facebook is a global platform where different companies and industries market their products.

People use images with different backgrounds, details, and quality to sell their things in the Marketplace.

This makes it very difficult for vision models trained on polished, professional photographs to handle such images.

Facebook is using advanced techniques to deploy classification and object-detection models.

This allowed Facebook to build a new PyTorch-based platform to quickly train detectors for new images and videos.

This technology builds on Facebook AI's earlier groundbreaking work.

Since the COVID-19 pandemic began, Facebook has used this technique to deploy classifiers for hand sanitizers, face masks, and disinfecting wipes.

Facebook first collects public photos and then processes these.

For precision and accuracy, Facebook has also added many negative examples: images a model could mistake for one of these products.

The company has deployed these trained models on its production inference platform.

These vision models now run on Facebook Marketplace listings, helping the company catch coronavirus-related products as effectively as possible.

Facebook plans to invest further in these platforms to improve its models.

Downstream multimodal classifiers use signals from these models and look holistically at the level of a whole Marketplace post.
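One common way such a post-level decision is made is late fusion: each model scores its own signal, and the scores are combined. The signal names and weights below are hypothetical, not Facebook's:

```python
# Hypothetical late-fusion sketch: combine independent model outputs
# (image, title, description) into one post-level score.
def post_level_score(image_score, title_score, description_score):
    """Weighted combination of per-signal scores for a single post."""
    return 0.5 * image_score + 0.3 * title_score + 0.2 * description_score

# A listing whose photo and title look prohibited but whose description
# is benign still crosses a 0.5 decision threshold.
score = post_level_score(image_score=0.9, title_score=0.8, description_score=0.1)
print(round(score, 2))  # → 0.71
```

Late fusion lets each component model be trained and updated independently, while the post-level combiner captures the overall picture.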

Improvement in Hate Speech Moderation:

The company is utilizing similar techniques, developed for coronavirus-related content, to improve hate speech moderation.

In the report, Facebook revealed that 88.8% of the hate speech content it removed was detected proactively by AI models.

In the first three months of 2020, Facebook took action against 9.6 million pieces of content that violated the company's hate speech policies.

That is an increase of 3.9 million over the previous three months.

Facebook now relies more on AI models to enforce its policies.

It helps manage hateful speech or content in images or videos.

People who are sharing hate speech usually adopt different strategies to prevent detection by software.

They misspell words or avoid certain phrases, and modify their images and videos.

AI models have difficulty recognizing such hateful speech.

Facebook is trying to improve its systems regarding these challenges.

It is very important to be accurate and precise here.

If any content is mistakenly classified as hate speech, it prevents that person from expressing themselves in the future.

The recent report issued by Facebook also contains major data from Instagram.

The report explains how much content was removed and how much was reinstated.

During the COVID-19 pandemic, posts about suicide and self-injury have increased, so Facebook applied AI models to find matching images and removed such content from the platform.

Facebook is trying to improve techniques for the detection of hateful speech online.

Facebook is also organizing a challenge for researchers to create advanced models.

A $100,000 prize has been announced for the best model that can accurately detect hateful speech and memes across Facebook.

Facebook is trying to apply AI models proactively in support of its Community Standards.

Photo by Obi Onyeador 

Automated Moderation Across Multiple Languages:

For a better understanding of text across multiple languages, Facebook announced a new neural network named XLM-R in November 2019.

XLM-R is trained on more data for longer, and its main strength is transferring what it learns across multiple languages.
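The intuition behind cross-lingual transfer is that words from different languages land near each other in one shared embedding space, so a classifier trained only on one language also works on others. A toy illustration, with all vectors and words invented:

```python
# Toy shared embedding space: English and Spanish words with similar
# meanings get similar (invented) vectors.
EMB = {
    "hello": (0.90, 0.10), "hola": (0.88, 0.12),    # greeting cluster
    "attack": (0.10, 0.90), "atacar": (0.12, 0.88), # hostile cluster
}

def text_vec(text):
    """Average the embeddings of known words; zero vector if none known."""
    vecs = [EMB[w] for w in text.lower().split() if w in EMB]
    if not vecs:
        return (0.0, 0.0)
    return tuple(sum(dim) / len(vecs) for dim in zip(*vecs))

def hostile_score(text):
    # Second dimension stands in for "hostility" in this toy space.
    return text_vec(text)[1]

print(hostile_score("atacar") > hostile_score("hola"))  # → True
```

A real model like XLM-R learns such a shared space from unlabeled text in about one hundred languages, so labeled hate speech examples in one language help detection in the rest.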

As previously stated, memes are the most difficult content to detect, even for advanced technology.

Memes sometimes contain hate speech or misinformation and take aim at a specific group.

To overcome this difficulty, Facebook has built a 'hateful memes' data set containing more than 10,000 examples.

These memes are understood by processing the image and the text together, along with the relationship between them.
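Processing image and text together is often done by early fusion: the two feature vectors are concatenated so a single classifier can model their interaction. The feature values and weights below are invented for illustration; real systems use learned encoders:

```python
# Hypothetical early-fusion sketch for meme classification.
def fuse(image_features, text_features):
    """Concatenate image and text features into one joint vector."""
    return list(image_features) + list(text_features)

def hateful_score(fused, weights):
    """A linear scorer over the joint vector (stands in for a classifier)."""
    return sum(f * w for f, w in zip(fused, weights))

image_f = [0.2, 0.7]            # e.g. output of a vision encoder (invented)
text_f = [0.9, 0.1]             # e.g. output of a text encoder (invented)
weights = [0.1, 0.4, 0.4, 0.1]  # invented classifier weights

score = hateful_score(fuse(image_f, text_f), weights)
print(round(score, 2))  # → 0.67
```

The key point is that neither the image nor the text alone may be hateful; only a classifier that sees both, as one input, can catch the combination.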


None of these Community Standards problems are new for Facebook.

Due to COVID-19, however, misinformation and hateful speech have increased.

Another major problem is the selling of prohibited items on this platform.

Facebook is managing long-term investments to address all such challenges.

The company is developing advanced visual reasoning systems and multimodal understanding.

It is trying to use more and more advanced techniques to uphold its Community Standards and address every kind of policy violation.

Cutting-edge research is already helping improve production systems today.

To better protect people on these global platforms, Facebook is trying to adopt new research techniques and tools.

More advanced AI models are being introduced to combat false and misleading information on these platforms.

The COVID-19 crisis has also put a lot of pressure on Facebook regarding Community Standards violations.

Facebook is trying to combat all these problems in the best, most advanced manner possible, with precision and accuracy.