The company offers official support for only a fraction of the languages its users speak. Reuters has found another 31 widely spoken languages on Facebook that do not have official support.
Automated tools for identifying hate speech work in only a limited number of languages. Countries including Australia, Singapore and the UK are now threatening harsh new regulations, punishable by steep fines or jail time for executives, if the company fails to promptly remove objectionable posts.
The community standards are updated monthly and run to roughly nine thousand words in English. A Facebook spokeswoman said this week that the rules are translated case by case, depending on whether a language has a critical mass of usage and whether Facebook is a primary information source for its speakers. The spokeswoman said there was no specific number for critical mass. She said priorities for translation include Khmer, the official language of Cambodia, and Sinhala, the dominant language of Sri Lanka, where the government blocked Facebook this week to stem rumors about the devastating Easter Sunday bombings.
A Reuters report found last year that hate speech on Facebook that helped foster ethnic cleansing in Myanmar went unchecked in part because the company was slow to add moderation tools and staff for the local language. Facebook says it now offers the rules in Burmese and has added speakers of the language to its workforce. But human rights officials say Facebook risks a repeat of the Myanmar problems in other strife-torn nations where its language capabilities have not kept up with the impact of social media.
Mohammed Saneem, the supervisor of elections in Fiji, said he felt the impact of the language gap during elections in the South Pacific nation in November last year. Racist comments proliferated on Facebook in Fijian, which the social network does not support.

To be able to productionize this model in the future, we need to scale models as efficiently as possible with high-speed training.
For example, much existing work uses multimodel ensembling, where multiple models are trained and applied to the same source sentence to produce a translation. To reduce the complexity and compute required to train multiple models, we explored multisource self-ensembling, which translates a source sentence into multiple languages to improve translation quality.
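To make that contrast concrete, here is a minimal sketch of multisource self-ensembling with a single many-to-many model. The `translate` and `score` callables are hypothetical hooks standing in for any such model; they are not APIs from this work.

```python
from typing import Callable, Sequence

# Hypothetical interfaces: translate(text, src, tgt) returns a translation;
# score(text, src, hyp, tgt) returns the model's log-probability of `hyp`
# given `text`. Any many-to-many model exposing these could be plugged in.
Translate = Callable[[str, str, str], str]
Score = Callable[[str, str, str, str], float]

def self_ensemble_translate(
    text: str,
    src: str,
    tgt: str,
    pivots: Sequence[str],
    translate: Translate,
    score: Score,
) -> str:
    """Multisource self-ensembling with one model instead of an ensemble.

    The model first rewrites the source sentence into several pivot
    languages, producing multiple "views" of the same input. Each view is
    translated into the target language, and every candidate is scored
    against *all* views; the candidate with the best average score wins.
    """
    # Views of the input: the original plus machine-generated pivots.
    views = [(text, src)] + [(translate(text, src, p), p) for p in pivots]

    # One candidate target-language hypothesis per view.
    candidates = {translate(v, lang, tgt) for v, lang in views}

    # Average each candidate's score over all views and keep the best.
    def avg_score(hyp: str) -> float:
        return sum(score(v, lang, hyp, tgt) for v, lang in views) / len(views)

    return max(candidates, key=avg_score)
```

Because a single model generates and scores every view, this keeps the quality benefit of ensembling without training several models.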
Also, we built on our work with LayerDrop and Depth-Adaptive to jointly train a model with a common trunk and different sets of language-specific parameters. This approach works well for many-to-many models because it offers a natural way to split parts of a model by language pairs or language families. By combining dense scaling of model capacity with language-specific parameters (3B in total), we provide the benefits of large models as well as the ability to learn specialized layers for different languages.
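As a rough sketch of that idea (not Facebook AI's actual architecture), the toy PyTorch module below routes every example through a dense shared trunk and then through one language-family-specific block; the family names and dimensions are invented for illustration.

```python
import torch
import torch.nn as nn

class TrunkWithLanguageExperts(nn.Module):
    """Toy sketch: a shared trunk plus language-specific parameter sets.

    Every language pair flows through the dense shared trunk, then through
    an extra block owned by its language family, so added capacity can
    specialize per family without growing the trunk for everyone.
    """

    def __init__(self, dim: int = 512, families=("romance", "slavic", "indic")):
        super().__init__()
        # Dense parameters shared by all language pairs.
        self.trunk = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        # One specialized block per language family; only the block matching
        # the current batch's family receives gradients during training.
        self.family_layers = nn.ModuleDict(
            {f: nn.Linear(dim, dim) for f in families}
        )

    def forward(self, x: torch.Tensor, family: str) -> torch.Tensor:
        shared = self.trunk(x)
        return self.family_layers[family](shared)

model = TrunkWithLanguageExperts()
batch = torch.randn(8, 512)           # stand-in for token representations
out = model(batch, family="romance")  # routed through the Romance block
```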
For years, AI researchers have been working toward building a single universal model that can understand all languages across different tasks. A single model that supports all languages, dialects, and modalities will help us better serve more people, keep translations up to date, and create new experiences for billions of people equally.
This work brings us closer to that goal. This collective research can further advance how our systems understand text in low-resource languages using unlabeled data. For instance, XLM-R is our powerful multilingual model that can learn from data in one language and then execute a task in 100 languages with state-of-the-art accuracy.
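For a concrete flavor of that cross-lingual transfer, the sketch below loads XLM-R through the Hugging Face `transformers` library (an assumption; the post names no toolkit). The classification head here is freshly initialized, so in practice it would first be fine-tuned on labeled data in a single language, say English, before being applied to the others.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2
)
model.eval()

# The same weights and head score text in different languages: the point
# of XLM-R is that supervision in one language transfers to the rest.
for text in ["This movie was wonderful.", "Ce film était merveilleux."]:
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(text, logits.softmax(dim=-1).tolist())
```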
And most recently, our new self-supervised approach, CRISS, uses unlabeled data from many different languages to mine parallel sentences across languages and train new, better multilingual models in an iterative way.
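The mining half of that loop can be sketched as mutual nearest-neighbor search in a shared embedding space. This is a deliberate simplification rather than CRISS's actual scoring criterion, and the random vectors below merely stand in for encoder outputs.

```python
import numpy as np

def mine_parallel(src_vecs: np.ndarray, tgt_vecs: np.ndarray, threshold: float):
    """One mining step in the spirit of CRISS (simplified sketch).

    Given sentence embeddings from two monolingual corpora in a shared
    multilingual space, keep mutual nearest neighbors whose cosine
    similarity clears a threshold as candidate parallel pairs. CRISS
    alternates this mining with retraining the encoder on the mined
    pairs; only the mining half is sketched here.
    """
    # Normalize rows so the dot product below is cosine similarity.
    s = src_vecs / np.linalg.norm(src_vecs, axis=1, keepdims=True)
    t = tgt_vecs / np.linalg.norm(tgt_vecs, axis=1, keepdims=True)
    sim = s @ t.T

    best_tgt = sim.argmax(axis=1)   # best target for each source sentence
    best_src = sim.argmax(axis=0)   # best source for each target sentence

    pairs = []
    for i, j in enumerate(best_tgt):
        # Mutual nearest neighbors with high similarity count as parallel.
        if best_src[j] == i and sim[i, j] >= threshold:
            pairs.append((i, int(j), float(sim[i, j])))
    return pairs

# Toy usage with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
print(mine_parallel(rng.normal(size=(5, 16)), rng.normal(size=(6, 16)), 0.1))
```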
Facebook AI is introducing M2M-100, the first multilingual machine translation (MMT) model that can translate between any pair of 100 languages without relying on English data.
When translating, say, Chinese to French, most English-centric multilingual models train on Chinese to English and English to French, because English training data is the most widely available. Our model directly trains on Chinese to French data to better preserve meaning.
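As a rough illustration of the difference, here is a minimal sketch contrasting the two routes. The `translate` callable is a hypothetical stand-in for any translation model, not an API from this work.

```python
from typing import Callable

# Hypothetical hook: translate(text, src, tgt) returns a translation.
Translate = Callable[[str, str, str], str]

def pivot_translate(text: str, translate: Translate) -> str:
    """English-centric path: zh -> en -> fr. Errors introduced in the
    intermediate English step compound in the second hop, which is where
    meaning tends to get lost."""
    english = translate(text, "zh", "en")
    return translate(english, "en", "fr")

def direct_translate(text: str, translate: Translate) -> str:
    """Many-to-many path: a model trained on zh -> fr data translates in
    one step, with no intermediate hop to distort the meaning."""
    return translate(text, "zh", "fr")
```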