Facebook announces plan to fight misinformation campaigns

    Apr 28, 2017

    Facebook made its most direct statements yet about how the platform has been used to spread misinformation in a report released today by its security team. The report acknowledges that actors ran a coordinated campaign on the platform to spread misinformation during the 2016 U.S. election and explains the measures Facebook is taking to combat such campaigns.

    “Our mission is to give people the power to share and make the world more open and connected,” the report’s authors — Facebook CSO Alex Stamos and Threat Intelligence team members Jen Weedon and William Nuland — wrote. “The reality is that not everyone shares our vision, and some will seek to undermine it — but we are in a position to help constructively shape the emerging information ecosystem by ensuring our platform remains a safe and secure environment for authentic civic engagement.”

    Facebook calls these campaigns “information operations” and says their goal is usually to distort or manipulate political sentiment. Ordinary users can get caught up in the operations and take part in the spread of misinformation, Facebook said.

    The company’s response includes collaborating with other organizations to educate users, undermining campaigns that have a financial motivation, creating new products that slow the spread of fake news, and informing users when they encounter untrustworthy information.

    Facebook explains that information operations on the platform often manifest in three ways: targeted data collection, content creation, and false amplification. Stealing and publishing data allows actors to control public discourse, the company said, and that data can then be amplified across fake Facebook profiles.

    These tactics allow operations to sway public opinion about specific issues, sow distrust in political institutions, and spread confusion. This kind of behavior is often attributed to bots, but Facebook claims that most of the activity it sees on its network isn’t automated.

    “In the case of Facebook, we have observed that most false amplification in the context of information operations is not driven by automated processes, but by coordinated people who are dedicated to operating inauthentic accounts,” Facebook said. The company added that specific language skills and knowledge of regional political context indicated that those involved in the misinformation campaigns were humans, not bots.

    To fight back, Facebook is stepping up its efforts to detect false amplification. It’s trying to block the creation of fake accounts and is using machine learning to detect abuse. The company says the new measures are already proving effective in France, where an election is currently underway.

    “In France, for example, as of April 13, these improvements recently enabled us to take action against over 30,000 fake accounts,” the report says.

    Facebook used the recent U.S. election of Donald Trump as a case study into misinformation on its platform. The company concluded that a coordinated campaign existed, “with the intent of harming the reputation of specific political targets.” The campaign included inauthentic Facebook accounts that were used to amplify certain themes and information, the report notes, adding:

    These incidents employed a relatively straightforward yet deliberate series of actions:

    • Private and/or proprietary information was accessed and stolen from systems and services (outside of Facebook);
    • Dedicated sites hosting this data were registered;
    • Fake personas were created on Facebook and elsewhere to point to and amplify awareness of this data;
    • Social media accounts and pages were created to amplify news accounts of and direct people to the stolen data.
    • From there, organic proliferation of the messaging and data through authentic peer groups and networks was inevitable.

    Although Facebook admitted it was the unwitting host of a disinformation campaign during the election, the company said that the reach of this operation was “statistically very small” in comparison with overall political activity and engagement.

    Facebook also said it did not have enough data to definitively attribute the campaign to its creators, but it nodded to a report published by the Director of National Intelligence that attributed hacking campaigns during the election season to Russian operatives, and said that Facebook’s data does not contradict those findings.

    Putting the responsibility for fighting misinformation under the purview of its security team is an interesting move for Facebook, indicating that the company views the problem as a security risk similar to hacking or fraud.

    The company said it would continue to work directly with politicians and campaigns to make sure they use the social network securely.

    “Our dedicated teams focus daily on account integrity, user safety, and security, and we have implemented additional measures to protect vulnerable people in times of heightened cyber activity such as election periods, times of conflict or political turmoil, and other high profile events.”

     

    Source: TechCrunch

