British lawmakers on Monday accused Facebook of “intentionally and knowingly” violating data privacy and anti-competition laws as they called for social media companies to assume clear legal liabilities for content shared on their platforms.

“Social media companies cannot hide behind the claim of being merely a platform,” said a major report by the Digital, Culture, Media and Sport Committee released on Monday. “They cannot maintain that they have no responsibility themselves in regulating the content of their sites.”

The committee, which reviewed a trove of internal Facebook emails, accused the tech giant of being “willing to override its users’ privacy settings in order to transfer data to some app developers.’’

The lawmakers also accused chief executive Mark Zuckerberg of showing “contempt” of the British parliament by choosing not to appear before the committee or “respond personally to any of our invitations.”

The committee called for the establishment of a compulsory code of ethics overseen by an independent regulator to draw up a rulebook of acceptable and unacceptable behaviour on social media.

“The process should establish clear, legal liability for tech companies to act against agreed harmful and illegal content on their platform,’’ the report said.

The regulator should have the ability to launch legal proceedings “with the prospect of large fines being administered” against non-compliant companies.

The committee also called for electoral law to be changed “to reflect changes in campaigning techniques” and for “absolute transparency of online political campaigning.”

Facebook's user numbers have continued to rise despite a series of data privacy scandals and criticism of its attempts to stem toxic content.

The social media giant said the number of people who logged into its site at least once a month jumped 9% last year to 2.32 billion people.

Fears that the firm's scandals could put off advertisers also proved unfounded, with annual revenues up 30% on the previous year.

The rise came despite campaigns which urged people to shun the tech giant.

Founder Mark Zuckerberg said the firm had "fundamentally changed how we run the company to focus on the biggest social issues".

The strong financial performance comes amid continuing concerns over how the social media firm handles users' personal data and privacy after the Cambridge Analytica data sharing scandal and fears the network has been used as a political tool.

The company's shares have lost almost a third of their value since July, when it warned about slowing revenue growth, and they remain near a two-year low.

But they jumped over 9% in after-hours trading after profit and revenue beat analyst forecasts.

Facebook's total profit for 2018 was $22.1bn (£16.9bn), up 39% on 2017.

User growth was particularly strong in India, Indonesia and the Philippines, but flat in the US and Canada.

Despite this, profits are up by almost 40%. Facebook isn't just surviving, it's thriving.

In the face of severe turbulence, Mark Zuckerberg's company has proven to be resilient.

But while users appear to be turning a blind eye, the same can't be said for regulators - Facebook knows huge fines are likely coming its way.

The question is how damaging those fines will be, and what other measures might be put in place to clip the wings of a company that many lawmakers feel is too powerful.

George Salmon, analyst at Hargreaves Lansdown, said Facebook's revenue growth in the final three months of the year was its weakest since the firm listed on the Nasdaq stock exchange in 2012, but said the figures were still "reassuring".

"Only time will tell if Mark Zuckerberg's ambitious plans to revolutionise Facebook pay off, but these results will go a long way towards regaining the trust of Wall Street - analysts had been jittery after a tumultuous 2018 which included the trials and tribulations of the Cambridge Analytica scandal and a reset on strategy," he added.

The attorney general for Washington, D.C. said on Wednesday the U.S. capital city had sued Facebook Inc for allegedly misleading users about how it safeguarded their personal data, in the latest fallout from the Cambridge Analytica scandal.

The world’s largest social media company has drawn global scrutiny since disclosing earlier this year that a third-party personality quiz distributed on Facebook gathered profile information on 87 million users worldwide.

It sold the data to British political consulting firm Cambridge Analytica.

Washington, D.C. Attorney General Karl Racine said Facebook misled users because it had known about the incident for two years before disclosing it.

The company had told users it vetted third-party apps, yet made few checks, Racine said.

Facebook said in a statement: “We’re reviewing the complaint and look forward to continuing our discussions with attorneys general in DC and elsewhere.”

Facebook could face a civil penalty of $5,000 per violation of the district's consumer protection law, or potentially close to $1.7 billion if penalized for each consumer affected.

The lawsuit alleges the quiz software had data on 340,000 D.C. residents, though just 852 users had directly engaged with it.

Shares in the company were down 4.7 per cent in afternoon trade on Wednesday.

Privacy settings on Facebook to control what friends on the network could see, and what data could be accessed by apps, were also deceptive, Racine said.

“Facebook’s lax oversight and confusing privacy settings put the information of millions of consumers at risk,” he told reporters on Wednesday.

“In our lawsuit, we’re seeking to hold Facebook accountable for jeopardizing and exposing the information” of its customers.

Racine said Facebook had tried to settle the case before he filed the lawsuit, as is typical during investigations of large companies.

He described Facebook’s cooperation as “reasonable,” but said that a lawsuit was necessary “to expedite change” at the company.

At least six U.S. states have ongoing investigations into Facebook’s privacy practices, according to state officials.

In March, a bipartisan coalition of 37 state attorneys general wrote to the company, demanding to know more about the Cambridge Analytica data and its possible links to U.S. President Donald Trump’s election campaign.

Also in March, the Federal Trade Commission took the unusual step of announcing that it had opened an investigation into whether the company had violated a 2011 consent decree.

It cited media reports that raised what it called “substantial concerns about the privacy practices of Facebook.”

If the FTC finds Facebook violated the decree terms, it has the power to fine it thousands of dollars a day per violation, which could add up to billions of dollars.

State attorneys general from both major U.S. political parties have stepped up their enforcement of privacy laws in recent years, said James Tierney, a lecturer at Harvard Law School and Maine’s former attorney general.

Uber Technologies Inc in September agreed to pay $148 million as part of a settlement with 50 U.S. states and Washington, D.C., which investigated a data breach that exposed personal data from 57 million Uber accounts.

Facebook's secretive research lab, where the company developed new hardware like its Portal speakers and researched moonshot projects like brain computer interfaces, is no more.

Building 8, the division Facebook created in 2016 to house some of its most ambitious projects, has been disbanded and the projects have been redistributed to other groups within the social media company.

The change, which was first reported by Business Insider, marks the end of the "Building 8" brand, though the group's work will continue.

Facebook created Building 8 in 2016, with CEO Mark Zuckerberg committing to pour "hundreds of millions of dollars into this effort over the next few years" in order "to advance our mission of connecting the world." To lead the new Building 8 work, the company poached former DARPA head Regina Dugan from Google, where she oversaw the company's Advanced Technology and Projects (ATAP) group.

The following year, Dugan wowed crowds at the company's F8 developer conference where she showed off some of the company's research, including a brain computer interface and tech that would let people "hear" through their skin.

Dugan left Facebook at the start of this year, saying "the timing feels right to step away and be purposeful about what's next."

Work at Building 8 continued on, most prominently on Portal, the company's first non-VR hardware product. The Facebook-connected speakers were the first consumer products to come out of Building 8. The company is also reportedly working on a camera-equipped TV set-top box that would use the same software as Portal.

Now, thanks to BI, we know that behind the scenes Facebook has separated the Portal team into its own group, which oversees Facebook's other "unannounced hardware projects." Meanwhile, Building 8's researchers have been shuffled to Facebook Reality Labs (FRL), another new group at Facebook led by the company's top VR researcher, Michael Abrash. The FRL group was created in May, around the same time Facebook announced a bigger reorganization among its top executives.

A Facebook spokesperson confirmed to BI that the Building 8 brand was no more, but said the company continues to work on the same projects and hasn't laid off any employees as a result of the restructuring.

Building 8 was the early name of the team building consumer hardware at Facebook. Building 8 is part of Facebook's AR/VR organization. Now that we're shipping, it's the Portal team. And Rafa Camargo is still leading the team; that has not changed. We also unified research looking at longer-term projects under one team, which became Facebook Reality Labs, which is also part of our AR/VR organization. This includes research projects like the Brain Computer Interface.

The bitter truth buried in recent headlines about how the political consulting company Cambridge Analytica used social media and messaging, primarily Facebook and WhatsApp, to try to sway voters in presidential elections in the US and Kenya is simply this: Facebook is the reason why fake news is here to stay.

Various news outlets, and former Cambridge Analytica executives themselves, confirmed that the company used campaign speeches, surveys, and, of course, social media and social messaging to influence Kenyans in both 2013 and 2017.

The media reports also revealed that, working on behalf of US President Donald Trump’s campaign, Cambridge Analytica had got hold of data from 50 million Facebook users, which it sliced and diced to come up with “psychometric” profiles of American voters.

The political data company’s tactics have drawn scrutiny in the past, so the surprise of these revelations came more from the “how” than the “what.” The real stunner was learning how complicit Facebook and WhatsApp, which is owned by the social media behemoth, had been in aiding Cambridge Analytica in its work.

The Cambridge Analytica scandal appears to be symptomatic of much deeper challenges that Facebook must confront if it’s to become a force for good in the global fight against false narratives.

These hard truths include the fact that Facebook’s business model is built upon an inherent conflict of interest. The others are the company’s refusal to take responsibility for the power it wields and its inability to come up with a coherent strategy to tackle fake news.

Facebook’s biggest challenges

Facebook’s first issue is its business model. It has mushroomed into a multibillion-dollar corporation because its revenue comes from gathering and using the data shared by its audience of 2.2 billion monthly users.

Data shapes the ads that dominate our news feeds. Facebook retrieves information from what we like, comment on and share; the posts we hide and delete; the videos we watch; the ads we click on; the quizzes we take. It was, in fact, data sifted from one of these quizzes that Cambridge Analytica bought in 2014. Facebook executives knew of this massive data breach back then but chose to handle the mess internally. They shared nothing with the public.

This makes sense if the data from that public is what fuels your company’s revenues. It doesn’t make sense, however, if your mission is to make the world a more open and connected place, one built on transparency and trust. A corporation that says it protects privacy while also making billions of dollars from data sets itself up for scandal.

This brings us to Facebook’s second challenge: its myopic vision of its own power. As repeated scandals and controversies have washed over the social network in the last couple of years, CEO Mark Zuckerberg’s response generally has been one of studied naivete. He seems to be in denial about his corporation’s singular influence and position.

Case in point: When it became clear in 2016 that fake news had affected American elections, Zuckerberg first dismissed that reality as “a pretty crazy idea.” In this latest scandal, he simply said nothing for days.

Throughout the world, news publishers report that 50% to 80% of their digital traffic comes from Facebook. No wonder Google and Facebook control 53% of the world’s digital and mobile advertising revenue. Yet Zuckerberg still struggles to accept that Facebook’s vast audience and its role as a purveyor of news and information combine to give it extraordinary power over what people consume, and by extension, how they behave.

All of this leads us to Facebook’s other challenge: its inability to articulate, and act on, a cogent strategy to attack fake news.

The fake news phenomenon

When Zuckerberg finally surfaced last month, he said out loud what a lot of people were already thinking: there may be other Cambridge Analyticas out there.

This is very bad news for anyone worried about truth and democracy. For in America, fake news helped to propel into power a man whose presidential campaign may have been a branding exercise gone awry. But in countries like Kenya, fake news can kill.

Zuckerberg and his Facebook colleagues must face this truth. Fake news may not create tribal or regional mistrust, but inflammatory videos and posts shared on social media certainly feed those tensions.

And false narratives spread deep and wide: In 2016, BuzzFeed News found that in some cases, a fake news story was liked, commented on and shared almost 500,000 times. A legitimate political news story might attract 75,000 likes, comments and shares.

After Zuckerberg was flogged for his initial statements about fake news, Facebook reached out to the Poynter Institute’s International Fact-checking Network in an effort to attack this scourge. Then in January 2018, the social network said that it was going to be more discriminating about how much news it would allow to find its way into the feeds of its users. In other words, more videos on cats and cooking, less news of any kind.

The policy sowed a lot of confusion and showed that Facebook is still groping for a way to respond to fake news. It was also evidence that the social network does not understand that fake news endangers its own existence as well as the safety and security of citizens worldwide, especially in young democracies such as Kenya.

Angry lawmakers in the US and Europe, along with a burgeoning rebellion among its vast audience, may finally grab Facebook’s attention. But we will only hear platitudes and see superficial change unless Facebook faces hard truths about its reliance on data, accepts its preeminent place in today’s media ecosystem and embraces its role in fighting fake news.

Social media giant Facebook has agreed to pay more than 100 million euros ($114 million) to end a fiscal fraud dispute, Italian tax authorities said Thursday.

Italy has already drawn similar agreements from Amazon, Apple and Google, joining EU neighbours seeking a bigger tax take from multinationals previously able to use loopholes allowing the booking of profits in countries with more favourable tax regimes.

The accord aims to “end the disagreement relating to tax inquiries undertaken by the financial police (GdF) at the behest of the Milan prosecutor for the period 2010-2016,” Italy’s tax authority said in a statement.

The authority added that Facebook Italy would be “making a payment of more than 100 million euros.”

Online retail behemoth Amazon agreed on a similar deal last December while in May last year Google agreed to pay 306 million euros to end a dispute relating primarily to 2009-2013 profits booked in Ireland.

Ireland has one of the lowest corporate tax rates in the European Union.

Apple had earlier, in December 2015, agreed to make payment of more than 300 million euros on Italian-generated profits dating back to 2008. (AFP)

Facebook has tried to shut down ads that illegally used South African billionaire Mark Shuttleworth as their "front man" - but hours later they popped up again.

Shuttleworth’s name and picture have been posted on adverts promoting cryptocurrency scams on Facebook and on fake news websites.

The images and ads look similar to those used for a scam called QProfit System, which surfaced earlier this year. That scam claims that its designer, Jerry Douglas, was asked by Shuttleworth to develop a cryptocurrency system. It also creates the impression that Shuttleworth is out for “revenge” after losing a R250.5 million lawsuit against the Reserve Bank.

Shuttleworth, who has an estimated fortune of R9.6 billion and is CEO of the open-source operating system provider Canonical, has denied any involvement in a blog post.

“I can’t comment on whether or not Jerry Douglas promotes a QProfit System and whether or not it’s fraud. But I can tell you categorically that there are many scams like this, and that this investment has absolutely nothing to do with me. I haven’t developed this software and I have no desire to defraud the South African government or anyone else. I’m doing what I can to get the fraudulent sites taken down. But please take heed and don’t fall for these scams,” wrote Shuttleworth.

Ads for the new version of the scam, which asks for an initial fee of $250 (R3,750), have recently appeared on Facebook. Here, Shuttleworth’s face and a false testimonial are used to promote apps called Bitcoin Trader and Bitcoin Revolution. The ads take users to different fake news websites, including a site called POIP News.

Facebook was alerted to the new scam ads last week. On Monday it responded by removing the ads.

“We do not allow adverts which are misleading or false on Facebook and encourage people to report any adverts that they believe infringe on their rights, or shouldn’t be on Facebook. As soon as these pages and ads were highlighted to us, we worked promptly to remove them and can confirm that several accounts and pages that violated our Advertising Policies have been taken down," said a spokesperson.

Facebook, Google and other tech firms have agreed a code of conduct to do more to tackle the spread of fake news, due to concerns it can influence elections, the European Commission said on Wednesday.

The voluntary code, intended to stave off more heavy-handed legislation, covers closer scrutiny of advertising on accounts and websites where fake news appears, and working with fact checkers to filter it out, the commission said.

However, a group of media advisors criticised the companies, which also include Twitter and lobby groups for the advertising industry, for failing to present more concrete measures.

Brussels, with EU parliamentary elections scheduled for May, is anxious to address the threat of foreign interference during campaigning.

Belgium, Denmark, Estonia, Finland, Greece, Poland, Portugal and Ukraine are also all due to hold national elections in 2019.

Russia has faced allegations, which it denies, of disseminating false information to influence the U.S. presidential election and Britain’s referendum on EU membership in 2016 as well as Germany’s national election in 2017.

The commission told the firms in April to draft a code of practice, or face regulatory action over what it said was their failure to do enough to remove misleading or illegal content.

European Digital Commissioner Mariya Gabriel said that Facebook, Google, Twitter, Mozilla and advertising groups had responded with several measures.

“The industry is committing to a wide range of actions, from transparency in political advertising to the closure of fake accounts and we welcome this,” she said in a statement.

The steps also include rejecting payment from sites that spread fake news, helping users understand why they have been targeted by specific ads, and distinguishing ads from editorial content.

However, the advisory group criticised the code, saying the companies had not offered measurable objectives to monitor its implementation.

“The platforms, despite their best efforts, have not been able to deliver a code of practice within the accepted meaning of effective and accountable self-regulation,” the group said, giving no further details.

Its members include the Association of Commercial Television in Europe, the European Broadcasting Union, the European Federation of Journalists and International Fact-Checking Network, and several academics.