Posts tagged facebook
The new series of BBC Rip Off Britain kicks off this week and once again I’m helping to shine a light on the digital shams and scams that have been plaguing viewers across the country.
Such as this one, where Facebook fraudsters buy or cultivate pages with thousands of likes, then rename the page and clone their victim’s shopfront before defrauding their customers:
It can be difficult for shoppers to know which pages are real and which are fakes.
For this film I created an almost identical clone of the BBC Rip Off Britain Facebook page within a matter of minutes. It’s also a challenge for owners of Facebook pages, who can feel powerless to stop scammers ripping off both their business and their customers.
My advice for Facebook page owners – and for visitors to those pages – is to look out for Facebook verification badges. These grey or blue ticks alongside the profile name indicate that the page has been vetted by Facebook, with official documentation provided in some cases, and can reasonably be expected to be the real deal. Page owners can request a grey tick by following Facebook’s verification process.
To find out more about this – and other digital rip-offs – tune in to BBC One, weekdays 9.15 to 10.00am, or watch on-demand on BBC iPlayer.
In today’s Metro, I ask how the tech firms are tackling online abuse.
Despite the efforts of social networks such as Twitter and Facebook, many of the internet’s most popular destinations remain troubled by trolls.
When the trolls are in town, popular social platforms become unpleasant, unsocial places, not a carefree online destination to catch up with family and friends.
Some of those accused may claim they are exercising free speech, but that doesn’t wash if the intent is to cause alarm or distress. Hurling abuse at somebody isn’t free speech, it’s hurling abuse at somebody.
So, isn’t it high time that tech firms stepped up their game to tackle the online abuse that runs riot on their platforms?
That’s what I examine in How tech is tackling trolls: how artificial intelligence, machine learning and image recognition are being deployed to disarm the trolls who terrorise the web.
However, there’s another angle to this that I’d like briefly to expand upon here: social networks need to tackle online abuse not only for their users’ sakes but for their investors’.
You see, for online social platforms driven by advertising – which is most of them – it is impossible to ignore the economics of trolling.
Economics of Trolling
Social networks are based on the principle that we humans are social creatures who like to express ourselves. The more we share, the more the networks know about us, and the more able they are to sell targeted advertising (ads that are, in theory, more relevant to us) on behalf of their partners.
Overall, it’s a happy relationship, and the numbers speak for themselves: almost 2 billion of us log in to Facebook every month to share status updates, likes and photos, from which it made almost $10 billion in 2016.
However, fear of unsocial behaviour on social platforms makes us more reluctant to express ourselves online; the less we share, the less the networks know and the less we visit, so the more it hurts their ad revenues. The likes of Facebook and Twitter make nothing if we’re too afraid to use them.
Facebook and Twitter make nothing if we’re too afraid to use them
Twitter: We Suck
There are other ways in which the economics of online abuse can hurt too. Last year, Disney dropped its plans to buy Twitter over concerns that widespread trolling and bullying on the platform might, according to Bloomberg, ‘soil the company’s wholesome family image’.
Months before, Twitter boss Dick Costolo wrote, “We suck at dealing with abuse and trolls,” adding “It’s no secret, and the rest of the world talks about it every day.”
It does: just ask Leslie Jones, Katie Price, Zelda Williams, and countless others who have made the news after leaping from the toxic platform, having unwittingly stirred the trolls’ nest.
So, clamping down on unsocial behaviour is an obvious investment for businesses that rely on us being socially generous.
As I explore in the Metro feature, technology can go some way to weeding out abuse, but the trouble with automated tools is where the boundaries blur between abuse and robust argument. Even human moderators struggle with this and, in my opinion, AIs will too for a while yet.