Instagram will remove fake likes and follows
Instagram says it has built machine learning-powered moderation tools that will help identify which accounts use these services and automatically remove the likes, follows, and comments. Accounts identified as using third-party apps to boost popularity will be notified within Instagram that their fake likes have been removed. They’ll also be prompted to change their passwords, in case the third-party apps have compromised their account security.
As Instagram grows into a platform for influencers and brands to hawk more products, more accounts will inevitably turn to third-party apps to artificially boost the popularity of posts. Just this week, The New York Times reported on the phenomenon of “nanoinfluencers”: people with as few as 1,000 followers now trying to earn free products in exchange for advertising those items on Instagram. As with Twitter’s crackdown on bots, weeding out fraudulent activity is something Instagram will need to keep addressing if it wants to protect the integrity of its ad business.
Although Instagram has long removed fake accounts, it hasn’t taken action against fake likes before. According to the company’s press release, the platform plans to take further measures against fake activity in the coming weeks.
Twitter now lets you report accounts that you suspect are bots
Twitter has updated a portion of its reporting process, specifically for when you report a tweet that you think might be coming from a bot or a fake account masquerading as someone or something else. Now, when you tap the “it’s suspicious or spam” option in the report menu, you’ll be able to specify why you think so, including an option to say “the account tweeting this is fake.”
Twitter announced the change through its official safety account today, and it’s now live on both the web version and mobile version of the service.
Of course, while this change provides users some much-needed granularity in the reporting process, it’s still unclear what happens after you’ve sent the report off to Twitter’s safety team. We don’t know whether flagging an account as fake will increase the likelihood that it gets banned, and that’s likely for good reason: coordinated groups could otherwise flood the system with bad-faith reports in an attempt to get a genuine account deemed fake and banned as a result.
According to a Twitter spokesperson, “The new reporting flow will allow us to collect more detailed information so we can identify and remove spam more effectively. With more details to review, we’ll be adding more resources to our review processes.”
Knowing that, and knowing Twitter has been taking spam, bots, and its overall fake account problem more seriously of late, there’s a good chance this reporting change could lead to more proactive bans. The company said back in July that it had removed 70 million accounts in May and June of this year for violating its policies on malicious and spammy behavior. Twitter even lost users quarter over quarter, as noted in its most recent earnings report, as a result of its bot crackdown.
So while it may be bad for business in the short term, it’s clear Twitter sees the overall integrity of its platform as an important long-term goal.