- On Thursday, Instagram announced changes to its feed to make newer posts appear higher up for users. The change was prompted by user feedback criticizing a move made a couple of years ago from a chronological feed to one curated by machine learning algorithms, which prioritized posts users were more likely to like and interact with.
- Back in 2012, Instagram CEO Kevin Systrom said, "We're also going to be a Big Data company." Today, like most social media companies, Instagram runs a heavy Big Data and artificial intelligence operation to power user feeds, run targeted advertising, personalize discovery and search results, filter spam and conduct large-scale studies of human behavior and preferences, reports Forbes.
- Machine learning has also helped the company reduce cyberbullying and trolling by pinpointing inappropriate words and phrases in comments, said Systrom in an interview with WIRED. The DeepText system, which uses word embeddings to determine context, was brought on from Facebook in 2016 and has been taught to pinpoint obvious cases of spam and mean comments (such as an explicit word) as well as more subtle slurs and harassment.
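The word-embedding approach behind systems like DeepText can be sketched in a few lines. The snippet below is a minimal illustration only: the vectors, vocabulary, and nearest-centroid rule are all invented for the example (real systems learn embeddings from large corpora and train a proper classifier on top).

```python
import numpy as np

# Toy 4-dimensional word embeddings (hypothetical values; systems like
# DeepText learn these from large text corpora).
EMBEDDINGS = {
    "great":  np.array([ 0.9,  0.1, 0.0, 0.0]),
    "photo":  np.array([ 0.5,  0.4, 0.1, 0.0]),
    "idiot":  np.array([-0.8, -0.6, 0.1, 0.0]),
    "stupid": np.array([-0.9, -0.5, 0.0, 0.1]),
    "love":   np.array([ 0.8,  0.3, 0.0, 0.0]),
}

def embed(comment: str) -> np.ndarray:
    """Represent a comment as the average of its known word vectors."""
    vecs = [EMBEDDINGS[w] for w in comment.lower().split() if w in EMBEDDINGS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(4)

# Centroids built from tiny hand-labeled examples; in practice a trained
# classifier replaces this nearest-centroid comparison.
ABUSIVE = embed("stupid idiot")
BENIGN = embed("great photo love")

def is_abusive(comment: str) -> bool:
    """Flag a comment if it sits closer (by dot product) to the abusive centroid."""
    v = embed(comment)
    return float(np.dot(v, ABUSIVE)) > float(np.dot(v, BENIGN))

print(is_abusive("stupid photo"))     # True
print(is_abusive("love this photo"))  # False
```

Because embeddings place similar words near each other, a system like this can generalize beyond an explicit blocklist, which is how subtler slurs and harassment become detectable.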
> We're rolling out some changes to make feed feel fresher and also fix that super annoying thing when you're scrolling and then IG refreshes and you lose your place. @cgartenberg with the details: https://t.co/nN9E0pcFA2
>
> — Mike Krieger (@mikeyk) March 22, 2018
Instagram has around 800 million monthly users and, outside of social functions, serves as an important business and advertising platform for thousands of companies. Its parent company, Facebook, is considered a world leader in AI, outpacing even Alphabet in a recent global AI index ranking.
AI capabilities are an augmentation of, not a replacement for, human work, but striking the right balance can be tricky for any company. Social media companies like Instagram and Facebook are placed under especially strong scrutiny for the influence and power they have over consumers' daily lives — especially in light of the Cambridge Analytica scandal.
Many of the ML applications in Instagram's product go unnoticed by users, as intended. But Instagram therefore has a fine line to walk in ensuring its ML systems do not generate false positives when identifying inappropriate content.
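One common way to control false positives is to tune the decision threshold a classifier's score is compared against. The sketch below uses made-up scores and labels purely for illustration (a real system would calibrate the threshold on labeled validation data).

```python
# Each entry: (comment text, hypothetical model score in [0, 1],
# whether a human would actually label it abusive).
comments = [
    ("you are an idiot", 0.92, True),
    ("that pass was idiotic, great game though", 0.55, False),
    ("love this", 0.05, False),
]

def false_positives(threshold: float) -> int:
    """Count benign comments the model would wrongly flag at this threshold."""
    return sum(1 for _, score, abusive in comments
               if score >= threshold and not abusive)

# A low threshold flags borderline benign comments; raising it trades
# recall (catching more abuse) for precision (fewer wrongful removals).
print(false_positives(0.5))  # 1
print(false_positives(0.7))  # 0
```

The trade-off is unavoidable: a threshold strict enough to never silence a legitimate comment will also let some genuine harassment through, which is why such systems typically keep humans in the review loop.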
After all, AI and ML algorithms have many well-documented social blunders to their name, especially on gender and racial issues. The onus falls on management to assemble diverse teams to train these algorithms, an approach that has been shown to reduce bias in such systems.