If Drake is Worried About Bots, You Should Be Too
Manipulation techniques spreading across the online world
The BBC reported this week that Drake is suing the world’s largest music company, Universal Music Group, and Spotify, the world’s premier music streaming service, for using bots to artificially promote his nemesis Kendrick Lamar’s wildly popular diss track. We track online manipulation, so we decided to take a look at whether the claim stacks up.
Two of the world’s biggest hip hop artists, Drake and Lamar have a long-running “beef” that at one point saw Drake call Lamar a “midget”, while Drake himself was accused of getting a Brazilian butt lift. The stand-off culminated with Lamar releasing a track called “Not Like Us”, in which he accused Drake of relationships with underage women.
Drake’s lawsuit claims that bots, rather than real audience interest, drove the track’s popularity. The use of bots (and of manipulation more generally) is becoming more common in high-profile, reputation-related controversies. Since 2020, we have seen bots move from conflicts and politics into more mainstream sectors of society as a wider array of bad actors realise their potential.
Underground Cottage Industry with Huge Reach
Court documents filed in New York by Drake’s lawyers state that Lamar’s record label, UMG, paid “currently unknown parties to use ‘bots’ to artificially inflate the spread of Not Like Us and deceive consumers into believing the Song was more popular than it was in reality”. The claim goes on to state that an unnamed individual has publicly spoken about being paid $5,000 and a cut of the song’s revenue to make it into a “crazy hit” on Spotify.
From our experience investigating the use of bots to influence stock prices and undermine politics, the request and the payment structure sound plausible. However, we have not previously come across the use of bots on Spotify, so instead of investigating the whistleblower’s claim, we looked into the underworld of “botting” on the streaming platform.
We found that the whistleblower in question likely comes from an underground community of individuals who make money from social media platforms by spoofing plays, streams and views, which the platforms then pay for. In the case of Spotify, we found multiple influencers offering to teach people looking to make money online how to generate music tracks with AI and post them on the streaming platform. The aspiring revenue scrapers are then directed to a website where they can pay for a certain number of plays of their songs.
Fig. 1 (below): a screen grab from a website offering the services of bot farms that play online music tracks to artificially boost their listener numbers and earn money from Spotify.
The voice-over in one of the instruction videos claims:
“They (the website) are the only providers of cheap and profitable streams that Spotify can’t detect. They are able to accomplish this with an army of bots… Spotify will pay you $3 for every thousand streams, so every dollar you spend on streams, you will get around seven dollars back”. The influencer goes on to claim they spend $200–300 per month buying streams, which earns them around $2,000 a month.
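Taken at face value, the pitch is internally consistent. Here is a minimal sketch of the arithmetic, using only the influencer’s own claimed figures (the payout rate and return multiple are their claims, not verified Spotify numbers):

```python
# A sanity check of the influencer's claimed economics. All figures below
# are the influencer's own claims, not verified Spotify payout rates.

PAYOUT_PER_1000 = 3.00   # claimed: Spotify pays $3 per 1,000 streams
RETURN_MULTIPLE = 7      # claimed: every $1 spent on streams returns ~$7

# Implied price the bot farm charges per 1,000 fake streams
cost_per_1000 = PAYOUT_PER_1000 / RETURN_MULTIPLE
print(f"Implied cost of bought streams: ${cost_per_1000:.2f} per 1,000")

# Check the claimed monthly figures against the claimed multiple
monthly_spend = 250      # claimed: $200-300 per month (midpoint)
streams_bought = monthly_spend / cost_per_1000 * 1000
monthly_payout = streams_bought / 1000 * PAYOUT_PER_1000
print(f"Streams bought per month: {streams_bought:,.0f}")
print(f"Implied monthly payout: ${monthly_payout:,.0f}")  # roughly $1,750
```

At the influencer’s own numbers, a $250 monthly spend buys roughly 580,000 fake streams and returns around $1,750, broadly in line with the claimed $2,000 a month, which is exactly what makes the pitch seductive.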
In reality, the figures are vastly inflated by influencers whose real business model is to hook in aspiring scrapers and sell them instruction manuals.
Very few people have a handle on this nascent but fast-growing revenue-scraping cottage industry; we know of only two academics studying it. What we do know is that it involves multiple players all trying to make money from each other: bot farmers create and nurture fake accounts, increasingly with the help of AI; service providers aggregate bot farms and build out advertising, marketing and payment functions; and the ultimate customers are hundreds of thousands (perhaps millions) of tech-savvy but cash-poor young people across the world looking to make an easy few dollars a month.
No Help Coming from Governments or Platforms
Drake’s diss-track woes have brought the problem to light on Spotify, but we have seen similar methods used on YouTube. The phenomenon could be dismissed as an issue that only hurts the bottom line of highly profitable social media platforms, but the way the methodology works increases the risk of collateral damage to unrelated organisations and individuals.
For example, a Valent investigation into the race riots that broke out in the UK over the summer showed that a revenue scraper running a news-aggregation website designed to maximise views had played a key role in spreading the false claim that a Muslim immigrant had committed a murder, which sparked the riots. Although initial reports suggested the claim was disinformation from a hostile state, we concluded it had been spread unintentionally, which raises even more concerning questions about the risk autonomous processes could pose in the future.
Drake’s claim touches on the potential for mainstream actors to tap into this underworld. As such activity becomes increasingly affordable, risk-free and effective, the overall impact on our information environment risks becoming catastrophic. At the macro level, the basis of any collective decision-making in society is undermined. At the micro level, every public organisation and public figure is left at the mercy of potential attackers whose motivations are impossible to predict.
Legacy media, lawmakers and regulators are still struggling with the idea of bots being used to affect real-world outcomes through the manipulation of narratives. The decentralised nature of revenue scraping, and the potential for bad actors to tap into this underground world, is not yet on their radar. These issues are likely to grow more dangerous as social media platforms scale back moderation and the incoming US administration promises to reduce legislative oversight. In the coming years, threats are likely to increase while protection is left to individual organisations.
If you would like to know more about Ariadne, our AI tool designed to help you see what's real online, please…