
Who’s a bot and who’s not

Bots are constantly evolving – with fast-paced advancements in AI, it’s possible to create ever more realistic bots that mimic more and more how we talk and interact on online platforms.


Bots are social media accounts controlled by software rather than by humans, and they serve a variety of purposes, from news aggregation to automated customer assistance for online retailers. Recently, however, bots have come under the spotlight because they are regularly employed in large-scale efforts on social media to manipulate public opinion, such as during electoral campaigns.

A new study in Frontiers in Physics has revealed short-term behavioral trends in humans that are absent in social media bots, providing an example of a ‘human signature’ on social media that could be leveraged to develop more sophisticated bot detection strategies. It is the first study of its kind to apply user behavior over the course of a social media session to the problem of bot detection.

“Remarkably, bots continuously improve to mimic more and more of the behavior humans typically exhibit on social media. Every time we identify a characteristic we think is a prerogative of human behavior, such as the sentiment or topics of interest, we soon discover that newly-developed open-source bots can now capture those aspects,” says co-author Emilio Ferrara, Assistant Professor of Computer Science and Research Team Leader at the University of Southern California Information Sciences Institute.

In this work, the researchers studied how the behavior of humans and bots changed over the course of an activity session, using a large Twitter dataset associated with recent political events. Over these sessions, they measured various indicators of user behavior, including the propensity to engage in social interactions and the amount of content produced, and then compared the results between bots and humans.
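The article does not specify how an activity session is delimited. A common convention, used in the hedged sketch below, is to group a user’s consecutive tweets into one session whenever the gap between them stays under a fixed inactivity threshold; the 30-minute cutoff and the tweet-record layout are illustrative assumptions, not parameters from the study.

```python
from datetime import timedelta

# Assumption: a session is a maximal run of one user's tweets in which
# consecutive tweets are separated by less than a fixed inactivity gap.
# The 30-minute threshold and the dict keys are illustrative only.
SESSION_GAP = timedelta(minutes=30)

def split_into_sessions(tweets):
    """Group one user's tweets into activity sessions.

    `tweets` is a list of dicts carrying a datetime under 'created_at'
    (a hypothetical layout, not the study's actual schema).
    """
    tweets = sorted(tweets, key=lambda t: t["created_at"])
    sessions, current = [], []
    for tweet in tweets:
        if current and tweet["created_at"] - current[-1]["created_at"] >= SESSION_GAP:
            sessions.append(current)
            current = []
        current.append(tweet)
    if current:
        sessions.append(current)
    return sessions
```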

To study the behavior of bot and human users over an activity session, the researchers focused on indicators of the quantity and quality of a user’s social interactions, including the number of retweets, replies and mentions, as well as the length of the tweet itself. They then used these behavioral results to inform a classification system for bot detection, testing whether including features that describe session dynamics improved the detector’s performance. A range of machine learning techniques were used to train two different sets of classifiers: one including the features describing the session dynamics and one without those features, as a baseline.
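The article names the general approach (train one classifier with session-dynamics features and one without, then compare) but not the specific algorithms or feature encodings. A minimal sketch of that comparison, assuming a random-forest classifier, synthetic data, and made-up feature groupings, none of which come from the study:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per account.
# Columns 0-3 stand in for static account features; columns 4-7 stand in
# for session-dynamics features (e.g., trend in retweet fraction, trend in
# tweet length). The data are synthetic, so the printed accuracies mean
# nothing; only the with/without comparison mechanics are illustrated.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = rng.integers(0, 2, size=1000)  # 1 = bot, 0 = human (synthetic labels)

# Baseline: static features only; full model: static + session dynamics.
acc_baseline = cross_val_score(
    RandomForestClassifier(random_state=0), X[:, :4], y, cv=5).mean()
acc_full = cross_val_score(
    RandomForestClassifier(random_state=0), X, y, cv=5).mean()
print(f"baseline accuracy: {acc_baseline:.3f}, full model: {acc_full:.3f}")
```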


The researchers found trends among humans that were not present among bots. Humans showed an increase in the amount of social interaction over the course of a session, illustrated by a rising fraction of retweets, replies and mentions in their tweets. Humans also showed a decrease in the amount of content produced, illustrated by a decreasing trend in average tweet length. These trends are thought to arise because, as sessions progress, human users grow tired and become less likely to undertake complex activities such as composing original content.

Another possible explanation is that as a session goes on, users are exposed to more posts, which increases the probability that they react to and interact with content. In either case, bots were unaffected by such factors, and no comparable behavioral change was observed in them.
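One way to quantify such within-session trends is to fit a linear trend of each behavioral indicator against a tweet’s position in its session: under the human pattern reported here, the slope of the retweet fraction would be positive and the slope of tweet length negative. The sketch below is an illustration under assumed field names, not the study’s own statistical procedure.

```python
import numpy as np

def session_slopes(session):
    """Fit linear within-session trends for interaction and content length.

    `session` is a chronologically ordered list of tweet dicts; the keys
    'is_retweet' and 'text' are illustrative assumptions.
    """
    if len(session) < 2:
        return 0.0, 0.0  # too short to fit a trend
    positions = np.arange(len(session))
    retweet_flags = [float(t["is_retweet"]) for t in session]
    lengths = [len(t["text"]) for t in session]
    # np.polyfit with degree 1 returns (slope, intercept).
    retweet_slope = np.polyfit(positions, retweet_flags, 1)[0]
    length_slope = np.polyfit(positions, lengths, 1)[0]
    return retweet_slope, length_slope

# Human pattern per the study: retweet_slope > 0 (more social interaction
# late in a session) and length_slope < 0 (shorter tweets as fatigue sets in).
```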

The researchers used these behavioral results to inform a classification system for bot detection, and found that the full model, which included the features describing session dynamics, significantly outperformed the baseline model, which lacked them, in bot detection accuracy.

These results highlight that user behavior on social media evolves in measurably different ways for bots and humans over an activity session, and they suggest that these differences can be exploited to build new bot detection systems or to improve existing ones.

Ferrara highlights: “Bots are constantly evolving – with fast paced advancements in AI, it’s possible to create ever-increasingly realistic bots that can mimic more and more how we talk and interact in online platforms.”


“We are continuously trying to identify dimensions that are particular to the behavior of humans on social media that can in turn be used to develop more sophisticated toolkits to detect bots.”

