While the wonders of social media are revealed with each passing day, there is also a dark side that we cannot deny.
On one hand, we see absolute nobodies rise to global fame; on the other, we hear news of social media users dying while Snapchatting or making TikTok videos every other day.
Cybercrime continues to grow, and so does the importance of battling it. While social media platforms do have rules, conditions, and terms of use in place, it can be extremely difficult to monitor every interaction that users make on a daily basis.
Recently, researchers from Binghamton University have developed algorithms to identify two specific types of what they deem to be “offensive online behavior” on Twitter – “cyberbullying” and “cyberaggression.”
According to the researchers, intending to insult another Twitter user once is enough to get an account classed as “aggressive” while doing it twice or more is a sign of “bullying.” The researchers also claim that Gamergate and “gender pay inequality at the BBC” are topics that are “more likely to be hate-related.” However, they don’t explain how they ascribe user intent.
According to the researchers, the new algorithm considers not only the language a user posts but also the accounts that the user follows when identifying offensive behavior.
The researchers go on to say that the algorithm can identify Twitter accounts engaging in what they deem to be bullying with “over 90% accuracy.” Additionally, they suggest that the algorithms could be used to find and delete what they define as “abusive accounts” on Twitter.
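The counting rule described above can be sketched in a few lines. This is a hypothetical simplification for illustration only: the actual Binghamton system is a trained classifier over language and follower-network features, while the toy function below merely mirrors the quoted thresholds (one insulting tweet labels an account "aggressive," two or more label it "bullying"). The function name and labels are assumptions, not the researchers' code.

```python
def label_account(insult_count: int) -> str:
    """Toy illustration of the thresholds quoted in the article.

    One insulting tweet -> "aggressive"; two or more -> "bullying".
    The real system reportedly also weighs the accounts a user
    follows, which this sketch ignores entirely.
    """
    if insult_count >= 2:
        return "bullying"
    if insult_count == 1:
        return "aggressive"
    return "normal"
```

For example, `label_account(1)` returns `"aggressive"` and `label_account(3)` returns `"bullying"`, matching the article's description of the rule.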
It seems like a commendable attempt to make the platform a safer space for communities and individuals, one where freedom of expression is not exploited to harm others.
What are your thoughts? Let us know in the comments!
Stay tuned to Brandsynario for more news and updates.