Installing the Twitter Bullies Chrome Extension

Google Drive Download Link

This Chrome extension provides an interface for quickly viewing the classification of a Twitter user. Clicking an embedded button on a user's profile page returns the results for that user. Users are classified as one of: (a) Normal, (b) Aggressor, (c) Bully, (d) Spammer. You can also see results for your own profile by visiting your profile page.
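
For orientation, a content script for this kind of extension might look roughly like the sketch below. The endpoint URL, response shape, and button placement are illustrative assumptions, not the extension's actual implementation; see the source in TwitterBullies_v0 for the real code.

```typescript
// Hypothetical content-script sketch: inject a "Bully Test" button into a
// Twitter profile page and fetch a classification for the viewed account.
// The endpoint URL and response shape are placeholders.

type Classification = "Normal" | "Aggressor" | "Bully" | "Spammer";

interface ClassificationResult {
  screenName: string;
  label: Classification;
}

async function classifyUser(screenName: string): Promise<ClassificationResult> {
  // Placeholder endpoint; the real extension talks to a Heroku-hosted API.
  const response = await fetch(
    `https://example-bully-api.herokuapp.com/classify/${screenName}`
  );
  if (!response.ok) {
    throw new Error(`API returned ${response.status}`);
  }
  return response.json() as Promise<ClassificationResult>;
}

function injectButton(): void {
  const button = document.createElement("button");
  button.textContent = "Bully Test";
  button.addEventListener("click", async () => {
    // Derive the screen name from the profile URL, e.g. twitter.com/BarackObama.
    const screenName = window.location.pathname.split("/")[1];
    try {
      const result = await classifyUser(screenName);
      alert(`@${result.screenName} is classified as: ${result.label}`);
    } catch (err) {
      alert(`Classification failed: ${err}`);
    }
  });
  document.body.appendChild(button); // Real code would target the profile panel.
}

injectButton();
```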

Getting Started

These instructions will get you a copy of the extension up and running on your local machine for development and testing purposes.

Prerequisites

This project was developed and tested on Google Chrome v73.
If the extension doesn't work, please verify you are running v73 or higher by visiting chrome://version in your browser.
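
If you prefer to check programmatically, the major version can be parsed from the user agent string. This is a minimal sketch, not part of the extension:

```typescript
// Minimal sketch: parse the Chrome major version from the user agent string
// (e.g. "... Chrome/73.0.3683.86 ...") and warn if it is below 73.
const match = navigator.userAgent.match(/Chrome\/(\d+)/);
const majorVersion = match ? parseInt(match[1], 10) : NaN;

if (!(majorVersion >= 73)) {
  console.warn(`Chrome v73+ required; detected version: ${majorVersion}`);
}
```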

Installing

Download the entire TwitterBullies_v0 directory, located here, and unzip the archive to a convenient location on your machine.

Visit chrome://extensions and enable Developer Mode using the toggle in the top-right corner.

Click "Load unpacked" and select the "TwitterBullies_v0" directory.

It is important that you select this exact directory, as it is the direct parent of all the source code.
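
Chrome looks for manifest.json at the top level of the selected directory, which is why the direct parent matters. A minimal Manifest V2 file (the format current as of Chrome v73) looks roughly like the sketch below; the name, version, and script filename are illustrative, and it is expressed as a TypeScript object literal only to keep one language across the examples in this document (in the real extension it is a manifest.json file):

```typescript
// Illustrative shape of a minimal Manifest V2 extension manifest.
const manifest = {
  manifest_version: 2,
  name: "Twitter Bullies",       // placeholder name
  version: "0.1",                // placeholder version
  content_scripts: [
    {
      matches: ["https://twitter.com/*"],
      js: ["content.js"],        // placeholder script filename
    },
  ],
};
```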

Refresh any open Twitter tabs and make sure you are logged in; you can then navigate to any user's profile page to view their results.

For example, visit https://twitter.com/BarackObama and select the "Bully Test" button on the left panel.

Please allow up to 15 seconds for your first request, as the API the extension connects to may need to wake up (it is hosted on Heroku's free tier, which puts idle apps to sleep).
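
If you are scripting against the API yourself, you may want to retry the first request while the dyno wakes up. A hypothetical sketch, with a placeholder URL:

```typescript
// Hypothetical sketch: retry a request a few times to give a sleeping
// Heroku free dyno time to wake up.
async function fetchWithRetry(
  url: string,
  retries = 3,
  delayMs = 5000
): Promise<Response> {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      const response = await fetch(url);
      if (response.ok) return response;
    } catch {
      // Network error; fall through and retry.
    }
    if (attempt < retries) {
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error(`No successful response from ${url} after ${retries} attempts`);
}
```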

Languages Used

Authors

License

This project is licensed under the MIT License; see the LICENSE.md file for details.

Resources

Disclaimer

The results obtained from this tool may well be incorrect. The model on which the tool is based takes a variety of features from a user's profile and makes a prediction using a previously trained random forest classifier. The model was trained on approximately 500 users, each manually labelled with one of the four classifications, and achieved 61.6% accuracy, which is far from perfect. If you test a user who typically tweets very offensive material and the result doesn't make sense, the error is likely on the model's part; likewise, if the tool labels a very wholesome user an aggressor or bully, I apologize for the model's error.

Another factor that can lead to misclassification is the way features are sampled from a user's profile. For performance reasons, not all of a user's tweets can be sampled: only the 20 most recent tweets, replies, and retweets are selected for feature extraction, so historical behavior does not factor into the result.
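
To make the sampling window concrete, here is a hedged sketch of how features might be extracted from the 20 most recent items. The Tweet shape and the particular features shown are illustrative assumptions, not the project's actual feature set:

```typescript
// Illustrative sketch of the sampling window described above: only the 20
// most recent tweets, replies, and retweets feed the classifier, so older
// behavior is invisible to it.
interface Tweet {
  text: string;
  isRetweet: boolean;
  isReply: boolean;
}

const SAMPLE_SIZE = 20;

function extractFeatures(recentTweets: Tweet[]): number[] {
  const sample = recentTweets.slice(0, SAMPLE_SIZE); // most recent items only
  if (sample.length === 0) return [0, 0, 0];
  const replyRatio = sample.filter((t) => t.isReply).length / sample.length;
  const retweetRatio = sample.filter((t) => t.isRetweet).length / sample.length;
  const avgLength =
    sample.reduce((sum, t) => sum + t.text.length, 0) / sample.length;
  // The trained random forest consumes a fixed-length feature vector.
  return [replyRatio, retweetRatio, avgLength];
}
```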