Hi! I'm Kai-Cheng Yang (杨凯程), the pronunciation is KY-cheng YAHNG. I also go by Kevin.
I'm a third-year Ph.D. student in Informatics at the School of Informatics, Computing, and Engineering at Indiana University Bloomington. I mainly work with Filippo Menczer, Yong-Yeol Ahn, and Brea L. Perry. Check out the Projects and Publications sections for what I have been working on.
Before joining the Ph.D. program at IU, I received my bachelor's and master's degrees in theoretical physics from Lanzhou University in China.
Botometer is a machine learning tool that extracts over 1,000 different features from a Twitter account and evaluates its likelihood of being a social bot. Botometer currently handles over 250,000 requests every day and serves as the foundation for many research projects.
Contribution: maintenance, training-data annotation, and model retraining
BotometerLite is a lightweight version of Botometer. Using a minimal set of features, BotometerLite can perform bot detection at Twitter Firehose volume in real time on a single desktop machine. Thanks to a novel evaluation framework and model selection method, it achieves accuracy comparable to Botometer's. Its simplified design also makes the results easier to interpret.
BEV is a tool that visualizes the activity of likely bots on Twitter around the 2018 US midterm elections. It lets users explore how active likely bots are on a daily basis in their efforts to influence online discourse about the elections, and shows which topics they target.
BotSlayer is an application that helps track and detect potential manipulation of information spreading on Twitter. Equipped with BotometerLite and newly developed algorithms, BotSlayer can detect coordinated amplification by likely bots in real time. BotSlayer is free and easy to install: with some simple configuration, anyone can have a customized instance running in the cloud. BotSlayer is currently in public beta testing, and we also provide an open-source version called BotSlayer-CE.
So far, studies of social bots have largely been conducted from a computational perspective. How social media users perceive social bots, along with a series of related questions, remains unclear. In this human-subjects research project, we use an experimental design to understand social media users' perceptions of social bots. We also characterize how human biases affect the efficacy of bot detection.
Hoaxy is a tool that visualizes the spread of fake news and related fact-checking articles on Twitter. With the incorporation of Botometer, Hoaxy can also visualize bot-like activity involved in the spread of these articles.
Contribution: maintenance, and developing the API that lets Hoaxy fetch Botometer scores
Impersonators are a type of bad actors who attempt to deceptively influence political communication by exploiting features designed to let social media users manage their public personas. Deleting tweets and editing profiles are common and legitimate practices for social media users, but we show that impersonators perform these actions in a systematic, coordinated, and deceptive manner. This study exposes a conflict between a user’s right to remove their content and the need to hold abusers and platforms accountable for healthy online communication.
Traditional methods for identifying drug-seeking behavior focus on each patient's medical history individually. Typical criteria involve the number of different prescribers visited, the number of different pharmacies visited, and the total drug dose within a certain time period. Our analysis shows that such methods have become less effective as patients intentionally alter their behavior to avoid detection. This project instead applies social network analysis to identify drug-seeking behavior, an approach that has proven effective and harder to game.
Doctor shoppers are people who visit multiple physicians to obtain multiple prescriptions for controlled substances. Opioid doctor shoppers have been found to be more likely to overdose, contributing to the ever-worsening opioid crisis in the US. This project applies computational methods to over nine years of longitudinal medical records from a large group of patients to characterize the geographic behaviors of doctor shoppers.
Word2vec is applied to large-scale medical records to learn a distributed representation of diagnoses. The embedding effectively reduces the number of dimensions needed to encode all diagnoses, and therefore serves as a preprocessing step for other machine learning tasks. Moreover, the embedding itself can reveal interesting relationships between diagnoses.
Multipartite viruses have a genome divided among different, disconnected viral particles. A majority of multipartite viruses infect plants; very few target animals. To understand why, we use a simple network-based susceptible-latent-infectious-recovered model. We show both analytically and numerically that, provided the average degree of contact exceeds a critical value, multipartite viruses face a lower threshold for colonizing network-structured populations than well-mixed ones, even in the absence of any explicit microscopic advantage. We corroborate this finding on two-dimensional lattice networks, which better represent the contact structures typical of plants. Our work therefore offers a promising perspective from the point of view of network epidemiology for understanding the factors promoting multipartitism among plant viruses.
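The role of the average degree can be illustrated with a bare-bones, discrete-time S-L-I-R simulation on an Erdős–Rényi contact network. This is only a generic sketch of such dynamics, not the model from the paper (it ignores the multipartite genome structure entirely), and all rates below are arbitrary illustrative choices.

```python
# Minimal sketch: discrete-time S-L-I-R spread on a random contact network,
# showing how the final outbreak size depends on the average degree.
import random

random.seed(7)

def er_graph(n, avg_degree):
    """Erdos-Renyi graph as an adjacency list with a given mean degree."""
    p = avg_degree / (n - 1)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def slir_outbreak(adj, beta=0.3, activation=0.5, recovery=0.2, seeds=5, steps=200):
    """Run S->L (exposure), L->I (activation), I->R (recovery) dynamics.
    Returns the final fraction of ever-infected (non-susceptible) nodes."""
    n = len(adj)
    state = ["S"] * n
    for i in range(seeds):
        state[i] = "I"  # seed a few infectious nodes
    for _ in range(steps):
        new_state = state[:]
        for node in range(n):
            if state[node] == "I":
                for nb in adj[node]:
                    if state[nb] == "S" and random.random() < beta:
                        new_state[nb] = "L"
                if random.random() < recovery:
                    new_state[node] = "R"
            elif state[node] == "L" and random.random() < activation:
                new_state[node] = "I"
        state = new_state
        if "I" not in state and "L" not in state:
            break  # epidemic has died out
    return sum(s != "S" for s in state) / n

sparse = slir_outbreak(er_graph(500, 1.0))  # mean degree below the threshold
dense = slir_outbreak(er_graph(500, 8.0))   # mean degree well above it
print(sparse, dense)
```

Running this, the outbreak on the sparse network stays confined to a small fraction of nodes, while on the denser network it sweeps through most of the population, illustrating the threshold behavior in average degree.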