The RAISE Lab @ Penn State is a group of Artificial Intelligence (AI) researchers who make foundational contributions to the field of Responsible AI for Social Emancipation. Our goal is to advance the state of the art in AI tools and algorithms to solve critical challenges faced by marginalized communities around the world, while ensuring that our algorithms do not exacerbate existing health, social, and economic inequities in society.
Our research is highly interdisciplinary: we collaborate closely with domain experts in public health, social work, agronomy, conservation, and public safety and security (among others) to develop an understanding of key societal issues in these domains; we then develop state-of-the-art AI tools and algorithms that address these issues. In particular, we conduct fundamental AI research in the sub-fields of spatiotemporal deep learning, social network analysis, game theory, and FAT-ML (fairness, accountability, and transparency in ML), while using techniques from multi-agent systems and operations research. Aiming to address the most pressing problems facing today's societies, the RAISE Lab intends to bridge the divide between theory and practice in AI research, not only by providing strong methodological contributions, but also by building applications that are tested and deployed in the real world. These applications have fundamentally altered practices in the domains we have worked in. A unique aspect of our research is that we spend a considerable amount of time in the field, whether in urban settings in Los Angeles or rural settings in Kenya, Ethiopia, and India, to translate theory into practice and to ensure that our AI models and algorithms actually get deployed in the real world.
Specifically, we work on advancing AI research motivated by the grand challenges of the American Academy of Social Work and Social Welfare and the UN Sustainable Development agenda. We are particularly interested in problems faced by under-served communities around the world, and in developing AI-driven tools and techniques to tackle them. While developing these solutions, a key focus of ours is ensuring that our algorithms do not exacerbate existing health, social, and economic inequities in society.
Examples of our research projects include (i) AI for raising awareness about HIV among homeless youth; (ii) AI for mitigating the substance abuse crisis among homeless youth; (iii) AI for helping smallholder farmers mitigate the impacts of climate change; and (iv) AI for designing optimal COVID-19 testing policies.
We focus on three problems faced by hard-to-reach populations (such as homeless youth). We develop AI/ML algorithmic interventions for (i) HIV (and STI) prevention, (ii) substance abuse prevention, and (iii) suicide prevention among homeless youth. All three projects study diffusion processes in friendship-based social networks of homeless youth, and how these processes can be harnessed to promote desirable behavior. On a humanitarian level, the end goal of this project is to demonstrably reduce the suffering of disadvantaged populations by inducing behavior change in homeless youth populations that drives them towards safer practices, such as regular HIV testing and reduced substance use. On a scientific level, the goal is not only to model these influence spread phenomena, but also to develop decision support systems (and the necessary tools, algorithms, and mechanisms) with which algorithmic interventions can be conducted in social networks of homeless youth in the most efficient manner. Our primary focus in this project is to develop algorithms and tools that are actually usable and deployable in the real world, i.e., algorithms that can genuinely benefit society. In fact, we strive to validate all of our models, algorithms, and techniques in the real world by testing them with actual homeless youth (specifically, youth in Los Angeles). Over the past seven years, we have been collaborating with social workers from Safe Place for Youth (SPY) and My Friend’s Place (homeless shelters in Los Angeles) to conduct pilot deployment studies of our algorithms with actual homeless youth.
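To make the diffusion idea concrete, here is a minimal sketch (not our deployed system) of greedy seed selection under the independent cascade model, a standard way to choose whom to invite to a network intervention: spread is estimated by Monte Carlo simulation, and seeds are added one at a time by estimated marginal gain. The graph representation, propagation probability `p`, and all function names are illustrative assumptions.

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One Monte Carlo run of the independent cascade model.
    graph: dict mapping each node to a list of its neighbors."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                # each newly active node gets one chance to activate each neighbor
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(graph, seeds, p, runs, rng):
    """Average cascade size over many simulated runs."""
    return sum(simulate_ic(graph, seeds, p, rng) for _ in range(runs)) / runs

def greedy_influence_max(graph, k, p=0.3, runs=500, seed=0):
    """Pick k seed nodes greedily by estimated expected spread."""
    rng = random.Random(seed)
    chosen = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for v in graph:
            if v in chosen:
                continue
            s = expected_spread(graph, chosen + [v], p, runs, rng)
            if s > best_spread:
                best, best_spread = v, s
        chosen.append(best)
    return chosen
```

On a small star-shaped network, the greedy procedure correctly identifies the hub as the most influential seed; real deployments must additionally cope with uncertain networks and youth who may not attend the intervention.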
More than 300,000 farmers have committed suicide in India since 1995. In fact, this problem is not specific to India; it is common across many developing countries. It stems primarily from the financial hardship caused by an inability to grow crops successfully, and an inability to sell them at a profit. In this project, our goal is to develop an easy-to-use, AI-based decision support system that gives poor, illiterate farmers data-driven recommendations at multiple stages of the crop-growing lifecycle. For example: When should they plant their crops? When should they irrigate their farm? Is their farm at risk of pest invasion? After the harvest, when and where should they sell their crops for maximum profit? From a technical perspective, problems in this space translate to spatio-temporal machine learning problems, because of the abundance of remote-sensed and crowdsourced data we have access to (thanks to our collaborators at PlantVillage).
COVID-19 is the greatest public health crisis the world has experienced in the last century, and tackling it requires the collective will of experts from a variety of disciplines. While AI researchers have devoted considerable effort to developing agent-based models for simulating the transmission of COVID-19, we believe AI's enormous potential can (and should) also be leveraged to design decision support systems (e.g., for the allocation of limited healthcare resources such as testing kits) that assist epidemiologists and policy makers in their fight against this pandemic. In particular, COVID-19 testing kits are extremely limited, especially in developing countries, so it is critical to utilize these limited testing resources as effectively as possible. In this project, we research adaptive testing policies to optimally mitigate the COVID-19 epidemic in low-resource developing countries like Panama. Our work is informed by multiple discussions with epidemiologists.
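As a toy illustration of the allocation question (a hypothetical heuristic, not our actual adaptive policy, which would adapt as test results arrive): split a fixed daily budget of test kits across regions in proportion to each region's recent case signal. All names and inputs below are illustrative assumptions.

```python
def allocate_tests(case_signal, total_kits):
    """Split a fixed budget of test kits across regions in proportion
    to each region's observed case signal (e.g., recent positives).
    Hypothetical heuristic; real policies may use bandit or POMDP
    formulations that update as results come in."""
    total = sum(case_signal.values())
    if total == 0:
        # no signal yet: spread kits evenly
        n = len(case_signal)
        return {r: total_kits // n for r in case_signal}
    alloc = {r: int(total_kits * s / total) for r, s in case_signal.items()}
    # hand out kits lost to integer rounding, highest-signal regions first
    leftover = total_kits - sum(alloc.values())
    for r in sorted(case_signal, key=case_signal.get, reverse=True):
        if leftover == 0:
            break
        alloc[r] += 1
        leftover -= 1
    return alloc
```

For instance, with 100 kits and regions reporting 10, 5, and 5 recent positives, the heuristic sends 50 kits to the first region and 25 to each of the others.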
Security is a critical concern around the world, whether it is the challenge of protecting ports, airports and other critical national infrastructure, or protecting wildlife/forests and fisheries, or suppressing crime in urban areas. In many of these cases, limited security resources prevent full security coverage at all times. Instead, these limited resources must be allocated and scheduled efficiently, avoiding predictability, while simultaneously taking into account an adversary’s response to the security coverage, the adversary’s preferences and potential uncertainty over such preferences and capabilities. Computational game theory can help us build decision-aids for such efficient security resource allocation problems.
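To give a flavor of the game-theoretic reasoning (a textbook special case, not the decision-aids described above): in a zero-sum security game with one divisible defender resource, where the attacker earns reward r_t for hitting an uncovered target t and nothing otherwise, the optimal defender strategy "water-fills" coverage until every attractive target yields the attacker the same expected value v = (1 - c_t) * r_t. A minimal sketch, with illustrative names, assuming fewer resources than targets:

```python
def security_coverage(rewards, resources=1.0):
    """Optimal defender coverage probabilities for a zero-sum security
    game with one divisible resource: equalize the attacker's expected
    value v = (1 - c_t) * r_t across all covered targets, covering
    targets in decreasing order of reward until the uncovered ones are
    no longer attractive."""
    targets = sorted(rewards.items(), key=lambda kv: kv[1], reverse=True)
    inv_sum = 0.0
    chosen = []
    v = targets[0][1]  # attacker's value if nothing were covered
    for i, (t, r) in enumerate(targets):
        inv_sum += 1.0 / r
        chosen.append(t)
        # value v that spends exactly `resources` on the chosen set
        cand_v = (len(chosen) - resources) / inv_sum
        next_r = targets[i + 1][1] if i + 1 < len(targets) else 0.0
        if cand_v >= next_r:  # remaining targets are unattractive
            v = cand_v
            break
    coverage = {t: 0.0 for t in rewards}
    for t in chosen:
        coverage[t] = 1.0 - v / rewards[t]
    return coverage, v
```

With two targets worth 4 and 2, the defender covers them with probability 2/3 and 1/3 respectively, leaving the attacker indifferent at expected value 4/3; deployed systems (e.g., Bayesian Stackelberg solvers) handle general-sum payoffs, schedules, and uncertainty about the adversary.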
The field of influence maximization has made rapid advances, resulting in many sophisticated algorithms for identifying “influential” members of social networks. However, in order to engender trust in influence maximization algorithms, the rationale behind their choice of “influential” nodes needs to be explained to their end-users. This is a challenging open problem that must be solved before these algorithms can be successfully deployed at a large scale. This project tackles the problem via four major contributions: (i) we propose a machine learning based paradigm for designing explanation systems for influence maximization algorithms by exploiting the trade-off between an explanation’s accuracy (or correctness) and its interpretability; our novel paradigm treats influence maximization algorithms as black boxes, and is flexible enough to be used with any such algorithm; (ii) we utilize this paradigm to build XplainIM, a suite of explanation systems that can explain the solutions of any influence maximization algorithm; (iii) we illustrate the usability of XplainIM by using it to explain the solutions of a recent influence maximization algorithm to ∼200 human subjects on Amazon Mechanical Turk (AMT); and (iv) we provide extensive analysis of our AMT results, which shows the effectiveness of XplainIM in explaining influence maximization solutions.
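The accuracy/interpretability trade-off behind contribution (i) can be made concrete with a toy surrogate model (an illustrative sketch, not XplainIM itself): treat the influence maximization algorithm as a black box, then fit the single degree threshold that best mimics which nodes it selects. The rule "picked because degree ≥ T" is highly interpretable but only approximates the black box. All names below are hypothetical.

```python
def fit_threshold_explainer(graphs, black_box, k=1):
    """Hypothetical surrogate explainer: learn one degree threshold
    that best mimics a black-box influence maximization algorithm.
    graphs: list of dicts mapping node -> list of neighbors.
    black_box: callable (graph, k) -> list of selected seed nodes."""
    # gather (degree, was_selected) labels from black-box runs
    examples = []
    for g in graphs:
        picked = set(black_box(g, k))
        for node, nbrs in g.items():
            examples.append((len(nbrs), node in picked))
    # pick the threshold with the fewest disagreements with the black box
    candidates = sorted({d for d, _ in examples})
    best_t, best_err = 0, len(examples) + 1
    for t in candidates:
        err = sum((d >= t) != sel for d, sel in examples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def degree_black_box(g, k):
    """Stand-in black box: select the k highest-degree nodes."""
    return sorted(g, key=lambda n: len(g[n]), reverse=True)[:k]
```

Richer surrogates (decision trees over several node features) trade some of this interpretability for higher fidelity to the black box, which is precisely the design space the paradigm explores.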