Socially Responsible Computing (SRC) Curriculum Handbook

PIs: Prof. Suresh Venkatasubramanian, Prof. Julia Netter

The Socially Responsible Computing (SRC) program reimagines computer science education at Brown and beyond by exposing future engineers to the social impact of modern digital technologies, the ethical and political challenges surrounding them, and the technical and theoretical tools to address those challenges. The program develops curricula, pedagogical approaches, and instructional materials to support the inclusion of SRC in a wide variety of computer science courses. The handbook will serve as a curriculum guide for socially responsible computing education within Brown’s Computer Science department, geared towards teaching assistants, faculty, and students. It also has the potential to grow into a sustainable, public resource for the Brown community and beyond. The handbook project has been awarded funding from the Public Interest Technology University Network (PIT-UN) and Google Research. As part of the Accessibility Team, I explore how accessibility intersects with privacy, security, and usability.

https://srch.cs.brown.edu/

Data Security Laws and Cybersecurity Research

PI: Prof. Ira Rubinstein

In light of the Biden Administration's Executive Order on Improving the Nation's Cybersecurity (EO 14028), my research for Professor Ira Rubinstein at NYU Law School’s Information Law Institute provided a critique of its central themes. I investigated whether imposing liability on software vendors would be an effective regulatory strategy. My analysis began with the major cyberattacks that inspired EO 14028: I compiled and compared case studies of the SolarWinds, Log4j, and Microsoft Exchange intrusions to test the applicability of a new liability framework. This work reflected EO 14028's goal of establishing clear national security standards, as I examined how best practices (which might grant immunity) and worst practices (which might incur liability) could be defined from these incidents. Furthermore, by referencing NIST’s vulnerability metrics, I addressed the question of why such federal intervention would be necessary, testing the assumption that security incidents are, in fact, getting worse. Ultimately, this project sought to answer why cybersecurity has failed to improve despite decades of investment and new legislation.
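To give a concrete sense of the kind of trend check the NIST metrics question involves, the sketch below counts CVEs published per year using the public NVD API 2.0. It is my own illustration under stated assumptions (endpoint, date-range cap, and rate limits as I understand them), not the project's actual methodology.

```python
# Rough sketch: count CVEs published per year via NIST's NVD API 2.0 to see
# whether reported vulnerabilities are trending upward. The endpoint,
# parameters, and limits below reflect the public API as I understand it
# and may need adjusting; this is illustrative, not the project's method.
import time
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cves_in_window(start: str, end: str) -> int:
    """Return totalResults for CVEs published between two ISO-8601 timestamps."""
    resp = requests.get(
        NVD_URL,
        params={"pubStartDate": start, "pubEndDate": end, "resultsPerPage": 1},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["totalResults"]

def cves_per_year(year: int) -> int:
    # The API caps a single query's date range (roughly 120 days), so split
    # the year into quarters and sum the counts.
    quarters = [
        (f"{year}-01-01T00:00:00.000", f"{year}-04-01T00:00:00.000"),
        (f"{year}-04-01T00:00:00.000", f"{year}-07-01T00:00:00.000"),
        (f"{year}-07-01T00:00:00.000", f"{year}-10-01T00:00:00.000"),
        (f"{year}-10-01T00:00:00.000", f"{year + 1}-01-01T00:00:00.000"),
    ]
    total = 0
    for start, end in quarters:
        total += cves_in_window(start, end)
        time.sleep(6)  # stay under the unauthenticated rate limit
    return total

if __name__ == "__main__":
    for year in range(2018, 2023):
        print(year, cves_per_year(year))
```

Raw CVE counts are, of course, only one proxy for whether "security is getting worse"; severity distributions and exploitation data tell a fuller story, which is part of why the policy question is contested.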

Digital Accessibility Guidelines for Los Angeles City Council

Paragon Policy Fellowship

In today’s digital age, accessibility is essential to ensure that all residents can effectively and equitably engage with their local government. Recognizing this, the Department of Justice (DOJ) issued guidance requiring municipal governments to bring their digital platforms, sites, and media into compliance with WCAG standards within two years. These requirements build on Title II of the Americans with Disabilities Act (ADA), which mandates that municipalities make their digital services and communications accessible to all users, including individuals with various types of disabilities. Yet accessibility compliance across Los Angeles’s digital platforms remains uneven from site to site. This project seeks to address these inconsistencies by proposing a comprehensive, citywide strategy for digital accessibility. By standardizing compliance and resolving remaining accessibility challenges, LA City Council District 3 (CD3) and the broader City of Los Angeles can strengthen their commitment to equitable service and establish themselves as a model of digital inclusivity for municipalities nationwide.

https://paragonfellowship.org/projects/la-cd3-ada-fa24
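For a flavor of what automated checks against a couple of WCAG success criteria can look like, here is a minimal sketch of my own (not CD3's tooling) that flags images without alternative text and form inputs without an accessible label on a given page; the example URL is illustrative.

```python
# Minimal illustrative audit for two common WCAG failures: images missing
# alternative text (SC 1.1.1) and form inputs without an associated label
# (SC 3.3.2 / 4.1.2). A real audit covers far more criteria and typically
# uses dedicated tooling; this only shows the flavor of an automated check.
import requests
from bs4 import BeautifulSoup

def audit_page(url: str) -> list[str]:
    issues = []
    soup = BeautifulSoup(requests.get(url, timeout=30).text, "html.parser")

    # Images should carry an alt attribute (empty alt is fine for decorative images).
    for img in soup.find_all("img"):
        if img.get("alt") is None:
            issues.append(f"<img src='{img.get('src')}'> has no alt attribute")

    # Inputs should be labeled via <label for=...>, aria-label, or aria-labelledby.
    labeled_ids = {lab.get("for") for lab in soup.find_all("label") if lab.get("for")}
    for inp in soup.find_all("input"):
        if inp.get("type") in ("hidden", "submit", "button"):
            continue
        if (inp.get("id") not in labeled_ids
                and not inp.get("aria-label")
                and not inp.get("aria-labelledby")):
            issues.append(f"<input name='{inp.get('name')}'> has no accessible label")

    return issues

if __name__ == "__main__":
    for problem in audit_page("https://lacity.gov/"):  # example URL, for illustration
        print(problem)
```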

Automated Content Moderation Program for News Feed

Passion Project

While at the 2024 Trust & Safety Research Conference at Stanford University, I developed an interest in online safety. How can platforms ensure free speech while censoring harmful content? Is Section 230 outdated? With AI woven ever more tightly into daily life, could content moderation be automated? To explore these questions, I used Python and BERT models to analyze article titles from the RSS feeds of major news outlets. The project shed light on the difficulty of defining "harm". While the BERT models were highly effective at identifying explicit toxicity (based on the tone of words), they struggled with nuanced political satire and critical reporting about potentially harmful ideologies. Titles from different news outlets covering the same sensitive event were often flagged inconsistently, based on keyword triggers rather than intent. This experience solidified my understanding that while AI is a necessary tool for moderation at scale, it cannot be the sole arbiter: because automated systems still lack the human context required to navigate the fine line between censorship and safety, human review must remain part of the moderation process.
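Below is a minimal sketch of the kind of pipeline this involved: pulling titles from RSS feeds and scoring them with a BERT-based toxicity classifier. The specific feed URLs and the unitary/toxic-bert checkpoint are illustrative stand-ins, not necessarily the exact feeds and models I used.

```python
# Sketch of the moderation pipeline: pull article titles from RSS feeds and
# score them with a BERT-based toxicity classifier. The feed URLs and the
# unitary/toxic-bert checkpoint are illustrative stand-ins.
import feedparser
from transformers import pipeline

FEEDS = [
    "https://feeds.bbci.co.uk/news/rss.xml",
    "https://rss.nytimes.com/services/xml/rss/nyt/HomePage.xml",
]

# A BERT model fine-tuned for toxicity classification on the Jigsaw datasets.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

def flag_titles(feed_url: str, threshold: float = 0.5):
    """Return titles whose top toxicity score exceeds the threshold."""
    flagged = []
    for entry in feedparser.parse(feed_url).entries:
        result = classifier(entry.title)[0]  # e.g. {"label": "toxic", "score": 0.97}
        if result["score"] >= threshold:
            flagged.append((entry.title, result["label"], result["score"]))
    return flagged

if __name__ == "__main__":
    for url in FEEDS:
        for title, label, score in flag_titles(url):
            print(f"[{label} {score:.2f}] {title}")
```

Even with a tuned threshold, the failure modes described above (satire, critical reporting on harmful ideologies) surface quickly, which is why scores like these are best treated as a triage signal for human reviewers rather than a final verdict.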