Spring AI Safety Forum

Date: Mar 7th, 1 p.m. - 4 p.m. Eastern

Location: Georgia Institute of Technology, Scheller College of Business, Floor 2

See https://www.aisi.dev/spring-ai-safety-forum.

Keynote: Jason Green-Lowe, Executive Director of the Center for AI Policy

The Disconnect Between Heavy AI Risks and Lightweight AI Governance

Jason Green-Lowe is the Executive Director of the Center for AI Policy (CAIP), which works with government partners on legislation to mitigate the catastrophic risks of advanced AI. Prior to joining CAIP, Jason worked as a product safety litigator and as a data compliance counselor, advising local governments and nonprofits on how to safely store and manage sensitive data. He graduated from Harvard Law in 2010 and holds two certificates in data science.

Technical Workshop – Opening Pandora’s Box: Creating Malicious RL Agents

Workshop Leader: Changlin Li is the founder of the AI Safety Awareness Foundation. He began his career with five years at the Systemized Intelligence Lab at Bridgewater Associates, then spent four years as a founding engineer at Vowel.com (since acquired by Zapier). Along the way he did a stint at the Recurse Center in New York City studying formal verification of software, and later returned to the Recurse Center for a second stint focused on modern AI and AI safety.

Governance Workshop – Down the Rabbit Hole: Forecasting and AI Legislation

Workshop Leader: Parv Mahajan is a counter-WMD and counterproliferation researcher at the Georgia Tech Research Institute Advanced Concepts Laboratory and a curriculum developer with the GT School of Mathematics. His research focuses on cyberbiosecurity and RL interpretability. Outside of GT, he works on LLM red-teaming with Apart Lab Studio and on de novo protein design research with Big Data Big Impact.