Verification of a Global AI Treaty
Mentor: Naci Cankaya
Project area: Technical Verification Mechanisms for Low-Trust AI Governance
Project Language
English or German.
Minimum Time Commitment
15 hours per week.
Project Abstract
Main Goal:
Build the datacenter lie detector. Or, at least, work out good approaches to the technical challenges around network taps, workload re-execution, and physical security/data integrity/confidentiality. Or find creative ways to catch black-site AI clusters.
Methodologies and contributions:
To verify that an ML workload was computed as claimed, one can re-compute its outputs from the claimed inputs on a secure, verifier-controlled device.
Open problems around workload re-execution are probably quite accessible to beginners, since the hardware needed for experimentation can be rented (rather cheaply) via the cloud. Key problems to solve here revolve around non-determinism, red-teaming and threat modeling.
More context: https://nacicankaya.substack.com/p/catching-misreporting-about-ml-hardware
One can collect evidence of what an ML cluster is doing by capturing network traffic with dedicated devices that hash what they observe, without directly revealing confidential data.
Network tap work is heavy on physical engineering, but maybe you have unique skills and access.
Even without those, you can contribute to open questions around threat modeling: What covert workloads are possible at what covert I/O bandwidths in what parts of the datacenter network?
More context: https://nacicankaya.substack.com/p/catching-misreporting-about-ml-hardware-bd2
Preventing secret AI clusters is above even my pay grade, but there are cool ideas to explore for how to approach this, given political will and/or manufacturer cooperation. The supply chain of some key components is both international and constrained by multiple critical chokepoints, which could be a promising opportunity to fully account for what hardware exists and who has it. Also, even if there were an established "ground truth" for which hardware exists, how could inspection catch decoys, diversion and gaps?
I am also interested in the political angles of this. The debate around the "AI arms race" has become quite toxic and zero-sum-minded, and this needs to change. I appreciate good ideas for what to do about this. Generally, I think that "stop doing X" activism is less impactful than "we figured out a better way, and this is how".
Theory of Change
The "AI arms race" argument is the favourite excuse for accelerationism among even those who consider it reckless. The justification of many who push for recursively self-improving superintelligence ASAP is that this is a prisoner's dilemma and inevitable. If you are tired of hearing "but China" everytime someone asks for more cautious and responsible AI development, this SAIGE project may be for you.
The world CAN cooperate on AI. Verification breaks the game-theoretic logic that makes defection the winning strategy. An international agreement to restrain artificial intelligence in any strategically impactful way is unlikely to succeed without credible verification and confidence-building mechanisms. This is where our work comes in.
Beyond a slowdown or pause of AI takeoff, verification may become an important defense against the concentration of power in the hands of owners of large AI capital, where laws alone are not enough to defend democracy against the power of opaque AI and special interests. I wrote about it here.
Desired Mentee Background
Computer Science/ML, Maths, International Relations, Political Science, or anything quantitative that involves programming and ideally ML.
Desired Mentee Level of Education
Any level. You must have taken a course that covers ML basics, or take an ML course during the semester you work with me on the project.
Other Mentee Requirements
You learn fast and iterate and experiment quickly. You find technical problems around AI interesting in their own right, not just for the outcome of your work.
You have high media literacy and can separate well-grounded signal from noise/sensationalism/confabulation. This applies to academic sources and news, as well as AI outputs.
You have a basic understanding of ML algorithms and hardware, at the level of a university course or better.
You have an ambitious can-do attitude and a first-principles mindset. Take inspiration from Michael Faraday:
"Nothing is too wonderful to be true, if it be consistent with the laws of nature."