Opening the black box

Shiv Bhatia

November 2024

Abstract

The Society for Technological Advancement (SoTA) is organising a hackathon on 18-19 January 2025 focused on interpretable AI. As deep learning models become more powerful and more integrated into the machinery of civilisation, the need to understand why and how they work continues to grow.

Beyond helping us build reliable and trustworthy systems, investigating the learned structure of deep neural networks offers unique insight into the nature of thinking itself. And unlike much of frontier AI research, consequential work can be done on a laptop, without setting a thousand GPUs on fire.

The Event

This hackathon will bring together researchers, engineers, and other talented people eager to apply unusual perspectives to the problem of making interpretable machine learning a practical reality. The event will be based in London, with some online participation likely.

In addition to the main competition, we will host a series of talks by researchers at leading AI labs and startups. Our judges and mentors come from Anthropic, Google DeepMind, MILA, and MIT, among others.

SoTA is a non-profit founded in January 2024 to promote techno-optimism in the UK. This is our third major event and our second hackathon. For more information about why we exist and what we hope to accomplish, see ilikethefuture.com.

We're currently looking for sponsors, judges, and speakers for the day of the event. If you're interested in getting involved, feel free to get in touch with us at info@ilikethefuture.com.

Your Role

Unlike many hackathons, this one calls for a specific mix of skills and expertise. We're looking for people with experience in machine learning and fluency with its basic tools, who are also interested in mathematics, epistemics, computational neuroscience, or any other field that gives them a distinctive perspective on the problem of interpretability.

We hope that teams will work on a wide range of projects, from pushing the boundaries of research to applying interpretability techniques in new domains. We're particularly interested in projects that demonstrate real progress towards making widely used systems more reliable and transparent.

Apply now at ilikethefuture.com/hack.