About Us

In Summer 2024, the former OxAI Safety and Governance team decided to spin out into an independent organisation: OAISI, the Oxford AI Safety Initiative. (We pronounce it oh-ay-see.)

We hold that:

  1. The AI safety community at Oxford - particularly at the undergraduate level - would benefit from an organisation dedicated to supporting research and education in AI safety here.

  2. There are many people in Oxford who are highly capable of working on AI safety but who haven’t yet been introduced to the field or aren’t currently working on it.

OAISI’s role is to:

  1. Support the existing community, increasing the productivity and experience of its members and the quality and quantity of conversations and projects concerning AI safety.

  2. Introduce new talent - primarily Oxford students - to the field, and provide structure and support in building their skills.

AI safety is a sociotechnical issue: we support both governance and technical work, and we run programmes to build skills in both.

If you would like to join our community, please visit our “Get Involved” page.

If you have any particular feedback on how we can improve our community and programming, please let us know on our feedback form.

FAQs & Resources

  • As a society, we primarily focus on catastrophic risks posed by advanced AI systems. For more detail on what that entails, this paper provides a good overview.

  • We recommend aisafety.info as a good place to start - it offers a series of introductory articles.

    If you’re looking for an introduction to different concepts in AI Safety, you might find Rob Miles’ YouTube channel useful. For something more in-depth, up-to-date and structured, BlueDot Impact run an excellent introductory course - you can browse the curriculum here.

    We also encourage you to browse the selection of courses listed here. See also the resources we list below.

  • Yes! Some high-level familiarity with the AI training process and key concepts in AI Safety is very helpful, but you can acquire these without formally studying AI or ML. As we discuss here, “AI safety is a sociotechnical issue: we support both governance and technical work, and we run programmes to build skills in both”. Some of our activities are aimed at experienced researchers, but we also have more introductory programmes which assume no prior technical knowledge.

  • You might want to check whether it’s close to one of the objections considered on this site, in this article or in the Appendix of this paper. If you’d like to chat about any other uncertainties you have about AI Safety, please do get in touch.

  • For frequently updated compilations of resources, you might be interested in Arkose’s, BlueDot Impact’s, and AISafety.com’s lists.

    If you want to keep up to date with developments in transformative AI, and AI Safety in particular, we recommend the Don’t Worry About the Vase, Transformer, ACX, and Import AI blogs and the 80,000 Hours, Dwarkesh, AXRP, and Inside View podcasts.

  • If you haven’t done so already, we recommend BlueDot Impact’s AI Safety Fundamentals for understanding AI safety from first principles, especially the courses and readings which focus on catastrophic risks. You can either formally enrol in these (they operate on a cycle) or self-study.

    If you’re already familiar with the basics, consider applying to FIG, MARS, and SPAR to start building your research portfolio.

    If you’re just getting into AI Safety at the start of the long vacation, please don’t hesitate to reach out to us! We might be able to put you in touch with formal opportunities, introduce you to safety researchers doing some cool projects, or provide more informal guidance, so that you’ll be able to hit the ground running at the start of Michaelmas! We also recommend perusing our Resources section above.