OpenAI's access to automated virus development and production
Automating virus production is bad.
At the beginning of November, I learned about a company called Red Queen Bio, which automates the development of viruses and the related lab equipment. They work together with OpenAI, and OpenAI is their lead investor.
On November 13, they publicly announced their launch. On November 15, I saw the announcement and tweeted about it: automated virus-producing equipment is insane, especially if OpenAI, of all companies, has access to it. (The tweet got 1.8k likes and 497k views.)
In the tweet, I said that there may now literally be a startup, funded by and collaborating with OpenAI, with equipment capable of printing arbitrary RNA sequences, potentially including viruses that could infect humans, that is connected to the internet or managed by AI systems.
I asked whether we trust OpenAI to have access to this kind of equipment, and said that I’m not sure what to hope for here, except government intervention.
The only inaccuracy anyone pointed out was that I mentioned they were working on phages; the company denied working on phages specifically.
At the same time, people close to Red Queen Bio publicly confirmed that the equipment they're automating would be capable of producing viruses (adding that such equipment is a normal thing to have in a bio lab and is not too expensive).
A few days later, Hannu Rajaniemi, a Red Queen Bio co-founder and fiction author, responded to me in a quote tweet and in comments:
This inaccurate tweet has been making the rounds so wanted to set the record straight.
We use AI to generate countermeasures and run AI reinforcement loops in safe model systems that help train a defender AI that can generalize to human threats
The question of whether we can do this without increasing risk was a foundational question for us before starting Red Queen. The answer is yes, with certain boundaries in place. We are also very concerned about AI systems having direct control over automated labs and DNA synthesis in the future.
They did not answer any of the questions I explicitly asked and then repeated several times:
- Do you have equipment capable of producing viruses?
- Are you automating that equipment?
- Are you going to produce any viruses?
- Are you going to design novel viruses (as part of generating countermeasures or otherwise)?
- Are you going to leverage AI for that?
- Are OpenAI or OpenAI’s AI models going to have access to the equipment or software for the development or production of viruses?
It seems pretty bad that this startup is not being transparent about their equipment and the level of possible automation. It’s unclear whether they’re doing gain-of-function research. It’s unclear what security measures they have or are going to have in place.
I would really prefer that AIs, and especially the models of OpenAI (a company known for prioritizing convenience over security), not have ready access to equipment that can synthesize viruses or to software that can aid virus development.
(If you want to share information with me, you can reach me on Signal, at misha.09, or email me at ms at contact dot ms. Don’t use your corporate email account, corporate devices, or the corporate WiFi network. If you want to stay anonymous, sign up for a ProtonMail account and email me from it.)
Huh, interesting - why do you think they're making viruses?
The lack of straight answers to your questions is telling. If Red Queen Bio can't clearly state whether AI models will have access to virus synthesis equipment, that's a red flag. Given OpenAI's track record with safety versus speed, having them as the lead investor makes this even more worrying. Those six questions you asked should have simple yes/no answers.