I’ve been wondering for some time whether to write about this topic. But after listening to almost two years of ‘information’, conspiracy theories and general waffle, I have decided to give it a go. This piece is just my opinion.
Just so you know where I am coming from, I have been working in biosecurity since 2008, all over the world. I have been funded by several bodies, including the US and UK governments; I have spoken numerous times at the UN in Geneva on the Biological and Toxin Weapons Convention, and at the OPCW in The Hague on the Chemical Weapons Convention. I have many publications on biosecurity, including The Ethics and Biosecurity Toolkit for Scientists (World Scientific). So there you go. I can be wrong just as much as the next person, of course. So feel free to disagree with anything I say below, but show me why you disagree and on what evidence. Right?
I’m going to look at two issues here: firstly, the likelihood of an unintentional escape of any virus from the Wuhan lab, and secondly, the current state of play in the US around gain-of-function research*. They’re linked.
*Gain-of-function experiments involve lab work that does, or could, make a pathogen more transmissible and/or more lethal. That is a simple summary and a basic definition for non-scientific readers.
The first question is, where did the Covid-19 virus come from? Did it originate in the Wuhan wet market? Or from the Wuhan Institute of Virology? Or somewhere else? The public don’t know. But some people know. It may be that the public will never be told the true origin of the virus. And to some extent, that doesn’t matter. Why? Because the effects of the virus on the world’s health do not depend on where it originated. The fact is, Covid-19, or the coronavirus SARS-CoV-2 as it is properly called, is ‘out there’ and we can’t put it back in the bottle. We must deal with it.
So why can’t we find out where it came from? The answer lies in politics, in the ethics (or lack thereof) of international science funding, and in the inadequate monitoring and reporting of potentially dangerous work, amongst other things.
There is also a problem with the scientific community itself. I have never yet met a life scientist who thought (prior to being taught about biosecurity and dual use) that his or her work was a danger to life. The concept of dual use – where a peacefully-developed scientific technology is weaponized by people with hostile intentions – was unknown to most life scientists until events of the last 20-25 years brought it to the fore. And as most scientists go into the science business with the aim of doing good, it is challenging to be told that their work could be turned into a weapon. But dual use, or the potential for dual use, can even happen by accident – look up the Australian Mouse Pox Experiment, in which a research project aimed at reducing a national mouse pest problem inadvertently resulted in a technology to overcome a vaccine. That’s just one example.
We are up against a range of challenges to finding the truth about Covid-19’s origins in addition to those mentioned above. Other factors include political pressures, funding pressures, a tendency towards secrecy in the face of a disaster, the protection of reputations, monumental egos in action, the possible intentions of some scientists and funders to get work done overseas when it is banned ‘at home’, and a refusal in some scientific quarters to admit that mistakes can and do happen. If I had a pound for every time I have heard a scientist say ‘our work is totally safe because of our enhanced biosafety protocols’, I’d be lying on a lounger in the Bahamas sipping a cocktail. Permanently.
I have no idea whether the virus originated in the Wuhan laboratory or not. But based on what I know about biosafety and biosecurity, alongside the human propensity for error, we must face some facts. Continued failures to effectively monitor and control research that could be dangerous make a mockery of the many assurances that we are fed by assorted interested parties. It’s now abundantly clear that some scientists, and funding bodies (who have non-scientific pressures acting on them), are prepared to play with words and definitions – such as ‘gain of function’ – to move the goalposts, allowing work to go ahead that would otherwise be banned or very tightly regulated (in theory at least). And finally, there is already a wealth of evidence in the public domain about past biological ‘escapes’ from labs – which ought to give us pause to consider just how safe today’s work is.
Bearing all this in mind, I would say that it is entirely possible that the Covid-19 virus did escape from the lab. As could any other virus. That is just my opinion. This does not mean that Covid-19 did escape from the Wuhan lab. I’m just pointing out how it is entirely possible that it could have done. Neither does this mean the Covid-19 virus was ‘invented’ or enhanced in the lab – although it may have been. That is another story, which I won’t address here. What I am trying to highlight in this article is the risk of unintentional escapes of any pathogens from high containment labs. In contexts where you have all the factors listed above in operation, plus political, time and prestige pressures, you have a recipe for mistakes. And heck, do they happen.
For all those scientists who claim that they operate perfect biosafety procedures, let’s have a look at some previous escapes from high-containment labs in the UK and the US. Our labs are among the best-regulated in the world. If accidental escapes and near-misses can happen here in the numbers that they do, what chance is there that they don’t happen more often elsewhere? Just sayin’, folks!
British readers will recall the UK’s Foot and Mouth disaster of 2001. In 2007, a further outbreak was identified during routine government testing in Surrey. Was history about to repeat itself? Fortunately, this outbreak was contained to a very small number of farms and controlled quickly. So how did this happen, when biosecurity was by then at the top of every farmer’s list? Well, folks, the pathogen escaped from the government’s own lab just down the road in Pirbright. From the drains. Live virus had been allowed into the drainage system (which should never have happened) and it leaked out into the ground. Investigations showed poor drain maintenance over the years and inadequate biosecurity precautions around the site during building works, resulting in infected mud getting onto vehicle wheels and being driven out of the gates. Got that?
Using data from the UK Health and Safety Executive (HSE), a review of biosecurity and/or biosafety breaches in the UK, published in 2014, showed that in the previous five years:
- More than 70 incidents at government, university, and hospital labs were serious enough to be investigated,
- Some were serious enough to warrant legal action, and others triggered enforcement letters or led to prohibition and crown notices (orders which suspend or stop work),
- UK labs handling the most dangerous pathogens had reported more than 100 accidents or near-misses (it is worth pointing out here that reporting mistakes and mishaps is a good thing – and we should not censure people who admit to their mistakes, but tighten up procedures),
- In one case, live anthrax had been sent from a government facility to other labs in the UK because tubes of live and heat-inactivated materials were mixed up in the lab.
In the US, things are no better. In 2006, live Clostridium botulinum (the botulism bacterium) was shipped from one Centers for Disease Control facility to another, having gone through ineffective inactivation processes. In 2009, an even more worrying incident involved the shipping of samples of Brucella (the brucellosis bacterium) to Laboratory Response Network labs which had been going on since 2001. It had been thought that the samples shipped were all an attenuated vaccine strain, but on testing in 2009 it became apparent that this was not the case — the shipped strain was actually a select agent (biological agents subject to tight legal controls due to their highly dangerous nature).
A report by the National Research Council in 2011 stated that between 2003 and 2009, US government labs recorded almost 400 incidents involving the potential release of select agents. These included:
- animal bites and scratches, 11 cases,
- needle stick or sharps injuries, 46,
- equipment mechanical failure, 23,
- personal protective equipment failure, 12,
- loss of containment, 196,
- procedural issues, 30.
In 2014, during a clean-out at the U.S. Food and Drug Administration (FDA) laboratory located on the NIH Bethesda campus, several vials of live smallpox (Variola major) were discovered. Smallpox had been declared eradicated in 1980 and since that time, the only two facilities allowed to store samples of it have been the Centers for Disease Control and Prevention (CDC) in the United States and the State Research Center of Virology and Biotechnology, VECTOR, in Koltsovo, Russia. The Bethesda samples had been sitting there for decades, unrecognized. And just in case you’re thinking that things have improved since 2014, further vials labelled ‘smallpox’ were found just last month in a freezer at a Merck facility in Pennsylvania.
Lastly in this list of US examples, in 2015 the Pentagon inadvertently shipped live anthrax spores to 88 labs, which shared them with 106 others, amounting to a total of 194 labs (at the last count, reported on 1 September 2015) in 50 US states and nine other countries. The shipments originated at the US Army’s Dugway Proving Grounds in Utah. An early estimate on 1 June 2015 said that live anthrax had gone to just 24 labs in 11 states and to two other countries. The anthrax involved was the Ames strain, the same as was used in the 2001 anthrax letters in the US. This is a virulent strain that killed five people and infected 17 more in the letter attacks. The Pentagon claimed that the live spores did not constitute a risk to the public, but given that the receiving labs did not think they were receiving live anthrax, can this be a legitimate claim?
Let’s look now at the way gain-of-function research is being handled in the US – and therefore in countries in which it funds research. Many readers will have seen Dr Anthony Fauci arguing about gain-of-function funding in various hearings this year in Washington. This has highlighted a worrying development, in my opinion.
Back in 2004, the Fink Report (Biotechnology Research in an Age of Terrorism, National Research Council) produced a seven-point list of ‘experiments of concern’, all of which involved work that could render biological agents more transmissible, more lethal, able to affect a wider range of organisms, and so on. These experiments were couched in simple, clear language as ‘gain-of-function’ experiments. What we see now is a lengthy ‘development’ of such definitions, all to ‘clarify’ exactly what can and cannot be done in the lab. This has arisen, arguably, because scientists and governments don’t like the Fink Report’s seven classes of research of concern. Watching out for unintended outcomes is time-consuming, expensive and causes delays. It can prevent certain research from being started at all. Nobody in the business of science or international relations wants that, for various reasons. Go figure. However, few folks wanted biosafety measures when they were first made obligatory, but we all got used to them and now accept biosafety procedures as a normal element of the scientific process. Why don’t we do the same with gain-of-function work?
The latest versions defining gain-of-function work that I can find are outlined in the National Institutes of Health’s Framework for Guiding Funding Decisions about Proposed Research Involving Enhanced Potential Pandemic Pathogens (2017) and the accompanying documents listed on the NIH website. What we are seeing is, in effect, a moving of the gain-of-function goalposts to focus on known ‘potential pandemic pathogens’ (PPPs) and on what can be ‘reasonably anticipated’ to result in enhanced (manipulated) PPPs (the results of gain-of-function work, whether intentional or not). This is a big change in gain-of-function definitions. The two big problems I see here are the focus only on PPPs that we already recognize, and the ‘reasonable anticipation’ clause. It’s often the PPPs and the outcomes beyond ‘reasonable anticipation’ that we need to be worried about. And these were covered by the seven recommendations in the Fink Report. Pressure has obviously been applied to water these down in the guise of ‘clarification’.
The big get-out clause here is, of course, what is classed as ‘reasonable anticipation’? Fauci relied on this as a response to certain claims being made to him in recent hearings. But by concentrating on agents we already know are dangerous, we can too easily miss the possible unintended and unanticipated outcomes of research that may produce unwanted results that have enhanced the function of pathogens. The Australian scientists who accidentally found a way to render a vaccine useless in 2001 could not have reasonably anticipated that this would happen. What’s different now?
Proponents of the new guidelines insist that regular monitoring, reporting and built-in caveats to stop work if x or y happens act as effective barriers to the development of unwanted outcomes. But look at the lapses in monitoring and reporting, and the failures to stop work when x and y did happen, in relation to some of the work Fauci has been defending. Policies and prohibitions only work if they are applied in practice.
This is not a good situation. These changes will have been promoted by using the ‘cost-benefit’ argument. I have long argued, as have others, that the ‘cost-benefit’ approach (supposedly ‘ethical’) to deciding what science ought to be ‘done’ is deeply flawed. No scientist, with the prospect of fame, fortune, promotion and years of funding dangled under his or her nose, is likely to voluntarily predict more costs than benefits arising out of their proposed research. If they did, they would probably be looking for a new position pretty fast. Likewise, we can probably safely assume that some scientists working on what could produce enhanced PPPs today, are – amazingly – not likely to ‘reasonably anticipate’ that enhanced PPPs would be an outcome of their work. Stating that you don’t ‘reasonably anticipate’ enhanced PPPs resulting from your work would, it seems, exempt you from the stringent oversight that previous gain-of-function regulations would have imposed on you. Fauci has shown this in his evidence in Washington hearings. Anybody like to talk to the scientists who did the Australian Mouse Pox work?
Feeling confident, dear readers?
And just to make you feel even safer in your beds tonight, let’s see what else the NIH has to say on identifying (and therefore regulating) enhanced PPPs:
A pathogen previously considered by an agency to be an enhanced PPP [i.e. an enhanced pathogen as a result of intentional or unintentional gain-of-function work] should no longer be so considered if the HHS and the White House Office of Science and Technology Policy, in consultation with the Departments of Defense, Homeland Security, Agriculture, and Justice, generally acting through the Federal Bureau of Investigation, jointly determine, on the basis of additional information that has been developed about the risks or the benefits of that pathogen’s creation, transfer, or use, that the department-level review processes outlined in this framework are no longer appropriate.
In other words, if it is decided, for whatever reason, that a pathogen currently identified and regulated as an enhanced PPP, ought not to be considered as such, then there is now an approved mechanism to downgrade it in the light of ‘additional information’. Uh-huh. See what they did there? Care to outline what such ‘additional information’ might include? Or who has the final say? And how or when will this be conveyed to the public?
Let’s wrap up with a few final thoughts. The dangers of gain-of-function research continue, despite the supposed efforts to mitigate the risks. The evidence of past errors and accidents shows that mistakes happen, even in the best-regulated facilities. When we factor in the non-scientific pressures acting on scientists around the world, and bear in mind human fallibility, we are left with no other conclusion than ‘accidents will happen’.
Much public debate has been aired about whether Covid-19 is a purposely-developed biological weapon. From the perspective of the public, this is a red herring. It makes no difference whether it was designed as a bioweapon or not – it has had the same effect. This is the sneaky, insidious nature of biological weapons. They are not like nuclear weapons, where an attack is seen and heard immediately. They operate silently and invisibly, over prolonged periods.
The current word-play and goalpost-moving seen in changing NIH guidance is not helping us in the fight against dual use potential in civilian science research. But it certainly reflects politics, human nature, funding pressures and the search for fame, funding, and publications. What on earth will it take to make us prioritize world health over these?