The first major breakthroughs in the development of effective psychiatric drugs came in the years following the Second World War. The introduction of effective anti-psychotics, to treat schizophrenia, and anti-depressants revolutionised how the mentally ill were cared for. The ability to control the florid symptoms of a psychotic episode in schizophrenia, or to break someone out of a deep depression, was a major step forward.
For the first time many patients were able to live more ‘normal’ lives inside, and on occasion outside, the confines of asylums and mental hospitals. Patients with more advanced and serious conditions experienced a slowing of their decline. A few sufferers were even able to become treatment-free as their illness appeared to disappear.
The first class of drugs to make it onto the pharmacist’s shelves were the ‘neuroleptics’. The manic and psychotic symptoms (hallucinations, delusions) associated with a ‘positive’ episode in schizophrenia are some of the most destructive to an individual and the most common reason for enforced hospitalisation. The development of these drugs was a consequence of both careful design and a fair slice of luck.
The story started with the discovery of the first antihistamines in the 1940s and 50s. These drugs provided a chemical template from which a wide range of drugs useful in psychiatry was developed. Today antihistamines are found in nearly every home and are effective in dealing with hay fever and other allergic responses. Like many drugs, they work like a key fitting into a lock, which here is referred to as a ‘receptor’. In the case of the antihistamines, the drug fits into the receptor but does not turn the lock and open the door.
Thus the drug, by sitting in the lock, blocks access of the natural ‘key’, histamine, and hence stops the door opening. The first generation of antihistamines worked on receptors found both in the brain and in the rest of the body. These early antihistamines often induced sleepiness and sedation, effects that we now know are caused by the drug’s actions on receptors in the brain. Modern antihistamines are designed so that they do not reach the brain and are therefore less sedative. However, pharmacologists and psychiatrists wondered if this sedative effect could be used to ‘calm’ the positive symptoms of schizophrenia.
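The ‘blocked lock’ picture has a standard quantitative form in classical pharmacology: the Gaddum equation for competitive antagonism. The short sketch below is not from the original text, and the concentration and affinity values in it are made up purely for illustration; it simply shows how a drug sitting in the receptor cuts the fraction of receptors the natural ‘key’ can occupy.

```python
def agonist_occupancy(a, ka, b=0.0, kb=1.0):
    """Fractional receptor occupancy by an agonist (Gaddum equation).

    a, ka -- agonist concentration and its dissociation constant
    b, kb -- competitive antagonist concentration and its dissociation constant
    (all values here are illustrative, in arbitrary matched units)
    """
    return (a / ka) / (1 + a / ka + b / kb)

# Histamine alone at a concentration equal to its own Kd: half the
# receptors are occupied.
print(agonist_occupancy(a=1.0, ka=1.0))            # 0.5

# Add an antagonist at 9x its own Kd: the antagonist sits in the lock
# without turning it, and histamine's occupancy falls sharply.
print(agonist_occupancy(a=1.0, ka=1.0, b=9.0))     # ~0.09
```

The antagonist never ‘opens the door’ itself; it only lowers the agonist term in the denominator’s competition, which is exactly the blocked-keyhole behaviour described above.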
At the same time as these discoveries were being made, ideas as to how certain symptoms occurred were being developed by studying the effects of drugs in animals. The advantage this offered was the ability to observe the effects of drugs both in a whole animal and in specific isolated tissues, such as the brain, in the laboratory. The first was often difficult, and the second impossible, when working in man at the time.
Of importance in this story, amphetamine induced a state in healthy people very similar to a positive schizophrenic episode. Indeed, amphetamine also, disturbingly, triggered psychotic episodes in certain individuals who then went on to develop full-blown schizophrenia.
Could amphetamine be used as a model of psychosis, in rats, to explore and test potential new treatments without the risks of testing new drugs straight from the chemist’s test tube in human volunteers? A rat given a dose of amphetamine becomes much more active, and at high doses displays curious behaviours reminiscent of those seen during a psychotic episode.
However, when the rats were treated with an antihistamine the effects of amphetamine were not reversed. Similarly, psychotic patients did not improve when treated with antihistamines already shown to be safe in man. The chemists went back to the drawing board. Small changes to the chemical structure of the antihistamines produced compounds that were very much weaker as antihistamines but were able to reduce the amphetamine-induced activity in rats.
Indeed, in line with their weaker antihistamine effect, they did not make the rats sleepy. Based on these findings, the compounds with the best profiles were quickly moved into experiments in man in the early 1950s. When given to schizophrenics experiencing a positive episode they were indeed found to be effective in significantly reducing the clinical positive symptoms of schizophrenia. In turn, these compounds were quickly adopted into clinical use as the first generation of ‘neuroleptics’, which roughly translates as ‘neural calmers’.
A revolution in the care and treatment of schizophrenia was thus achieved, allowing for the first time effective control of at least the most damaging symptoms. This allowed many patients to emerge back into the community from the asylums to which they had been previously confined.
This approach, new chemistry, then testing compounds in tissues and animals, and finally careful experiments in man, formed a template for a very productive twenty or so years of drug discovery and development. By combining careful observation of the effects of drugs in patients, an increasing understanding of how the drugs work, and a rapidly expanding range of new compounds from the chemistry laboratories, more predictive ‘test tube’ and animal experiments were developed. From the promising start with the neuroleptics came the first anti-depressants, for the treatment of depression, and then the anxiolytics, used to control anxiety and phobias. In most cases these new drugs proved closely related in chemical structure, with only small changes by the medicinal chemists resulting in quite different patterns of effect.
[Figure: Fluoxetine, the first selective serotonin re-uptake inhibitor (SSRI), marketed in 1986 as Prozac for depression.]
These drugs, by definition, are powerful modifiers of behaviour and brain function. Considering the complexity of the brain and, even today, our limited understanding of how these systems interact, it is not surprising that these early drugs also proved to have major side effects. Indeed, we now know that many of the systems found in the brain also have major roles in other parts of the body, which in most cases explains the mechanism behind the side effects.
Initially, the huge improvements in the control of the symptoms of severely disturbed psychiatric patients outweighed these often very significant side effects. However, as these drugs spread into wider use, much of the research and development effort switched to producing safer drugs that produced fewer side effects and were easier to take over long periods. Therefore, much of the ‘drug discovery’ effort over recent decades has principally been ‘drug refinement’ rather than the discovery of radically new approaches.
In parallel with these changes in strategy, the formal process of drug development has become increasingly rigid. This change has several drivers but the most significant has been the increased need for safety testing required by the governmental regulatory authorities.
The Process of Modern Drug Discovery and Development
In order to understand the effect that regulation has had, it is useful to understand the modern ‘process’ that has developed over the years. The process starts with a pre-clinical phase, which takes on average four years to complete and costs, on average, between £1.5 and £3 million.
Pre-clinical development: This phase takes a new idea through a series of laboratory-based studies to confirm that the idea makes sense and that the tests required are sufficiently reliable to support the assessment of what, on occasion, can be millions of compounds. If this is confirmed then the in vitro biologists and medicinal chemists work to first identify new chemical starting points. This process can involve the ‘screening’ of huge libraries of compounds in highly robotic systems.
Increasingly, though, the use of complex computer models allows much of this work to be performed ‘in silico’, with only a comparatively small number of real compounds being assessed in ‘wet lab’ experiments. The best performing compounds are then retested with greater precision to allow a clear decision as to which ones to use as the basis for moving forward. At this point, the compounds may well do the job required but require high doses, break down very quickly, have the potential to produce side effects, or be predicted to be toxic. The chemists and biologists then work to ‘optimise’ the compound by designing out the unwanted aspects while maximising the desired profile.
Early clinical development: Once a compound has been ‘optimised’ to a point where it is believed to have the potential to be a ‘drug’, it enters the formal toxicology studies that have to be completed before it can be tested in man. These take between a year and a year and a half to complete and cost between £1 and £2 million. In parallel, the compound will be remade to the very strict levels of purity required for human use, and plans developed to see if the compound could be easily manufactured. Once the toxicology studies have confirmed that the compound is safe, an IND (Investigational New Drug) application is filed with the government authorities. If accepted, the compound progresses to a Phase I clinical study.
Phase I: For the first time the compound is given to humans, in most cases to a small number of healthy individuals rather than patients. The aim of these studies is to confirm the safety profile, examine how long the compound remains active in the body, and usually also to determine the maximum doses that can be taken without inducing unacceptable side effects. This phase again usually takes about a year to complete and often costs in excess of £1.5 million.
Phase II: In Phase II the compound is given for the first time to patients, and some idea is gained of whether it is really effective at performing the job it was designed to do. When the first of these studies has been completed successfully, the ‘research phase’ is often deemed complete. These studies are often complex, examining a huge range of factors in very carefully selected patients, and can take 2-3 years to complete and analyse at a cost in excess of £3 million.
Reaching this point may have taken 8 to 10 years and cost upwards of £10 million. Before progressing to Phase III, further Phase II studies are usually performed in other carefully selected patient groups to explore how robust the effects of the drug are across a range of related disorders. These later Phase II studies mark the start of the ‘development’ phase of the drug.
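As a rough sanity check on the figures quoted so far, the per-phase durations and quoted lower-bound costs of the pre-Phase III stages can be tallied. This is purely illustrative: the numbers are the ones given in the text (durations taken as rough midpoints, costs as the quoted minima), and real programmes vary widely; the itemised minima naturally sum to less than the overall ‘upwards of £10 million’, which also has to absorb overheads and failed compounds.

```python
# Illustrative tally of the phase durations and costs quoted in the text.
# Durations are rough midpoints (years); costs are quoted lower bounds (£m).
phases = [
    ("Pre-clinical",        4.0,  1.5),  # ~4 years, £1.5-3 million
    ("Toxicology/IND prep", 1.25, 1.0),  # 1-1.5 years, £1-2 million
    ("Phase I",             1.0,  1.5),  # ~1 year, often > £1.5 million
    ("Phase II",            2.5,  3.0),  # 2-3 years, > £3 million
]
total_years = sum(years for _, years, _ in phases)
total_cost = sum(cost for _, _, cost in phases)
print(f"~{total_years} years and more than £{total_cost} million "
      "before Phase III begins")
```

The tally lands squarely in the ‘8 to 10 years’ window quoted above, which is the point of the exercise: most of a drug’s life is spent before it ever reaches a large patient population.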
Phase III: The decision to move a programme into Phase III is a major one due to the scale and cost involved. Rather than testing in small, carefully controlled groups, Phase III studies often encompass thousands, if not tens of thousands, of patients. This is done to understand the level of efficacy that will be achieved when the drug is used on prescription by the clinical community at large. Completing these studies requires the manufacture of very large quantities of an often expensive drug, needing bespoke plant and machinery.
The clinical studies will typically be performed in centres all over the world at the company’s expense. This requires a complex system of administration to ensure that the studies are performed safely and to the highest clinical standards, and that the data are collected, analysed, and properly reported to the regulatory authorities. Consequently, these studies cost tens, and often hundreds, of millions of pounds and take 3 to 5 years to perform. Very rarely will a regulatory authority accept a single study as sufficient evidence; in most cases 5 to 10 studies need to be performed. Further, despite the care taken up to this point, many of the studies fail to reach their intended goals for a variety of reasons.
Filing: If Phase III can be completed, a dossier is filed with the regulatory authorities requesting a licence to market the new drug. These dossiers most often go to the Food and Drug Administration (FDA) in the United States and/or the European Medicines Agency (EMA, formerly EMEA) for approval. These files now routinely run into gigabytes of information on all aspects of the drug’s profile, manufacture, and safety. The regulatory process, often a dialogue between the agency and the sponsor, can again take over a year and may involve further studies being requested and performed.
Licensing and Phase IV: If the regulatory authorities are satisfied with the safety of a compound and convinced that it offers a significant advance in treatment over the existing options, a licence to market the compound is granted. At this point, assuming that the health system in a particular country is prepared to pay for a new drug, doctors will be able to write a prescription, the patient to receive the drug, and the company to receive some income. The drug will, however, remain under very close scrutiny by the sponsor company and the regulatory authorities. It is by no means unheard of for a new drug to fail in ‘Phase IV’, now that it is being prescribed to the general patient population. To reach this stage will often have taken up to 15 years and can cost in excess of £4 billion.