DS InPharmatics Head of Analytical Services, Colman Byrne, joins the show to share his experience in analytical method development and validation. Colman is the most senior analytical services expert at DS InPharmatics and is technically proficient in all aspects of analytics. In this episode, Ed, Brian, Meranda, and Colman expound on the process of analytical method development, including its challenges, the physicochemical properties that can impact the process, and the regulatory parameters and agencies that exist.
Welcome again to CMC Live. FDA regulations and guidances simplified through examination, real-life experiences, and risk-based advice. This podcast hopes to educate sponsors and individuals on agency-related CMC matters. It's not intended to be prescriptive advice but rather an interpretation that's right for you. In this episode, we have Colman Byrne live from King of Prussia, PA. This podcast will be a rare treat for me as I've known Colman for about 16 years now; he is a very knowledgeable analytical person. He's our most Senior Analytical Services Expert here at DSI, and he's technically proficient in all aspects of analytical service. So welcome, Colman.
Thank you. Glad to be here.
All right. We're talking about analytical here today. Manufacturing and drug development are important, and there are usually unique challenges that you're not aware of if you're not analytically oriented to ensure product quality and safety. Analytical testing provides the data needed to produce safe and effective drugs, and the development and validation of these methods are crucial in drug development. We'll talk about some of that and why. First question, how did you become an analytical chemist, Colman?
Essentially, because I found out how colors work. Back in college, as I was listening to the lectures, they were very specifically showing how various electronic transitions cause different colors, and I thought, “Wow, this is fascinating.” So, that’s how I decided that's what I wanted to do. I got my degree in physical chemistry, and then I started off working in a lab, doing testing of different types of pharmaceutical products, and just expanded through testing small molecule products into testing peptides, biomolecules, and all sorts of different styles and types of products, and using different techniques, each of which has its own different series of challenges analytically. It's been a gradual and continual learning experience, which is always enjoyable because I like finding out new stuff.
“One of the difficulties you face is that, when you're starting with a molecule, you frequently know relatively little about it. Over the course of a development project, as you go further and further through it, from pre-IND to phase one, phase three, and eventually into commercialization, you're constantly learning more and more about the molecule and about what can happen to it under different circumstances.”
Okay, what else makes it enjoyable? What do you like about chemistry right now?
Well, it's always like you're doing a little bit of detective work. Initially, you're starting off trying to find the best way of solving a problem, getting an unknown piece of information that serves to solve a puzzle, and the puzzle is how to ensure the quality of the product. What is it supposed to be? Are the levels of impurities as low as they need to be? What are the impurities? Is the product being manufactured properly, and is it going to be safe when it is dosed? There are always little challenges in designing the series of tests that are appropriate for any particular product, and in making sure that the tests are working appropriately so that you can trust the results. That's what it eventually comes down to: can you trust the data? Can you support the testing results when you bring your data along to the agencies to get approval for your product?
Okay, some folks would say that analytics is the backbone of drug development, spanning both the drug substance side, if we talk to our API colleagues, and the drug product side. Analytical development is very important across those areas. In your career, you've seen method development done the right way, and you've seen things done improperly. Overall, can you talk about some of the challenges in analytical method development and validation?
One of the difficulties you face is that you frequently know relatively little about it when you're starting with a molecule. Over the course of a development project, as you go further and further through it, from pre-IND to phase one, phase three, and eventually into commercialization, you're constantly learning more and more about the molecule and about what can happen to it under different circumstances. You have to adapt your analytical methodology to establish, at any given time, what the molecule is and add to the knowledge you've previously generated. When you're starting, you know a bit of the chemistry of the molecule. You can work with the synthetic chemists that develop it to understand what impurities can potentially be present. You try to create methods that will separate the known molecule from its known synthesis impurities.
If those impurities are available to you, you can challenge the methods to confirm that separation and show that you can resolve and quantitate them reliably. However, you generally don't have access to every minor impurity that can potentially be generated, so you then rely on analogs or degraded samples that you sometimes manufacture deliberately, either through stress stability studies or chemical degradation, to establish the separation. You try to establish the potential impurities that can develop within a product and then show that your method(s) can resolve and quantitate the levels of those impurities, both consistently and accurately. That's always the challenge because, quite frequently, impurities and degradants tend to be very similar to the parent compound. You can sometimes have difficulties in separating them and knowing that they are there, but there are various techniques that you can use to do that.
Quick question, based on that. Now, put your consulting hat on a bit here. A lot of times, sponsors may not truly understand those challenges. This may be discovered relatively late, and you have to go back and redo or add additional work that they hadn't planned for. How do you decide when that work has to be done if it wasn't done originally? Take the characterization of impurities, for example: how do you draw the line of demarcation between what they have to do now, which you're going to recommend strongly, and what they can continue to work on down the road to characterize their product further?
I typically would recommend that, by the time a sponsor gets to the IND phase, they have a test method that has been what I would call qualified, meaning that it has been sufficiently developed. This means that you know, based on experimental studies, that it can separate the primary product from its known degradants. It also means that some forced degradation work has likely been done to show that if you expose the drug substance or the drug product to specific stresses that are likely to occur in the actual storage of the product, whether temperature stress or exposure to chemicals as part of a synthesis process, you're not getting additional degradants.
If you do observe those degradants, you need to have confidence that your methods can resolve them. You need to have confidence that you're going to encounter no significant surprises when you start doing your manufacturing for your initial clinical studies and doing the stability testing that shows your clinical supplies will continue to be acceptable for use throughout the life of the product.
Now, during those stability studies, because they will typically be done on multiple batches with multiple sets of tests over different periods, you're going to potentially see things that you didn't initially understand or didn't know could be problems when you were doing your development work. You may sometimes have to go back, reevaluate, and re-optimize the test method.
Once you have established the vast majority of potential issues that you're going to see, you then do the complete validation of the test method. The method validation would typically be done during the phase two or pre-phase three timeframe. When you get to phase three, you're generating your registration stability and pivotal stability test data to support your shelf-life assignment for your finished product, so you need to have complete confidence in your data. When you go to commercialization, that must be done with a method that is known to be sound and that gives reliable and consistent results. This then gives you confidence that you can trust your stability data and any projections of shelf life that you make from it.
I want to go back to the challenges. One of the things I remember, being a bit involved with analytical from the regulatory side, is that development timelines are often compressed, for whatever reasons, right? In many instances, not enough consideration is given to method development in the validation plan. With that, I have a three-part question; you've already answered a little bit of all three. First, how would you recommend analytical method development be incorporated into the overall development timeline for drug candidates? Where would that come in, in the plan? Second, and I think we get this often, can methods be changed midstream? What are some of the pitfalls, and where, in some instances, is it necessary? Third, and I think you answered this a little, at what point should methods be validated? I believe you mentioned somewhere during phase two, but can you talk about breakthrough designation programs, where you're not in a traditional drug development program, things are moving faster, and CMC needs to catch up? With that, can you talk about timelines? Are methods ever a hold-up or a bottleneck?
They can certainly be a hold-up or a bottleneck. Again, it's a matter of looking at the potential problems early on and getting the best method that you can as early as possible. It is typically less expensive to develop a solid test method initially than to develop, for example, a drug substance manufacturing process or to go through and manufacture batches of the drug product. It becomes a question of trying to spend the appropriate resources at the appropriate time. The suggestion, again, is to establish a scientifically sound test method by the time you get to the IND stage. That should, where possible, be based on quantitation of the compound and its impurities against standards rather than just on area percent response.
Ideally, standards are made for the primary known degradants and impurities. Suppose those impurities are not known, which is quite frequently the case in the early stages of development. In that case, work during phase one and phase two should be focusing on establishing the identity of significant impurities and degradants and establishing standards for those. By the time you come to phase three and you're doing your validation in preparation for testing your phase three samples, you will have known standards of known impurities that you can quantitate your samples against. This permits you to establish response factors that allow you to get accurate results for your impurities during your phase three testing.
That is roughly when things should occur, but each project is different. Sometimes resources are more plentiful, and compounds are more easily and readily available. At an early stage, you could potentially isolate, characterize, and identify impurities, giving you standards of your impurities that you can use for quantitation by the time you're in pre-IND or phase one. You can develop the method very heavily, front-load the development, and almost validate the method in phase one. That is certainly appropriate if you can do that.
There are other times when you only identify that a particular degradant occurs during your phase three stability study samples. Sometimes, it is necessary to isolate a degradant that you haven't seen in the early development. Over longer-term stability studies, you may start to see it occur at levels that are high enough to be significant. You may not have data from your early-stage toxicology studies to show that a particular impurity material is appropriately safe, depending on what it is, because you may not have known about it when the toxicology studies were being performed. As a result, sometimes, in phase three, you're left with a situation where you've got a new impurity that develops, and you identify it and modify your test procedure at a late stage such as that.
“It is typically less expensive to develop a solid test method initially than to develop, for example, a drug substance manufacturing process or to go through and manufacture batches of the drug product. It becomes a question of trying to spend the appropriate resources at the appropriate time.”
Going back to the question I had, the second part of it, can methods be changed midstream? You touched on why you see some stability issues, maybe the process is scaled up, and you see some variances. Can you talk about changing methods midstream?
Yeah, it's very common to change the test method midstream because you sometimes find out things you didn't know previously. In that sort of circumstance, you can always do a supplemental validation of an existing procedure. To expand the scope of a method because a new degradant is discovered, you may have to change some of the conditions you use to quantitate it. You may have to generate an alternate test method looking specifically for a new degradant or a new impurity that comes up from a process change or stability. In some cases, you can modify an existing method. In other cases, you’re effectively adding new methods. If you're modifying an existing method, you have to evaluate how that method may change and affect the results you've generated previously.
You can quite frequently do that evaluation via a paper exercise, evaluating the effect of the method change on the other components being tested. If you find that the change is negligible, you don't need to update the data. Sometimes, however, you find that a more significant change is required. A typical case might be where impurities were quantitated based on their relative amounts when a test procedure was initially developed, with all of the impurities calculated based on the area percent of a chromatographic response. This is a common way that early-stage methods are developed before standards are available.
However, by the time you're thinking about filing and about a method suitable for commercial production, the regulatory authorities don't typically approve methods that are purely area percent based. This is because different impurities can respond differently to the detection methods that may be used. You may have a series of small peaks that all appear to be the same intensity, but if one represents a material that is weakly detected whereas another is strongly detected by the detection method used, then you can be either underestimating the low responder or overestimating the high responder. This could make you think that you've measured different amounts of impurities than are actually present.
When you then try to evaluate those impurities, whether based on your available toxicology data, on whether or not they're metabolites, or on the levels you see in your clinical studies, knowing what an impurity is and how strongly it is detected can help you show that the overall product is safe. If you don't have a true understanding of the actual level of an impurity, then you're making decisions on product safety and stability based on inaccurate information.
The regulatory agencies will typically look for an impurity content method based on quantitation against a known standard, either a qualified standard of each impurity or, in situations where that's not possible, a standard of your active. Sometimes you can have so many impurities that having standards for them all becomes impractical. Establishing that you know the relative response factor compared to the primary component for all the different impurities, you can then use that value as a conversion factor for each impurity within your test method.
That situation frequently occurs where a method is developed in pre-IND and qualified as an area percent method and is then used, perhaps, for phase one or phase two clinical studies. Suddenly, you now have to convert that over to a method based on quantitation against known standards, i.e., a weight percent method. You will sometimes find that the apparent levels of your impurities change suddenly in the middle of your stability study, primarily because you've changed from a less accurate quantitation procedure to a more accurate one. If that happens, you can convert the originally determined area percent results to give simulated weight percent results, because you know how the response has changed. This allows you to establish that you've got a consistent response across your stability study and to make comparisons between your original study data and your subsequent study data.
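To make the area-percent-to-weight-percent conversion described above concrete, here is a minimal sketch in Python. The `area_to_weight_percent` function and all the numbers are hypothetical illustrations, not taken from any actual method; the only assumption is the standard relationship that a simulated weight percent equals the area percent divided by the impurity's relative response factor (RRF).

```python
# Hypothetical sketch: converting a legacy area-percent impurity result to a
# simulated weight-percent result using a relative response factor (RRF).
# All values are invented for illustration.

def area_to_weight_percent(impurity_area, total_area, rrf):
    """Convert an area-percent impurity result to a simulated weight percent.

    rrf is the impurity's detector response per unit weight relative to the
    active, so a weak responder (rrf < 1) is underestimated by area percent.
    """
    if rrf <= 0:
        raise ValueError("RRF must be positive")
    area_percent = 100.0 * impurity_area / total_area
    return area_percent / rrf

# An impurity that responds at half the strength of the active (RRF = 0.5):
# an apparent 0.1 area % corresponds to 0.2 simulated weight %.
print(area_to_weight_percent(impurity_area=100, total_area=100_000, rrf=0.5))  # 0.2
```

Applying the same conversion to every historical time point is what lets the originally reported area-percent results be compared directly with the newer weight-percent results across a stability study.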
Basically, for those folks that are listening in who are concerned about a change or improvement to a method because of the impact on the stability data, what you're telling me is: don't let that dissuade you from improving a method that you know is flawed and needs to be further developed and refined, because in most cases you can establish some correspondence with the old data generated using the old method.
Yes. It's not necessarily that the data is flawed. It may have been generated appropriately but quantitated without considering the appropriate correction factors. It's less accurate than the subsequent data, but it can still be made valid and used to make better decisions than could have been made several years previously, at an earlier stage of development.
Okay, I think Brian has a question about how this data comes in, what you're doing, where you're validating, and how it all fits into the regulatory submission. I had a couple of questions on the regulatory front about that interface. Colman, can you talk about the regulatory parameters that exist out there for analytical method development and validation in drug development? Maybe about the FDA guidances, the ICH guidelines, any guidance that the industry generally follows to perform analytical methodology?
For the past 20, nearly 25 years, I would say, there has been a series of guidances promulgated by the ICH, the International Council for Harmonisation, which brings together regulatory authorities and pharmaceutical industry representatives from the US, Europe, Japan, and several other regions. This body essentially looks to standardize practices for all sorts of different areas within pharmaceutical development. In analytical testing, some of the earliest guidances they issued were on the validation of test methods.
Typically, you're looking at seven different parameters for validation. The first, and one of the more important ones, is specificity. This means showing that you can separate whatever you're looking to quantitate with that test method from everything else that is present. With some methodologies, you don't need to resolve impurities from the primary components, but for others you do, so on those particular methods, specificity is critical.
Next, you’re looking at linearity to show that you can measure a consistent response over a known concentration range. Sometimes that range can be quite high, in the order of a thousand-fold. More frequently, a few hundred-fold ranges are reasonable. With some methods and some detection techniques, you're looking at situations where you may only have linearity over a tenfold range of concentrations. Those methods would typically be used in very specific circumstances for very specific types of products, and you try to avoid those where possible.
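As an aside, the linearity assessment described above usually comes down to fitting a least-squares line to concentration-versus-response calibration data and checking the correlation coefficient. The sketch below illustrates that with invented data points; the `fit_line` helper is a hypothetical name, not from any particular method or library.

```python
# Minimal sketch of a linearity check: fit a least-squares line to
# calibration data (concentration vs. detector response) and compute the
# correlation coefficient r. All data points are invented for illustration.
import math

def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r = sxy / math.sqrt(sxx * syy)  # correlation coefficient
    return slope, intercept, r

# Five standards spanning a 100-fold concentration range (e.g. 1-100 ug/mL)
conc = [1, 5, 10, 50, 100]
resp = [10.2, 50.5, 99.8, 501.0, 1000.5]
slope, intercept, r = fit_line(conc, resp)
print(round(r, 4))  # very close to 1 for a linear detector response
```

Acceptance criteria vary by method type, but an r very close to 1 across the full claimed range is the usual expectation.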
Following linearity, accuracy is measured, typically through spiking the sample and measuring how much of the added material the method detects. This shows that you don't have any interferences in the method caused by the components in a drug product or some of the conditions you're using in your drug substance analysis.
Repeatability or precision is the next parameter that you look at. This defines how reproducible your method is under routine analytical conditions. Associated with repeatability is reproducibility, i.e., if you have somebody else doing the analysis using different equipment or in a different lab, how comparable that data is to your initial set of data. That's also called intermediate precision. Again, those parameters are very important to allow you to understand how reliable the data you're generating is to make good decisions about the quality of the product with it. That's basically what the analysis is all about; getting high-quality data that will allow you to make good decisions.
Finally, for compounds where you're looking at low levels of impurities, such as degradants, or small compounds, you're sometimes looking to see how sensitive the method is. You're looking to see how little of a compound you can see, which is the detection limit, or how little you can measure accurately, which is usually slightly higher than the minimum detectable level – that's the quantitation limit. Those are other areas you look at for certain types of methods focused on impurities.
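For reference, ICH Q2 describes a common calculation-based way to estimate these limits from a calibration curve: the detection limit is approximately 3.3·σ/S and the quantitation limit approximately 10·σ/S, where σ is the standard deviation of the response (for example, of the blank or of the regression residuals) and S is the slope of the calibration line. A small sketch, with invented σ and slope values:

```python
# ICH Q2 calculation-based estimates of detection and quantitation limits:
# LOD ~ 3.3 * sigma / S, LOQ ~ 10 * sigma / S. The sigma and slope values
# below are invented for illustration.

def detection_limit(sigma, slope):
    return 3.3 * sigma / slope

def quantitation_limit(sigma, slope):
    return 10.0 * sigma / slope

sigma, slope = 2.0, 100.0  # response units; response units per ug/mL
print(round(detection_limit(sigma, slope), 4))     # 0.066 ug/mL
print(round(quantitation_limit(sigma, slope), 4))  # 0.2 ug/mL
```

Signal-to-noise approaches (roughly 3:1 for LOD and 10:1 for LOQ) are an alternative the guideline also recognizes.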
“Typically, you're looking at seven different parameters for validation. The first one, one of the more important ones, the specificity. This means showing that you can separate whatever you're looking to quantitate with that test method.”
Finally, you look at the robustness of the method. This means what happens if something goes slightly wrong and the conditions change slightly? Can you still trust the data? How far away from the target conditions can the method deviate before the data stops being reliable? Understanding that parameter allows you to see what variability you can tolerate within a method during routine long-term operation.
Those are the seven areas that you will typically look at within an analytical method validation. They are all captured within the ICH guidelines and FDA guidelines.
Yeah, that's terrific. As you were talking, I had to rush back to my college analytical chemistry course, going through each one and having memories. Anyway, Brian, you had a question earlier. We didn't get to before we started the podcast. Do you want to bring that one up right now? I think that was a good segue over to some of the next set of questions I had about review and what you're putting into submission?
Yeah, if you were to offer advice specific to a client in terms of the state of the methods… I think you touched originally on the state a method needs to be in going into an IND. We've had situations before where the method validation was inadequate, or we felt that it was not sufficient for a filing to be done. At a high level, can you go through that process of identifying what makes a method possibly inadequate, and then any remediation steps that you've gone through in your career to get that up to par to support a submission?
Typically, inadequate areas will fall under some of the seven validation categories that I talked about a few minutes ago. Specifically, specificity can be an issue over time, even if a method was appropriately developed initially. If two years later, you see a new impurity in a stability study that has started to occur, or a new degradant comes along, the original method specificity may not be adequate. You may not be adequately separating that impurity from some of the other impurities or the active peak. So, in that situation, you would be looking to change the method conditions to improve the separation to give yourself less interference and more accurate quantitation.
In the situation where you make a change, you try to cause the new impurity to be better resolved. Any of the data you've generated previously on other impurities is not necessarily affected by the change you've made in the method. In this case, it may not be necessary to go back and revalidate the method for all those other impurities in areas such as determining their linearity, determining whether there are interferences as part of the accuracy, determining whether the method by which you are preparing the samples gives you repeatable results. If you're changing the separation or the chromatography, then all you would need to do in terms of improving the method and showing that it is still giving you valid data would be to do a smaller supplemental validation, focusing perhaps only on the specificity and on some of the other changes that may have affected the original method.
Sometimes you would look at a precision determination. Sometimes, you may revisit your robustness data to see what conditions may have changed and need to be reevaluated. You're not necessarily looking at fully validating a test method again just because you've made a small change. You can make incremental changes. The key thing when you go to do a filing is that you can present all these incremental changes as part of a continuous improvement process. You present your initial validation, and you present the supplementals. We have had involvement in regulatory filings that were successful and came back from the agency without review questions, even where two or three supplemental validations had been performed to establish better levels of confidence in the method. These supplemental validations were the result of minor changes to methods that had taken place over a multi-year development phase.
It's possible and desirable to improve the method continually, but the key is to establish valid reasons for why it has to be done. Document thoroughly, as you're doing it, the rationale behind the improvement and why something is changing, so that in three or five years, anyone can look back and see why it was changed, what was looked at that didn't work, and how you got to the current method. These changes then suggested that a certain supplemental validation was required, so you performed that validation. You then present that data along with the original validation in the regulatory filing. You do this in a fashion that the regulators can understand, so they can see that you have done the job you're supposed to do well and have confidence that you know what you're doing and that the data you're presenting is accurate and trustworthy. That's what it's all about if you're an analytical person.
Yeah, right. Brian, any follow-up question or supplemental question? That was a good one!
No, I think it's good. I think what you described, Colman, is really in line with the situation the folks on the drug substance and drug product sides are in. You're telling that comprehensive development history, that story, and at the same time showing the efforts toward continuous improvement. Many of our clients have limited resources; in some cases, they have a minimal number of batches produced and limited opportunities to generate data, where you may have to look at even engineering batches and things like that to get as much data as you can. I think it's really important to note: don't be afraid of that supplemental validation or continual characterization of the method, as long as you're moving the ball forward and showing continuing refinement. That's the story that is just as appealing to the reviewer.
“You don't know, and you can't know everything upfront right away. You're always going to find more things out. Ideally, they're not going to be bad things that you have to do a lot to resolve, but sometimes, that's what happens due to situations that are outside your control.”
Yeah, you don't know, and you can't know everything upfront right away. You're always going to find more things out. Ideally, they're not going to be bad things that you have to do a lot to resolve, but sometimes, that's what happens due to situations that are outside your control. Examples might include if an API manufacturer has to be changed, or a process needs to be modified, or you have equipment issues or chromatographic column issues that you couldn't have predicted two years previously when you were developing a method. So, everything has to be potentially modifiable.
The key is to consider upfront, as reasonably as possible, what could need to change. You have to develop the method as thoroughly as you can to start with, without making a Ph.D. research project out of it, because it has to be practical. It has to do what you need it to do in the early stage. Focusing on doing things thoroughly early on saves time and money later on because you better understand the method. You're more likely to have identified potential pitfalls that may not be critical upfront but which you at least know are out there. This way, you can adapt when you have more resources and are in a later stage of development where those pitfalls may become more critical.
You mentioned a couple of typical culprits in method development and validation. One question, from the regulatory side, concerns the physicochemical properties of a molecule. I guess they play a huge role in method development, right? Can you talk to some of that? For example, if the material is light sensitive or moisture sensitive, there are different ways you have to go about looking at it versus other types of products. We worked on a program a long time ago with no stability issues at all; you could run over it with a steamroller. Can you talk about some of the physicochemical properties that might affect method development?
Well, you mentioned light sensitivity. We've worked with certain compounds in which the sensitivity of the product to light was part of the mechanism by which they performed their intended physiological function. What made the product good as a potential medication made it more difficult to work with from an analytical standpoint. In that situation, you're looking to take whatever care you can to minimize the product's exposure to light. Typically, that's done by covering samples and using dark glass containers, such as low-actinic or amber glass. Sometimes, it's necessary to work in environments where the lights within the lab use filters to exclude certain wavelengths, or in very low light intensity areas.
Based on knowing the molecule's physical and photochemical properties, you understand whether something is light sensitive. You know then that you need to take certain precautions when working with the molecule. That is the sort of thing that you would typically try to work out in the early stages of development, to figure out what sort of precautions you need to take so that when you do come to the point where you're starting to generate release and stability data on your clinical materials, you can trust the data and know it's not being affected by forces and circumstances that you're not controlling properly.
“You try to minimize the frequency at which you have these unfortunate learning experiences, but sometimes they're unavoidable. By focusing your development upfront and looking at the available information and the potential pitfalls, you can minimize the possibility of having unfortunate learning experiences, which sadly, will cost time and money and cause potential delays.”
Water sensitivity is another potentially problematic situation, where the sample or standard can pick up water from the atmosphere. As a result, if you're weighing a quantity of standard to use to quantitate a sample, suddenly your assay can change because the standard concentration is not what it's supposed to be. Because of water absorption, the same weight of standard now has less active in it, so the same amount of active in a sample appears greater than it is. That is the sort of circumstance where comprehending the physical parameters of the molecule as you go into a project allows you to understand what you need to do and what precautions you need to take to avoid analytical problems.
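The arithmetic behind that moisture effect is easy to sketch. The following is a hypothetical illustration with made-up numbers, not data from the episode: with external-standard quantitation, the calculated assay scales with the ratio of sample response to standard response, so if the weighed standard has absorbed water, its response drops and the apparent assay of the sample rises by the same factor.

```python
# Hypothetical illustration of how water uptake by a reference standard
# inflates the apparent assay of a sample (external-standard quantitation).

def apparent_assay(true_assay_pct: float, water_uptake_pct: float) -> float:
    """Apparent assay when the standard has absorbed moisture.

    If the standard has picked up `water_uptake_pct` percent water by
    weight, the same weighed amount contains proportionally less active,
    and the calculated assay is inflated by dividing by that fraction.
    """
    standard_purity_fraction = 1.0 - water_uptake_pct / 100.0
    return true_assay_pct / standard_purity_fraction

# A sample whose true assay is 99.0%, quantitated against a standard
# that has absorbed 2% water by weight:
print(round(apparent_assay(99.0, 2.0), 1))  # ~101.0 -- reads falsely high
```

Even a 2% moisture pickup pushes a perfectly good 99.0% sample to an apparent 101%, which is exactly the kind of result that triggers an investigation if the root cause isn't understood upfront.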
Sometimes you only gain that understanding through sad experiences where things go wrong. You have to investigate and develop corrective and preventive actions to stop what happened from happening again. This is a learning experience, and it is part of what causes methods to need modification and updating throughout the life of a project. You're finding out things you didn't know initially, and as we pointed out before, overall, that is a good thing. You try to minimize the frequency of these unfortunate learning experiences, but sometimes they're unavoidable. By focusing your development upfront and looking at the available information and the potential pitfalls, you can minimize the possibility of unfortunate learning experiences, which sadly will cost time and money and cause potential delays.
So, looking at root causes and getting a thorough understanding of the physicochemical properties should always be done early in the program. Getting back to validation questions and issues that pop up during review: let's say, for example, you have breakthrough designation, you're entering phase one or a very early phase, and you don't have a validated method, but you need to use some data to make a critical decision. Can you share some examples or any history you've had of using data from a non-validated method to make a critical decision? Maybe a good story or lessons learned?
You are always at a point where you are making decisions based on available information. The quality of that information affects the riskiness of the decision. With nearly everything in pharmaceutical development, there are risk-benefit analyses that you are doing all the time. That is frequently a question of what resources you have to devote to resolving a specific problem at a particular stage in development, when resources may be more or less plentiful than needed.
If you're in a circumstance where you have an expedited development process going along, then you will have a different balance to your risk-benefit. There has been a decision that this particular molecule should move forward through the development pathway as rapidly as possible because of the potential benefits. You may be in a situation where, instead of having 99.9% confidence that the approach you're taking is correct and that the quality of the data you're generating is as good as it could be, you may only have 99% or 95% confidence. Based on the data you've generated, you have to look at how much confidence you can truly have. As I pointed out earlier, when you're going into pre-IND and phase one development of a method, you have to try to make sure that it is scientifically valid and that you're not expecting to have many, if any, surprises when you go into your later development.
Validation of a test method should be a smooth process because you should expect that you know everything that could have gone wrong and have addressed those before validation. The validation is ideally just a documented process to show that you have, under controlled conditions, done all the work needed to show that your data is accurate, precise, and linear, etcetera. There should not be any surprises.
This does not mean that it doesn't happen. Still, if the method was well developed, it reduces the probability that something unexpected will happen during validation to acceptably close to zero. If you're developing a method like that, then even at the pre-IND and phase one stage, you should have good confidence that a method will eventually make it through validation by the time you get there.
If you do end up with a compressed timeline and you're going straight from phase one to phase three, then if the method was well enough developed and qualified initially, going into a phase three level validation and a commercial level validation becomes a simple step with a low probability of failure or of a problem needing corrective action. Otherwise, more resources and time could be needed for a supplemental validation just as you are trying to finalize your clinical studies and get ready for your filing in an expedited situation.
“Validation of a test method should be a smooth process because you should expect that you know everything that could have gone wrong and have addressed those before you do the validation. The validation is ideally just a documented process to show that you have, under controlled conditions, done all the work needed to show that your data is accurate, precise, and linear, etcetera. There should not be any surprises.”
Again, the key is to do the due diligence upfront and develop the method as thoroughly as possible with the view that it may eventually need to become a commercial method. Develop it as best you can with that in mind, while working under the constraints of an early-stage development process. You may have more limited resources and time at this stage, and the company's focus is just on making clinical material, releasing it, and getting it into the clinic.
You have to balance these priorities; you can never neglect what's in front of you. Still, if you keep the end goal in view, you can develop methods for now and the near future that can be easily used, validated, and, if necessary, adapted to the more stringent needs of phase three and commercial.
That's where the challenge is: trying to do as much upfront as you can, when you don't necessarily know everything or have all the resources you might have at a later stage. Still, prepare as best you can for the later stages, so you understand things better and get better data as early as possible.
Okay, perfect. Okay, I think we have a few more questions, but we're running short on time for this podcast. I do have two more serious questions. The first one is, are the Finn Harps poised to make a run this year if they play?
Well, they're already playing. I don't know about poised to make a run. They're always poised to make a run. They normally trip over their ankles and fall flat on their faces, but as Finn Harps fans, we're always optimistic. For those of you out there who are not aware of who or what the Finn Harps are, they are a small Irish soccer team, the bane and joy of my life over here in the United States. They have been a source of great pride and great aggravation for over 50 years, and Ed has heard more about these guys over more beers than he probably cares to think about. As I said, that is what keeps me optimistic. There's always next year, and it's always going to be better. Never mind that we might have lost our last three games in a row. There's always the next game. Stay positive.
Yeah. So, one last question here. Are you a Spotify, Apple Music, or other?
I'm old school. I like having my music available. I'm not too fond of streaming, partly because the artists get paid so little out of it. Like everybody else, I am increasingly being forced to go to streaming services, and I'm using them. I tend to be more of a Spotify person because, years and years ago, an Apple device took my extensive music collection and tried rearranging it without my asking it to. Seeing your precious electronic music files getting garbled right in front of you is a scary sight. I've been staying away from Apple products ever since. So, I would be a Spotify person.
Honorary endorsement for Apple. Anyway, when I met you, Colman, I think we were together at a small biotech, and you had some device with all this music. I think it was pre-Napster, or right around that time. I think you had more music than Napster. That was probably 16 years ago. I remember you had pretty much any song that ever existed on there, and it was pretty cool.
Wrapping it up here: scientifically sound analytical methods that are well understood and properly validated within the regulatory pathway are the basis for successful manufacturing and, ultimately, regulatory approval, as well as safe and effective drugs. Thanks again, Colman, for joining us for this podcast; we appreciate all your thoughts. Take care.
FDA CMC regulations and guidances are simplified through examination, real-life experiences, and risk-based advice. This podcast hopes to educate sponsors and individuals on agency-related regulatory CMC matters. We will focus on the critical CMC issues and build programs that enhance drug development. CMC topics will include regulatory starting materials, API and drug product processes, formulation development, supply chains, and analytical controls. We advocate for and interpret CMC strategy and direct CMC operations and quality assurance oversight, in conjunction with developing CMC submission content that represents the best interests of emerging biotech. It is not intended to be prescriptive advice but rather an interpretation that is right for you. Since 2007, we have provided our partners with innovative strategies and exceptional advice to enhance program development, product approval, and marketing presence.