Nautilus Biotechnology, Inc. (NASDAQ:NAUT) Q2 2024 Earnings Call Transcript July 30, 2024
Nautilus Biotechnology, Inc. beats earnings expectations. Reported EPS is $-0.14, expectations were $-0.16.
Operator: Good day, and thank you for standing by, and welcome to the Nautilus Q2 2024 Earnings Conference Call. [Operator Instructions] Please be advised that today’s conference is being recorded. I would now like to hand the conference over to your speaker today, Ji-Yon Yi, Investor Relations. Please go ahead.
Ji-Yon Yi: Thank you. Earlier today, Nautilus released financial results for the quarter ended June 30, 2024. If you haven’t received this news release or if you’d like to be added to the company’s distribution list, please send an e-mail to Investor Relations at nautilus.bio. Joining me today from Nautilus are Sujal Patel, Co-Founder and CEO; Parag Mallick, Co-Founder and Chief Scientist; and Anna Mowry, Chief Financial Officer. Before we begin, I’d like to remind you that management will make statements during this call that are forward-looking within the meaning of the federal securities laws. These statements involve material risks and uncertainties that could cause actual results or events to materially differ from those anticipated.
Additional information regarding these risks and uncertainties appears in the section entitled Forward-Looking Statements in the press release Nautilus issued today. Except as required by law, Nautilus disclaims any intention or obligation to update or revise any financial or product pipeline projections or other forward-looking statements, whether because of new information, future events or otherwise. This conference call contains time-sensitive information and is accurate only as of the live broadcast, July 30, 2024. With that, I’ll turn the call over to Sujal.
Sujal Patel: Thanks, Ji-Yon, and welcome to everyone joining our Q2 2024 earnings call. The update that Parag, Anna and I will be sharing with you today is due to the great work of our teams in the Bay Area, Seattle and San Diego. My thanks go out to them for their continued progress against our key scientific and business objectives. I look forward to the day that their work is in the hands of researchers who will leverage our platform to explore important biological questions once thought unanswerable. The team remains focused on both our development milestones and commercialization goals. Through these and other efforts, we remain motivated by our purpose to revolutionize Proteomics in the name of improving the lives and health of millions of people around the world.
Our entire team is motivated by this goal, fully aligned and committed to doing what it takes to make this a reality. As you’ve heard me say on previous calls, to deliver a range of long-discussed and long-desired improvements in human health, we believe biomedical research needs a dramatic acceleration in target identification and therapeutic development. We believe we are pioneering a fundamentally new approach that holds the potential to overcome the limitations of traditional and peptide-based analysis methods and to unlock the value of the proteome, both in targeted proteoform analysis and broad-scale discovery, something we continue to view as one of the most significant untapped opportunities in biology today. As you’ll hear from Parag in a few moments, KOLs are enthusiastic about the data that we shared at this year’s U.S. HUPO conference.
In fact, several have begun to discuss with us the specific initiatives against which they plan to apply our platform. In Q2, we saw continued progress against core development goals for each of the components of our platform and I look forward to additional progress towards commercial launch in 2025. For a more detailed R&D update, let me now turn the call over to Parag. Parag?
Parag Mallick: Thanks, Sujal. You may remember that during our last call, I reported on the promising data we released at U.S. HUPO late in Q1. Among other things, we shared how, by exploiting the core capability of our platform to iteratively probe individual protein molecules, we were able to measure 32 distinct tau proteoforms from control samples. We also demonstrated the ability to perform measurements of enriched cell lysates. These results are the foundation of future assays which will be accessible to the broader biological community, enabling more detailed investigation into molecular mechanisms of diseases like Alzheimer’s and other tauopathies. In addition, they suggest new frontiers in diagnostics. As proteoforms are not yet widely discussed outside of the proteomic research community, let me take just a moment to define what they are and why they’re important.
The term proteoform was introduced by Lloyd Smith and Neil Kelleher in 2013 “to be used to designate all of the different molecular forms in which the protein product of a single gene can be found, including changes due to genetic variations, alternatively spliced RNA transcripts and post-translational modifications.” We know from examples like signaling molecules, cyclin-dependent kinases, oncogenes, histones, et cetera, that what makes proteoforms an important driver of biological outcome is not just that a protein has a mutation, a splice variant or a post-translational modification. What matters from the perspective of biological relevance is the combination and pattern of those modifications. The result is an exponential increase in the complexity of protein actions that has tremendous potential to alter the behavior of a biological system.
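To make that combinatorial growth concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes a deliberately simplified model in which a protein has k independent sites that each either carry or lack a single modification; the site counts are purely illustrative and are not figures from the call.

```python
# Illustrative arithmetic only: in a simplified model where a protein has k sites
# that can each independently be modified or unmodified, the number of possible
# proteoforms is 2**k. The site counts below are hypothetical, not company data.
for k_sites in (3, 10, 20, 40):
    print(f"{k_sites:2d} modifiable sites -> {2**k_sites:,} possible proteoforms")
```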
Researchers seeking insight into the role that proteoforms may play in, for example, the progression of Alzheimer’s or other neurologic diseases have been limited by a lack of robust and accessible technologies to measure proteoforms at scale. Approaches such as Western blots and digital ELISA assays can only measure one post-translational modification at a time, typically as a bulk measurement averaging across collections of molecules. When there are potentially millions of patterns of modifications across billions of protein molecules in a sample, being able to measure one modification yields very limited biological insight. Other technologies, such as bottom-up shotgun mass spectrometry or, frankly, any peptide-centric technology, are simply unable to measure proteoforms as they cannot know that multiple alterations were present on a given protein molecule.
These methods also cannot measure modifications at low concentrations. In general, measuring protein presence at low concentration is hard; measuring particular variants of proteins that are at even lower concentrations is exponentially harder. But it may be that these low-abundance proteins and proteoforms hold the keys to unlocking new, more effective drugs across a range of indications. To date, the majority of proteoform studies have been performed using top-down mass spectrometry. This technology is the basis of ongoing efforts to build a proteoform atlas. However, though powerful, this technology is extremely complex and unlikely to become broadly accessible to the wider biological community. The limitations in existing technologies have prevented meaningful analysis of what is believed to be an extraordinarily complex interplay of diverse proteoforms.
This gap has inhibited meaningful understanding of disease mechanisms and drug actions. In addition, examples like troponin and prostate-specific antigen (PSA) have shown how proteoforms can serve as powerful biomarkers. Creating a technology to see these proteoform patterns and measure their relationship to one another has the potential to hugely advance biomarker identification, drug discovery and development and precision medicine. We believe that the Nautilus platform holds precisely that potential. The single-molecule capabilities of the Nautilus platform, combined with the system’s dynamic range, sensitivity and ease of use, enable researchers to reveal and leverage extraordinarily valuable proteoform data that has never been available.
In concert with our team’s continued focus on the platform’s broad-scale discovery capabilities, we’re concurrently creating proteoform assays that quantify, at scale, the functional proteoforms present in a sample, initially tissue and cell lysates with blood and CSF to follow, in a way that has not been possible with the bulk analysis methodologies of the past. Since we announced our preliminary proteoform data at U.S. HUPO, we have heightened our focus on proteoform development activities, primarily in response to, as you’ll hear from Sujal in just a moment, an enthusiastic reaction to the data from the research community. Specifically, based on the experimental work done in Q2, we have been able to reproducibly quantify mixtures of proteoforms, improve our assay performance and successfully extract, enrich and detect proteoforms from humanized mouse brain.
We have also demonstrated that those patterns of proteoform abundances can be shifted with biochemical perturbations, such as by kinases and phosphatases. This latest data demonstrates the platform can be applied to important biological questions in relevant biological samples. We are very excited about our progress on this front and look forward to updating the community further at the HUPO World Congress in late October. As I wrap up, I want to emphasize the fact that any advances made to our core platform accrue value to both our targeted proteoform detection capabilities and our broad scale discovery capabilities. Both modes of the platform rely upon a single molecule library preparation, nano-pattern chips supporting super [ph] deposition of that library, iterative probing of individual molecules with fluorescently labeled affinity reagents and machine learning software to infer molecule identities and quantities.
In Q2, in addition to meaningful advances in our proteoform assay capabilities, we continued to make progress against our core and broad-scale development goals. We remain focused on increasing scale, stability and reproducibility across our consumables, assay and platform, and we continue to see meaningful gains along those and related areas. In particular, this quarter saw the successful execution of the largest-scale experiments we’ve performed to date. This progress is in lockstep with advancing the reliability, quality and customer readiness of our instrument and software. With that, I’ll turn the call back to Sujal.
Sujal Patel: Thanks for the update, Parag. I could not agree more with Parag’s enthusiasm for our progress in detecting proteoforms and the substantial impact we could have, initially with tau, on the efficiency and cost effectiveness of biomarker discovery and drug development in Alzheimer’s and other neurodegenerative diseases. This progress represents a perfect example of our platform’s unique ability to enable, and our continued focus on enabling, both targeted proteoform analysis and broad-scale discovery proteomics. That understanding of the platform’s dual value is shared by others. Extensive voice of the customer work done since U.S. HUPO, during which we previewed our latest data, shows enthusiasm for targeted proteoform analysis from both academic researchers and pharma for use in drug targeting and drug discovery efforts.
In fact, one high-profile KOL said as part of our VOC interviews that he believes building a reference database containing millions of proteoforms will transform biological research and health care. We share his and others’ enthusiasm about the potential here and are energized to generate and share additional high-value data. As Parag mentioned, our next significant opportunity to educate the community about the platform and our progression towards commercial availability will occur when Nautilus participates as a top-level sponsor of this year’s HUPO World Congress, October 20 through 24 in Dresden, Germany. As we’ve previously discussed, my management team and I, in fact, the entire Nautilus team, continue to proactively manage our resources to maximize our cash runway while balancing that with investments to drive our scientific progress forward.
As of the end of last quarter, we still hold on our books over half of the cash that we’ve raised in our 7.5 years as a business and at our anticipated 2024 run rate we expect to be resourced through commercial launch. For more on that and other financials, let me hand the call over to Anna. Anna?
Anna Mowry: Thanks, Sujal. Total operating expenses for the second quarter of 2024 were $20.8 million, up $1.8 million compared to the second quarter of 2023 and $0.8 million below last quarter. This 9% increase in operating expenses year-over-year was driven primarily by continued investment in personnel and their activities towards the development of our platform, as well as investment in personnel and services engaged in maturing our business operations. Research and development expenses in the second quarter of 2024 were $12.4 million compared to $11.9 million in the prior year period. General and administrative expenses were $8.4 million in the second quarter of 2024 compared to $7.1 million in the prior year period. Overall, net loss for the second quarter of 2024 was $18.0 million compared to $15.8 million in the prior year period.
Turning to our balance sheet; we ended the quarter with approximately $233 million in cash, cash equivalents and investments, compared to $248 million at the end of last quarter. As our Q2 results show, we continue to tightly manage our spend. Given our operating expenses in the first half of 2024, combined with our spend expectations in the second half, we anticipate our total operating expense growth for the full year to be between 15% and 20%, well below our previous guidance of 25%. Importantly, we remain committed to disciplined cash management and running an efficient organization as we execute our strategy to launch our revolutionary proteoanalysis platform. With that, I’ll turn it back to Sujal.
Sujal Patel: Thanks, Anna. We’re excited about what lies ahead for Nautilus and the difference we believe our platform can make. I’m grateful to our team, our investors, our strategic partners and our research collaborators for joining us on this journey to revolutionize proteomics and empower the scientific community in ways never thought possible. We made good progress in Q2 and look forward to building on those successes as we move through the remainder of 2024 on our way to our expected commercial launch in 2025 and beyond. With that, I’m happy to open the call up for questions. Operator?
Q&A Session
Operator: [Operator Instructions] And our first question comes from Subbu Nambi from Guggenheim Securities.
Unidentified Analyst: This is [Ricky Labidus] [ph] on for Subbu Nambi at Guggenheim. Are you able to provide any further specificity on the launch time line other than calendar year 2025? And if not, when would you be able to provide that insight? Is there a specific milestone that you might be looking to achieve? And then I have a follow-up.
Sujal Patel: Thanks, Ricky. This is Sujal. Maybe I’ll take this one first. So I think that when you look at the core things that are necessary for launch that we’ve been describing for 2025, it’s a continued set of development activities related to bringing all of our platform components together and building the 300 or so reagents that we need, which are the multi-affinity probes and labels that provide the information from each of the molecules on our chip so we can detect exactly what protein it is and which gene and kind of protein it is. We continue to believe that 2025 is still an appropriate time for the launch. We still have a number of development activities that we’re working through.
We still have a significant amount of effort that continues on reagent development, qualification and movement to our platform, and we are still in the process of putting all those things together. And so that’s what I’d say in terms of guidance for launch. You asked the question around what the milestone would be where we’ll provide more specificity. And I think that one of the things that I’ve mentioned for a number of quarters now is that there will be a milestone coming up where, at one of the HUPO conferences or perhaps at another venue, we will bring data that shows the ability to measure, call it, 1,000 proteins or more from cell lysates, meaning from a complex sample. And by the time that we’re able to do that, the vast majority of our technology development will have been completed.
On the reagents, we have more than half of what we need to be able to get to that goal. And because our system is not a system where 5 reagents detect 5 things and 10 detect 10, it’s an exponential curve because of the data science behind how we detect what molecules are on the chip, that’s a point where we’ll have much more specificity in terms of launch timing and in terms of specifications of our products and so forth. And we do anticipate that that milestone will come before our early access period, and then following an early access period of, call it, 6 months or so, you’ll have a product launch. So if you look at all of those things, you could sort of back into the fact that a 2025 launch is probably not in the first half, but it still looks good for us in terms of where we’re at and where we need to get to.
Unidentified Analyst: Great. And then a follow-up on the early access launch. Are there any updates or additional color you could provide on what strategy you’re looking at for that beta testing? And what customers you’d be targeting?
Sujal Patel: Yes. So this is Sujal. When you look at our early access programs, our early access program’s goals are, first and foremost, to give customers who have notoriety in the proteomics world and customers who are proteomics savvy early access to our platform, so that they can generate unique and meaningful biological insight. And that biological insight has 2 goals. One is the value that we get out of it: publications, conference presentations, papers, abstracts, posters. That value really is important to us because it provides the customer evidence that we need for the next stage of the business and for landing the first instrument deals and so forth. The second major activity that we want out of that early access program is really related to signing preorders for the instrument.
And so when you think about those 2 goals, the types of customers that we will have in our early access program are very similar to the types of companies we’re working with today on our collaborations. So it will be pharmaceutical organizations like Genentech, who we’ve been working with as a collaborator for quite some time, and it will be academic and non-profit research organizations, particularly those that are proteomics savvy and the key opinion leaders in the proteomics world, and then as well some diagnostic types of applications.
Operator: And the next question comes from Matt Sykes from Goldman Sachs.
Matt Sykes: Maybe just first on something that I don’t think has been discussed in the last couple of quarters, the bioinformatics platform that’s going to be attached to the Nautilus instrument. Just curious, given how unique and novel the data sets that you’re providing, the proteoforms, are for scientists, could you maybe dig a little bit more into the bioinformatics platform and what that looks like? And have you worked with customers to make sure that they get reports and data that’s easily understandable, just given, again, how novel the information is to them, given the capabilities of the instrument?
Parag Mallick: This is Parag. I’ll take the first crack. It’s a great question. When we think about our bioinformatics platform, we think about it as a couple of layers. The first layer is just at the level of primary data: quality confidence that the experiment ran well. And the metrics on our data are very different than the metrics on a standard mass spectrometry data set, so we’ve had to build a series of metrics to say, yes, this looks great; that’s Layer 1, at the level of primary data. Then there is the next layer up from that, the protein identification and quantification layer. And again, there, you want to provide primary data access, so that people can download simple spreadsheets of protein identities, quantities and false discovery rates.
And then you want to be able to enable them to visualize the data, look at the data within the context of their own data sets; and so that’s Layer 2. Layer 3 is really comparative analysis. And this is where you analyze between different cohorts and case-control studies, responders or non-responders, and that’s really where you start getting into the biology. And then there’s a fourth layer ultimately, which is incredibly powerful, which is the integration of our data with other data. So when we think of the bioinformatics portal, we think across that span. And we’ve done a tremendous amount of voice of customer work in understanding what the gaps are for people, depending upon the type of customer, whether they’re extremely sophisticated and have an existing bioinformatics process, or whether they’re at an earlier stage in their bioinformatics development, or are biological researchers, and there is a set of fairly common analyses that comes out of that.
Things like the ability to do principal component analysis, generate volcano plots or do pathway analysis. And so we’ve heard all of that feedback and are incorporating it. On the proteoform side, this is an entirely new modality, a level of detail that people haven’t ever seen before. And so for those, we’ve actually been developing custom visualizations as well to enable people to look at that incredible detail on individual protein molecules that they’ve never been able to see before. So, any time we develop these things, we do spend effort going in and talking to the customers and saying, hey, how do you feel about this? What else are you looking for? And the feedback has been very consistent and positive.
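For readers less familiar with the "Layer 3" comparative analyses Parag mentions (principal component analysis, volcano plots), here is a minimal, hypothetical sketch in Python. It is not Nautilus's actual portal or pipeline; the cohort sizes, protein counts and thresholds are invented purely to show the shape of such an analysis on a protein quantification matrix.

```python
# Hypothetical example of "Layer 3" comparative analysis on a protein-quant matrix:
# PCA across samples plus volcano-plot inputs (fold change and p-value per protein).
# All numbers are simulated; this is not Nautilus code or data.
import numpy as np
from sklearn.decomposition import PCA
from scipy import stats

rng = np.random.default_rng(0)
n_case, n_ctrl, n_proteins = 10, 10, 1000                    # made-up cohort sizes
case = rng.lognormal(mean=1.0, sigma=0.5, size=(n_case, n_proteins))
ctrl = rng.lognormal(mean=1.0, sigma=0.5, size=(n_ctrl, n_proteins))
case[:, :50] *= 2.0                                          # pretend 50 proteins are up-regulated

X = np.log2(np.vstack([case, ctrl]))                         # log-transform quantities

pca = PCA(n_components=2)
scores = pca.fit_transform(X)                                # per-sample coordinates for a scatter plot
print("variance explained by PC1/PC2:", pca.explained_variance_ratio_)

# Volcano-plot inputs: per-protein log2 fold change and t-test p-value (case vs. control)
log2_fc = X[:n_case].mean(axis=0) - X[n_case:].mean(axis=0)
pvals = stats.ttest_ind(X[:n_case], X[n_case:], axis=0).pvalue
hits = (np.abs(log2_fc) > 1) & (pvals < 0.01)
print(f"{hits.sum()} proteins pass |log2FC| > 1 and p < 0.01")
```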
Matt Sykes: Got it. And then Sujal, just as we look into ’25 and the launch, it’s obviously unclear what the NIH, NSF [ph] budgets may or may not be, but there’s clearly some concern that those budgets could be somewhat compromised next year. I think you’ve stated in the past that you feel like the novelty of the instrument will likely penetrate through different types of budget environments. But just curious how you’re thinking about the potential level of spend and budget for the academic end market next year, the various scenarios under which the NIH budget or the NSF budget could play out, and whether your go-to-market would change at all?
Sujal Patel: Yes, it’s a good question. I think that my comments that I’ve made previously are still relevant today, which is that we are building a technology that is extremely novel and produces a breadth and scale of biological insight that no other instrument is capable of, and it’s extremely valuable data to our customers. So with that as a backdrop, we think that we’ll still be able to push through even if some of the government funding moves down through 2025. That being said, whenever there is some downward pressure on government funding, what you’ll find is that there could be some elongation in sales cycles, or it could be a little more complicated to acquire funds in those organizations that rely on the government.
So that’s typically academic and non-profit research. And I think that the way to overcome that is that in Dx and tools [ph], it’s quite common to provide lots of different on-ramps onto a new technology. One is instrument purchase. One is that a customer may choose to run in a service model for a longer period of time and then switch to an instrument purchase. There are other models such as instrument rentals and instrument lease types of opportunities and consumable prepays and those types of things. We’re not committed to any of those models, but they’re all possible, and if we need some of those on-ramps to help customers get onto our platform, we’re open to those types of things. One of the things that is particularly beneficial to us when we think about those alternatives is that typically, you don’t want to put capital out there and not be able to recoup at least the cost of it very, very quickly.
And because all of our revenue streams, including the instrument, are high gross margin, it gives us a little bit more flexibility, and should we need it in ’25 or ’26 based on where government funding trends, I think we’ll be able to react quickly.
Matt Sykes: Got it. And if I could squeeze one more in for Anna. Just on the total OpEx growth of 15% to 20% you cited versus the previous guide of 25%, what areas are you achieving some level of savings in to modify that guide that you had previously?
Anna Mowry: Matt, I can definitely speak to that. In our original operating expense plan, we had anticipated investments in a targeted way across all areas of the business. On the R&D side, we’ve had a few years of growth there, and so we’ve realized that we have the resources we need and we can limit further growth and just work with what we have. We’ve been prioritizing, reallocating resources from areas of the business to the areas of highest need. We’ve also brought down our cost of reagents in a way that offsets the growth in consumption of those reagents. That’s really what has driven our ability to hold R&D expense growth a little bit lower. On the G&A side, we’ve found savings there as well. And as you know, we hold off on hiring the commercial team until we hit those product milestones. So the combination of those is really what’s behind the reduced OpEx guidance.
Operator: And our next question comes from Tejas Savant from Morgan Stanley.
Unidentified Analyst: This is Yuko [ph]. Would you talk about where you are in development progress for the instrument with respect to the launch target in ’25? Would you say that development of the affinity reagents is the gating factor at this point?
Sujal Patel: Parag, you want to take it?
Parag Mallick: Yes, this is Parag. I’ll take this first. One of the things that I’m very excited about, and I mentioned earlier, was our continued effort to improve the scale and quality of our large-scale experiments. As you know, those experiments bring together a mix of large numbers of affinity reagents, large numbers of cycles, the newest chips and the newest instruments, and the advances that we really focus on are: one, the ability to execute those experiments; two, as Anna mentioned, the costing of those; three, the reliability of each of the components of that system, from the consumables, which include the nanoparticles for protein deposition, the affinity reagents as well as all the buffers in the system, and ultimately the bioinformatics.
And in the last quarter, we’ve seen a tremendous increase in both the scale of those experiments and in their stability, facets like, cycle after cycle, is the chip remaining clean, are the proteins staying immobilized. And the latest data just look beautiful, frankly, in terms of the nonspecific binding background, the removal efficiencies, all of those continuing to improve. And so very exciting progress in development.
Unidentified Analyst: Great. And then a second question for me. Regarding the development cadence to reach a milestone where you are able to measure, let’s say, 1,000 proteins reproducibly, to unlock greater visibility towards that specific development time line, is this something that would happen fairly quickly once you hit a certain point, like 100 proteins measured? Or is it something where development would move in a fairly linear fashion?
Sujal Patel: Yes, I can take this. Do you want to start, Parag, and I’ll take it from there?
Parag Mallick: Well, I’ll just mention that one of the most exciting aspects of the platform really is this exponential nonlinearity in how the number of proteins decoded scales with the number of cycles. And so my expectation would be that there would be a very strongly nonlinear aspect to that. But Sujal, please add some color there.
Sujal Patel: Yes, that’s right. I mean, I think that we’ve talked about this for a number of years. With any traditional platform that uses antibodies to measure proteins, you need 1 or 2 of those antibodies or affinity reagents to be able to detect each of the different proteins in the gene-coding human proteome. Our technology is very different: with only 300 or so multi-affinity probes, as we call them, you’re able to gather all of the information that you need from a particular molecule to, with almost 100% certainty, differentiate it from every other molecule in the human proteome and therefore identify it accurately. In order to make an accurate identification of just about anything from a complex sample, we have to have most of the probes and we have to gather quite a bit of information.
And so once we cross over that point, you’ll cross through 100, 500, 1,000, 2,000 proteins pretty rapidly because it really just has to do with getting more of those reagents on the platform. In terms of cycle count, Parag mentioned earlier that our large-cycle experiments are performing quite well, and so we feel like the assay stability and reliability are there. And so now really, it’s focused on getting the reagents that have the right characteristics onto our platform, to be able to put the entire set together and get, first, those early milestones that I talked about, but then ultimately comprehensive proteome coverage.
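To illustrate why identification capacity scales nonlinearly with the number of probes, rather than one reagent per protein, here is a toy simulation in Python. It is not Nautilus's actual decoding model: it simply assigns each protein a random binary binding fingerprint across a hypothetical probe panel and counts how many proteins become uniquely distinguishable as probes are added.

```python
# Toy model only (not Nautilus's decoding algorithm): each of ~20,000 proteins gets a
# random binary "binds / doesn't bind" fingerprint across a hypothetical probe panel.
# Counting proteins with a unique fingerprint shows the strongly nonlinear scaling:
# a handful of probes identifies almost nothing, while a few dozen identify nearly all.
import numpy as np

rng = np.random.default_rng(1)
n_proteins = 20_000                                        # rough size of the gene-coding proteome
fingerprints = rng.integers(0, 2, size=(n_proteins, 40))   # hypothetical 40-probe panel

for n_probes in (5, 10, 15, 20, 25, 30):
    sub = fingerprints[:, :n_probes]
    _, counts = np.unique(sub, axis=0, return_counts=True)
    identifiable = int((counts == 1).sum())                # proteins whose pattern is unique
    print(f"{n_probes:2d} probes -> {identifiable:6d} proteins with a unique fingerprint")
```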
Operator: And our next question comes from Tycho Peterson from Jefferies.
Tycho Peterson: Maybe you could touch on the publication road map. How important is it that good publications come out of early access? What should we be focused on there?
Parag Mallick: This is Parag. That’s a great question. And we definitely view publications as critical for sharing information with the community and getting them excited. In general, there are a couple of different types of publications that we focus on. The first are ones like our PrISM manuscripts that we shared previously, which really get to how the platform works. We really view those as both core demonstrations of the capabilities of the platform as well as exposition helping the scientific community understand the core components and how they work. That’s the first layer of publication. The second are applications, and these are really things that we would do with our partners to say, “Hey, look, you can learn this kind of biology with our platform,” demonstrating not just the components but the integrated system, how it can be used together and how it performs.
And then the next layer beyond that whole integrated system are even more application studies: hey, I’m asking this biological question and using the Nautilus platform to see something that I wasn’t able to see any other way, and here’s what we learned. And we see that layering, from components to integrated system to biology, as a multilayer stack that brings in different communities, from the early adopters to the middle adopters to the late adopters, and helps drive a cycle of excitement about the platform and what it can provide.
Tycho Peterson: Great. And then you’ve had a number of questions on the tech development. I guess, Matt, obviously asked about informatics, maybe flipping it around upstream. Is there anything on the sample front we should be paying attention to in terms of kind of improvements there?
Parag Mallick: Absolutely. On the sample prep, the first aspects there are really just the simplicity of the workflow, the amount of input material that’s required and the extent of demonstration that we have about whether the sample prep biases the output in any way. And I think we’ve had really great progress in the last quarter on that last one, which has been a question that comes up: hey, there’s a chemical process, there’s a functionalization, there’s an attachment to nanoparticles, does this substantially bias you towards this class of proteins or that class of proteins? And the data coming back very strongly indicate no. That chemistry is very general, which is very exciting.
So that will be data that we’ll share as well at HUPO. And then the other aspect of sample prep is the amount of input material for proteoforms. Again, there is a question of, if we’re doing an enrichment, how much enrichment are we achieving? Are there biases in that enrichment towards or away from particular proteoforms? And so that’s other data that looks really exciting, and we’re excited to share that at HUPO as well.
Tycho Peterson: Okay. And then I want to follow up on the question earlier on funding. And Sujal, you mentioned maybe entertaining leasing, reagent rental, other kinds of business models. I’m just curious how seriously you’re thinking about that and how we should think about your willingness to carry the cost of the capital equipment on your balance sheet if you do move to a kind of reagent rental model?
Sujal Patel: So I would characterize our thinking in that regard as still relatively early. And by relatively early, I mean, for example, we haven’t really floated that with potential customers as a model. I think that with all of those types of strategies, I view them as a bridge to instrument purchase. And so with that, we’re not going to be carrying the cost of the instrumentation on our balance sheet for a whole lot of time. That being said, sometimes there are some special cases: there’s a particular researcher that you want to do business with and you want to use that model for longer. And given that the cost of the instrument for us to manufacture and ship is relatively low compared to the sales price of an instrument deal, which is roughly $1 million,
I think that sort of model intuitively is more doable, but we haven’t done the detailed work yet. And I think that as we get closer to launch and we’re closer to conversations with customers about what the capital acquisition cycle looks like, we’ll be able to think through that in more detail.
Tycho Peterson: Okay. And then one last one. On the diagnostics front, kind of a couple of angles here. Roche is obviously entering the clinical mass spec market. They’re talking about a blood-based test for amyloid pathology in Alzheimer’s. I’m just curious how you think about them in the context of the space, how you think about what needs to happen for the diagnostic market to open up more broadly? Will you guys go down the regulatory path with the box at some point down the road? Can you maybe just talk a little bit about how you think about the diagnostic opportunity evolving?
Sujal Patel: Yes. Why don’t I start and then Parag can add any detail in here. I think that first and foremost, we think that the Nautilus platform is really important for the diagnostic world in really 2 categories. Number one, on the broad-scale proteome side, our platform provides a dynamic range and a sensitivity which enable you to reach much rarer biomarkers in blood. And so if you think about biomarkers that are present in blood at low concentration, these are things that are shed from tissue, potentially from tumors. And so you really have to have a huge dynamic range and single-molecule sensitivity to be able to reach all the way down to the lowest-concentration proteins. And that, I think, is going to unlock new biomarkers that are going to be really interesting.
On the proteoform side, as Parag talked about in his prepared remarks, the ability to detect an entire proteoform is a whole new level of biological insight. Today, the standard analysis that can be done with assays and with mass spec can detect protein modifications, but only a single modification. For example, there is a modification on tau at site 217 with phosphorylation. But what you can’t tell is whether there are 3 phosphorylations at 3 different sites in this population and 2 different sites in that population. That proteoform information, we believe, and many KOLs in the proteomics world believe, will unlock a new type of biomarker and a new class of biomarkers that will be really important to diagnostics. And so we think that from a discovery perspective, finding those biomarkers enables Dx companies to do some really exciting things over the course of the next 2, 3, 4, 5, 10 years.
The question around whether we will enter the clinical space with this product: not initially, for sure. The first use cases of this product for a number of years will be all RUO use cases. And when a customer makes a discovery, we’ll say, hey, great, customer, you found this great biomarker; now go build a high-throughput assay, go get it cleared through the FDA, and we’re going to go on to the next research discovery. But there will quickly come a point where either the dynamic range and sensitivity of our platform or its unique ability to measure proteoforms will become a necessity and the customer won’t have a way to really build an assay that replicates the finding that they made with the [indiscernible] platform. And I think that’s probably the right point for us to start pushing the product through the FDA and moving towards clinical.
I don’t think that’s in the first 4 to 5 years of shipping. I might be surprised but I don’t think it is. I think that those RUO use cases are going to be more than enough to fuel our growth for a number of years.
Operator: [Operator Instructions] Our next question comes from Dan Brennan from TD Cowen.
Dan Brennan: Maybe just back to the timing of the launch. It appears to have slipped a little bit here from mid-’25 to, I guess, back half ’25. Obviously, you guys are tackling a very ambitious goal with very novel single-molecule protein detection, and it’s not surprising things can slip. But just given the series of slippages that you’ve seen from end of ’23 to end of ’24 to mid-’25 and now back half ’25, I’m just wondering, can you address the key factors for the latest delay? And how should investors gain confidence that this won’t continue to slip, let’s say, beyond ’25 into ’26 or even later?
Sujal Patel: Yes. Thanks for the question. This is Sujal. Maybe I’ll take this one first. First and foremost, I wouldn’t characterize my comments here today as being another slip. Our previous guidance from our last earnings call was that we intend to launch in 2025. We didn’t provide any further specificity on that time line, and I think our official guidance continues to be a launch in 2025. But certainly, it is fair to say that it has taken us longer than we would have liked or we thought it would. And certainly, if you went back to when we went public, we thought we would be commercial by now. I think that the nature of bringing something that is truly revolutionary and truly ground-breaking to market is often characterized by a lengthy development period and a lot of hard work, blood, sweat and tears, that goes into building that first product.
And if you look at where we are, as we get close to the end of the year here, it will be 8 years since Parag and I got together to get this company off the ground and start development of the product. It has been a long journey, but we are building something that is truly ground-breaking in a lot of different areas: the ability to immobilize billions of molecules on a flow cell of a chip, the ability to build this very new, novel class of reagents, and the ability to build an instrument and an assay that can cycle those reagents, one after the other, and build up data points on single molecules. Together, that is a massive amount of work, and we continue to make really good progress on that front. Today, there’s an instrument that is able to perform all the cycles that it needs to, to reach our launch targets.
It’s able to perform an assay reliably. The data quality is already sufficient to be able to measure proteoforms. For example, we believe that we should be able to measure, in the near term, 1,000 different proteoforms of tau. And with that capability coming, we do expect, ahead of the proteome launch, to do more engagements on the proteoform side of the fence. And so all those are indications that the platform is coming together and working. Now you asked a very pointed question about how do we know that it’s not going to be ’26 or ’27. And I think the answer to that question is, no one knows. Myself, Parag, [indiscernible], our R&D organization, my management team, we spend a lot of time looking at what our progress is towards our goals in R&D, what the data looks like.
We spend a lot of time pattern matching against the experiences that we’ve had in the past. And all of those things continue to tell us that we are heading in the right direction and we are doing the things that we need to, to get the product out. And on the other side, I will use this as an opportunity to throw in there: Anna and myself and the entire management team, frankly, every person in our company, is very focused on making sure that we run incredibly capital efficiently, so that as the time line has elongated, we’ve also been able to significantly stretch how long our cash lasts from those original projections back in 2023, 2024. And so that’s a long way of saying, I think we’re operating the business well. I think we’re on the right track.
And I continue to have a ton of confidence that we’re making the progress that we need to, to get this product into the hands of biologists all over the world.
Dan Brennan: Terrific. I know you also mentioned during the Q&A and in the prepared remarks that you’re making meaningful advances in Q2 on scale and stability. So could you just quantify that a little bit? I know in the past you’ve talked about probes per cycle and the number of reagents to get to full coverage, kind of where you’re at. Can you just provide updates maybe on those metrics, or whatever metrics you think are relevant, to distill the scale and stability advances that you saw?
Parag Mallick: Sure. This is Parag. So I think we haven’t specifically disclosed the number of probes run, but it can be inferred from the number of cycles. If we go back a couple of quarters, we showed data on chip stability and removal performance and efficiency that was on the order of about 25 cycles. Then at the HUPO after that, we showed about 70 cycles. At the most recent U.S. HUPO, we showed about 100 cycles, and I think now we’re in the 125 to 150 range that we are expecting to show shortly. And that shows, again, both the stability of the platform and removal efficiency, so that after you introduce and probe a reagent, are you kicking it off effectively and getting rid of it so that it’s not hanging around. Same thing with that baseline of nonspecific binding.
We showed that up to about 100 cycles previously, and we’re stretching that by another 25 to 50 cycles. So those are key metrics that get to the quality of the data. One of the other aspects we look at is degradation of signal over the course of n cycles. Several years ago, we would be able to get to about 5 cycles and then the signal would have decayed. Now, when we do these experiments where we introduce defined positive control cassettes at 15-cycle intervals, we’re able to carry those out throughout the entire run. And so those are all really key metrics that show the improvement in stability of the platform over increasing numbers of cycles.
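As a rough illustration of why per-cycle losses matter so much at these cycle counts, here is a small back-of-the-envelope calculation in Python. The retention fractions are hypothetical and are not metrics disclosed on the call; the point is simply that remaining signal compounds as r^n over n cycles.

```python
# Hypothetical back-of-the-envelope: if each probe/removal cycle retains a fraction r
# of the immobilized single-molecule signal, the remaining signal after n cycles is
# r**n. The retention values below are illustrative, not disclosed platform metrics.
for per_cycle_retention in (0.90, 0.99, 0.999):
    for n_cycles in (5, 25, 100, 150):
        remaining = per_cycle_retention ** n_cycles
        print(f"r={per_cycle_retention:.3f}, n={n_cycles:3d}: {remaining:6.1%} signal remaining")
```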
Dan Brennan: Got it. And then maybe a final one for Anna. So Anna, with the reduced burn or the reduced OpEx, I know you mentioned in the prepared remarks something about where the cash gets you through. But can you just provide an update there in terms of timing of how far your cash runway is now with the reduced burn?
Anna Mowry: Dan, thanks for the question. I can speak to that. In the previous guidance which you’re referring to, we said we had cash runway into the second half of 2026. Our reduced OpEx certainly helps us in achieving that target. With that being said, the second half of ’26 is still 2 years away and our commercial build-out is yet to come. So I think, while we have the ability to extend the cash runway further if necessary, we’re not ready at this point to commit to that.
Sujal Patel: Just one final point. I’ll just add that the cash forecast that I gave you includes launching the product, building out a commercial team and starting to get into the revenue ramp before cash out. And those activities of commercialization are expensive. And so, on the question before about whether there was hypothetically any further slip, the runway on cash would elongate in that case because the commercialization build would get pushed out. I just wanted to make sure that I connect all those dots.
Dan Brennan: Got it. No, that makes sense. Thank you.
Operator: And thank you. And I’m showing no further questions. This concludes today’s conference call. Thank you for participating. You may now disconnect.