Nautilus Biotechnology, Inc. (NASDAQ:NAUT) Q4 2024 Earnings Call Transcript February 28, 2025

Operator: Good day, and thank you for standing by. Welcome to the Nautilus Fourth Quarter and Full Year 2024 Earnings Conference Call. At this time, all participants are in a listen-only mode. After the speakers’ presentation, there will be a question-and-answer session. [Operator Instructions] Please be advised that today’s conference is being recorded. I would now like to hand the conference over to your first speaker today, Ji-Yon Yi, Head of Investor Relations.

Ji-Yon Yi: Thank you. Earlier today, Nautilus released financial results for the quarter ended December 31, 2024. If you haven’t received this news release or if you’d like to be added to the company’s distribution list, please send an e-mail to investorrelations@nautilus.bio. Joining me today from Nautilus are Sujal Patel, Co-Founder and CEO; Parag Mallick, Co-Founder and Chief Scientist; and Anna Mowry, Chief Financial Officer. Before we begin, I’d like to remind you that management will make statements during this call that are forward-looking within the meaning of the federal securities laws. These statements involve material risks and uncertainties that could cause actual results or events to materially differ from those anticipated.

Additional information regarding these risks and uncertainties appears in the section entitled Forward-Looking Statements in the press release Nautilus issued today. Except as required by law, Nautilus disclaims any intention or obligation to update or revise any financial or product pipeline projections or other forward-looking statements, whether because of new information, future events or otherwise. This conference call contains time-sensitive information and is accurate only as of the live broadcast on February 27, 2025. With that, I’ll turn the call over to Sujal.

Sujal Patel: Thanks, Ji-Yon, and welcome to everyone. We’ll provide a quick look back at our 2024 progress, update you on our work since the last call and present our fourth quarter 2024 financial results. We’ll then, as always, open the call for questions. As you know, our goal at Nautilus is to enable proteomics researchers to study the entirety of the proteome at a depth and breadth never before possible. And to make the creation, accessibility and use of that higher-resolution, higher-quality data easy enough that it will be practical for every lab everywhere to accelerate scientific research, enable the discovery of new biomarkers and ultimately power the development of new therapies and diagnostic tests that will positively impact human health.

As you saw in this morning’s press release, based on the desire to reduce technical risk and bring to market a product with the greatest possible performance, we now expect that the launch of our proteome analysis platform will occur in late 2026. Parag will provide detail on the rationale for that timeframe in a few moments. Since the achievements of last year serve as a foundation for the work ahead, I want to take a few moments to walk you through some notable recent accomplishments. But before I do, it’s important to remember that we’ll be discussing the status of our overall platform development initiatives and sharing detail on progress against each of the platform’s modalities: broadscale discovery, which aims to comprehensively quantify the proteome, and targeted quantification, which is currently focused on proteoform detection.

While both modalities share the same core platform, each has its own development path. With that said, in 2024, we had a number of key demonstrations and progress with regard to the core platform. Among them, we developed improvements to the scale and quality of our reagent production; an instrument and an assay capable of multi-cycling reagents over many cycles and observing protein binding events at the single molecule level; and software capable of processing the data coming off the instrument and, through proprietary bioinformatics algorithms, turning that multi-cycle data into biological insight. With regards to our pursuit of broadscale decoding, we developed a large number of probes that successfully bind epitopes spanning the human proteome.

We also performed an ultra-deep characterization of a large number of probes to define detailed binding profiles and kinetics. Lastly, we demonstrated via western blot that these probes can bind to and differentiate proteins successfully and that the results strongly correlate to our binding models. We also made progress on our integrated proteoform capabilities. At World HUPO last October, and as Parag reported on our previous call, we shared data on the world’s first quantitative measurement of biological variation in tau proteoforms potentially associated with Alzheimer’s disease. These preliminary findings have spurred substantive conversations with a number of potential partners interested in exploring tau proteoform landscapes at a resolution never before possible.

Armed with the learnings and advances of last year and years prior, we now have greater clarity about what remains to be done to deliver what we believe will be a game-changing product to the market. We’re focused on the good that we anticipate our platform can do and confident in our ability to get there. For a more detailed update on our R&D efforts, let me turn the call over to Parag. Parag?

Parag Mallick: Thanks, and good morning all. As Sujal shared, in Q4 and throughout 2024, we continued to make progress against our core development goals. We remain focused on increasing scale, stability and reproducibility across our consumables, assay and platform and continue to see meaningful gains along each of those dimensions. This progress goes hand-in-hand with advancing the reliability, quality and customer readiness of our instrument and software, along with advancements in our ability to investigate the proteoform landscape of tau. As Sujal mentioned, both our broadscale discovery and targeted proteoform analysis are built upon the same core platform. As such, the movement from platform development towards platform application demonstrated recently for our proteoform analysis also serves as a general validation of our progress developing a fully integrated end-to-end platform. That platform starts with sample in, immobilizes that sample at the single molecule level, robustly interrogates that sample cycle after cycle and then coalesces that data through a data analytic and machine learning pipeline, producing quantitative output that can be a foundation for unlocking biological insight.

At U.S. HUPO earlier this week, we presented several posters and a luncheon seminar, in which we demonstrated progress towards both our broadscale discovery and targeted proteoform capabilities. On the proteoform side, we demonstrated successful development of a high-resolution single molecule tau proteoform assay to quantify the molecular heterogeneity of tau proteoforms; high accuracy and reproducibility with over three orders of magnitude of dynamic range; precise measurements of specific tau isoforms and phosphorylation levels in organoid model systems; and the first-ever measurement of tau proteoform profiles between neuronal model systems and the human brain that could be used to reveal markers of Alzheimer’s disease pathology. These results demonstrate our readiness to engage in significant partnerships to explore the role that tau proteoforms may play in both drug and biomarker development.

On the broadscale side, we discussed the development and characterization of robust multi-affinity probes capable of binding to a variety of proteins; extreme sensitivity into the yoctomole range; the potential for the platform to be applied not just to human samples, but to a diversity of organisms; and a new adaptive decoding algorithm that is able to account for run-to-run variation in probe binding. In meetings with KOLs throughout U.S. HUPO and in interviews with a range of potential future customers over recent weeks, we continue to hear researchers discuss the value of data attributes that go far beyond just the number of measurable proteins. They consistently discuss the quality of data they seek and point to factors such as reproducibility, specificity and accuracy.

We discussed how there is a range of confidences in proteomics data, which vary from proteins identified by essentially a single demultiplexed peak through highly abundant proteins that may be identified by a multiplicity of peptides. We additionally discussed how our approach is substantially different in confidence and quality relative to traditional affinity-based approaches, in which proteins are identified and quantified by one or two affinity reagents versus dozens. One particularly exciting moment for me came in discussions of our proteoform assay, when the researcher declared that our approach was something he had always wanted and, in his opinion, would revolutionize progress in combating neurodegenerative diseases. Moving on to our current R&D priorities.

You’ll recall that last quarter, we reported that we are behind on our internal milestones with respect to our next major broadscale goal: to be capable of quantifying a significant number of proteins, 500, 1,000, 2,000, from a complex sample like cell lysate on the road to measuring the comprehensive proteome. This represents the last piece of validating the broadscale capabilities of our platform. Our unique method of identifying proteins, protein identification by short-epitope mapping, or PrISM for short, involves the development and integration of hundreds of proprietary multi-affinity probes, which interrogate single protein molecules. Over the last three years, we have spent substantial time and energy building and optimizing our affinity reagent pipeline and building and characterizing thousands of probe candidates.

These studies, over Q4 in particular, have given us increased confidence in the probes we have built with regards to their ability to bind to a diversity of epitopes within proteins; their ability to differentiate amongst proteins, a key requirement for decoding; and the predictability of their binding to proteins. One key ingredient in this was the large-scale screening of probes against millions of peptides drawn from the human proteome to define very detailed models of sequence specificity for each probe. We additionally did a significant amount of work on the binding kinetics of these probes and on testing how probes bind to dozens of different proteins through a range of techniques, including western blot and bilayer interferometry. Through that detailed analysis, we can confidently say that our affinity reagent pipeline does indeed produce probes with the characteristics necessary to implement PrISM.

Alongside our extensive probe characterization efforts, we have been doing the hard development work to optimize and increase the robustness of the fluorescent labels used within our platform, the chemistry used to attach probes to these labels, the chip surfaces themselves to maximize specific binding and the buffers used during binding and measurement. We additionally examined how diverse label types and labeling approaches impacted these metrics on a probe-by-probe basis. Internally, we defined criteria for transitioning probe candidates to platform-ready labeled probes. As we entered 2025, many of these probe candidates were not meeting the performance targets desired of platform-ready labeled probes. In an effort to decrease the fallout rate, in Q1, we focused on a number of new development work streams related to our label, labeling approaches, assay buffers and surface chemistry.

The data from those experiments have made clear the need for us to optimize some elements of our surface chemistry and assay conditions in order to achieve better alignment between our probes and our assay in a way that will increase our confidence that a significant number of our existing and to-be-developed labeled probe candidates can become platform-ready. It is clear what work is needed and how that work will translate into a simple and robust assay. However, appropriately testing these optimizations and integrating any subsequent platform modifications will require time not anticipated when the current launch timeframe was established. Thus, this evolutionary work will push back the anticipated timeline on our ability to quantify a significant number of proteins from a complex sample like cell lysate.

While we are disappointed with this delay, we are encouraged by the large data corpus we’ve collected that suggests our probe library is capable of successfully implementing PrISM and thereby unlocking the proteome. With that, I’ll turn the call back to Sujal.

Sujal Patel: Thanks for the update, Parag. Parag just outlined how the learnings of recent quarters have positioned us to pursue a development path with reduced technical risk and that we believe will yield the greatest possible platform performance, but at the cost of time. Based on the efforts required to implement these modifications to our assay configuration, surface chemistry and related platform elements that Parag articulated, we now expect that the launch of our proteome analysis platform, instruments and reagents will occur in late 2026. All along this development path, we envision significant scientific milestones and value creation inflection points for both modalities of our platform, targeted proteoform detection and broadscale discovery proteomics.

Here are a few examples. One, a major goal in the first half of 2025 is to provide leading researchers with access to our platform for tau proteoform-related studies. We firmly believe that 2025 will be the year that researchers begin to apply the platform’s capabilities to ask and answer important questions about the role of tau proteoforms in Alzheimer’s disease. Two, creation and publication of data showcasing the tau proteoform assay performance characteristics such as sensitivity, dynamic range and reproducibility. Three, signing at least one tau-related partnership in the first half of 2025. Four, decoding of an increased number of proteins beginning with predefined mixtures and progressing towards complex samples such as cell lysate.

And five, the sharing of data showcasing the broadscale proteome assay performance characteristics, such as stability, sensitivity, dynamic range and reproducibility. We remain focused on driving our scientific and development efforts forward in the most efficient, most effective ways possible. By making the decision to pursue modifications to our assay configuration, surface chemistry and related platform elements at this time, we believe that we are positioning Nautilus to ultimately make the maximum possible impact on the marketplace and on biological science. This elongated development timeframe necessitated that we reevaluate our operating plan and organizational structure to ensure that we are in the best position to execute against both our broadscale and targeted proteoform goals.

To that end, yesterday, we reduced our headcount by approximately 16% in order to align the resources we need to pursue our development goals with the desire to extend our cash runway. Based on these difficult but necessary changes and with ongoing very tight financial management of the business, we now anticipate that our cash runway will extend through 2027. For more on that and a full report on our finances, let me now hand the call over to Anna. Anna?

Anna Mowry: Thanks, Sujal. Total operating expenses for the fourth quarter of 2024 were $20.0 million, roughly equal to the fourth quarter of 2023 and $0.9 million above last quarter. This flat year-over-year operating expense for Q4 2024 is a result of the focus and ongoing efforts of our team to identify better and more cost-effective ways to achieve our goals. Research and development expenses in the fourth quarter of 2024 were $12.8 million compared to $12.5 million in the prior-year period. General and administrative expenses were $7.2 million in the fourth quarter of 2024 compared to $7.5 million in the prior-year period. Overall, net loss for the fourth quarter of 2024 was $17.6 million compared to $17.0 million in the prior-year period.

For fiscal year 2024, operating expenses were $81.5 million, an increase of $5.3 million or 7% from $76.2 million in the fiscal year 2023. Both research and development expenses and general and administrative expenses also increased by 7% in fiscal year 2024. Net loss for the fiscal year 2024 was $70.8 million compared to $63.7 million in fiscal year 2023, an increase of 11% year-over-year. As Sujal stated previously, we now anticipate the launch of our platform in late 2026. To ensure our cash runway well exceeds this timeline, yesterday, we made the decision to reduce our headcount by approximately 16%, impacting all areas of the business. We expect this will result in limited one-time costs that will be recorded in the first half of 2025.

While these steps will lead to cost savings in the short term, they will also allow us to invest in future business needs within a lower spending envelope. For fiscal year 2025, we anticipate our total operating expenses to be at or below 2024 levels. Turning to our balance sheet, we ended the year with approximately $206 million in cash, cash equivalents and investments compared to $264 million at the end of last year. The efforts we took in 2024 to limit growth in spending, combined with yesterday’s workforce reduction, mean that we now expect our cash runway to extend through 2027. With that, I’ll turn it back to Sujal.

Sujal Patel: Thanks, Anna. Anna’s report clearly demonstrates our total and continued commitment to very tight financial management of this business. We understand what it will take to get Nautilus to commercialization and have developed a culture of rigorous financial discipline that will benefit us both in the short-term and the long-term. We’re excited about what lies ahead for Nautilus and the difference our platform can make in biological science. Our mission to positively impact the health and lives of people around the world remains unchanged and serves as the standard to which we hold ourselves. I’m grateful to our team, our investors, our strategic partners and our research collaborators for joining us on this journey to revolutionize proteomics and empower the scientific community in ways never before thought possible.

We made good progress in 2024 and look forward to building on those successes as we move through development in 2025 on our way to commercial availability next year. With that, I’m happy to open the call up for questions. Operator?

Q&A Session

Operator: Thank you. At this time, we will conduct a question-and-answer session. [Operator Instructions] Our first question comes from Yuko Oku at Morgan Stanley. Your line is open.

Yuko Oku: Good morning, and thank you for taking my questions. Could you further elaborate on your plan to modify the assay configuration and surface chemistry? What are the specific issues you’re currently facing that these changes would address? And with these planned changes, has anything changed in terms of how you’re thinking about initial specs for the platform at launch or your plans to continue to improve those specs of subsequent kits?

Sujal Patel: Good morning, Yuko. This is Sujal. Why don’t I have Parag start with this question, and then I will take the second half.

Parag Mallick: Great. Thank you for the question. The key aspects of the assay involve a couple of different components, all of which are targeted at driving the specific binding of our affinity reagents to proteins that contain an epitope of interest and at differentiating that from non-specific binding to proteins that don’t contain the epitope. Some of the key factors that influence that are, for instance, how those particular probes are labeled with a fluorophore. For example, if those probes are labeled in a way that is slightly suboptimal, you might end up conjugating a fluorophore into the binding region of the antibody and interfering with its ability to bind to its target. In addition, depending upon the surface chemistry, it’s possible that as you add fluorescent moieties, you might drive towards non-specific binding.

And so, those are the kinds of separations that we’re working to enhance. And many different small factors can influence those assay configurations, such as how the surface is passivated, what the actual chemical structure of the fluorescent label is and how it is attached to the probe of interest.

Sujal Patel: Thanks, Parag. This is Sujal. Let me just take the second half of your question, Yuko, which is related to specifications. And I think the key thing here that I want to point out is — or there’s just two key pieces, right? One is the assay configuration change that Parag is discussing is really meant to allow us to get the large number of probe candidates that we have built and that we are building to have a higher yield where they function well on our platform and enable us to get the type of information that we need to decode the complete proteome. And so, when we say in the prepared remarks that this is an approach that has less technical risk and allows us to optimize our performance, that’s what we really mean, which is we’re trying to get a much higher yield out of the probes that are developed already and the probes we’re developing so that we can deliver a high specification in terms of coverage of the proteome.

Now, on other parts of our specifications, things like dynamic range, sensitivity and the reliability of the instrument, I think the additional time that it’s taken us to develop our first instrument and reagents for the full commercial launch of our proteome product gives us more time for those other areas to bake. And so, we anticipate that those will be closer to launch spec, at launch spec or exceeding launch spec by the time that we get out late in 2026.

Yuko Oku: Great. That was helpful color. Thank you for that. And just a related question. Do the planned changes to the assay configuration or the surface chemistry change how you’re thinking about the cost structure of the platform or consumables? And is that $1 million bundle pricing still the right way to think about the price of the platform?

Sujal Patel: Yeah, that’s a great question. In terms of what the changes that we are developing now do to our cost structure, they have no negative impact and may even have some positive impact, particularly on the consumable side in terms of cost. And with that, we do anticipate that our pricing is roughly correct based on the previous guidance that we’ve given you, which is that we expect that an instrument deal, which includes the instruments, the software, the services and support, kind of the initial deal to get you going is roughly $1 million and sample costs will vary based on the configuration of the product and what you’re looking for, but could start at a few thousand dollars per sample and then decline over time. And we think that those price points based on continued conversations with customers are the right price points given the differential data that our platform produces and the quality of the data.

Yuko Oku: Okay. Thank you.

Sujal Patel: Yes.

Operator: Our next question comes from Subbu Nambi at Guggenheim Securities.

Subbu Nambi: Hey, guys, thank you for taking my question. Parag, this is for you. I’m confused a little bit. Shouldn’t surface chemistry be uniform for all proteins? And if tau worked so well, why does that require optimization for different proteins? My understanding was you use the same surface chemistry. That’s one. And then, why should labeling of fluorophore to the Fc region of antibody require optimization? Isn’t that pretty standard as well? Hey, guys. Can you hear me okay?

Sujal Patel: We can. Parag, are you on mute?

Parag Mallick: I apologize. I was only able to hear the second part of your question. Could you please repeat the first part of your question?

Subbu Nambi: Absolutely. So, I’m confused a little bit, shouldn’t surface chemistry be uniform for all proteins? And if you were able to attach tau to a surface, shouldn’t we assume that it should be the same chemistry for all different proteins to attach on the slide — on the chip? The second is, labeling of fluorophore to Fc region of antibody is pretty standardized. And how does that require optimization?

Parag Mallick: Sure. So, with regards to the surface chemistry and the passivation thereof, we are not talking about the immobilization of the proteins via the nanoparticles to the surface. You’re absolutely correct that that is identical between any assay and speaks to how we immobilize proteins from the sample onto the chip. On the other hand, depending upon the labeling strategy, the number of cycles and the buffers, there are interplays between the fluorescent moieties that may be used to label the probes and their interaction with the surface. Different buffers may lead to increases in non-specific binding to the surface or to other targets. Likewise, even factors like temperature and time of measurement can play into that differentiation between specific and non-specific binding.

And with regards to fluorescence labeling, you’re absolutely correct that fluorescence labeling in general is a very well-established method, and there are a number of different conjugation chemistries for labeling of antibodies. Within our system, one of our key considerations is that we want to be able to perform the measurement repeatedly, and we’ve shown hundreds of cycles of repeated measurement. And so, maintaining that balance of specific binding cycle after cycle is something that we really have optimized tremendously, and we believe further advancements in our configuration will allow for greater differentiation for a wide number of our probes. And really, this is about aligning the probe characteristics to the assay configuration.

Subbu Nambi: Thank you for that, Parag. Each protein is quirky, right? So, how are you confident that whatever optimization you do is going to be applicable on a broadscale in terms of specificity?

Parag Mallick: Absolutely. So, while each protein is quirky, and internally in the building we think of them as essentially their own beautiful snowflake, the optimizations are really about the interaction between a labeled probe and a protein. And at that point, that’s really driven by very fundamental physics of binding, where if you increase the concentration, you increase the extent of binding, and if you increase the time prior to measurement, you decrease the amount bound. And so, those fundamental kinetics of the system are at play, and those apply across proteins. Those are just general principles of binding.

Subbu Nambi: Okay. Thank you, guys.

Parag Mallick: And we see that actively in the platform.

Subbu Nambi: Okay.

Operator: Our next question comes from Dan Brennan at TD Cowen.

Dan Brennan: Great. Thank you. Could you just review — I know you did in the prepared remarks, just kind of what are the key milestones and timing over, say, ’25 and maybe into ’26? Are there two or three checkpoints that the market will see, whether the customers or investors that we could kind of get a further update if you’re meeting your expectations? Or will it just come at some point in early ’26? It’s either going to be — you’ll kind of reveal and then we’ll get a sense if you’re on track or not?

Sujal Patel: Yeah, Dan, thanks for the question. Let me try to answer the question in two different directions for you. Remember, the core platform has two different modalities. One is a mode where you take a deep dive in a single protein or a small number of proteins, and that today is really focused on proteoform detection. And then, there’s another modality where we’re looking for what we call broadscale discovery proteomics, which is getting all of the gene encoded proteins that you have within the sample. And each of those modalities has different catalysts that we think are coming up here over the course of the next, call it, four to six quarters. I won’t assign individual timelines necessarily to all the pieces of it.

So, let’s start with the tau proteoform side first. The first proteoform that we really have a great deal of interest in is tau, which is a key biological marker implicated in diseases like Alzheimer’s disease. And as we move through the first half of 2025, we expect to provide the platform’s capabilities to researchers to do tau proteoform-related studies. And Parag talked a little bit about some of the data that we’ve produced over the course of the last few months in his prepared remarks. And we will update our investor deck here over the course of the next day, and there will continue to be more information in there on what we’re doing on the tau proteoform front. As we continue on the tau proteoform side, we expect to also, through the year, continue to show more data and publish more data related to our performance characteristics: sensitivity, dynamic range, reproducibility and so forth.

And we expect that in the first half of 2025, we’ll also sign our first tau-related partnership, so stay tuned for more on that front. So, on the tau front, I think that’s kind of what you should look for in the near and medium term. Let me shift my attention to broadscale. On the broadscale side, I think that one of the things that you’ve heard us say is that the big milestone on the broadscale side is when we can decode a significant number of proteins out of cell lysate. There are going to be a lot: 500, 1,000, 2,000 proteins. And by the time we get to that point, all of the platform pieces have come together, and all of the assay performance that’s required for decoding is there. And we will, at that point, have a very firm grasp on the timeline remaining and final specifications and so forth.

And I think that will be a big update for our investors and our analysts. On the road to being able to do that, once we are able to move through this assay configuration change and surface chemistry change, you’ll see us have some intermediate milestones, such as decoding predefined mixtures of proteins as we progress towards cell lysate as an example of a complex sample, and then ultimately to that 500, 1,000, 2,000 protein milestone. And so, as we start moving through those predefined mixtures, we’ll bring the scientific community and our investor community along so that there are some interim checkpoints before we get to that big data readout. And as we move through the year, once we make our assay configuration change, I expect we’ll continue to share data, particularly at scientific conferences, related to our broadscale capabilities: stability, sensitivity, reproducibility, dynamic range and coverage in our assay.

Dan Brennan: Great. Maybe just one on tau then, since you’re going to be engaging with customers now in the first half and planning a partnership. From what you’ve produced so far — I think you said you’re going to provide some details, obviously, on sensitivity and specificity, the key measurement tools. There’s a number of players emerging in the market, and there’s a lot of specs out there. Just kind of how do you think your performance would compare to some of the other leading tau protein platforms?

Parag Mallick: Maybe I’ll take this one. I think one of the key and most important differentiators of our platform relative to everything else out in the world is that we are the only commercial platform that can measure proteoforms at high throughput and high sensitivity from complex samples. So that aspect of being able to comment on the combination of isoforms plus, potentially, triple phosphorylation of tau or quadruple phosphorylation of tau at sites A, B, C and D, or A, B, C and E, is a unique capability of our platform and something that our customers are extremely excited about, because it allows you to reveal the order and timing of events on the way to Alzheimer’s. It allows you to unveil substructure and subtypes that are potentially indicative of response to one therapeutic versus another, and also potentially allows you to define differences between patients who have aggressive, rapidly progressing disease versus those who do not.

So, it’s really the resolution of the measurement that is incredibly unique. With regards to other specifications, one of the things that we’ve been looking at is the dynamic range within the measurement of an individual proteoform. And keep in mind, there are two different measures of dynamic range. One is the across-analyte dynamic range, and the other is the within-analyte dynamic range. Typically, for instance, in TMT assays and mass spectrometry, you have what’s called range compression, and so your within-analyte dynamic range is typically below 1 order of magnitude. We’ve demonstrated within-analyte dynamic range of upwards of 3 orders of magnitude. With regards to reproducibility, we’ve shown CVs well below 20%.

And then, one other common factor that you’ll see people look at is just what are the range of analytes that you’re able to look at. And that’s where, as I highlighted, we’re able to access analytes that are simply inaccessible to other platforms.

Dan Brennan: Great. Thank you.

Operator: Our next question comes from Matt Sykes at Goldman Sachs.

Will Ortmayer: Good morning. This is Will Ortmayer on for Matt Sykes. Thanks for taking our questions. I appreciate the commentary around the probe optimization taking some time and pushing out the launch date. But just want to clarify, is that late 2026 launch for both the broadscale discovery and the more targeted platform, or are those timelines different?

Sujal Patel: Good morning, Will. Let me try to take that one. Both of our modalities are heading to the market with different strategies. Let’s take the broadscale side first. On the broadscale side, we are moving towards a model where, late in 2026, we have our commercial launch. And from that point forward, we’re largely selling instruments, consumables, and software, services and support. That’s the business model going forward. We’ll also provide some services capability after that point as an on-ramp to buying instruments, and burst capacity and that sort of thing. On the proteoform side, because the proteoform data coming off of our platform is a type of data and a level of detail that you can’t get with other assays, we’ve chosen not to just productize it as a service or to sell an instrument that does proteoform assays. Instead, we have focused on partnering with organizations who are looking for this level of detail and jointly exploring the space, with tau initially, and understanding what the implications are biologically.

How does it inform therapeutic development? Are there potential diagnostic applications? And so, on the proteoform front, we are today talking to a number of organizations about partnerships on the tau proteoform. Those analyses will be done in our facility, and we will work with our customers, return results to them, and then work with them on the next phases of their projects. And then, certainly over the course of 2025 and 2026, we will also add additional biomarkers, some driven by customer conversations and some driven by our own research and desires for that product. For the foreseeable future, those proteoform capabilities are capabilities that we’re going to introduce to customers via partnerships and collaboration.

Will Ortmayer: Got it. That’s super helpful. Thank you. And then, just following up on that, you mentioned giving researchers access to proteoform data in the first half of ’25. But I just wanted to see if the launch date impacts your expectations for the early access program, maybe on the broadscale side and in relation to instrument placements as well. Thank you so much.

Sujal Patel: Yeah. So, just to define it for those who may not remember, the early access period is approximately six to nine months prior to the commercial launch of our instrument on the broadscale side. It is an opportunity for customers to see the data that our platform produces with their own samples. It’s a model where potential customers will send us samples, and we’ll analyze them using the preproduction broadscale capabilities. The goal of those engagements is, one, to generate data and excitement and have a set of data that we can leverage as we move towards launch, but even more importantly, to get customers excited enough to want to place orders for the instrument once we get to that commercial launch. And so, you should think about commercial launch late 2026 and the early access period starting some six to nine months before that.

Will Ortmayer: Great. Thank you so much.

Operator: I’m showing no further questions at this time. Thank you for your participation in today’s conference. This does conclude the program. You may now disconnect.