Swayampakula Ramakanth: Perfect. Thank you very much for taking all my questions.
Jeff Hawkins: Thank you.
Operator: Stand by for our next question. Our next question comes from Kyle Mikson with Canaccord Genuity. Kyle, go ahead with your question.
Kyle Mikson: Yes, hey guys, thanks for the questions. Congrats on the quarter. So Jeff, can you kind of walk through or maybe parse out the components of product revenue? At this point, it’s been a while since the soft launch commenced, so maybe you have some sense of instrument and consumable mix and pull-through, things like that. And then, the service revenue really increased a ton. It’s possible you went through that already, but could you just talk about some of the dynamics we’re seeing with certain product-level segments and with services? Thanks.
Jeff Hawkins: Yes, I’ll start and talk a little bit about the services, and then perhaps, Jeff, you can pick it up and talk about overall revenue and the mix to the extent we break that out. On the services front, Kyle, it’s a mix of a couple of different things. Obviously, we placed some instruments throughout 2023, and some of those instruments are now coming off of their initial warranty and moving into service contracts. So that is one source of service revenue that comes through. The other source of service revenue is that some of our customers are electing to engage us for more advanced training, where we come in and train a large number of operators across their laboratory and give them deeper hands-on training and interaction than they might get on their own by setting the machine up.
So there’s a cost associated with that advanced training. It’s not a significantly large number, but when multiple customers choose to take on that advanced training, it can certainly add to the services line. So those advanced trainings and service contracts are the two primary drivers on the services line. Jeff Keyes, how about you go into the rest of it?
Jeff Keyes: Yes, Kyle, on mix, we don’t break that out in our financials, but what I can say is that it’s primarily driven by instruments. As you would expect in the early commercial days for the company, we’re focused on instrument placements, and also on pull-through for sure, but it’s really about instrument placements and getting those out in the field. So our general mix is a high percentage of instrument placements. And then on pull-through from a kits standpoint, we’re still watching that as we go through the commercialization phase. We have an evolving view of what we think the consumable pull-through will be long term, and there’s going to be a little difference between the academic labs and the commercial customers, biopharma and pharma, that we’re going to watch closely as we go through this ramp process, and then we can potentially provide more information in the future.
Kyle Mikson: Okay, that was great guys, thanks so much. And then going back to you, Jeff Hawkins, on all these kit versions, this new version 3, I guess, is coming out pretty soon. When we think about which kits or consumables are kind of ready for prime time, was version 2 sort of what you’d consider the legitimate first product of the company, with these incremental versions iterating on that? Or when do we see an inflection in terms of what a customer can really do with Platinum, in terms of capabilities and just the robustness of what you can do with sequencing peptides?
Jeff Hawkins: Sure, yes. So I would tell you that the version 2 kit really did represent a pretty significant increase in terms of both the output, sort of how much sequencing information a customer could get from their sample, but also a very meaningful improvement in the reproducibility of those sequencing results, right, the ability to put a sample in, run it across multiple chips or multiple days, and get a very reproducible set of results. So I think that version 2 kit was that first big step-up and improvement over the initial version 1 kit. The way I would think about version 3 is that I wouldn’t view it as incremental, as just a very modest improvement. I still think we are at a stage of our innovation where we’re really seeing great inflection points in terms of the technology evolution, the output, the coverage and performance, broader sample type compatibility, various things improving each time we rev it.
So in general, Kyle, when we version the kit, we’re versioning it when we think the totality of the improvements made is a very meaningful increase in performance, such that customers will be able to do things they haven’t been able to do before. And then, a future version 4 kit would be something similar. We don’t really intend at this point to do very small iterative changes that might be seen as, like, a 2.5 or something. So really think of these changes, V2 to V3 and so on, as unlocking more and more capabilities and application space for customers.
Kyle Mikson: Okay, yes, that was perfect. Let me ask another one before I hop off, on the Platinum analysis software, that was an interesting launch. Jeff, given your background and experience, I’m curious where that stands in the world of the software ecosystem, when you think about life science tools in the research world, things like DRAGEN or BaseSpace, secondary analysis, tertiary analysis. Given you’re dealing with proteomic information, which is super complex, just coming from mass spec, that’s definitely pretty complicated. I guess two questions. Why was the software such a bottleneck or unmet need for customers? And then number two, what’s the future of monetizing data from the platform over time?
Jeff Hawkins: Sure, yes, good question. So obviously in the proteomics space, software, or the analysis of data, has always been a very challenging thing. If you go into a large proteomics core lab that’s running mass spec, they’ll often have some number of bioinformatics staff or data science staff who have created custom pipelines or analysis tools that make it easier for them to analyze data as it comes off the mass spec. That’s great when you’re in the types of institutions that have those resources and capabilities, but that’s not the majority of research labs in the world, right? It’s a small number of labs that have that level of sophistication. So automating data analysis was always a core tenet of the strategy, to ensure that over time we could distribute this technology as broadly as possible.