Sanjeev Aggarwal: Nick, are you talking about the 12 to 18 months of qualification time? Or did you mean something else?
Nick Doyle: Right. Yes.
Sanjeev Aggarwal: Yes. So the qualification time for that new STT product that we brought out is still 12 to 18 months, and we expect early production to be in late 2024 or early 2025. So nothing has changed or been delayed in that process.
Anuj Aggarwal: Yes. I think the way I would explain the pipeline is that if you look at the design wins, they're still healthy and they continue to gain traction, right? I think that's part of where you're going with this. So there is a strong pipeline, and we are seeing backlog outside of the lead time, just not as much as before.
Nick Doyle: Thanks for the clarification.
Operator: Thank you. And we do have a question from another line. One moment, please. I have a question from Orin Hirschman with AIG. Please proceed.
Orin Hirschman: Let's see, just one more clarification question on that last topic. In terms of the guidance that you put out for Q4, does that assume product increases sequentially? Without being more specific than you want to be.
Anuj Aggarwal: Yes. Hi, Orin, this is Anuj. I would say we're looking at guidance for Q4 as relatively flat to Q3, with a similar mix.
Orin Hirschman: Okay. And you never mentioned AI before. Where does the product fit in the overall scheme of AI? Is it only if it becomes an actual piece of IP in the FPGA? Can you go through that a little bit more? And are you mentioning it because there are customers that have interest?
Sanjeev Aggarwal: Yes. Orin, this is Sanjeev. The requirements for AI inference engines are very similar to the requirements for the solution that we developed for the FPGA market, for the instant-on FPGAs: namely, you want it to have low or zero standby current, you want it to be non-volatile, and you need it to be extremely fast, so that you're comparing an image, for example, with something that the GPU processed and brought to the edge for comparison. So this solution basically applies to both. And we've had some early discussions with some of the R&D folks looking at AI solutions, and there seems to be good compatibility. Again, it's only in the early stages, but it's something that we are hoping to focus on over the next year to year and a half as a new focus for the company.
Orin Hirschman: Is there enough density in these parts to be able to do anything practical on the AI side? Do they have to be chained together? And how would it work?
Sanjeev Aggarwal: So it's mostly targeted towards edge AI, Orin, where they don't require very high density. It's not for the servers, where they would require gigabits and much higher densities. But for edge AI, we have plenty of density. I mean, we meet the density requirements for the edge AI applications.
Orin Hirschman: Okay. And just one more follow-up. You mentioned the second instant-on FPGA development. What's going on with the first one? And what is the timing lag on both of these?
Sanjeev Aggarwal: So both programs are active as of Q4 of 2023. The original one with QuickLogic and SkyWater is progressing just fine; we are making progress on the deliverables and continuing forward with that. And then this is a new program that QuickLogic has been awarded from the US government. It actually uses CMOS from a different vendor, but it has similar requirements as far as the FPGA solutions are concerned. So these two programs are going to run in parallel, at least for the next couple of quarters.