And the display architecture itself isn't going to change that much, but when we looked at the overall system architecture, we realized there were efficiencies to be gained by adding sensors. If you look at my background, I spent a decade at Analog Devices running their high-performance sensing group. There were lots of ways we could improve that, and also some of the covenants these devices have broken in consumer electronics: size, weight, power consumption, user adaptation. Those are the things gating AR adoption rates right now. So we took that challenge. We came up with what we call the neural display, which is an AI-powered display with a software-defined backplane, which, Glenn, we already have, by the way. Many of our backplanes are already software defined.
So we have this capability, and we think we can solve the problems in the AR/VR marketplace through the sensor fusion activities we're embarking on now. We're developing some partner networks and partnerships to help us get there. More on those later.
Glenn Mattson: Thanks for the additional time. I look forward to hearing more about it as it progresses. Thanks guys.
Michael Murray: You bet. Thanks.
Operator: Your next question comes from the line of Kevin Dede from H. C. Wainwright. Your line is open.
Kevin Dede: Hi Michael, Rich. Thanks for having me on.
Michael Murray: Hi Kevin.
Kevin Dede: Yes, yes. I'd just like to piggyback off Glenn's question, right. I get the neural display, and I appreciate the detail you've offered on it. I understand your prepared remarks noted a patent; maybe you could talk a little bit about that. And given your Analog Devices and sensor experience, and the sensor capability, or software capability, already embedded in the backplane, maybe you can talk a little bit about incorporating sensors, the timeline to prototypes, and when you think you might be able to get stuff into people's hands.
Michael Murray: Great question. Thanks, Kevin. I'll answer that in two ways. Number one, the architecture itself has been patented: we have five patents submitted currently, and there's a sixth patent we'll be submitting shortly. We believe those patents are going to be the foundation the neural display is built on. We've been working with one very large microdisplay company, and we're looking for support from them; we're trying to create an organization with those folks for the consumer market. We're also looking at go-to-market activities in the defense market, with a specific office in the United States Department of Defense, to enable this technology as well.
Next year we're looking at potential funding lines not only from our consumer companies and customers but also from U.S. DOD applications, as well as potentially some congressional money we've applied for. That's where the money is going to come from to support this technology development. And when we see the money come in, we'll see how quickly we can get this into the hands of folks, specifically in the consumer marketplace, where we have tremendous demand currently.
Kevin Dede: Okay, maybe I need to take a different tack on the questioning line, Michael. So please excuse me.
Michael Murray: Sure.
Kevin Dede: Obviously you feel comfortable with the technology development. But how do you think we should look at timelines for getting a display that incorporates your sensors, one you might be able to offer customers as physical evidence?
Michael Murray: Yes, I think we have a pretty targeted approach in that area. I'm going to defer the question on proof-of-concept samples until a few more NDAs get signed.