That’s a very exciting feedback loop, and it’s making our models better every day. It’s something we’ve been making good use of at Ginkgo. We did just announce our partnership with Google Cloud this quarter, which is going to enhance our development efforts here at Ginkgo. And as a reminder, this partnership with Google gives Ginkgo scalable compute capacity at attractive prices to train large foundation models, but it also represents a commitment by Google to fund our model development efforts upon completion of certain milestones. We’re already well on our way to building out those models, as we have already achieved our first cash milestone in this deal and expect to earn the second in relatively short order. One way to measure making biology easier to engineer at Ginkgo is by reducing the cost to get to a successful result for our customers.
And that cost is a function of 3 things. First, the cost per unit operation, right? So this is like the various operations happening in our foundry, and we drive that down through investments in increased scale, automation, and miniaturization of liquid handling, so we can use fewer reagents and so on. Second, the number of unit operations that we need per design cycle. In other words, for each round of engineering we do, how much work do we need to do in the foundry. And this one is tricky because it requires judgment. Sometimes you want to run a giant campaign where we try tens of thousands of designs. And that’s the right thing to do, and it’s something that Ginkgo can do uniquely because of our scale. And those early large campaigns can really increase how fast you learn.
But to the extent that AI models increase the quality of those designs, we could reduce the size of those libraries, which would save a lot of money and also mean we get more scale out of that automation, right? If you can use fewer operations per program, you can do more programs on the same infrastructure. So it’s very exciting. Finally, and I think this may be the most important, there’s the number of cycles we need for a project. Again, the ability to do reinforcement learning from prior results, in other words, to take what we learned from something and feed it back into the model, is a key part of why customers are working with us. But in certain areas, we’ve developed so much depth, in other words, we’ve done enough projects like that, that we can exceed a customer spec in just the first design cycle. And this is really critical to the customer, because reducing the number of cycles significantly speeds up programs, and customers, especially in biopharma, often care much more about speed than they do about budget.
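[Editor's note: to make the cost decomposition above concrete, here is a minimal sketch of the three-factor model as described. All names and figures are hypothetical illustrations, not Ginkgo's actual numbers.]

```python
# Hypothetical illustration of the cost decomposition described above:
# total program cost = (cost per unit operation)
#                    x (unit operations per design cycle)
#                    x (number of design cycles)
# All numbers below are invented for illustration only.

def program_cost(cost_per_op: float, ops_per_cycle: int, num_cycles: int) -> float:
    return cost_per_op * ops_per_cycle * num_cycles

# A large, broad campaign: many designs per cycle, several cycles.
baseline = program_cost(cost_per_op=10.0, ops_per_cycle=20_000, num_cycles=3)

# AI-guided designs: a smaller library per cycle, and fewer cycles needed.
ai_guided = program_cost(cost_per_op=10.0, ops_per_cycle=500, num_cycles=1)

print(f"baseline:  ${baseline:,.0f}")   # baseline:  $600,000
print(f"ai-guided: ${ai_guided:,.0f}")  # ai-guided: $5,000
```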
All right. So I want to share a couple of case studies that highlight how we’re seeing those variables move. The first case study is really cool. This is an enzyme engineering program that we started earlier this year. A customer came to us with an enzyme that had been produced for them by another service provider and wasn’t sufficient to meet their need in the market. On the left, you can see the various enzyme designs we tested. That black dot in the middle represents the starting sequence our customer gave us.
Dots closer to that represent protein sequences that are more similar, and the further away you get, the less similar the sequence is to the original. And so this is where the combination of AI and our foundry becomes really powerful. First, because of the foundry, we can afford to test much more broadly than is typical. We see over and over again that minor tweaks to an enzyme are not sufficient to get the kind of big step-change improvements customers want. But adding all that diversity, all that change in the protein, is risky, right? If you change sequences a lot, they tend to have less success. And so because we can screen enzymes so much more efficiently, we can search that much wider space and find that kind of needle in a haystack.
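[Editor's note: as rough intuition for what "closer" means here, a minimal sketch using mutation count (Hamming distance) between equal-length protein sequences. The actual plot likely uses a more sophisticated similarity measure, and the toy sequences below are invented.]

```python
def mutation_count(seq_a: str, seq_b: str) -> int:
    """Number of positions at which two equal-length protein sequences differ."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be the same length")
    return sum(a != b for a, b in zip(seq_a, seq_b))

# Toy sequences for illustration only.
starting = "MKTAYIAKQR"
minor_tweak = "MKTAYIAKQK"   # 1 mutation: a dot near the starting sequence
bigger_jump = "MQTGYIVKHR"   # 4 mutations: a dot further out in sequence space

print(mutation_count(starting, minor_tweak))  # 1
print(mutation_count(starting, bigger_jump))  # 4
```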
Second, our AI/ML models are getting extremely predictive. So ultimately, here, we tested a 500-member library comprising both known enzymes in our codebase and custom-engineered novel enzymes. Each member is represented by a dot on the left. And in the first cycle, we discovered an enzyme that was 21x better than the original from the customer. And the big win here is speed. Yes, we were also able to use a highly efficient workflow with a relatively small library, those 500 members versus the tens of thousands we sometimes test in a cycle. But what really unlocked it was the improved accuracy of our AI/ML models in predicting sequences that would work, far exceeding our customer’s expectations in just the first cycle of design. So that’s really exciting.
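[Editor's note: as a generic sketch of what model-guided library design can look like, not Ginkgo's actual pipeline: score a large pool of candidate sequences with a predictive model and send only the top-ranked subset to the lab. The `predicted_activity` scorer below is a hypothetical stand-in for a trained model.]

```python
import hashlib

def predicted_activity(sequence: str) -> float:
    """Hypothetical stand-in for a trained sequence-to-activity model.

    Returns a deterministic pseudo-score in [0, 1) derived from the sequence.
    """
    digest = hashlib.md5(sequence.encode()).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def select_library(candidates: list[str], library_size: int = 500) -> list[str]:
    """Rank candidates by predicted activity and keep the top library_size."""
    return sorted(candidates, key=predicted_activity, reverse=True)[:library_size]

# In practice the candidate pool would mix known enzymes with novel engineered
# variants; here we fake a pool of toy sequence identifiers.
pool = [f"SEQ{i:05d}" for i in range(50_000)]
library = select_library(pool, library_size=500)
print(len(library))  # 500: a small, model-curated library instead of testing all 50,000
```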