Andrew Cutler: No, I think you’ve said it well. It’s in line with what I said earlier. We’re really trying to make sure we have the right patients, patients who legitimately have the illness, and not patients with mild depression or patients who are not legitimate candidates. So we’re really trying to be careful about selecting the right patients.
Sergio Traversa: Maged, I think the second one is for you.
Maged Shenouda: Sure. Yes. So hi, Uy. Thanks for the question. So, G&A expenses should follow the pattern we had in 2023 on a quarterly basis. A lot depends on enrollment patterns, but our current expectation is that R&D should pick up a little bit in the first quarter, increase again in the second quarter, and then stay at that level through the third and fourth quarters, as you see enrollment pick up in 302 and then in 304 as well. So I hope that helps.
Uy Ear: Very helpful. Thank you.
Maged Shenouda: Sure. My pleasure.
Operator: Thank you. And your next question comes from the line of Andrew Tsai from Jefferies. Please go ahead.
Andrew Tsai: Hi, good afternoon. Thanks for taking my questions. So first, I noticed in your prepared remarks you said you’re monitoring sites in real time and making changes accordingly. So what exactly are you monitoring for, and what kinds of changes are you making on a day-to-day basis? And then secondly, are there any learnings or thoughts that you might have on Sage’s recent rejection for their MDD study, and sorry, not the study, but the approval? And is there any read-through or any lessons learned that you think you could apply to REL-1017? Thanks.
Sergio Traversa: Thank you, Andrew, and thanks for the call. I will ask you afterward to repeat the second question, because I didn’t get it 100%. But on the first question, I think Andrew can go into a lot more detail, since he has run his site for 30 years and has done many, many CNS clinical trials. But there is no magic, right? In general, you monitor every three, four patients, four, five patients enrolled by the site, and look at how the blinded data look. Of course, the data are blinded, so you don’t know if they are good. But you can definitely get a good understanding of whether something is wrong, right? When the week-over-week variability over the first four weeks is one week up, one week down, up and down, that is usually not how placebo and the drug behave; there is a trend.
There is a trend, so that’s one signal. And then there is the overall quality of the site, right? How well the data are documented, how they enter the data into the database. So there is no one single factor; it is a combination that can give you some sense of whether the site is providing the kind of service we would like. Andrew, I mean, you have done this for a long time. Do you want to add anything?
Andrew Cutler: Yes. Yes, there are various quality indicators you look for, and I think we’re watching, minding the store, much more closely here. You look for things like: are the rating scales consistent? Are they all moving in the same direction? You look for adherence to the protocol and what we call protocol violations, which indicate sloppiness. This time, we’re being careful not to let any sites, as Sergio said, just kind of be off to the races and recruit too many patients or too fast. So there are a variety of quality indicators and consistency checks you look for, as Sergio said, and we’re watching those. And then if there’s a site that has issues, we’re actually stopping their enrollment, trying to figure out what’s going on, and deciding whether we want to continue with them or not.
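To make the blinded-data check Sergio and Andrew describe a little more concrete, here is a minimal, hypothetical sketch of one way such a screen could work. The site names, weekly mean scores, and flip threshold are illustrative assumptions, not anything disclosed on the call; the idea is simply to flag sites whose pooled, blinded trajectories zig-zag week to week rather than trend.

```python
# Hypothetical blinded-data screen: flag sites whose pooled weekly mean scores
# reverse direction repeatedly, since placebo and drug typically trend rather
# than zig-zag. All names, values, and thresholds below are illustrative.
from typing import Dict, List

def direction_flips(weekly_means: List[float]) -> int:
    """Count week-over-week reversals in a blinded mean score trajectory."""
    deltas = [b - a for a, b in zip(weekly_means, weekly_means[1:])]
    return sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)

def flag_erratic_sites(site_trajectories: Dict[str, List[float]],
                       max_flips: int = 1) -> List[str]:
    """Return sites whose trajectories reverse direction more than max_flips times."""
    return [site for site, trajectory in site_trajectories.items()
            if direction_flips(trajectory) > max_flips]

# Made-up blinded weekly means over the first four weeks
sites = {
    "site_A": [30.0, 27.5, 25.0, 23.0],  # steady downward trend
    "site_B": [30.0, 24.0, 29.0, 22.0],  # up-and-down pattern worth a closer look
}
print(flag_erratic_sites(sites))  # ['site_B']
```

In practice a flag like this would only trigger a closer manual review of the site, alongside the protocol-adherence and data-quality checks mentioned above.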
Sergio Traversa: Hope that answers your question, Andrew. And if you don’t mind repeating the second one, because I didn’t get it.
Andrew Tsai: Oh, perfect. Sage recently had their NDA rejected for MDD. And I’m just curious if the reason behind that rejection has any bearing on, or read-through to, your esmethadone, basically?
Andrew Cutler: Yes. Sergio, maybe I could help.
Sergio Traversa: Yes, go ahead.
Andrew Cutler: Because I was very involved with that. It’s really apples and oranges. Their paradigm was very different: it was a two-week treatment paradigm, with a very different mechanism. And really the problem was they didn’t have a good story for how two weeks of treatment would hold a charge. In their Phase 2 study, there was a suggestion that the efficacy continued beyond the two weeks; however, that was not well replicated in Phase 3, so the FDA had concerns about it. It’s a very different paradigm, a very different medicine. I don’t see it as a competition, as an issue, or anything that would influence what we’re doing.
Andrew Tsai: Makes sense. Okay. Thank you very much.
Sergio Traversa: Thank you, Andrews, both of you.
Operator: Thank you once again [Operator Instructions] And your next question comes from the line of Andrea Tan from Goldman Sachs. Please go ahead.
Andrea Tan: Good afternoon. Thanks for taking our questions. Sergio or Andrew, just curious if you’re able to share what the RELIANCE II and RELIGHT studies are powered to detect, and remind us what you’re assuming here for placebo response?
Sergio Traversa: Yes, great question. Thank you, Andrea. Well, we haven’t filed the final statistical plan with the FDA. You usually do that at the very end; there is no advantage to doing it earlier. But a fair assumption is that you want to detect a clinically meaningful effect, which, according to the experts in adjunctive treatment, is about 2.5 points. So the trial is designed to detect that kind of change from placebo, and that would be the minimum, right? We hope we can do better than that, based on the Phase 2 data, but that’s a fair assumption for the statistical plan. And with 300 patients enrolled, right, it is feasible, it’s realistic.
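For context on what being powered to detect a 2.5-point difference with roughly 300 patients implies, here is a rough, hypothetical power calculation. The standard deviation of the score change and the 1:1 allocation are assumptions chosen for illustration, not figures from the study's statistical plan.

```python
# Rough power sketch for a two-arm trial: ~300 patients assumed split 1:1,
# a 2.5-point active-vs-placebo difference on the primary scale, and an
# assumed standard deviation of ~8 points (illustrative, not disclosed).
from statsmodels.stats.power import TTestIndPower

assumed_sd = 8.0          # illustrative SD of the change score
target_difference = 2.5   # clinically meaningful difference cited on the call
n_per_arm = 150           # ~300 patients total, assuming a 1:1 split

effect_size = target_difference / assumed_sd  # Cohen's d ~ 0.31
power = TTestIndPower().power(effect_size=effect_size,
                              nobs1=n_per_arm,
                              alpha=0.05,
                              ratio=1.0,
                              alternative="two-sided")
print(f"Approximate power: {power:.0%}")  # roughly 75-80% under these assumptions
```

A smaller assumed standard deviation, or a true effect larger than 2.5 points as hoped based on the Phase 2 data, would push the estimated power higher under the same sample size.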