Structure- and ligand-based analysis - How to combine the best of two worlds for driving compound design [EVENT HIGHLIGHTS]

How many drug design projects are you currently working on or managing? Are the various tools for compound design well-integrated and simple enough for everyone? How efficiently do you manage to combine data coming from 3D methods with those coming from the 2D side? How easy is it to communicate in a multidisciplinary team to reach the common project goals on time?

Last week I moderated the virtual panel discussion “Structure- and ligand-based analysis - How to combine the best of two worlds for driving compound design?” with the two panelists Malin Lemurell, Executive Director, Head of Medicinal Chemistry at AstraZeneca, and Troy D. Smith, Senior Expert I, Data Science at Novartis Institutes for BioMedical Research (NIBR). The session was hosted by the Boston Area Group for Informatics and Modeling (BAGIM) and attracted over 120 attendees.

The aim of the discussion was to hear our panelists' answers to the questions above, based on their personal experience; to tackle current challenges and future opportunities for leveraging structure- and ligand-based analysis in drug discovery teams; and to identify best practices for driving compound design.

I thought I would highlight some key takeaways and continue the conversation here, so feel free to share your point of view anytime.

Introduction

The discussion started with an image of chemists spending long hours at the bench synthesizing new compounds. Most drug discovery project teams can quickly begin to explore the chemical space by reaching for known compound libraries and already established chemistry. However, to increase the project’s chances of success, you often need to develop more complex chemistry to address the design hypothesis. But this comes at a cost. As both panelists pointed out, the "Making" is often the most time-consuming step in the DMTA (design-make-test-analyze) cycle. You therefore need to be smart about choosing which compounds to make.

Today there are tons of tools to help the project team enumerate, recommend, analyze, and rationally prioritize new compound ideas. Yet it's not always straightforward for drug design experts to integrate and digest all the data coming from different team efforts (ligand-based and structure-based drug design, LBDD and SBDD) and efficiently drive truly collaborative drug design.
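As a taste of what "enumerate" can mean in practice, here is a minimal, purely illustrative RDKit sketch. The reaction SMARTS and building blocks are my own assumptions, not anything mentioned by the panelists, but the pattern of bulk idea generation followed by triage is what these tools automate at scale.

```python
# Minimal enumeration sketch (illustrative only): combine a few acids and
# amines into amide "design ideas" with RDKit.
from rdkit import Chem
from rdkit.Chem import AllChem

# Assumed amide-coupling transform; real projects would use curated reactions
amide_coupling = AllChem.ReactionFromSmarts(
    "[C:1](=[O:2])[OH].[N;H2:3]>>[C:1](=[O:2])[N:3]")

acids = [Chem.MolFromSmiles(s) for s in ["OC(=O)c1ccco1", "OC(=O)C1CC1"]]
amines = [Chem.MolFromSmiles(s) for s in ["Nc1ccccc1", "NCCO"]]

ideas = set()
for acid in acids:
    for amine in amines:
        for products in amide_coupling.RunReactants((acid, amine)):
            product = products[0]
            Chem.SanitizeMol(product)          # make the product a clean RDKit mol
            ideas.add(Chem.MolToSmiles(product))

print(sorted(ideas))  # four enumerated amides, ready for triage and prioritization
```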


Common Playground to Drive Compound Design

With diverse teams, asynchronous progress updates, and scattered data, it can be very tricky to really collaborate in a timely manner. This slows the decision-making process and can put the DMTA cycle at risk of unnecessary delays. Having the right tool can help. The panelists emphasized that building a common playground, a centralized platform that gathers all the data from different sources in real time, is the key to communicating successfully, speeding up decision-making and, at the same time, letting the scientists do actual science.

Moreover, such a platform can efficiently guide the selection of "compounds to make" by combining complementary key data such as drug-like properties and advanced structure-based predictions (e.g. binding affinities from free energy perturbation (FEP) simulations). The federation of tools becomes even more crucial if the molecules are complex and hard to synthesize.
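To make "combining complementary key data" concrete, here is a small hedged sketch: the SMILES, FEP values, column names, and the composite score are all invented for illustration and are not the platform or scoring scheme the panelists described.

```python
# Illustrative sketch: merge ligand-based (2D) properties with structure-based
# (FEP) predictions into one table to help rank "compounds to make".
# All values and the weighting below are made-up assumptions.
import pandas as pd
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

# Ligand-based side: simple drug-likeness metrics computed from SMILES
designs = pd.DataFrame({
    "id": ["idea-001", "idea-002", "idea-003"],
    "smiles": ["CCOc1ccc2nc(S(N)(=O)=O)sc2c1",
               "CC(=O)Nc1ccc(O)cc1",
               "Cc1ccccc1NC(=O)c1ccco1"],
})
designs["mol"] = designs["smiles"].apply(Chem.MolFromSmiles)
designs["mw"] = designs["mol"].apply(Descriptors.MolWt)
designs["clogp"] = designs["mol"].apply(Descriptors.MolLogP)
designs["qed"] = designs["mol"].apply(QED.qed)

# Structure-based side: predicted binding free energies (kcal/mol),
# e.g. exported from an FEP workflow (hypothetical numbers)
fep = pd.DataFrame({"id": ["idea-001", "idea-002", "idea-003"],
                    "fep_dg": [-9.1, -7.4, -8.2]})

# One merged view, one simple composite score: stronger predicted binding
# (more negative dG) and better drug-likeness (higher QED) rank higher
merged = designs.merge(fep, on="id")
merged["score"] = -merged["fep_dg"] + 5.0 * merged["qed"]
print(merged.sort_values("score", ascending=False)
            [["id", "mw", "clogp", "qed", "fep_dg", "score"]])
```

The exact score matters far less than the point from the discussion: having both data types side by side in one place when the team decides what to synthesize next.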

It is very important to know what to make but more importantly – what not to make!
— Troy D. Smith (Novartis)

Easily Accessible Common Playground

If we agree that a centralized platform is an excellent way to save time and cost and drive new compound design, we must make sure people will actually use it. The bench chemist who battles to find the right reaction conditions, or the molecular modeler who strives to produce reliable affinity predictions before the next design cycle, does not want to take time out to learn new software. That is why having just a common playground to gather and interpret data is not entirely enough – it needs to be easy to use for everyone.

One way to achieve this is to integrate existing tools that the scientists are already familiar with and expose them through the common platform. If we allow scientists with different backgrounds and levels of expertise to access all data, everyone can contribute and generate new ideas.

It is a capital loss not to use all the brains in a team by not providing them a way to look at 3D data.
— Malin Lemurell (AstraZeneca)

Highly Collaborative Easily Accessible Common Playground

Another necessary building block to consider when trying to make a universal system is the efficient exchange of information between the scientists themselves. Various existing tools can already capture ideas, but cross-links to structural knowledge, ligand-based studies, and the like are often missing. That’s why the common playground should have additional features for instant information delivery accessible to everyone. It could be a way out of PowerPoint slides and overcrowded email inboxes that are hard to follow – and even harder to coordinate.

The platform then gives the rest of the team complete visibility. Plus, precious time can be saved on reports and short, distracting meetings, keeping the focus on the project goals. One thing that also caught my attention: with the new generation of scientists, if there is no feedback or way to access data right away, new ideas can get lost as the team moves on to the next one.

Share your thoughts

If you were to join this panel discussion, what would you say? Please share your thoughts in my LinkedIn post.