Welcome to the Squonk Data Manager, a tool for running scientific workflows.
The Squonk Data Manager is the core of the Squonk 2.0 suite of applications. It provides a managed environment for your data and lets you run applications and jobs that work on that data. It is currently focused on virtual screening workflows, but its scope will expand over time.
Data is at the heart of the Squonk Data Manager. Your datasets stay in one place, and applications fire up and work on the data where it lives. This is the "bring the compute to the data" paradigm.
This managed data environment is highly collaborative, allowing data and skills to be shared across a workgroup in a secure manner.
All of this takes place in a Kubernetes cluster, providing high resilience and potential scalability.
These are some of the things you can do:
- Select molecules for virtual screening based on molecular properties or chemical similarity, and generate 3D conformers (see here)
- Perform target-based virtual screening (e.g. docking - see here)
- Perform ligand-based virtual screening (e.g. 3D shape comparison)
- Calculate or predict molecular properties and filter based on those properties
- Run Jupyter Notebooks
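To illustrate the similarity-based selection mentioned above, here is a minimal, self-contained sketch of Tanimoto similarity over fingerprint bit sets. This is not Squonk's actual implementation (its jobs typically rely on cheminformatics toolkits such as RDKit), and the molecule names and fingerprints are invented for the example:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity of two fingerprint bit sets:
    |A & B| / |A | B|, in the range 0.0 to 1.0."""
    if not fp_a and not fp_b:
        return 1.0
    inter = len(fp_a & fp_b)
    return inter / (len(fp_a) + len(fp_b) - inter)

# Hypothetical fingerprints: each molecule is a set of "on" bit indices.
query = {1, 4, 7, 9, 12}
library = {
    "mol_A": {1, 4, 7, 9, 12},   # identical to the query
    "mol_B": {1, 4, 7, 9, 13},   # differs by one bit
    "mol_C": {20, 21, 22},       # no overlap with the query
}

# Keep only molecules with similarity >= 0.5 to the query.
hits = {name: tanimoto(query, fp)
        for name, fp in library.items()
        if tanimoto(query, fp) >= 0.5}
```

A real workflow would compute fingerprints (e.g. Morgan fingerprints) from molecular structures rather than hand-writing bit sets, but the filtering step is essentially this comparison applied across a dataset.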
- Concepts - Concepts that you should understand
- Guided Tour - High level walk through
- How To Guides - Guides for doing various tasks
- Deployed jobs - Jobs that have been deployed here
- About Squonk - Squonk's history and future direction
- Knowledge base - User support and ideas
- Developer docs - Design docs and how to use as a developer
- Administrator docs (TODO)
How to access
The Squonk Data Manager is in the early stages of development. Access depends on which deployment you are using, but access to the public evaluation site is currently by invitation only. Please email us at firstname.lastname@example.org if you would like to evaluate it or want further information.
- Log in
- Create a Project or select an existing one.
- Upload files to your project or attach existing datasets.
- Use the Executions tab to launch a Jupyter notebook or execute jobs that work on those datasets.
- View those executions and their results in the Results tab.