Frequently Asked Questions

What services do you offer?

We work with individual researchers, labs, and organizations including research consortia, diagnostics companies, biotechnology companies, and other life-science-focused organizations. Typical ways we work together are:

Individual researchers and labs:

  • We work with you to get your methods on Truwl, and add resources around those methods to maximize the accessibility and usability of methods for you and others.
  • We work with you to get your data processed using trusted methods. With Truwl you get a web-based input editor, the ability to fork usage examples as a starting point for your analysis, a robust preconfigured execution environment, feedback on your experiments from others, complete records of your analyses that can be cited in papers, and more. We can add workflows to the platform as needed.

Organizations:

  • Research consortia: We can provide access to uniform processing methods to all your consortium members and external researchers. Extend the value of your data and results by allowing others to process their data in a consistent and reviewable manner.
  • Diagnostic companies: You need to choose and optimize the right workflows for your assays and ensure that those assays continue to perform across time, locations, and experimental conditions. Truwl provides access to benchmarking methods and specialized tools to track and view metrics across all your jobs.
  • Biotech companies: Truwl can provide access to uniform processing methods within your organization and to your customers and partners, without vendor lock-in.

Why should I put my methods on Truwl?

Truwl aims to maximize the availability and impact of your methods. It provides a home for your methods that is findable by search engines, lets users share how they're using them, builds community around methods, and aggregates method resources including documentation, containers, workflow language wrappers, and more. Workflows written in WDL can also be executed directly from the platform. Owners of workflows that are runnable on Truwl get access to a Workflow Owner Dashboard that lets them track usage of their workflows. The value of getting your methods on Truwl will increase over time as we build community, add content, and add features.

How do I get my methods on Truwl?

To get your methods on Truwl, make a pull request to the capanno repository on GitHub, which we use to manage public content on the site. Capanno has a companion tool, capanno-utils, for generating, validating, and managing content in the public capanno repository and in private repositories. Please see the documentation in those repositories. Once your method is in capanno and validated, we will import it into Truwl, and if you have a Truwl account, we'll make you the owner of the method on the platform.
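The pull request follows the standard GitHub fork-and-PR pattern. A minimal sketch of the steps is below; the repository path, branch name, and validation step are placeholders and assumptions, so check the capanno and capanno-utils documentation for the exact layout and commands:

```shell
# Fork the capanno repository in the GitHub UI first, then clone
# your fork (replace <your-username> with your GitHub username).
git clone https://github.com/<your-username>/capanno.git
cd capanno
git checkout -b add-my-method

# Add your method's content following the repository's layout, and
# validate it with capanno-utils before opening the pull request
# (see the capanno-utils documentation for the exact invocation).

git add .
git commit -m "Add my-method"
git push origin add-my-method
# Finally, open a pull request against the upstream capanno repository.
```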

What workflow languages do you support?

We support running workflows described in the Workflow Description Language (WDL). For tools, we also display WDL, Common Workflow Language (CWL), Nextflow, and Snakemake wrappers on the tool description pages. These are not executable from Truwl yet, but you can easily copy them from the site to use on your own. We plan to support execution of both tools and workflows described in all four of these workflow languages and are collecting as many as we can.

I want to run workflows. How do I get started?

Truwl provides knowledge and resources for running workflows on your own, and workflows marked with ">>>" can be launched on the cloud directly from the platform.

Running on your own: Workflow pages on Truwl link to the repositories where the workflow files are located, if they are publicly available. You can usually find documentation for running these workflows in those repositories. To get started, you can explore use cases on Truwl and download input specification files that you can adapt to your own needs. Truwl also provides workflow code (WDL, CWL, Snakemake, Nextflow) for tools directly from workflow pages.

Running on Truwl: You can run workflows on Truwl by starting your free trial or registering for a paid account. Once your account is activated:

  • Find a runnable workflow you'd like to try at https://truwl.com/workflows.
  • Click on the '+ experiment' button, or start from a shared example by looking at the "Shared Example" tab and clicking on the 'fork' button. Forking pre-populates your input editor with the inputs used in the shared experiment.
  • Fill in the required parameters. Specify file inputs with a URI. Our system can access publicly available data that has an internet-accessible URI (e.g. gs://bucket-name/mydata.fastq, s3://bucket-name/mydata.bam, https://www.encodeproject.org/files/ENCFF207YHP/@@download/ENCFF207YHP.fastq.gz). If your data needs to remain private, or you want Truwl to provide you with a cloud bucket for storage, you will need a paid account.
  • Hit run.
  • Once a job starts, its inputs are stored, and you can modify them to launch more jobs.
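The downloadable input specification files mentioned above are WDL-style inputs JSON, with file inputs given as URIs. A minimal sketch of what one might look like is below; the workflow name ("myworkflow") and parameter names are hypothetical placeholders, not a real Truwl workflow:

```shell
# Write a minimal WDL-style inputs file; the workflow and parameter
# names here are hypothetical placeholders.
cat > inputs.json <<'EOF'
{
  "myworkflow.fastq": "gs://bucket-name/mydata.fastq",
  "myworkflow.reference_fastq": "https://www.encodeproject.org/files/ENCFF207YHP/@@download/ENCFF207YHP.fastq.gz",
  "myworkflow.threads": 4
}
EOF

# Check that the file is valid JSON before reusing its values.
python3 -m json.tool inputs.json
```

Each key is the workflow name followed by a parameter name, and file parameters take a URI string, as in the bullet list above.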

How much does it cost to run workflows?

The cost to run workflows varies with the size and number of input files (many workflows accept multiple samples) and with the particular workflow, as these affect the amount of cloud resources used. We are working on providing typical per-sample costs directly on the workflow pages. Cloud compute resources are discounted for users with a Truwl subscription plan.

How can I put files in my own bucket so your system can access them?

Truwl can access any file with a public Uniform Resource Identifier (URI). Using private data requires a paid Truwl account. URIs can be pasted directly into Truwl's workflow input editor to specify file inputs. Google Cloud Platform (GCP) and Amazon Web Services (AWS) are the most common choices for creating a public bucket to hold your files.

GCP: Using GCP requires a Google account, such as a Gmail account, that you can also use to log into Google Chrome, etc. With your Google account you can log into the cloud console and create a bucket following the instructions here. Creating a bucket requires billing to be enabled, but there are free options to get started: the free tier includes 5 GB-months of cloud storage, and new customers can also get $300 in credits. Once your bucket is created, you can make it public by navigating to the 'permissions' tab and granting access to 'allUsers'. After putting files in your bucket, you can get file URIs by selecting the file in the console and copying the gsutil URI, which will start with gs://.
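The same console steps can be done from the command line with the gsutil tool that ships with the Google Cloud SDK. A sketch, assuming billing is already enabled and using a placeholder bucket and file name:

```shell
# Create a bucket (bucket names are globally unique; replace my-truwl-data).
gsutil mb gs://my-truwl-data

# Make objects in the bucket publicly readable (grants allUsers read access).
gsutil iam ch allUsers:objectViewer gs://my-truwl-data

# Upload a file, then use its gs:// URI in Truwl's input editor.
gsutil cp mydata.fastq gs://my-truwl-data/
# URI to paste: gs://my-truwl-data/mydata.fastq
```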

AWS: AWS requires an account, which you can create or log into here. Once you log into the console, you can create a bucket following these instructions. AWS has a free tier that includes 5 GB-months of cloud storage, and researchers and students may be eligible to apply for cloud credits. Once files are in your bucket, you can make them publicly accessible by editing permissions on the 'permissions' tab at either the file or bucket level. You can get file URIs by selecting the file in the console and copying the S3 URI, which will start with s3://.
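These steps can also be done with the AWS CLI. A sketch with a placeholder bucket and file name; note that newer buckets block public ACLs by default, in which case you'll need to adjust the bucket's public access settings in the console first, as described above:

```shell
# Create a bucket (replace my-truwl-data with a globally unique name).
aws s3 mb s3://my-truwl-data

# Upload a file.
aws s3 cp mydata.bam s3://my-truwl-data/

# Make that single object publicly readable (file-level permission).
aws s3api put-object-acl --bucket my-truwl-data --key mydata.bam --acl public-read
# URI to paste: s3://my-truwl-data/mydata.bam
```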

Where are workflow jobs executed?

Workflow jobs submitted through Truwl currently run on Google Cloud Platform (GCP). We have avoided writing code that depends on Google-specific functionality, so we can support other cloud providers if requested. Although jobs are executed on GCP, our system can access data stored with other cloud providers, including AWS and Azure. Results of analyses can be transferred to other locations as well.

My data needs to remain private, can I still run workflows on Truwl? Can I still share my usage examples without exposing my data and results?

Yes and yes. Neither your data nor your workflow run details are available to anyone unless you say so. When you share your workflow runs publicly, you choose which input and output objects, if any, you share details about. When you share the inputs and outputs associated with a workflow run, metadata pages are generated so you can provide details about those objects. Whether the actual inputs and outputs are accessible to others is controlled by the permission settings where the data is stored, typically a cloud bucket.

I need to run a lot of samples. Is there an easy way to run large batches?

Our batch job web interface is still in development, but we can assist you in running larger batch jobs. If you programmatically generate your own input definition files, you can also use our drag-and-drop feature to quickly specify a large number of jobs.

Are you hiring?

We list open positions on LinkedIn, jobs.mthightech.org/jobs, https://jobs.twobearcapital.com/, and this site. However, nearly all of our current full-time employees started out as contractors (ranging from 10-40 hours/week) helping with specific projects, then transitioned to employees without the positions ever being posted. If you have experience with software development, bioinformatics, or marketing and business development in the genomics space, reach out, introduce yourself, and let us know you're interested. You might catch us at the right time when we could use your talents.

Where does the name Truwl come from?

Truwl (rhymes with tool) initially came about as a blend of the names Trumble and Owl, a couple of creek and road names in Northwest Montana. We liked that it had "true" in it, as we wanted to help life scientists discover true insights from their data, and the 'wl' pays tribute to the workflow languages that help computational research be more sharable and reproducible. Importantly, the domain name was available, and we had a bit of an ideological stance against paying domain name campers.

How can I follow what Truwl is doing?

There are links to our Twitter account, LinkedIn profile, GitHub project, Slack workspace and a newsletter sign up at the bottom of this page where you can follow/subscribe/watch and otherwise interact with us. You need an invitation to join our Slack workspace, which you can request by contacting us.