PathoFact issues - https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/1 - Missing file (Susheel Busi, 2020-02-03)
Hi,
Thanks for a great tool. I tried running it this morning, but a file seems to be missing! See error message below:
`cannot access scripts/VirSorter/wrapper_phage_contigs_sorter_iPlant.pl: No such file or directory`
-Susheel
Assignee: Laura Denies

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/2 - PathoFact for metagenome assembled genomes (MAGs) (Christiane Hassenrück, 2020-04-14)
Hi @laura.denies
I am quite new to the analysis of virulence factors and antimicrobial resistance genes, and I have been looking for a suitable tool to search for these genes also in MAGs. Your tool looks very promising. If I understood your approach correctly, PathoFact takes as input the assembly (i.e. contigs) and the predicted ORF before the binning step. Would it also be possible to run PathoFact on the binned genomes (MAGs)? Would the performance of the tool be reduced by having less complex input (i.e. MAGs instead of the full metagenome assembly)?
Thanks a lot for your advice!
Kind regards,
Christiane

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/3 - DeepARG -- git lfs issue (Valentina Galata <valentina.galata@uni.lu>, 2021-01-12)
The issue is described [here](https://bitbucket.org/gusphdproj/deeparg-ss/issues/1/git-lfs-erros-when-cloning-the-repository).
It seems to have no effect on tool execution.
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/4 - IMPiris: new conda dependencies (Valentina Galata, 2020-04-17)
Create `conda` YAML files for:
* [x] `hmmer`, version `3.2.1`, [conda](https://anaconda.org/bioconda/hmmer)
* [x] `plasflow`, version `1.1.0`, [conda](https://anaconda.org/smaegol/plasflow)
* [x] `virsorter`, version `1.0.5`, [conda](https://anaconda.org/bioconda/virsorter)
* [x] dep.s for `deeparg` using `pip`: [readme](https://bitbucket.org/gusphdproj/deeparg-ss/src/master/), [nolearn dep.s](https://raw.githubusercontent.com/dnouri/nolearn/0.6.0/requirements.txt)
- numpy==1.10.4
- scipy==0.16.1
- Theano==0.8
- -e git+https://github.com/Lasagne/Lasagne.git@8f4f9b2#egg=Lasagne==0.2.git
- joblib==0.9.3
- scikit-learn==0.17
- tabulate==0.7.5
- nolearn==0.6.0
- tqdm
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/5 - IMPiris: Add workflow option to config (Valentina Galata, 2020-04-17)
Add `workflow` to `config.yaml` and change/replace the variable `w` in the main `snakemake` file.
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/6 - IMPiris: escape tabs in rule CMDs (Valentina Galata, 2020-04-17)
Escape `\t` properly by using `\\t` so that `snakemake` does not interpret them.
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/7 - IMPiris: log files (Valentina Galata, 2020-05-05)
Add `log` files to rules
* [x] redirect for CMD
* [ ] logging in scripts
* [x] R scripts
* [ ] Python scripts
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/8 - IMPiris: modify rule messages (Valentina Galata, 2020-04-17)
- Every rule should have a message
- It should be clear from the text which rule is executed
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/9 - IMPiris: test data (Valentina Galata, 2020-05-05)
Create a test data set.
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/10 - Snakemake: use && or || to concat CMDs in rules (Valentina Galata, 2020-06-08)
If a rule has multiple CMDs use `&&` or `||` to concatenate them (if appropriate)
```bash
# execute cmd1
# if cmd1 fails execute cmd2 otherwise cmd2 is not executed
cmd1 || cmd2
# execute cmd1
# if cmd1 succeeds execute cmd2 otherwise cmd2 is not executed
cmd1 && cmd2
```

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/11 - IMPiris: logging in R-scripts (Valentina Galata, 2020-04-17)
Add logging to R-scripts
```R
# Logging
sink(file=file(snakemake@log[[1]], open="wt"), type="message")
```
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/12 - IMPiris: mv all scripts to scripts/ (Valentina Galata, 2020-04-17)
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/13 - spring-clean: add .gitignore (Valentina Galata, 2020-04-20)
* [x] add `.gitignore`
Impact: Will not affect the pipeline
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/14 - spring-clean: mv conda_environments and YAML files (Valentina Galata, 2020-04-20)
* [x] mv `conda_environments/` to `envs/` - shorter and more convenient
* [x] mv YAML files: name based on tool, `*.yaml`
* [x] adjust paths in scripts
Impact: Will not affect the pipeline
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/15 - spring-clean: all scripts in scripts/ (Valentina Galata, 2020-04-23)
* [x] mv scripts in `rules/` to `scripts/`
* [x] adjust paths when calling these scripts
Impact: Will not affect the pipeline
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/16 - spring-clean: main YAML file (Valentina Galata, 2020-04-20)
Create main `conda` YAML file containing
- `python=3.6.4`
- `snakemake=5.5.4`
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/17 - spring-clean: DeepVirFinder (Valentina Galata, 2020-04-21)
Dependency: `DeepVirFinder`
* [x] `git submodule` (for version see below)
* [x] rm dir. in `scripts/`
* [x] rm `git clone` from `set-up.sh`
* [x] update path in `config.yaml`
```shell
git submodule add https://github.com/jessieren/DeepVirFinder.git submodules/DeepVirFinder
cd submodules/DeepVirFinder
git checkout ddb4a94
```
Impact: Could affect the results if there are any discrepancies between the versions of any dependency. Therefore:
- Confirm `git` version
- Use same YAML as in original repo
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/18 - spring-clean: give snakemake files an extension (Valentina Galata, 2020-04-24)
Add `.smk` to all `snakemake` files in `rules/` and `workflows/`.
Makes it easier to recognize `snakemake` files and to have code highlighting in editors.
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/19 - spring-clean: DeepARG (Valentina Galata, 2020-04-21)
Dependency: `DeepARG`
* [x] `git submodule`
* [x] rm dir. in `scripts/`
* [x] rm `git clone` from `set-up.sh`
* [x] update path in `config.yaml`
* [x] set-up commands: replace path and make `DIAMOND` bin executable
```shell
git lfs install # if not already done
git submodule add https://gaarangoa@bitbucket.org/gusphdproj/deeparg-ss.git submodules/deeparg-ss
cd submodules/deeparg-ss/
git checkout 14b8dce
```
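To guard against the version discrepancies mentioned below, the pinned submodule commit can be checked programmatically. A minimal sketch; the helper name is mine, and the commented assert shows hypothetical usage against the commit recorded above:

```python
import subprocess

def submodule_commit(path: str) -> str:
    """Return the short commit hash that the checkout at `path` is pinned to."""
    out = subprocess.run(
        ["git", "-C", path, "rev-parse", "--short", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Hypothetical usage, assuming the submodule layout above:
# assert submodule_commit("submodules/deeparg-ss").startswith("14b8dce")
```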
Impact: Could affect the results if there are any discrepancies between the versions of any dependency. Therefore:
* Confirm `git` version
* Use same YAML as in original repo
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/20 - spring-clean: PlasFlow (Valentina Galata, 2020-04-21)
Dependency: `PlasFlow`
* [x] `conda` YAML file
* [x] rm dir. in `scripts/`
* [x] update path in `config.yaml`
* [x] add YAML to rule(s) calling `PlasFlow`
Impact: Could affect the results if there are any discrepancies between the versions of any dependency.
Using `conda` installation (`plasflow=1.1.0`) instead of `git` repo (commit `v1.1-11-g82e9c75`).
**NOTE**: [release v1.1 vs. v1.1-11-g82e9c75](https://github.com/smaegol/PlasFlow/compare/v1.1...v1.1-11-g82e9c75)
- README update
- changed description of arg parser in `PlasFlow.py`
- no other changes in the code
--> No critical changes
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/21 - spring-clean: VirSorter (Valentina Galata, 2020-04-21)
Dependency: `VirSorter`
* [x] `conda` YAML file
* [x] rm dir. in `scripts/`
* [x] rm `git clone` from `set-up.sh`
* [x] update path in `config.yaml`
Impact: Could affect the results if there are any discrepancies between the versions of any dependency.
Using `conda` installation (XX) instead of `git` repo (commit `v1.0.6-2-g52a1cd6`).
**NOTE**: [v1.0.6 vs. v1.0.6-2-g52a1cd6](https://github.com/simroux/VirSorter/compare/v1.0.6...v1.0.6-2-g52a1cd6)
--> **not sure** if all changes are non-critical
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/22 - spring-clean: HMMER (Valentina Galata, 2020-04-21)
Dependency: `HMMER`
* [x] `conda` YAML file
* [x] update path in `config.yaml`
* [x] update rule(s)
Impact: Could affect the results if there are any discrepancies between the versions of any dependency. Therefore:
- Confirm version: `3.2.1`
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/23 - spring-clean: clean up YAML files (Valentina Galata, 2020-04-21)
Remove `name` and `prefix` from `conda` YAML files; `conda` does not use them anyway.
Impact: Will not affect the pipeline
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/24 - spring-clean: add workflow parameter (Valentina Galata, 2020-04-21)
* [x] Add `workflow` variable to `config.yaml`
* [x] Adjust `Snakemake`
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/25 - spring-clean: rm `configfile` from snakemake rule files (Valentina Galata, 2020-04-21)
Remove `configfile` in all `snakemake` files except in `Snakemake`.
Otherwise it is impossible to use a custom file to call the pipeline.
Impact: Will not affect the pipeline
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/26 - spring-clean: snakemake rules indentation (Valentina Galata, 2020-04-21)
Fix formatting in all `snakemake` files (do **NOT** change the commands)
- Indentation: 4 spaces
Impact: Will not affect the pipeline
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/27 - Snakemake rule messages (Valentina Galata, 2020-08-06)
Add a short message to each `snakemake` rule
Impact: Will not affect the pipeline
Milestone: Paper review - Microbiome - 1
Assignee: Laura Denies

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/28 - spring-clean: logging (Valentina Galata, 2020-04-21)
- Add `log` to rules (if applicable)
- Add logging in R-scripts
```R
# logging in R scripts
sink(file=file(snakemake@log[[1]], open="wt"), type="message")
```
Impact: Will not affect the pipeline
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/29 - spring-clean: snakemake rules: use names in shell/run/script (Valentina Galata, 2020-04-30)
When accessing `snakemake` variables use names if provided, e.g. `snakemake@input[["faa"]]` in an `R` script.
Impact: Will not affect the pipeline
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/30 - spring-clean: test data set (Valentina Galata, 2020-04-29)
Add a test data set to be used to check whether the pipeline runs through and whether the results are consistent.
* [x] Create/copy input files (from `IMP3` test data set)
* [x] Create expected output (using `PathoFact`'s version of the `master` branch)
* [x] Create a config file
*Note: Scripts to compare the output will be part of a separate issue*
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/31 - spring-clean: args in virulence_prediction.py and rule classifier in the vir step (Valentina Galata, 2020-04-23)
Script `scripts/virulence_prediction.py` uses the `snakemake` variable to access parameters but the rule `classfier` in `rules/Virulence/Virulence.snk` uses `shell` instead of `script`.
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/32 - spring-clean: escape special chars in rules properly (Valentina Galata, 2020-04-30)
Escape special chars in rules' shell call properly, e.g. replace `\t` by `\\t`.
Reason: Prevents `snakemake` from interpreting these characters.
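The escaping rule can be demonstrated in plain Python, since a Snakefile is parsed as Python and the same string-escape rules apply (the `tr` command here is only an illustration):

```python
# A rule's shell command is an ordinary Python string, so Python resolves
# "\t" to a literal TAB before snakemake ever hands the command to the shell.
unescaped = "tr ',' '\t'"   # the shell would receive an actual TAB character
escaped = "tr ',' '\\t'"    # the shell receives the two characters \ and t

assert "\t" in unescaped    # a real TAB is embedded in the command
assert "\\t" in escaped     # backslash + t survive for the shell to interpret
assert "\t" not in escaped  # no literal TAB here
```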
Impact: Should not affect the pipeline
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/33 - spring-clean: add test workflow (Valentina Galata, 2020-04-30)
Add a test workflow. Purpose:
- check whether the pipeline can be run
- check whether the pipeline produces the expected results
Tasks:
* [x] create a new `snakemake` file
* [x] create workflow
* [x] create rules
Impact: Will extend the pipeline, old workflows will remain untouched
Milestone: spring-clean
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/34 - Replace SignalP (Valentina Galata, 2021-09-30)
Replace `SignalP` with a tool
- with same/better functionality
- more flexible license
- which can be installed automatically
- e.g. using `conda`, `git submodule` or by downloading a binary
Or, instead of using one tool, use a combination of several tools as it is done by [Uniprot](https://www.uniprot.org/help/signal)
*Add suggestions for tools as comments*
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/35 - Update SignalP (Valentina Galata, 2020-06-22)
Consider using version `5.0b` instead of `4.X`.
See the list of available versions [here](https://services.healthtech.dtu.dk/software.php).
Related to issue #34: no need for an update if `SignalP` will be replaced by a different tool.
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/36 - README.md update (Valentina Galata, 2021-01-12)
* [x] move "Pipeline environment" below "PathoFact"
- the env. can only be created after the repo has been cloned
* [x] remove `-b SpringClean` from the CMD for cloning the repo
* [x] in "Input files" there are two `SAMPLE_A.fna`: replace one with `SAMPLE_A.faa`
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/37 - PlasFlow threshold from config (Valentina Galata, 2021-01-12)
Get the parameter `threshold` for `PlasFlow` from the config file.
Milestone: Paper review - Microbiome - 1
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/38 - size_fasta is not used by seqkit (Valentina Galata, 2020-06-11)
The parameter `size_fasta` from the config file is not used in any of the rules.
All `seqkit split2` calls contain `-s 10000`.
Milestone: Paper review - Microbiome - 1
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/39 - Error if no VirSorter results in AMR_MGE.R (Valentina Galata, 2021-03-24)
If `VirSorter` does not have any results then the script `AMR_MGE.R` fails with an error:
```
Error: `f` must be a factor (or character vector or numeric vector).
```
This happens here:
```R
Phage_prediction$VirSorter_prediction <- fct_explicit_na(Phage_prediction$VirSorter_prediction, na_level = "-")
```
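For illustration, the failing situation and one possible guard, sketched in plain Python (hypothetical; the actual fix would go in `AMR_MGE.R`, mirroring `fct_explicit_na(..., na_level = "-")`):

```python
# Mimic the failing situation: VirSorter produced no results, so every
# value in the prediction column is missing (None here, NA in R).
predictions = [None, None]

# One possible guard: substitute the explicit "-" level for missing values
# before any factor conversion is attempted.
predictions = ["-" if p is None else p for p in predictions]
assert predictions == ["-", "-"]
```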
Reason: `Phage_prediction$VirSorter_prediction` contains only `NA`s.
Milestone: Paper review - Microbiome - 1
Assignee: Valentina Galata

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/40 - PlasFlow minimal contig length from config (Valentina Galata, 2021-01-12)
Get the minimal contig length for `PlasFlow` from the config file: rule `filter_seq` in `Plasmid.smk`.
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/41 - Documentation (Valentina Galata, 2021-01-12)
* [x] output files
* [x] config file
* [x] running time and memory requirements
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/42 - installation with docker/singularity (soilmicrobiome, 2020-06-08)
I am excited to use this program, but I work on an HPC which does not allow conda. Is it possible to make the program available to pull with docker/singularity?
thanks

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/43 - Add RGI (AMR prediction) (Valentina Galata, 2020-06-18)
Add `RGI` for AMR prediction
- [paper](https://pubmed.ncbi.nlm.nih.gov/31665441/)
- [repo](https://github.com/arpcard/rgi)
* [x] Evaluate for incorporation
* [x] `conda` YAML file
* [x] relevant parameters in config
* [x] rule to generate predictions
* [x] combine with other results
Milestone: Paper review - Microbiome - 1
Assignee: Laura Denies

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/44 - DeepARG - parameters in config file (Valentina Galata, 2020-06-22)
Add relevant parameters to the config file.
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/45 - Reduce output size (Valentina Galata, 2020-06-22)
Remove all intermediate files which are not relevant to the user.
Files with relevant results should be compressed, e.g. output created by `PlasFlow`.
Check if there are very large log files.
* [x] make irrelevant intermediate files temporary with `temp(...)`
* [ ] redirect `hmmer`'s stdout to `/dev/null` instead of the log file
* [ ] compress relevant intermediate files
* [x] add a rule to remove files which cannot be made `temp()` (files used in snakemake checkpoints)
Milestone: Paper review - Microbiome - 1
Assignee: Laura Denies

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/46 - Dependency gcc/g++ (Valentina Galata, 2021-01-12)
A reviewer had to install `gcc/g++`.
Check whether this is really necessary and add it to the dependencies if required.
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/47 - More cores used than specified (Valentina Galata, 2022-10-17)
One of the reviewers reported that more CPUs were used than specified; they suspect `SignalP`.
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/48 - Output format (Valentina Galata, 2020-06-22)
Use consistent output format
* [x] Either `TSV` or `CSV`, not both
* [x] Labels: `pathogenic`, `non-pathogenic`, etc.
Milestone: Paper review - Microbiome - 1
Assignee: Laura Denies

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/49 - Replace SeqKit (Valentina Galata, 2020-06-11)
Reviewers reported issues with `SeqKit`.
Since it is used only for FASTA splitting, it can be replaced easily.
Milestone: Paper review - Microbiome - 1

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/50 - Add Prodigal to Pipeline (Laura Denies, 2020-06-11)
Assignee: Laura Denies

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/51 - Update deeparg installation (Laura Denies, 2020-06-18)
Include the upgraded installation method of deeparg.

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/52 - SignalP version (Adrian-Howard, 2021-01-12)
Hello, as I have seen in the dependencies, SignalP is needed and it says that the version is 5.0, but then it asks to download version 4.1g. I just wanted to make sure which version is needed to use PathoFact.

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/54 - Skip prodigal and use curated annotation (avilaHugo, 2022-08-10)
Dear developers,
I have an extensively curated dataset of annotated genomes, but when I run PathoFact with FASTA files I lose those annotations. Is there a way to use my annotations to run the program, such as using a GBK or GFF file?
(quick fix if you are a user like me with the same problem: use BLAST at 100% identity to match Prodigal locus tags to your annotations.)

https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/55 - SyntaxError in line 85, Preprocessing.smk (Ghost User, 2021-01-20)
Hey,
I have read the PathoFact publication. Very impressive, thank you very much! I have tried to install and test the tool with my own data, but I always get stuck in the initial run (also with the test run).
I get the following error message:
```
SyntaxError in line 85 of /PathoFact/rules/Universal/Preprocessing.smk:
invalid syntax
  File "/PathoFact/Snakefile", line 8, in <module>
  File "/PathoFact/workflows/Combine_PathoFact_workflow.smk", line 3, in <module>
```
Any ideas on how I can fix the problem?
Thank you very much in advance!
Mariehttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/56DeepVirFinder: Resource unavailable2021-01-21T09:09:49+01:00Susheel BusiDeepVirFinder: Resource unavailable- During the `AMR` module, every once in a while a sample fails for `DeepVirFinder` due to a `numpy` error
- This error only occurs on certain HPC clusters; for example, at UL it only fails on `esb-compute-01` and not on `IRIS`
- Specific error message: `Resource unavailable`Laura DeniesLaura Denieshttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/57PathoFact for assembled genomes2021-08-28T18:35:29+02:00in3skaPathoFact for assembled genomes![Bildschirmfoto_2021-01-31_um_17.06.22](/uploads/94e465e6aaffb58786232492933e2111/Bildschirmfoto_2021-01-31_um_17.06.22.jpg)Hello @laura.denies,
I have a similar question regarding the issue "PathoFact for metagenome assembled genomes (MAGs)". In my case, I have the WGS assembly of a bacterial isolate that is closed and was obtained by hybrid assembly via Unicycler. I guess that my input is a bit too big for the pipeline, because I always get the following error message (see picture above).
Do you have any suggestions?
Regardshttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/58how to deal if i am not allowed to create conda environment on my server?2021-03-08T14:46:03+01:00AA8898how to deal if i am not allowed to create conda environment on my server?I am currently trying to run PathoFact on my server, but I am not allowed to create conda environments, so it gives me back this error:
```
Could not create conda environment from /home/PathoFact/rules/AMR/../../envs/Biopython.yaml:
Collecting package metadata (repodata.json): ...working... failed
NotWritableError: The current user does not have write permissions to a required path.
path: /opt/miniconda3/pkgs/cache/497deca9.json
```
Is there a way to overcome this issue and run the pipeline without being a sudoer?
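(Editor's note, a sketch of a possible workaround rather than a confirmed fix: the `NotWritableError` points at conda's shared package cache under `/opt/miniconda3`, which only an administrator can write to. Creating conda environments does not itself require sudo if you redirect the package cache and environment directories to your home directory in `~/.condarc`:)

```yaml
# ~/.condarc — keep conda's package cache and environments in user-writable paths
pkgs_dirs:
  - ~/.conda/pkgs
envs_dirs:
  - ~/.conda/envs
```

Snakemake's `--conda-prefix <dir>` option can likewise place the pipeline's environments in a directory of your choosing.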
Thanks a lothttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/59Input: allow to provide a protein FAA file2022-06-15T15:31:24+02:00Valentina Galatavalentina.galata@uni.luInput: allow to provide a protein FAA fileTo be able to include `PathoFact` into other workflows and to use the already existing annotations, it should be allowed to provide a protein FAA file as input (together with the corresponding contigs FASTA).https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/60erorr while running classifier and run_Virifinder2021-08-28T18:37:55+02:00AA8898erorr while running classifier and run_VirifinderI get the following error message while running PathoFact on the test data:
```
bad escape \s at position 0
Traceback (most recent call last):
File "/opt/miniconda3/lib/python3.7/sre_parse.py", line 1015, in parse_template
this = chr(ESCAPES[this][1])
KeyError: '\\s'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "scripts/virulence_prediction.py", line 4, in <module>
import pandas as pd
File "/opt/miniconda3/lib/python3.7/site-packages/pandas/__init__.py", line 6, in <module>
from . import hashtable, tslib, lib
File "pandas/tslib.pyx", line 3222, in init pandas.tslib
File "pandas/tslib.pyx", line 3169, in pandas.tslib.TimeRE.__init__
File "pandas/tslib.pyx", line 3206, in pandas.tslib.TimeRE.pattern
File "/opt/miniconda3/lib/python3.7/re.py", line 309, in _subx
template = _compile_repl(template, pattern)
File "/opt/miniconda3/lib/python3.7/re.py", line 300, in _compile_repl
return sre_parse.parse_template(repl, pattern)
File "/opt/miniconda3/lib/python3.7/sre_parse.py", line 1018, in parse_template
raise s.error('bad escape %s' % this, len(this))
re.error: bad escape \s at position 0
```
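(Editor's note: this looks like the known incompatibility between older pandas releases and Python 3.7+, where unknown escapes such as `\s` in a regex *replacement* template became hard errors; upgrading pandas inside the environment typically resolves it. A minimal reproduction of the underlying `re` behaviour, independent of pandas:)

```python
import re

# Since Python 3.7, re.sub() rejects unknown escapes (like \s) in the
# replacement template; this is the same check that trips old pandas at
# import time, inside its tslib.TimeRE regex cache.
try:
    re.sub("x", r"\s", "some text")
except re.error as exc:
    print(exc)  # bad escape \s at position 0
```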
How can I resolve this? Thankshttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/61error while running run_PLASMID2023-03-18T19:36:31+01:00sabina-llrerror while running run_PLASMIDHi, I am trying to test for the correct installation of PathoFact using the test module. Unfortunately I get this message:
```
Error in rule run_PLASMID:
    jobid: 39
    output: test/output_test_again/PathoFact_intermediate/MGE/plasmid/PlasFlow/test_sample/group_1_plasflow_prediction.tsv
    log: test/output_test_again/logs/test_sample/group_1_plasflow_prediction.log (check log file(s) for error message)
    conda-env: /net/fs-1/home01/sale/PathoFact/PathoFact/.snakemake/conda/d6afdeea
    shell:
        PlasFlow.py --input test/output_test_again/PathoFact_intermediate/MGE/plasmid_splitted/test_sample/group_1.fasta --output test/output_test_again/PathoFact_intermediate/MGE/plasmid/PlasFlow/test_sample/group_1_plasflow_prediction.tsv --threshold 0.7 &> test/output_test_again/logs/test_sample/group_1_plasflow_prediction.log
        (exited with non-zero exit code)
```
Do you have any suggestions on how to solve this issue? (The log file is attached.)
Thanks very much in advance,
Kind Regards,
Sabina
[group_1_plasflow_prediction.log](/uploads/64e74de02a8f5c80f13a8b31f53c5bd0/group_1_plasflow_prediction.log)https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/62AMR.R2021-04-12T14:22:45+02:00ntromasAMR.RHi, I just got this error and I think it is related to R, but I'm not sure. Have you ever run into a similar issue?
```
Error in rule combine_AMR:
    jobid: 698
    output: /mnt/f92456ad-0760-4f94-b5b5-0b8a862df46a/Pawel/Trimmed/Reads/dRep_wat_das_new/dereplicated_genomes/genome_edit_name/PathoFact_results_MAGs_wat/PathoFact_intermediate/AMR/MAG-S14_P1C590_AMR_prediction.tsv
    log: /mnt/f92456ad-0760-4f94-b5b5-0b8a862df46a/Pawel/Trimmed/Reads/dRep_wat_das_new/dereplicated_genomes/genome_edit_name/PathoFact_results_MAGs_wat/PathoFact_intermediate/logs/MAG-S14_P1C590/combine_AMR_temp.log (check log file(s) for error message)
    conda-env: /home/nico/programmes/PathoFact/.snakemake/conda/81817da1
RuleException:
CalledProcessError in line 111 of /home/nico/programmes/PathoFact/rules/AMR/AMR.smk:
Command 'source /home/nico/miniconda3/bin/activate '/home/nico/programmes/PathoFact/.snakemake/conda/81817da1'; set -euo pipefail; Rscript --vanilla /home/nico/programmes/PathoFact/.snakemake/scripts/tmpezv1gqqn.AMR.R' returned non-zero exit status 1.
  File "/home/nico/programmes/PathoFact/rules/AMR/AMR.smk", line 111, in __rule_combine_AMR
  File "/home/nico/miniconda3/envs/PathoFact/lib/python3.6/concurrent/futures/thread.py", line 56, in run
```
Here is the log information:
```
cat /mnt/f92456ad-0760-4f94-b5b5-0b8a862df46a/Pawel/Trimmed/Reads/dRep_wat_das_new/dereplicated_genomes/genome_edit_name/PathoFact_results_MAGs_wat/PathoFact_intermediate/logs/MAG-S14_P1C590/combine_AMR_temp.log
Registered S3 methods overwritten by 'ggplot2':
  method         from
  [.quosures     rlang
  c.quosures     rlang
  print.quosures rlang
Registered S3 method overwritten by 'rvest':
  method            from
  read_xml.response xml2
── Attaching packages ─────────────────────────────────────── tidyverse 1.2.1 ──
✔ ggplot2 3.1.1     ✔ purrr   0.3.2
✔ tibble  2.1.1     ✔ dplyr   0.8.0.1
✔ tidyr   0.8.3     ✔ stringr 1.4.0
✔ readr   1.3.1     ✔ forcats 0.4.0
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag()    masks stats::lag()
Error: `f` must be a factor (or character vector or numeric vector).
Execution halted
```
`snakemake -v`: 5.5.4
Cheers,
NicoValentina Galatavalentina.galata@uni.luValentina Galatavalentina.galata@uni.luhttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/63How estimate relative abundance?2021-09-10T18:51:28+02:00AA8898How estimate relative abundance?Hello everyone,
I do not really understand how it is possible to estimate the relative abundance of virulence factors, bacterial toxins and AMR.
Is there a way within the PathoFact pipeline to calculate it, or do I have to use another pipeline for that?
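(Editor's note, a sketch of the usual approach rather than a built-in PathoFact feature: relative abundances are typically derived separately, by mapping reads back to the contigs or genes and then length-normalizing the per-gene counts. Assuming per-gene read counts and gene lengths are already in hand, the hypothetical helper below shows only the normalization step:)

```python
def relative_abundance(counts, lengths):
    """Length-normalize read counts (reads per kilobase), then scale to fractions summing to 1."""
    rpk = {gene: counts[gene] / (lengths[gene] / 1000.0) for gene in counts}
    total = sum(rpk.values())
    return {gene: value / total for gene, value in rpk.items()}

# Two genes with equal per-base coverage end up with equal relative abundance.
abund = relative_abundance({"geneA": 100, "geneB": 50}, {"geneA": 1000, "geneB": 500})
print(abund)  # {'geneA': 0.5, 'geneB': 0.5}
```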
thanks a lothttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/64testing module fails with DeepARG2022-04-15T23:44:16+02:00mhmismtesting module fails with DeepARGHello,
Thank you for this nice tool.
I have run the testing modules, but it fails at the DeepARG step, and it fails completely afterward.
Here is the logfile associated with DeepARG (group_1.out.mapping.ARG):
```
/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/lib/python2.7/site-packages/theano/tensor/signal/downsample.py:6: UserWarning: downsample module has been moved to the theano.tensor.signal.pool module.
  "downsample module has been moved to the theano.tensor.signal.pool module.")
/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/lib/python2.7/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)
Traceback (most recent call last):
  File "/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/bin/deeparg", line 5, in <module>
    from deeparg.entry import main
  File "/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/lib/python2.7/site-packages/deeparg/entry.py", line 10, in <module>
    import deeparg.predict.bin.deepARG as clf
  File "/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/lib/python2.7/site-packages/deeparg/predict/bin/deepARG.py", line 21, in <module>
    from deeparg.predict.bin import process_blast
  File "/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/lib/python2.7/site-packages/deeparg/predict/bin/deeparg.py", line 21, in <module>
    from deeparg.predict.bin import process_blast
ImportError: No module named predict.bin
```
https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/65assembled contigs fa files2021-08-28T18:36:09+02:00mhmismassembled contigs fa filesHello,
I have assembled contigs using MEGAHIT with the extension `.fa`. How can I pass them to PathoFact, as it only accepts `.fna` files?https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/66Error in Combine_Toxin_SignalP.smk file2021-04-12T14:42:05+02:00Vadim (Dani) DubinskyError in Combine_Toxin_SignalP.smk fileDear developer,
When I try to run the tool with the standard command (snakemake -s PathoFact/Snakefile --use-conda --reason --cores 4 -p) I get the following error:
> MissingInputException in line 11 of /home/danid/PathoFact/rules/Toxin/Combine_Toxin_SignalP.smk:
>
> Missing input files for rule R_script:
>
> databases/library_HMM_Toxins.csv
I installed PathoFact with Conda, set up the config.yml file, and all the libraries seem to be in place, including the library_HMM_Toxins.csv appearing in the error.
I will appreciate your help,
Best
Vadimhttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/67virulence classifier: class probability cutoff?2021-04-15T12:11:24+02:00Nick Youngblutvirulence classifier: class probability cutoff?What class probability cutoff did you use for PathoFact RF classifier benchmarking in the [manuscript](https://microbiomejournal.biomedcentral.com/articles/10.1186/s40168-020-00993-9)? Given that the RF classifier can generate very low c...What class probability cutoff did you use for PathoFact RF classifier benchmarking in the [manuscript](https://microbiomejournal.biomedcentral.com/articles/10.1186/s40168-020-00993-9)? Given that the RF classifier can generate very low class probs for some targets (e.g., 0.1), I'm guessing that you filtered out classifications with low probabilities, but I can't find that info in the manuscript.https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/68Percent identity and query coverage for toxin hits2021-04-27T17:06:10+02:00Vadim (Dani) DubinskyPercent identity and query coverage for toxin hitsHello Laura,
I was wondering if it is possible to obtain, in addition to the reported score and e-value for each toxin hit, the percent identity and query coverage as well? Is such information given for HMM profiles?
It might be very useful, because the score and e-value alone are not enough in some cases.
Thank you
Vadimhttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/69Config file config.yaml not found2021-08-31T15:58:07+02:00Owenke247Config file config.yaml not found**Script:**
snakemake -s /proj/spteds/spted00/software/PathoFact/Snakefile --use-conda --cores 1 --configfile /proj/spteds/spted00/software/PathoFact/configfile.yaml
**Error report:**
WorkflowError in line 3 of /proj/spteds/spted00/software/PathoFact/Snakefile:
Config file config.yaml not found.
File "/proj/spteds/spted00/software/PathoFact/Snakefile", line 3, in <module>
**configfile.yaml**:
```
pathofact:
  sample: ["M.036","R3.099"] # requires user input
  project: "PathoFact_res" # requires user input
  datadir: "/proj/spteds/spted00/Project/CDI/pathfact/contig" # requires user input
  workflow: "complete" #options: "complete", "AMR", "Tox", "Vir"
  size_fasta: 10000 #Adjustable to preference
  scripts: "scripts"
  signalp: "/proj/spteds/spted00/software/PathoFact/signalP/signalp-5.0b/bin" # requires user input
  deepvirfinder: "submodules/DeepVirFinder/dvf.py"
  tox_hmm: "databases/toxins/combined_Toxin.hmm"
  tox_lib: "databases/library_HMM_Toxins.csv"
  tox_threshold: 40 #Bitscore threshold of the toxin prediction, adjustable by user to preference
  vir_hmm: "databases/virulence/Virulence_factor.hmm"
  vir_domains: "databases/models_and_domains"
  plasflow_threshold: 0.7
  plasflow_minlen: 1000
  runtime:
    short: "00:10:00"
    medium: "01:00:00"
    long: "02:00:00"
  mem:
    normal_mem_per_core_gb: "4G"
    big_mem_cores: 4
    big_mem_per_core_gb: "30G"
```
Your comments will be much appreciated.https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/70Unclassified MGE2021-08-28T18:42:01+02:00ntromasUnclassified MGEHi,
Is it possible to know a bit more about how MGEs are classified? I saw the cutoff for PlasFlow, but what about the one for phages? I have increased the PlasFlow cutoff to 0.9, as I have environmental metagenomic data, but I now have a huge proportion of unclassified contigs. When you tested PathoFact, did you also observe a large proportion of unclassified?
Thanks for your help,
Nicohttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/71The unknown task termination of multiple input genomes2021-09-06T12:29:47+02:00neptuneytThe unknown task termination of multiple input genomesDear PathoFact team,
Thanks for such amazing work.
My issue is that I get an unexplained job termination when I run PathoFact with multiple input genomes.
* System: Ubuntu 20.04.1 LTS
* Snakemake 5.5.4
* Python 3.6.4
```command```
```
nohup time snakemake -s Snakefile --configfile config.yaml --cores --use-conda -p &>iso.log&
```
```config.yaml```
```
(PathoFact) [u@h@PathoFact]$ cat config.yaml
pathofact:
  sample: ["ISO_102-3","ISO_1051","ISO_108-2","ISO_10SP3-6-1","ISO_10SP3-6-3","ISO_130","ISO_138","ISO_167","ISO_174","ISO_194-2","ISO_197","ISO_20-2-1.001","ISO_20-2-1.002","ISO_32-3-3","ISO_83-1","ISO_83-3-2","ISO_95"]
  project: PathoFact_results # requires user input
  datadir: "/mnt/nfs/software/PathoFact/Test" # requires user input
  workflow: "complete" #options: "complete", "AMR", "Tox", "Vir"
  size_fasta: 10000 #Adjustable to preference
  scripts: "scripts"
  signalp: "/mnt/nfs/software/signalp-5.0b/bin" # requires user input
  deepvirfinder: "submodules/DeepVirFinder/dvf.py"
  tox_hmm: "databases/toxins/combined_Toxin.hmm"
  tox_lib: "databases/library_HMM_Toxins.csv"
  tox_threshold: 40 #Bitscore threshold of the toxin prediction, adjustable by user to preference
  vir_hmm: "databases/virulence/Virulence_factor.hmm"
  vir_domains: "databases/models_and_domains"
  plasflow_threshold: 0.7
  plasflow_minlen: 1000
  runtime:
    short: "00:10:00"
    medium: "01:00:00"
    long: "02:00:00"
  mem:
    normal_mem_per_core_gb: "10G"
    big_mem_cores: 10
    big_mem_per_core_gb: "30G"
```
```Logs(part)```
```
17 HMM_VIR_report
17 HMM_correct_format_2
17 HMM_correct_format_2_vir
17 HMM_correct_format_3
17 Plasmid_aggregate
34 Prodigal
17 R_script
17 SignalPN_aggregate
17 SignalPP_aggregate
17 aggregate_RGI
17 aggregate_VirFinder
17 aggregate_VirSorter
17 aggregate_deepARG
17 aggregate_signalP
1 all
17 clean_all
17 combine_AMR
17 combine_AMR_plasmid
17 combine_PathoFact
17 filter_seq
17 format_classifier
17 format_classifier_2
17 generate_ContigTranslation
34 generate_ID
17 generate_contigID
17 generate_translation
17 mapping_file
17 merge_SignalPVir
17 run_VirSorter
17 select
17 splitcontig
17 splitplasmid
34 splitting
17 splittingsignalP
664
767 of 857 steps (89%) done
[Tue May 11 10:01:52 2021]
Finished job 326.
768 of 857 steps (90%) done
[Tue May 11 10:02:28 2021]
Finished job 270.
769 of 857 steps (90%) done
[Tue May 11 10:03:00 2021]
Finished job 278.
770 of 857 steps (90%) done
[Tue May 11 10:03:22 2021]
Finished job 342.
771 of 857 steps (90%) done
[Tue May 11 10:05:56 2021]
Finished job 246.
772 of 857 steps (90%) done
[Tue May 11 10:07:47 2021]
Finished job 294.
773 of 857 steps (90%) done
```
It blocks at the 90% step, and when I checked the output I found no other error message.
I am looking forward to your reply. Thanks a lothttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/72Test workflow and common run fails at RGI step at biopython2021-09-06T12:31:47+02:00Michal ZemanTest workflow and common run fails at RGI step at biopythonHi!
I was testing your software and I wasn't able to pass the test workflow (the same problem occurs in the normal workflow).
The data were a single WGS genome in separate contigs.
Is it a problem with my dependencies or a typo in the code?
Tested system:
Linux Mint 19.3 Tricia based on Ubuntu 18.04 bionic
conda 4.10.1
python 3.8.3.final.0
although when the conda environment is active, the python version is 3.6.4
and biopython 1.78
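(Editor's note: the `biopython 1.78` listed above is the likely culprit. `Bio.Alphabet` was removed in Biopython 1.78, and RGI releases of this period still import it, as the traceback below shows; pinning `biopython<1.78` in the RGI conda environment is the usual fix. A small, hypothetical guard that checks a version string without importing Bio:)

```python
def has_bio_alphabet(biopython_version: str) -> bool:
    """Bio.Alphabet existed up to Biopython 1.77 and was removed in 1.78."""
    major, minor = (int(part) for part in biopython_version.split(".")[:2])
    return (major, minor) < (1, 78)

print(has_bio_alphabet("1.77"))  # True  — Bio.Alphabet still importable
print(has_bio_alphabet("1.78"))  # False — pin biopython<1.78 for this RGI version
```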
```
Error in rule run_RGI:
jobid: 46
output: /home/kuskus/ANALYSIS/pathofact/PathoFact_results/PathoFact_intermediate/PathoFact_intermediate/AMR/RGI_results/Cl3778/group_1.RGI.txt
log: /home/kuskus/ANALYSIS/pathofact/PathoFact_results/PathoFact_intermediate/logs/Cl3778/group_1.RGI.log (check log file(s) for error message)
conda-env: /home/kuskus/Software/PathoFact/.snakemake/conda/efe64355
shell:
rgi main --input_sequence /home/kuskus/ANALYSIS/pathofact/PathoFact_results/PathoFact_intermediate/splitted/Cl3778/group_1.fasta --output_file /home/kuskus/ANALYSIS/pathofact/PathoFact_results/PathoFact_intermediate/PathoFact_intermediate/AMR/RGI_results/Cl3778/group_1.RGI --input_type protein --local --clean &> /home/kuskus/ANALYSIS/pathofact/PathoFact_results/PathoFact_intermediate/logs/Cl3778/group_1.RGI.log
(exited with non-zero exit code)
```
The log file contains these lines:
```
Traceback (most recent call last):
File "/home/kuskus/Software/PathoFact/.snakemake/conda/efe64355/bin/rgi", line 2, in <module>
from app.MainBase import MainBase
File "/home/kuskus/Software/PathoFact/.snakemake/conda/efe64355/lib/python3.6/site-packages/app/MainBase.py", line 1, in <module>
from app.settings import *
File "/home/kuskus/Software/PathoFact/.snakemake/conda/efe64355/lib/python3.6/site-packages/app/settings.py", line 7, in <module>
from Bio.Alphabet import generic_dna
File "/home/kuskus/.local/lib/python3.6/site-packages/Bio/Alphabet/__init__.py", line 21, in <module>
"Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://biopython.org/wiki/Alphabet for more information."
ImportError: Bio.Alphabet has been removed from Biopython. In many cases, the alphabet can simply be ignored and removed from scripts. In a few cases, you may need to specify the ``molecule_type`` as an annotation on a SeqRecord for your script to work correctly. Please see https://biopython.org/wiki/Alphabet for more information.
```https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/73only small fasta files (around 6000~7000 contigs) can work in Vir and Tox module2021-08-31T15:59:32+02:00Xuanji Lionly small fasta files (around 6000~7000 contigs) can work in Vir and Tox moduleDear Developer
The test file you offered with PathoFact works, but when I run my own metagenomic sample (with 2.5M contigs) it always gives me errors. So I tried to split the sample into a couple of smaller files, and I found that only small files with < 6000-7000 contigs work. The error seems to come from signalp (see below), but when I run my big contig sample through signalp separately, it works. So it's really weird. Do you have any ideas on how to solve this? I have >600 metagenomic samples.
Thanks
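(Editor's note, a workaround sketch rather than a confirmed fix: since inputs below roughly 6000-7000 contigs go through, pre-splitting each assembly and running the chunks separately can sidestep the failing step; this mirrors what PathoFact's `size_fasta` setting does internally via `scripts/split.py`. A minimal, dependency-free splitter over FASTA lines:)

```python
def split_fasta(lines, records_per_chunk):
    """Yield lists of FASTA lines, each holding at most records_per_chunk records."""
    chunk, n_records = [], 0
    for line in lines:
        if line.startswith(">"):
            if n_records == records_per_chunk:
                yield chunk
                chunk, n_records = [], 0
            n_records += 1
        chunk.append(line)
    if chunk:
        yield chunk

fasta = [">c1", "ACGT", ">c2", "GGCC", ">c3", "TTAA"]
chunks = list(split_fasta(fasta, 2))
print(len(chunks))  # 2 — [c1, c2] in the first chunk, [c3] in the second
```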
![LS4XC95SSCWDNCXZ9A185ZK](/uploads/5f6b0a9b5b4dfefe38079da8c031d827/LS4XC95SSCWDNCXZ9A185ZK.png)https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/74PathoFact block in the DVF step2021-07-06T11:24:08+02:00neptuneytPathoFact block in the DVF stepDear PathoFact team, thanks for such amazing work. My issue is that I hit an unexplained block at the *dvf* step when I run PathoFact with a test input genome.
* System: Ubuntu 20.04.1 LTS
* Snakemake 5.5.4
* Python 3.6.4
snakemake log:
```
Building DAG of jobs...
Updating job 10.
Using shell: /usr/bin/bash
Provided cores: 1
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 HMM_R_VIR
1 HMM_VIR_classification
1 HMM_VIR_finalformat
1 HMM_VIR_report
1 HMM_correct_format_2
1 HMM_correct_format_2_vir
1 HMM_correct_format_3
1 Plasmid_aggregate
1 R_script
1 SignalPN_aggregate
1 SignalPP_aggregate
1 aggregate_RGI
1 aggregate_VirFinder
1 aggregate_deepARG
1 aggregate_signalP
1 all
1 clean_all
1 combine_AMR
1 combine_AMR_plasmid
1 combine_PathoFact
1 filter_seq
1 format_classifier
1 format_classifier_2
2 generate_ID
1 generate_translation
1 mapping_file
1 merge_SignalPVir
1 run_VirFinder
1 select
1 splitplasmid
2 splitting
1 splittingsignalP
34
[Thu Jun 17 10:22:10 2021]
Job 35: Filter samples on length for PlasFlow predictions: PathoFact_results - ISO_DWRC1-3
Reason: Missing output files: /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/plasmid/ISO_DWRC1-3_filtered.fna
scripts/filter.pl 1000 /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/renamed/ISO_DWRC1-3_Contig_ID.fna > /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/plasmid/ISO_DWRC1-3_filtered.fna 2> /mnt/nfs/software/PathoFact/Test/PathoFact_results/logs/ISO_DWRC1-3/plasmid_filtered.log
Activating conda environment: /mnt/nfs/software/Old_PathoFact/.snakemake/conda/f490db85
[Thu Jun 17 10:22:16 2021]
Finished job 35.
1 of 34 steps (3%) done
[Thu Jun 17 10:22:16 2021]
checkpoint splitplasmid:
input: /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/plasmid/ISO_DWRC1-3_filtered.fna
output: /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/MGE/plasmid_splitted/ISO_DWRC1-3/
jobid: 29
reason: Missing output files: /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/MGE/plasmid_splitted/ISO_DWRC1-3/; Input files updated by another job: /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/plasmid/ISO_DWRC1-3_filtered.fna
wildcards: project=PathoFact_results, sample=ISO_DWRC1-3
Downstream jobs will be updated after completion.
python scripts/split.py /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/plasmid/ISO_DWRC1-3_filtered.fna 10000 /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/MGE/plasmid_splitted/ISO_DWRC1-3/
Activating conda environment: /mnt/nfs/software/Old_PathoFact/.snakemake/conda/f490db85
Wrote 5 records to group_1.fasta
Removing temporary output file /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/plasmid/ISO_DWRC1-3_filtered.fna.
Updating job 20.
Creating conda environment envs/PlasFlow.yaml...
Downloading remote packages.
Environment for envs/PlasFlow.yaml created (location: .snakemake/conda/0d925277)
[Thu Jun 17 11:22:31 2021]
Finished job 29.
2 of 33 steps (6%) done
[Thu Jun 17 11:22:32 2021]
Job 41: Executing Deep-VirFinder with 1 threads on the following sample(s): PathoFact_results - ISO_DWRC1-3
Reason: Missing output files: /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/MGE/phage/ISO_DWRC1-3/virfinder/group_1.fasta_gt1bp_dvfpred.txt
python submodules/DeepVirFinder/dvf.py -i /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/contig_splitted/ISO_DWRC1-3/group_1.fasta -o /mnt/nfs/software/PathoFact/Test/PathoFact_results/PathoFact_intermediate/MGE/phage/ISO_DWRC1-3/virfinder -c 1 &> /mnt/nfs/software/PathoFact/Test/PathoFact_results/logs/ISO_DWRC1-3/group_1.fasta_gt1bp_dvfpred.log
Activating conda environment: /mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f
```
It seems blocked in the dvf step, so I checked *group_1.fasta_gt1bp_dvfpred.log*:
```
Using Theano backend.
WARNING (theano.configdefaults): install mkl with `conda install mkl-service`: No module named 'mkl'
OMP: Info #212: KMP_AFFINITY: decoding x2APIC ids.
OMP: Info #210: KMP_AFFINITY: Affinity capable, using global cpuid leaf 11 info
OMP: Info #154: KMP_AFFINITY: Initial OS proc set respected: 0-255
OMP: Info #156: KMP_AFFINITY: 256 available OS procs
OMP: Info #157: KMP_AFFINITY: Uniform topology
OMP: Info #179: KMP_AFFINITY: 2 packages x 64 cores/pkg x 2 threads/core (128 total cores)
OMP: Info #214: KMP_AFFINITY: OS proc to physical thread map:
OMP: Info #171: KMP_AFFINITY: OS proc 0 maps to package 0 core 0 thread 0
OMP: Info #171: KMP_AFFINITY: OS proc 128 maps to package 0 core 0 thread 1
OMP: Info #171: KMP_AFFINITY: [... 254 further mapping lines elided: OS procs 1-63 map to package 0 cores 1-63 thread 0, procs 64-127 to package 1 cores 0-63 thread 0, and procs 129-255 are the corresponding thread-1 siblings ...]
OMP: Info #250: KMP_AFFINITY: pid 2526244 tid 2526244 thread 0 bound to OS proc set 0
1. Loading Models.
model directory /mnt/nfs/software/Old_PathoFact/submodules/DeepVirFinder/models
2. Encoding and Predicting Sequences.
processing line 1
processing line 61512
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:516: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:517: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:518: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:519: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:520: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorflow/python/framework/dtypes.py:525: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:541: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:542: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:543: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:544: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:545: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
/mnt/nfs/software/Old_PathoFact/.snakemake/conda/45939b1f/lib/python3.6/site-packages/tensorboard/compat/tensorflow_stub/dtypes.py:550: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
```
The pipeline has been stuck at the step above for a long time (up to 7 days) and does not seem to finish.
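Not part of the original report: the `OMP: Info` dump above suggests verbose Intel OpenMP affinity settings are active on this 256-CPU node. A hedged thing to try before rerunning is disabling thread pinning and capping the thread count (the values below are illustrative, not prescribed by PathoFact):

```bash
# Hypothetical workaround: export these before launching the snakemake/DeepVirFinder step.
export KMP_AFFINITY=none    # turn off Intel OpenMP thread pinning
export OMP_NUM_THREADS=16   # cap OpenMP threads instead of using all 256 logical CPUs
```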
Looking forward to your reply, thanks a lot.https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/75CondaVerificationError after installation2021-09-01T13:48:10+02:00AndrewCondaVerificationError after installationHi,
I followed the installation guide step by step, but when I tried to run PathoFact I got this error:
<details>
<summary>console output for test from guide</summary>
```bash
(PathoFact) anri@anriPC:/media/anri/FAE8F3E0E8F39959/PathoFact-master$ snakemake -s test/Snakefile --use-conda --reason --cores 4 -p
Building DAG of jobs...
Executing subworkflow pathofact.
Building DAG of jobs...
Creating conda environment envs/VirSorter.yaml...
Downloading remote packages.
CreateCondaEnvironmentException:
Could not create conda environment from /media/anri/FAE8F3E0E8F39959/PathoFact-master/rules/AMR/../../envs/VirSorter.yaml:
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
Downloading and Extracting Packages
libcurl-7.69.1 | 573 KB | ########## | 100%
libedit-3.1.20170329 | 172 KB | ########## | 100%
entrez-direct-13.3 | 4.7 MB | ########## | 100%
perl-moose-2.2011 | 444 KB | ########## | 100%
perl-class-method-mo | 13 KB | ########## | 100%
perl-module-implemen | 9 KB | ########## | 100%
perl-mro-compat-0.13 | 10 KB | ########## | 100%
perl-getopt-long-2.5 | 27 KB | ########## | 100%
perl-package-stash-0 | 66 KB | ########## | 100%
perl-sub-name-0.21 | 13 KB | ########## | 100%
perl-apache-test-1.4 | 115 KB | ########## | 100%
perl-package-depreca | 10 KB | ########## | 100%
muscle-3.8.1551 | 280 KB | ########## | 100%
perl-devel-overloadi | 8 KB | ########## | 100%
perl-devel-stacktrac | 16 KB | ########## | 100%
blast-2.9.0 | 17.7 MB | ########## | 100%
perl-sub-install-0.9 | 10 KB | ########## | 100%
perl-yaml-1.29 | 41 KB | ########## | 100%
perl-moo-2.003004 | 38 KB | ########## | 100%
ncurses-6.1 | 1.3 MB | ########## | 100%
perl-params-util-1.0 | 14 KB | ########## | 100%
perl-role-tiny-2.000 | 15 KB | ########## | 100%
mcl-14.137 | 2.2 MB | ########## | 100%
perl-package-stash-x | 21 KB | ########## | 100%
perl-dist-checkconfl | 10 KB | ########## | 100%
perl-eval-closure-0. | 11 KB | ########## | 100%
libssh2-1.8.2 | 257 KB | ########## | 100%
metagene_annotator-1 | 729 KB | ########## | 100%
curl-7.69.1 | 137 KB | ########## | 100%
perl-sub-exporter-pr | 8 KB | ########## | 100%
perl-file-which-1.23 | 12 KB | ########## | 100%
perl-devel-globaldes | 7 KB | ########## | 100%
krb5-1.17.1 | 1.5 MB | ########## | 100%
perl-class-load-0.25 | 12 KB | ########## | 100%
perl-module-runtime- | 7 KB | ########## | 100%
virsorter-1.0.5 | 47 KB | ########## | 100%
perl-bioperl-1.6.924 | 2.2 MB | ########## | 100%
perl-class-load-xs-0 | 13 KB | ########## | 100%
perl-module-runtime- | 15 KB | ########## | 100%
perl-sub-quote-2.006 | 18 KB | ########## | 100%
diamond-0.9.14 | 556 KB | ########## | 100%
perl-sub-exporter-0. | 30 KB | ########## | 100%
perl-parallel-forkma | 21 KB | ########## | 100%
perl-sub-identify-0. | 12 KB | ########## | 100%
openssl-1.1.1f | 2.1 MB | ########## | 100%
perl-data-optlist-0. | 10 KB | ########## | 100%
Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done
ERROR conda.core.link:_execute(699): An error occurred while installing package 'conda-forge::perl-5.26.2-h516909a_1006'.
Rolling back transaction: ...working... done
[Errno 22] Invalid argument: '/media/anri/FAE8F3E0E8F39959/PathoFact-master/.snakemake/conda/f8c49d3c/man/man3/App::Cpan.3'
()
```
</details>
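Not part of the report: the `[Errno 22] Invalid argument` on `App::Cpan.3` hints that the working directory sits on a filesystem that rejects `:` in file names (NTFS/exFAT, plausible for a `/media/...` drive), which Perl's `::`-style man pages require. A minimal probe sketch, assuming nothing beyond standard shell tools:

```bash
# Hypothetical probe (not from the thread): can this filesystem create a file
# with ':' in its name? NTFS/exFAT mounts fail here with EINVAL, matching the
# "[Errno 22] Invalid argument" on 'App::Cpan.3' above.
TARGET_DIR="${TMPDIR:-/tmp}"   # substitute the PathoFact working directory
if touch "$TARGET_DIR/probe::test" 2>/dev/null; then
    rm -f "$TARGET_DIR/probe::test"
    echo "colon filenames OK here"
else
    echo "filesystem rejects ':' in names -- consider moving PathoFact to ext4/xfs"
fi
```

If the probe fails on the target drive, relocating the PathoFact checkout to a native Linux filesystem would be the first thing to try.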
<details>
<summary>console output for my data</summary>
```bash
anri@anriPC:/media/anri/FAE8F3E0E8F39959/PathoFact-master$ conda activate PathoFact
(PathoFact) anri@anriPC:/media/anri/FAE8F3E0E8F39959/PathoFact-master$ snakemake -s Snakefile --use-conda --reason --cores 4 -p
Building DAG of jobs...
Creating conda environment envs/Biopython.yaml...
Downloading remote packages.
CreateCondaEnvironmentException:
Could not create conda environment from /media/anri/FAE8F3E0E8F39959/PathoFact-master/rules/Universal/../../envs/Biopython.yaml:
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done
Preparing transaction: ...working... done
Verifying transaction: ...working... failed
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/data/boston_house_prices.csv'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/data/breast_cancer.csv'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/data/iris.csv'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/data/linnerud_exercise.csv'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/data/linnerud_physiological.csv'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/data/wine_data.csv'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/boston_house_prices.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/breast_cancer.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/california_housing.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/covtype.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/diabetes.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/digits.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/iris.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/kddcup99.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/lfw.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/linnerud.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/olivetti_faces.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/rcv1.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/twenty_newsgroups.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/descr/wine_data.rst'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/images/china.jpg'
specified in the package manifest cannot be found.
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/datasets/images/flower.jpg'
specified in the package manifest cannot be found.
SafetyError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/neighbors/ball_tree.cpython-36m-x86_64-linux-gnu.so'
has an incorrect size.
reported size: 538712 bytes
actual size: 241152 bytes
CondaVerificationError: The package for scikit-learn located at /home/anri/anaconda3/pkgs/scikit-learn-0.21.3-py36hcdab131_0
appears to be corrupted. The path 'lib/python3.6/site-packages/sklearn/utils/sparsefuncs_fast.cpython-36m-x86_64-linux-gnu.so'
specified in the package manifest cannot be found.
```
</details>
What's wrong?

[Issue #76: hmm.R](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/76) (2021-09-06, Matthew Macowan)

Hi, I just got this error, and I'm wondering if it is similar to previously closed issues [#62 (closed)](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/62) and [#39 (closed)](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/39).
```bash
Building DAG of jobs...
Using shell: /bin/bash
Provided cores: 16
Rules claiming more threads will be scaled down.
Job counts:
count jobs
1 HMM_R_VIR
1
[Sat Jul 31 01:59:24 2021]
rule HMM_R_VIR:
input: /fs03/mf33/Matt/sunbeam_cf_shotgun/sunbeam_output/assembly/contigs/PathoFact_results/PathoFact_intermediate/VIRULENCE/HMM_virulence/181s4_S72-co$
output: /fs03/mf33/Matt/sunbeam_cf_shotgun/sunbeam_output/assembly/contigs/PathoFact_results/PathoFact_intermediate/VIRULENCE/HMM_virulence/181s4_S72-c$
log: /fs03/mf33/Matt/sunbeam_cf_shotgun/sunbeam_output/assembly/contigs/PathoFact_results/logs/181s4_S72-contigs/hmm_results.log
jobid: 0
wildcards: project=PathoFact_results, sample=181s4_S72-contigs
Rscript --vanilla /fs03/mf33/Matt/PathoFact/.snakemake/scripts/tmp02b99mr2.hmm.R
Activating conda environment: /fs03/mf33/Matt/PathoFact/.snakemake/conda/7950189c
/fs03/mf33/Matt/PathoFact/.snakemake/conda/7950189c/lib/R/bin/R: line 240: /fs03/mf33/Matt/PathoFact/.snakemake/conda/7950189c/lib/R/etc/ldpaths: No such f$
[Sat Jul 31 01:59:50 2021]
Error in rule HMM_R_VIR:
jobid: 0
output: /fs03/mf33/Matt/sunbeam_cf_shotgun/sunbeam_output/assembly/contigs/PathoFact_results/PathoFact_intermediate/VIRULENCE/HMM_virulence/181s4_S72-c$
log: /fs03/mf33/Matt/sunbeam_cf_shotgun/sunbeam_output/assembly/contigs/PathoFact_results/logs/181s4_S72-contigs/hmm_results.log (check log file(s) for$
conda-env: /fs03/mf33/Matt/PathoFact/.snakemake/conda/7950189c
RuleException:
CalledProcessError in line 87 of /fs03/mf33/Matt/PathoFact/rules/Virulence/Virulence.smk:
Command 'source /scratch/of33/mmacowan/miniconda/bin/activate '/fs03/mf33/Matt/PathoFact/.snakemake/conda/7950189c'; set -euo pipefail; Rscript --vanilla $
File "/fs03/mf33/Matt/PathoFact/rules/Virulence/Virulence.smk", line 87, in __rule_HMM_R_VIR
File "/scratch/of33/mmacowan/miniconda/envs/PathoFact/lib/python3.6/concurrent/futures/thread.py", line 56, in run
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
```
It looks like `line 87` refers to script `hmm.R`.
The `hmm_results.log` file for the particular instance that caused the error was present but empty; however, these are the contents of that file for one of the other samples:
```bash
Registered S3 methods overwritten by 'ggplot2':
method from
[.quosures rlang
c.quosures rlang
print.quosures rlang
Registered S3 method overwritten by 'rvest':
method from
read_xml.response xml2
── Attaching packages ─────────────────────────────────────── tidyverse 1.2.1 ──
✔ ggplot2 3.1.1 ✔ purrr 0.3.2
✔ tibble 2.1.1 ✔ dplyr 0.8.0.1
✔ tidyr 0.8.3 ✔ stringr 1.4.0
✔ readr 1.3.1 ✔ forcats 0.4.0
── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
✖ dplyr::filter() masks stats::filter()
✖ dplyr::lag() masks stats::lag()
Attaching package: 'reshape2'
The following object is masked from 'package:tidyr':
smiths
```
Have you encountered a similar issue? Any help would be greatly appreciated.
Thanks,
Matthew

[Issue #77: Feature request: include ARO in output reports](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/77) (2021-09-01, Benjamin Wingfield)

Hello,
Thanks for making PathoFact, it's awesome!
I ran into a small problem matching up output reports to the CARD database. Sometimes gene names change over time. It would be easier if we had access to `BEST_ARO_HIT` in the output reports :smile:
Thanks,
Ben
(Assignee: Laura Denies)

[Issue #78: ownHMM_library.R script fails with empty input files](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/78) (2021-11-12, Susheel Busi)

`rule R_script` from `Combine_Toxin_SignalP.smk` throws an error (below) if the input file is empty:
```
RuleException:
CalledProcessError in line 29 of /mnt/irisgpfs/users/sbusi/apps/PathoFact/rules/Toxin/Combine_Toxin_SignalP.smk:
Command 'source /scratch/users/sbusi/tools/miniconda3/bin/activate '/mnt/irisgpfs/users/sbusi/apps/PathoFact/.snakemake/conda/c6514c6c'; set -euo pipefail; Rscript --vanilla /mnt/irisgpfs/users/sbusi/apps/PathoFact/.snakemake/scripts/tmpxk7hk0_a.ownHMM_library.R' returned non-zero exit status 1.
File "/mnt/irisgpfs/users/sbusi/apps/PathoFact/rules/Toxin/Combine_Toxin_SignalP.smk", line 29, in __rule_R_script
File "/scratch/users/sbusi/tools/miniconda3/envs/PathoFact/lib/python3.6/concurrent/futures/thread.py", line 56, in run
```
**Failed file example:**
```
==> /work/projects/amrwd/results/PathoFact/PathoFact_space_output/PathoFact_intermediate/TOXIN/HMM_toxin/IG4SW-M_F1_L4_NORM_GC.Input_HMM_R.csv <==
Query_sequence HMM_Name Significance_Evalue Score
#Toxin
```
**Working file example:**
```
==> /work/projects/amrwd/results/PathoFact/PathoFact_space_output/PathoFact_intermediate/TOXIN/HMM_toxin/IIF1SW-M_F2_L1_NORM_FLT.Input_HMM_R.csv <==
Query_sequence HMM_Name Significance_Evalue Score
#Toxin
0000010215 K01114_780_95_420 3.8e-78 262.5
0000013630 K01186_172 5.1e-103 344.2
0000012319 K01186_172 1.7e-08 33.1
0000019322 K01197_243 0.29 8.8
0000018125 K03699_483 2.8e-112 374.6
0000018124 K03699_483 2.3e-71 239.7
0000010016 K03699_483 1.8e-63 213.7
0000011228 K03699_483 1.5e-62 210.6
```
(Assignee: Laura Denies)

[Issue #79: Adding information in the final PathoFact report](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/79) (2022-06-01, ntromas)

Hi!
I wonder if it would be possible to add some extra information to the PathoFact report, for example the AMR database used (DeepARG or RGI), as this information is already in the AMR report.
Thanks!

[Issue #80: Question about execution Sna](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/80) (2021-10-29, adolfoisla)

Dear, I performed the installation and test/test_config.yaml executed correctly, but after configuring config.yaml with my data and running `snakemake -s Snakefile --use-conda --reason --cores 64 -p` from `~/PathoFact$`, I got the following error message:
```
Building DAG of jobs...
MissingInputException in line 33 of /home/superserver/PathoFact/Snakefile:
Missing input files for rule check_AMR_MGE:
/home/superserver/PathoFact/output_expected/PathoFact_report/AMR_MGE_prediction_CP011849_report.tsv
```
regards

[Issue #81: Cluster documentation](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/81) (2023-02-20, Benjamin Wingfield)

Hello,
I would like to run PathoFact on my local SLURM cluster, but it's not clear to me how to run the provided example.
Looking at `cluster.yaml`, I have to modify parameters to match my own cluster. I have edited partition, but I don't normally set the `-q` parameter. Can I delete the quality parameter?
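For orientation, a generic Snakemake cluster config maps rule names to submission parameters, with `__default__` as the fallback profile. The sketch below uses illustrative placeholder keys, not PathoFact's actual `cluster.yaml` schema; whether `-q` can safely be dropped depends on whether the documented submission command references a `{cluster.qos}`-style field.

```yaml
# Illustrative sketch of a Snakemake cluster.yaml (keys are placeholders,
# not PathoFact's actual schema). Values are substituted into the --cluster
# submission string, e.g. sbatch -p {cluster.partition} -t {cluster.time}.
__default__:
  partition: "batch"    # your SLURM partition
  time: "12:00:00"      # walltime per job
  nodes: 1
  ncpus: 4
```

If the submission command never interpolates a given field, deleting that field from the YAML has no effect on the generated `sbatch` call.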
Thanks,
Ben

[Issue #82: Error in rule Plasmid_aggregate](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/82) (2021-11-02, Ella Kantor)

Hello, I have been having an issue where the program terminates with an error and I am not sure why. Here is the log file; it looks like the error is in rule Plasmid_aggregate, jobid: 1094. The command I used is `nohup snakemake -s Snakefile --use-conda --reason --cores 12 -p`.
[2021-10-07T162426.322356.snakemake.log](/uploads/806d6a534564348fcf8985eb557421bb/2021-10-07T162426.322356.snakemake.log)

[Issue #83: Finished NCBI genomes and PathoFact](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/83) (2022-09-21, vkisand)

Hi,
I have used PathoFact successfully on my own WGS assemblies containing several contigs. However, now I wanted to test it on finished genomes of similar strains downloaded from NCBI. Whitespaces are deleted from the names, but a "complete" run freezes, I would say rather randomly. Re-runs may continue a bit but still never finish. "Vir" and "Tox" finished, but "AMR" froze too.
Any ideas? I increased the memory settings a bit:
```
mem:
  normal_mem_per_core_gb: "4G"
  big_mem_cores: 12
  big_mem_per_core_gb: "64G"
```
cheers,
/v

[Issue #84: Error in rule run_VirSorter](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/84) (2022-01-27, yuzie0314)

Dear @laura.denies,
Recently I have been trying your tool PathoFact with the test data you provided.
I installed the tool and its dependencies according to the instructions in your README file.
However, I frequently run into the following error (snakemake.log):
![image](/uploads/a8d72901d7b1ab60cf05944dd4cdcb71/image.png)
I also found this in the log "test/output_test/logs/test_sample/VIRSorter_global-phage-signal.log": `/usr/bin/bash: line 1: wrapper_phage_contigs_sorter_iPlant.pl: command not found`
Do you have any idea why this issue pops up when testing your tool?
Thank You in advance.
Yu
[2022-01-25T064727.536530.snakemake.log](/uploads/0884d9f32814d1f15a6d5edf434938f6/2022-01-25T064727.536530.snakemake.log)

[Issue #85: Error in rule splitting](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/85) (2022-03-02, Gaoyang Luo)

Error in rule splitting:
jobid: 30
output: path/PathoFact_results/splitted/SRR3401469.contigs/
log: path/test/PathoFact_results/logs/SRR3401469.contigs/split_ORF.log (check log file(s) for error message)
conda-env: path/PathoFact/.snakemake/conda/a010a931
shell:
python scripts/split.py path/PathoFact_results/PathoFact_intermediate/renamed/SRR3401469.contigs_ID.faa 10000 path/PathoFact_results/splitted/SRR3401469.contigs/ &> path/PathoFact_results/logs/SRR3401469.contigs/split_ORF.log
(exited with non-zero exit code)
Updating job 18.
Updating job 19.
Creating conda environment envs/HMMER.yaml...
Downloading remote packages.
[Thu Feb 24 19:39:58 2022]
Error in rule splitting:
jobid: 28
output: path/PathoFact_results/PathoFact_intermediate/splitted/SRR3401469.contigs/
log: path/PathoFact_results/PathoFact_intermediate/logs/SRR3401469.contigs/split_ORF.log (check log file(s) for error message)
conda-env: path/PathoFact/.snakemake/conda/a010a931
shell:
python scripts/split.py path/PathoFact_results/PathoFact_intermediate/PathoFact_intermediate/renamed/SRR3401469.contigs_ID.faa 10000 path/PathoFact_results/PathoFact_intermediate/splitted/SRR3401469.contigs/ &> path/PathoFact_results/PathoFact_intermediate/logs/SRR3401469.contigs/split_ORF.log
(exited with non-zero exit code)
Removing output files of failed job splitting since they might be corrupted:
path/PathoFact_results/splitted/SRR3401469.contigs/
Removing output files of failed job splitting since they might be corrupted:
path/PathoFact_results/PathoFact_intermediate/splitted/SRR3401469.contigs/
Shutting down, this might take some time.
Exiting because a job execution failed. Look above for error message
Complete log: /lomi_home/gaoyang/software/PathoFact/.snakemake/log/2022-02-24T184853.821696.snakemake.log

[Issue #86: qsub: Bad UID for job execution](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/86) (2022-03-02, Gaoyang Luo)

Dear @laura.denies
I wanted to submit it to the cluster via `qsub`, but it failed:
"Error submitting jobscript (exit code 175)"
Here is my .pbs file: [pathofact.pbs](/uploads/3d01605374f6c2b95f8f7def569573a5/pathofact.pbs)
And here is the error log: [pathofact_AMR.e284271](/uploads/7c4a3d02ff19b7ae2e9fab191a1075a3/pathofact_AMR.e284271)

[Issue #87: Error in rule SignalPN_aggregate](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/87) (2022-10-14, Ella Kantor)

Hello,
I am running a large file and I got this error:
<details><summary>Click to expand</summary>
```
[Tue Apr 5 20:11:45 2022]
Error in rule SignalPN_aggregate:
jobid: 289
output: /data2/ekantor/Argal/PathoFact_results_Argal/PathoFact_intermediate/SignalP/contigs_Argal/gramn_summary.signalp5
shell:
cat /data2/ekantor/Argal/PathoFact_results_Argal/PathoFact_intermediate/SignalP/Gram-/contigs_Argal/group_1_summary.signalp5 /data2/ekantor/Argal/PathoFact_results_Argal/PathoFact_intermediate/SignalP/Gram-/contigs_Argal/group_2_summary.signalp5
```
Then there is a whole list of files that ends with:
```
/data2/ekantor/Argal/PathoFact_results_Argal/PathoFact_intermediate/SignalP/Gram-/contigs_Argal/group_3087_summary.signalp5 > /data2/ekantor/Argal/PathoFact_results_Argal/PathoFact_intermediate/SignalP/contigs_Argal/gramn_summary.signalp5
(exited with non-zero exit code)
[Tue Apr 5 20:11:55 2022]
Finished job 3376.
13147 of 14486 steps (91%) done
[Tue Apr 5 20:12:03 2022]
Finished job 2839.
13148 of 14486 steps (91%) done
[Tue Apr 5 20:14:28 2022]
Finished job 206.
13149 of 14486 steps (91%) done
```
</details>
There is no log file yet as it is still running, but there hasn't been any progress since that last message.
Here is the full output so far: [nohup.out.zip](/uploads/0e0ffab9013443a7472d6c8b356f3538/nohup.out.zip)
This is the second time I have tried this and the last time I got a similar error but it was:
<details><summary>Click to expand</summary>
```
[Fri Apr 1 05:13:24 2022]
Error in rule SignalPP_aggregate:
jobid: 1644
output: /data2/ekantor/Argal/PathoFact_results_Argal/PathoFact_intermediate/SignalP/contigs_Argal/gramp_summary.signalp5
shell:
cat /data2/ekantor/Argal/PathoFact_results_Argal/PathoFact_intermediate/SignalP/Gram+/contigs_Argal/group_1_summary.signalp5...
/data2/ekantor/Argal/PathoFact_results_Argal/PathoFact_intermediate/SignalP/Gram+/contigs_Argal/group_3087_summary.signalp5 > /data2/ekantor/Argal/PathoFact_results_Argal/PathoFact_intermediate/SignalP/contigs_Argal/gramp_summary.signalp5
(exited with non-zero exit code)
[Fri Apr 1 05:13:25 2022]
Finished job 7819.
13271 of 14604 steps (91%) done
[Fri Apr 1 05:15:16 2022]
Finished job 124.
13272 of 14604 steps (91%) done
```
</details>
Again, it stopped after that message and there was no progress for a few days. I force killed the program after that but I do not have the log file anymore unfortunately. I reinstalled PathoFact afterwards and ran the test successfully before trying again.
I am running conda version 4.11.0. I installed snakemake newest version 7.3.4 via mamba from the installation instructions on the snakemake docs.
I used 12 cores for the most recent run and 10 cores for the one before it. Could this have something to do with the size of the file?
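One possible explanation for an aggregate rule that fails only on large runs: the rule's shell command passes every `group_*` file as a separate argument to a single `cat`, and with ~3,000 long paths the command line can exceed the kernel's argument-length limit. The sketch below reproduces the pattern on toy files and shows an `xargs`-based alternative that streams the file list instead; the file names are illustrative stand-ins, not PathoFact's actual paths.

```bash
# Simulate the per-group summary files (no whitespace in names).
mkdir -p demo
for i in 1 2 3; do
    echo "record_$i" > "demo/group_${i}_summary.signalp5"
done

# Instead of `cat demo/group_1... demo/group_3087... > out` on one command
# line, feed the (sorted) glob through xargs, which batches the arguments
# so no single invocation can exceed the argument-length limit.
printf '%s\n' demo/group_*_summary.signalp5 | xargs cat > demo/gramn_summary.signalp5

wc -l < demo/gramn_summary.signalp5   # -> 3
```

The same idea works inside a Snakemake `shell` directive, since the glob and pipe are plain POSIX shell.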
Thank you,
Ella

[Issue #88: testing module fails deepARG - how to update deepARG within PathoFact?](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/88) (2022-09-19, RahilRyder)

I'm having the same issue as issue #64, where the solution was to update deepARG to 1.0.2 on Bitbucket. I am using Ubuntu to run PathoFact. I tried updating deepARG from the GitHub page but am unsure what I need to do within PathoFact to get it updated. I updated it and moved the files into the PathoFact folder where the deepARG files are, but this did not fix the problem. I've been able to run the toxin and virulence reports successfully. Appreciate any help!
```
File "/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/bin/deeparg", line 5, in <module>
    from deeparg.entry import main
File "/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/lib/python2.7/site-packages/deeparg/entry.py", line 10, in <module>
    import deeparg.predict.bin.deepARG as clf
File "/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/lib/python2.7/site-packages/deeparg/predict/bin/deepARG.py", line 21, in <module>
    from deeparg.predict.bin import process_blast
File "/mnt/d/Metagenomics_trial/PathoFact/.snakemake/conda/e658ef70/lib/python2.7/site-packages/deeparg/predict/bin/deeparg.py", line 21, in <module>
    from deeparg.predict.bin import process_blast
ImportError: No module named predict.bin
```

[Issue #89: Error in rule SignalPN_aggregate](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/89) (2023-02-19, Gaoyang Luo)

[patho_PW_5M.e286138](/uploads/c0a3f7ef7619079c67529bc9614a5eff/patho_PW_5M.e286138)
Dear @laura.denies
I ran into this issue, and I think it could be a problem with the `cat` command line. I don't know why there is no error when the number of files is small. I don't know if trying `for i in {input}; do cat ${i} >> {output}; done` would work.

[Issue #90: Error in rule R_script](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/90) (2022-07-27, fconstancias)

Dear PathoFact developers,
Thanks for your effort developing this very useful tool.
I am trying to use PathoFact with already annotated contigs, i.e., ORFs (`https://gitlab.lcsb.uni.lu/laura.denies/PathoFact/-/tree/orf`).
I prepared the necessary files for a toy dataset: 1 sample with 1 contig and 2 ORFs:
- [sample_megahit_253073.fna](/uploads/e6c28d4ad491c22879dc68453df60332/sample_megahit_253073.fna)
- [sample_megahit_253073.faa](/uploads/fc9948247ad375feffb374af7e3022b2/sample_megahit_253073.faa)
- [sample_megahit_253073.contig](/uploads/bae2f46278492afa8586d2bdd4f89ef4/sample_megahit_253073.contig)
and I am using the enclosed [config.yaml](/uploads/ef8a638779601905e8cb564986377a89/config.yaml)
I run the pipeline using `snakemake -s Snakefile --use-conda --reason --cores 2 --configfile test_orfs_human1_1cont/config.yaml`;
everything went smoothly until the following error: [gene_table_library.log](/uploads/feee8f7c551d1112f4c5d581217119cd/gene_table_library.log) [2022-06-16T202537.905206.snakemake.log](/uploads/ce6e2a539b1f8691dfeab0e784a43c81/2022-06-16T202537.905206.snakemake.log)
```
[Thu Jun 16 20:26:42 2022]
Error in rule R_script:
jobid: 2
output: /datadrive05/Flo/tools/PathoFact_orf/test_orfs_human1_1cont/PathoFact_results_orf/PathoFact_report/Toxin_gene_library_sample_megahit_253073_report.tsv, /datadrive05/Flo/tools/PathoFact_orf/test_orfs_human1_1cont/PathoFact_results_orf/PathoFact_report/Toxin_prediction_sample_megahit_253073_report.tsv
log: /datadrive05/Flo/tools/PathoFact_orf/test_orfs_human1_1cont/PathoFact_results_orf/logs/sample_megahit_253073/gene_table_library.log (check log file(s) for error message)
conda-env: /datadrive05/Flo/tools/PathoFact_orf/.snakemake/conda/5f694c2e
RuleException:
CalledProcessError in line 29 of /datadrive05/Flo/tools/PathoFact_orf/rules/Toxin/Combine_Toxin_SignalP.smk:
Command 'source /home/constancias/miniconda3/envs/atlas-metagenomes/bin/activate '/datadrive05/Flo/tools/PathoFact_orf/.snakemake/conda/5f694c2e'; set -euo pipefail; Rscript --vanilla /datadrive05/Flo/tools/PathoFact_orf/.snakemake/scripts/tmp_ujrww4f.ownHMM_library.R' returned non-zero exit status 1.
File "/datadrive05/Flo/tools/PathoFact_orf/rules/Toxin/Combine_Toxin_SignalP.smk", line 29, in __rule_R_script
File "/home/constancias/miniconda3/envs/atlas-metagenomes/envs/PathoFact/lib/python3.6/concurrent/futures/thread.py", line 56, in run
Removing output files of failed job R_script since they might be corrupted:
/datadrive05/Flo/tools/PathoFact_orf/test_orfs_human1_1cont/PathoFact_results_orf/PathoFact_report/Toxin_gene_library_sample_megahit_253073_report.tsv
Activating conda environment: /datadrive05/Flo/tools/PathoFact_orf/.snakemake/conda/5f694c2e
```
Thanks for your help.
EDIT: My test dataset works well using `AMR` and `VIR` workflows.
EDIT 2: It might be worth adding an ORF test dataset for the `https://gitlab.lcsb.uni.lu/laura.denies/PathoFact/-/tree/orf` branch.

[Issue #91: Error in rule CTDT](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/91) (2023-09-13, fconstancias)

Dear PathoFact developers,
Thanks for your effort developing this very useful tool.
Could you clarify the different branches (`orf_coinf`, `orf`, `PathoFact_v`, `master`, ...)?
I am using PathoFact with already annotated contigs, i.e., ORFs (`https://gitlab.lcsb.uni.lu/laura.denies/PathoFact/-/tree/orf_coinf`), and I prepared the necessary files (.fna, .faa, .contig) using this script: https://raw.githubusercontent.com/fconstancias/NRP72-FBT/master/functions/prepare_faa_fna_PAthoFact_orf.Rscript.R
I am using the enclosed [config.yaml](/uploads/6bab28ae40b5b022704a7bd305167531/config.yaml)
I run the pipeline using `snakemake -s Snakefile --use-conda --reason --cores 10 -p --configfile analyses/config.yaml`
everything went smooth until the following error:
```
[Fri Jul 1 16:05:57 2022]
Finished job 321.
277 of 1075 steps (26%) done
[Fri Jul 1 16:05:57 2022]
Job 324: Identify features (CTDT) on the following sample(s): PathoFact_results - chicken1_l10000
Reason: Missing output files: /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/PathoFact_intermediate/VIRULENCE/classifier_virulence/chicken1_l10000/group_22_CTDT.txt
python scripts/CTDT.py --file /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/splitted/chicken1_l10000/group_22.fasta --out /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/PathoFact_intermediate/VIRULENCE/classifier_virulence/chicken1_l10000/group_22_CTDT.txt &> /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/logs/chicken1_l10000/group_22_CTDT.log
Activating conda environment: /datadrive05/Flo/tools/orf_coinf/.snakemake/conda/67994cc8
Activating conda environment: /datadrive05/Flo/tools/orf_coinf/.snakemake/conda/67994cc8
[Fri Jul 1 16:05:59 2022]
Error in rule CTDT:
jobid: 319
output: /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/PathoFact_intermediate/VIRULENCE/classifier_virulence/chicken1_l10000/group_6_CTDT.txt
log: /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/logs/chicken1_l10000/group_6_CTDT.log (check log file(s) for error message)
conda-env: /datadrive05/Flo/tools/orf_coinf/.snakemake/conda/67994cc8
shell:
python scripts/CTDT.py --file /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/splitted/chicken1_l10000/group_6.fasta --out /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/PathoFact_intermediate/VIRULENCE/classifier_virulence/chicken1_l10000/group_6_CTDT.txt &> /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/logs/chicken1_l10000/group_6_CTDT.log
(exited with non-zero exit code)
Removing temporary output file /datadrive05/Flo/tools/orf_coinf/analyses/PathoFact_results/PathoFact_intermediate/VIRULENCE/classifier_virulence/chicken1_l10000/group_49_matrix.tsv.
[Fri Jul 1 16:06:00 2022]
Finished job 66.
```
Please find enclosed the logs: [2022-07-01T155547.324708.snakemake.log](/uploads/2142c321b536feddf05ad97afc16537c/2022-07-01T155547.324708.snakemake.log) [group_6_CTDT.log](/uploads/f415d5ad31b6e24516751a131e2001bc/group_6_CTDT.log) as well as the [group_6.fasta.gz](/uploads/e507e6dd8e0187f3d67195b07a4bcad7/group_6.fasta.gz)
```
conda --version
conda 4.12.0
```
[environment.yml](/uploads/bac774322dc73e9f67996ebf53b26f01/environment.yml)
It seems specific to the `VIR` workflow since `AMR` and `Tox` worked fine on the same dataset.
Thanks for the support!

[Issue #92: How to run PathoFact on Fasta file](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/92) (2022-09-19, harelarik)

Hi,
I have a fasta file of few proteins or DNA sequences, I have removed spaces from fasta headers.
PathoFact is installed in my linux system (version- Ubuntu 18.04).
conda version: 4.8.4 snakemake version 5.5.4
**How do I run PathoFact to scan the fasta file?**
Thank you,
Arik

[Issue #93: NA in MGE prediction](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/93) (2022-10-17, ntromas)

Hi PathoFact team,
I ran PathoFact with AMR in the workflow and the run ended without error. However, the MGE prediction column of the AMR report consists of NA (only NA). My guess is that there is an issue with the AMR_MGE.R script. I used the latest version, so I am not sure why I got this. DVF and VirSorter worked properly, same for PlasFlow.
Thanks for the help,
Nico

[Issue #94: CalledProcessError in line 29 of Combine_Toxin_SignalP.smk](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/94) (2022-09-19, Ghost User)

I run PathoFact with `snakemake -s Snakefile --use-conda --rerun-incomplete --cores 28 -p` and encountered a CalledProcessError. The run is not interrupted but has remained at the current step for two days now (with average CPU usage of 15). I have seen another user experiencing a similar issue (https://gitlab.lcsb.uni.lu/laura.denies/PathoFact/-/issues/90), but all my FASTA files contain more than one contig. What effect does the CalledProcessError have on the final outcome? Thanks!
```
RuleException:
CalledProcessError in line 29 of /pathofact/PathoFact/rules/Toxin/Combine_Toxin_SignalP.smk:
Command 'source /miniconda3/bin/activate '/pathofact/PathoFact/.snakemake/conda/fb1f3485'; set -euo pipefail; Rscript --vanilla /pathofact/PathoFact/.snakemake/scripts/tmp23lfmhsd.ownHMM_library.R' returned non-zero exit status 1.
File "/protect/pathofact/PathoFact/rules/Toxin/Combine_Toxin_SignalP.smk", line 29, in __rule_R_script
File "/miniconda3/envs/PathoFact/lib/python3.6/concurrent/futures/thread.py", line 56, in run
Removing output files of failed job R_script since they might be corrupted:
/pathofact/contigs_raw_files/PathoFact_results/PathoFact_report/Toxin_gene_library_ABC.tsv
[Sun Jul 24 10:14:54 2022]
Finished job 37616.
31250 of 33263 steps (94%) done
[Sun Jul 24 10:14:58 2022]
Finished job 289.
31251 of 33263 steps (94%) done
```

[Issue #95: Error in rule run_deepARG](https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/95) (2023-02-20, WoCer2019)

Thanks for the nice tool!
Recently I was running PathoFact with the test data; however, I ran into the error shown below (group_1.out.mapping.ARG.log):
```
Traceback (most recent call last):
File "/data/LJ/software/pathofact/PathoFact/.snakemake/conda/a3c12cf1/bin/deeparg", line 5, in <module>
from deeparg.entry import main
File "/data/LJ/software/pathofact/PathoFact/.snakemake/conda/a3c12cf1/lib/python2.7/site-packages/deeparg/entry.py", line 9, in <module>
import deeparg.predict.bin.deepARG as clf
File "/data/LJ/software/pathofact/PathoFact/.snakemake/conda/a3c12cf1/lib/python2.7/site-packages/deeparg/predict/bin/deepARG.py", line 12, in <module>
from lasagne import layers
File "/data/LJ/software/pathofact/PathoFact/.snakemake/conda/a3c12cf1/lib/python2.7/site-packages/lasagne/__init__.py", line 13, in <module>
""")
ImportError: Could not import Theano.
Please make sure you install a recent enough version of Theano. See
section 'Install from PyPI' in the installation docs for more details:
http://lasagne.readthedocs.org/en/latest/user/installation.html#install-from-pypi
```
Thank you in advance.https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/96DVF using all cores2023-02-19T16:58:20+01:00Ella KantorDVF using all coresHi,
I have been running PathoFact successfully, but I noticed that the steps that run dvf.py are taking up all available cores on my workstation even though I specified 12. I found a similar issue, #47, which is closed now, but I don't see a fix for it; maybe there is something I could change in that dvf.py script? I looked into it, and my understanding is that DVF is limited to 4 cores by default.
I am running AMR only with the modified AMR_MGE.R script that you gave to Nico in issue #93, which seems to be taking much longer now than with the buggy script (I assume that is a good sign that it is working).
Hopefully there is a change I could make to leave some cores free for the other users on the workstation.
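One hedged workaround, assuming the extra threads come from the implicit OpenMP/BLAS thread pools that Theano and NumPy spawn rather than from DVF itself, is to cap those pools before dvf.py imports any numerical library. The variable names below are standard OpenMP/BLAS knobs, not PathoFact or DVF settings:

```python
import os

# Cap the implicit thread pools before any numerical library is imported.
# These are generic OpenMP/BLAS environment variables, not dvf.py options.
N_CORES = "12"
for var in ("OMP_NUM_THREADS", "OPENBLAS_NUM_THREADS",
            "MKL_NUM_THREADS", "NUMEXPR_NUM_THREADS"):
    os.environ[var] = N_CORES

print(os.environ["OMP_NUM_THREADS"])  # → 12
```

Exporting the same variables in the shell that launches snakemake achieves the same effect without editing dvf.py.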
Thank you,
Ellahttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/97trouble creating conda env VirSorter.yaml2023-02-21T14:52:22+01:00melinarosytrouble creating conda env VirSorter.yamlHi
I tried to use PathoFact. I installed all the requirements. When I tried to run the test module, I ran into a conflict saying my current version is 2.31 and it should be 2.17. I paste the last sentences because the error is quite long:
- feature:/linux-64::__glibc==2.31=0
- feature:|@/linux-64::__glibc==2.31=0
- blast==2.9.0=pl526h3066fca_4 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- bzip2==1.0.8=h516909a_2 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- curl==7.69.1=h33f0ec9_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- diamond==0.9.14=hd28b015_2 -> libgcc-ng[version='>=4.9'] -> __glibc[version='>=2.17']
- expat==2.2.9=he1b5a44_2 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- krb5==1.17.1=h2fd8d38_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- libcurl==7.69.1=hf7181ac_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- libedit==3.1.20170329=hf8c457e_1001 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- libgcc==7.2.0=h69d50b8_2 -> libgcc-ng[version='>=7.2.0'] -> __glibc[version='>=2.17']
- libssh2==1.8.2=h22169c7_2 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- mcl==14.137=pl526h516909a_5 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- metagene_annotator==1.0=h516909a_3 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- muscle==3.8.1551=hc9558a2_5 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- ncurses==6.1=hf484d3e_1002 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- openssl==1.1.1f=h516909a_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- pcre==8.44=he1b5a44_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-class-load-xs==0.10=pl526h6bb024c_2 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-compress-raw-bzip2==2.087=pl526he1b5a44_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-compress-raw-zlib==2.087=pl526hc9558a2_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-eval-closure==0.14=pl526h6bb024c_4 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-html-parser==3.72=pl526h6bb024c_5 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-io-compress==2.087=pl526he1b5a44_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-json-xs==2.34=pl526h6bb024c_3 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-moose==2.2011=pl526hf484d3e_1 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-net-ssleay==1.88=pl526h90d6eec_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-package-stash-xs==0.28=pl526hf484d3e_1 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-package-stash==0.38=pl526hf484d3e_1 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-params-util==1.07=pl526h6bb024c_4 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-pathtools==3.75=pl526h14c3975_1 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-scalar-list-utils==1.52=pl526h516909a_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-storable==3.15=pl526h14c3975_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-sub-identify==0.14=pl526h14c3975_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-xml-parser==2.44_01=pl526ha1d75be_1002 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl==5.26.2=h516909a_1006 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- tk==8.6.10=hed695b0_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- virsorter==1.0.5=pl526h516909a_4 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- zlib==1.2.11=h516909a_1006 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
Your installed version is: 2.31
Could you help me please?https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/98Could not create conda environment from VirSorter.yaml2023-02-21T14:52:22+01:00thr44pw00dCould not create conda environment from VirSorter.yamlHi,
I installed PathoFact as described in README.md and tried to run the test module using the command:
`snakemake -s test/Snakefile --use-conda --reason --cores 1 -p`
Unfortunately it failed to create the conda environment "envs/VirSorter.yaml". Attached is the terminal output from after I issued the snakemake command. If I understand correctly, it's mostly about conflicts regarding perl.
Would you have a suggestion how to resolve this?
Thanks in advance!
[stdout.txt](/uploads/4f1f7958e1225bc55db45af9a622c84e/stdout.txt)https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/99Issue from pathofact test2023-02-19T16:39:43+01:00gyudaeIssue from pathofact testHello, I'm a graduate student from South Korea.
I have a problem installing PathoFact.
After following the instructions, I ran the test command 'snakemake -s test/Snakefile --use-conda --reason --cores 50 -p' to check the installation.
But the following error message was presented...
Does anyone know this error message...? I haven't been able to figure it out for a week...
Package perl-eval-closure conflicts for:
perl-moo==2.003004=pl526_0 -> perl-moose -> perl-eval-closure[version='0.14.*|>=0.04']
perl-eval-closure==0.14=pl526h6bb024c_4
perl-moose==2.2011=pl526hf484d3e_1 -> perl-eval-closure[version='>=0.04']
The following specifications were found to be incompatible with your system:
- feature:/linux-64::__glibc==2.35=0
- feature:|@/linux-64::__glibc==2.35=0
- blast==2.9.0=pl526h3066fca_4 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- bzip2==1.0.8=h516909a_2 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- curl==7.69.1=h33f0ec9_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- diamond==0.9.14=hd28b015_2 -> libgcc-ng[version='>=4.9'] -> __glibc[version='>=2.17']
- expat==2.2.9=he1b5a44_2 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- krb5==1.17.1=h2fd8d38_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- libcurl==7.69.1=hf7181ac_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- libedit==3.1.20170329=hf8c457e_1001 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- libgcc==7.2.0=h69d50b8_2 -> libgcc-ng[version='>=7.2.0'] -> __glibc[version='>=2.17']
- libssh2==1.8.2=h22169c7_2 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- mcl==14.137=pl526h516909a_5 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- metagene_annotator==1.0=h516909a_3 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- muscle==3.8.1551=hc9558a2_5 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- ncurses==6.1=hf484d3e_1002 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- openssl==1.1.1f=h516909a_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- pcre==8.44=he1b5a44_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-class-load-xs==0.10=pl526h6bb024c_2 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-compress-raw-bzip2==2.087=pl526he1b5a44_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-compress-raw-zlib==2.087=pl526hc9558a2_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-eval-closure==0.14=pl526h6bb024c_4 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-html-parser==3.72=pl526h6bb024c_5 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-io-compress==2.087=pl526he1b5a44_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-json-xs==2.34=pl526h6bb024c_3 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-moose==2.2011=pl526hf484d3e_1 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-net-ssleay==1.88=pl526h90d6eec_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-package-stash-xs==0.28=pl526hf484d3e_1 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-package-stash==0.38=pl526hf484d3e_1 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-params-util==1.07=pl526h6bb024c_4 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-pathtools==3.75=pl526h14c3975_1 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-scalar-list-utils==1.52=pl526h516909a_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-storable==3.15=pl526h14c3975_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-sub-identify==0.14=pl526h14c3975_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl-xml-parser==2.44_01=pl526ha1d75be_1002 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- perl==5.26.2=h516909a_1006 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- tk==8.6.10=hed695b0_0 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- virsorter==1.0.5=pl526h516909a_4 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
- zlib==1.2.11=h516909a_1006 -> libgcc-ng[version='>=7.3.0'] -> __glibc[version='>=2.17']
Your installed version is: 2.35
This is the log file of the test command. [2023-02-10T110040.706099.snakemake.log](/uploads/9ab917029126f02cbaf650dc217bae6f/2023-02-10T110040.706099.snakemake.log)
Thanks!
GyuDae Leehttps://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/100Convert PathoFact AMR_MGE_report.tsv output to GFF2023-04-14T09:18:45+02:00theadrijaConvert PathoFact AMR_MGE_report.tsv output to GFFIs there any way I can convert the .tsv report of AMR_MGE to GFF?https://git-r3lab.uni.lu/laura.denies/PathoFact/-/issues/101Error in rule run_VirSorter2023-05-28T23:22:25+02:00siva-sagarError in rule run_VirSorter
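Regarding the issue-100 question above, a minimal conversion sketch is shown below. The column names (`Contig`, `ORF`, `Start`, `End`, `Strand`, `AMR_category`) and the function `tsv_to_gff3` are illustrative assumptions, not the actual AMR_MGE_report.tsv header; match them to the real report before use:

```python
import csv
import io

def tsv_to_gff3(tsv_text, source="PathoFact"):
    """Convert a tab-separated report into GFF3 lines.

    The column names used here are assumptions for illustration;
    adapt them to the actual AMR_MGE_report.tsv header.
    """
    lines = ["##gff-version 3"]
    for row in csv.DictReader(io.StringIO(tsv_text), delimiter="\t"):
        attributes = "ID=%s;amr_category=%s" % (row["ORF"], row["AMR_category"])
        lines.append("\t".join([
            row["Contig"], source, "gene",
            row["Start"], row["End"],   # GFF3 coordinates are 1-based, inclusive
            ".", row["Strand"], ".", attributes,
        ]))
    return "\n".join(lines)
```

Reading the real report with the same `csv.DictReader` and writing the returned string to a `.gff` file would complete the conversion.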
PathoFact was actually working fine yesterday, but when I started running it today I came across this error and am unable to proceed. There is an existing issue about this, but it is not much help.
![error1](/uploads/e38ad746d31c3ac0c8577d82c6147232/error1.png)
I used this command: snakemake -s Snakefile --use-conda --reason --cores 128 -p
These are the .snakemake log files
[2023-05-23T152917.975027.snakemake.log](/uploads/9d0aa2d4fb70b2fb4d28cf1f6f18aabb/2023-05-23T152917.975027.snakemake.log)
[2023-05-23T150043.964282.snakemake.log](/uploads/1dd537aa971a658eea2374655ac8c271/2023-05-23T150043.964282.snakemake.log)
thank you.