Empty file modified .github/workflows/cli-tests.yml (100644 → 100755)
Empty file modified .github/workflows/conventional-commits.yml (100644 → 100755)
Empty file modified .github/workflows/pre-commit.yml (100644 → 100755)
Empty file modified .github/workflows/publish-pypi.yml (100644 → 100755)
Empty file modified .github/workflows/release-please.yaml (100644 → 100755)
Empty file modified .gitignore (100644 → 100755)
Empty file modified .pre-commit-config.yaml (100644 → 100755)
Empty file modified CHANGELOG.md (100644 → 100755)
Empty file modified CODE_OF_CONDUCT.md (100644 → 100755)
Empty file modified LICENSE.md (100644 → 100755)
32 changes: 31 additions & 1 deletion README.md (100644 → 100755)

@@ -248,7 +248,7 @@ transcriptformer inference \
```

You can also use the CLI to run inference on the ESM2-CE baseline model discussed in the paper:

```
transcriptformer inference \
--checkpoint-path ./checkpoints/tf_sapiens \
--data-file test/data/human_val.h5ad \
@@ -276,6 +276,36 @@ transcriptformer download --help
transcriptformer download-data --help
```

## Using the Python Client

You can also run inference and download artifacts programmatically with the Python client. Inference returns an in-memory AnnData object for direct use in notebooks and pipelines.

```python
from transcriptformer.client.client import TranscriptFormerClient

tf = TranscriptFormerClient()

# In-memory inference (single GPU only)
adata = tf.inference(
data_file="./data/my_data.h5ad",
checkpoint_path="./checkpoints/tf_sapiens",
batch_size=16,
use_oom_dataloader=True, # optional: OOM-safe dataloader
n_data_workers=4, # optional
)

# Optional: download artifacts
tf.download_model("tf-sapiens", checkpoint_dir="./checkpoints")

# Optional: download datasets
tf.download_data(["homo sapiens"], output_dir="./data/cellxgene")
```

Notes:
- The Python client currently supports single-GPU inference only. If you need multi-GPU (DDP), use the CLI (`--num-gpus`).
- Most `InferenceConfig` and `DataConfig` options are available as keyword args (e.g., `output_keys`, `obs_keys`, `gene_col_name`, `use_raw`, `clip_counts`).
- Pass real booleans for flags like `use_oom_dataloader`.
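As a sketch of the keyword-argument passthrough described in the notes above: the option names (`output_keys`, `obs_keys`, `gene_col_name`, `use_raw`, `clip_counts`) come from this README, but the values below are illustrative assumptions, not documented defaults.

```python
# Illustrative sketch only: option names mirror the InferenceConfig/DataConfig
# fields listed above; every value here is an assumption, not a default.
inference_kwargs = dict(
    data_file="./data/my_data.h5ad",
    checkpoint_path="./checkpoints/tf_sapiens",
    batch_size=16,
    output_keys=["embeddings"],   # which model outputs to keep (assumed value)
    obs_keys=["cell_type"],       # obs columns to carry into the result (assumed)
    gene_col_name="ensembl_id",   # column holding gene identifiers (assumed)
    use_raw=True,                 # real boolean, as the notes require
    clip_counts=30,               # cap extreme per-gene counts (assumed value)
)

# The call itself is the same single-GPU entry point shown above:
# adata = tf.inference(**inference_kwargs)
```

Collecting the options in a dict like this makes it easy to log or reuse the exact configuration across runs.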

### CLI Options for `inference`:

- `--checkpoint-path PATH`: Path to the model checkpoint directory (required).
Empty file modified SECURITY.md (100644 → 100755)
Empty file modified assets/model_overview.png (100644 → 100755)
35 changes: 0 additions & 35 deletions conf/inference_config.yaml

This file was deleted.
