# Model Zoo

To see all available presets:

```python
from slide2vec import list_models

list_models()         # all presets
list_models("tile")   # tile-level only
list_models("slide")  # slide-level only
```

## Tile-level encoders

| Preset | Model | Output dim | Spacing (um) | Notes |
| --- | --- | --- | --- | --- |
| `lunit` | Lunit ViT-S/8 | 384 | 0.5 | Kang et al. (2023) |
| `prost40m` | Prost40M | 384 | 0.5 | Grisi et al. (2026) |
| `conch` | CONCH | 512 | 0.5 | Lu et al. (2024) |
| `phikon` | Phikon | 768 | 0.5 | Filiot et al. (2023) |
| `conchv15` | CONCHv1.5 | 768 | 0.5 | Lu et al. (2024) |
| `hibou-b` | Hibou-B | 768 | 0.5 | Nechaev et al. (2024) |
| `h0-mini` | H0-mini | 768 / 1536 | 0.5 | Filiot et al. (2024) |
| `phikonv2` | Phikon-v2 | 1024 | 0.5 | Filiot et al. (2024) |
| `hibou-l` | Hibou-L | 1024 | 0.5 | Nechaev et al. (2024) |
| `uni` | UNI | 1024 | 0.5 | Chen et al. (2024) |
| `musk` | MUSK | 1024 / 2048 | 0.25, 0.5, 1.0 | Xiang et al. (2024) |
| `virchow` | Virchow | 1280 / 2560 | 0.5 | Vorontsov et al. (2024) |
| `virchow2` | Virchow2 | 1280 / 2560 | 0.5, 1.0, 2.0 | Zimmermann et al. (2024) |
| `uni2` | UNI2 | 1536 | 0.5 | Chen et al. (2024) |
| `gigapath` | GigaPath | 1536 | 0.5 | Xu et al. (2024) |
| `h-optimus-0` | H-Optimus-0 | 1536 | 0.5 | Saillard et al. (2024) |
| `h-optimus-1` | H-Optimus-1 | 1536 | 0.5 | Saillard et al. (2024) |
| `midnight` | Midnight | 3072 | 0.25, 0.5, 1.0, 2.0 | Karasikov et al. (2025) |

## Slide-level encoders

| Preset | Model | Tile encoder | Spacing (um) | Output dim | Notes |
| --- | --- | --- | --- | --- | --- |
| `gigapath-slide` | GigaPath | `gigapath` | 0.5 | 768 | Xu et al. (2024) |
| `titan` | TITAN | `conchv15` | 0.5 | 768 | Ding et al. (2024) |
| `prism` | PRISM | `virchow` | 0.5 | 1280 | Shaikovski et al. (2024) |
| `moozy-slide` | MOOZY | `lunit` | 0.5 | 768 | Kotp et al. (2026) |

## Patient-level encoders

Patient-level encoders aggregate multiple slide embeddings belonging to the same patient into a single patient-level embedding. They require a `patient_id` column in the input manifest CSV (or a `patient_id` key in each slide dict when using the Python API).
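For example, a minimal manifest mapping two slides to one patient might look like the fragment below; the slide-path column name is illustrative, only `patient_id` is the documented requirement:

```csv
slide_path,patient_id
/data/slides/case_001_a.tif,patient_001
/data/slides/case_001_b.tif,patient_001
```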

| Preset | Model | Tile encoder | Spacing (um) | Output dim | Notes |
| --- | --- | --- | --- | --- | --- |
| `moozy` | MOOZY | `lunit` | 0.5 | 768 | Kotp et al. (2026) |

## Custom registry-backed encoders

If you want to use a model that does not ship with `slide2vec`, wrap it in an encoder class and register it under a new preset name.

### Tile encoder example

```python
import torch
from torch import Tensor

from slide2vec.encoders import TileEncoder
from slide2vec.encoders import register_encoder, resolve_requested_output_variant


@register_encoder(
    "my-tile-model",
    output_variants={"default": {"encode_dim": 768}},
    default_output_variant="default",
    input_size=224,
    supported_spacing_um=0.5,
    precision="fp16",
    source="my-org/my-tile-model",
)
class MyTileModel(TileEncoder):
    def __init__(self, *, output_variant: str | None = None):
        self._output_variant = resolve_requested_output_variant(output_variant)
        self._device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self._model = self._load_model().eval()

    def _load_model(self):
        ...

    def get_transform(self):
        ...

    def encode_tiles(self, batch: Tensor) -> Tensor:
        return self._model(batch)

    @property
    def encode_dim(self) -> int:
        return 768

    @property
    def device(self) -> torch.device:
        return self._device

    def to(self, device: torch.device | str):
        self._device = torch.device(device)
        self._model = self._model.to(self._device)
        return self
```

Once the module is imported, the preset is available through the existing API:

```python
from slide2vec import Model

model = Model.from_preset("my-tile-model")
```

### Slide encoder example

```python
import torch
from torch import Tensor

from slide2vec.encoders import SlideEncoder
from slide2vec.encoders import register_encoder, resolve_requested_output_variant


@register_encoder(
    "my-slide-model",
    level="slide",
    tile_encoder="my-tile-model",
    tile_encoder_output_variant="default",
    output_variants={"default": {"encode_dim": 512}},
    default_output_variant="default",
    supported_spacing_um=0.5,
    precision="fp16",
    source="my-org/my-slide-model",
)
class MySlideModel(SlideEncoder):
    def __init__(self, *, output_variant: str | None = None):
        self._output_variant = resolve_requested_output_variant(output_variant)
        self._device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self._model = self._load_model().eval()

    def _load_model(self):
        ...

    @property
    def encode_dim(self) -> int:
        return 512

    @property
    def device(self) -> torch.device:
        return self._device

    def to(self, device: torch.device | str):
        self._device = torch.device(device)
        self._model = self._model.to(self._device)
        return self

    def encode_slide(
        self,
        tile_features: Tensor,
        coordinates: Tensor | None = None,
        *,
        tile_size_lv0: int | None = None,
    ) -> Tensor:
        return self._model(tile_features)
```
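As a shape sanity check, `encode_slide` receives tile features as a `[num_tiles, tile_encode_dim]` tensor plus optional per-tile level-0 `(x, y)` coordinates. The sketch below uses random tensors and a mean-pool as a stand-in for `self._model`; the tile count (16) and the 768-dim tile features (matching the `my-tile-model` registration above) are illustrative:

```python
import torch

# Dummy inputs shaped like what encode_slide receives: 16 tiles,
# each with a 768-dim feature from the tile encoder.
tile_features = torch.randn(16, 768)             # [num_tiles, tile_encode_dim]
coordinates = torch.randint(0, 50_000, (16, 2))  # level-0 (x, y) per tile

# Trivial stand-in for the slide model: mean-pool tile features into one
# slide embedding (a real model would use attention or a transformer).
slide_embedding = tile_features.mean(dim=0)
assert slide_embedding.shape == (768,)
```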