An open API service for software projects hosted on Open Collective.

vLLM

vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
Collective - Host: opensource - https://opencollective.com/vllm - Code: https://github.com/vllm-project/vllm
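The packages below all reference the vllm-project/vllm repository; the canonical distribution is the `vllm` package on PyPI. As a quick orientation, it ships an OpenAI-compatible HTTP server. A minimal sketch, assuming a supported GPU and network access; the model name is illustrative, not prescribed by this listing:

```shell
# Install the canonical vLLM package from PyPI (requires a supported GPU/accelerator)
pip install vllm

# Launch the OpenAI-compatible server; the model name here is an illustrative example
vllm serve Qwen/Qwen2.5-1.5B-Instruct

# In another terminal, query the standard completions route (server defaults to port 8000)
curl http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen2.5-1.5B-Instruct", "prompt": "Hello", "max_tokens": 16}'
```

Most of the forks listed below (vllm-rocm, vllm-npu, vllm-tpu, ai-dynamo-vllm, …) repackage this same engine for specific hardware or deployment stacks, so their command-line surface is typically similar.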

pypi: superlaser 0.0.6
An MLOps library for LLM deployment with the vLLM engine on RunPod's infrastructure.
github.com/vllm-project/vllm - 6 versions - Latest release: over 1 year ago - 180 downloads last month
pypi: wxy-test 0.8.1
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 1 version - Latest release: 6 months ago - 43 downloads last month
pypi: vllm-test-tpu 0.9.1
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 2 versions - Latest release: 2 months ago - 13 downloads last month
pypi: vllm-tpu 0.9.3
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 2 versions - Latest release: 2 months ago - 32 downloads last month
Top 9.6% on proxy.golang.org
go: github.com/vllm-project/vllm v0.9.2
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 50 versions - Latest release: 21 days ago
pypi: vllm-xft 0.5.5.4
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 12 versions - Latest release: 3 months ago - 100 downloads last month
pypi: vllm-rocm 0.6.3
A high-throughput and memory-efficient inference and serving engine for LLMs with AMD GPU support
github.com/vllm-project/vllm - 1 version - Latest release: 10 months ago - 12 downloads last month
pypi: vllm-online 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 2 versions - Latest release: about 1 year ago - 9 downloads last month
pypi: hive-vllm 0.0.1
a
github.com/vllm-project/vllm - 1 version - Latest release: over 1 year ago - 10 downloads last month
pypi: vllm-acc 0.4.1
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 8 versions - Latest release: about 1 year ago - 28 downloads last month
pypi: nextai-vllm 0.0.7
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 6 versions - Latest release: over 1 year ago - 33 downloads last month
pypi: vllm-npu 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 3 versions - Latest release: 6 months ago - 30 downloads last month
pypi: tilearn-test01 0.1
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 1 version - Latest release: over 1 year ago - 9 downloads last month
pypi: llm-engines 0.0.17
A unified inference engine for large language models (LLMs) including open-source models (VLLM, S...
github.com/vllm-project/vllm - 17 versions - Latest release: 9 months ago - 506 downloads last month
pypi: mindie-turbo 2.0rc1
MindIE Turbo: An LLM inference acceleration framework featuring extensive plugin collections opti...
github.com/vllm-project/vllm - 1 version - Latest release: 3 months ago - 187 downloads last month
pypi: marlin-kernels 0.3.7
Marlin quantization kernels
github.com/vllm-project/vllm - 11 versions - Latest release: 7 months ago - 159 downloads last month
pypi: ai-dynamo-vllm 0.8.4
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 6 versions - Latest release: 3 months ago - 1.9 thousand downloads last month
Top 3.4% on pypi.org
pypi: vllm 0.9.2
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 62 versions - Latest release: 20 days ago - 46 dependent packages - 5 dependent repositories - 2.06 million downloads last month
pypi: llm-swarm 0.1.1
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 2 versions - Latest release: over 1 year ago - 72 downloads last month
pypi: llm_math 0.2.0
A tool designed to evaluate the performance of large language models on mathematical tasks.
github.com/vllm-project/vllm - 5 versions - Latest release: 10 months ago - 114 downloads last month
pypi: moe-kernels 0.8.2
MoE kernels
github.com/vllm-project/vllm - 15 versions - Latest release: 6 months ago - 181 downloads last month
pypi: vllm-emissary 0.1.0
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 2 versions - Latest release: 4 months ago - 20 downloads last month
pypi: tilearn-infer 0.3.3
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 3 versions - Latest release: over 1 year ago - 13 downloads last month
pypi: llm_atc 0.1.7
Tools for fine tuning and serving LLMs
github.com/vllm-project/vllm - 6 versions - Latest release: over 1 year ago - 193 downloads last month
pypi: byzerllm 0.1.182
ByzerLLM: Byzer LLM
github.com/vllm-project/vllm - 178 versions - Latest release: 3 months ago - 1 dependent package - 2 dependent repositories - 8.85 thousand downloads last month
pypi: vllm-consul 0.2.1
A high-throughput and memory-efficient inference and serving engine for LLMs
github.com/vllm-project/vllm - 5 versions - Latest release: almost 2 years ago - 21 downloads last month