Ecosyste.ms: OpenCollective
An open API service for software projects hosted on Open Collective.
github.com/vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
https://github.com/vllm-project/vllm
pypi: byzerllm 0.1.146
ByzerLLM: Byzer LLM
143 versions - Latest release: 6 days ago - 1 dependent package - 2 dependent repositories - 9.25 thousand downloads last month
pypi: llm-engines 0.0.17
A unified inference engine for large language models (LLMs) including open-source models (VLLM, S...)
17 versions - Latest release: 3 months ago - 506 downloads last month
pypi: vllm-npu 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs
3 versions - Latest release: 10 days ago - 354 downloads last month
pypi: vllm-rocm 0.6.3
A high-throughput and memory-efficient inference and serving engine for LLMs with AMD GPU support
1 version - Latest release: 3 months ago - 163 downloads last month
pypi: llm_math 0.2.0
A tool designed to evaluate the performance of large language models on mathematical tasks.
5 versions - Latest release: 4 months ago - 325 downloads last month
pypi: moe-kernels 0.7.0
MoE kernels
13 versions - Latest release: 3 months ago - 441 downloads last month
pypi: vllm 0.6.6
A high-throughput and memory-efficient inference and serving engine for LLMs
Top 3.4% on pypi.org
47 versions - Latest release: about 1 month ago - 46 dependent packages - 5 dependent repositories - 1.67 million downloads last month
pypi: marlin-kernels 0.3.7
Marlin quantization kernels
11 versions - Latest release: 20 days ago - 456 downloads last month
pypi: vllm-acc 0.4.1
A high-throughput and memory-efficient inference and serving engine for LLMs
8 versions - Latest release: 9 months ago - 255 downloads last month
pypi: vllm-online 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: 9 months ago - 55 downloads last month
pypi: tilearn-infer 0.3.3
A high-throughput and memory-efficient inference and serving engine for LLMs
3 versions - Latest release: 9 months ago - 62 downloads last month
pypi: tilearn-test01 0.1
A high-throughput and memory-efficient inference and serving engine for LLMs
1 version - Latest release: 10 months ago - 28 downloads last month
pypi: llm_atc 0.1.7
Tools for fine tuning and serving LLMs
6 versions - Latest release: about 1 year ago - 282 downloads last month
pypi: vllm-xft 0.5.5.0
A high-throughput and memory-efficient inference and serving engine for LLMs
8 versions - Latest release: 5 months ago - 189 downloads last month
pypi: superlaser 0.0.6
An MLOps library for LLM deployment w/ the vLLM engine on RunPod's infra.
6 versions - Latest release: 11 months ago - 274 downloads last month
pypi: llm-swarm 0.1.1
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: 11 months ago - 117 downloads last month
pypi: nextai-vllm 0.0.7
A high-throughput and memory-efficient inference and serving engine for LLMs
6 versions - Latest release: 9 months ago - 176 downloads last month
pypi: vllm-consul 0.2.1
A high-throughput and memory-efficient inference and serving engine for LLMs
5 versions - Latest release: over 1 year ago - 177 downloads last month
go: github.com/vllm-project/vllm v0.6.6
A high-throughput and memory-efficient inference and serving engine for LLMs
Top 9.6% on proxy.golang.org
37 versions - Latest release: about 1 month ago