Ecosyste.ms: OpenCollective
An open API service for software projects hosted on Open Collective.
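The "open API service" above can be queried over HTTP. A minimal sketch of building a lookup URL for one package — the endpoint layout (`packages.ecosyste.ms/api/v1/registries/{registry}/packages/{name}`) is an assumption based on ecosyste.ms's public API, not something stated on this page:

```python
from urllib.parse import quote

# Assumed base URL for the ecosyste.ms packages API (not confirmed by this page).
BASE = "https://packages.ecosyste.ms/api/v1"

def package_url(registry: str, name: str) -> str:
    """Build the (assumed) metadata URL for one package in one registry."""
    return f"{BASE}/registries/{quote(registry)}/packages/{quote(name)}"

# e.g. the main vLLM package on PyPI; fetching this URL with urllib.request
# would return JSON metadata similar to the entries listed below.
url = package_url("pypi.org", "vllm")
```

The helper only constructs the URL; the actual request is left to the caller so the sketch stays network-free.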
vLLM
vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
Collective: https://opencollective.com/vllm
Host: opensource
Code: https://github.com/vllm-project/vllm
pypi: llm_math 0.2.0
A tool designed to evaluate the performance of large language models on mathematical tasks. - github.com/vllm-project/vllm - 5 versions - Latest release: 4 months ago - 248 downloads last month
pypi: vllm-rocm 0.6.3
A high-throughput and memory-efficient inference and serving engine for LLMs with AMD GPU support - github.com/vllm-project/vllm - 1 version - Latest release: 4 months ago - 113 downloads last month
Top 9.6% on proxy.golang.org
go: github.com/vllm-project/vllm v0.7.0
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 38 versions - Latest release: 5 days ago
pypi: llm_atc 0.1.7
Tools for fine-tuning and serving LLMs - github.com/vllm-project/vllm - 6 versions - Latest release: about 1 year ago - 273 downloads last month
pypi: llm-engines 0.0.17
A unified inference engine for large language models (LLMs) including open-source models (VLLM, S... - github.com/vllm-project/vllm - 17 versions - Latest release: 3 months ago - 506 downloads last month
pypi: superlaser 0.0.6
An MLOps library for LLM deployment w/ the vLLM engine on RunPod's infra. - github.com/vllm-project/vllm - 6 versions - Latest release: 11 months ago - 284 downloads last month
pypi: llm-swarm 0.1.1
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 2 versions - Latest release: 11 months ago - 113 downloads last month
pypi: tilearn-infer 0.3.3
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 3 versions - Latest release: 10 months ago - 69 downloads last month
pypi: vllm-npu 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 3 versions - Latest release: 16 days ago - 378 downloads last month
pypi: vllm-online 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 2 versions - Latest release: 9 months ago - 58 downloads last month
pypi: byzerllm 0.1.148
ByzerLLM: Byzer LLM - github.com/vllm-project/vllm - 144 versions - Latest release: 5 days ago - 1 dependent package - 2 dependent repositories - 8.7 thousand downloads last month
pypi: hive-vllm 0.0.1
a - github.com/vllm-project/vllm - 1 version - Latest release: 11 months ago - 28 downloads last month
pypi: nextai-vllm 0.0.7
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 6 versions - Latest release: 9 months ago - 180 downloads last month
pypi: moe-kernels 0.8.2
MoE kernels - github.com/vllm-project/vllm - 15 versions - Latest release: 1 day ago - 423 downloads last month
pypi: vllm-xft 0.5.5.0
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 8 versions - Latest release: 5 months ago - 206 downloads last month
pypi: marlin-kernels 0.3.7
Marlin quantization kernels - github.com/vllm-project/vllm - 11 versions - Latest release: 26 days ago - 422 downloads last month
Top 3.4% on pypi.org
pypi: vllm 0.6.6
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 47 versions - Latest release: about 1 month ago - 46 dependent packages - 5 dependent repositories - 1.78 million downloads last month
pypi: vllm-consul 0.2.1
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 5 versions - Latest release: over 1 year ago - 186 downloads last month
pypi: vllm-acc 0.4.1
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 8 versions - Latest release: 9 months ago - 259 downloads last month
pypi: tilearn-test01 0.1
A high-throughput and memory-efficient inference and serving engine for LLMs - github.com/vllm-project/vllm - 1 version - Latest release: 10 months ago - 31 downloads last month