An open API service for software projects hosted on Open Collective.

A high-throughput and memory-efficient inference and serving engine for LLMs
https://github.com/vllm-project/vllm

pypi: byzerllm 0.1.182
ByzerLLM: Byzer LLM
178 versions - Latest release: 9 months ago - 1 dependent package - 2 dependent repositories - 8.85 thousand downloads last month
pypi: tmp-test-vllm-tpu 0.0.1
A high-throughput and memory-efficient inference and serving engine for LLMs
1 version - Latest release: 4 months ago - 165 downloads last month
pypi: vllm-test-tpu 0.9.1
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: 8 months ago - 13 downloads last month
pypi: vllm-fixed 1.0.0
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: 4 months ago - 101 downloads last month
pypi: llm-engines 0.0.17
A unified inference engine for large language models (LLMs) including open-source models (VLLM, S...
17 versions - Latest release: about 1 year ago - 506 downloads last month
pypi: vllm-tpu 0.9.3
A high-throughput and memory-efficient inference and serving engine for LLMs
3 versions - Latest release: 8 months ago - 172 downloads last month
pypi: mindie-turbo 2.0rc1
MindIE Turbo: An LLM inference acceleration framework featuring extensive plugin collections opti...
1 version - Latest release: 9 months ago - 187 downloads last month
pypi: vllm-emissary 0.1.0
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: 10 months ago - 19 downloads last month
pypi: ai-dynamo-vllm 0.8.4
A high-throughput and memory-efficient inference and serving engine for LLMs
7 versions - Latest release: 9 months ago - 166 downloads last month
pypi: wxy-test 0.8.1
A high-throughput and memory-efficient inference and serving engine for LLMs
1 version - Latest release: 12 months ago - 43 downloads last month
pypi: vllm-npu 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs
3 versions - Latest release: about 1 year ago - 42 downloads last month
pypi: vllm-rocm 0.6.3
A high-throughput and memory-efficient inference and serving engine for LLMs with AMD GPU support
1 version - Latest release: over 1 year ago - 50 downloads last month
pypi: llm_math 0.2.0
A tool designed to evaluate the performance of large language models on mathematical tasks.
5 versions - Latest release: over 1 year ago - 114 downloads last month
pypi: moe-kernels 0.8.2
MoE kernels
15 versions - Latest release: 12 months ago - 181 downloads last month
Top 3.4% on pypi.org
pypi: vllm 0.11.0
A high-throughput and memory-efficient inference and serving engine for LLMs
67 versions - Latest release: 4 months ago - 46 dependent packages - 5 dependent repositories - 5 million downloads last month
pypi: marlin-kernels 0.3.7
Marlin quantization kernels
11 versions - Latest release: about 1 year ago - 159 downloads last month
pypi: vllm-acc 0.4.1
A high-throughput and memory-efficient inference and serving engine for LLMs
8 versions - Latest release: over 1 year ago - 40 downloads last month
pypi: vllm-online 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: almost 2 years ago - 8 downloads last month
pypi: tilearn-infer 0.3.3
A high-throughput and memory-efficient inference and serving engine for LLMs
3 versions - Latest release: almost 2 years ago - 6 downloads last month
pypi: tilearn-test01 0.1
A high-throughput and memory-efficient inference and serving engine for LLMs
1 version - Latest release: almost 2 years ago - 4 downloads last month
pypi: llm_atc 0.1.7
Tools for fine tuning and serving LLMs
6 versions - Latest release: about 2 years ago - 193 downloads last month
pypi: vllm-xft 0.5.5.4
A high-throughput and memory-efficient inference and serving engine for LLMs
12 versions - Latest release: 9 months ago - 114 downloads last month
pypi: superlaser 0.0.6
An MLOps library for LLM deployment w/ the vLLM engine on RunPod's infra.
6 versions - Latest release: almost 2 years ago - 180 downloads last month
pypi: hive-vllm 0.0.1
a
1 version - Latest release: almost 2 years ago - 12 downloads last month
pypi: llm-swarm 0.1.1
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: almost 2 years ago - 72 downloads last month
pypi: nextai-vllm 0.0.7
A high-throughput and memory-efficient inference and serving engine for LLMs
6 versions - Latest release: almost 2 years ago - 42 downloads last month
pypi: vllm-consul 0.2.1
A high-throughput and memory-efficient inference and serving engine for LLMs
5 versions - Latest release: over 2 years ago - 26 downloads last month
Top 9.6% on proxy.golang.org
go: github.com/vllm-project/vllm v0.11.0
A high-throughput and memory-efficient inference and serving engine for LLMs
54 versions - Latest release: 4 months ago