Ecosyste.ms: OpenCollective

An open API service for software projects hosted on Open Collective.
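Ecosyste.ms exposes its data through a JSON API. As a minimal sketch — the base URL and endpoint path below are assumptions based on the packages.ecosyste.ms service, not taken from this listing — a lookup URL for a package record can be built like this:

```python
import urllib.parse

# Assumed base URL of the ecosyste.ms packages API.
BASE = "https://packages.ecosyste.ms/api/v1"

def package_lookup_url(registry: str, package: str) -> str:
    """Build the (assumed) ecosyste.ms lookup URL for a package in a registry."""
    return (
        f"{BASE}/registries/{urllib.parse.quote(registry)}"
        f"/packages/{urllib.parse.quote(package)}"
    )

# e.g. metadata for vllm on PyPI; fetch with urllib.request.urlopen(url)
# when online to get the JSON record behind entries like those below.
url = package_lookup_url("pypi.org", "vllm")
```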

github.com/vllm-project/vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://github.com/vllm-project/vllm
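vLLM can serve models behind an OpenAI-compatible HTTP API. As a hedged sketch of what a client request to such a server looks like — the model name and port are illustrative placeholders, not taken from this listing — the chat-completion payload is plain JSON:

```python
import json

# Payload for POST http://localhost:8000/v1/chat/completions against a
# locally running vLLM server. The model name is a placeholder; substitute
# whichever model the server was launched with.
payload = {
    "model": "my-org/my-model",
    "messages": [{"role": "user", "content": "Hello!"}],
    "max_tokens": 64,
    "temperature": 0.7,
}
body = json.dumps(payload)
```

Any OpenAI-compatible client library can target such a server by pointing its base URL at the vLLM endpoint.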

pypi: byzerllm 0.1.142
ByzerLLM: Byzer LLM
139 versions - Latest release: 13 days ago - 1 dependent package - 2 dependent repositories - 8.74 thousand downloads last month
pypi: llm-engines 0.0.17
A unified inference engine for large language models (LLMs) including open-source models (VLLM, S...
17 versions - Latest release: about 2 months ago - 479 downloads last month
pypi: vllm-rocm 0.6.3
A high-throughput and memory-efficient inference and serving engine for LLMs with AMD GPU support
1 version - Latest release: 2 months ago - 208 downloads last month
pypi: llm_math 0.2.0
A tool designed to evaluate the performance of large language models on mathematical tasks.
5 versions - Latest release: 2 months ago - 435 downloads last month
pypi: moe-kernels 0.7.0
MoE kernels
13 versions - Latest release: about 2 months ago - 344 downloads last month
Top 3.4% on pypi.org
pypi: vllm 0.6.5
A high-throughput and memory-efficient inference and serving engine for LLMs
45 versions - Latest release: 4 days ago - 46 dependent packages - 5 dependent repositories - 820 thousand downloads last month
pypi: marlin-kernels 0.3.6
Marlin quantization kernels
10 versions - Latest release: about 1 month ago - 478 downloads last month
pypi: vllm-acc 0.4.1
A high-throughput and memory-efficient inference and serving engine for LLMs
8 versions - Latest release: 8 months ago - 170 downloads last month
pypi: vllm-online 0.4.2
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: 8 months ago - 25 downloads last month
pypi: tilearn-infer 0.3.3
A high-throughput and memory-efficient inference and serving engine for LLMs
3 versions - Latest release: 8 months ago - 288 downloads last month
pypi: tilearn-test01 0.1
A high-throughput and memory-efficient inference and serving engine for LLMs
1 version - Latest release: 9 months ago - 101 downloads last month
pypi: llm_atc 0.1.7
Tools for fine tuning and serving LLMs
6 versions - Latest release: about 1 year ago - 100 downloads last month
pypi: vllm-xft 0.5.5.0
A high-throughput and memory-efficient inference and serving engine for LLMs
8 versions - Latest release: 4 months ago - 151 downloads last month
pypi: superlaser 0.0.6
An MLOps library for LLM deployment w/ the vLLM engine on RunPod's infra.
6 versions - Latest release: 10 months ago - 149 downloads last month
pypi: hive-vllm 0.0.1
(no description provided)
1 version - Latest release: 10 months ago - 17 downloads last month
pypi: nextai-vllm 0.0.7
A high-throughput and memory-efficient inference and serving engine for LLMs
6 versions - Latest release: 8 months ago - 85 downloads last month
pypi: llm-swarm 0.1.1
A high-throughput and memory-efficient inference and serving engine for LLMs
2 versions - Latest release: 10 months ago - 68 downloads last month
pypi: vllm-consul 0.2.1
A high-throughput and memory-efficient inference and serving engine for LLMs
5 versions - Latest release: about 1 year ago - 79 downloads last month
Top 9.6% on proxy.golang.org
go: github.com/vllm-project/vllm v0.6.4
A high-throughput and memory-efficient inference and serving engine for LLMs
35 versions - Latest release: about 1 month ago