An open API service for software projects hosted on Open Collective.

vLLM: https://github.com/vllm-project/vllm

A high-throughput and memory-efficient inference and serving engine for LLMs.

Project activity


New Projects: 0
New Releases: 3

New Issues: 93
New Pull Requests: 165

Closed Issues: 56
Merged Pull Requests: 50
Closed Pull Requests: 20

Issue Authors: 81
Pull Request Authors: 105
Active Maintainers: 26

Time to close issues: 73.9 days
Time to merge pull requests: 3.5 days
Time to close pull requests: 11.8 days
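A couple of throughput ratios follow directly from the period counts listed above. The sketch below simply copies those figures and does the arithmetic; the variable names are mine, not part of the page's data model.

```python
# Raw counts copied from the activity-period stats above.
new_issues = 93
closed_issues = 56
merged_prs = 50
closed_unmerged_prs = 20

# Issues closed in the period relative to issues opened in it.
issue_close_rate = closed_issues / new_issues

# Of the PRs that reached a terminal state this period, the share merged.
pr_merge_rate = merged_prs / (merged_prs + closed_unmerged_prs)

print(f"issue close rate: {issue_close_rate:.1%}")  # 60.2%
print(f"PR merge rate: {pr_merge_rate:.1%}")        # 71.4%
```

Note these are period-local ratios: the 56 closed issues need not be among the 93 opened in the same window.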

Loading...

Commit Stats

No commits found within this period.

Commits: 0
Commit Authors: 0
Commit Committers: 0
Additions: 0
Deletions: 0


Topics: amd, cuda, deepseek, gpt, hpu, inference, inferentia, llama, llm, llm-serving, llmops, mlops, model-serving, pytorch, qwen, rocm, tpu, trainium, transformer, xpu

Last synced: 1 day ago
A JSON representation of this data is available via the API.


Collective


New Issues: 2751 (+1469)
New Pull Requests: 3576 (+2526)
Closed Issues: 1217 (+837)
Merged Pull Requests: 1373 (+922)
Closed Pull Requests: 1708 (+1164)
Issue Authors: 1912 (+962)
Time-to-Close Issues: 45.0 hrs (-44.8)
Not Merged PRs: 335 (+242)
Pull Request Authors: 889 (+569)
Time-to-Close PRs: 8.1 hrs (-11.8)
Time-to-Merge PRs: 4.6 hrs (-5.1)
Maintainers: 54 (+28)
Releases: 33 (+4)

