vLLM
vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
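For context, here is a minimal offline-inference sketch based on vLLM's documented Python quickstart API; the model name, prompts, and sampling settings are illustrative:

```python
from vllm import LLM, SamplingParams

# Load a model for offline batch inference (model name is illustrative).
llm = LLM(model="facebook/opt-125m")

# Sampling settings here are illustrative, not recommendations.
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

prompts = ["vLLM is", "The capital of France is"]
outputs = llm.generate(prompts, params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```

The same engine can also be exposed as an OpenAI-compatible HTTP server via the `vllm serve` command.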
- Collective: https://opencollective.com/vllm
- Host: opensource
- Code: https://github.com/vllm-project/vllm
Financials
Expenses: 6 (totaling $7,384.71)
Donors: 4
Spenders: 2
Project activity
New Projects: 0
New Releases: 3
New Issues: 226
New Pull Requests: 450
Closed Issues: 87
Merged Pull Requests: 139
Closed Pull Requests: 42
Issue Authors: 198
Pull Request Authors: 200
Active Maintainers: 32
Time to close issues: 39.6 days
Time to merge pull requests: 5.3 days
Time to close pull requests: 11.3 days
Commit stats
Commits: 0
Commit Authors: 0
Commit Committers: 0
Additions: 0
Deletions: 0