vLLM
vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
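To make "inference and serving engine" concrete, here is a minimal sketch of vLLM's offline batch-inference Python API; the model name, prompts, and sampling settings are illustrative choices, not details taken from this report.

    # Minimal offline-inference sketch using vLLM's Python API.
    # Model and prompts below are illustrative, not from this report.
    from vllm import LLM, SamplingParams

    prompts = [
        "The capital of France is",
        "vLLM achieves high throughput by",
    ]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

    # LLM() loads the model and manages GPU KV-cache memory internally.
    llm = LLM(model="facebook/opt-125m")

    # generate() batches the prompts and returns one RequestOutput per prompt.
    outputs = llm.generate(prompts, sampling_params)
    for output in outputs:
        print(output.prompt, "->", output.outputs[0].text)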
- Collective: https://opencollective.com/vllm
- Host: opensource
- Code: https://github.com/vllm-project/vllm
Financials
Expenses: 5 (-$438.80 total)
Donors: 4
Spenders: 2
Project activity
New Projects: 0
New Releases: 4
New Issues: 234
New Pull Requests: 367
Closed Issues: 57
Merged Pull Requests: 106
Closed Pull Requests: 30
Issue Authors: 209
Pull Request Authors: 186
Active Maintainers: 31
Time to close issues: 31.9 days
Time to merge pull requests: 4.4 days
Time to close pull requests: 10.5 days
Commit stats
Commits: 0
Commit Authors: 0
Commit Committers: 0
Additions: 0
Deletions: 0