vLLM
vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
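To give a sense of what the engine does, below is a minimal offline-inference sketch using vLLM's Python API. The model name and sampling settings are arbitrary examples, not part of this report; exact defaults may vary between releases.

    # Minimal offline-inference sketch with vLLM (model name is just an example).
    from vllm import LLM, SamplingParams

    prompts = ["The capital of France is"]
    sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=32)

    # Load the model once, then run batched generation over the prompts.
    llm = LLM(model="facebook/opt-125m")
    outputs = llm.generate(prompts, sampling_params)

    for output in outputs:
        print(output.prompt, output.outputs[0].text)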
Collective: https://opencollective.com/vllm
Host: opensource
Code: https://github.com/vllm-project/vllm
Financials
Expenses: 1 (-$100.00)
Donors: 2
Spenders: 1
Project activity
New Projects: 0
New Releases: 3
New Issues: 93
New Pull Requests: 165
Closed Issues: 56
Merged Pull Requests: 50
Closed Pull Requests: 20
Issue Authors: 81
Pull Request Authors: 105
Active Maintainers: 26
Time to close issues: 73.9 days
Time to merge pull requests: 3.5 days
Time to close pull requests: 11.8 days
Commit Stats
Commits: 0
Commit Authors: 0
Commit Committers: 0
Additions: 0
Deletions: 0