vLLM
vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
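For context on what the project provides, below is a minimal sketch of vLLM's offline Python API (the model name and prompts are illustrative; vLLM also exposes an OpenAI-compatible HTTP server for online serving).

```python
# Minimal offline-inference sketch using vLLM's Python API.
# The model name is illustrative; any Hugging Face model supported by vLLM works.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "vLLM achieves high throughput by",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the model and run batched generation in one call.
llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```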
- Collective: https://opencollective.com/vllm
- Host: opensource
- Code: https://github.com/vllm-project/vllm
Financials
Expenses: 4 (-$14,574.68)
Donors: 2
Spenders: 1
Project activity
New Projects: 0
New Releases: 0
New Issues: 0
New Pull Requests: 0
Closed Issues: 0
Merged Pull Requests: 0
Closed Pull Requests: 0
Issue Authors: 0
Pull Request Authors: 0
Active Maintainers: 0
Time to close issues: N/A
Time to merge pull requests: N/A
Time to close pull requests: N/A
Commit Stats
Commits: 0
Commit Authors: 0
Commit Committers: 0
Additions: 0
Deletions: 0