An open API service for software projects hosted on Open Collective.

vLLM: github.com/vllm-project/vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://github.com/vllm-project/vllm

Project activity


New Projects: 0
New Releases: 0

New Issues: 0
New Pull Requests: 0

Closed Issues: 0
Merged Pull Requests: 0
Closed Pull Requests: 0

Issue Authors: 0
Pull Request Authors: 0
Active Maintainers: 0

Time to close issues: N/A
Time to merge pull requests: N/A
Time to close pull requests: N/A


Commit Stats


Commits: 0
Commit Authors: 0
Commit Committers: 0
Additions: 0
Deletions: 0


Topics: amd, blackwell, cuda, deepseek, deepseek-v3, gpt, gpt-oss, inference, kimi, llama, llm, llm-serving, model-serving, moe, openai, pytorch, qwen, qwen3, tpu, transformer

Last synced: 7 days ago
JSON representation
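
The JSON representation exposes the project metadata shown above in machine-readable form. A minimal sketch of parsing such a document in Python; the field names (`name`, `description`, `keywords`, `last_synced_at`) are assumptions for illustration and may differ from the actual API response:

```python
import json

# Hypothetical example of the project's JSON representation;
# actual field names returned by the API may differ.
sample = """
{
  "name": "vllm",
  "url": "https://github.com/vllm-project/vllm",
  "description": "A high-throughput and memory-efficient inference and serving engine for LLMs",
  "keywords": ["llm", "inference", "pytorch"],
  "last_synced_at": "2024-01-01T00:00:00Z"
}
"""

project = json.loads(sample)
print(project["name"])                 # vllm
print(", ".join(project["keywords"]))  # llm, inference, pytorch
```

Consuming the JSON endpoint rather than scraping the rendered page avoids the placeholder values shown before client-side widgets finish loading.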
