Ecosyste.ms: OpenCollective

An open API service for software projects hosted on Open Collective.

vLLM

vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
Collective - Host: opensource - https://opencollective.com/vllm - Code: https://github.com/vllm-project/vllm
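To illustrate what the project does, here is a minimal offline-generation sketch using vLLM's Python API, in the style of its quickstart. The model name `facebook/opt-125m` is just an example choice, and running this assumes `vllm` is installed with a supported GPU available:

```python
from vllm import LLM, SamplingParams  # vLLM's offline inference entry points

# Prompts to complete; SamplingParams controls decoding behavior.
prompts = ["Hello, my name is", "The capital of France is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# LLM loads the weights and manages KV-cache memory efficiently.
llm = LLM(model="facebook/opt-125m")  # example model; other HF causal LMs work too

# generate() batches the prompts for high-throughput inference.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(f"{output.prompt!r} -> {output.outputs[0].text!r}")
```

vLLM also ships an OpenAI-compatible HTTP server (`vllm serve <model>`) for the serving side mentioned above.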

[tpu][misc] fix typo

github.com/vllm-project/vllm - youkaichao opened this pull request about 1 month ago
[Feature]: Dockerfile.cpu for aarch64

github.com/vllm-project/vllm - khayamgondal opened this issue about 1 month ago
[Bugfix] Fix broken OpenAI tensorizer test

github.com/vllm-project/vllm - DarkLight1337 opened this pull request about 1 month ago
[Usage]: Single-node multi-GPU inference

github.com/vllm-project/vllm - zhentingqi opened this issue about 1 month ago
[Bugfix] Fix Hermes tool call chat template bug

github.com/vllm-project/vllm - K-Mistele opened this pull request about 1 month ago
[Installation]:

github.com/vllm-project/vllm - ndao600 opened this issue about 1 month ago
[Bugfix] Fix LongRoPE bug

github.com/vllm-project/vllm - garg-amit opened this pull request about 1 month ago
[not-for-review] test PR multi py ver

github.com/vllm-project/vllm - khluu opened this pull request about 1 month ago
[Bug]: Tensorizer test is broken

github.com/vllm-project/vllm - alexeykondrat opened this issue about 1 month ago
[Model] Support multiple images for qwen-vl

github.com/vllm-project/vllm - alex-jw-brooks opened this pull request about 1 month ago
[Kernel] Build flash-attn from source

github.com/vllm-project/vllm - ProExpertProg opened this pull request about 1 month ago
[Bug]: Missing TextTokenPrompts class

github.com/vllm-project/vllm - shubh9m opened this issue about 1 month ago
[Misc]: kvcache hash collision

github.com/vllm-project/vllm - WangErXiao opened this issue about 1 month ago
[Usage]: FP8 and INT8

github.com/vllm-project/vllm - chenchunhui97 opened this issue about 1 month ago
[Installation]: Dockerfile for aarch64

github.com/vllm-project/vllm - khayamgondal opened this issue about 2 months ago
[Misc] Remove `SqueezeLLM`

github.com/vllm-project/vllm - dsikka opened this pull request about 2 months ago
[Feature]: Supporting Guided Decoding via AsyncLLMEngine

github.com/vllm-project/vllm - DhruvaBansal00 opened this issue about 2 months ago
[Misc] Fused MoE Marlin support for GPTQ

github.com/vllm-project/vllm - dsikka opened this pull request about 2 months ago
[BugFix] Fix Granite model configuration

github.com/vllm-project/vllm - njhill opened this pull request about 2 months ago
[Misc] Add GPTQ Marlin Fused MoE Support

github.com/vllm-project/vllm - dsikka opened this pull request about 2 months ago
[Bug]: FastAPI 0.113.0 breaks vLLM OpenAPI

github.com/vllm-project/vllm - drikster80 opened this issue about 2 months ago
[Misc] Upgrade vllm-flash-attn to v2.6.2

github.com/vllm-project/vllm - WoosukKwon opened this pull request about 2 months ago
Fix shutdown problem

github.com/vllm-project/vllm - Bye-legumes opened this pull request about 2 months ago
[Bug]: Shutdown problem when we use ADAG

github.com/vllm-project/vllm - Bye-legumes opened this issue about 2 months ago
[Bug]: Crashing

github.com/vllm-project/vllm - Abdulhanan535 opened this issue about 2 months ago
[Model] Adding Granite MoE.

github.com/vllm-project/vllm - shawntan opened this pull request about 2 months ago
[Misc]: benchmark_serving with image input

github.com/vllm-project/vllm - Mrxiangli opened this issue about 2 months ago
[CI/Build] Increasing timeout for multiproc worker tests

github.com/vllm-project/vllm - alexeykondrat opened this pull request about 2 months ago
[Core] *Prompt* logprobs support in Multi-step

github.com/vllm-project/vllm - afeldman-nm opened this pull request about 2 months ago
[Bug]: vllm.engine.async_llm_engine.AsyncEngineDeadError

github.com/vllm-project/vllm - NicolasDrapier opened this issue about 2 months ago
[OpenVINO] Enable GPU support for OpenVINO vLLM backend

github.com/vllm-project/vllm - sshlyapn opened this pull request about 2 months ago
[Frontend] Add --logprobs argument to `benchmark_serving.py`

github.com/vllm-project/vllm - afeldman-nm opened this pull request about 2 months ago
[Usage]: how to release cuda memory

github.com/vllm-project/vllm - UCC-team opened this issue about 2 months ago
[Doc] Add multi-image input example and update supported models

github.com/vllm-project/vllm - DarkLight1337 opened this pull request about 2 months ago
[Core/Bugfix] pass VLLM_ATTENTION_BACKEND to ray workers

github.com/vllm-project/vllm - SolitaryThinker opened this pull request about 2 months ago
[New Model]: Support for allenai/OLMoE-1B-7B-0924

github.com/vllm-project/vllm - GulatiAditya opened this issue about 2 months ago
[bugfix] Upgrade minimum OpenAI version

github.com/vllm-project/vllm - SolitaryThinker opened this pull request about 2 months ago
[Model] Allow loading from original Mistral format

github.com/vllm-project/vllm - patrickvonplaten opened this pull request about 2 months ago
Bump version to v0.6.0

github.com/vllm-project/vllm - simon-mo opened this pull request about 2 months ago
Move verify_marlin_supported to GPTQMarlinLinearMethod

github.com/vllm-project/vllm - mgoin opened this pull request about 2 months ago
[CI] Change test input in Gemma LoRA test

github.com/vllm-project/vllm - WoosukKwon opened this pull request about 2 months ago
[Misc] remove peft as dependency for prompt models

github.com/vllm-project/vllm - prashantgupta24 opened this pull request about 2 months ago
[Doc] [Misc] Create CODE_OF_CONDUCT.md

github.com/vllm-project/vllm - mmcelaney opened this pull request about 2 months ago
[ci] Mark LoRA test as soft-fail

github.com/vllm-project/vllm - khluu opened this pull request about 2 months ago
[Bug]: vllm async engine can not use adag

github.com/vllm-project/vllm - Bye-legumes opened this issue about 2 months ago
[Core][Bugfix][Perf] Introduce `MQLLMEngine` to avoid `asyncio` OH

github.com/vllm-project/vllm - alexm-neuralmagic opened this pull request about 2 months ago
[Bugfix] Fix missing `post_layernorm` in CLIP

github.com/vllm-project/vllm - DarkLight1337 opened this pull request about 2 months ago