Ecosyste.ms: OpenCollective

An open API service for software projects hosted on Open Collective.

vLLM

vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
Collective - Host: opensource - https://opencollective.com/vllm - Code: https://github.com/vllm-project/vllm

[Usage]: Make request to LLAVA server.

github.com/vllm-project/vllm - premg16 opened this issue 9 months ago
[Frontend] Support GPT-4V Chat Completions API

github.com/vllm-project/vllm - DarkLight1337 opened this pull request 9 months ago
[Model] Initial support for LLaVA-NeXT

github.com/vllm-project/vllm - DarkLight1337 opened this pull request 9 months ago
[Core] Support image processor

github.com/vllm-project/vllm - DarkLight1337 opened this pull request 9 months ago
[Misc]: optimize eager mode host time

github.com/vllm-project/vllm - functionxu123 opened this pull request 9 months ago
Adding max queue time parameter

github.com/vllm-project/vllm - KrishnaM251 opened this pull request 9 months ago
[Usage]: Llama 3 8B Instruct Inference

github.com/vllm-project/vllm - aliozts opened this issue 9 months ago
[Feature]: AMD ROCm 6.1 Support

github.com/vllm-project/vllm - kannan-scalers-ai opened this issue 9 months ago
[Feature]: Phi2 LoRA support

github.com/vllm-project/vllm - zero-or-one opened this issue 9 months ago
[Misc] Add customized information for models

github.com/vllm-project/vllm - jeejeelee opened this pull request 9 months ago
[Bug]: Invalid Device Ordinal on ROCm

github.com/vllm-project/vllm - Bellk17 opened this issue 9 months ago
[Misc] [CI]: AMD test flaky on main CI

github.com/vllm-project/vllm - cadedaniel opened this issue 9 months ago
[Model] Jamba support

github.com/vllm-project/vllm - mzusman opened this pull request 9 months ago
[CI/BUILD] enable intel queue for longer CPU tests

github.com/vllm-project/vllm - zhouyuan opened this pull request 9 months ago
[Bug]: --engine-use-ray is broken. #4100

github.com/vllm-project/vllm - jdinalt opened this pull request 9 months ago
[Bug]: guided_json bad output for llama2-13b

github.com/vllm-project/vllm - pseudotensor opened this issue 9 months ago
[Model] Adding support for MiniCPM-V

github.com/vllm-project/vllm - HwwwwwwwH opened this pull request 9 months ago
[Bug]: vllm_C is missing.

github.com/vllm-project/vllm - Calvinnncy97 opened this issue 9 months ago
[Model] Add support for 360zhinao

github.com/vllm-project/vllm - garycaokai opened this pull request 9 months ago
[Bug]: RuntimeError: Unknown layout

github.com/vllm-project/vllm - zzlgreat opened this issue 9 months ago
[Usage]: Problem when loading my trained model.

github.com/vllm-project/vllm - hummingbird2030 opened this issue 9 months ago
[Feature]: bitsandbytes support

github.com/vllm-project/vllm - orellavie1212 opened this issue 9 months ago
[Frontend] Refactor prompt processing

github.com/vllm-project/vllm - DarkLight1337 opened this pull request 9 months ago
[Bug]: start api server stuck

github.com/vllm-project/vllm - QianguoS opened this issue 9 months ago
[Core] Support LoRA on quantized models

github.com/vllm-project/vllm - jeejeelee opened this pull request 9 months ago
[Kernel] Fused MoE Config for Mixtral 8x22

github.com/vllm-project/vllm - ywang96 opened this pull request 9 months ago
[Usage]: flash_attn vs xformers

github.com/vllm-project/vllm - VeryVery opened this issue 9 months ago
[CI/Build] Reduce race condition in docker build

github.com/vllm-project/vllm - youkaichao opened this pull request 10 months ago
[Bug]: StableLM 12b head size incorrect

github.com/vllm-project/vllm - bjoernpl opened this issue 10 months ago
[Model] LoRA gptbigcode implementation

github.com/vllm-project/vllm - raywanb opened this pull request 10 months ago
[Model] Initialize Fuyu-8B support

github.com/vllm-project/vllm - Isotr0py opened this pull request 10 months ago
[Kernel] PyTorch Labs Fused MoE Kernel Integration

github.com/vllm-project/vllm - robertgshaw2-neuralmagic opened this pull request 10 months ago
[Bug]: killed due to high memory usage

github.com/vllm-project/vllm - xiewf1990 opened this issue 10 months ago
[Bug]: Cannot load lora adapters in WSL 2

github.com/vllm-project/vllm - invokeinnovation opened this issue 10 months ago
[Roadmap] vLLM Roadmap Q2 2024

github.com/vllm-project/vllm - simon-mo opened this issue 10 months ago
[Frontend] openAI entrypoint dynamic adapter load

github.com/vllm-project/vllm - DavidPeleg6 opened this pull request 10 months ago
[Misc]: Implement CPU/GPU swapping in BlockManagerV2

github.com/vllm-project/vllm - Kaiyang-Chen opened this pull request 10 months ago
[Model] Cohere CommandR+

github.com/vllm-project/vllm - saurabhdash2512 opened this pull request 10 months ago
[Bug]: YI:34B does not stop generating during use.

github.com/vllm-project/vllm - cat2353050774 opened this issue 10 months ago