Ecosyste.ms: OpenCollective

An open API service for software projects hosted on Open Collective.

vLLM

vLLM is a high-throughput and memory-efficient inference and serving engine for large language models (LLMs).
Collective - Host: opensource - https://opencollective.com/vllm - Code: https://github.com/vllm-project/vllm

[Hardware] Initial TPU integration

github.com/vllm-project/vllm - WoosukKwon opened this pull request 5 months ago
[Misc] Skip for logits_scale == 1.0

github.com/vllm-project/vllm - WoosukKwon opened this pull request 5 months ago
[Misc] Missing error message for custom ops import

github.com/vllm-project/vllm - DamonFool opened this pull request 5 months ago
trigger_ci_cd

github.com/vllm-project/vllm - sergey-tinkoff opened this pull request 5 months ago
[Bug]: Regression in predictions in v0.4.3

github.com/vllm-project/vllm - hibukipanim opened this issue 5 months ago
[Model] Dynamic image size support for LLaVA-NeXT

github.com/vllm-project/vllm - DarkLight1337 opened this pull request 5 months ago
test

github.com/vllm-project/vllm - geeker-smallwhite opened this pull request 5 months ago
[Core] Dynamic image size support for VLMs

github.com/vllm-project/vllm - DarkLight1337 opened this pull request 5 months ago
[Kernel] Update Cutlass int8 kernel configs for SM80

github.com/vllm-project/vllm - varun-sundar-rabindranath opened this pull request 5 months ago
[Bug]: chatglm3 with lora adapter

github.com/vllm-project/vllm - Qingyuncookie opened this issue 5 months ago
[Misc] Fix docstring of get_attn_backend

github.com/vllm-project/vllm - WoosukKwon opened this pull request 5 months ago
[Bug]: a bug

github.com/vllm-project/vllm - lambda7xx opened this issue 5 months ago
[Bugfix] Destroy PP groups properly

github.com/vllm-project/vllm - andoorve opened this pull request 5 months ago
[Kernel] Allow 8-bit outputs for cutlass_scaled_mm

github.com/vllm-project/vllm - tlrmchlsmth opened this pull request 5 months ago
p

github.com/vllm-project/vllm - khluu opened this pull request 5 months ago
[Misc] Add CustomOp interface for device portability

github.com/vllm-project/vllm - WoosukKwon opened this pull request 5 months ago
[CI/Build] Reducing CPU CI execution time

github.com/vllm-project/vllm - bigPYJ1151 opened this pull request 5 months ago
[Frontend] Add OpenAI Vision API Support

github.com/vllm-project/vllm - ywang96 opened this pull request 5 months ago
[Bugfix] Add warmup for prefix caching example

github.com/vllm-project/vllm - zhuohan123 opened this pull request 5 months ago
Bugfix: fix broken download of models from modelscope

github.com/vllm-project/vllm - liuyhwangyh opened this pull request 5 months ago
[Model] Correct Mixtral FP8 checkpoint loading

github.com/vllm-project/vllm - comaniac opened this pull request 5 months ago
[Feature]: Custom attention masks

github.com/vllm-project/vllm - ojus1 opened this issue 5 months ago
v0.5.0 Release Tracker

github.com/vllm-project/vllm - simon-mo opened this issue 5 months ago
Support W4A8 quantization for vllm

github.com/vllm-project/vllm - HandH1998 opened this pull request 5 months ago
[Bugfix] Support `prompt_logprobs==0`

github.com/vllm-project/vllm - toslunar opened this pull request 5 months ago
[CI/Build] Add inputs tests

github.com/vllm-project/vllm - DarkLight1337 opened this pull request 5 months ago
[Core] Registry for processing model inputs

github.com/vllm-project/vllm - DarkLight1337 opened this pull request 5 months ago
[Installation]: Failed to build punica

github.com/vllm-project/vllm - asinglestep opened this issue 5 months ago
[Feature]: Speculative edits

github.com/vllm-project/vllm - Muhtasham opened this issue 5 months ago
[Frontend] Customizable RoPE theta

github.com/vllm-project/vllm - sasha0552 opened this pull request 5 months ago
push error

github.com/vllm-project/vllm - triple-Mu opened this pull request 5 months ago
[Misc] Improve error message when LoRA parsing fails

github.com/vllm-project/vllm - DarkLight1337 opened this pull request 5 months ago
[Core] Support loading GGUF model

github.com/vllm-project/vllm - Isotr0py opened this pull request 5 months ago
[Bug]: loading squeezellm model

github.com/vllm-project/vllm - yuhuixu1993 opened this issue 5 months ago
[Model] Add PaliGemma

github.com/vllm-project/vllm - ywang96 opened this pull request 5 months ago
[BugFix] Prevent `LLM.encode` for non-generation Models

github.com/vllm-project/vllm - robertgshaw2-neuralmagic opened this pull request 5 months ago
[Kernel] Switch fp8 layers to use the CUTLASS kernels

github.com/vllm-project/vllm - tlrmchlsmth opened this pull request 5 months ago
[Feature]: BERT models for embeddings

github.com/vllm-project/vllm - mevince opened this issue 5 months ago
[Model] LoRA support added for command-r

github.com/vllm-project/vllm - sergey-tinkoff opened this pull request 5 months ago
[Usage]: Prefix caching in VLLM

github.com/vllm-project/vllm - Abhinay2323 opened this issue 5 months ago
draft2

github.com/vllm-project/vllm - khluu opened this pull request 5 months ago
[Bugfix] Remove deprecated @abstractproperty

github.com/vllm-project/vllm - zhuohan123 opened this pull request 5 months ago
Adding fp8 gemm computation

github.com/vllm-project/vllm - charlifu opened this pull request 5 months ago