Ecosyste.ms: Open Collective
An open API service for software projects hosted on Open Collective.
Chatbot Arena
Chatbot Arena is an open platform for crowdsourced AI benchmarking, developed by researchers at UC Berkeley SkyLab and LMSYS.
- Collective: https://opencollective.com/chatbot-arena
- Host: opensource
- Website: https://lmarena.ai/?about
- Code: https://github.com/lm-sys/FastChat
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
github.com/lm-sys/FastChat - Gnomesenpai opened this issue about 1 year ago
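This error typically appears when a model saved in half precision (float16) runs on CPU, where older PyTorch builds lack fp16 matmul kernels; the usual workaround is to promote to float32 on CPU (which is why FastChat's CPU path generally loads weights in float32). A minimal sketch, assuming a hypothetical `safe_linear` helper rather than FastChat's actual code:

```python
import torch

def safe_linear(x: torch.Tensor, w: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """addmm (b + x @ w.T) that promotes fp16 operands to fp32 on CPU,
    avoiding: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'."""
    if x.device.type == "cpu" and x.dtype == torch.float16:
        x, w, b = x.float(), w.float(), b.float()
    return torch.addmm(b, x, w.t())

x = torch.randn(4, 8, dtype=torch.float16)   # (batch, in_features)
w = torch.randn(16, 8, dtype=torch.float16)  # (out_features, in_features)
b = torch.randn(16, dtype=torch.float16)
y = safe_linear(x, w, b)                     # fp32 result on CPU
```

With Hugging Face transformers, the equivalent fix is loading the model with `torch_dtype=torch.float32` when targeting CPU.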
How to determine the number of GPUs needed?
github.com/lm-sys/FastChat - 2018211801 opened this issue over 1 year ago
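There is no exact answer, but a back-of-the-envelope estimate helps: weights take parameter-count × bytes-per-dtype, plus headroom for activations and KV cache. A rough sketch (the 1.2× overhead factor and the helper itself are assumptions, not FastChat constants):

```python
import math

def gpus_needed(params_billion: float, gpu_mem_gib: float,
                bytes_per_param: int = 2, overhead: float = 1.2) -> int:
    """Rough GPU count for inference: fp16 weights (2 bytes/param) plus
    ~20% headroom for activations and KV cache. Real usage grows with
    context length and batch size."""
    need_gib = params_billion * 1e9 * bytes_per_param * overhead / 2**30
    return max(1, math.ceil(need_gib / gpu_mem_gib))

print(gpus_needed(13, 24))  # a 13B model in fp16 on 24 GiB cards
```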
Question using Openai-api compatible API service to serve llama-2-7b
github.com/lm-sys/FastChat - Junpliu opened this issue over 1 year ago
move BaseModelWorker outside serve.model_worker to make it independent
github.com/lm-sys/FastChat - liunux4odoo opened this pull request over 1 year ago
The model files and this project are deployed with a hosting provider; I want to use the tokenizer locally. Can this project be used to instantiate a tokenizer?
github.com/lm-sys/FastChat - ye7love7 opened this issue over 1 year ago
How to get logs for every x number of steps instead of epochs
github.com/lm-sys/FastChat - code-x-0018 opened this issue over 1 year ago
gen_model_answer.py generates the phenomenon of multiple conversations
github.com/lm-sys/FastChat - wangtong2020 opened this issue over 1 year ago
Update Dockerfile
github.com/lm-sys/FastChat - dubaoquan404 opened this pull request over 1 year ago
Launch model worker with local weights
github.com/lm-sys/FastChat - CohenQU opened this issue over 1 year ago
Can it be used with the Mistral model?
github.com/lm-sys/FastChat - rajveer43 opened this issue over 1 year ago
OpenAI API server: last two API calls not protected with check_api_key when API keys are used
github.com/lm-sys/FastChat - stelterlab opened this issue over 1 year ago
Ask about the usage of template
github.com/lm-sys/FastChat - coranholmes opened this issue over 1 year ago
add support for bedrock, togetherai, huggingface tgi, replicate, ai21, cohere, ai21
github.com/lm-sys/FastChat - krrishdholakia opened this pull request over 1 year ago
replace os.getenv with os.path.expanduser because the first one doesn…
github.com/lm-sys/FastChat - khalil-Hennara opened this pull request over 1 year ago
Can't get predictions with multi-GPU, but works fine with a single GPU
github.com/lm-sys/FastChat - rajvirdhakhada7 opened this issue over 1 year ago
LongChatV1.5-7B-32K with FlashAtten2 OOMs on inference
github.com/lm-sys/FastChat - jmzeng opened this issue over 1 year ago
Add StreamingLLM to improve streaming performance
github.com/lm-sys/FastChat - BabyChouSr opened this issue over 1 year ago
Fix for single turn dataset
github.com/lm-sys/FastChat - toslunar opened this pull request over 1 year ago
Questions in vicuna_bench with a reference_answer cannot run
github.com/lm-sys/FastChat - toslunar opened this issue over 1 year ago
Update monitor & plots
github.com/lm-sys/FastChat - merrymercy opened this pull request over 1 year ago
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM @fschat 0.2.29, torch 2.0.1+cu118, transformers 4.33.3
github.com/lm-sys/FastChat - ZealHua opened this issue over 1 year ago
Optimize for proper flash attn causal handling
github.com/lm-sys/FastChat - siddartha-RE opened this pull request over 1 year ago
Does python3 -m fastchat.serve.test_message support other models?
github.com/lm-sys/FastChat - lambda7xx opened this issue over 1 year ago
Releases and tags are not in sync
github.com/lm-sys/FastChat - bufferoverflow opened this issue over 1 year ago
Add metharme (pygmalion) conversation template
github.com/lm-sys/FastChat - AlpinDale opened this pull request over 1 year ago
Third Party UI Example
github.com/lm-sys/FastChat - enochlev opened this pull request over 1 year ago
Update train code to support the new tokenizer
github.com/lm-sys/FastChat - Ying1123 opened this pull request over 1 year ago
Update links to lmsys-chat-1m
github.com/lm-sys/FastChat - merrymercy opened this pull request over 1 year ago
Using `--device mps` results in `RuntimeError: PyTorch is not linked with support for mps devices`
github.com/lm-sys/FastChat - ngreve opened this issue over 1 year ago
Vicuna template may cause the tokenization mismatch warning
github.com/lm-sys/FastChat - Kong-Aobo opened this issue over 1 year ago
Update vllm_worker.py fix bug #2491 vllm 0.2.0 version from vllm.engine.async_l…
github.com/lm-sys/FastChat - exceedzhang opened this pull request over 1 year ago
vllm_worker.py run error on vllm 0.2.0
github.com/lm-sys/FastChat - exceedzhang opened this issue over 1 year ago
Fails to start vllm_worker for codellama/CodeLlama-7b-Instruct-hf on two T4 GPUs
github.com/lm-sys/FastChat - bugzyz opened this issue over 1 year ago
Pass root_path argument in web_server.py
github.com/lm-sys/FastChat - pweglik opened this issue over 1 year ago
ValueError: Tokenizer class QWenTokenizer does not exist or is not currently imported.
github.com/lm-sys/FastChat - thiner opened this issue over 1 year ago
Can't load model Qwen-14B-Chat-Int4
github.com/lm-sys/FastChat - dijkstra-mose opened this issue over 1 year ago
Fix chunk handling when partial chunks are returned
github.com/lm-sys/FastChat - siddartha-RE opened this pull request over 1 year ago
Update openai_api_server.py to add an SSL option
github.com/lm-sys/FastChat - brandonbiggs opened this pull request over 1 year ago
Add Mistral AI instruction template
github.com/lm-sys/FastChat - lerela opened this pull request over 1 year ago
Output in a defined JSON format
github.com/lm-sys/FastChat - rounak610 opened this issue over 1 year ago
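FastChat does not enforce an output schema itself; a common client-side approach is to instruct the model to answer in JSON and then extract and validate the object from the reply. A minimal sketch (`extract_json` is a hypothetical helper):

```python
import json

def extract_json(text: str) -> dict:
    """Pull the first {...} span out of a model reply and parse it.
    Models often wrap JSON in prose, so we slice from the first '{'
    to the last '}' before parsing; raises ValueError if none found."""
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end <= start:
        raise ValueError("no JSON object found in model output")
    return json.loads(text[start:end + 1])

reply = 'Sure, here you go: {"name": "vicuna", "score": 8}'
data = extract_json(reply)
```

On a parse failure, a typical loop re-prompts the model with the error message until valid JSON comes back.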
Can Qwen-14B be loaded and served with 8-bit quantization? The FastChat code is restricted to the .bin format, but Qwen-14B only ships safetensors files.
github.com/lm-sys/FastChat - ye7love7 opened this issue over 1 year ago
Add conversation templates for ultralm
github.com/lm-sys/FastChat - lifan-yuan opened this pull request over 1 year ago
HOME Environment Variable Does Not Exist on Windows 11
github.com/lm-sys/FastChat - Skareeg opened this issue over 1 year ago
Model does not load in Docker
github.com/lm-sys/FastChat - Rachneet opened this issue over 1 year ago
Support fine-tuning Qwen-14B
github.com/lm-sys/FastChat - lucasjinreal opened this issue over 1 year ago
Very strange: RTX 3090 loading vicuna-7b uses 22 GB of video memory
github.com/lm-sys/FastChat - ye7love7 opened this issue over 1 year ago
Help: llama2-13b with vllm_worker on two A100 (80 GB) GPUs; RayWorker throws an exception
github.com/lm-sys/FastChat - azureboot opened this issue over 1 year ago
Running the Qwen model, the response is garbled
github.com/lm-sys/FastChat - lonngxiang opened this issue over 1 year ago
Help: vLLM with qwen-7b-chat fails: RuntimeError: Failed to load the model config.
github.com/lm-sys/FastChat - ye7love7 opened this issue over 1 year ago
Fix typo in quantization
github.com/lm-sys/FastChat - asaiacai opened this pull request over 1 year ago
How to get streaming output when running python3 -m fastchat.serve.api?
github.com/lm-sys/FastChat - lonngxiang opened this issue over 1 year ago
[model_worker] Does the `worker-address` argument need to change when the `host` or `port` arguments change?
github.com/lm-sys/FastChat - hi-jin opened this issue over 1 year ago
ValueError: Tokenizer class BaichuanTokenizer does not exist or is not currently imported.
github.com/lm-sys/FastChat - lonngxiang opened this issue over 1 year ago
[Fine-tuning fail]: Problem running FastChat-T5 fine-tuning
github.com/lm-sys/FastChat - pcchen-ntunlp opened this issue over 1 year ago
Fix falcon chat template
github.com/lm-sys/FastChat - merrymercy opened this pull request over 1 year ago
vLLM worker AWQ quantization update
github.com/lm-sys/FastChat - dongxiaolong opened this pull request over 1 year ago
Potential Issue of Vicuna v1.5 Training
github.com/lm-sys/FastChat - xingyaoww opened this issue over 1 year ago
Show terms of use as a JS alert
github.com/lm-sys/FastChat - merrymercy opened this pull request over 1 year ago
Llama 2 fine-tuning on multi-GPU with long context length
github.com/lm-sys/FastChat - amant555 opened this issue over 1 year ago
No module named 'transformers' in docker-compose.yml
github.com/lm-sys/FastChat - theUpsider opened this issue over 1 year ago
llama-2 70b model times out via the openai_api_server chat completions API or the Gradio interface
github.com/lm-sys/FastChat - harshitpatni opened this issue over 1 year ago
Hugging Face API worker
github.com/lm-sys/FastChat - hnyls2002 opened this pull request over 1 year ago
Add ExllamaV2 Inference Framework Support.
github.com/lm-sys/FastChat - leonxia1018 opened this pull request over 1 year ago
Add Exllama2 Inference Framework Support.
github.com/lm-sys/FastChat - leonxia1018 opened this pull request over 1 year ago
Issue encountered when using the new version of MT-Bench
github.com/lm-sys/FastChat - endNone opened this issue over 1 year ago
Enable quantization support for the vLLM worker
github.com/lm-sys/FastChat - dongxiaolong opened this issue over 1 year ago
DLL importing issues on Windows
github.com/lm-sys/FastChat - majidbhatti opened this issue over 1 year ago
No GET endpoints on model_workers for Kubernetes "liveness" and "readiness" probes
github.com/lm-sys/FastChat - aecorn opened this issue over 1 year ago
Add Optional SSL Support to controller.py
github.com/lm-sys/FastChat - brandonbiggs opened this pull request over 1 year ago
Error using LangChain's RetrievalQA chain for vector retrieval with a FastChat LLM model hosted on an endpoint (GPU machine)
github.com/lm-sys/FastChat - Smitraj007 opened this issue over 1 year ago
Unable to Load Llama-2-70B-chat-GPTQ
github.com/lm-sys/FastChat - chengyanwu opened this issue over 1 year ago
openai.error.APIConnectionError
github.com/lm-sys/FastChat - 385628424 opened this issue over 1 year ago
block_css does not work well for Chatbot
github.com/lm-sys/FastChat - LiuZhihhxx opened this issue over 1 year ago
OpenAI interface: add beam search and best-of-2
github.com/lm-sys/FastChat - leiwen83 opened this pull request over 1 year ago
[WIP] Add microsoft/phi-1_5
github.com/lm-sys/FastChat - BabyChouSr opened this pull request over 1 year ago
Data cleaning scripts for dataset release
github.com/lm-sys/FastChat - merrymercy opened this pull request over 1 year ago
How do I load the BGE model and start the service?
github.com/lm-sys/FastChat - asenasen123 opened this issue over 1 year ago