Ecosyste.ms: OpenCollective
An open API service for software projects hosted on Open Collective.
Chatbot Arena
Chatbot Arena is an open platform for crowdsourced AI benchmarking, developed by researchers at UC Berkeley SkyLab and LMSYS.
- Collective: https://opencollective.com/chatbot-arena
- Host: opensource
- Website: https://lmarena.ai/?about
- Code: https://github.com/lm-sys/FastChat
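As a rough illustration of what "an open API service" means here, project metadata like the listing above could in principle be fetched programmatically. The host, endpoint path, parameters, and response shape in this sketch are assumptions for illustration, not the documented Ecosyste.ms API:

```python
import requests

# NOTE: the host, path, and query parameters below are assumptions --
# consult the Ecosyste.ms documentation for the actual API surface.
BASE = "https://opencollective.ecosyste.ms/api/v1"

resp = requests.get(
    f"{BASE}/projects",
    params={"url": "https://opencollective.com/chatbot-arena"},
    timeout=30,
)
resp.raise_for_status()

# The response schema is also assumed; print whatever comes back for inspection.
print(resp.json())
```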
chatglm3 error: why are there multiple <|assistant|> <|user|> tags in the generated data?
github.com/lm-sys/FastChat - lonngxiang opened this issue about 1 year ago
Add Hermes 2.5 [fixed]
github.com/lm-sys/FastChat - 152334H opened this pull request about 1 year ago
fix: UnboundLocalError: local variable 'stopped' referenced before assignment
github.com/lm-sys/FastChat - purpleroc opened this pull request about 1 year ago
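For context on the class of bug named in this fix (a generic Python sketch, not FastChat's actual code): an `UnboundLocalError` occurs when a name is assigned somewhere in a function, making it local, but is read on a path where that assignment never ran.

```python
def generation_stopped(tokens):
    for t in tokens:
        if t == "<eos>":
            stopped = True  # 'stopped' is only bound on this branch
            break
    return stopped  # UnboundLocalError if no "<eos>" token was seen


# Fix: bind the variable before the loop so every path can read it.
def generation_stopped_fixed(tokens):
    stopped = False
    for t in tokens:
        if t == "<eos>":
            stopped = True
            break
    return stopped
```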
fastchat API + chatglm2-6b: no error during testing, but the result is empty?
github.com/lm-sys/FastChat - Wangqi12138 opened this issue about 1 year ago
Inference does not stop with Llama-2-13B-GPTQ with exllama
github.com/lm-sys/FastChat - karansikka1 opened this issue about 1 year ago
need pytorch 2.0 for last release?
github.com/lm-sys/FastChat - jesulo opened this issue about 1 year ago
Submit xdan-l1-chat-v0.1 support with a score of 8.09
github.com/lm-sys/FastChat - xiechengmude opened this pull request about 1 year ago
Bump version to v0.2.33
github.com/lm-sys/FastChat - merrymercy opened this pull request about 1 year ago
Format code and minor bug fix
github.com/lm-sys/FastChat - merrymercy opened this pull request about 1 year ago
add trust_remote_code argument
github.com/lm-sys/FastChat - wangshuai09 opened this pull request about 1 year ago
Add Microsoft/Orca-2-7b and update model support docs
github.com/lm-sys/FastChat - BabyChouSr opened this pull request about 1 year ago
[BUG] _raise_timeout_error when training chatglm2-6b
github.com/lm-sys/FastChat - wangshuai09 opened this issue about 1 year ago
how to load a fine-tuned chatglm3?
github.com/lm-sys/FastChat - estuday opened this issue about 1 year ago
fix tokenizer of chatglm2
github.com/lm-sys/FastChat - wangshuai09 opened this pull request about 1 year ago
fix tokenizer.pad_token attribute error
github.com/lm-sys/FastChat - wangshuai09 opened this pull request about 1 year ago
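A common source of `pad_token` errors (illustrative of the general pattern, not this PR's actual diff) is that many causal-LM tokenizers ship without a pad token; a typical workaround with Hugging Face `transformers` looks like:

```python
from transformers import AutoTokenizer

# Model name is just an example.
tokenizer = AutoTokenizer.from_pretrained("lmsys/vicuna-7b-v1.5")

# If no pad token is defined, reuse the EOS token so padded batching works.
# (Some tokenizers expose pad_token as a read-only property, in which case a
# different fix is needed -- this is only the common case.)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
```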
Add new vllm serving options
github.com/lm-sys/FastChat - tjtanaa opened this issue about 1 year ago
Unable to connect to chat.lmsys.org on brave browser
github.com/lm-sys/FastChat - tmvkrpxl0 opened this issue about 1 year ago
[BUG] AssertionError when training chatglm2-6b
github.com/lm-sys/FastChat - wangshuai09 opened this issue about 1 year ago
[BUG] AttributeError: can't set attribute when training chatglm2-6b
github.com/lm-sys/FastChat - wangshuai09 opened this issue about 1 year ago
Template for using Deepseek code models
github.com/lm-sys/FastChat - AmaleshV opened this pull request about 1 year ago
argument 'tokens': 'NoneType' object cannot be converted to 'Sequence'
github.com/lm-sys/FastChat - 143heyan opened this issue about 1 year ago
When the model_worker starts, the template parameter settings seem invalid.
github.com/lm-sys/FastChat - Halflifefa opened this issue about 1 year ago
inference with multiple GPUs is too slow
github.com/lm-sys/FastChat - garyyang85 opened this issue about 1 year ago
CUDA Out-of-memory when fine-tuning Vicuna-13B-v1.5 with QLoRA
github.com/lm-sys/FastChat - Dandelionym opened this issue about 1 year ago
add support for Chinese-LLaMA-Alpaca
github.com/lm-sys/FastChat - zollty opened this pull request about 1 year ago
Inquiry on GPU Performance Benchmarks for fastchat Models
github.com/lm-sys/FastChat - hayderkharrufa opened this issue about 1 year ago
Make --load-8bit flag work with weights in safetensors format
github.com/lm-sys/FastChat - xuguodong1999 opened this pull request about 1 year ago
support stable-vicuna model
github.com/lm-sys/FastChat - hi-jin opened this pull request about 1 year ago
[Feat] Add a feature for checking the network connection
github.com/lm-sys/FastChat - dblepart99 opened this issue about 1 year ago
IndexError: tuple index out of range
github.com/lm-sys/FastChat - zhaodaye2022 opened this issue about 1 year ago
why is inference with --load-8bit slower than without it?
github.com/lm-sys/FastChat - lonngxiang opened this issue about 1 year ago
`do_sample` is always set to `FALSE` when using Hugging Face API
github.com/lm-sys/FastChat - ticoneva opened this issue about 1 year ago
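For reference, `do_sample` is the Hugging Face `transformers` generation flag that switches from greedy decoding to sampling; the issue reports that it stays off. An illustrative direct call (not FastChat's internal serving path; the model name is just an example) looks like:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lmsys/vicuna-7b-v1.5"  # example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)

# do_sample=True enables temperature/top_p sampling; with do_sample=False
# these parameters are ignored and decoding falls back to greedy search.
outputs = model.generate(
    **inputs, do_sample=True, temperature=0.7, top_p=0.9, max_new_tokens=64
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```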
how to support multi-modal model like fuyu-8b?
github.com/lm-sys/FastChat - verigle opened this issue about 1 year ago
save model under deepspeed
github.com/lm-sys/FastChat - MrZhengXin opened this pull request about 1 year ago
Check the max_new_tokens <= 0 in openai api server
github.com/lm-sys/FastChat - zeyugao opened this pull request about 1 year ago
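The check described here is plain input validation; a hedged sketch (the function and message are hypothetical, not FastChat's actual handler):

```python
from typing import Optional


def validate_max_new_tokens(max_new_tokens: Optional[int]) -> Optional[str]:
    """Return an error message if the requested token budget is unusable."""
    if max_new_tokens is not None and max_new_tokens <= 0:
        return f"max_new_tokens must be positive, got {max_new_tokens}"
    return None


# A request asking for 0 new tokens is rejected up front instead of
# producing an empty or confusing completion.
assert validate_max_new_tokens(0) is not None
assert validate_max_new_tokens(128) is None
```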
Adding SSL support for model workers and huggingface worker
github.com/lm-sys/FastChat - lnguyen opened this pull request about 1 year ago
How does FastChat integrate with an existing Ray cluster?
github.com/lm-sys/FastChat - EnjianGong opened this issue about 1 year ago
LoRA support for ModelWorker
github.com/lm-sys/FastChat - PyroGenesis opened this issue about 1 year ago
What is the maximum amount of concurrency fastchat supports?
github.com/lm-sys/FastChat - asenasen123 opened this issue about 1 year ago
Can FastChat be started with 2 GPUs and int8?
github.com/lm-sys/FastChat - dongkuang opened this issue about 1 year ago
V100, train_xformers, CUDA OOM
github.com/lm-sys/FastChat - cason0126 opened this issue about 1 year ago
Update exllama_v2.md
github.com/lm-sys/FastChat - jm23jeffmorgan opened this pull request about 1 year ago
feat: support template's stop_str as list
github.com/lm-sys/FastChat - congchan opened this pull request about 1 year ago
cannot import name 'Doc' from 'typing_extensions'
github.com/lm-sys/FastChat - Yamazaki85 opened this issue about 1 year ago
train_flant5: fix typo
github.com/lm-sys/FastChat - Force1ess opened this pull request about 1 year ago
UI and model change
github.com/lm-sys/FastChat - infwinston opened this pull request about 1 year ago
chat.lmsys.org sometimes returns a "method not allowed" error.
github.com/lm-sys/FastChat - youself64github opened this issue about 1 year ago
Why is "-2" hardcoded for the Llama tokenizer to make the offset correct?
github.com/lm-sys/FastChat - findalexli opened this issue about 1 year ago
When bge is deployed on FastChat and the input is a list, the same string appearing in two different lists gives inconsistent results
github.com/lm-sys/FastChat - asenasen123 opened this issue about 1 year ago
[Feat] can vllm_worker support `use_dynamic_ntk`?
github.com/lm-sys/FastChat - ticoAg opened this issue about 1 year ago
OpenChat 3.5 Default Conversation Template
github.com/lm-sys/FastChat - bsu3338 opened this issue about 1 year ago
Revert "Remove exclude_unset parameter"
github.com/lm-sys/FastChat - merrymercy opened this pull request about 1 year ago
No model name in the curl response
github.com/lm-sys/FastChat - garyyang85 opened this issue about 1 year ago
Could not create share link
github.com/lm-sys/FastChat - canbetry opened this issue about 1 year ago
Langchain OpenAIEmbeddings doesn't work for vllm_worker
github.com/lm-sys/FastChat - liyang79 opened this issue about 1 year ago
peer closed connection without sending complete message body
github.com/lm-sys/FastChat - hqh312 opened this issue about 1 year ago
When will the Yi model be supported?
github.com/lm-sys/FastChat - ArlanCooper opened this issue about 1 year ago
Something wrong with llama2 chat template?
github.com/lm-sys/FastChat - HeyyyyyyG opened this issue about 1 year ago
Pin openai version < 1
github.com/lm-sys/FastChat - infwinston opened this pull request about 1 year ago
OpenAI v1.0 breaks the MT-bench evaluation
github.com/lm-sys/FastChat - rsnm2 opened this issue about 1 year ago
TypeError: string indices must be integers
github.com/lm-sys/FastChat - adamsah opened this issue about 1 year ago
Fix version compatibility issue with transformers>4.34.0 for flash-attention2 patch
github.com/lm-sys/FastChat - Trangle opened this pull request about 1 year ago
Remove exclude_unset parameter
github.com/lm-sys/FastChat - snapshotpl opened this pull request about 1 year ago
Add required_temp support in jsonl format to support flexible temperature setting for gen_api_answer
github.com/lm-sys/FastChat - CodingWithTim opened this pull request about 1 year ago
Update judge prompt
github.com/lm-sys/FastChat - infwinston opened this pull request about 1 year ago
Improve Azure OpenAI interface
github.com/lm-sys/FastChat - infwinston opened this pull request about 1 year ago
Discrepancy between chat.lmsys.org and Huggingface Vicuna models.
github.com/lm-sys/FastChat - KL4805 opened this issue about 1 year ago
Fix for Mistral template
github.com/lm-sys/FastChat - Nithin-Holla opened this pull request about 1 year ago
llama2_flash_attn_monkey_patch error
github.com/lm-sys/FastChat - helldog-star opened this issue about 1 year ago
Use conv.update_last_message api in mt-bench answer generation
github.com/lm-sys/FastChat - merrymercy opened this pull request about 1 year ago
Run the Gradio application as a container to serve the model on a browser Web UI.
github.com/lm-sys/FastChat - mandalrajiv opened this issue about 1 year ago
added support for CodeGeex(2)
github.com/lm-sys/FastChat - peterwilli opened this pull request about 1 year ago
Discrepancy with HuggingFace template for Mistral
github.com/lm-sys/FastChat - Nithin-Holla opened this issue about 1 year ago
Error when running fastchat.serve.model_worker with chatglm2-6b
github.com/lm-sys/FastChat - liuxiaohao-xn opened this issue about 1 year ago
save all the chat history to log file
github.com/lm-sys/FastChat - ruifengma opened this issue about 1 year ago
kill only fastchat process
github.com/lm-sys/FastChat - scenaristeur opened this pull request about 1 year ago
Implement DeepSpeed FastGen worker
github.com/lm-sys/FastChat - SebastianBodza opened this issue about 1 year ago
openchat 3.5 model support
github.com/lm-sys/FastChat - imoneoi opened this pull request about 1 year ago
how can I get the battle result
github.com/lm-sys/FastChat - miracletiger opened this issue about 1 year ago
error when running docker-compose
github.com/lm-sys/FastChat - daitq-aime opened this issue about 1 year ago
feat: support custom models vllm serving
github.com/lm-sys/FastChat - congchan opened this pull request about 1 year ago
Add Hermes 2.5 to OpenOrcaAdapter
github.com/lm-sys/FastChat - teknium1 opened this pull request about 1 year ago
Integrating a hosted model into the Arena
github.com/lm-sys/FastChat - siddartha-RE opened this issue about 1 year ago
launch service with vllm_worker failed when loading lora model
github.com/lm-sys/FastChat - wanzhenchn opened this issue about 1 year ago
Vicuna-7b for full-parameter fine-tuning based on DeepSpeed
github.com/lm-sys/FastChat - dimasheva1 opened this issue about 1 year ago
fix: [model_worker] #2467
github.com/lm-sys/FastChat - dblepart99 opened this pull request about 1 year ago
Convert read-only endpoints from POST to GET (#2625)
github.com/lm-sys/FastChat - rladbrua0207 opened this pull request about 1 year ago
Make fastchat.serve.model_worker take a debug argument
github.com/lm-sys/FastChat - uinone opened this pull request about 1 year ago