Ecosyste.ms: OpenCollective
An open API service for software projects hosted on Open Collective.
Chatbot Arena
Chatbot Arena is an open platform for evaluating frontier AI models by human preference, developed by members at LMSYS & UC Berkeley.
- Collective: https://opencollective.com/chatbot-arena
- Host: opensource
- Website: https://chat.lmsys.org/?about
- Code: https://github.com/lm-sys/FastChat
Maybe a bug in vicuna's tokenizer
github.com/lm-sys/FastChat - xmy0916 opened this issue over 1 year ago
Support batching for chatglm models.
github.com/lm-sys/FastChat - kimanli opened this issue over 1 year ago
json.decoder.JSONDecodeError: Invalid \uXXXX escape: line 1 column 1442 (char 1441)
github.com/lm-sys/FastChat - zlht812 opened this issue over 1 year ago
How to specify which GPU to run the model in background mode on?
github.com/lm-sys/FastChat - zlht812 opened this issue over 1 year ago
baichuan-13b-chat updated the model, FastChat needs to sync
github.com/lm-sys/FastChat - Tomorrowxxy opened this issue over 1 year ago
[OPENAI_API Error] Error when calling vicuna13b_V1.5-16K
github.com/lm-sys/FastChat - cason0126 opened this issue over 1 year ago
[important features] PDF uploader support
github.com/lm-sys/FastChat - Dandelionym opened this issue over 1 year ago
Worker does not stop even if the API request was cancelled
github.com/lm-sys/FastChat - jasoncaojingren opened this issue over 1 year ago
vicuna-13b-v1.5-16k is repeating word as output
github.com/lm-sys/FastChat - Extremys opened this issue over 1 year ago
adding support for Llama2 and Cohere models with litellm
github.com/lm-sys/FastChat - krrishdholakia opened this pull request over 1 year ago
1. Add shell scripts for shutting down the serve processes; 2. Add a feature to launch all serve processes related to the openai-api-server in one command
github.com/lm-sys/FastChat - hzg0601 opened this pull request over 1 year ago
Fine-tuning Llama 2 based model
github.com/lm-sys/FastChat - mikefrandsen opened this issue over 1 year ago
Finetuning of LLaMA does not work in any setting (mem, lora)
github.com/lm-sys/FastChat - sergsb opened this issue over 1 year ago
gen_model_answer.py fails on a PEFT adapter
github.com/lm-sys/FastChat - Clemente-H opened this issue over 1 year ago
RuntimeError: CUDA error: device-side assert triggered
github.com/lm-sys/FastChat - lw3259111 opened this issue over 1 year ago
Cost associated with running MT bench
github.com/lm-sys/FastChat - dhyani15 opened this issue over 1 year ago
Help: how to load a safetensors model?
github.com/lm-sys/FastChat - rainbownmm opened this issue over 1 year ago
[error] ValueError:The current `device_map` had weights offloaded to the disk. Please provide an `offload_folder` for them. Alternatively, make sure you have `safetensors` installed if the model you are using offers the weights in this format.
github.com/lm-sys/FastChat - cm-liushaodong opened this issue over 1 year ago
the model keeps repeating the same answer all the time
github.com/lm-sys/FastChat - aiot-tech opened this issue over 1 year ago
Bugs with Falcon-40B-Instruct.
github.com/lm-sys/FastChat - Kong-Aobo opened this issue over 1 year ago
Nearly impossible to use arena voting: the interface is too slow and hangs
github.com/lm-sys/FastChat - Anixx opened this issue over 1 year ago
Llama-2 loss and learning rate is always 0 after first step
github.com/lm-sys/FastChat - jerryjalapeno opened this issue over 1 year ago
Could not call v1/chat/completion successfully in new langchain endpoint in openai-compatible server
github.com/lm-sys/FastChat - zeyusuntt opened this issue over 1 year ago
Gradio web server stuck infinitely loading the model
github.com/lm-sys/FastChat - o-evgeny opened this issue over 1 year ago
THIS SESSION HAS BEEN INACTIVE FOR TOO LONG. PLEASE REFRESH THIS PAGE.
github.com/lm-sys/FastChat - Art-Man opened this issue over 1 year ago
No available workers for vicuna-7b-v1.3
github.com/lm-sys/FastChat - andreasbinder opened this issue over 1 year ago
Speed comparison with https://huggingface.co/chat/
github.com/lm-sys/FastChat - surak opened this issue over 1 year ago
The loss is abnormal when fine-tuning meta-llama/Llama-2-7b-hf
github.com/lm-sys/FastChat - Tianranse opened this issue over 1 year ago
training loss curve looks like stairs
github.com/lm-sys/FastChat - gombumsoo opened this issue over 1 year ago
Where to add embedding function?
github.com/lm-sys/FastChat - hypily123 opened this issue over 1 year ago
Experiencing intermittent 400 and 500 HTTP errors when making requests to server
github.com/lm-sys/FastChat - zxia545 opened this issue over 1 year ago
Fine Tuning trust_remote_code=True
github.com/lm-sys/FastChat - Minniesse opened this issue over 1 year ago
Loss reaches 0 when finetuning 7B model using 2xA100 80G
github.com/lm-sys/FastChat - rootally opened this issue over 1 year ago
Langchain documentation conflicts with gradio web server.
github.com/lm-sys/FastChat - surak opened this issue over 1 year ago
'CUDA out of memory' when running QLoRA on vicuna-7b with 4×24G GPUs
github.com/lm-sys/FastChat - lj976264709 opened this issue over 1 year ago
python -m fastchat.serve.gradio_web_server
github.com/lm-sys/FastChat - suntinsion opened this issue over 1 year ago
Error when loading the Baichuan-13b-chat model
github.com/lm-sys/FastChat - Zhang-star-master opened this issue over 1 year ago
Answers in Chinese even to an English question.
github.com/lm-sys/FastChat - soap117 opened this issue over 1 year ago
Support for HTTPS protocol calls between controller node and worker node
github.com/lm-sys/FastChat - Victorwz opened this issue over 1 year ago
inference truncate causes “output_ids” to be incorrectly sliced
github.com/lm-sys/FastChat - lyy-zz opened this issue over 1 year ago
Allow Peft models to share their base model
github.com/lm-sys/FastChat - fozziethebeat opened this pull request over 1 year ago
MT-bench results are different today
github.com/lm-sys/FastChat - imoneoi opened this issue over 1 year ago
Using past for speeding up generation
github.com/lm-sys/FastChat - mmdalix opened this issue over 1 year ago
Does Vicuna have plans to expand its Chinese vocabulary?
github.com/lm-sys/FastChat - PolarPeak opened this issue over 1 year ago
At present I run vicuna-33b on 3 V100 cards (32 GB each), and generation is very slow compared with the official demo at https://chat.lmsys.org/. Could you share the hardware configuration used for the official demo? Is the speed difference simply V100 vs. A100, or is it performance degradation from data exchange between multiple GPUs on a single machine?
github.com/lm-sys/FastChat - murongweibo opened this issue over 1 year ago
[Feature] Safe save with FSDP, slurm examples
github.com/lm-sys/FastChat - zhisbug opened this pull request over 1 year ago
Modified loss for Multi-turn conversations
github.com/lm-sys/FastChat - staticpunch opened this issue over 1 year ago
[Bug]SFT vicuna-7b-v1.3 with train_mem.py (with flash-attention) can not work
github.com/lm-sys/FastChat - lindylin1817 opened this issue over 1 year ago
Invalid JSON Response Error When Running Langchain Use Cases with AutoGPT
github.com/lm-sys/FastChat - Hannune-tech opened this issue over 1 year ago
Use gradio state instead of IP address to track session expiration time
github.com/lm-sys/FastChat - lpfhs opened this pull request over 1 year ago
Mac M2: Memory usage growing by 1 GB per 4-5 tokens generated
github.com/lm-sys/FastChat - ericskiff opened this issue over 1 year ago
Text generation early stop problem with Vicuna 33B v1.3
github.com/lm-sys/FastChat - iibw opened this issue over 1 year ago
`TypeError: not a string` after pressing delete on special characters
github.com/lm-sys/FastChat - FANGOD opened this issue over 1 year ago
How to use vicuna-33b with multiple nodes, e.g. three nodes each with a V100 GPU?
github.com/lm-sys/FastChat - zhiyongLiu1114 opened this issue over 1 year ago
Support for 4-bit quantization from the transformers library.
github.com/lm-sys/FastChat - harpomaxx opened this issue over 1 year ago
`tiiuae/falcon-7b-instruct` is acting weird
github.com/lm-sys/FastChat - dudulasry opened this issue over 1 year ago
Using chatglm-6b raises an error: trust_remote_code=True required
github.com/lm-sys/FastChat - 15354333388 opened this issue over 1 year ago
out of memory when finetune Vicuna-7B with 4 x A100 (40GB) or 8 x A100 (40GB)
github.com/lm-sys/FastChat - gobigrassland opened this issue over 1 year ago
vicuna-7b fastchat.serve.cli stops loading checkpoint shards in my google colab
github.com/lm-sys/FastChat - ecliipt opened this issue over 1 year ago
tiiuae/falcon-7b does not work on Apple M1 GPU (MPS)
github.com/lm-sys/FastChat - ChristianWeyer opened this issue over 1 year ago
Support fastchat-t5-3b-v1.0 on M2 GPU model
github.com/lm-sys/FastChat - PassiveIncomeMachine opened this issue over 1 year ago
NotImplementedError: Cannot copy out of meta tensor; no data!
github.com/lm-sys/FastChat - aresa7796 opened this issue over 1 year ago
FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead
github.com/lm-sys/FastChat - luohao123 opened this issue over 1 year ago
CUDA runtime error when running fastchat.serve.cli to serve Vicuna-7B
github.com/lm-sys/FastChat - ecfm opened this issue over 1 year ago
LoRA finetuning model didn't converge
github.com/lm-sys/FastChat - lucasjinreal opened this issue over 1 year ago
Add Support For Baichuan 7B
github.com/lm-sys/FastChat - gaojieqing opened this pull request over 1 year ago
How to merge fine-tuned output into vicuna-7b
github.com/lm-sys/FastChat - codezealot opened this issue over 1 year ago
Different workers with different models don't update the web interface
github.com/lm-sys/FastChat - surak opened this issue over 1 year ago
ERROR: [Errno 99] error while attempting to bind on address ('::1', 21001, 0, 0): cannot assign requested address
github.com/lm-sys/FastChat - lucasjinreal opened this issue over 1 year ago
Specific UTF-8 character problem
github.com/lm-sys/FastChat - begumcitamak opened this issue over 1 year ago
Unable to launch the OpenAI API [Vicuna-7B]. Error log: Using pad_token, but it is not set yet.
github.com/lm-sys/FastChat - kennymckormick opened this issue over 1 year ago
Any plans to add AutoGPTQ as a gptq load option?
github.com/lm-sys/FastChat - fblissjr opened this issue over 1 year ago
support Vicuna finetune with qLoRA
github.com/lm-sys/FastChat - ehartford opened this issue over 1 year ago
Error reported when executing `python -m fastchat.serve.openai_api_server --host localhost --port 8000`
github.com/lm-sys/FastChat - lplzyp opened this issue over 1 year ago
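Several of the entries above concern FastChat's OpenAI-compatible server (launched with the `fastchat.serve.openai_api_server` command quoted in the issue title) and its `v1/chat/completions` endpoint. As a minimal client sketch, assuming the server is running on its default `localhost:8000` address and serving a model named `vicuna-7b-v1.3` (the model name and URL here are illustrative, not guaranteed):

```python
import json
from urllib import request

# Default address used by `python -m fastchat.serve.openai_api_server --host localhost --port 8000`
API_URL = "http://localhost:8000/v1/chat/completions"

def build_chat_request(model, messages, temperature=0.7):
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": messages,
        "temperature": temperature,
    }

def chat(model, prompt):
    """POST a single-turn chat request and return the assistant's reply text."""
    payload = build_chat_request(model, [{"role": "user", "content": prompt}])
    req = request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.load(resp)
    # The OpenAI-style response nests the reply under choices[0].message.content
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # Requires a running controller, model worker, and openai_api_server.
    print(chat("vicuna-7b-v1.3", "Hello!"))
```

A network call like this needs the full serving stack (controller, model worker, API server) up; otherwise it fails with the kind of connection errors reported in the issues above.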
Model worker keeps registering and then gets de-registered
github.com/lm-sys/FastChat - saurabhgssingh opened this issue over 1 year ago
AttributeError: module 'torch.cuda' has no attribute 'OutOfMemoryError'
github.com/lm-sys/FastChat - zhanhang123 opened this issue over 1 year ago
module 'fastchat' has no attribute 'load_model'
github.com/lm-sys/FastChat - Sorio6 opened this issue over 1 year ago
Support logprob in OpenAI API
github.com/lm-sys/FastChat - wymanCV opened this issue over 1 year ago
Language distribution of ShareGPT 70K conversation dataset for FastChat T5
github.com/lm-sys/FastChat - Mihir2 opened this issue over 1 year ago
Changing localhost to an IP address causes an error
github.com/lm-sys/FastChat - zxcv0258tw opened this issue over 1 year ago
ConnectionError when launching the model worker(s)
github.com/lm-sys/FastChat - sz2three opened this issue over 1 year ago
NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE.
github.com/lm-sys/FastChat - 107064547 opened this issue over 1 year ago
Is there a way to combine data parallel and model parallel?
github.com/lm-sys/FastChat - sunyuhan19981208 opened this issue over 1 year ago
The link produced by `python3 -m fastchat.serve.gradio_web_server` cannot be opened
github.com/lm-sys/FastChat - Hzzhang-nlp opened this issue over 1 year ago
trainer.train(resume_from_checkpoint=True) failed
github.com/lm-sys/FastChat - John-Lin98 opened this issue over 1 year ago
Error while finetuning vicuna on custom data.
github.com/lm-sys/FastChat - pauljeffrey opened this issue over 1 year ago
Using FastChat with a downloaded Vicuna cpp model
github.com/lm-sys/FastChat - rohezal opened this issue over 1 year ago
Fine-tune vicuna with oracle big data
github.com/lm-sys/FastChat - Compratrex opened this issue over 1 year ago
[Questions] Where can I find the automatically downloaded delta weights?
github.com/lm-sys/FastChat - brucezhu512 opened this issue over 1 year ago