Ecosyste.ms: OpenCollective
An open API service for software projects hosted on Open Collective.
Chatbot Arena
Chatbot Arena is an open platform for evaluating frontier AI models by human preference, developed by members at LMSYS & UC Berkeley.
- Collective: https://opencollective.com/chatbot-arena
- Host: opensource
- Website: https://chat.lmsys.org/?about
- Code: https://github.com/lm-sys/FastChat
Can I Override or Replace the "I'm sorry, as an AI language model, I cannot" Response by Vicuna?
github.com/lm-sys/FastChat - fucksmile opened this issue over 1 year ago
Add fastest gptq 4bit inference support
github.com/lm-sys/FastChat - alanxmay opened this pull request over 1 year ago
NETWORK ERROR DUE TO HIGH TRAFFIC. PLEASE REGENERATE OR REFRESH THIS PAGE. (error_code: 4)
github.com/lm-sys/FastChat - adhupraba opened this issue over 1 year ago
Catch more exceptions in the model worker
github.com/lm-sys/FastChat - merrymercy opened this pull request over 1 year ago
RuntimeError: The detected CUDA version (12.1) mismatches the version that was used to compile PyTorch (11.7).
github.com/lm-sys/FastChat - larawehbe opened this issue over 1 year ago
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
github.com/lm-sys/FastChat - whk6688 opened this issue over 1 year ago
reduce GPU Mem model_worker.get_embeddings
github.com/lm-sys/FastChat - supdizh opened this issue over 1 year ago
only one GPU works when setting "-num_gpu 2",
github.com/lm-sys/FastChat - goldfishl opened this issue over 1 year ago
Finetune VICUNA-7b with 4*v100(32G)
github.com/lm-sys/FastChat - fw2325 opened this issue over 1 year ago
Visual foundation model as plugin of Vicuna
github.com/lm-sys/FastChat - lixin4ever opened this issue over 1 year ago
Error raised during finetuning: ValueError: Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0
github.com/lm-sys/FastChat - roshan-gopalakrishnan opened this issue over 1 year ago
Fix Chinese garbled code problem by filtering special characters \ufffd.
github.com/lm-sys/FastChat - yxleung opened this pull request over 1 year ago
Improve OpenAI-Compatible Restful API (token usage, error handling, stream)
github.com/lm-sys/FastChat - jstzwj opened this pull request over 1 year ago
ChatGLM-PTuning +Fastchat
github.com/lm-sys/FastChat - lovelucymuch opened this issue over 1 year ago
Support MiniGPT4 and other multi-modal models
github.com/lm-sys/FastChat - thiner opened this issue over 1 year ago
Training new Vicuna based on fully open-source OpenLLaMA
github.com/lm-sys/FastChat - wilhelmagren opened this issue over 1 year ago
2 node speed is not faster than 1 node
github.com/lm-sys/FastChat - lmolhw5252 opened this issue over 1 year ago
Claude-v1 is not available in the Side-By-Side selection dropdown but appears in battle
github.com/lm-sys/FastChat - RageshAntony opened this issue over 1 year ago
Instructions to add a new model?
github.com/lm-sys/FastChat - XReyRobert opened this issue over 1 year ago
feat: Add support for MPT
github.com/lm-sys/FastChat - mariobm opened this pull request over 1 year ago
Fine tuning met OutOfMemoryError: CUDA out of memory.
github.com/lm-sys/FastChat - JustinZou1 opened this issue over 1 year ago
--load-8bit not compatible with fastchat-t5-3b-v1.0
github.com/lm-sys/FastChat - shm007g opened this issue over 1 year ago
pydantic.error_wrappers.ValidationError: 2 validation errors for ChatCompletionResponse
github.com/lm-sys/FastChat - LvJC opened this issue over 1 year ago
The size of tensor a (32001) must match the size of tensor b (32000) at non-singleton dimension 0
github.com/lm-sys/FastChat - xyk35182966 opened this issue over 1 year ago
FastChat/fastchat/serve/test_throughput.py
github.com/lm-sys/FastChat - wei61547-jp opened this issue over 1 year ago
RuntimeError: FlashAttention is only supported on CUDA 11 and above
github.com/lm-sys/FastChat - JustinZou1 opened this issue over 1 year ago
8x V100S 32G run gets killed, what's the error?
github.com/lm-sys/FastChat - yezhongxiuchan opened this issue over 1 year ago
ShareGPT conversation splits and "please continue"
github.com/lm-sys/FastChat - float-trip opened this issue over 1 year ago
Why not use model.generate in generate_stream
github.com/lm-sys/FastChat - vikigenius opened this issue over 1 year ago
ImportError: cannot import name 'cache' from 'functools' (/usr/lib/python3.8/functools.py)
github.com/lm-sys/FastChat - mpetruc opened this issue over 1 year ago
The stop parameter in openai API doesn't work since v0.2.5
github.com/lm-sys/FastChat - oreo-yum opened this issue over 1 year ago
Refactor to add MPT
github.com/lm-sys/FastChat - hlzhang109 opened this pull request over 1 year ago
Is there a way to optimize the output token per second?
github.com/lm-sys/FastChat - vinvcn opened this issue over 1 year ago
Decouple LLM Interface Code for Improved Scalability
github.com/lm-sys/FastChat - chen369 opened this issue over 1 year ago
fastchat-t5 quantization support?
github.com/lm-sys/FastChat - bash99 opened this issue over 1 year ago
How to break the 2048 token limit
github.com/lm-sys/FastChat - rainbownmm opened this issue over 1 year ago
Run API with just CPU for Fastchat t5
github.com/lm-sys/FastChat - djaffer opened this issue over 1 year ago
Error: model.embed_tokens.weight
github.com/lm-sys/FastChat - JerryYao80 opened this issue over 1 year ago
TypeError: forward() got an unexpected keyword argument 'position_ids'
github.com/lm-sys/FastChat - luochuwei opened this issue over 1 year ago
Support model list reload feature
github.com/lm-sys/FastChat - Jeffwan opened this pull request over 1 year ago
Error when saving the model after training
github.com/lm-sys/FastChat - Puzzledyy opened this issue over 1 year ago
Issue#270 add CI to support release and publish
github.com/lm-sys/FastChat - yantao0527 opened this pull request over 1 year ago
How to use api for t5 and example dataset?
github.com/lm-sys/FastChat - djaffer opened this issue over 1 year ago
Finetuning on 16 Tesla K80 GPUs on EC2 Instance (p2.16xlarge)
github.com/lm-sys/FastChat - ItsCRC opened this issue over 1 year ago
Encountered a runtime error when training with lora and flash_attention together
github.com/lm-sys/FastChat - Jeffwan opened this issue over 1 year ago
Model running on only 2 GPUs even when 4 GPUs are specified
github.com/lm-sys/FastChat - SupreethRao99 opened this issue over 1 year ago
The <eos> token randomly pops out during inference, making text generation stop early.
github.com/lm-sys/FastChat - BadisG opened this issue over 1 year ago
CUDA out of memory in CLI vicuna 7B
github.com/lm-sys/FastChat - mpetruc opened this issue over 1 year ago
Update apply_delta.py to use tokenizer from delta weights
github.com/lm-sys/FastChat - merrymercy opened this pull request over 1 year ago
fastchat-t5-3b-v1.0 on macOS?
github.com/lm-sys/FastChat - fdstevex opened this issue over 1 year ago
Add ChatML inspired conversation style.
github.com/lm-sys/FastChat - rwl4 opened this pull request over 1 year ago
Command to run train_flatT5.py
github.com/lm-sys/FastChat - samarthsarin opened this issue over 1 year ago
[lmsys/fastchat-t5-3b-v1.0] Is the ShareGPT dataset suitable for commercial use?
github.com/lm-sys/FastChat - yousifmansour opened this issue over 1 year ago
Get irrelevant answers when using fastchat.serve.cli on macOS MPS
github.com/lm-sys/FastChat - kivvi3412 opened this issue over 1 year ago
Which model performs better? Vicuna-7B, Vicuna-13B or FastChat-T5?
github.com/lm-sys/FastChat - chentao169 opened this issue over 1 year ago
Adding RWKV "Raven" 7B / 14B to Arena
github.com/lm-sys/FastChat - BlinkDL opened this issue over 1 year ago
How can we use Vicuna for information retrieval from a bunch of docs?
github.com/lm-sys/FastChat - alan-ai-learner opened this issue over 1 year ago
Model list mode reload is not supported anymore
github.com/lm-sys/FastChat - fungiboletus opened this issue over 1 year ago
How to use lora to train the 30b model on multiple machines and multiple cards?
github.com/lm-sys/FastChat - Awyshw opened this issue over 1 year ago
Any solution to make different processes share the same model copy in memory for train_lora.py
github.com/lm-sys/FastChat - tonyaw opened this issue over 1 year ago
How can I finetune the model with my own dataset on just one GPU?
github.com/lm-sys/FastChat - pkachuuK opened this issue over 1 year ago
Clean up filenames of GPT-4 review a little
github.com/lm-sys/FastChat - thandal opened this pull request over 1 year ago
[WIP] Fix FSDP saving error
github.com/lm-sys/FastChat - zhisbug opened this pull request over 1 year ago
Error when loading Web UI
github.com/lm-sys/FastChat - kaifeng0502 opened this issue over 1 year ago
Loading checkpoint shards stuck
github.com/lm-sys/FastChat - tomqian2022 opened this issue over 1 year ago
RuntimeError: Tensor on device cpu is not on the expected device meta!
github.com/lm-sys/FastChat - mderouineau opened this issue over 1 year ago
NotImplementedError: Cannot copy out of meta tensor; no data!
github.com/lm-sys/FastChat - weifan-zhao opened this issue over 1 year ago
RuntimeError: Error(s) in loading state_dict for LlamaForCausalLM:
github.com/lm-sys/FastChat - landerson85 opened this issue over 1 year ago
ERROR:torch.distributed.elastic.multiprocessing.api:failed
github.com/lm-sys/FastChat - yuanconghao opened this issue over 1 year ago
Continuing training from a checkpoint raises RuntimeError
github.com/lm-sys/FastChat - huijiawu0 opened this issue over 1 year ago
Fine Tuning on Low Memory GPU
github.com/lm-sys/FastChat - samarthsarin opened this issue over 1 year ago
RuntimeError: The size of tensor a (32000) must match the size of tensor b (32001) at non-singleton dimension 0
github.com/lm-sys/FastChat - ZDDWLIG opened this issue over 1 year ago
Finetuning: RuntimeError: CUDA error: invalid device ordinal, Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
github.com/lm-sys/FastChat - ZohaibDurrani opened this issue over 1 year ago
Fine Tuning: No module named einops
github.com/lm-sys/FastChat - ZohaibDurrani opened this issue over 1 year ago
Add option to set the seed for inference
github.com/lm-sys/FastChat - nielstron opened this pull request over 1 year ago
Integrate external search engines, such as Bing and Google
github.com/lm-sys/FastChat - youtehub opened this issue over 1 year ago
Further fix for the #454 problem
github.com/lm-sys/FastChat - ycat3 opened this pull request over 1 year ago
How can I enter a new line on the CLI?
github.com/lm-sys/FastChat - hellocomrade opened this issue over 1 year ago
Executing fastchat.serve.cli raises an error
github.com/lm-sys/FastChat - ch930410 opened this issue over 1 year ago
Model saved by the safe_save_model_for_hf_trainer function is very small
github.com/lm-sys/FastChat - lw3259111 opened this issue over 1 year ago
Starting model_worker gives no response
github.com/lm-sys/FastChat - JarringBye opened this issue over 1 year ago
Executing fastchat.serve.model_worker raises an error
github.com/lm-sys/FastChat - ch930410 opened this issue over 1 year ago
fastchat.serve.model_worker not loading checkpoint shards and outputs stderr messages
github.com/lm-sys/FastChat - cenguix opened this issue over 1 year ago
</s> tokenization via sentencepiece
github.com/lm-sys/FastChat - vince62s opened this issue over 1 year ago
RuntimeError: The size of tensor a (32000) must match the size of tensor b (32001) at non-singleton dimension 0
github.com/lm-sys/FastChat - thibaudart opened this issue over 1 year ago
Error occurred when converting vicuna-13b using scripts in llama.cpp
github.com/lm-sys/FastChat - sablin39 opened this issue over 1 year ago