Ecosyste.ms: OpenCollective
An open API service for software projects hosted on Open Collective.
Chatbot Arena
Chatbot Arena is an open platform for crowdsourced AI benchmarking, developed by researchers at UC Berkeley SkyLab and LMSYS.
- Collective: https://opencollective.com/chatbot-arena
- Host: opensource
- Website: https://lmarena.ai/?about
- Code: https://github.com/lm-sys/FastChat
Why doesn't "fastchat.serve.model_worker" take a debug argument?
github.com/lm-sys/FastChat - uinone opened this issue about 1 year ago
Potential Enhancement: Use GET Instead of POST for Fetching Data?
github.com/lm-sys/FastChat - rladbrua0207 opened this issue about 1 year ago
Full-parameter training overfits more easily than LoRA training?
github.com/lm-sys/FastChat - ljch2018 opened this issue about 1 year ago
Issue when training long context model on V100 with xformers with padding_mask error
github.com/lm-sys/FastChat - lithces opened this issue about 1 year ago
add chatglm3 conv template support in conversation.py
github.com/lm-sys/FastChat - ZeyuTeng96 opened this pull request about 1 year ago
ModuleNotFoundError: No module named 'typing_extensions'
github.com/lm-sys/FastChat - LT1st opened this issue about 1 year ago
feat(glm3): adapt to glm3 prompt
github.com/lm-sys/FastChat - silk55 opened this pull request about 1 year ago
Run vllm_worker, the server stopped automatically.
github.com/lm-sys/FastChat - yixuantt opened this issue about 1 year ago
Chatglm3-supported
github.com/lm-sys/FastChat - yanyang1024 opened this pull request about 1 year ago
feat: Support model AquilaChat2
github.com/lm-sys/FastChat - fangyinc opened this pull request about 1 year ago
xFastTransformer framework support
github.com/lm-sys/FastChat - a3213105 opened this pull request about 1 year ago
Training processing when tokenization mismatch
github.com/lm-sys/FastChat - TranSirius opened this issue about 1 year ago
fix: Fix for OpenOrcaAdapter to return correct conversation template
github.com/lm-sys/FastChat - vjsrinath opened this pull request about 1 year ago
[Logprobs] Support logprobs=1
github.com/lm-sys/FastChat - comaniac opened this pull request about 1 year ago
How to run a model that isn't vicuna?
github.com/lm-sys/FastChat - iplayfast opened this issue about 1 year ago
load new models to fastchat
github.com/lm-sys/FastChat - bioinfomagic opened this issue about 1 year ago
Update qwen and add pygmalion
github.com/lm-sys/FastChat - Trangle opened this pull request about 1 year ago
AttributeError: module 'fastchat' has no attribute 'Conversation'
github.com/lm-sys/FastChat - Jaywhisker opened this issue about 1 year ago
generating result mismatch while using official repo chat and model_worker
github.com/lm-sys/FastChat - better629 opened this issue about 1 year ago
feat: enable vllm for answer generation
github.com/lm-sys/FastChat - congchan opened this pull request about 1 year ago
Bug in streaming mode -> UnboundLocalError: local variable 'stopped' referenced before assignment
github.com/lm-sys/FastChat - npuichigo opened this issue about 1 year ago
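The streaming bug above is a classic Python pitfall: a flag assigned only inside a loop body is unbound if the loop never runs. A minimal sketch of the failure pattern and its fix (function and names are illustrative, not FastChat's actual code):

```python
def finish_reason(chunks, stop="</s>"):
    """Decide why generation ended; names are hypothetical."""
    stopped = False  # the fix: initialize before the loop; without this line,
                     # an empty stream raises UnboundLocalError below
    for chunk in chunks:
        if chunk == stop:
            stopped = True
            break
    return "stop" if stopped else "length"
```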
Added settings vllm
github.com/lm-sys/FastChat - SebastianBodza opened this pull request about 1 year ago
Added top_k & penaltys
github.com/lm-sys/FastChat - SebastianBodza opened this pull request about 1 year ago
How to cancel tensor parallel in vllm of the master-worker strategy.
github.com/lm-sys/FastChat - linkedlist771 opened this issue about 1 year ago
Update README.md to highlight chatbot arena
github.com/lm-sys/FastChat - infwinston opened this pull request about 1 year ago
TensorRT-LLM backend support?
github.com/lm-sys/FastChat - npuichigo opened this issue about 1 year ago
docs: bit misspell comments model adapter default template name conversation
github.com/lm-sys/FastChat - guspan-tanadi opened this pull request about 1 year ago
Add documentation to readme
github.com/lm-sys/FastChat - arnehuang opened this pull request about 1 year ago
Update README.md (vicuna-v1.3 -> vicuna-1.5)
github.com/lm-sys/FastChat - infwinston opened this pull request about 1 year ago
Proposal: Use api keys with openai server api: A simple example with gitlab tokens
github.com/lm-sys/FastChat - surak opened this issue about 1 year ago
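The proposal above is to gate the OpenAI-compatible server behind API keys. A minimal sketch of the kind of bearer-token check such a server could apply (illustrative only, not FastChat's implementation):

```python
def authorize(headers, valid_keys):
    # Minimal bearer-token check, OpenAI-style: expects an
    # "Authorization: Bearer <key>" header and compares the key
    # against a configured allowlist.
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    return auth[len("Bearer "):] in valid_keys
```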
fastchat/serve/openai_api_server.py : BaseSettings moved from pydantic to pydantic_settings
github.com/lm-sys/FastChat - surak opened this issue about 1 year ago
Skycamp Tutorial Example LLM Judge
github.com/lm-sys/FastChat - CodingWithTim opened this pull request about 1 year ago
Skycamp LLM Judge Tutorial Example Prompts
github.com/lm-sys/FastChat - CodingWithTim opened this pull request about 1 year ago
ModuleNotFoundError: No module named 'flash_attn'
github.com/lm-sys/FastChat - chaofanl opened this issue about 1 year ago
Training LoRA with DeepSpeed using Half, but encountering a RuntimeError
github.com/lm-sys/FastChat - zuitbjc1096 opened this issue about 1 year ago
Add Mistral-7B-OpenOrca conversation_template
github.com/lm-sys/FastChat - waynespa opened this pull request about 1 year ago
add trust_remote_code=True in BaseModelAdapter
github.com/lm-sys/FastChat - edisonwd opened this pull request about 1 year ago
How to save the memory overhead at the beginning of the fine tuning?
github.com/lm-sys/FastChat - sagittahjz opened this issue about 1 year ago
Update Mistral template
github.com/lm-sys/FastChat - Gk-rohan opened this pull request about 1 year ago
Update vigogne template
github.com/lm-sys/FastChat - bofenghuang opened this pull request about 1 year ago
Fix issue #2568: --device mps led to TypeError: forward() got an unexpected keyword argument 'padding_mask'.
github.com/lm-sys/FastChat - Phil-U-U opened this pull request about 1 year ago
Llama 2 70b qLoRA training not converging
github.com/lm-sys/FastChat - alwayshalffull opened this issue about 1 year ago
Multiple gpus error vicuna-7b-v1.3: RuntimeError: probability tensor contains either inf, nan or element < 0
github.com/lm-sys/FastChat - erickFBG opened this issue about 1 year ago
Llama 2 conversation template missing `<s>` before system message
github.com/lm-sys/FastChat - mukundt opened this issue about 1 year ago
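The issue above concerns Llama 2's chat format, where the BOS token `<s>` is expected before the `[INST]` block that wraps the system message. A minimal sketch of the expected single-turn layout (illustrative only, not FastChat's template code):

```python
def llama2_prompt(system_msg, user_msg):
    # Llama 2 chat layout: the BOS token <s> opens the turn, followed by
    # [INST], the <<SYS>> system block, then the user message and [/INST].
    return (
        f"<s>[INST] <<SYS>>\n{system_msg}\n<</SYS>>\n\n"
        f"{user_msg} [/INST]"
    )
```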
What is the difference between Llama 2-13b-chat and Llama 2-13B-chat-hf
github.com/lm-sys/FastChat - lixinliu1995 opened this issue about 1 year ago
Confusions about prompt in training stage
github.com/lm-sys/FastChat - Yuxin715d opened this issue about 1 year ago
fastchat.serve.model_worker does not support --load-8bit
github.com/lm-sys/FastChat - AlBundy33 opened this issue about 1 year ago
current installation does not support cuda=11.4
github.com/lm-sys/FastChat - qianc62 opened this issue about 1 year ago
support belle template
github.com/lm-sys/FastChat - fengyizhu opened this pull request about 1 year ago
Mac running error (--device mps running error; --device cpu can run)
github.com/lm-sys/FastChat - Phil-U-U opened this issue about 1 year ago
different generated result when using llama2_flash_attn_monkey_patch
github.com/lm-sys/FastChat - ftgreat opened this issue about 1 year ago
Add Xwin-LM V0.1, V0.2 support
github.com/lm-sys/FastChat - REIGN12 opened this pull request about 1 year ago
resolves #2542 modify dockerfile to upgrade cuda to 12.2.0 and pydantic 1.10.13
github.com/lm-sys/FastChat - alexdelapaz opened this pull request about 1 year ago
Add airoboros_v3 chat template (llama-2 format)
github.com/lm-sys/FastChat - jondurbin opened this pull request about 1 year ago
Fixed model_worker generate_gate may blocked main thread (#2540)
github.com/lm-sys/FastChat - lvxuan263 opened this pull request about 1 year ago
Issue with dtype: RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
github.com/lm-sys/FastChat - ahlinus opened this issue about 1 year ago
Misc style and bug fixes
github.com/lm-sys/FastChat - merrymercy opened this pull request about 1 year ago
Fine-tune fastchat-t5-3b-v1.0 with Lora, learning_rate is always 0
github.com/lm-sys/FastChat - BeerTai opened this issue about 1 year ago
Version 0.2.30; ChatGLM error: text input must be of type `str` (single example) or `List[str]`
github.com/lm-sys/FastChat - lonngxiang opened this issue about 1 year ago
Out of memory when using 32GB V100s to fine-tune Vicuna-7B-v1.5 with LoRA
github.com/lm-sys/FastChat - YangQianli92 opened this issue about 1 year ago
run api error:requests.post("***/worker_generate_stream", headers=headers, json=pload, stream=True,timeout=3)
github.com/lm-sys/FastChat - lonngxiang opened this issue about 1 year ago
[Feature] a new model adapter to speed up many models inference performance on Intel CPU
github.com/lm-sys/FastChat - a3213105 opened this issue about 1 year ago
Create `tags` attribute to fix `MarkupError` in rich CLI
github.com/lm-sys/FastChat - Steve-Tech opened this pull request about 1 year ago
Revert "Improve Support for Mistral-Instruct"
github.com/lm-sys/FastChat - merrymercy opened this pull request about 1 year ago
Make FastChat work with LMSYS-Chat-1M Code
github.com/lm-sys/FastChat - CodingWithTim opened this pull request about 1 year ago
Add additional Informations from the vllm worker
github.com/lm-sys/FastChat - SebastianBodza opened this pull request about 1 year ago
When using openai_api_server, openai.Completion.create and openai.ChatCompletion.create generate different responses for the same parameters and prompt.
github.com/lm-sys/FastChat - jalamao opened this issue about 1 year ago
Improve Support for Mistral-Instruct
github.com/lm-sys/FastChat - Steve-Tech opened this pull request about 1 year ago
[train flant5] Preprocessing the conversations dataset is too slow.
github.com/lm-sys/FastChat - rotoava opened this issue about 1 year ago
[New Feature Wanted] Attention Visualize with Gradio
github.com/lm-sys/FastChat - ericzhou571 opened this issue about 1 year ago
correct max_tokens by context_length instead of raise exception
github.com/lm-sys/FastChat - liunux4odoo opened this pull request about 1 year ago
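The PR above swaps an exception for clamping: when a request's max_tokens would overflow the model's context window, cap it at the tokens that remain after the prompt. A sketch of that logic (hypothetical helper, not the PR's code):

```python
def clamp_max_tokens(prompt_tokens, requested_max, context_length):
    # The model can emit at most context_length - prompt_tokens new tokens;
    # clamp the request to that budget instead of raising an exception.
    available = context_length - prompt_tokens
    return max(0, min(requested_max, available))
```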
Add additional Informations from the vllm worker
github.com/lm-sys/FastChat - SebastianBodza opened this pull request about 1 year ago
Docker-compose failing to start fastchat server
github.com/lm-sys/FastChat - aadityamundhalia opened this issue about 1 year ago
In model_worker, the /worker_generate API can block all requests
github.com/lm-sys/FastChat - lvxuan263 opened this issue about 1 year ago
Improve chat templates
github.com/lm-sys/FastChat - merrymercy opened this pull request about 1 year ago
Fix warnings for new gradio versions
github.com/lm-sys/FastChat - merrymercy opened this pull request about 1 year ago
RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
github.com/lm-sys/FastChat - Gnomesenpai opened this issue about 1 year ago
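The `"addmm_impl_cpu_" not implemented for 'Half'` error above arises because PyTorch's CPU kernels for many matrix operations do not implement float16; half precision is intended for GPU execution. A common workaround is to pick the dtype from the device, sketched here with plain strings (illustrative, not FastChat's loader code):

```python
def pick_dtype(device: str) -> str:
    # float16 kernels for ops like addmm are missing on CPU,
    # so fall back to float32 when not running on CUDA.
    return "float16" if device.startswith("cuda") else "float32"
```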
How to determine the number of GPUs needed?
github.com/lm-sys/FastChat - 2018211801 opened this issue over 1 year ago
Question using Openai-api compatible API service to serve llama-2-7b
github.com/lm-sys/FastChat - Junpliu opened this issue over 1 year ago
move BaseModelWorker outside serve.model_worker to make it independent
github.com/lm-sys/FastChat - liunux4odoo opened this pull request over 1 year ago
The model files and this project are deployed on a hosting provider; I want to use the tokenizer locally. Can I instantiate a tokenizer with this project?
github.com/lm-sys/FastChat - ye7love7 opened this issue over 1 year ago
How to get logs for every x number of steps instead of epochs
github.com/lm-sys/FastChat - code-x-0018 opened this issue over 1 year ago