I find it odd that sglang, vLLM, TRTLLM don't seem to want to publish benchmarks comparing each other. They used to, but now there seems to be some unspoken rule against it.
At least we get a comparison against "other OSS engine" this time, but that could be HF's Transformers as well :)
imjonse 2 days ago [-]
They're OSS projects in a friendly competition, both working towards the goal of having alternatives to big closed source players. No need for jabs.
Palmik 2 days ago [-]
I don't think "friendly" and "publishing benchmarks" are at odds with each other.
Model makers (both open and closed weight) typically publish benchmarks against other models and when they do not, people rightfully call them out.
Including comparison against "other OSS engine" is just not helpful (what if it's a sandbagged baseline like HF Transformers?)
rfoo 2 days ago [-]
The problem here is that both aimed for Day 0 support, both got embargoed preliminary model weights and architecture details, and I don't think either had access to the other side's embargoed code.
embedding-shape 2 days ago [-]
> I find it odd that sglang, vLLM, TRTLLM don't seem to want to publish benchmarks comparing each other. They used to, but now there seems to be some unspoken rule against it.
Someone always cries foul with "you're using the wrong version/patch" or similar, and the comparisons get hopelessly outdated fast, especially since the labs tend to release when everyone else is releasing too. It's a hassle to keep them up to date, and why would you when the alternatives sometimes manage to get ahead of you? :)
mirekrusin 2 days ago [-]
Too early, they simply didn't yet have access?
palata 2 days ago [-]
Yet another website where I don't know what they do, so I go to the homepage that has a marketing sentence explaining what they do, and I still don't understand.
Something with LLMs, obviously.
LeFantome 1 day ago [-]
On the home page it says "Open Source Continuous Inference Benchmark Trusted by GigaWatt Token Factories"
I think that is pretty self-explanatory. Do you understand the word benchmark?
palata 1 day ago [-]
We don't get the same homepage. For me it says "The Large Model Systems Organization develops large models and systems that are open, accessible, and scalable."
halJordan 2 days ago [-]
Uh, lol. I don't think lmsys or anyone in the industry owes you an explanation rofl. But pop off King
Benchmarks from InferenceX (they do not have apples-to-apples setups to compare the different engines, for whatever reason): https://inferencex.semianalysis.com/inference?i_hc=1&g_model...