same query over 16M logs on both Loki and VL:
Grafana Loki: 10-11s
VictoriaLogs: 250ms
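For context, here's a minimal sketch of how such a side-by-side timing could be reproduced against the two HTTP query APIs. The hostnames, the search word and the time range are assumptions for illustration, not the actual query from the test:

```python
# Rough timing sketch; hostnames/ports, the search word and the time range
# are assumed for illustration only.
import time
import requests

def timed_get(url, params):
    t0 = time.time()
    resp = requests.get(url, params=params, timeout=600)
    resp.raise_for_status()
    return time.time() - t0

now_ns = time.time_ns()
week_ns = 7 * 24 * 3600 * 10**9

# Loki: LogQL needs a stream selector plus a line filter for full-text search.
loki_s = timed_get(
    "http://loki:3100/loki/api/v1/query_range",
    {
        "query": '{job=~".+"} |= "timeout"',  # hypothetical full-text query
        "start": now_ns - week_ns,            # nanosecond Unix timestamps
        "end": now_ns,
        "limit": 1000,
    },
)

# VictoriaLogs: LogsQL can combine a time filter with a plain word filter.
vl_s = timed_get(
    "http://victorialogs:9428/select/logsql/query",
    {"query": '_time:7d "timeout"', "limit": 1000},
)

print(f"Loki: {loki_s:.2f}s, VictoriaLogs: {vl_s:.2f}s")
```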
official Jaeger integration just got approved today too
we've never been so early
https://github.com/VictoriaMetrics/VictoriaMetrics/pull/8988
I've collected roughly 100M logs over the last few days in both Loki and VictoriaLogs
VictoriaLogs finally started to utilize a little more memory than Loki
Amazingly, query time is still ~250ms for full-text search (not label lookups)
I don't even want to try running the same query with Loki, it will just explode...
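To illustrate the label vs full-text distinction mentioned above, here are hypothetical example queries (not the ones from this test):

```python
# A label/stream selector only touches the index, while a full-text filter
# has to scan log message bodies; these example queries are hypothetical.
LOKI_LABEL_ONLY = '{app="payments", level="error"}'       # LogQL: indexed labels only
LOKI_FULL_TEXT  = '{app=~".+"} |= "connection refused"'   # LogQL: line filter scans bodies
VL_FULL_TEXT    = '_time:3d "connection refused"'         # LogsQL: phrase filter over _msg
```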
hardcock-driven development
(it did explode)
actually, it didn't
It looks like under the hood Loki splits up the data into 24h chunks and fetches them one-by-one
I've seen this in the docs, but didn't realize it works this way
However, it still takes an astronomical amount of time to search, ~20-25s per chunk. So a full-text search over a one-week timeframe would take ~2.5 minutes
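A quick back-of-the-envelope check of that estimate, assuming one 24h split per day processed sequentially (iirc this splitting is controlled by Loki's split_queries_by_interval limit):

```python
# ~20-25s per 24h split, 7 splits for a week, executed one-by-one.
days = 7
low, high = 20, 25  # seconds per split (observed above)
print(f"{days*low}-{days*high}s (~{days*low/60:.1f}-{days*high/60:.1f} min)")
# -> 140-175s (~2.3-2.9 min)
```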
and it also uses all of the available CPU time to do a search, I wonder if that has any negative effect on its ability to keep consuming incoming data effectively
After all, it does seem to make sense to run Loki only as a Simple Scalable Deployment, where you can have dedicated write target(s) and a bunch of read target(s) that process your requests in parallel. But this will get expensive really fast