hardcode-driven development
actually, it didn't. It looks like under the hood Loki splits the data into 24h chunks and fetches them one by one. I'd seen this in the docs, but didn't realize it works this way. However, it still takes an astronomical amount of time to search, ~20-25s…
and it also uses all of the available CPU time to do a search; I wonder whether that has any negative effect on the ability to effectively consume the data
After all, it does seem to make sense to run Loki only as a Simple Scalable Deployment, where you can have dedicated write target(s) and a bunch of read targets that can process your requests in parallel. But this will get expensive really fast
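The 24h chunking can be mimicked client-side too. A minimal sketch of splitting a 3-day window into day-sized slices (the `--from`/`--to` flags here are illustrative, in the style of logcli; assumes GNU `date`):

```shell
# Split a 3-day window into the 24h slices Loki appears to query one-by-one.
# The --from/--to flags are only printed for illustration.
start="2024-01-01T00:00:00Z"
for day in 0 1 2; do
  from=$(date -u -d "$start + $day day" +%Y-%m-%dT%H:%M:%SZ)
  to=$(date -u -d "$start + $((day + 1)) day" +%Y-%m-%dT%H:%M:%SZ)
  echo "slice: --from=$from --to=$to"
done
```

Each slice could then be issued as its own query, which is roughly what the read path seems to do internally.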
VictoriaLogs also seems to consistently use less storage than Loki:
$ du -sh victoria-logs-data
936.1M victoria-logs-data
$ du -sh loki
1.6G loki
I've been considering building something similar myself in Grafana Alloy. vector.dev provides this out of the box...
there's even an ability to unit test your configurations: https://vector.dev/docs/reference/configuration/unit-tests/
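For reference, a sketch of what such a unit test looks like (the transform name `parse_app` and the field values are made up; the overall shape follows the docs page above):

```yaml
transforms:
  parse_app:            # hypothetical transform under test
    type: remap
    inputs: ["in"]
    source: |
      .level = "info"

tests:
  - name: parse_app sets a default level
    inputs:
      - insert_at: parse_app
        type: log
        log_fields:
          message: "hello"
    outputs:
      - extract_from: parse_app
        conditions:
          - type: vrl
            source: assert!(.level == "info")
```

Running `vector test` against the config then checks the transform's output without touching any real sources or sinks.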
people who install ubuntu on servers
please do the world a favor
TIL you can quote the 'EOF' delimiter to treat the heredoc contents literally
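A quick sketch of the difference:

```shell
# Unquoted delimiter: $variables and $(commands) inside the body are expanded
unquoted=$(cat <<EOF
path is $HOME
EOF
)

# Quoted delimiter: the body is taken literally, no expansion at all
quoted=$(cat <<'EOF'
path is $HOME
EOF
)

echo "$quoted"   # prints: path is $HOME
```

Handy when the heredoc body itself contains shell syntax you want passed through verbatim, e.g. when writing a script to a file.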