How to Install Loki Using Helm

In the previous blog, we covered everything about Loki, from its components to its architecture, explaining how it works and how it efficiently compresses logs. If you haven't read it yet, check it out here: https://www.kubeblogs.com/all-you-need-to-know-about-loki/

Now that you have a solid understanding of Loki’s internals, it’s time to focus on how to install it. Loki can be installed in several ways, depending on your specific needs.

Let’s take a look at the three main methods:

  1. Monolithic Loki:

This is the simplest setup, where all the components run together as a single unit. It’s great for testing and small setups but not recommended for production environments due to limited scalability.

  2. Scalable Loki:

This setup breaks Loki’s components into separate, scalable units. It’s ideal for handling larger volumes of logs and gives you the flexibility to scale different components as needed. This is a more robust setup for production environments.

  3. Microservices Loki:

Similar to the scalable setup, but with an even finer division of components, allowing each to run as its own service. This is used when you need more flexibility and modularity in how Loki operates.


For this demo, we’ll show you how to install monolithic Loki using Helm. This setup is easy to implement and useful for small log volumes, around 20 to 30 GB per day.

However, it’s not recommended for production because it doesn’t scale well as the log volume grows.

In a future blog post, we’ll guide you through the scalable Loki setup, which can handle up to 1 TB of logs per day and is better suited for production environments.

To install monolithic Loki, you can use the Helmfile configuration provided below. It includes everything you need to get Loki up and running, including integration with AWS S3 for log storage and Grafana for visualization.

Here’s a brief explanation of the configuration:

  • Loki is set up with a specific image version and uses AWS S3 for storing logs.
  • Promtail is enabled to collect logs from your system.
  • Grafana is enabled to visualize the logs, with a pre-configured data source connected to Loki, so you can easily view and analyze logs directly in Grafana.
  • The configuration handles schema settings, indexing, and storage management using boltdb-shipper and S3 for long-term log storage.
  • The Grafana data source is configured to pull logs from Loki, making it the default log visualizer.
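Once the data source is in place, you can query logs from Grafana's Explore view using LogQL. For example (the label names below are illustrative; Promtail's Kubernetes service discovery typically attaches labels such as `namespace`, `pod`, and `app`):

```logql
# All log lines from the prod namespace containing "error"
{namespace="prod"} |= "error"

# Per-second rate of error lines over a 1-minute window,
# useful for dashboards and alerting
rate({namespace="prod"} |= "error" [1m])
```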

With this setup, you can quickly install Loki for testing purposes by running the Helmfile. Just keep in mind that this monolithic setup is best suited for managing logs up to 20-30 GB per day and is not recommended for production environments.

Here’s a Helmfile configuration for installing monolithic Loki with AWS S3 for log storage and Grafana for visualization:

repositories:
  - name: grafana
    url: https://grafana.github.io/helm-charts
releases:
  - name: loki-stack
    namespace: prod
    chart: grafana/loki-stack
    version: 2.10.2
    values:
      - loki:
          image:
            tag: 2.9.3
          env:
            - name: AWS_ACCESS_KEY_ID
              valueFrom:
                secretKeyRef:
                  name: iam-loki-s3
                  key: AWS_ACCESS_KEY_ID
            - name: AWS_SECRET_ACCESS_KEY
              valueFrom:
                secretKeyRef:
                  name: iam-loki-s3
                  key: AWS_SECRET_ACCESS_KEY
          server:
            http_listen_port: 3100
          config:
            schema_config:
              configs:
                - from: 2021-05-12
                  store: boltdb-shipper
                  object_store: s3
                  schema: v11
                  index:
                    prefix: loki_index_
                    period: 24h
                  chunks:
                    period: 24h
            storage_config:
              aws:
                s3: s3://us-east-1/loki-fggdfdfg
                s3forcepathstyle: true
                bucketnames: loki-fggdfdfg
                region: us-east-1
                insecure: false
                sse_encryption: false
              boltdb_shipper:
                shared_store: s3
                cache_ttl: 24h
        promtail:
          enabled: true
          config:
            clients:
              - url: http://loki-stack.prod:3100/loki/api/v1/push
        grafana:
          enabled: true
          sidecar:
            dashboards:
              enabled: true
            datasources:
              enabled: true
              label: grafana_datasource
          datasources:
            datasources.yaml:
              apiVersion: 1
              datasources:
                - name: Loki
                  type: loki
                  url: http://loki-stack.prod:3100
                  access: proxy
                  isDefault: true
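Assuming you’ve saved the configuration above as helmfile.yaml, a typical workflow might look like the sketch below. The secret name iam-loki-s3 and the prod namespace come from the configuration; the credential values are placeholders you must replace, and the Grafana service name follows the usual release-name-grafana convention (verify it with kubectl get svc):

```shell
# Create the namespace and the S3 credentials secret referenced by the Helmfile
kubectl create namespace prod
kubectl -n prod create secret generic iam-loki-s3 \
  --from-literal=AWS_ACCESS_KEY_ID=<your-access-key-id> \
  --from-literal=AWS_SECRET_ACCESS_KEY=<your-secret-access-key>

# Deploy the stack
helmfile -f helmfile.yaml apply

# Check that the Loki, Promtail, and Grafana pods are running
kubectl -n prod get pods

# Access Grafana locally at http://localhost:3000
kubectl -n prod port-forward svc/loki-stack-grafana 3000:80
```

The Grafana admin password is generated by the chart and can be read from the Grafana secret in the prod namespace.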

Q&A Section


Q: Why is the monolithic Loki setup not suitable for production?

A: The monolithic setup lacks scalability. While it’s easy to set up and ideal for small environments, it struggles with large log volumes. If you’re expecting heavy log traffic, it’s better to opt for a scalable or microservices Loki setup.

Q: How much log data can this monolithic setup handle?

A: Monolithic Loki can manage log volumes of around 20-30 GB per day. If your needs fall within this range, this setup should work fine for you. 

Q: Can Loki integrate with storage services other than AWS S3?

A: Yes, Loki supports other object storage solutions such as Google Cloud Storage (GCS) and Azure Blob Storage. You can modify the configuration based on your preferred storage provider. 
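As a rough sketch, switching the example above from S3 to GCS would mean replacing the aws block in storage_config with a gcs block and pointing the shipper at GCS (the bucket name is a placeholder; you would also change object_store from s3 to gcs in schema_config, and authenticate via a service account rather than the AWS credentials secret):

```yaml
storage_config:
  gcs:
    bucket_name: <your-gcs-bucket>  # placeholder bucket name
  boltdb_shipper:
    shared_store: gcs
    cache_ttl: 24h
```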

Q: Is scalable Loki better than monolithic Loki?

A: Scalable Loki is definitely better suited for larger environments or production setups. While monolithic Loki works well for small log volumes (up to 20-30 GB per day), scalable Loki can handle up to 1 TB of logs per day and allows you to scale individual components based on your needs. If you’re expecting log traffic to grow, scalable Loki is a more robust and future-proof solution.

Q: Can I start with monolithic Loki and then migrate to scalable Loki later?

A: Yes, you can start with monolithic Loki and later migrate to scalable Loki as your needs grow. This allows you to test the system with smaller log volumes before moving to a more scalable architecture. Migration involves separating the components and ensuring your log storage and indexing are properly configured for the scalable setup.


Conclusion

This monolithic Loki setup is a great starting point, especially if you’re managing smaller log volumes. If you’re not generating a high volume of logs, this setup will work just fine, and you can always scale by increasing replicas.

However, if you’re handling up to 1 TB of logs per day, the scalable Loki setup is a better choice as it can handle larger log traffic more efficiently.

For those looking to take it even further, the microservices Loki deployment offers the most flexibility and scalability, allowing you to manage even larger volumes of logs with ease.

In our next blog, we’ll cover setting up scalable Loki, followed by the microservices deployment in future posts.

If you’re looking for an end-to-end observability solution and want to focus on your product while we handle the monitoring and logging infrastructure, feel free to reach out to kubenine.

We deliver top-level observability solutions so you can focus on what matters most—your product.