Logging¶
This document describes different ways to handle log collection and aggregation for the services deployed with the SF-Operator.
- Using kubectl
- Native Cluster Logging
- Operator-managed Log Forwarding
- Example: forwarding logs to Loki
Using kubectl¶
You can always check logs from currently running pods and containers by using the kubectl logs
command, for example:
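For instance, assuming a deployment in the sf namespace and a hypothetical Zuul scheduler pod named zuul-scheduler-0:

```shell
# logs from every container matching the "run" label
kubectl logs -n sf -l run=zuul-scheduler
# logs from a specific container of a specific pod
kubectl logs -n sf zuul-scheduler-0 -c zuul-scheduler
```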
The major downside of this approach is that logs from terminated containers cannot be queried with kubectl.
If you need to check logs from init containers, crashed containers, batch jobs or even
just containers that were destroyed after a configuration change, you will need to rely on
another tool.
Native Cluster Logging¶
Some clusters may have a built-in log collection infrastructure in place. For example:
- On Amazon EKS, you can forward logs to a collector of your choice with the built-in Fluent Bit collector.
- On OpenShift, cluster logging and log aggregation in Loki can be optionally enabled.
Please refer to your cluster's documentation to find out what existing logging solutions are available to you.
Log Forwarding to a Fluent Bit collector¶
Native Cluster Logging solutions should always be your first choice for managing the logs of Zuul and Nodepool. This is almost always the best solution in terms of cluster resource usage, and the most straightforward to implement as an application deployer.
If, however, no such solution is available, or if the default solution does not suit your needs, the SF-Operator can set up Zuul and Nodepool to forward application logs to a Fluent Bit collector. The collector must be configured to expose a forward input that your deployment can reach.
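Concretely, such a collector only needs a forward input reachable from the deployment; in Fluent Bit's classic configuration format this could look like the following sketch (the listen address and port are assumptions, 24224 being the forward protocol's default):

```
[INPUT]
    name   forward
    listen 0.0.0.0
    port   24224
```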
Enabling log forwarding on a deployment¶
In the spec of your Software Factory deployment, edit the following:
The forwardInputHost is the host of the forward input to which logs should be sent, and forwardInputPort is its port.
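For example, with a collector reachable in-cluster at fluent-bit-forward-input on port 24224 (the values also used in the Loki example later in this document):

```yaml
spec:
  FluentBitLogForwarding:
    forwardInputHost: fluent-bit-forward-input
    forwardInputPort: 24224
```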
Pre-defined labels¶
The log JSON payload that is sent to your Fluent Bit collector is enriched with the following keys:
key | description or value |
---|---|
labels_run | the "run" label in the pod's metadata, i.e. the name of the microservice emitting the log line |
labels_app | the "app" label; always set to "sf" |
podip | [DEPRECATED] the IP address of the pod |
namespace | the namespace where the pod is running |
nodename | [DEPRECATED] the node where the pod is running |
podname | [DEPRECATED] the name of the pod where the service is running |
log | the actual log line collected |
component | the name of the component running the microservice; typically "zuul" or "nodepool" |
container | the name of the container emitting the logs |
You can use these keys to define labels in your Fluent Bit pipeline.
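As a sketch, assuming a Loki output, record accessors let you promote these keys to labels in the Fluent Bit configuration (the host and port values here are hypothetical):

```
[OUTPUT]
    name   loki
    match  *
    host   loki-http
    port   3100
    # promote some of the pre-defined keys to Loki labels
    labels component=$component, container=$container, namespace=$namespace
```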
Example: setting up a simple Fluent Bit + Loki stack for log collection¶
The tools/loki folder in the sf-operator repository has everything needed to deploy a minimal Loki instance and configure log forwarding to this instance:
- Loki manifest
- Fluent Bit manifest
- (Optional) Grafana deployment with preconfigured monitoring and logging datasources
Danger
These manifests are intended for testing purposes only and shouldn't be used in production.
- Download the files listed above.
- Create a Loki instance in your SF deployment's namespace (assumed to be sf), either by applying the downloaded test-loki.yaml manifest or by applying it straight from the repository:
kubectl apply -f https://raw.githubusercontent.com/softwarefactory-project/sf-operator/master/tools/loki/test-loki.yaml -n sf
- Create a Fluent Bit instance, either by applying the downloaded test-fluentbit.yaml manifest or by applying it straight from the repository:
kubectl apply -f https://raw.githubusercontent.com/softwarefactory-project/sf-operator/master/tools/loki/test-fluentbit.yaml -n sf
Fluent Bit will be configured with an HTTP input, and forward logs to stdout (so you can see them in the logs of the Fluent Bit pod) and to the Loki instance you created above.
A service is also created so that the HTTP input endpoint is available on the sf namespace.
- Enable log forwarding in your Software Factory deployment's spec by adding or editing the following section:
[...]
spec:
FluentBitLogForwarding:
forwardInputHost: fluent-bit-forward-input
forwardInputPort: 24224
- (Optional) Spin up an instance of Grafana, using the downloaded manifest, to query logs.
With your web browser of choice, you can then head to http://grafana.<YOUR_FQDN> (the default authentication is admin/admin) and query the services' logs.
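In Grafana's Explore view, assuming your Fluent Bit pipeline promotes the component key to a Loki label, a LogQL query filtering Zuul's logs could look like:

```
{component="zuul"} |= "error"
```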