Using Vector agent
To complete the setup, there are two main things to configure:
1. Obtain your keys YOUR_PAI_TOKEN and YOUR_PAI_IID by going to https://logpatterns.packetai.co/deploy/agent. Pick any of the integrations to find your keys.
2. Replace `yourclustername` with your cluster name in the transformer block (see below). ⚠ Even if you are not using a proper cluster, you must provide a value for this variable.
Packetai requires a data source from which to collect logs, so the first step is to configure a Vector source object.
As an example, here is how to set up a source of type `file` that reads log files located in two folders:

```toml
[sources.extract_logs]
type = "file"
include = ["/var/log/*.log", "/var/log/*/*.log"]
```
A second example shows how to set up a source of type `kubernetes_logs`; please note that additional configuration parameters are generally needed:

```toml
[sources.extract_logs]
type = "kubernetes_logs"
```
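For instance, the `kubernetes_logs` source accepts parameters such as `extra_label_selector` and `exclude_paths_glob_patterns` to narrow which pods and files are collected. The sketch below uses a hypothetical label selector; adjust it to the labels actually present in your cluster:

```toml
[sources.extract_logs]
type = "kubernetes_logs"
# Only collect pods matching this label selector (hypothetical value)
extra_label_selector = "app notin (vector)"
# Skip rotated or compressed log files
exclude_paths_glob_patterns = ["**/*.gz", "**/*.tmp"]
```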
Please refer to https://vector.dev/docs/reference/configuration/sources to learn more about setting up the source of your choice.
Packetai provides a transformer that can be used to transform the data before it is ingested. Note that for API compatibility reasons the transformer must create a `{kubernetes: {controller: {name, type}, namespace}}` JSON object, even if you are not using Kubernetes; otherwise log patterns will fail to display.

As an example, here is a transformer for file logs:

```toml
[transforms.extract_message]
type = "remap"
inputs = ["extract_logs"]
source = """
.temp = split(.file, "/") ?? ["unhandled-component"]
.kubernetes.controller.name = get!(.temp, path: [-1])
.kubernetes.controller.type = "Deployment"
.kubernetes.namespace = "user"
del(.temp)
"""
```
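Vector ships a built-in unit-testing facility (run with `vector test`) that can check a remap transform before deploying it. The test below is a sketch: it feeds a hypothetical file path through `extract_message` and asserts that the controller name becomes the file's basename (splitting `/var/log/nginx/access.log` on `/` leaves `access.log` as the last element):

```toml
[[tests]]
name = "extract_message sets controller name from file path"

[[tests.inputs]]
insert_at = "extract_message"
type = "log"

[tests.inputs.log_fields]
file = "/var/log/nginx/access.log"  # hypothetical input path
message = "GET / 200"

[[tests.outputs]]
extract_from = "extract_message"

[[tests.outputs.conditions]]
type = "vrl"
source = '''
assert_eq!(.kubernetes.controller.name, "access.log")
assert_eq!(.kubernetes.namespace, "user")
'''
```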
As an example, here is a transformer for GCP logs:

```toml
[transforms.extract_message]
type = "remap"
inputs = ["extract_logs"]
source = """
. = parse_json!(.message)
.message = del(.textPayload)
.temp = split(.logName, "/") ?? ["unhandled-component"]
.kubernetes.controller.name = get!(.temp, path: [-1])
.kubernetes.controller.type = "Deployment"
.kubernetes.namespace = "user"
del(.temp)
"""
```
Just below, finalize the transformer block with the following snippet. ⚠ Remember to replace `yourclustername`: even if you are not using a proper cluster, you must provide a value for this variable.

```toml
[transforms.add_packetai_info]
# IMPORTANT:
# CLUSTER_NAME MUST BE ADDED TO THE TRANSFORMER
# THIS IS DONE BY ADDING THE FOLLOWING LINE TO YOUR CONFIG
# CLUSTER NAME MUST MATCH THE REGEX "^[a-z0-9]+$"
type = "remap"
inputs = ["extract_message"]
source = """
.packetai.cluster_name = "yourclustername"
"""
```
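Optionally, an extra remap step can assert the regex constraint at runtime, so that a non-conforming cluster name fails loudly instead of silently producing broken log patterns. This is only a sketch (the transform name `check_cluster_name` is made up here), using VRL's `match` and `assert!` functions:

```toml
[transforms.check_cluster_name]
type = "remap"
inputs = ["add_packetai_info"]
source = """
assert!(match(string!(.packetai.cluster_name), r'^[a-z0-9]+$'),
        message: "cluster_name must match ^[a-z0-9]+$")
"""
```

If you add this step, point your sink's `inputs` at `"check_cluster_name"` instead of `"add_packetai_info"` so events flow through the check.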
Finally, a sink object must be configured. This sink (an HTTP sink in Packetai's case) is the destination to which logs are sent. ⚠ Remember to replace YOUR_PAI_TOKEN and YOUR_PAI_IID in the snippet below.

```toml
[sinks.http_packetai]
type = "http"
inputs = ["add_packetai_info"]
compression = "gzip"
encoding.codec = "json"
uri = "https://vector-ingester-logpatterns.packetai.co/vector/log"

[sinks.http_packetai.headers]
X-PAI-TOKEN = "YOUR_PAI_TOKEN"
X-PAI-IID = "YOUR_PAI_IID"
```
It is advised to test and validate your pipeline configuration by using the following `console` (stdout) sink before enabling the final sink defined above:

```toml
# Print parsed logs to stdout
[sinks.print]
type = "console"
inputs = ["add_packetai_info"]
encoding.codec = "json"
```
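Assuming your configuration lives at `/etc/vector/vector.toml` (adjust the path to your setup), Vector's CLI can validate the file and run any unit tests before you start the agent:

```sh
# Check the configuration for syntax and topology errors
vector validate /etc/vector/vector.toml
# Run any [[tests]] blocks defined in the configuration
vector test /etc/vector/vector.toml
# Start the agent with this configuration
vector --config /etc/vector/vector.toml
```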
Once there are no more errors and JSON messages display as expected, you can switch to the final sink. Within minutes, you should see your first logs at https://logpatterns.packetai.co/logs!