Configuring Data Engine
The CX Cloud Kubernetes cluster provisioned with kops does not come with centralized logging for pods or applications out of the box. Hence, Data Engine 1.0 focuses mostly on logging, while also leaving room for a data lake.
The architecture is shown in the picture below. This version currently supports only AWS, since it relies on AWS services such as Kinesis Firehose.
Fluentd streams the stdout log lines from all Kubernetes pods to Kinesis Firehose.
Kinesis Firehose loads the streaming data into Amazon S3 and Amazon Elasticsearch Service.
Amazon S3 stores compressed logs that can be used for backups or further analysis.
Amazon Elasticsearch Service stores the logs so they can be easily searched with Kibana, which is part of the managed service from AWS.
Fluentd should be deployed to the cluster as a DaemonSet so that it can read the logs of all pods.
There is a CX Cloud provided Helm chart for installing Fluentd into the Kubernetes cluster.
To get started:
Add the chart repository:
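A minimal sketch of the command, assuming a hypothetical repository alias and URL; substitute the actual CX Cloud chart repository address from the chart's documentation:

```sh
# Repository alias and URL are assumptions; replace with the actual
# CX Cloud Helm chart repository.
helm repo add cxcloud https://helm.cxcloud.example.com
```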
Update repositories:
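This refreshes the local chart index so the newly added repository's charts become available:

```sh
helm repo update
```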
Install the chart with version 0.1.0 and the release name my-fluentd-release into the namespace kube-system:
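A sketch of the install command, shown in Helm 2 syntax (contemporary with kops-provisioned clusters); the chart name fluentd-kinesis-firehose and the repository alias cxcloud are assumptions derived from the repository name:

```sh
# Chart name and repository alias are assumptions; check the
# helm-fluentd-kinesis-firehose documentation for the exact names.
# Helm 2 syntax; on Helm 3, drop --name and pass the release name
# as the first positional argument.
helm install --name my-fluentd-release \
  --namespace kube-system \
  --version 0.1.0 \
  cxcloud/fluentd-kinesis-firehose
```

Since the chart deploys Fluentd as a DaemonSet, one Fluentd pod should appear in the kube-system namespace on every node after installation.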
The Helm chart is documented in more detail in the GitHub repository, helm-fluentd-kinesis-firehose.
The Fluentd DaemonSet requires an AWS account that has already been provisioned with a Kinesis Firehose stream and its data stores (e.g. an Amazon S3 bucket, Amazon Elasticsearch Service, etc.).
CX Cloud provides a Terraform module, terraform-kinesis-firehose-elasticsearch, that helps with installing Kinesis Firehose, an Amazon S3 bucket, and Amazon Elasticsearch Service.
The following example shows how the module can be used in Terraform.
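A minimal sketch, assuming the module is sourced directly from GitHub under a cxcloud organization; the input variable names below are illustrative and should be checked against the module's documented inputs:

```hcl
# All input names here are assumptions; consult the module's README
# for the actual variables it expects.
module "logging" {
  source = "github.com/cxcloud/terraform-kinesis-firehose-elasticsearch"

  region               = "eu-west-1"
  firehose_stream_name = "kubernetes-logs"
  s3_bucket_name       = "my-cluster-log-archive"
  es_domain_name       = "my-cluster-logs"
}
```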
The Terraform module is documented in more detail in the GitHub repository, terraform-kinesis-firehose-elasticsearch.