Added docs for EKS deployment #2785
@@ -1,4 +1,4 @@
-SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=http://xxx.xxx.xxx.xxx:8088
+SC4S_DEST_SPLUNK_HEC_DEFAULT_URL=https://xxx.xxx.xxx.xxx:8088
 SC4S_DEST_SPLUNK_HEC_DEFAULT_TOKEN=xxxxxxxxxxxxxxxxxx
 #Uncomment the following line if using untrusted SSL certificates
 #SC4S_DEST_SPLUNK_HEC_DEFAULT_TLS_VERIFY=no
@@ -0,0 +1,106 @@
You can install SC4S on AWS with EKS using a deployment file and basic configuration information.

Refer to the AWS [documentation](https://docs.aws.amazon.com/eks/latest/userguide/sample-deployment.html) for instructions on how to set up your AWS environment.

Before you begin, you also need to have `kubectl` installed.
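The rest of this guide runs `kubectl` commands against a `sc4s` namespace. If your cluster context and namespace are not set up yet, a minimal sketch looks like this (the region and cluster name are placeholders for your own values):

```bash
# Point kubectl at your EKS cluster (replace the placeholders)
aws eks update-kubeconfig --region <region> --name <cluster-name>

# Verify that the worker nodes are reachable
kubectl get nodes

# Create the namespace used by the commands in this guide
kubectl create namespace sc4s
```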
# Prepare your initial configuration
1. First, create a file named `/opt/sc4s/env_file` and add the following environment variables and values:

``` dotenv
--8<---- "ansible/resources/env_file"
```
Then create a ConfigMap from the variables provided in the file:
```bash
kubectl create configmap sc4s-config --from-env-file=/opt/sc4s/env_file -n sc4s
```
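If you want to confirm that the variables were loaded correctly, one optional check (not required by the guide) is to print the ConfigMap back out:

```bash
# The data section should mirror the contents of /opt/sc4s/env_file
kubectl get configmap sc4s-config -n sc4s -o yaml
```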
2. Create a deployment configuration file based on the following:
``` yaml
--8<---- "docs/resources/docker/sc4s_deployment.yaml"
```

Note that you may need to modify this file based on your requirements, such as the ports being used.
You can view the default range of ports available to NodePort services [here](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport).
3. (Optional) To use local filters, load them into a ConfigMap and uncomment the related parts of the deployment file:

```bash
kubectl create configmap sc4s-local-filter-config \
    --from-file=/opt/sc4s/local/config/app_parsers -n sc4s
```

This loads files from the `app_parsers` directory only. See the [Kubernetes documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#create-configmaps-from-files) for other ways to create ConfigMaps from files.
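As a quick optional check, you can describe the resulting ConfigMap to see which parser files were picked up:

```bash
# The Data section lists one key per file loaded from app_parsers
kubectl describe configmap sc4s-local-filter-config -n sc4s
```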
# Deploy SC4S with your configuration
1. To run SC4S, run this command in the directory where your deployment file is located:
```bash
kubectl apply -f sc4s_deployment.yaml
```

You can use a load balancer with SC4S. To set it up properly, refer to our [documentation](../architecture/lb/index.md).
2. You can use the following commands to check whether the SC4S deployment and NodePort service are running.
To get pods:
```bash
kubectl get pods -n sc4s
```

To get the NodePort service:
```bash
kubectl get services -n sc4s
```

Check the logs using this command:
```bash
kubectl logs {your_pod_name} -n sc4s
```

You should see something like this:
```ini
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:fallback...
SC4S_ENV_CHECK_HEC: Splunk HEC connection test successful to index=main for sourcetype=sc4s:events...
syslog-ng checking config
sc4s version=3.37.0
Configuring the health check port to: 8080
[2025-08-01 17:40:50 +0000] [130] [INFO] Starting gunicorn 23.0.0
[2025-08-01 17:40:50 +0000] [130] [INFO] Listening at: http://0.0.0.0:8080 (130)
[2025-08-01 17:40:50 +0000] [130] [INFO] Using worker: sync
[2025-08-01 17:40:50 +0000] [133] [INFO] Booting worker with pid: 133
starting syslog-ng
```

If the pod does not start, you can debug it with this command:
```bash
kubectl describe pod {your_pod_name} -n sc4s
```
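If `describe` does not reveal the problem, a couple of generic Kubernetes checks often help (these are standard `kubectl` options, not SC4S-specific):

```bash
# Recent events in the namespace, newest last
kubectl get events -n sc4s --sort-by=.lastTimestamp

# Logs from the previous container instance if the pod is restarting
kubectl logs {your_pod_name} -n sc4s --previous
```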
# Validate your configuration

SC4S performs checks to ensure that the container starts properly and that the syntax of the underlying syslog-ng
configuration is correct. Once the checks are complete, validate that SC4S properly communicates with Splunk.
To do this, run the following search in Splunk:

```ini
index=* sourcetype=sc4s:events "starting up"
```
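If you want to generate traffic end to end first, one option is to send a test syslog message to the NodePort exposed by the service. The node address is a placeholder, the port matches the sample manifest, and `logger` from util-linux is only one of many possible senders; make sure your security groups allow the connection:

```bash
# Find an address of one of your worker nodes
kubectl get nodes -o wide

# Send a test message over TCP to the 514 NodePort (30514 in the sample manifest)
logger --server <node-address> --port 30514 --tcp "sc4s test message"
```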
# Update SC4S
Whenever the image is upgraded, or when you want your configuration changes to be applied, run the following command:

```bash
kubectl apply -f sc4s_deployment.yaml
```

kubectl will detect whether there are any changes to be made and roll out new pods if necessary.
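Note that `kubectl apply` only rolls out new pods when the Deployment spec itself changes. If you only updated the `sc4s-config` ConfigMap, the running pods will not pick up the new values on their own; one way to refresh them is an explicit rollout restart (using the deployment name from the sample manifest):

```bash
# Recreate the pods so they read the updated ConfigMap
kubectl rollout restart deployment/sc4s-deployment -n sc4s

# Watch until the new pods are ready
kubectl rollout status deployment/sc4s-deployment -n sc4s
```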
# Stop SC4S

To delete the deployment, run this command in the directory where your deployment file is located:
```bash
kubectl delete -f sc4s_deployment.yaml
```
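If you also want to remove the configuration objects created earlier (optional, and not part of the guide above), you can delete them explicitly, or remove the whole namespace:

```bash
kubectl delete configmap sc4s-config -n sc4s
# Only if you created the local filter ConfigMap
kubectl delete configmap sc4s-local-filter-config -n sc4s

# Alternatively, delete the namespace and everything in it
kubectl delete namespace sc4s
```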
@@ -0,0 +1,85 @@

**Contributor:** We already have a helm chart for microk8s. Why can't we use it for managed k8s in AWS?

**Author:** Hmm, yeah, I guess we could use it too, but I was thinking about something more basic that the customer can configure themselves manually. We could also link it in this new page, I guess?

**Contributor:** But we have really enterprise- and gov-level customers; I'm not sure they need something like that. They expect that all features will be available :) Not sure it works like that, I just remember the experience from on-calls. In the best case they can read a doc. @sbylica-splunk they will recognize this guide as an official recommendation from Splunk.

**Author:** Well, since it's an experimental feature (and marked as such), we won't be providing direct support, I think. Earlier we decided that doing a full helm chart from the start is too much scope for such a project.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sc4s-deployment
  namespace: sc4s
  labels:
    app: sc4s-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sc4s-app
  template:
    metadata:
      labels:
        app: sc4s-app
    spec:
      volumes:
        - name: config-volume
          configMap:
            name: sc4s-config
        # Uncomment only if local parser used
        # - name: local-filter-config
        #   configMap:
        #     name: sc4s-local-filter-config

      containers:
        - name: sc4s
          image: ghcr.io/splunk/splunk-connect-for-syslog/container3:latest
          imagePullPolicy: IfNotPresent

          resources:
            requests:
              cpu: "500m"
              memory: "512Mi"
            limits:
              cpu: "2000m"
              memory: "2Gi"

          envFrom:
            - configMapRef:
                name: sc4s-config

          # Uncomment only if local parser used
          # volumeMounts:
          #   - name: local-filter-config
          #     mountPath: /etc/syslog-ng/conf.d/local/config/app_parsers
          #     readOnly: true

---
apiVersion: v1
kind: Service
metadata:
  name: sc4s-nodeport-service
  namespace: sc4s
spec:
  selector:
    app: sc4s-app
  type: NodePort
  ports:
    - port: 514
      targetPort: 514
      name: "tcp514"
      protocol: TCP
      nodePort: 30514
    - port: 514
      targetPort: 514
      name: "udp514"
      protocol: UDP
      nodePort: 30514
    - port: 601
      targetPort: 601
      name: "tcp601"
      protocol: TCP
      nodePort: 30601
    - port: 6514
      targetPort: 6514
      name: "tcp6514"
      protocol: TCP
      nodePort: 30515
    - port: 8080
      targetPort: 8080
      name: "healthcheck"
      protocol: TCP
      nodePort: 30080
**Contributor:** we also have custom ports for

**Author:** I think this should be up to the customer to change.

**Contributor:** @sbylica-splunk I'm not sure that our users read the code and know how it works :)

**Author:** See above.