AWS Lambda

Getting started for AWS Lambda

We'll set up monitoring for one or more AWS Lambda functions. The setup consists of:

  • The monitored AWS Lambda function(s), instrumented using OpenTelemetry

  • The OpenTelemetry collector

  • SUSE Observability or SUSE Cloud Observability

AWS Lambda instrumentation with OpenTelemetry, with the OpenTelemetry collector running in Kubernetes

The OpenTelemetry collector

For a production setup it is strongly recommended to install the collector: it allows your service to offload data quickly, and the collector can take care of additional handling such as retries, batching, encryption, or even sensitive data filtering.

First we'll install the OTel (OpenTelemetry) collector. In this example we run it in a Kubernetes cluster, close to the Lambda functions; a similar setup can be made with a collector installed on a virtual machine instead. The configuration used here only acts as a secure proxy that offloads data quickly from the Lambda functions and runs within trusted network infrastructure.

Create the namespace and a secret for the API key

We'll install in the open-telemetry namespace and use the receiver API key generated during installation (see the installation instructions for where to find it):

kubectl create namespace open-telemetry
kubectl create secret generic open-telemetry-collector \
    --namespace open-telemetry \
    --from-literal=API_KEY='<suse-observability-api-key>' 
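
To confirm the namespace and secret exist before continuing, you can run a quick check; only the names used above are assumed:

kubectl get secret open-telemetry-collector --namespace open-telemetry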

Configure and install the collector

We install the collector with a Helm chart provided by the OpenTelemetry project. Make sure you have the OpenTelemetry Helm charts repository configured:

helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
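
If the repository was already added earlier, refresh the local chart index so the latest chart version is picked up:

helm repo update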

Create an otel-collector.yaml values file for the Helm chart. The configuration below is a good starting point for usage with SUSE Observability: replace <otlp-suse-observability-endpoint:port> with your OTLP endpoint (see OTLP API for your endpoint) and insert the name of your Kubernetes cluster instead of <your-cluster-name>. When using the ingress configuration, also make sure to insert your own domain name and the corresponding TLS certificate secret in the marked locations.

otel-collector.yaml
mode: deployment
presets:
  kubernetesAttributes:
    enabled: true
    # You can also configure the preset to add all the associated pod's labels and annotations to your telemetry.
    # The label/annotation name will become the resource attribute's key.
    extractAllPodLabels: true
extraEnvsFrom:
  - secretRef:
      name: open-telemetry-collector
image:
  repository: "ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-k8s"

config:
  receivers:
    otlp:
      protocols:
        grpc:
          endpoint: 0.0.0.0:4317
        http:
          endpoint: 0.0.0.0:4318
  extensions:
    # Use the API key from the env for authentication
    bearertokenauth:
      scheme: SUSEObservability
      token: "${env:API_KEY}"
  exporters:
    otlp:
      auth:
        authenticator: bearertokenauth
      # Put in your own otlp endpoint, for example suse-observability.my.company.com:443
      endpoint: <otlp-suse-observability-endpoint:port>

  service:
    extensions: [health_check, bearertokenauth]
    pipelines:
      traces:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlp]
      metrics:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlp]
      logs:
        receivers: [otlp]
        processors: [batch]
        exporters: [otlp]

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: ingress-nginx-external
    nginx.ingress.kubernetes.io/ingress.class: ingress-nginx-external
    nginx.ingress.kubernetes.io/backend-protocol: GRPC
    # "12.34.56.78/32" IP address of NatGateway in the VPC where the otel data is originating from
    #  nginx.ingress.kubernetes.io/whitelist-source-range: "12.34.56.78/32"
  hosts:
    - host: "otlp-collector-proxy.<your-domain>"
      paths:
        - path: /
          pathType: ImplementationSpecific
          port: 4317
  tls:
    - secretName: <secret-for-tls-certificate>
      hosts:
        - "otlp-collector-proxy.<your-domain>"

# Instead of ingress:

# Alternative 1, load balancer service
#service:
#  type: LoadBalancer
#  loadBalancerSourceRanges: ["12.34.56.78/32"] # The IP address of the NAT gateway in the VPC for the Lambda functions

# Alternative 2, node port service
#service:
#  type: NodePort
#ports:
#  otlp:
#    nodePort: 30317

Now install the collector, using the configuration file:

helm upgrade --install opentelemetry-collector open-telemetry/opentelemetry-collector \
  --values otel-collector.yaml \
  --namespace open-telemetry
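
To verify the collector came up correctly, you can check the pod status and send an empty (but valid) OTLP request to the HTTP receiver through a port-forward. This sketch assumes the default service name derived from the release name used above:

kubectl get pods --namespace open-telemetry
kubectl port-forward --namespace open-telemetry svc/opentelemetry-collector 4318:4318 &
# An empty payload is accepted by the receiver; a 2xx response shows the OTLP pipeline is listening
curl -X POST http://localhost:4318/v1/traces \
  -H 'Content-Type: application/json' \
  -d '{"resourceSpans":[]}'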

Make sure that the proxy collector is accessible to the Lambda functions, either by making the ingress publicly accessible or by having the collector IP in the same VPC as the Lambda functions. It is recommended to use a source-range whitelist to filter out data from untrusted and/or unknown sources (see the comment in the YAML). Besides the ingress setup, it is also possible to expose the collector to the Lambda functions via one of the following (a quick connectivity check is sketched after the list):

  • a LoadBalancer service that restricts access by limiting the source ranges, see "Alternative 1".

  • a NodePort service for the collector, see "Alternative 2".
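
Whichever option you choose, it can save debugging time to verify reachability from inside the Lambda functions' VPC (for example from an EC2 instance in the same subnets) before instrumenting the functions. A minimal sketch for the ingress variant, assuming the host name used in the values file above:

# Check DNS resolution and the TLS handshake on the gRPC ingress endpoint
openssl s_client -connect otlp-collector-proxy.<your-domain>:443 -servername otlp-collector-proxy.<your-domain> </dev/null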

The collector offers many more receivers, processors and exporters to configure; for more details see our collector page. For production usage, large numbers of spans are often generated, so you will want to set up sampling (see the sketch below).
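
As a starting point for sampling, a probabilistic sampler can keep only a percentage of traces before export. The sketch below shows how this could be merged into the otel-collector.yaml values from above; it assumes the collector distribution in use includes the probabilistic_sampler processor, and the 25% rate is only an example:

config:
  processors:
    # Keep roughly 25% of traces, drop the rest
    probabilistic_sampler:
      sampling_percentage: 25
  service:
    pipelines:
      traces:
        receivers: [otlp]
        processors: [probabilistic_sampler, batch]
        exporters: [otlp]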

Instrument a Lambda function

OpenTelemetry supports instrumenting Lambda functions in multiple languages using Lambda layers. The configuration of those Lambda layers should use the address of the collector from the previous step to ship the data. To instrument a Node.js Lambda function, follow our detailed instructions here. For other languages, apply the same configuration as for Node.js but use one of the other OpenTelemetry Lambda layers.
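
As an illustration of what the layer-based setup can look like for Node.js, the sketch below attaches an OpenTelemetry Lambda layer to an existing function with the AWS CLI and points it at the collector from the previous step. The function name, layer ARN and endpoint are placeholders; refer to the linked instructions for the exact layer ARN and environment variables for your runtime and region:

# Note: --environment replaces the function's existing environment variables, so include any it already needs
aws lambda update-function-configuration \
  --function-name <your-function-name> \
  --layers <otel-nodejs-lambda-layer-arn> \
  --environment 'Variables={AWS_LAMBDA_EXEC_WRAPPER=/opt/otel-handler,OTEL_EXPORTER_OTLP_ENDPOINT=https://otlp-collector-proxy.<your-domain>:443,OTEL_SERVICE_NAME=<your-service-name>}'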

View the results

Go to SUSE Observability and make sure the Open Telemetry StackPack is installed (via the main menu -> StackPacks).

After a short while, and provided your Lambda function(s) receive some traffic, you should be able to find the functions under their service name in the Open Telemetry -> services and service instances overviews. Traces will appear in the trace explorer and in the trace perspective of the service and service instance components. Span metrics and language-specific metrics (if available) will become available in the metrics perspective of the components.

Next steps

You can add new charts to components (for example the service or service instance) for your application by following our guide. It is also possible to create new monitors using the metrics and to set up notifications, so you get notified when your application is unavailable or has performance issues.
