Kubernetes ingress logs


We have previously discussed how to deploy a highly reliable Ingress access layer in a Kubernetes cluster in the article Best Practices for Deploying a High-Reliability Kubernetes Ingress Controller. Ingress manages external access to services in a Kubernetes cluster, and the Ingress Controller records request logs that are essential for tracing the full request path. The following content describes how to import the collected Kubernetes Ingress Controller logs into Log Service to facilitate retrieval and analysis of service requests.

Prepare the Environment

  1. Create a Kubernetes cluster in the Container Service console.
  2. Deploy the service and configure the Ingress Controller to provide services for external systems.

Import the Collected Logs to Log Service

Perform the following steps to import the collected Ingress Controller logs to Log Service:

  1. Deploy Logtail DaemonSet.
  2. Create a log project and configure a machine group.
  3. Configure LogtailConfig (log collection configurations).

For more information, read our documentation.

Step 1: Deploy Logtail DaemonSet.

Note: Skip this step if you have deployed Logtail DaemonSet in the cluster.

The configuration items are described as follows:

  1. ${your_region_name}: ID of the region where the Kubernetes cluster is located.
  2. ${your_aliyun_user_id}: Alibaba Cloud UID.
  3. ${your_machine_group_name}: custom ID for configuring a machine group.
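
For reference, these placeholders typically appear as environment variables in the Logtail DaemonSet spec. The following is a minimal sketch based on the public Logtail DaemonSet template; the variable names may differ between template versions:

env:
  - name: ALIYUN_LOGTAIL_CONFIG
    value: /etc/ilogtail/conf/${your_region_name}/ilogtail_config.json
  - name: ALIYUN_LOGTAIL_USER_ID
    value: ${your_aliyun_user_id}
  - name: ALIYUN_LOGTAIL_USER_DEFINED_ID
    value: ${your_machine_group_name}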

Step 2: Create a log project and configure a machine group.

Note: Skip this step if you have created a project for collecting the Kubernetes cluster logs in the corresponding region and you plan to use this project.

Create a log project in the Log Service console.


Note: The region of the log project must be the same as that of the Kubernetes cluster.

Configure a machine group.


Note: Select the custom identifier (user-defined ID) option and ensure that the value matches the ${your_machine_group_name} environment variable in the Logtail DaemonSet YAML file edited in step 1.

Step 3: Configure LogtailConfig (log collection configurations).

Create a Logstore.

Note: You can reuse a created Logstore.


Configure the type of the data source to be collected.

Note: Ingress Controller logs are exported to Stdout, so select Docker Stdout as the data source type.


Configure the source of the data to be collected.


For more configuration items, read our documentation. With the preceding configuration, only the Stdout and Stderr logs from containers matching the specified label (that is, the Ingress Controller containers) are collected.

Associate a machine group.


Check Ingress Controller request logs.

After external systems access the services in the Kubernetes cluster through Ingress, check the collected Ingress Controller request logs in the k8s-ingress-controller Logstore.


Import the Collected Logs to the Self-Built ES

This section describes how to collect Ingress Controller logs into a self-built Elasticsearch (ES) cluster. Container Service also provides an appropriate solution if you want to collect Kubernetes cluster pod logs and import them into a self-built ES. For more information, see A solution to log collection problems of Kubernetes clusters by using log-pilot, Elasticsearch, and Kibana.

Step 1: Deploy log-pilot, ES, and Kibana.

Note: Skip this step if you have deployed them in the cluster.

For more information, see A solution to log collection problems of Kubernetes clusters by using log-pilot, Elasticsearch, and Kibana.

Step 2: Configure log collection.

To collect Ingress Controller logs, add a log collection configuration for the controller's stdout.
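
The exact snippet depends on your deployment, but with log-pilot, collection is declared through an environment variable on the container whose logs you want to ship. A minimal sketch, assuming the controller runs as the nginx-ingress-controller Deployment and using ingress-access as an arbitrary index name:

# In the nginx-ingress-controller container spec; log-pilot watches for
# env vars named aliyun_logs_<name> and ships the matching log stream.
env:
  - name: aliyun_logs_ingress-access   # "ingress-access" becomes the index name (example)
    value: stdout                      # collect the container's stdout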

After external systems access the services in the Kubernetes cluster through Ingress, check the collected Ingress Controller request logs in Kibana.


To learn more about Alibaba Cloud Container Service for Kubernetes, visit https://www.alibabacloud.com/product/kubernetes



Source: https://www.alibabacloud.com/blog/kubernetes-ingress-controller-log-collection_594388

Shipping Kubernetes Nginx Ingress Logs Made Easy

Kubernetes is gaining popularity every day. Using an Ingress controller is the preferred method of allowing external access to the services in a cluster. This makes ingress logs incredibly important for tracking the performance of your services, issues, bugs, and even the security of your cluster.

You can learn more about logs in Kubernetes from our Kubernetes Logging Tutorial or check out how you can monitor Kubernetes logs, metrics, and events from our Guide to Kubernetes monitoring.

Depending on the traffic volume, logging each request may end up being expensive. How can you solve this? There are several methods of reducing log volume and, in doing so, the cost as well. Here are a few:

  1. Remove unnecessary log enrichment. Logagent is very neat. It enriches each log line by adding extra container info that is very useful for understanding the context. In the case of an Ingress this extra info may not help that much and could be removed.
  2. Log fewer fields. Logging everything in hopes it will someday be of help is an admirable goal. Logging only important things is much harder. Choose wisely which fields you think will be important. For example, the HTTP referrer field is useful when debugging a web application, but may not be important at the Ingress level, where the main task is to route requests.
  3. Log less data. Logging all requests may provide various stats if logs are used as metrics. If you already collect those metrics, you could just skip logging successful requests.

Following these three steps helped us reduce our log volume in Sematext Logs by 75%.

Let’s see how you can ship Nginx Ingress logs using Sematext Logagent without breaking the bank.

Sematext Logagent is an open-source, lightweight log shipper that parses many log formats out of the box. With its rich set of input and output plugins, it becomes a general ETL tool for time-series data like logs or IoT sensor data. You can read data from various sources such as files, databases, Elasticsearch, or IoT devices (via MQTT), process the data, and store it in files, databases, Apache Kafka, Elasticsearch, InfluxDB, or Sematext Cloud.

In the following examples, we will collect the Nginx Ingress controller log files and ship them to Elasticsearch. We will remove less important fields to reduce log volume.

Shipping Ingress logs

We start by assuming that the basic prerequisites are met: a running Kubernetes cluster with Helm available, the Nginx Ingress controller deployed, and the logs token for your destination at hand.

The first step is to enable JSON logging, by updating the Ingress config section:

defaultBackend:
  replicaCount: 2
controller:
  kind: DaemonSet
  extraEnvs:
    - name: LOGS_TOKEN
      value: "<YOUR_LOGS_TOKEN>"
  config:
    use-forwarded-headers: "true"
    use-geoip: "false"
    use-geoip2: "false"
    log-format-escape-json: "true"
    log-format-upstream: '{ "@timestamp": "$time_iso8601", "remote_addr": "$remote_addr", "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": "$bytes_sent", "request_time": "$request_time", "status": "$status", "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": "$request_length", "duration": "$request_time", "method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent" }'

Logagent can now easily parse and ship logs:

# agent.yaml
region: US
logsToken: "<YOUR_LOGS_TOKEN>"
logagent:
  config:
    MATCH_BY_NAME: .*_(default|ingress)_.*

helm install --name agent stable/sematext-agent -f agent.yaml

By using MATCH_BY_NAME we limit log collection to the default and ingress namespaces.

A sample log line will look like this, and is almost 3k (2570 chars) in size:

{ "@timestamp": "2019-07-29T07:27:32.030Z", "severity": "info", "os": { "host": "ip-10-4-62-243.eu-central-1.compute.internal" }, "timestamp": "2019-07-29T07:27:32+00:00", "remote_addr": "188.26.243.229", "x-forward-for": "188.26.243.229", "request_id": "b7ba683189225e96d7af6b8e42554720", "remote_user": "", "bytes_sent": 974, "request_time": 0.001, "status": 200, "vhost": "k8s-echo.test.elb.eu-west-1.amazonaws.com", "request_proto": "HTTP/1.1", "path": "/echo", "request_query": "", "request_length": 604, "duration": 0.001, "method": "GET", "http_referrer": "", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.1 Safari/605.1.15", "logSource": "sha256:02149b6f439fe60efad026f3846452f3cd0861f3b22f9e9b45eb0a5abba314ac_k8s_nginx-ingress-controller_ingress-nginx-ingress-controller-5rss4_ingress_09ac9e9d-b1d2-11e9-b8ba-027092ac4e5e_0_a6b891d21807", "container": { "id": "a6b891d21807e59124982a0812eecb80c91d57d2eae35086bdb92bc20e2aea88", "type": "docker", "name": "k8s_nginx-ingress-controller_ingress-nginx-ingress-controller-5rss4_ingress_09ac9e9d-b1d2-11e9-b8ba-027092ac4e5e_0", "image": { "name": "sha256:02149b6f439fe60efad026f3846452f3cd0861f3b22f9e9b45eb0a5abba314ac" }, "host": { "hostname": "ip-10-4-62-243.eu-central-1.compute.internal" } }, "labels": { "io_kubernetes_container_logpath": "/var/log/pods/09ac9e9d-b1d2-11e9-b8ba-027092ac4e5e/nginx-ingress-controller/0.log", "io_kubernetes_container_name": "nginx-ingress-controller", "io_kubernetes_docker_type": "container", "io_kubernetes_pod_name": "ingress-nginx-ingress-controller-5rss4", "io_kubernetes_pod_namespace": "ingress", "io_kubernetes_pod_uid": "09ac9e9d-b1d2-11e9-b8ba-027092ac4e5e", "io_kubernetes_sandbox_id": "ca9a453a24ddcfbd33834a20769cca82fbff50a8aef1414c8dd1b4846e5d9d33", "annotation_io_kubernetes_container_hash": "aebb22e4", "annotation_io_kubernetes_container_ports": "[{\"name\":\"http\",\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"containerPort\":443,\"protocol\":\"TCP\"}]", "annotation_io_kubernetes_container_restartCount": "0", "annotation_io_kubernetes_container_terminationMessagePath": "/dev/termination-log", "annotation_io_kubernetes_container_terminationMessagePolicy": "File", "annotation_io_kubernetes_pod_terminationGracePeriod": "60" }, "@timestamp_received": "2019-07-29T07:27:41.697Z", "logsene_orig_type": "logs" }

Remove log enrichment

We immediately notice three larger fields, container, labels, and logSource, that were added by Logagent. These can be removed in this context by adding REMOVE_FIELDS to the configuration:

# agent.yaml
region: US
logsToken: "<YOUR_LOGS_TOKEN>"
logagent:
  config:
    MATCH_BY_NAME: .*_(default|ingress)_.*
    REMOVE_FIELDS: container,labels,logSource

helm upgrade agent stable/sematext-agent -f agent.yaml

The result is only 1k (890 chars) in size:

{ "@timestamp": "2019-07-29T07:42:39.368Z", "severity": "info", "os": { "host": "ip-10-4-62-243.eu-central-1.compute.internal" }, "timestamp": "2019-07-29T07:42:39+00:00", "remote_addr": "188.26.243.229", "x-forward-for": "188.26.243.229", "request_id": "8074ba0130449bf3eff03655c3e7da5e", "remote_user": "", "bytes_sent": 971, "request_time": 0.001, "status": 200, "vhost": "k8s-echo.test.elb.eu-west-1.amazonaws.com", "request_proto": "HTTP/1.1", "path": "/echo", "request_query": "", "request_length": 604, "duration": 0.001, "method": "GET", "http_referrer": "", "http_user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/12.1.1 Safari/605.1.15", "index": "c7718d77-9d0a-4729-96a9-08d32c6fa07d", "@timestamp_received": "2019-07-29T07:42:45.980Z", "logsene_orig_type": "logs" }

Log fewer fields

This looks small, but if you process millions of requests daily, it adds up by the end of the month. Removing some unneeded fields may improve things even further. This time the fields should be removed from the Nginx Ingress log format itself.

log-format-upstream: '{ "@timestamp": "$time_iso8601", "remote_addr": "$remote_addr", "bytes_sent": "$bytes_sent", "request_time": "$request_time", "status": "$status", "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": "$request_length", "duration": "$request_time", "method": "$request_method"}'

The result is now only 0.5k (569 chars) in size:

{ "@timestamp": "2019-07-29T08:00:53.000Z", "severity": "info", "os": { "host": "ip-10-4-62-243.eu-central-1.compute.internal" }, "remote_addr": "188.26.243.229", "bytes_sent": 974, "request_time": 0.002, "status": 200, "vhost": "k8s-echo.test.elb.eu-west-1.amazonaws.com", "request_proto": "HTTP/1.1", "path": "/echo", "request_query": "", "request_length": 604, "duration": 0.002, "method": "GET", "index": "c7718d77-9d0a-4729-96a9-08d32c6fa07d", "@timestamp_received": "2019-07-29T08:00:55.979Z", "logsene_orig_type": "logs" }

Log less data

Pretty big change from 2570 chars to 569 chars. There’s only one way to reduce it even further, and that’s to not ship all of the logs. For example, the 2xx requests can be dropped by filtering them with IGNORE_LOGS_PATTERN in Logagent:

# agent.yaml
region: US
logsToken: "<YOUR_LOGS_TOKEN>"
logagent:
  config:
    MATCH_BY_NAME: .*_(default|ingress)_.*
    REMOVE_FIELDS: container,labels,logSource
    IGNORE_LOGS_PATTERN: \"status\":\s20

helm upgrade agent stable/sematext-agent -f agent.yaml

Summary

The examples above show that Logagent provides all the required methods to reduce log volume and at the same time reduce costs. By using MATCH_BY_NAME you can limit log collection to desired namespaces. Unneeded fields can be removed using REMOVE_FIELDS in the configuration. Even entire log lines can be ignored with IGNORE_LOGS_PATTERN. Logagent makes it easy to slim down any logs with very little effort.
Don’t hesitate to shoot us a message about any questions you may have.


Source: https://sematext.com/blog/shipping-nginx-ingress-logs/

Troubleshooting

Ingress-Controller Logs and Events

There are many ways to troubleshoot the ingress-controller. The following are basic troubleshooting methods to obtain more information.

Check the Ingress Resource Events

Check the Ingress Controller Logs

Check the Nginx Configuration

Check if used Services Exist
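
Each check maps to a kubectl command. A minimal sketch (namespaces and resource names are examples):

kubectl describe ing <ingress-name> -n <namespace>   # Ingress resource events
kubectl logs -n ingress-nginx <controller-pod>       # ingress controller logs
kubectl exec -n ingress-nginx <controller-pod> -- cat /etc/nginx/nginx.conf   # rendered nginx config
kubectl get svc -n <namespace>                       # confirm the referenced Services exist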

Debug Logging

Using the --v flag it is possible to increase the level of logging. This is performed by editing the deployment.

  • --v=2 shows details using diff about the changes in the configuration in nginx
  • --v=3 shows details about the service, Ingress rule, endpoint changes and it dumps the nginx configuration in JSON format
  • --v=5 configures NGINX in debug mode
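
Raising the verbosity by editing the deployment looks roughly like this (deployment and container names are examples; match them to your installation):

# kubectl edit deployment -n ingress-nginx ingress-nginx-controller
spec:
  template:
    spec:
      containers:
        - name: controller
          args:
            - /nginx-ingress-controller
            - --v=3   # increase log verbosity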

Authentication to the Kubernetes API Server

A number of components are involved in the authentication process and the first step is to narrow down the source of the problem, namely whether it is a problem with service authentication or with the kubeconfig file.

Both authentications must work:

Service authentication

The Ingress controller needs information from apiserver. Therefore, authentication is required, which can be achieved in two different ways:

  1. Service Account: This is recommended, because nothing has to be configured. The Ingress controller will use information provided by the system to communicate with the API server. See 'Service Account' section for details.

  2. Kubeconfig file: In some Kubernetes environments service accounts are not available. In this case a manual configuration is required. The Ingress controller binary can be started with the --kubeconfig flag. The value of the flag is a path to a file specifying how to connect to the API server. Using --kubeconfig does not require the flag --apiserver-host. The format of the file is identical to ~/.kube/config, which is used by kubectl to connect to the API server. See 'kubeconfig' section for details.

  3. Using the flag --apiserver-host: Using this flag it is possible to specify an unsecured API server or reach a remote kubernetes cluster using kubectl proxy. Please do not use this approach in production.

[Diagram: the full authentication flow with all options, starting with the browser on the lower left hand side.]

Service Account

If using a service account to connect to the API server, the ingress-controller expects the file /var/run/secrets/kubernetes.io/serviceaccount/token to be present. It provides a secret token that is required to authenticate with the API server.

Verify with the following commands:
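
A sketch of the verification (pod name and namespace are examples):

kubectl exec -it -n ingress-nginx <controller-pod> -- sh
# Inside the pod:
ls /var/run/secrets/kubernetes.io/serviceaccount/        # should list ca.crt, namespace, token
cat /var/run/secrets/kubernetes.io/serviceaccount/token  # should print a non-empty token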

If it is not working, there are two possible reasons:

  1. The contents of the tokens are invalid. Find the secret name with kubectl get secrets | grep service-account and delete it with kubectl delete secret <name>. It will automatically be recreated.

  2. You have a non-standard Kubernetes installation and the file containing the token may not be present. The API server will mount a volume containing this file, but only if the API server is configured to use the ServiceAccount admission controller. If you experience this error, verify that your API server is using the ServiceAccount admission controller. If you are configuring the API server by hand, you can set this with the --admission-control parameter (--enable-admission-plugins on newer Kubernetes versions).

    Note that you should use other admission controllers as well. Before configuring this option, you should read about admission controllers.


Kube-Config

If you want to use a kubeconfig file for authentication, follow the deploy procedure and add the --kubeconfig flag to the args section of the deployment.

Using GDB with Nginx

GDB can be used with nginx to perform a configuration dump. This allows us to see which configuration is being used, as well as older configurations.

Note: The below is based on the nginx documentation.

  1. SSH into the worker
  2. Obtain the Docker container running nginx
  3. Exec into the container
  4. Make sure nginx is running with --with-debug
  5. Get the list of processes running on the container
  6. Attach gdb to the nginx master process
  7. Copy and paste the configuration-dump script from the nginx documentation
  8. Quit GDB by pressing CTRL+D
  9. Open nginx_conf.txt
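
The shell side of this procedure looks roughly like the following (container and process names are illustrative; the gdb dump script pasted in step 7 is reproduced in the nginx documentation):

# On the worker node:
docker ps | grep nginx-ingress-controller        # find the container
docker exec -it --user=0 --privileged <container-id> bash
nginx -V 2>&1 | grep -- '--with-debug'           # confirm a debug build
ps -ef | grep 'nginx: master'                    # find the master process PID
gdb -p <master-pid>                              # attach gdb, then paste the dump script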

Source: https://kubernetes.github.io/ingress-nginx/troubleshooting/

Nginx Ingress Log Shipping


Kubernetes is gaining popularity every day. Using an Ingress controller is the preferred method of allowing external access to the services in a cluster. This makes Ingress logs incredibly important for tracking the performance of your services, issues, bugs, and the security of your cluster.

Ship Ingress logs

Note: Make sure that the prerequisites are met before continuing: a running Kubernetes cluster with Helm available, the Nginx Ingress controller deployed, and your logs token at hand.

Enable JSON logging, by updating the Ingress config section:

defaultBackend:
  replicaCount: 2
controller:
  kind: DaemonSet
  extraEnvs:
    - name: LOGS_TOKEN
      value: "<YOUR_LOGS_TOKEN>"
  config:
    use-forwarded-headers: "true"
    use-geoip: "false"
    use-geoip2: "false"
    log-format-escape-json: "true"
    log-format-upstream: '{"@timestamp": "$time_iso8601", "remote_addr": "$remote_addr", "x-forward-for": "$proxy_add_x_forwarded_for", "request_id": "$req_id", "remote_user": "$remote_user", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time, "method": "$request_method", "http_referrer": "$http_referer", "http_user_agent": "$http_user_agent"}'

To limit log collection to the default and ingress namespaces, use the MATCH_BY_NAME option.

Create an agent.yaml file that looks like this:

region: US
logsToken: "<YOUR_LOGS_TOKEN>"
logagent:
  config:
    MATCH_BY_NAME: .*_(default|ingress)_.*

Set up Logagent to parse and ship logs:

helm install --name agent stable/sematext-agent -f agent.yaml

Remove log enrichment

Some of the larger fields like container, labels, and logSource are added by Logagent for better context. These can be removed by using the REMOVE_FIELDS option in Logagent:

Add the REMOVE_FIELDS option to your agent.yaml:

region: US
logsToken: "<YOUR_LOGS_TOKEN>"
logagent:
  config:
    MATCH_BY_NAME: .*_(default|ingress)_.*
    REMOVE_FIELDS: container,labels,logSource

Run the Helm upgrade command:

helm upgrade agent stable/sematext-agent -f agent.yaml

Remove unneeded fields

The same thing can be done by removing the unneeded fields from the Nginx Ingress log format.

log-format-upstream: '{"@timestamp": "$time_iso8601", "remote_addr": "$remote_addr", "bytes_sent": $bytes_sent, "request_time": $request_time, "status": $status, "vhost": "$host", "request_proto": "$server_protocol", "path": "$uri", "request_query": "$args", "request_length": $request_length, "duration": $request_time, "method": "$request_method", "http_user_agent": "$http_user_agent"}'

Remove unneeded logs

To reduce log size even further, some of the logs can be dropped. For example, the 2xx requests can be filtered out by using the IGNORE_LOGS_PATTERN option in Logagent:

Add the IGNORE_LOGS_PATTERN option to your agent.yaml:

region: US
logsToken: "<YOUR_LOGS_TOKEN>"
logagent:
  config:
    MATCH_BY_NAME: .*_(default|ingress)_.*
    REMOVE_FIELDS: container,labels,logSource
    IGNORE_LOGS_PATTERN: \"status\":\s20

Run the Helm upgrade once again:

helm upgrade agent stable/sematext-agent -f agent.yaml

By using MATCH_BY_NAME you can limit log collection to desired namespaces. Unneeded fields can be removed using REMOVE_FIELDS in the configuration. Even entire log lines can be ignored with IGNORE_LOGS_PATTERN. Logagent makes it easy to slim down any logs with very little effort.

Source: https://sematext.com/docs/logagent/how-to-nginx-ingress-log-shipping/


Logging with the HAProxy Kubernetes Ingress Controller

The HAProxy Kubernetes Ingress Controller publishes two sets of logs: the ingress controller logs and the HAProxy access logs.

 

After you install the HAProxy Kubernetes Ingress Controller, logging jumps to mind as one of the first features to configure. Logs will tell you whether the controller has started up correctly and which version of the controller you’re running, and they will assist in pinpointing any user experience issues. Getting access to HAProxy’s verbose access logs will pay big dividends, but it requires a small amount of setup.

Putting aside the fact that the ingress controller may be deployed as one or many containers across your cluster—either as a ReplicaSet or a DaemonSet—conceptually it is a single program. All of its components are neatly packaged inside of a Docker image. However, inside that image there are two distinct parts: The ingress controller process and the HAProxy load balancer.

The controller part handles watching the cluster for changes to pods, secrets, and other types of Kubernetes objects. When it detects a change, it triggers an update to the adjacent HAProxy load balancer configuration. The HAProxy part handles routing, TLS encryption, rate limiting and other proxy tasks. Because there are two parts, there are two sources of log messages.

In this blog post, you will learn how to configure your controller logs and HAProxy access logs. We’ll also consider a few special cases, such as how to capture information about request rates and TLS sessions.

Ingress Controller Logs

Ingress controller logs are what you see right after installing the ingress controller. If you call kubectl logs with the name of one of the ingress controller pods, you’ll see important information about the startup such as the controller version, ConfigMap values, default TLS certificate, etc.

Then, at runtime, more information will be logged depending on the log verbosity level, which you can set with the controller’s --log startup argument to one of error, warning, info, debug or trace. It defaults to info, which, in addition to capturing errors and warnings, reports important changes like updating default options and reloading HAProxy. To change this, you must set it during installation, as shown below where we set the log level to debug:
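
With the Helm chart that could look like the following; the chart location and the extraArgs value are assumptions based on the haproxytech chart layout, so adjust them to your setup:

helm install haproxy-ingress haproxytech/kubernetes-ingress \
  --set "controller.extraArgs={--log=debug}"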

Debug logs give detailed information about what the controller is doing. Trace level logs will log, on top of that, all Kubernetes events that the controller receives.

HAProxy Logs

HAProxy emits a different set of log messages that contain a wealth of information, which can aid in identifying trends and spotting anomalies in your traffic. HAProxy access logs can be configured via the following annotations:

  • syslog-server: Configures one or more Syslog servers where logs should be sent.
  • log-format: Sets the default log format string.
  • dontlognull: Skips logging of connections that send no data, which can happen with monitoring systems.
  • logasap: Logs request and response data as soon as the server returns a complete set of HTTP response headers instead of waiting for the response to finish sending all data.

Until you’ve configured the syslog-server annotation, you will not see access logs. In the next section, you’ll learn how.

Setting the Log Target

To set up your access logs, create or update the controller’s ConfigMap. Be sure to give it the same namespace and name as shown in the startup logs (e.g. default/haproxy-kubernetes-ingress). Below, we set the syslog-server annotation in a ConfigMap definition:
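
A minimal sketch of such a ConfigMap, using stdout as an example log target:

apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-kubernetes-ingress
  namespace: default
data:
  syslog-server: "address:stdout, format: raw, facility:daemon"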

Then, we apply it with kubectl apply:
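
kubectl apply -f haproxy-ingress-configmap.yaml   # file name is an example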

If you deploy the ingress controller using the Helm chart, you can set these values during installation, as shown:
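
For example, via a values file (the controller.config path is an assumption based on the chart's structure):

# values.yaml
controller:
  config:
    syslog-server: "address:stdout, format: raw, facility:daemon"

helm install haproxy-ingress haproxytech/kubernetes-ingress -f values.yaml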

To send logs to stdout, use this value:
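
syslog-server: "address:stdout, format: raw, facility:daemon"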

Then, you can see the log messages by calling kubectl logs. This is useful for quick setups, proofs of concept, debugging, and other ad hoc situations. However, for a production environment, log retention and collection are important considerations to keep in mind.

One way to make that possible is to configure a logging driver that redirects the log stream from stdout to a target, such as to a file. According to the Kubernetes Logging Architecture guide, the container engine (i.e. Docker) is responsible for redirecting container logs:

Everything a containerized application writes to stdout and stderr is handled and redirected somewhere by a container engine. For example, the Docker container engine redirects those two streams to a logging driver, which is configured in Kubernetes to write to a file in JSON format.

In the case of the Docker container engine, log retention can be set via the max-size option under log-opts in Docker’s daemon.json file. However, making changes to the underlying container engine on each node in a Kubernetes cluster is not everyone’s preference.

Another option exists. You can send HAProxy’s access logs to a syslog server simply by using a different value for the syslog-server annotation. That server could be a sidecar container that listens on the loopback address, in which case you’d set the annotation like this:
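
syslog-server: "address:127.0.0.1, port:514, facility:local0"   # port and facility are example values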

Or the syslog server may be deployed as a separate Kubernetes service that receives logs from multiple ingress controller pods and aggregates them. In that case, set the annotation like this:
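
syslog-server: "address:syslog.logging.svc.cluster.local, port:514, facility:local0"   # service name is an example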

The Log Format

HAProxy’s log format string defines what HAProxy will log. The default value is the HTTP log format, which generates a line that looks like this:
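
An illustrative example (values are made up; the shape follows HAProxy's default HTTP log format):

10.244.0.1:59322 [12/Jan/2021:10:15:30.123] https~ default-web-service-80/SRV_1 0/0/1/2/3 200 1250 - - ---- 1/1/0/0/0 0/0 "GET /api/ping HTTP/1.1"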

Read our blog post Introduction to HAProxy Logging to learn more about each of these fields. To change the format, set the log-format annotation.

Note that when using TLS passthrough, HAProxy won’t do Layer 7 inspection but passes TLS traffic directly to backends in TCP mode. In this case, the controller will use a TCP log format string where it also records the SNI value of the TLS connection.

Log Custom Information

In addition to being able to change the default log format to record different information, you can use the request-capture annotation in your Ingress or Service definitions to capture an HAProxy expression. An expression can include fetch methods and converters.

Here are some expressions:

  • req.hdr(foo),lower: returns the content of the foo header converted to lowercase.
  • req.body_param(foo): returns the foo parameter from the URL-encoded body of a POST request.
  • ssl_fc_sni: returns the value of the SNI extension.

A simple use case of request-capture is logging specific HTTP headers. For example, you might want to capture a header that contains a request ID used for debugging or tracing. In AWS this would be the X-Amzn-Trace-Id header. In this case, the annotation value would be an expression like this:
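
A sketch, assuming the controller's haproxy.org annotation prefix:

haproxy.org/request-capture: req.hdr(X-Amzn-Trace-Id)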

This will provide a log line like this, where the Trace ID is Root=1-5e9df9d4-ca09fd0867923f2862d8504a:

This can be handy when you need to cross reference logs of different components that a request trace passes through. Another example is logging the Authorization header, e.g. with req.hdr(Authorization), to see what type of authorization an HTTP request used.

A more advanced use case for request-capture would be to log the number of requests per second originating from a given source IP address after you’ve enabled rate limiting with the rate-limit-requests annotation. When you enable rate limiting, a stick table that tracks the request rate per client is created in HAProxy. The stick table is always given a conventional name of RateLimit-<rate limit period>. The default period for rate limiting is one second (1000 milliseconds), thus the stick table is named RateLimit-1000.

In the following example, we capture the request rate by using the sc0_http_req_rate fetch method with the name of the stick table as a parameter:
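
A sketch, again assuming the haproxy.org annotation prefix; sc0_http_req_rate reads the tracked client's request rate from the named stick table:

haproxy.org/request-capture: sc0_http_req_rate(RateLimit-1000)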

This will provide access logs that look like the following, where the rate per second is shown in curly braces:

Available sample fetches cover more than the HTTP layer. For example, if the ingress controller has TLS enabled, you can log TLS information such as whether the TLS session is new or resumed (ssl_fc_is_resumed), whether a client certificate was used (ssl_c_used), and the name of the TLS cipher negotiated for the connection (ssl_fc_cipher). The following example shows a request-capture with those fields:
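
A sketch with one expression per line (the multi-line value and the annotation prefix are assumptions):

haproxy.org/request-capture: |
  ssl_fc_is_resumed
  ssl_c_used
  ssl_fc_cipher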

The log output looks like this, where the respective values are 0, no value, and TLS_AES_256_GCM_SHA384:

Conclusion

The HAProxy Kubernetes Ingress Controller has two sources of logs: the controller and the HAProxy load balancer. Both can be customized. You can set a different verbosity level for the controller logs and define a new log format and target for the HAProxy logs. There’s support for capturing custom information too, such as to record specific HTTP headers, request rates, or TLS fields. With all of these options in hand, you can take advantage of the detailed information only HAProxy offers.

Want to stay up to date on similar topics? Subscribe to our blog! You can also follow us on Twitter and join the conversation on Slack.

Source: https://www.haproxy.com/blog/logging-with-the-haproxy-kubernetes-ingress-controller/