Bug Report

**Describe the bug**

When Fluent Bit 1.8.9 first restarts to apply configuration changes, we are seeing spamming errors in the log like:

```
[2021/10/30 02:47:00] [ warn] [engine] failed to flush chunk '2372-1635562009.567200761.flb', ...
```

The warning recurs for chunk after chunk, with ever-growing retry intervals:

```
[2021/11/17 17:18:07] [ warn] [engine] failed to flush chunk '1-1637166971.404071542.flb', retry in 771 seconds: task_id=346, input=tail.0 > output=es.0 (out_id=0)
[2021/11/17 17:18:07] [ warn] [engine] failed to flush chunk '1-1637167230.683033285.flb', retry in 1844 seconds: task_id=481, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920969.178403746.flb', retry in 130 seconds: task_id=774, input=tail.0 > output=es.0 (out_id=0)
```

A Fluentd deployment shows the equivalent symptom, `[warn]: temporarily failed to flush the buffer.`, and once a day or two Fluentd gets `[warn]: #0 emit transaction failed`.
**Debug trace**

Running the fluent/fluent-bit:1.9.0-debug image with `Log_level debug` shows the failing flush cycle in detail. A task is created and assigned to a worker thread, the `/_bulk` request itself returns HTTP 200, yet the flush still ends in `cb_destroy` plus a new retry:

```
[2022/03/25 07:08:30] [debug] [input chunk] update output instances with new chunk size diff=665
[2022/03/25 07:08:31] [debug] [task] created task=0x7ff2f183a0c0 id=9 OK
[2022/03/25 07:08:31] [debug] [upstream] KA connection #118 to 10.3.4.84:9200 has been assigned (recycled)
[2022/03/25 07:08:31] [debug] [retry] new retry created for task_id=9 attempts=1
[2022/03/25 07:08:32] [debug] [http_client] not using http_proxy for header
[2022/03/25 07:08:32] [debug] [outputes.0] HTTP Status=200 URI=/_bulk
[2022/03/25 07:08:32] [debug] [out coro] cb_destroy coro_id=8
[2022/03/25 07:08:32] [ warn] [engine] failed to flush chunk '1-1648192100.653122953.flb', retry in 17 seconds: task_id=3, input=tail.0 > output=es.0 (out_id=0)
[2022/03/25 07:08:51] [ warn] [engine] failed to flush chunk '1-1648192110.850147571.flb', retry in 37 seconds: task_id=9, input=tail.0 > output=es.0 (out_id=0)
```

One reason a flush fails even on HTTP 200: the bulk response grows past the output's 512 KB response buffer, so Fluent Bit cannot read and validate the JSON at all:

```
[2022/03/24 04:19:38] [ warn] [http_client] cannot increase buffer: current=512000 requested=544768 max=512000
[2022/03/24 04:21:08] [error] [outputes.0] could not pack/validate JSON response
```
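Since the warning shows the 512000-byte response buffer being exhausted, one workaround is to raise (or remove) the es output's response buffer limit. A minimal sketch, assuming the standard out_es `Buffer_Size` option; the 2MB value and `Match *` are illustrative choices, not taken from this thread:

```
[OUTPUT]
    Name        es
    Match       *
    Host        10.3.4.84
    Port        9200
    # Buffer used to read the /_bulk response; the cap is what produces
    # "cannot increase buffer: current=512000 requested=544768 max=512000".
    # Use a larger size, or "False" to remove the limit while debugging.
    Buffer_Size 2MB
```

Raising the buffer does not fix the rejected documents; it just lets Fluent Bit parse the response and log the real per-item errors, shown next.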
**The underlying error: a mapping conflict on `kubernetes.labels.app`**

There were the same issues here, and after setting `Trace_Error On` the error logs show what Elasticsearch actually returns for those "successful" bulk requests: `"errors":true`, with the rejected items all failing the same way:

```
{"took":3473,"errors":true,"items":[{"create":{"_index":"logstash-2022.03.24","_type":"_doc","_id":"2-Mmun8BI6SaBP9luq99","status":400,"error":{"type":"mapper_parsing_exception","reason":"Could not dynamically add mapping for field [app.kubernetes.io/instance]. Existing mapping for [kubernetes.labels.app] must be of type object but found [text]."}}}, ...]}
```

Some pods carry a plain `app=<value>` label, which made Elasticsearch map `kubernetes.labels.app` as `text`. A document from a pod labelled `app.kubernetes.io/instance` needs `kubernetes.labels.app` to be an `object`, so every such document is rejected with status 400, and any chunk containing one can never be flushed. Just like @lifeofmoo mentioned, initially everything went well (also on OpenSearch) and the "failed to flush chunk" issue only came out later, i.e. once both label styles had been written into the same daily `logstash-*` index. `Replace_Dots On` was already set and did not make the 400s go away.
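One common mitigation for this class of conflict (an assumption on my part, not something proposed in this thread) is to stop dynamic object mapping for labels altogether by mapping `kubernetes.labels` as a single `flattened` field, available since Elasticsearch 7.3 and therefore applicable to the 7.6.2 cluster here. A sketch using the legacy template API; the template name and index pattern are placeholders:

```json
PUT _template/logstash-k8s-labels
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "properties": {
      "kubernetes": {
        "properties": {
          "labels": { "type": "flattened" }
        }
      }
    }
  }
}
```

With `flattened`, both `app=foo` and `app.kubernetes.io/instance=bar` are stored as leaf values of one field, so neither label style can invalidate the other's mapping; the trade-offs are reduced per-label search features, and the template only affects indices created after it is installed.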
**Retry behaviour**

Nothing recovers on its own: the scheduler keeps re-using the retry for each failed task, and the backoff delays grow from seconds to many minutes:

```
[2022/03/25 07:08:39] [debug] [retry] re-using retry for task_id=6 attempts=2
[2022/03/25 07:08:39] [ warn] [engine] failed to flush chunk '1-1648192098.623024610.flb', retry in 16 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:48:51] [ warn] [engine] failed to flush chunk '1-1647920894.173241698.flb', retry in 58 seconds: task_id=700, input=tail.0 > output=es.0 (out_id=0)
[2022/03/22 03:57:48] [ warn] [engine] failed to flush chunk '1-1647920802.180669296.flb', retry in 1160 seconds: task_id=608, input=tail.0 > output=es.0 (out_id=0)
```

@evheniyt thanks. It took some time after the holiday (happy new year, everybody) to dive into the fluent-bit errors, and as you can see there is nothing special in them except `failed to flush chunk` and, eventually, `chunk cannot be retried`.
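The 16 / 58 / 1160-second delays above are the scheduler's exponential backoff at work. If retrying a poison chunk for hours is worse than dropping it, the backoff and the retry budget can both be bounded. A sketch assuming Fluent Bit >= 1.8.7, where the `scheduler.base`/`scheduler.cap` service options exist; the values are illustrative:

```
[SERVICE]
    # Exponential backoff for retries: base 5s, but never wait longer than 60s
    scheduler.base 5
    scheduler.cap  60

[OUTPUT]
    Name        es
    Match       *
    # Retire a chunk after 5 failed attempts instead of retrying indefinitely
    Retry_Limit 5
```

A chunk rejected by the mapping conflict can never succeed, so bounding retries only limits the noise and memory growth; the documents themselves still need the mapping fixed.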
**Context**

I'm using Fluentd in my Kubernetes cluster to collect logs from the pods and send them to Elasticsearch, with Fluent Bit as the node-level shipper (`Logstash_Format On` and `Replace_Dots On` in the es output). The fluent-bit-to-es symptom (`[ warn] [engine] failed to flush chunk`) is tracked upstream in https://github.com/fluent/fluent-bit/issues/4386, and the same problem was reported in sassoftware/viya4-monitoring-kubernetes#431, where the output targets `Host {{ .Release.Name }}-elasticsearch-master` and the other values.yaml settings are kept at the chart defaults.
**Tail input during the failures**

The tail input itself behaves normally: logs from the short-lived Argo `hello-world-*` pods are picked up, and rotated or deleted files are removed from the watch list:

```
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] scan_glob add(): /var/log/containers/hello-world-wpr5j_argo_main-55a61ed18250cc1e46ac98d918072e16dab1c6a73f7f9cf0a5dd096959cf6964.log, inode 35326802
[2022/03/24 04:21:20] [debug] [input:tail:tail.0] inode=35326802 with offset=0 appended as /var/log/containers/hello-world-wpr5j_argo_main-55a61ed18250cc1e46ac98d918072e16dab1c6a73f7f9cf0a5dd096959cf6964.log
[2022/03/25 07:08:21] [debug] [input:tail:tail.0] inode=34055641 file has been deleted: /var/log/containers/hello-world-bjfnf_argo_main-0b26876c79c5790bdaf62ba2d9512269459746b1c5711a6445256dc5a4930b65.log
[2022/03/25 07:08:21] [ info] [input:tail:tail.0] inotify_fs_remove(): inode=104051102 watch_fd=9
[2022/03/25 07:08:27] [debug] [input:tail:tail.0] inode=69179617 events: IN_MODIFY
```

But while the failed chunks pile up, the input's memory buffer fills and the engine pauses the tail input, so new records cannot be appended until the retries drain (and are lost if the files rotate away first):

```
[2022/03/24 04:19:20] [debug] [input chunk] tail.0 is paused, cannot append records
```
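To keep this backpressure from pausing the tail input while the output is stuck in retries, chunk buffering can be moved to the filesystem. A minimal sketch using Fluent Bit's standard storage options; the path and limits are placeholders, not taken from this thread:

```
[SERVICE]
    storage.path              /var/log/flb-storage/
    storage.sync              normal
    storage.backlog.mem_limit 50MB

[INPUT]
    Name          tail
    Path          /var/log/containers/*.log
    # With filesystem buffering, hitting Mem_Buf_Limit makes the input spill
    # new chunks to disk instead of pausing and dropping records.
    storage.type  filesystem
    Mem_Buf_Limit 50MB
```

Filesystem buffering does not unblock the poisoned chunks either, but it stops the backpressure from interrupting ingestion of healthy records.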
**Fluentd side**

The Fluentd collector pods in the same cluster throw matching errors:

```
2022-01-28T05:59:48.087126221Z 2022-01-28 05:59:48 +0000 : [retry_default] failed to flush the buffer.
retry_time=2 next ...
```

In this step I had 5 Fluentd pods, and 2 of them were OOMKilled and restarted several times. Fluentd does not handle a large number of chunks well when starting up, so that can be a problem as well; under 200 tps everything is working. The chunks are not silently lost on the Fluent Bit side either: if I send the CONT signal to fluent-bit, I can see that it still holds them.
**Questions and follow-ups from the thread**

- Hi @yangtian9999: are you still receiving some of the records on the ES side, or has it stopped receiving records altogether? Judging by the bulk responses, indexing never stops completely: `/_bulk` keeps returning HTTP 200 and documents without the conflicting label are accepted, while the rejected ones keep their chunks in retry.
- Is there any way to skip the `create` when the document already exists? That is what the es output's `Write_Operation` controls; it is present but commented out (`#Write_Operation upsert`) in the config, see the sketch below.
- I am wondering whether I should update ES to the latest 7.x version. Probably not the fix on its own: @lifeofmoo saw the same "failed to flush chunk" behaviour appear on OpenSearch after an initially clean run, and a similar report exists for a Graylog stack deployed with Helm charts.
- Related symptom reports: "Fluentbit stops sending data to output" and "Logs not being flushed after x amount of time".

The thread ends with: "we can close this issue."
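For reference, a sketch of the `Write_Operation` knob mentioned above (an out_es option in the Fluent Bit 1.8 series); the semantics are summarized as comments, and note that `upsert` needs an explicit document id via `Id_Key` or `Generate_ID`:

```
[OUTPUT]
    Name            es
    Match           *
    # create (default): fails when the _id already exists
    # index:           adds or replaces the document
    # update/upsert:   modify an existing document; upsert inserts if missing
    Write_Operation upsert
    Generate_ID     On
```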
**To Reproduce**

Deploy the chart with the updated values.yaml (keeping the other values at their defaults) and let the Argo workflow pods write logs. As soon as pods using the plain `app` label and pods using `app.kubernetes.io/*` labels land in the same daily index, the 400s and the `failed to flush chunk` spam start.

**Your Environment**

* Fluent Bit: fluent/fluent-bit 1.8.12 (also reproduced with the fluent/fluent-bit:1.9.0-debug image); Fluentd/td-agent for the collector tier
* Elasticsearch: 7.6.2, plain HTTP (no TLS required for es)
* Kubernetes: k3s 1.19.8 with the docker-ce backend, 20.10.12
* Operating System and version: CentOS 7.9, kernel 5.4 LTS
* Filters and plugins: tail (container logs) and forward inputs, cpu/disk/mem/netif metrics inputs, parser filter, es output

**Configuration** (the `[OUTPUT]` section lives under `outputs: |` in values.yaml; see the sketch below):

```
[SERVICE]
    Flush        1
    Daemon       off
    Log_level    info
    Parsers_File parsers.conf
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_PORT    2020

[INPUT]
    Name   forward
    Listen 0.0.0.0
    Port   24224

[INPUT]
    name cpu
    tag  metrics_cpu

[INPUT]
    name disk
    tag  metrics_disk

[INPUT]
    name mem
    tag  metrics_memory

[INPUT]
    name      netif
    tag       metrics_netif
    interface eth0

[FILTER]
    Name parser
    ...
```
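Pieced together from the fragments quoted in this thread (`Name es`, the Helm-templated `Host`, `Logstash_Format On`, `Replace_Dots On`, `Trace_Error On`, and the commented-out `Write_Operation`), the output section of values.yaml plausibly looks like the sketch below. `Match`, `Port`, and the chart layout (`config.outputs`, as in the upstream fluent-bit Helm chart) are assumptions:

```yaml
config:
  outputs: |
    [OUTPUT]
        Name            es
        Match           *
        # resolves to 10.3.4.84 in the logs above
        Host            {{ .Release.Name }}-elasticsearch-master
        Port            9200
        Logstash_Format On
        Replace_Dots    On
        Trace_Error     On
        #Write_Operation upsert
```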