ahole[m]Hi
alesticHi guys, I am having some trouble with Logstash
alesticI wrote a little Python app that sets up Logstash with Docker to run Elasticsearch ingest tasks
alesticeverything was working fine
alesticI tried to provide a file with nested objects and now it does not even start
alesticthe index template is loaded
alesticthe data directory is mounted
alesticin fact some Logstash-related directories are created
alestic dead_letter_queue plugins queue uuid .lock
alesticbut it does not read the file
alesticI have set it to start from the beginning in the input config
alesticno errors are shown in the logs
alesticI know it is a very specific thing to ask, but maybe I am missing something and someone has an idea
alesticnever mind, I found the error. It was my filter dropping everything because I changed my data structure
nmnayeckhello
nmnayeckWhen I run my Logstash conf, which contains a jdbc input and an es output, I get this message: [2018-02-06T17:00:55,089][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>33, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:133:in `initialize'"}, {"thread_id"=>36, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:133:in `initialize'"}]}}
nmnayeckand Logstash does not terminate
nmnayecknot sure what I am doing wrong
alesticit would help to see your config file
nmnayeckthis is the output block in the config:
nmnayeckoutput {
nmnayeck# file {
nmnayeck# path => "D:/tools/logstash/logstash-6.1.3/mars/query_comptes_client_personne.json"
nmnayeck# }
nmnayeck elasticsearch {
nmnayeck# http_compression => "true"
nmnayeck hosts => ["localhost:9200"]
nmnayeck index => "comptes_client_personne"
nmnayeck document_type => "comptes_client_personne"
nmnayeck document_id => "%{id}"
nmnayeck user => "elastic"
nmnayeck password => "abcd"
nmnayeck }
nmnayeck}
alesticwhat is the input? you said it does not shut down
alesticare there any warnings/errors regarding the jdbc input?
alestictry outputting to file/stdout to see if at least data is coming in and getting parsed correctly
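A minimal stdout output for that kind of check, assuming the rest of the pipeline stays as it is, could look like this sketch:

output {
  stdout {
    # rubydebug prints each event as a readable hash so parsing problems show up immediately
    codec => rubydebug
  }
}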
nmnayeckoutput to file works perfectly fine using this: output {
nmnayeck file {
nmnayeck path => "D:/tools/logstash/logstash-6.1.3/mars/query_comptes_client_personne.json"
nmnayeck }}
alesticdo you get other messages in the log?
nmnayecklogstash or es log?
nmnayeckin logstash I get this printed every few seconds:
nmnayeck[2018-02-06T17:45:45,641][WARN ][logstash.shutdownwatcher ] {"inflight_count"=>0, "stalling_thread_info"=>{"other"=>[{"thread_id"=>27, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:133:in `initialize'"}, {"thread_id"=>32, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:133:in `initialize'"}, {"thread_id"=>33, "name"=>nil, "current_call"=>"[...]/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:133:in `initialize'"}]}}
nmnayeckon ES, I get this:
nmnayeck[2018-02-06T17:45:03,608][INFO ][o.e.c.m.MetaDataCreateIndexService] [BX4DPF2] [comptes_client_personne] creating index, cause [auto(bulk api)], templates [], shards [5]/[1], mappings []
nmnayeck[2018-02-06T17:45:05,863][INFO ][o.e.c.m.MetaDataMappingService] [BX4DPF2] [comptes_client_personne/j8AMmbqMQ1OZ8ybMO0OkSA] create_mapping [doc]
nmnayeckmaybe this has something to do with number_of_shards?
nmnayeckI only have 1 node in the cluster
nmnayeckand maybe Logstash is waiting for the shard status of this index to be green (it is yellow on ES right now)
alesticI have never had this shard problem
alesticlogstash will send the data and elasticsearch will index them
alesticif you have more replica shards than nodes can hold you will get a yellow state, but everything should still work fine
nmnayeckhmmm
nmnayeckis this the right expression for every 15 minutes?
nmnayeckschedule => "0/15 * * * *"
beorn_*/15 * * * *
nmnayeckthank you
bjorn_https://crontab-generator.org/
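For reference, a sketch of where that schedule value sits in a jdbc input; the connection details and query are placeholders, not nmnayeck's actual settings:

input {
  jdbc {
    # placeholder connection details
    jdbc_driver_library => "/path/to/postgresql.jar"
    jdbc_driver_class => "org.postgresql.Driver"
    jdbc_connection_string => "jdbc:postgresql://localhost:5432/mydb"
    jdbc_user => "user"
    statement => "SELECT * FROM my_table"
    # cron-style schedule: run the query every 15 minutes
    schedule => "*/15 * * * *"
  }
}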
nmnayeckis it possible to externalise the jdbc_connection_string parameter? because it will vary for each environment and I don't want to create a conf file for each
alesticyou can use an env var
alesticand the ${VAR_NAME} notation in the logstash.yml
alesticwhen you start the process just pass an env var to it and logstash will replace the value at runtime
alesticit is great for passwords, because they do not appear in the yml file
alesticexample
alestichosts => ["localhost:9200"] --> hosts => ["${ELASTICSEARCH_URL}:9200"]
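Applied to the jdbc_connection_string from the question, a sketch (DB_URL is just a placeholder variable name, the other jdbc settings are omitted):

input {
  jdbc {
    # the value of the DB_URL environment variable is substituted when Logstash starts;
    # ${DB_URL:some-default-url} would provide a fallback if the variable is unset
    jdbc_connection_string => "${DB_URL}"
  }
}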
nmnayeckthat's cool, do I use the regular export VARIABLE command available on the OS command line to define the variable?
nmnayeckexport DB_URL=blabla...
alesticIt needs to be an environment variable of the Logstash process
alesticso it depends
alesticI use Docker so I just pass -e VAR=VALUE
nmnayeckok thank you :)
alesticyou can pass it by running logstash like this
alesticVAR_NAME=value logstash_command
alesticI suppose export should work too
EvesyDoes anyone know if Logstash will silently skip over a geoip filter if the source field doesn't exist? Or do I need to wrap it in an 'if field exists' block?
alesticI think that if you try to access a field that does not exist logstash will throw an error
alesticnot sure though
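A sketch of the 'if field exists' wrapper Evesy mentions, assuming the source field is called clientip:

filter {
  # only run geoip when the event actually carries a clientip field
  if [clientip] {
    geoip {
      source => "clientip"
    }
  }
}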
watmmHi all, using logstash 5.6.7-1, and I recently started noticing that logs are being appended with, for example, 2018-02-06T14:55:16.786Z <server hostname>. Has something changed recently?
asmodaiSo currently revamping the setup for 6.x. With ES 6.x, _type from Logstash is set to "doc", and with a type => "apache" this will conflict on ES with a "rejecting mapping update to [...] as the final mapping would have more than 1 type: [apache, doc]". What's the intended solution?
hernanex3how can I create a new field with the current time?
asmodaiMmm, guess my only way forward is to add document_type again, even though it's deprecated.
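The workaround asmodai describes would look roughly like this in the elasticsearch output; the index and type names are placeholders, and document_type is deprecated in 6.x but still accepted:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "apache-%{+YYYY.MM.dd}"
    # pin the mapping type so it matches what the existing index was created with
    document_type => "apache"
  }
}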
asmodaihernanex3: Have you tried something like: output { file { path => "myfile-%{+HH:mm}" } }
hernanex3i want to write something like this : "event.set('@timestamp', Time.now.utc.strftime('%FT%TZ'))"
hernanex3but it generates a string, and not a timestamp
hernanex3what I'm facing is that a client sends a timestamp in the wrong format; I want to drop it and generate another one in Logstash
darkmoonvthernanex3: You want wall clock time, not a copy of @timestamp?
hernanex3yes
hernanex3i don't want @timestamp
darkmoonvt@timestamp starts as wall time, unless your filter overwrites it.
darkmoonvtBut, if you want the current time, you need ruby to do it. "require 'time'; require 'date'; event.set('[meta][queuetime]', Time.now.to_i)"
darkmoonvt(that's unix time, you can format it any way you like.)
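Put into a pipeline config, darkmoonvt's suggestion might look roughly like this; [meta][queuetime] is just the example field name from above:

filter {
  ruby {
    # store the current wall-clock time as a unix epoch integer
    code => "event.set('[meta][queuetime]', Time.now.to_i)"
  }
}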
a1exuslogstash latest version is 6.2.0 now)
a1exuscan I export results from Kibana's visualization?
bjorn_Partially.
a1exuswhich part though?
bjorn_The ones where you see "Export" at the bottom.
a1exusok, that is exactly what I need)
a1exushow do I put that into logstash?
bjorn_Manually, I guess.
a1exusmanually is no good, I need to put that into a cronjob
a1exuscan I automate that through logstash somehow?
bjorn_Logstash has an exec input plugin.
bjorn_I don't know if the "Export" links are static, if they're not you will probably need some screen scraping.
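A rough sketch of the exec input bjorn_ mentions; the command is a placeholder for whatever export call ends up working:

input {
  exec {
    # run the export command periodically and read its output as an event
    command => "/path/to/export_report.sh"
    # interval is in seconds
    interval => 900
  }
}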
a1exusthere gotta be another way...
darkmoonvtYou can export the searches/visualizations/dashboards with an API call to Elastic (not logstash) by grabbing the .kibana index.
darkmoonvtIt's somewhat not-intended-to-be-human-readable though.