<minipop> I have a grok pattern file called linux-messages under the /etc/logstash/patterns.d directory
<minipop> So how can I write the Logstash filter to access those patterns?
<minipop> bjorn_, I followed the way it's explained in the article, but it won't extract the patterns. Gave an error.
<bjorn_> Since no-one here can read your mind or magically detect your configuration, you have to provide useful details if you want help.
<minipop> filter {
<bjorn_> Not here
<bjorn_> Use a pastebin
<minipop> bjorn_, just check
<bjorn_> Is "linux-audit" a file?
<bjorn_> So when the setting is "patterns_DIR" - what do you think "dir" means?
<bjorn_> Hint: It's explained in the article
<minipop> As per the article I have placed the linux-audit pattern file in the patterns_dir => [ "/etc/logstash-5.2.2/patterns.d" ] directory
<bjorn_> Fix line #4 in your paste.
<bjorn_> You have more errors, but fix that one first.
<minipop> According to the article, when Logstash starts it will load the patterns when the pipeline is created
<Shadur> /j #postgresql
<bjorn_> minipop: patterns_dir has to be a DIRECTORY, not a FILE.
<Shadur> Sorry, typo
<minipop> it is a directory
<bjorn_> 4 minutes ago, you said that /etc/logstash-5.2.2/patterns.d/linux-audit is a file
<minipop> under that directory, I have placed the linux-audit pattern file
<minipop> I have tried both options to resolve it:
<minipop> patterns_dir => /etc/logstash-5.2.2/patterns.d/ and
<minipop> patterns_dir => /etc/logstash-5.2.2/patterns.d/linux-audit
<minipop> Both gave me an error.
<bjorn_> Use the DIRECTORY in the PATTERNS_DIR setting, and then go read up on how to use the match {} option.
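What bjorn_ is describing could look like this minimal sketch, assuming a custom pattern named LINUX_AUDIT is defined in some file under the patterns directory (the pattern name and its regex below are made up for illustration; only the directory is passed to patterns_dir):

```
# Hypothetical pattern file /etc/logstash-5.2.2/patterns.d/linux-audit,
# containing one NAME-regex pair per line, e.g.:
#   LINUX_AUDIT type=%{WORD:audit_type} msg=audit\(%{NUMBER:audit_epoch}\)

filter {
  grok {
    # A directory, never an individual pattern file
    patterns_dir => [ "/etc/logstash-5.2.2/patterns.d" ]
    match => { "message" => "%{LINUX_AUDIT}" }
  }
}
```

Logstash loads every file in that directory when the pipeline is created, so the pattern file's name never appears in the filter itself; only the pattern names defined inside it do.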
<BCone> I'm attempting to set up an ELK stack using the container provided by Elastic. I've overwritten my config and pipeline and am only outputting to stdout for initial verification. Once it starts, Logstash keeps trying to connect to an Elasticsearch instance, but I can't find where that's configured. Any ideas?
<_fatalis> Is there any way to test / see in the log if a pattern is executed and why it fails? By failing I mean it cannot extract the message into parts
<bjorn_> _fatalis: You'd get a _grokparsefailure tag, and obviously the fields you'd expect are not populated.
<_fatalis> the problem is I do not see any error in Logstash
<_fatalis> and I do not see the message exported into fields in Kibana
<_fatalis> I see only the message
<_fatalis> How can I debug this?
<GambitK> Hello, I'm trying to grok a field where I don't know in advance how many sub-fields it'll have. It can have only one " #1(8):24736758", two " #1(8):24736758 #2(5):63387", or three " #1(8):24736758 #2(5):63337 #3(4):data"
<bjorn_> In that case you should see them in Logstash's log files
<_fatalis> there is nothing there, bjorn_
<_fatalis> no error
<bjorn_> Could it be lost somewhere else?
<_fatalis> I see the event being sent in the Beats log
<_fatalis> Logstash does not complain
<_fatalis> every other field is there
<_fatalis> but the message does not get extracted
<bjorn_> Perhaps it doesn't match your grok patterns at all.
<_fatalis> Although I did not change them
<_fatalis> I just modified my Puppet code to be more dynamic
<_fatalis> So that is the weird part. Can't I increase verbosity somehow?
<bjorn_> You can add tags to the different stages of your Logstash processing, to see what happens.
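bjorn_'s tagging suggestion could be sketched like this; the tag names are invented for illustration, and grok's add_tag only fires when the match succeeds, which is exactly what makes it useful for tracing:

```
filter {
  mutate { add_tag => [ "entered_filter" ] }
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
    add_tag => [ "grok_matched" ]              # only added on a successful match
    tag_on_failure => [ "_grokparsefailure" ]
  }
}
```

An event that reaches the output with entered_filter but without grok_matched shows the grok ran and failed; an event with neither tag never entered this filter block at all.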
<deepy> How do I configure multiline? I have ------\nSome content\nMore content\nEnd of content\n------\n and I want all content to appear in the same message
<darkmoonvt> In the shipping agent (Beats), usually.
<GambitK> Is there any way to trim, as in remove a leading space from a field?
<bjorn_> Yes, use the trim function
<bjorn_> mutate -> trim
<deepy> Ah, found it, I forgot to set negate
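A sketch of the shipper-side multiline settings darkmoonvt points to, assuming Filebeat and assuming the ------ line starts each record; negate: true (the option deepy had missed) is what makes every line that does not match the pattern get appended to the previous event:

```yaml
# filebeat.yml, inside the prospector/input for this log
multiline.pattern: '^------'
multiline.negate: true
multiline.match: after
```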
<GambitK> bjorn_: doesn't show a trim option
<bjorn_> GambitK: It's strip
<BCone> deepy, I had to work with multiline as well and just created a crude pattern to work across multiple lines: GREEDY_MULTILINE (?m).*
<GambitK> I'm working with mutate->split where the delimiter character is at the front of the field, which causes the first element of the resulting array to be empty. E.g. with field:#52#57 I get a three-way array: field[0]="", field[1]="52", field[2]="57"
<GambitK> I'm using split because I don't know how many fields are in the string
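Putting the two answers together as a sketch, using GambitK's example value: strip handles leading/trailing whitespace, and a gsub that removes the leading delimiter before splitting avoids the empty first array element:

```
filter {
  mutate { strip => [ "field" ] }             # " #52#57" -> "#52#57"
  mutate { gsub  => [ "field", "^#", "" ] }   # "#52#57"  -> "52#57"
  mutate { split => { "field" => "#" } }      # "52#57"   -> ["52", "57"]
}
```

Separate mutate blocks are used deliberately: operations inside a single mutate run in a fixed internal order, not the order they are written in.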
<benjwadams> Hi, I see a "urldecode" Logstash plugin. Is there an equivalent plugin/functionality for *encoding* URL strings?
<_fatalis> bjorn_: I added tags for match and failure and none gets processed. Isn't that weird?
<GambitK> I'm currently using nxlog as a central log-receiving repository before using Logstash to send the logs to Elasticsearch, because of the space required by Elasticsearch. I'm having some problems with nxlog: I'm getting a lot of "WARNING TCP connection closed from End of file found" messages from different hosts. What other open-source receiving service could I use that would allow me to write the file
<rastro> GambitK: how about filebeat->logstash->elasticsearch?
<GambitK> rastro: I have to keep a year of archived logs, so I only send about one month to Elasticsearch; the old ones are kept on file so that compression can help with size
<rastro> GambitK: OK, how about filebeat->logstash->(file & elasticsearch)?
<GambitK> rastro: I'll try that for one input and see how it goes
<benjwadams> It looks like the URL output plugin automatically encodes the URL -- correct?
<_fatalis> So adding a tag just after entering the grok does not produce anything
<_fatalis> that means grok is not executed
<_fatalis> It is weird, right?
<finster> _fatalis: code or it didn't happen ;)
<_fatalis> logstash conf?
<finster> _fatalis: to the best of my knowledge, add_tag should be part of a mutate block
<finster> not grok
<_fatalis> but it is included in the general options
<_fatalis> it applies to every plugin in the filter block
<asimzaidi> Is there any way I can pass arguments to my db.conf file from the command line?
<asimzaidi> Something like ./bin/logstash -f ./test.conf argument1, argument2
<asimzaidi> OK, I guess you can't pass arguments to Logstash
<asimzaidi> you can only use env variables
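A sketch of the environment-variable route asimzaidi settles on; since Logstash 5.x, ${VAR} references (optionally with a :default) are substituted inside the pipeline config. The jdbc settings and variable names here are hypothetical:

```
# Started as:  DB_HOST=db01 DB_USER=reporting bin/logstash -f ./test.conf
input {
  jdbc {
    # ${VAR:default} expands from the environment at pipeline load time
    jdbc_connection_string => "jdbc:mysql://${DB_HOST:localhost}/mydb"
    jdbc_user => "${DB_USER:logstash}"
  }
}
```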
<benjwadams> Is it possible to debug the Logstash http output and log the actual requests? I don't know why requests are failing silently, and I'd like to curl the request manually, but I can't find it
<bjorn_> benjwadams: It's quite easy with tcpdump/wireshark or any other network sniffer.
<torrancew> benjwadams: verbose and/or debug logs may also reveal what you're looking for
<bjorn_> You can also replace the HTTP server with netcat or socat, which will give you the full request in plain text.
<benjwadams> bjorn_: I tried a little with tcpdump, but I was sending over https. Tried a local mock server, but it didn't support POST. How do I go about using netcat or socat instead?
<benjwadams> torrancew: I increased the log level to trace. It doesn't show the generated request
<bjorn_> Oh, https. You said http.
<benjwadams> bjorn_: either way, really. I need to inspect the output being generated
<benjwadams> trying to port over some legacy logic from scripts outputting to Google Analytics via Logstash
<benjwadams> eventually the analytics will be moved over to ELK, but I need to do this for the brass
<bjorn_> Last time I had to debug the http output I wrote some PHP code that simply dumps the POST content to a file
<torrancew> benjwadams: "socat tcp-listen:SOME_PORT -" may suffice
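torrancew's one-liner, expanded into a concrete sketch (port 8080 is an arbitrary choice): run a plain-text listener, then temporarily point the http output at it, and the complete request, headers and body, shows up on the terminal:

```
# Terminal 1: print every incoming request; `fork` keeps accepting connections
socat -v tcp-listen:8080,reuseaddr,fork -

# Logstash side, temporarily:
#   output { http { url => "http://localhost:8080/" http_method => "post" } }
```

For the https case in the conversation, pointing the output at a plain http:// listener like this (or terminating TLS in front of the listener) is what makes the traffic readable.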
<impalad06> A question: I have a problem with the filter file for syslog. When I run it with /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/fortigate-syslog-01.conf --path.settings /etc/logstash it indexes correctly, but when I stop the test it no longer indexes. I installed from an RPM; I think it may be the paths of the .conf. Has this happened to anyone?
<bjorn_> impalad06: ps -ef | grep logstash
<bjorn_> It should tell you how Logstash is running when you're not testing.
<impalad06> this is it
<bjorn_> Should be fine.
<impalad06> That's what I think too, but it only works for me while the command /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/fortigate-syslog-01.conf --path.settings /etc/logstash is running; if I stop it with Ctrl+C it stops indexing
<bjorn_> Do you "test" as root?
<bjorn_> You should always test as the same user the application usually runs as.
<impalad06> the user that runs it is logstash (LS_USER=logstash, LS_GROUP=logstash), and it does not run the command
<impalad06> Logstash, when installed, creates the user "logstash"; running sudo logstash ... says the user is not available
<bjorn_> su - logstash -s /bin/bash
<impalad06> this is the output
<impalad06> but Kibana does not show it
<impalad06> I forgot to say I have an ELK environment (all-in-one)
<asimzaidi> I have my SQL statement in the configuration file and I want to loop over the results… how do I do that with Logstash?
<bjorn_> impalad06: When running normally, Logstash will read *all* files in /etc/logstash/conf.d/
<bjorn_> If you have files other than fortigate-syslog-01.conf, they may cause what you are seeing.
<impalad06> I do not understand that last part; should I only have the one file there?
<bjorn_> impalad06: ls /etc/logstash/conf.d/
<impalad06> [root@thoth2 ~]# ls /etc/logstash/conf.d/
<impalad06> fortigate-syslog-01.conf  fortigate-syslog-01.conf.bak  logstash-syslog.conf.back
<bjorn_> Logstash will join all the files
<impalad06> even if they are only backups?
<bjorn_> Logstash doesn't care about names
<bjorn_> Move them somewhere else
<bjorn_> "Logstash tries to load all files in the /etc/logstash/conf.d directory, so don't store any non-config files or backup files in this directory."
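The cleanup bjorn_ prescribes, sketched as shell; the destination directory name is an arbitrary choice, and the globs match the two backup files from the ls output above:

```
# Keep only real pipeline config under conf.d; park the backups elsewhere
mkdir -p /etc/logstash/conf.d-backups
mv /etc/logstash/conf.d/*.bak /etc/logstash/conf.d/*.back /etc/logstash/conf.d-backups/
```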
<asimzaidi> I am having this issue
<asimzaidi> "reason"=>"failed to parse [evtlist_date_c]", "caused_by"=>{"type"=>"illegal_field_value_exception", "reason"=>"Cannot parse \"0000-00-00\"
<asimzaidi> can someone help me with this?
<asimzaidi> how can I replace a date that is 0000-00-00 with null?
<bjorn_> Use the mutate plugin?
<impalad06> bjorn_: I just moved them and restarted the services
<asimzaidi> @bjorn_ if I use mutate like this: filter { mutate { convert => { "fdatefield" => "null" } } } it will null all dates… I need to null only 0000-00-00
<bjorn_> Yes, there's something called "if"
<asimzaidi> yes, and that's what I am wondering: where do I use if?
<asimzaidi> in the filter?
<asimzaidi> can I use if
<asimzaidi> so filter { if [date] == '0000-00-00' { mutate??
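The shape asimzaidi is reaching for, as a sketch; note that a field reference in a conditional is written [evtlist_date_c] without quotes ("evtlist_date_c" in quotes inside the brackets is just a string literal and the conditional would never match):

```
filter {
  if [evtlist_date_c] == "0000-00-00" {
    mutate {
      remove_field => [ "evtlist_date_c" ]
    }
  }
}
```

Whether to drop the field or substitute a sentinel date is what the rest of the discussion works through.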
<asimzaidi> @bjorn_ that did not work
<asimzaidi> can you look at this?
<asimzaidi> please… I am not sure what I am doing wrong
<asimzaidi> spent almost 3 hours… totally stuck
<BaM`> asimzaidi: didn't work how?
<BaM`> didn't process the message, or didn't start up?
<asimzaidi> it still says this: "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [evtlist_date_c]", "caused_by"=>{"type"=>"illegal_field_value_exception", "reason"=>"Cannot parse \"0000-00-00\": Value 0 for monthOfYear must be in the range [1,12]"}}}}, :level=>:warn}
<asimzaidi> it does not send the records with 0000-00-00, @BaM`
<asimzaidi> it starts up fine
<asimzaidi> just ignores the records with the date issue
<bjorn_> What's parsing the date?
<BaM`> why are you trying to set the field to null anyway - won't that upset ES if you have a field mapping?
<BaM`> just remove the field
<asimzaidi> how do I remove the field?
<asimzaidi> actually I can't
<asimzaidi> I have to have the field
<asimzaidi> in ES
<asimzaidi> because we make decisions based on that field
<bjorn_> A date field in ES with 0000-00-00 is worthless, no?
<darkmoonvt> Your decision logic doesn't check for existence of the field, and branch on that?
<asimzaidi> yes, it's legacy code, so I am trying to move to ES with as little breakage as possible
<asimzaidi> I understand it's worthless
<asimzaidi> but some of the values are true dates
<BaM`> if you have to have the field, and it's all zeroes, why not set it to 1970-01-01 or something?
<asimzaidi> no, not all of them are zero
<asimzaidi> just a few of them are
<bjorn_> Do you have more pipeline config than the pastebin shows?
<asimzaidi> no, that's it
<BaM`> yeah, but we're talking about this particular case, right?
<BaM`> so it needs to be a valid date, even though the value is garbage
<BaM`> so make it 1970
<asimzaidi> OK, so what do I change?
<BaM`> setting it to null will prevent it from even indexing
<asimzaidi> replace => { "evtlist_date_c" => "1970" }
<asimzaidi> is that what you are suggesting, @BaM`?
<BaM`> no - "1970-01-01"
<BaM`> then your date parser will still work
<asimzaidi> k, let me give it a shot
<BaM`> um, do you even have a date filter?
<bjorn_> No, and that's why I don't see why Logstash barfs on this.
<BaM`> or are you shoving it at ES and letting it work out the format?
<asimzaidi> no, I have what I have in the pastebin
<BaM`> so that error is an ES mapping issue
<asimzaidi> I am just shoving it to ES
<bjorn_> Is there an Elasticsearch template?
<asimzaidi> there is no mapping on the index
<BaM`> it will probably Just Work if you send it valid dates
<asimzaidi> by the way, 1970-01-01 did not work
<BaM`> but you might hit an edge case where it parses a date as something else on occasion
<asimzaidi> yeah, probably because changing to 1970 is not working
<bjorn_> asimzaidi: Where does the error occur? The Logstash log, or the Elasticsearch log?
<asimzaidi> the Logstash logs tell me that
<BaM`> tail the logs on the ES node it's sending to
<asimzaidi> when I do this: bin/logstash -f mysql/test.conf -v --debug --verbose
<bjorn_> So Logstash tries to parse this on its own, as it seems
<asimzaidi> yeah, and it never makes it to ES
<BaM`> no, that could be the error back from the http endpoint
<asimzaidi> there are 2 records with 0000-00-00 and they get dropped
<bjorn_> Can you remove the field?
<asimzaidi> how?
<bjorn_> Mutate :-)
<asimzaidi> k, will try that out
<asimzaidi> sorry, very new to this
<bjorn_> No worries
<asimzaidi> still no go
<BaM`> can you paste some sample data?
<bjorn_> Same error in the log?
<bjorn_> So the parser acts before the filter
<asimzaidi> filter {if["evtlist_date_c"]=="0000-00-00" {mutate {remove_field => [ "evtlist_date_c" ]}}
<BaM`> do you have multiple filter files?
<asimzaidi> no, this is the only file I am running
<BaM`> change your output to stdout { codec => rubydebug {} }
<BaM`> forget ES for the moment
<asimzaidi> let me do that
<asimzaidi> OK, that ran
<asimzaidi> I did not see any errors
<asimzaidi> but it went way too fast
<BaM`> right, so it's an ES mapping error
<asimzaidi> so what do I do?
<asimzaidi> do you know?
<BaM`> this works
<BaM`> might be kind of a long way around, but at least it checks for a valid date first
<asimzaidi> k, let me check
<BaM`> obv you want to s/message/evtlist_date_c/
<BaM`> also mine returns 1960 because I'm in the futuuure
<asimzaidi> k, let me try that
<BaM`> you will probably need to nuke your ES index as well
<BaM`> because it will have weird mappings from your first attempt
<asimzaidi> what is [tags]?
<BaM`> tags are auto-added when stuff breaks; also you can store stuff there
<BaM`> things you don't want to have to create a named field for
<asimzaidi> ah, OK
<BaM`> it's still a normal field, but it's kind of reserved
<asimzaidi> nuking the index
<BaM`> so the date, grok, and json filters all make a note of failures in that field
<BaM`> hence the tag_on_failure options they all have - you can customise it
<asimzaidi> so now I am getting
<asimzaidi> Failed parsing date from field {:field=>"[evtlist_date_c]", :value=>"0000-00-00", :exception=>"Cannot parse \"0000-00-00\": Value 0 for monthOfYear must be in the range [1,12]", :config_parsers=>"yyyy-MM-dd", :config_locale=>"default=en_US", :level=>:warn}
<asimzaidi> but I think the records made it to ES
<BaM`> are you still trying to go to ES?
<BaM`> can you put stdout back, and pastebin your output from one record that has 0000-00-00?
<BaM`> because if ES is giving you mapping errors, it's NOT storing the document
<BaM`> also pastebin your latest config
<asimzaidi> ok, brb… with a new pastebin
<BaM`> crap - I also gave you the wrong date config
<BaM`> pls hold
<BaM`> that will catch every invalid date, no matter what it is, and replace it with 1970-01-01
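BaM`'s pastebins are gone, so here is a reconstruction of the approach as described in the conversation: a date filter tags anything unparseable, then a conditional swaps in the 1970-01-01 sentinel and parses again (field name and date format are taken from the error messages above):

```
filter {
  date {
    match => [ "evtlist_date_c", "yyyy-MM-dd" ]
    target => "evtlist_date_c"
    tag_on_failure => [ "_dateparsefailure" ]
  }
  # Any unparseable value (0000-00-00 or otherwise) gets the sentinel
  if "_dateparsefailure" in [tags] {
    mutate { replace => { "evtlist_date_c" => "1970-01-01" } }
    date {
      match => [ "evtlist_date_c", "yyyy-MM-dd" ]
      target => "evtlist_date_c"
    }
  }
}
```

This catches every invalid value, not just 0000-00-00, and the _dateparsefailure tag survives into ES, which is what later lets you identify the doctored records.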
<asimzaidi> here you go
<asimzaidi> oh, OK
<BaM`> yeah, go and redo your config with the extra date {} filter from mine inside the conditional
<BaM`> sorry about that - too many terminals open
<asimzaidi> oh, no problem
<asimzaidi> the extra date filter actually did not work
<BaM`> how so?
<asimzaidi> I have no idea… it started doing the same thing
<BaM`> can you leave the ES output out, and replace it with stdout until this is solved?
<BaM`> then you can pastebin your config + output
<asimzaidi> will do
<asimzaidi> give me one min
<BaM`> if you could paste a sample output of each, one with a valid date and one with an invalid date, that would help
<eternalminerals> How's your magnesium? I bet you don't even know! Learn more at
<asimzaidi> can you see me?
<torrancew> BaM`: c'est la vie
<asimzaidi> I think my net dc'd
<asimzaidi> for a sec
<asimzaidi> I am back though
<BaM`> asimzaidi: can now
<asimzaidi> here is what I have
<asimzaidi> @BaM` that's my current one, with no ES
<asimzaidi> and I think it's not sending anything to ES
<BaM`> eternalminerals: I've mailed - I expect they'll be in touch with you shortly
<asimzaidi> is there anything you want me to look at?
<BaM`> asimzaidi: you're missing the extra date filter
<asimzaidi> k, let me check
<asimzaidi> OK, so now I have an exact copy of what you have
<BaM`> and what's the output?
<asimzaidi> it just runs the thing without any error
<asimzaidi> let me do a pastebin
<BaM`> are you getting any _dateparsefailure tags?
<asimzaidi> not that I know of
<asimzaidi> it's too fast to catch
<BaM`> that doesn't even have evtlist_date_c
<BaM`> do this:
<BaM`> output { if [evtlist_date_c] { stdout { codec => rubydebug {} } } }
<asimzaidi> k, thanks
<asimzaidi> I don't think the if statement worked
<asimzaidi> because there were supposed to be only 2 records
<BaM`> hm, OK, I'm not sure why it's done that
<BaM`> that conditional should definitely work - I use similar ones all the time
<asimzaidi> if [evtlist_date_c] will output everything that has the date
<asimzaidi> should that not be if [!evtlist_date_c] or something?
<BaM`> no, just records with that field
<BaM`> the date filter is definitely processing those, so we can ignore them now
<BaM`> try this:
<BaM`> output { if "_dateparsefailure" in [tags] { stdout ...
<BaM`> oh - you don't still have another stdout output, do you?
<asimzaidi> OK, I will try that
<asimzaidi> sending you the pastebin
<BaM`> cool - you should be able to hook it back up to ES now
<BaM`> and nuke that index again
<asimzaidi> ok, thank you so much for your help
<asimzaidi> Really appreciate it!
<BaM`> be aware that the parse failure tag will be indexed as well
<BaM`> so you can use that to identify those records in ES
<asimzaidi> k, thank you!