gebbione[%{WORD|WORD\.WORD:app}\] is not matching
rastrogebbione: WORD doesn't match the period. Something like NOTSPACE would, or you can make your own pattern.
gebbioneWORD\.WORD should
torrancewgebbione: I think what you actually want for that is something like: "(?<app>%{WORD}(\.%{WORD})?)"
torrancew%{} has pretty limited syntax IIRC
gebbionei dont even know the difference
gebbionebut yes it works
gebbioneit gives two matches
gebbioneapp as a single match
gebbioneand WORD which shows both words as matches
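[A minimal sketch of torrancew's suggestion as a full grok filter; the surrounding filter block, and the assumption that the pattern applies to the whole message, are not from the conversation:

    filter {
      grok {
        # captures "myapp" as well as "myapp.module" into [app]
        match => { "message" => "(?<app>%{WORD}(\.%{WORD})?)" }
      }
    }
]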
gebbioneok now i need to work out how to destroy the index and regenerate the data
gebbione:/ Caused by: failed to write in data directory [/var/lib/elasticsearch/nodes/0/indices/IlpS32xZR1uXVuoi7E0kfw/0/_state] write permission is required
k_artIs: match { "" => "<my regexp>" } the right way of parsing a child key?
k_artAre there any naming conventions for fields in logstash? Why are some fields prefixed with "@" ?
k_artIs there a way to only get the first element of a matched group using grok regexps?
lk_I can't get the kafka input to work. Can anyone help me?
_fatalisHi, I have configured beats to send to logstash with 2 different logstash listeners on 2 different ports and I have created 2 different configuration files with different outputs
lk_anybody ?
bjorn_lk_: Hang around a bit longer, perhaps someone who knows will wake up.
lk_bjorn_: should I wait longer?
bjorn_Wait some more, yes.
Xylakantlk_: you might need to update the kafka input plugin. there are various versions that are compatible with different kafka versions
Xylakantthere's a list of versions and compatibility here
lk_Xylakant: I use the latest logstash, 6.0.0
lk_the plugin is the newest
Xylakantdid you check that it's the 0.11. version?
Xylakantthen you might want to try the 0.10.1 version
lk_my kafka is 0.10.0
Xylakantthe logstash kafka input plugin
Xylakantcan you use logstashs plugin command to check what's installed?
lk_should I try logstash 5.x.x ?
Xylakantno, please use "bin/logstash-plugin list '*kafka*'" to check the version of the kafka input plugin
lk_➜ logstash-6.0.0 git:(master) ✗ bin/logstash-plugin list "kafka" logstash-input-kafka logstash-output-kafka
lk_that cmd doesn't show the version
Xylakantwhy are you using a git version?
Xylakantwell, please add --verbose to the command, then it will show the version as well.
lk_logstash-input-kafka (8.0.2)
lk_I will install an older logstash
XylakantI'd first try installing the plugin version 6.3.4
Xylakantif that installs on LS 6.0, that is.
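[A sketch of the commands being discussed, assuming a standard Logstash install layout; the 6.3.4 pin is the plugin version Xylakant suggests for Kafka 0.10 brokers, and removing before reinstalling is just one cautious way to do the downgrade:

    # show installed kafka plugins with their versions
    bin/logstash-plugin list --verbose "*kafka*"

    # swap the input plugin for a version that speaks Kafka 0.10
    bin/logstash-plugin remove logstash-input-kafka
    bin/logstash-plugin install --version 6.3.4 logstash-input-kafka
]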
lk_I use ./vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.11/logstash-output-kafka.gemspec
lk_5.6.4 is the same
lk_LS 5.6.4
lk_will plugin 6.3.4 fix this bug?
lk_logstash 2.6.4 is ok
lk_this version is ok
fengWhen logstash writes to redis, will data be lost if redis fills up? logstash reads the data from a file
fenglogstash: input is file, output is a redis list. Will data be lost when redis is full?
fengAnybody here?
xtechits so weird, when i try to match my log time with @timestamp in kibana, everything matches except for minutes and hours
xtechi got the exact same day, month, seconds, microseconds, except for the hour and minutes
Xylakantxtech: maybe some timezone offset?
xtechwhere can i modify that
XylakantWell, you'd usually parse the log's timestamp to get the log time
xtechi have this in grok { match => { "message" => "%{DATESTAMP:time}" } }
XylakantWhen parsing you can specify in which TZ the logs are
xtechand this in date { match => [ "time", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]}
XylakantAnd that filter does not provide a time zone
xtechwhat if i want the timezone to be dependent on each box
xtechi dont want one timezone for all
XylakantThen you need to either parse the TZ offset from the date if there is one
XylakantOr write your filters accordingly
xtechin what format is the TZ?
xtechis it +2:00 or something like that or what ?
XylakantIt’s either an offset (+12:45) or a name (Pacific/Chatham)
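[A sketch combining xtech's grok and date filters with the timezone option Xylakant is pointing at; "Europe/Berlin" is just an assumed placeholder for a UTC+2 zone:

    filter {
      grok {
        match => { "message" => "%{DATESTAMP:time}" }
      }
      date {
        match    => [ "time", "yyyy-MM-dd HH:mm:ss.SSSSSS" ]
        # the zone the log's local timestamps are written in;
        # without this, the date filter assumes the host's default zone
        timezone => "Europe/Berlin"
      }
    }
]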
xtechmmmmm ok
xtechXylakant, the thing is im in UTC+2 TZ, my kibana is adding like 1 hour 20 min
xtechits not even matching the TZ
XylakantKibana does not add anything, logstash does all the @timestamp parsing
xtechyeah i know
xtechi mean im viewing it in this form
XylakantSo go debug whether your logstash filters parse the date correctly. It might be helpful if you can show a message that’s parsed incorrectly
xtechi've never seen anything as complex as the time in logstash and elastic
XylakantYou haven’t seen much time-related stuff then
XylakantTime handling is generally a mess.
XylakantNot because the code is hard, but because real-world TZ handling is a mess. There’s time zones that change offset at the command of a ruler.
xtechi mean im just passing a time from my logs, treat it as it is. dont get confused by adding UTC or treating it as local, just show it as i am sending it
XylakantBasically it all relies on lookup tables compiled by a few tireless volunteers
manda_chuvaHi guys, could someone give me a hint! I'm receiving a "_geoip_lookup_failure" when parsing iis-logs, but my config geoip is the same for my apache logs and nginx logs and it works! I'll send my config just a moment!
manda_chuvaCan i send it here or pastebin?
XylakantI agree, that's all I want as well. But real-world systems sometimes write logs in local time, sometimes in UTC, and sometimes both. And I've even seen systems that logged (current Unix timestamp - timestamp when we started the project)
_fatalisDoes the copy command in the mutate filter need the destination field to be created beforehand?
Xylakantmanda_chuva: pastebin
XylakantBecause some developer thought it would be fun if the server logged events in "seconds elapsed since the start of the project."
xtechmanda_chuva, you are not specifying a database
manda_chuvaJust an observation: I've changed the field clientip after I observed that it was client_ip, and I restarted the logstash service, but it seems it didn't work
manda_chuvaxtech: filebeat -> logstash -> elastic
manda_chuvaxtech: you want to see the output?
xtechyou need to add: database => "/directory/GeoLite2-City.mmdb" to your geoip filter, which points at a database you download from the internet
xtechso that the ip gets compared with the database you have
manda_chuvaxtech: I have the same geoip config for apache and nginx logs and it works without specifying databases.
Xylakantmanda_chuva: can you show a document that failed to parse?
manda_chuvaI will paste both of them just a moment
manda_chuvaXylakant: yes i'll show
XylakantDatabase is only required if you want to use a custom (or updated) DB
XylakantThe plugin includes the lite db
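[For reference, a minimal geoip filter matching what Xylakant describes: no database option is needed for the bundled GeoLite2 City db. The source field name "clientip" comes from manda_chuva's messages:

    filter {
      geoip {
        # uses the GeoLite2 City database bundled with the plugin
        source => "clientip"
      }
    }
]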
manda_chuvaHey guys nevermind, it seems it suddenly started to work lol, maybe after I changed the field name there was still a bunch of old data being parsed.
manda_chuvaBut thanks anyway
_fatalisThere is a _rubyexception in my tags
_fatalisalthough I am not executing any ruby filter
_fatalisIs there any way I can manage to find where it is coming from
darkmoonvtMost (all?) of the filters are written in ruby. It's likely coming from one of those. I've tracked that down with a binary search (disable half my config, see if it happens, repeat).
_fatalishmm but i did not have it before
_fatalisand every other filter gets executed correctly
darkmoonvtIn my case, I was passing data (legal, but didn't make sense) to a filter.
_fatalisok will try binary search
darkmoonvtAre you getting any other _tags? Do you track which filters a given log passed through?
darkmoonvt            add_field => [ "[meta][filter]", "20-syslog-base" ]
darkmoonvtWe have one of those in every filter block.
_fatalisI have a grok which gets extracted, a mutate which i see the change and 2 date filters
_fatalisI only have the beats tag, which is applied by beats
_fatalis Do you track which filters a given log passed through? <--how can i do that?
darkmoonvtOur config is a bit more complex than that. The add_field line (from a mutate) above is from my 20-syslog-base.conf file.
darkmoonvtMost of my filters are conditional. That one for example only fires if the document is identified as being generated by syslog.
darkmoonvtSo, at the top of that block, I have that line so I can look at my docs in Kibana and know that it was processed by that chunk of code.
darkmoonvtI also have a 44-syslog-sshd.conf file, that adds that as a value (multivalue), which only fires if the syslog filter identified the syslog.program as 'sshd'.
darkmoonvtAnd so on.
darkmoonvtAnyplace (grok, date, etc) there's an option to tag_on_failure, we use it, with a unique tag that points to that bit of code.
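[A sketch of the pattern darkmoonvt describes, assuming a syslog-shaped conditional; the field and tag names mirror the ones mentioned above, the grok pattern is an assumption:

    filter {
      if [type] == "syslog" {
        mutate {
          # breadcrumb: record that this block processed the event
          add_field => { "[meta][filter]" => "20-syslog-base" }
        }
        grok {
          match          => { "message" => "%{SYSLOGLINE}" }
          # unique failure tag pointing back at this block
          tag_on_failure => [ "_grokparsefailure_20-syslog-base" ]
        }
      }
    }
]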
_fatalisi see
_fatalisyou do not want to see my .erb template which creates the configuration ;)
darkmoonvtI'm also a mail admin, so I built the equivalent of 'received' headers into our logs.
_fatalisI did not propagate the new config in all our dev and prod logstash instances
_fatalis@darkmoonvt: Do you know how I can copy an event with ruby in logstash?
darkmoonvtWith ruby? No. I've had some luck with the clone filter, though. (yesterday, in fact.)
darkmoonvtYou're trying to create a second document, not just pull data out of it?
_fatalisI want to clone @timestamp field
_fatalisto create a copy with a different name
tgodarAny suggestions on where to start to look as to why my topbeat is getting all messed up? ->
tgodarNothing changed that I am aware of
darkmoonvt_fatalis: that's not an event, just a field. You can do that in ruby, or in mutate (add_field)
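[A sketch of the mutate approach darkmoonvt suggests; "received_at" is just an assumed name for the copy:

    filter {
      mutate {
        # copies the parsed @timestamp into a second field
        add_field => { "received_at" => "%{@timestamp}" }
      }
    }
]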
timvisherif I have (n)beats → redis → (n)logstash-forwarder → e12h, what (if anything) guarantees that a given event is not duplicated (n) forwarder times?
tgodarso weird, no updates, tempted to just clear data and start things up again
tgodarThe image is telling, just not sure what it's trying to tell me
tgodarBah, swear I have to rebuild this crap every other month, probably overdue on an update anyway the way this stack rolls revisions. sigh end rant
jpsandiego42Anyone happen to know if logstash5 is backwards compatible to logstash ~1.5?
jpsandiego42Have an old log forwarding setup that talks to an ES1 cluster. Wanted to start peeling off some clients to have them start talking to a new cluster. Guess I'll just take the opportunity to move to filebeat.
jpsandiego42(new cluster / es5)
shogthe chances that your config will break on logstash5 are quite high, especially if you use the ruby filter or custom plugins, as the event api changed.
shogalthough the process of getting a config ready for logstash 5 is usually quite trivial
timvisherthe answer appears to be "nothing", unless you specify the document id
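[A sketch of what "specify the document id" could look like, using the fingerprint filter to derive a stable id; hashing the whole message with MURMUR3 is an assumption about what makes an event unique:

    filter {
      fingerprint {
        source => "message"
        target => "[@metadata][fingerprint]"
        method => "MURMUR3"
      }
    }
    output {
      elasticsearch {
        # the same event always maps to the same document,
        # so a replayed event overwrites instead of duplicating
        document_id => "%{[@metadata][fingerprint]}"
      }
    }
]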
jpsandiego42From perusing the filebeat -> logstash (dense) documentation, I'm not understanding if I have to setup a beats input on logstash for each type of log I want to send, or how if I can have one input configuration in logstash. Basically, how I need to differentiate the data I'm sending to it so it's categorized correctly. Like sending both Apache and Syslog logs from a host using filebeat to logstash on another host. Is that really two
jpsandiego42separate TCP ports to open?
jpsandiego42(maybe syslog is a bad example.. ), so maybe Apache and MySQL slow logs.
jpsandiego42oh ok.., I got skewed somewhere in the docs. Looks like filebeat can/does set type and that makes it all one port on the logstash side.
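[A sketch of the single-port setup jpsandiego42 lands on. On the filebeat 5.x side, document_type sets [type] per prospector (the log paths here are assumptions):

    filebeat.prospectors:
      - paths: ["/var/log/apache2/*.log"]
        document_type: apache
      - paths: ["/var/log/mysql/slow.log"]
        document_type: mysql-slow

On the logstash side, one beats input on one port, with filters branching on [type]:

    input {
      beats { port => 5044 }
    }
    filter {
      if [type] == "apache" {
        # apache-specific grok here
      } else if [type] == "mysql-slow" {
        # slow-log parsing here
      }
    }
]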