lost_and_unfoundGreetings, I need some assistance with terminology to better google my problem. I have a CSV. One of the values, "Details", contains an additional CSV. I wish to parse and ingest this as well, all in the same document. -> "User","Category","Type","Details","Date & Time", of which "Details" contains: "Status", "Date", "IP", "Email", etc. What would be the correct or preferred method to use / follow (e.g. split?)
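(The usual answer is the Logstash csv filter, applied twice: once on the whole line, then again on the "Details" column. A minimal sketch; the inner column names are assumptions taken from the question above:

    filter {
      # First pass: split the outer CSV line into named columns.
      csv {
        source    => "message"
        separator => ","
        columns   => ["User", "Category", "Type", "Details", "Date & Time"]
      }
      # Second pass: the Details value is itself CSV, so parse it too.
      # These inner column names are assumptions based on the question.
      csv {
        source    => "Details"
        separator => ","
        columns   => ["Status", "Date", "IP", "Email"]
      }
    }
)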
ujjainHow does logstash know which grok patterns to match /var/log/ujjainapp/catalina.out with?
bjorn_You have to tell it
darkmoonvtIn general, you tell it. (Wrap that filter in a conditional. Although using the file name is a bit fragile.)
ujjaininput + pattern combination? we use a shipper, redis, indexer
ujjaininput+filter
ujjainI'm trying to understand our logstash config, but I can't find the match between the logfiles parsed and the grok patterns used
darkmoonvtIt depends on your architecture, but your shipper should add some metadata to the documents it ships, describing them. (In 5.x, this was document_type; in 6.x, you might use fields.type.)
darkmoonvtFor example, you might have something like: filter { if [type] == "syslog" { grok { <syslog things> }}}
darkmoonvtYou could look for specific files (in the source field, usually), but it's generally better to route on something a bit more stable.
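(A minimal sketch of that routing, assuming the shipper sets a type field as described above; the catalina branch mirrors the more fragile file-name approach:

    filter {
      if [type] == "syslog" {
        # Route on metadata the shipper attached, not on file paths.
        grok {
          match => { "message" => "%{SYSLOGLINE}" }
        }
      } else if "catalina.out" in [source] {
        # Routing on the source file name also works, but is more fragile.
        grok {
          match => { "message" => "%{GREEDYDATA:catalina_message}" }
        }
      }
    }
)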
darkmoonvtWait. Are you using the syslog input and wondering where certain fields come from?
ujjaindarkmoonvt, we are reading from redis
ujjainI found that it's checking if catalina.out is in the filename
ujjain else if "catalina.access" in [file] {
ujjain match => [ "message", "(?m)%{CATALINA_LOG1}" ]
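(Filled out, that fragment probably looks something like the sketch below; CATALINA_LOG1 would be a custom pattern shipped in a patterns_dir, and [file] a field the shipper adds. The first branch's pattern name and the directory path are assumptions:

    filter {
      if "catalina.out" in [file] {
        grok {
          # (?m) lets the pattern span multi-line stack traces.
          match        => [ "message", "(?m)%{CATALINA_OUT}" ]   # hypothetical pattern name
          patterns_dir => ["/etc/logstash/patterns"]             # assumed location
        }
      } else if "catalina.access" in [file] {
        grok {
          match        => [ "message", "(?m)%{CATALINA_LOG1}" ]
          patterns_dir => ["/etc/logstash/patterns"]
        }
      }
    }
)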
ujjainwe're looking into moving off of logstash and onto filebeat/es-ingest
d_codegood morning. I'm trying to write to S3 from an EC2 instance with an IAM profile. I have a corporate proxy that I have to send the actual data through, but it seems the lookup of the IAM profile info requires pulling from the metadata api at 169.254.169.254. Unfortunately, this has to go directly. On the `aws` cli, I have to set both HTTPS_PROXY and NO_PROXY. How can I manage this in logstash?
d_codeI'm using 6.1, btw
d_codehmm...well, if I set 'validate_credentials_on_root_bucket' to 'false', it succeeds
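(For reference, the logstash-output-s3 plugin exposes a proxy_uri option for the uploads themselves, and validate_credentials_on_root_bucket is the setting mentioned above. A sketch with placeholder bucket and proxy values:

    output {
      s3 {
        bucket => "my-logs-bucket"                   # placeholder
        region => "us-east-1"                        # placeholder
        # Send the actual data through the corporate proxy...
        proxy_uri => "http://proxy.example.com:3128" # placeholder
        # ...and skip the startup credential check against the root
        # bucket, which was failing here per the chat above.
        validate_credentials_on_root_bucket => false
      }
    }
)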