<ktosiek> Can I skip filters or throw an event to another pipeline? I want to keep a separate index with raw events for archiving (and, if the stars align, reprocessing).
<Spixx> darkmoontv: so yeah, it is not present in the config (and yes, I did read the basics before asking such a question). The latest version of Logstash uses /tmp to store and exec .so files, which becomes an issue when following security best practices, so I need to know the flag/config for moving the tmp/temp folder to a Logstash-specific one. So my question yet again: what is the setting for the tmp path? Is it a Java-specific setting?
<bjorn_> ktosiek: I would think so? Use an output that feeds the other pipeline's input?
<ktosiek> Oh, but then I have to put the cloned event through the whole pipeline
<ktosiek> Or I can start with the archiving pipeline and route events to specific ones from there, like in Graylog
<ktosiek> Is there a way to do this without losing any event data? Preferably with good performance (zero serialisation would be best :-))
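[The single-pipeline variant bjorn_ hints at can be done with a clone filter plus conditional routing, which keeps everything in-process (no extra serialisation hop through a second pipeline). A minimal sketch, assuming Elasticsearch outputs and hypothetical index names; the clone filter sets each copy's type to the clone name, which the conditionals key off:]

```
filter {
  clone {
    clones => ["archive"]   # one raw copy per event
  }
  if [type] != "archive" {
    # ... normal parsing only for the original event ...
  }
}
output {
  if [type] == "archive" {
    elasticsearch { index => "raw-%{+YYYY.MM.dd}" }      # archive index
  } else {
    elasticsearch { index => "parsed-%{+YYYY.MM.dd}" }   # normal index
  }
}
```

[The guard in the filter block matters: without it, the cloned copy would also run through the whole parsing chain, which is exactly what ktosiek wants to avoid.]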
<Dirkos> I keep getting an error while starting Logstash but I don't understand the error at all
<Dirkos> An unexpected error occurred! {:error=>#<NoMethodError: undefined method `call' for nil:NilClass>, :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-redis-3.1.5/lib/logstash/inputs/redis.rb:100:in `stop'",
<asmodai> It's a typical Ruby error: something invoked a method on an object that doesn't exist (is nil)
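[asmodai's point can be reproduced in two lines of plain Ruby; in the backtrace above, the nil receiver is whatever internal object the redis input's `stop` method tries to invoke before it was ever set up:]

```ruby
# Minimal reproduction of the error class asmodai describes:
# calling a method on a receiver that is nil.
callback = nil
begin
  callback.call                 # nil has no #call method
rescue NoMethodError => e
  puts e.message                # mentions the missing method and the nil receiver
end
```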
<Dirkos> asmodai: but it's vanilla Logstash. How do I know what's wrong then?
<asmodai> Dirkos: outdated plugin maybe?
<Dirkos> asmodai: I don't think any plugins are even installed
<Xylakant> Dirkos: maybe show your config?
<Xylakant> and tell us which Logstash version you're running?
<Dirkos> Xylakant: I think I may have found something where the patterns are not found
<Dirkos> I noticed a permission thingy maybe
<Xylakant> the error message indicates that the redis input is stopped before it's been registered properly. That might be the case if something during startup is failing
<Dirkos> Xylakant: it's the permission issue
<Dirkos> somehow /etc/logstash was owned by root:root
<Xylakant> I think you're probably on the right track with that permission thing
<Dirkos> so the process could not read the patterns itself
<Dirkos> waiting longer would show me an error on the filter where the "DATETIME" pattern was not there
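[The fix Dirkos describes usually amounts to giving the Logstash service user read access to the config tree again; a sketch, assuming the package-default `logstash` user and paths (a narrower `chmod` granting read access would work too):]

```
# inspect current ownership/permissions on the config tree
ls -ld /etc/logstash /etc/logstash/patterns

# hand it back to the service user and restart
chown -R logstash:logstash /etc/logstash
systemctl restart logstash
```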
<yakiza> hello everyone
<yakiza> Hello guys, I have just set up Logstash with Filebeat from the website and I get this error https://pastebin.com/29UQGfH5
<yakiza> any ideas?
<bjorn_> It's not an error, it's info.
<bjorn_> See, it says INFO - even in uppercase.
<yakiza> bjorn_: I know, I mean I can see that, but according to this guide https://www.elastic.co/guide/en/logstash/6.x/advanced-pipeline.html when I run Logstash I am supposed to get some data in Logstash, and I am getting none
<yakiza> bjorn_: man, I am so sorry about my stupidity
<yakiza> wrong dir! I have to sleep, I've been up 32 hours
<yakiza> it hurts my brain
<gaudi71> Hello everyone, I just started with Logstash. I created the conf file and my problem is that it does not give me logs in /var/log/logstash, and when I start the service and run netstat -ant, port 5044 does not appear. What can I do? Thank you for your response
<DD> Hello :)
<DD> I was just wondering if anyone else is experiencing constant GC with the newest Logstash and the beats input plugin. Even when stripping the pipeline down to just an input plus a drop filter, or just an input plus a stdout output, I am getting something that looks like a memory leak. Memory usage rises until it hits the heap size, and then it is constantly doing GC while freeing almost no memory.
<DD> I have around 15k events/s on input and 64GB of heap allocated. Tried giving it more heap but it does the same. Everything is updated to the latest version.
<timvisher> gaudi71: you'd need to share your conf file and startup logs for any help with that
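[For reference while gaudi71 gathers those: the smallest config that makes port 5044 show up in netstat is a beats input. A sketch, with a hypothetical filename and a stdout output for debugging; if the port still never binds, the startup log should say why:]

```
# /etc/logstash/conf.d/beats.conf -- minimal sketch
input {
  beats {
    port => 5044          # this is what should appear in `netstat -ant`
  }
}
output {
  stdout { codec => rubydebug }
}
```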
<timvisher> DD: I'm running 6.0.1 with no leakiness. Are you seeing that after bumping from ~> 6.0.1 to latest?
<DD> timvisher: for Logstash itself, in logstash.yml I have this
<DD> logstash.yml: https://pastebin.com/BQHnZXAU
<DD> I've changed the queue to persisted in order to eliminate the queue filling up memory; same results as with memory
<DD> for the pipeline conf
<DD> I can reproduce the same issue with only a beats input, no filters, and a stdout output, or even with a beats input, filter { drop {} }, and a stdout output
<DD> something like this
<timvisher> but what version of Logstash are you coming from?
<DD> currently on Logstash 6.2.0, was on 6.0.3
<DD> 6.1.3 sorry
<DD> looking at jmap I see a lot of LinkedHashMap objects https://pastebin.com/LzE5hJkt
<DD> and looking at the graphs in Kibana monitoring I get something like this https://ibb.co/iCTvMH https://ibb.co/cnR4Fc
<timvisher> DD: is data flowing through Logstash? i.e. are you getting documents into Elasticsearch?
<a1exus> I'm using Logstash's csv output plugin to export some data into CSV... is there a way to add a first line with the names of the fields? As of now I only get values
<beezel> hey guys, I'm trying to get Logstash running inside Docker on AWS to connect to Elastic Cloud, but I can't get Logstash to stand up. Hopefully something basic. Can someone take a look at my log and see if they can tell what is going on?
<beezel> I am using the default configs, and only passing cloud.id and cloud.auth via env vars
<beezel> I see some stuff about x-pack licensing, which makes sense since this is a trial. I also see some weird stuff about "Elasticsearch Unreachable: [http://logstash_system:xxxxxx@elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
<torrancew> beezel: there's a lot going on in that paste
<torrancew> 2 things I notice, mostly: the Elasticsearch unresolvable thing should probably get fixed. More importantly, it looks like you've got a config error
<torrancew> can you share your configs?
<beezel> torrancew: I do not have a config, just did a docker pull
<torrancew> which docker image?
<beezel> sudo docker run -d -it -p 5601:5601 -p 9200:9200 -p 5044:5044 -p 9600:9600 --name logstash docker.elastic.co/logstash/logstash:6.2.1 -e --cloud.auth=password --cloud.id=id
<beezel> fwiw, I also set up Metricbeat and it was able to connect and work with Elastic Cloud just fine, from the same VPC and host (same firewall rules, etc)
<torrancew> beezel: have you seen https://www.elastic.co/guide/en/logstash/current/_configuring_logstash_for_docker.html ?
<beezel> yes, that is where I started from
<beezel> that and https://www.elastic.co/guide/en/cloud/current/ec-cloud-id.html
<torrancew> You'll need to actually *follow* the instructions about creating a pipeline config
<beezel> that same page says I can either use pipeline or settings config, but not to use both
<beezel> settings config allows me to pass env vars, which is what I am doing
<beezel> it also says it should work with defaults
<Scoth> Greetings. I have a bit of an odd use case I'm having trouble with. I'm working on migrating from a rickety old box running an ELK setup to a shiny new cluster, but due to some past migration problems I need to keep logs flowing to the old box while debugging and validating the new cluster.
<Scoth> What I need to do is have a Logstash shipper that outputs messages into my new cluster (using RabbitMQ) but also re-outputs the original message as received to dump into the old box
<Scoth> I'm not finding an obvious way to pass through or replay/output received messages/events without any extra encapsulation or modification
<torrancew> beezel: where do you see it saying you cannot use both?
<torrancew> Scoth: can you have your old box pass through to the new cluster?
<torrancew> oh, that would be post-parsing
<beezel> torrancew: I think I misinterpreted this line: "Logstash differentiates between two types of configuration: Settings and Pipeline Configuration."
<Scoth> Well, the goal is to leave the old box alone as much as possible, if not entirely. Config validation, etc.
<beezel> I now understand what they mean. I'll work on a pipeline config
<torrancew> Scoth: ya
<beezel> thanks for the pointer
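[The pipeline-config route beezel settled on typically means bind-mounting a pipeline directory into the image. A sketch with hypothetical paths, assuming the env-var settings mapping the Docker docs describe; note also that in beezel's original command, everything after the image name (the `-e --cloud.auth=... --cloud.id=...` part) is passed as arguments to Logstash itself rather than to docker, which may explain the config error torrancew spotted:]

```
# pipeline/logstash.conf on the host holds the input/filter/output blocks
sudo docker run -d --name logstash \
  -p 5044:5044 -p 9600:9600 \
  -v "$PWD/pipeline/":/usr/share/logstash/pipeline/ \
  -e CLOUD_ID=... -e CLOUD_AUTH=... \
  docker.elastic.co/logstash/logstash:6.2.1
```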
<torrancew> Scoth: so, I can only think of a couple of ways to approach this
<torrancew> they're all a bit janky
<Scoth> Unfortunately it was set up before I was here and is a single point of failure with zero redundancy and zero cushion, so it's really limiting my options for safe migration with viable rollback
<torrancew> Scoth: main approach: stand up a new, single-node Logstash box. Have it do 0 parsing, and have an output for each of your clusters
<torrancew> then point the shippers at that
<torrancew> (or have it pull from rabbit, then pass to the other 2 logstashes, etc)
<torrancew> lots of minor variations on that trick you can do, too
<torrancew> i.e. shipper -> rabbit (topic foo) -> ls "proxy" -> rabbit (topic bar) + rabbit (topic baz)
<torrancew> then the old Logstash pulls from topic bar, the new cluster from topic baz
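[torrancew's "proxy" stage might look roughly like this as a Logstash pipeline; a sketch, assuming the rabbitmq input/output plugins and hypothetical host, queue, and exchange names:]

```
# "proxy" pipeline: consume topic foo, re-publish unmodified to bar and baz
input {
  rabbitmq {
    host  => "rabbit.example.com"
    queue => "foo"
  }
}
# no filter block: events pass through unparsed
output {
  rabbitmq {
    host          => "rabbit.example.com"
    exchange      => "bar"
    exchange_type => "direct"
  }
  rabbitmq {
    host          => "rabbit.example.com"
    exchange      => "baz"
    exchange_type => "direct"
  }
}
```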
<torrancew> Scoth: it may be easier to start by just separating ES out from the old box first
<torrancew> could do that a bit more fluidly
<torrancew> i.e. stand up ES data nodes that join the current ELK cluster, get the shards all spread out, then the 2 Logstash clusters both use the same backing datastore, so fewer consistency/migration headaches
<Scoth> That's pretty much what I already tried (that is, having both old and new pull from a duplicated rabbit cluster), but for some reason messages were being dropped. I didn't have time to really debug before having to roll back, which is partly why I'm in this state rather than just using rabbit
<torrancew> I'm not an expert on duplicated rabbit clusters; could it have been a problem with topology/acking?
<torrancew> (this is why I suggested pulling messages off of topic foo and re-inserting them straight back into topics bar and baz; a sort of tee(1) built on rabbit)
<Scoth> Could have been. I'd actually done something similar about six months ago doing an ES 2.x to 5.x migration, but it was coming from a well-built and well-designed 2.x cluster instead of just a creaky old box
<torrancew> what of the second suggestion, to scale Elasticsearch underneath your creaky old box?
<Scoth> (I used an exchange and then bound two queues to it with the same routing key, which duplicated all the messages. The Logstash indexers pulled from those queues.)
<torrancew> ah, gotcha
<Scoth> That may end up being an option, although it's an old 2.x box too.
<torrancew> that sounds like a reasonable topology (based on my tenuous grasp of rabbit)
<torrancew> yeah, you'd have to grow the 2.x cluster
<Scoth> I'm not a rabbit expert either, but it worked
<torrancew> which is, clearly, not ideal
<Scoth> Aight, I'm heading out for the time being. Thanks for the ideas, I'll keep puzzling over it
<torrancew> good luck