- From: Avishai Ish-Shalom
- To:
- Cc:
- Subject: [chef] Re: Re: Re: Re: Re: Re: RabbitMQ and Logstash
- Date: Wed, 11 Jan 2012 21:16:35 +0200
- Organization: FewBytes Technologies
I've ditched rabbitmq completely in favor of zeromq. Apparently
the bunny gem doesn't perform all that well, and zeromq both removes
the need for a broker and gives great performance.
Regards,
Avishai
On 01/11/2012 07:27 PM, Harlan Barnes wrote:
Messed up and only sent to James the first time.
On Tue, Jan 10, 2012 at 3:01 PM, James wrote:
We're currently working on a similar project. If you
want to contribute your changes back / post them
somewhere, we're happy to build on it and do the same.
Hi James - Sure, here's where I am. (I've been sidetracked with
another project, but I've solved what I think are hard problems.
I've tested everything individually but not all together.)
My plan had two parts: A) have logstash act as an
agent/shipper on each of our nodes producing logs, with a simple input
reading the local file and shipping to the rabbitmq cluster, and B) have
X number of logstash instances listening to the same queue, in hopes of
spreading the consuming out across multiple nodes. The logstash
instances in B would do the heavy filtering (grok'ing each event
type to parse out interesting information to index in
elasticsearch).
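A minimal sketch of the shipper config for part A, assuming a logstash 1.x-era amqp output (the file path, exchange name, and broker host are all made up for illustration):

```
# hypothetical shipper (agent) config -- paths/hosts are illustrative
input {
  file {
    type => "syslog"
    path => "/var/log/messages"
  }
}
output {
  amqp {
    host => "rabbit01.internal.example.com"
    exchange_type => "direct"
    name => "logstash"   # exchange name
  }
}
```

The indexers in part B would then use a matching amqp input pointed at the same exchange/queue.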
The log agent side ("A" above) is pretty easy. I modified Joshua
Timberman's cookbook to:
- Changed the agent startup scripts to use Jordan's
"directory" functionality instead of an explicit script. (The
plan was to have other recipes drop their configurations into
the directory as needed and let the agent glob them up.)
- Made it not install agent.conf unless you tell it to (so it
doesn't interfere with my own agent configuration above)
- I also changed the web startup scripts so they don't
enforce 'localhost' as the elasticsearch backend (it's an
attribute now).
- Added CentOS (init.d + daemonize) service support
- Here's the pull request I sent Joshua (and of course, it
references back my version of it): https://github.com/jtimberman/logstash-cookbook/pull/1
On the server side ("B" above), my goal was to have each one of the
main components "clustered" (or at least stateless, so they could
be load balanced, as is the case with the logstash web
interface). I also wanted to be able to run a node with every
"component" installed ... or run each component on a node by
itself. Here's what I ended up with:
- Elasticsearch (clustered)
- The original one I started with is this: http://community.opscode.com/cookbooks/elasticsearch
It was only for Ubuntu and Debian. This was my first attempt at
modifying a recipe, so I ripped out all the stuff I didn't
understand (the Debian monit and EC2 stuff) so I could
understand it more easily. Then I added Red Hat / CentOS support back in
by virtue of using the elasticsearch-servicewrapper here: https://github.com/elasticsearch/elasticsearch-servicewrapper
(with a little modification to the wrapper file itself.)
- What I ended up with was this: https://github.com/harlanbarnes/elasticsearch-cookbook
- There are other Elasticsearch recipes on GitHub; I don't think
everyone has consolidated behind one maintainer. There's
probably one among all the choices that fits you.
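For what it's worth, the clustering on the elasticsearch side is mostly a matter of giving every node the same cluster name (0.x-era elasticsearch discovers peers via multicast by default); a minimal elasticsearch.yml fragment, with illustrative names:

```
# elasticsearch.yml -- cluster/node names are illustrative
cluster.name: logstash-es
node.name: es-node-1
```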
- RabbitMQ (clustered)
- This cookbook from Opscode is pretty great: https://github.com/opscode/cookbooks/tree/master/rabbitmq
... but I wanted it to auto-cluster and support SSL connections,
so I added that with this pull request: https://github.com/opscode/cookbooks/pull/280
- One side note: to do clustering in RabbitMQ, you have to
tell the recipe "up front" which nodes are your disk nodes (even if
not all the disk nodes exist yet). As such, those disk nodes
need to be resolvable by everyone in the cluster. In a
traditional DNS setup, that's usually not a problem. On EC2 (and
not using Route 53), I had to make a little recipe that sets the
Chef node name to the ephemeral private IP address.
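To illustrate the "up front" part: with config-file clustering, the disk nodes are listed literally in rabbitmq.config, so every member must be able to resolve those hostnames (the names below are made up):

```
%% rabbitmq.config -- hostnames are illustrative
[{rabbit, [
  {cluster_nodes, ['rabbit@disk-node-1', 'rabbit@disk-node-2']}
]}].
```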
- Logstash (consuming)
- There's not much additional to do here beyond what I did for
the log agent / shipper.
- I did create a grok package to deploy the C grok library
that the version of logstash I was using relied on. However, I
understand that the latest version of logstash has a pure-ruby
version of grok, so there's no need for that anymore.
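With the pure-ruby grok, the heavy filtering in "B" reduces to grok stanzas in the indexer config; the event type and pattern here are just examples:

```
# hypothetical indexer filter -- type/pattern are illustrative
filter {
  grok {
    type => "syslog"
    pattern => "%{SYSLOGLINE}"
  }
}
```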
There's more work to do (probably in a cookbook that wraps
the other cookbooks):
- Set up the users and queues on rabbitmq (there's an LWRP in the
rabbitmq cookbook for that.)
- Write the logstash configuration into files or templates to
drop in the configuration directory.
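The user setup with the rabbitmq cookbook's LWRPs could be sketched like this (the user, vhost, and password are illustrative; check the cookbook's resources for the exact attribute names):

```ruby
# hypothetical wrapper-cookbook recipe -- names/password are made up
rabbitmq_vhost "/logstash" do
  action :add
end

rabbitmq_user "logstash" do
  password "change-me"
  action :add
end

rabbitmq_user "logstash" do
  vhost "/logstash"
  permissions ".* .* .*"
  action :set_permissions
end
```

(Declaring the queues themselves is often left to the producers/consumers at connect time rather than done in Chef.)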
So that's it. I'm still relatively new to Chef, so I'm sure
there's lots of "wrong stuff." :-)
Archive powered by MHonArc 2.6.16.