I would love to know your configuration; I've never been able to make this work. Every time I try, solr stops responding and the chef server eventually crashes. I'd like to be able to blame it on CentOS 5.5 and Ruby 1.8.7, but I don't have anything to compare it to.

Sent from a phone
So, it turned out that this DID fix my problem, but there's a trick: /var/lib/chef/solr/conf/solrconfig.xml has the maxFieldLength field set _twice_, and you need to change the value in both places for it to take.
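For anyone who finds this later, a rough sketch of the change on my box (the paths and the 10000 starting value match my install, so adjust to taste):

# maxFieldLength is set twice in the stock Solr 1.x config (once under
# <indexDefaults>, once under <mainIndex>); check where yours has it:
sudo grep -n maxFieldLength /var/lib/chef/solr/conf/solrconfig.xml

# Raise both occurrences (here from the default 10000 to 120000), then
# restart the indexer so the new limit takes effect:
sudo sed -i 's|<maxFieldLength>10000</maxFieldLength>|<maxFieldLength>120000</maxFieldLength>|g' \
  /var/lib/chef/solr/conf/solrconfig.xml
sudo /etc/init.d/chef-solr restart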
On Tue, May 29, 2012 at 11:14 PM, Ian Marlier wrote:
Sadly, changing that parameter and restarting solr doesn't seem to have helped. I changed it in both locations (/etc/solr/conf/solrconfig.xml and /var/lib/chef/solr/conf/solrconfig.xml) and restarted chef-solr after doing so. I initially changed it from the default value of 10000 to 20000, and then tried again, raising it to 120000.
In order to try to force reindexing, I've done a number of things, including editing the nodes and deleting the cluster/uuid parameter entirely, and then editing again and adding it back. I've even tried dumping the entire node out to a JSON file, deleting it entirely from Chef, and then adding it back in again. Neither of those worked.
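(The only other hammer I know of is rebuilding the whole index from scratch; for reference, on an open source Chef 10-era server that should be roughly:)

# Destroys the search index and re-queues every object for indexing.
# Slow on a large install, so very much a last resort.
knife index rebuild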
Querying solr directly, it's clear that the issue is that this particular attribute isn't making it into the index, but I really can't work out why.
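For reference, this is the sort of direct poking at solr I mean (8983 is the default chef-solr port, and the field name below comes from my memory of the stock schema, so treat both as assumptions):

# Check that solr is answering and how many documents it has indexed:
curl 'http://localhost:8983/solr/select?q=*:*&rows=0&indent=on'

# Pull back a couple of raw node documents to see what actually made it
# into the index (X_CHEF_type_CHEF_X is assumed from the stock chef schema):
curl 'http://localhost:8983/solr/select?q=X_CHEF_type_CHEF_X:node&rows=2&indent=on'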
On Tue, May 29, 2012 at 10:39 PM, Ian Marlier wrote:
I'll take a look at that setting, Peter. Thanks for the pointer. I'll report back once I've got something.

On Tue, May 29, 2012 at 9:15 PM, Peter Donald wrote:
Hi,
Do the nodes that are not being returned from search have a lot of attribute data? If so, the problem may be that they are failing to be indexed by solr because you are exceeding the maxFieldLength setting. If that's the case, find the solr config file on your system and increase that value. (Just be warned: the config file actually appears twice on Ubuntu systems, for some reason.)
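Something along these lines should turn up every copy (just a sketch; the paths differ between distros and package versions):

# Locate every solrconfig.xml on the box and show where maxFieldLength is set:
find /etc /var/lib -name solrconfig.xml 2>/dev/null | xargs grep -n maxFieldLength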
HTH
On Wed, May 30, 2012 at 3:00 AM, Ian Marlier wrote:
> I'm working with the following two nodes, which are supposed to be part of a
> pacemaker cluster:
> chef-repo (hc_production)]$ k node show login01.fal
> --attribute cluster --format json
> {
>   "cluster": {
>     "heartbeat_coredumps": "true",
>     "uuid": "dff82156-9145-4a89-b92a-ba3bb238442b",
>     "udpport": "759",
>     "heartbeat_compression": "bz2",
>     "pacemaker": {
>       "resource_dir": "/usr/lib/ocf/resource.d"
>     },
>     "deadtime": 60,
>     "initdead": 90,
>     "warntime": 30,
>     "keepalive": 5
>   }
> }
> chef-repo (hc_production)]$ k node show login02.fal
> --attribute cluster --format json
> {
>   "cluster": {
>     "heartbeat_coredumps": "true",
>     "uuid": "dff82156-9145-4a89-b92a-ba3bb238442b",
>     "udpport": "759",
>     "heartbeat_compression": "bz2",
>     "deadtime": 60,
>     "pacemaker": {
>       "resource_dir": "/usr/lib/ocf/resource.d"
>     },
>     "initdead": 90,
>     "keepalive": 5,
>     "warntime": 30
>   }
> }
> chef-repo (hc_production)]$
>
>
> However, for reasons that I can't explain, searching based on cluster UUID
> doesn't return these nodes:
>
> chef-repo (hc_production)]$ k search node
> 'cluster_uuid:dff82156-9145-4a89-b92a-ba3bb238442b'
> 0 items found
>
> chef-repo (hc_production)]$
>
>
> Anyone have any idea why this would be? Is there a way to check the index
> status for a given node or something like that, or to force re-indexing of
> the node? (Note: I did try editing the node, removing the UUID parameter,
> saving the node, then editing again and adding the UUID parameter back in.)
>
> Thanks,
>
> Ian
>
> --
> Ian Marlier | Senior Systems Engineer
> Brightcove, Inc.
> 290 Congress Street, 4th Floor, Boston, MA 02110
> " target="_blank">
>
Cheers,
Peter Donald
--
Ian Marlier | Senior Systems Engineer
Brightcove, Inc.