Quick question before buying XenForo

Ryan_

Active member
Hi everyone! I recently decided to buy XenForo and import my data from my vBulletin 4 forum after testing out the demos - excellent software for sure! I just had a quick question I was hoping you guys could help me with:

I want to set up XenForo Enhanced Search, but I am a little confused about how to set up ElasticSearch:

* "Download":http://www.elasticsearch.org/download and unzip the ElasticSearch official distribution.
* Run @bin/elasticsearch -f@ on Unix, or @bin/elasticsearch.bat@ on Windows.
* Run @curl -X GET http://localhost:9200/@.
* Start more servers ... (see the sketch just below)
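
For concreteness, here is a rough sketch of those first steps on a Linux box (the version number in the file names is just an example; use whatever you downloaded):

<pre>
# after downloading the tarball from http://www.elasticsearch.org/download
tar -xzf elasticsearch-0.90.2.tar.gz
cd elasticsearch-0.90.2

# start a single node in the foreground
bin/elasticsearch -f

# from a second shell: "localhost" is literal if you run curl
# on the same machine the server is running on
curl -X GET http://localhost:9200/
</pre>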

I understand the first part, and the 2nd part seems easy enough for my Linux VPS. For the 3rd part, is it literally @localhost@, or are you supposed to replace that with something? And what do they mean by the "start more servers" part? Also, after that they give an example for indexing:

h3. Indexing

Let's try and index some twitter-like information. First, let's create a twitter user, and add some tweets (the @twitter@ index will be created automatically):

<pre>
curl -XPUT 'http://localhost:9200/twitter/user/kimchy' -d '{ "name" : "Shay Banon" }'

curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '
{
    "user": "kimchy",
    "postDate": "2009-11-15T13:12:00",
    "message": "Trying out Elastic Search, so far so good?"
}'

curl -XPUT 'http://localhost:9200/twitter/tweet/2' -d '
{
    "user": "kimchy",
    "postDate": "2009-11-15T14:12:12",
    "message": "Another tweet, will it be indexed?"
}'
</pre>

Now, let's see if the information was added by GETting it:

<pre>
curl -XGET 'http://localhost:9200/twitter/user/kimchy?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/1?pretty=true'
curl -XGET 'http://localhost:9200/twitter/tweet/2?pretty=true'
</pre>

h3. Searching

Mmm, search... shouldn't it be elastic?
Let's find all the tweets that @kimchy@ posted:

<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy&pretty=true'
</pre>

We can also use the JSON query language ElasticSearch provides instead of a query string:

<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?pretty=true' -d '
{
    "query" : {
        "text" : { "user": "kimchy" }
    }
}'
</pre>

Just for kicks, let's get all the documents stored (we should see the user as well):

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
    "query" : {
        "matchAll" : {}
    }
}'
</pre>

We can also do a range search (the @postDate@ was automatically identified as a date):

<pre>
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -d '
{
    "query" : {
        "range" : {
            "postDate" : { "from" : "2009-11-15T13:00:00", "to" : "2009-11-15T14:00:00" }
        }
    }
}'
</pre>

There are many more options for performing search; after all, it's a search product, no? All the familiar Lucene queries are available through the JSON query language, or through the query parser.
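
As a quick sketch of that query parser syntax (the fields are the ones indexed above; the boolean form follows standard Lucene syntax):

<pre>
curl -XGET 'http://localhost:9200/twitter/tweet/_search?q=user:kimchy+AND+message:indexed&pretty=true'
</pre>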

h3. Multi Tenant - Indices and Types

Maan, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such a large amount of data.

ElasticSearch supports multiple indices, as well as multiple types per index. In the previous example we used an index called @twitter@, with two types, @user@ and @tweet@.

Another way to define our simple twitter system is to have a different index per user (though note that an index has an overhead). Here are the indexing curl commands in this case:

<pre>
curl -XPUT 'http://localhost:9200/kimchy/info/1' -d '{ "name" : "Shay Banon" }'

curl -XPUT 'http://localhost:9200/kimchy/tweet/1' -d '
{
    "user": "kimchy",
    "postDate": "2009-11-15T13:12:00",
    "message": "Trying out Elastic Search, so far so good?"
}'

curl -XPUT 'http://localhost:9200/kimchy/tweet/2' -d '
{
    "user": "kimchy",
    "postDate": "2009-11-15T14:12:12",
    "message": "Another tweet, will it be indexed?"
}'
</pre>

The above indexes information into the @kimchy@ index, with two types, @info@ and @tweet@. Each user will get his own special index.

Complete control is allowed at the index level. As an example, in the above case we would want to change from the default of 5 shards with 1 replica per index to only 1 shard with 1 replica per index (== per twitter user). Here is how this can be done (the configuration can be in yaml as well):

<pre>
curl -XPUT http://localhost:9200/another_user/ -d '
{
    "index" : {
        "numberOfShards" : 1,
        "numberOfReplicas" : 1
    }
}'
</pre>
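
As a sketch of the yaml form, one place it can live is @config/elasticsearch.yml@, where it sets the default for newly created indices rather than for a single index (the snake_case keys below are an assumption based on the standard config file format):

<pre>
# hypothetical snippet for config/elasticsearch.yml
index:
    number_of_shards: 1
    number_of_replicas: 1
</pre>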

Search (and similar operations) are multi-index aware. This means that we can easily search on more than one index (twitter user), for example:

<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
    "query" : {
        "matchAll" : {}
    }
}'
</pre>

Or on all the indices:

<pre>
curl -XGET 'http://localhost:9200/_search?pretty=true' -d '
{
    "query" : {
        "matchAll" : {}
    }
}'
</pre>

One-liner teaser: And the cool part about that? You can easily search on multiple twitter users (indices), with different boost levels per user (index), making social search so much simpler (results from my friends rank higher than results from my friends' friends).
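
Presumably that refers to per-index boosting in the search body; a hedged sketch using the @indices_boost@ element (the boost values are made up):

<pre>
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -d '
{
    "indices_boost" : {
        "kimchy" : 2.0,
        "another_user" : 1.0
    },
    "query" : {
        "matchAll" : {}
    }
}'
</pre>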

h3. Distributed, Highly Available

Let's face it, things will fail...

ElasticSearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replicas. By default, an index is created with 5 shards and 1 replica per shard (5/1). There are many topologies that can be used, including 1/10 (to improve search performance) or 20/1 (to improve indexing performance, with search executed in a map-reduce fashion across shards).
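
Following the index creation example above, a 20/1 topology would be requested like this (a sketch; the index name is made up):

<pre>
curl -XPUT 'http://localhost:9200/heavy_indexing/' -d '
{
    "index" : {
        "numberOfShards" : 20,
        "numberOfReplicas" : 1
    }
}'
</pre>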

In order to play with ElasticSearch's distributed nature, simply bring more nodes up and shut nodes down. The system will continue to serve requests (make sure you use the correct HTTP port) with the latest data indexed.
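
A minimal sketch of playing with that (the second node binding to the next free HTTP port, 9201, is an assumption based on the default port behaviour):

<pre>
# start a second node on the same machine; it discovers and joins the cluster
bin/elasticsearch -f

# watch shards relocate and the cluster status change
curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
</pre>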

h3. Where to go from here?

We have just covered a very small portion of what ElasticSearch is all about. For more information, please refer to "elasticsearch.org":http://www.elasticsearch.org.

h3. Building from Source

ElasticSearch uses "Maven":http://maven.apache.org for its build system.

In order to create a distribution, simply run the @mvn clean package -DskipTests@ command in the cloned directory.

The distribution will be created under @target/releases@.
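
Putting the build steps together (the repository URL is an assumption; check the project page for the canonical location):

<pre>
git clone https://github.com/elasticsearch/elasticsearch.git
cd elasticsearch

# build the distribution, skipping the test suite
mvn clean package -DskipTests

# the packaged distributions end up here
ls target/releases
</pre>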

h1. License

<pre>
This software is licensed under the Apache 2 license, quoted below.

Copyright 2009-2013 Shay Banon and ElasticSearch <http://www.elasticsearch.org>

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
</pre>

Is any of that required, or is it just an example? I am not very familiar with SSH, so I prefer doing a lot of things via other methods when applicable. And once all that is done, you just install the Enhanced Search add-on and the files that go with it, and it takes effect immediately, correct?
 
I'm assuming in that case you DON'T need to do the same thing for $ftp->delete and the line under "# Array of Old Database Files to be Deleted", correct? Also, do you replace those values ( http://prntscr.com/1gs5sa ) with the actual information?
Yes, you'd need to remove the array from the delete as well (I'm also deleting multiple backups from my NAS).

Here is an edited version of mine for a single file (with sensitive information replaced by XXXX):

Code:
#!/usr/bin/perl -w

use Net::FTP;
use Date::Manip;
# FTP PARAMETERS
$ftp_backup = 1;
$dir = "/home/z22se/scripts/databases";
$nas_host = "XXXXXXXX";
$nas_port = "XXXX";
$nas_user = "XXXX";
$nas_pwd = "XXXXXXXX";
$nas_dir = "/Websites/Databases/";
$today = UnixDate("today","%Y-%m-%d");
$old = UnixDate("14 days ago","%Y-%m-%d");
# File to be Uploaded
$file = "$dir/z22seforum.$today.sql.bz2";
# File to be Deleted
$file = "z22seforum.$old.sql.bz2";
&nas();
# UPLOADING BACKUP TO NAS
sub nas {
    if ($ftp_backup == 1) {
        my $ftp = Net::FTP->new($nas_host, Port => $nas_port, Debug => 0)
            or die "Cannot connect to server: $@";
        $ftp->login($nas_user, $nas_pwd)
            or die "Cannot login ", $ftp->message;
        $ftp->cwd($nas_dir)
            or die "Can't CWD to remote FTP directory ", $ftp->message;
        $ftp->binary();
        $ftp->put($file)
            or warn "Upload failed ", $ftp->message;
        $ftp->delete($oldfile)
            or warn "Delete failed ", $ftp->message;
        $ftp->quit();
    }
}
 

Okay, so if I were going to edit the original for multiple websites, I would remove the delete from $ftp->delete as well as the database section, gotcha. So that code would replace upload.pl, and it will work for a single website's database AND file system backups?
 
No, the delete part is there to remove the old database backups which are no longer required. That script will keep 14 days' worth (this can be changed by editing the $old variable).

The upload.pl script is only for backing up the databases.

If you want to back up the filesystem as well, you want to use something like rsync to do that.

This is a bash script I have to mirror my public_html directory to the NAS:

Code:
#!/bin/bash
## Matt's script to copy sites to home NAS

# Location to store backups...
NAS_HOST="XXXXXX"
NAS_PORT="XXXX"
NAS_USER="XXXX"
NAS_FORUM="/mnt/soho_storage/samba/shares/Matt/Websites/Z22SE.co.uk"
# Rsync options...
OPTS="-aruvPz
      --compress-level=9
      --itemize-changes
      --delete
      --human-readable
      --stats"
LOCAL_FORUM="/home/z22se/public_html/"
# Display greeter...
#echo "Preparing to backup z22se.co.uk forum in 5 seconds with the following command..."
echo "Backing up home directory to NAS..."
echo ""
# Echo command to run...
echo /usr/bin/rsync -e "ssh -p ${NAS_PORT}" $OPTS $LOCAL_FORUM $NAS_USER@$NAS_HOST:$NAS_FORUM
echo ""
sleep 5
# now the actual transfer
/usr/bin/rsync -e "ssh -p ${NAS_PORT}" $OPTS $LOCAL_FORUM $NAS_USER@$NAS_HOST:$NAS_FORUM
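
If you want both scripts to run automatically each night, a hypothetical crontab arrangement would be something like this (the times and script paths are made up; adjust them to your own setup and edit with crontab -e):

Code:
# nightly database upload at 3:15, filesystem mirror at 3:45
15 3 * * * /usr/bin/perl /home/z22se/scripts/upload.pl
45 3 * * * /bin/bash /home/z22se/scripts/mirror_to_nas.sh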
 

I meant changing it from plural ( http://prntscr.com/1gs9ma ) to singular, as you said with the "# Array of Database Files to Upload" section. But either way, I'll just use the edited single-website version you so kindly provided.

Okay, so I'll look into rsync for the file system as well. Although I am gonna have my host do a nightly full server backup soon, filesystem included, so maybe I don't need rsync after all.
 
I have 4x daily full backups, but it's nice to have your own backup plan in place.

I understand what you mean now about the delete. I messed up the example above; it should be:

Code:
# File to be Uploaded
$file = "$dir/z22seforum.$today.sql.bz2";
# File to be Deleted
$oldfile = "z22seforum.$old.sql.bz2";

and then you'd just remove the foreach around it:

Code:
$ftp->delete($oldfile)
  or warn "Delete failed ", $ftp->message;
 

Yup, having several daily backups is good in case something happens.

Do you mean for the example of the single-website file? If so, I'll keep that in mind and if I have any more questions after I buy XenForo I'll let you know.
 
Yes, for the single database example I posted earlier in the thread. Once you have XF up and running, I can give you a specific script to use based on your own setup if you need it.
 