XF 1.2 Admin Login - repeated prompting - Percona Cluster

ColinD

Hi guys,

I'm experimenting with some setups on my VMware sandbox ahead of migration planning from IPB.

My current setup is nginx 1.4 with php-fpm on PHP 5.5.8 (sockets); login to the forum area works fine.

Admin login, as others have reported, keeps repeating the login request. It is checking the password: if I duff it, it tells me it's invalid.

I've checked and cleared the sessions (admin and normal) a few times. Now I'm drawing a blank.

I've seen this thread: http://xenforo.com/community/threads/cant-login-to-my-admin-kontrol-panel.41223/ and the subsequent thread suggestions.

Regrettably my sandbox is under my desk and not exposed to the interweb. I'm using a dummy domain, briskoda.dev, which is set in my hosts file. I installed (a few times now ;)) from that URL fine. I've used a combination of Safari, Chrome and Firefox to test; each repeats the prompt, and all have cookies enabled. Cookies are being set too; in admin: xf_session, xf_session_admin & xf_user.

I've also set debug to true in the config; being new to XF, I still have to work out what I'm debugging.

I'm missing something. I hope to feel like a wally soon, so I can crack on and get off IPB.

P.S. The only other oddball is that I've got a Percona cluster behind HAProxy. The install/upgrades and front-end logins work... to eliminate that, I connected to a cluster node directly; same results. Thought I'd mention it though.

-Edit -> It seems the balancing into the cluster is the spell breaking the magic. See below.
 
I'm assuming you completely erased all cache and cookie values?

What's actually happening?

Is your IP constant, as reported to the PHP installation?

Liam
 
Hi Liam,

phpinfo is showing me my IP, which remains unchanged; same LAN.

Good to ask on cookies etc. I am clearing those, plus the local browser cache. I've also erased the DB between bigger tests, using different admin usernames; sanity-testing myself.

I'm connecting from Safari directly to the nginx IP, so no load balancer or X-Forwarded-For issues. I've removed APC.

Just picking over the PHP session settings and then digging into the code to figure out what flag I've missed. PHP is not my forte!

Code:
_SERVER["SERVER_SOFTWARE"] nginx/1.4.4

_SERVER["REMOTE_ADDR"] 192.168.1.235

_SERVER["REMOTE_PORT"] 58070

_SERVER["SERVER_ADDR"] 192.168.1.110

_SERVER["SERVER_PORT"] 80
 
Liam,

That old 'go for a cup of tea and come back' trick has worked. I just logged into admin!

Now to work backwards, I want to understand why. Undoubtedly it's the setup I'm working on.

I'll update when I figure it out, in case anyone else is googling.
 
Still digging, but it appears that the MySQL cluster setup, and in particular how I'm balancing it, is having an effect.

If I soft-down all but one MySQL node, admin works fine and I can log in. If I then bring the other nodes back up, with either leastconn or roundrobin balancing, I cannot stay logged into admin, nor log in.

So the front end appears to be robust enough to cope with the queries heading off to any server through HAProxy, and likewise importing external data. But admin seems more delicate. There is no load on the cluster; syncing is quick at the moment over the internal VM network.
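
For reference, a quick sanity check I've been running on each node while testing; these are the standard Galera wsrep status variables, nothing XF-specific:

Code:
-- run on every node; cluster size and sync state should agree across them
SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';
-- how much time replication has spent paused by flow control
SHOW GLOBAL STATUS LIKE 'wsrep_flow_control_paused';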

I've run a few scenarios now and the common factor is when multiple nodes are up and HAProxy is balancing across them. Worst case, I put the 2+n nodes in as backups, but I really wanted to avoid a kingpin.

Anyone have any ideas on the above?
 
The admin and front end use the exact same session management system. It's possible your session on the front end is being recreated over and over (because you selected the "stay logged in" option).

It simply sounds like the clustering isn't working for this table (and possibly a few others). Have you made sure it's an NDB table?
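
e.g. a quick way to see what engine the session tables are actually on. The table names here are a guess based on the cookie names mentioned above, so check them against your schema:

Code:
-- assumes the stock xf_ table prefix
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME IN ('xf_session', 'xf_session_admin');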
 
Hi Mike,

I figured if the code was going to be shared it would be under Rails or Node... When I was trying this I wasn't logged into the front end. But what was throwing me for a while too is that front-end login/logout worked when roundrobin or leastconn balancing was in effect.

It's not an NDB table; this is a Percona XtraDB Cluster where all nodes are r/w capable. The front end appears fine when I Siege it: I can see a good, even distribution of reads and 200 responses. Likewise with a hackish Selenium script posting noddy replies to a thread.

My gut is telling me it's replication latency, which may be why a single-node setup works "ok". It may also be why DigitalPoint went with the DB node on the web node...

I'm pursuing this on availability/hardware-failure grounds rather than performance.

/Edit...
See, now I feel like a wally :)

...it is, however, a MyISAM table, and that's not a 'supported' engine for replication yet.

wsrep_replicate_myisam=1 < Off to test.

I saw the search tables were MyISAM (figures), but ignored that as I'll be heading for the extended search setup next. Glad it's bitten me; it was too easy otherwise.
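
Roughly what I'm poking at for the test; just a sketch, and I believe the wsrep option can be flipped at runtime on PXC, but don't take that as gospel:

Code:
-- anything in this schema still on MyISAM?
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE()
  AND ENGINE = 'MyISAM';

-- the experimental MyISAM replication, toggled on for the test
SET GLOBAL wsrep_replicate_myisam = ON;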
 
For reference, the MEMORY tables will also be a problem; this is specific to this MySQL cluster setup.
I have altered them to InnoDB and turned off delayed inserts in the ACP.
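
Roughly what that looked like; the ALTER target below is just a placeholder, run it for whichever tables the first query lists:

Code:
-- find the MEMORY tables in this schema
SELECT TABLE_NAME FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND ENGINE = 'MEMORY';

-- convert each one (placeholder name)
ALTER TABLE some_memory_table ENGINE = InnoDB;

-- then confirm nothing non-InnoDB is left behind
SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES
WHERE TABLE_SCHEMA = DATABASE() AND ENGINE <> 'InnoDB';
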
Thinking on it, once I'm done I'll write this up under a new heading... if only for my own sanity :)
 
My gut is telling me it's replication latency, which may be why a single-node setup works "ok". It may also be why DigitalPoint went with the DB node on the web node...
We don't use Percona or Galera. We use MySQL Cluster, which uses the ndbcluster storage engine... a completely different beast (not replication-based). Galera is going to run into limitations simply because it uses InnoDB and disk-based tables underneath it all. Our setup can handle somewhere around 20-25M queries per second, and having SQL nodes on web server nodes is not needed.
 
Thanks for the reply; sorry I mis-associated a previous post... Given your requirements, ndbcluster in memory makes sense. The complexity for me does not; that, at least, is how I see Galera: as an 'easier' path to multi-master HA. I'm not handling any serious QPS, and I'm a long way off the InnoDB limits.

I prefer to separate my layers & apps. It introduces some network time, but in the scheme of things it's not much to worry about.
 