Yii-powered website on a "cloud"

Hello all. Here at work they bought some servers on a cloud system, which is nothing more than two mirrored servers accessed through a load balancer (well… call it round-robin selection if you like).

Now my application is both serving a web administration interface and a flash game.

Of course, if a session gets started on one server and the next request is redirected to the other server, the session isn't there anymore, forcing the user to log in again in the best case. That shouldn't happen within the game itself.

I wonder what's the best solution to handle this problem, since I'm very new to these things. From my humble point of view a cluster could have done the trick… but who knows!

I'd really like to know what you think, both about the problem itself (session handling) and about the "cloud" stuff: most of the tech people here are just throwing bad words at it.

thanks.

You could use a database or memcache as the session handler; that way the sessions would be available on both servers.
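For the database option, a minimal sketch for Yii 1.x (assuming both web servers can reach the same database and that your shared connection component is named 'db') would be to switch the session component to CDbHttpSession:

```php
// Yii 1.x application config (e.g. protected/config/main.php) -- a sketch,
// assuming a 'db' connection component that points at the shared database.
'components' => array(
    'session' => array(
        'class' => 'CDbHttpSession',
        // The connection component to store sessions in; it must be the
        // same database for every web server behind the load balancer.
        'connectionID' => 'db',
        // Let Yii create the session table automatically on first use.
        'autoCreateSessionTable' => true,
    ),
),
```

With this in place, either server can pick up a session started on the other, because the session data lives in the shared database rather than in local files.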

thanks for the quick reply.

Anyway, while I can more or less understand how a DB can solve this issue, I don't get how memcached can (I've never used it). If the name stands for "memory cached" data, doesn't the problem arise again? How do you share the "memory"? Is there something I'm missing?

The database or memcache server lives on one of the servers; both servers connect to that same database or memcache server.

Memcache is a regular cache, but it speaks TCP as well. So while the data physically lives on only one server, you can still access it from the other.
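To make that concrete, here is a sketch of a Yii 1.x config, identical on both web servers, assuming memcached listens on its default port 11211 on a host both servers can reach (the IP here is made up):

```php
// Yii 1.x config, the same on both web servers (illustrative host IP).
'components' => array(
    'cache' => array(
        'class' => 'CMemCache',
        'servers' => array(
            // One shared memcached instance; both web servers reach it over TCP.
            array('host' => '192.168.0.10', 'port' => 11211),
        ),
    ),
    'session' => array(
        // Store PHP sessions in the cache component above instead of local files.
        'class' => 'CCacheHttpSession',
        'cacheID' => 'cache',
    ),
),
```

One caveat: memcached offers no persistence, so if that box restarts, all sessions stored in it are gone.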

oh I see, I didn't know that. Seems like a nice solution indeed; I'll see what I can do to implement memcache in the application.

Thanks a lot.

No problem. Actually, most big websites handle caching with memcache. For example, you could set up 10 physical memcache servers and add them all to your config:




'components' => array(
   'globalcache' => array(
      'class' => 'CMemCache',
      'servers' => array(
         array('host' => '192.168.200.1'),
         array('host' => '192.168.200.2'),
         array('host' => '192.168.200.3'),
         array('host' => '192.168.200.4'),
         array('host' => '192.168.200.5'),
         array('host' => '192.168.200.6'),
         array('host' => '192.168.200.7'),
         array('host' => '192.168.200.8'),
         array('host' => '192.168.200.9'),
         array('host' => '192.168.200.10'),
      ),
   ),
),



In your application you can then simply access the cache (the client maps each key to one of the servers by hashing the key):

$relatedVideos = Yii::app()->globalcache->get('relatedVideos' . $searchQuery);

I'm no expert at these things, so I don't know what a good way to handle session data across several physical servers is. My guess is that a database with replication is used for that instead of memcache (since session data must be redundant).

Or, a good solution in this case might be to extend CMemCache and modify the set() function so that all servers get the new data, then use that extended class only for session handling; for data caching, use the original CMemCache. I guess that's the way to go. Not sure :D

This might be a useful read for you. Also, just google for "handling sessions on cloud".

Found a good answer here: http://www.yiiframework.com/forum/index.php?/topic/6667-slow-db-session-table-reason-found

Since all the nodes have memcached active, that was a very good thing to switch to.

You need to be careful if you do caching per node.

If you're not using sticky sessions on the load balancer, you risk state being fragmented across nodes. For example, if you cache session state in memcache on each node and use that state to track a particular event, the state is only saved on the node that served the request. The user's next request may go to a different node with stale session state, i.e. one that has no record of the previous event.

Additionally, if it's a large site and you're caching a lot of data, it's probably desirable to have one or more dedicated memcached nodes that all web servers use. If you host memcached on each node of a large site, you can quickly run out of memory on a typical web server with a few gigs of RAM.

You may be able to tell I’ve gone through some pain with this before (I use Amazon EC2)!

Sorry, quick edit: a final thing to add is that if you store session state per node and the node dies, so does the user's session. Obviously the nature of the site and the business (and how much you want to spend) dictate whether that's OK or not.

Hope that helps,

P.

Completely true.

I was a little bit wrong in my previous reply: there's only one memcached server, so I'm using just that one. I'd prefer to stick with it, since the load balancer doesn't seem to maintain sessions as it's supposed to (unfortunately I'm not the one who can configure it directly).

Hi there fellow developers!

I have a question quite similar to the one above regarding sessions.

We have a load balancer between two servers, and we keep the sessions in the DB. This works fine and sessions are handled correctly. The problem is that when CSRF validation is enabled, the CSRF token fails verification whenever the load balancer switches servers.

Do you have any solution for this? Should we save the CSRF token on the file server or in the DB? The CSRF cookie doesn't work at all here.

Thanks!

Hi Kenz,

I think post #4 here may answer your question.
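One more thing worth checking, though this is a guess on my part rather than a confirmed diagnosis: in Yii 1.x the CSRF token is kept in a cookie, and with cookie validation enabled that cookie is signed using the securityManager's validationKey. If the key isn't set explicitly, each server auto-generates its own, so a cookie signed by one server fails validation on the other. Pinning the same key in both servers' configs avoids that:

```php
// Yii 1.x config, identical on both servers. The key below is a placeholder --
// generate your own long random string and keep it out of version control.
'components' => array(
    'securityManager' => array(
        'validationKey' => 'replace-with-a-long-random-string-shared-by-both-servers',
    ),
),
```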

Paul

Instead of sending half the requests to one server and the other half to the other, how about sending half the users to one server and the other half to the other? The load balancer would first check whether the user is logged into one of the servers and, if so, direct them to that one; otherwise, it would direct them to the one with the smallest load.

I don’t have experience in this area, but thought I’d share my thoughts.

You may also note how games like RuneScape do this: before logging in, the user chooses a server to use (I used to play that game a lot when I was younger).

Hi jonah,

Most load balancers will do this anyway using sticky sessions (assuming they're enabled). The first time a user visits a site, the load balancer directs the user to a server, probably the one with the least load at that point in time. Then, for all future interactions with the site, the user is directed back to the same server.

As I said above, the problem is that if a server dies and the session state is on that server, the user's session is lost. As a general rule I try to avoid any kind of server affinity between a user and their session. This page seems to offer a nice, simple overview if anyone is interested.

Whilst I’m at it can anyone help me with my question? :)

Paul.