Monday, February 29, 2016

Redis service bus for API calls

Using Redis as a service bus for API calls means that I can store the
user's context (geo fence) as a time+place context.
Instead of having every user fire the same API calls for the same context, why not geo-fence the API queries and let Redis act as remote storage for the results? That's what I asked myself in my ongoing bachelor's project.

My Android app "Kontekst" (Norwegian for Context) will query data sources like YR and NILU – Norsk institutt for luftforskning.

The user's context should show data relevant to that user, and the user should not have to wait for the same data already delivered to other users in the same context. There is the dimension of place (location), and there is time. Time doesn't stand still even though the user does, and after 10 minutes the data is considered stale when it comes to air quality and the local weather situation.

And there are hundreds of thousands of inhabitants, even in the city of Bergen, Norway! So why not share the contexts and harness the great powers of Redis?
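As a sketch of the idea (the function name, rounding precision and key layout here are my own assumptions, not the actual Kontekst implementation): round the coordinates so that nearby users fall into the same geo-fence cell, and floor the timestamp into 10-minute buckets so stale data simply lives under an old key nobody asks for anymore.

```python
import time

def context_key(lat, lon, t=None, precision=2, bucket_seconds=600):
    """Build a Redis key for a time+place context.

    Rounding the coordinates to two decimals (roughly a 1 km cell)
    geo-fences nearby users onto the same key; flooring the timestamp
    into 10-minute buckets handles the "data is stale after 10 minutes"
    rule without any bookkeeping.
    """
    if t is None:
        t = time.time()
    cell = f"{lat:.{precision}f}:{lon:.{precision}f}"
    bucket = int(t // bucket_seconds)
    return f"ctx:{cell}:{bucket}"

# Two users a few hundred metres apart in Bergen, inside the same
# 10-minute window, end up asking Redis for the same cached result:
a = context_key(60.391, 5.322, t=1456700000)
b = context_key(60.393, 5.324, t=1456700100)
print(a == b)  # True
```

With a key like this, the app can GET the context from Redis first and only hit YR or NILU on a miss, storing the result with a 600-second expiry so Redis cleans up old contexts by itself.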

Sunday, February 7, 2016

Redis, put your socks on!

Instead of handling connections over TCP, one of the easiest Redis performance enhancements is to get Unix domain sockets up and running.

On Ubuntu Server 14.04 you can edit the settings like so:
sudo su
vi /etc/redis/redis.conf

Find the socket by searching in vim:
:/unixsocket <enter>

When you find the lines for the Redis socket, just uncomment them, so they become:
unixsocket /var/run/redis/redis.sock
unixsocketperm 755

Then write and quit vim.
:wq

Then you need to restart the Redis server, like so:
service redis-server restart

You can of course try to force a reload instead of doing a restart, but I find Redis often needs a restart where NGINX is usually fine with a reload.
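To check that the socket is live, you can ping Redis over it with redis-cli (adjust the path if you changed it in redis.conf):

```shell
redis-cli -s /var/run/redis/redis.sock ping
# PONG
```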

So, what are the gains my dear sir?
I did some benchmarks on my $5 VPS.

First, look at Redis without sockets (plain TCP):
root@xxxx:/var/www/kontekst# redis-benchmark -q -n 10000
PING_INLINE: 57471.27 requests per second
PING_BULK: 54347.82 requests per second
SET: 24038.46 requests per second
GET: 25974.03 requests per second
INCR: 26315.79 requests per second
LPUSH: 25641.03 requests per second
LPOP: 40650.41 requests per second
SADD: 65789.48 requests per second
SPOP: 44247.79 requests per second
LPUSH (needed to benchmark LRANGE): 43290.04 requests per second
LRANGE_100 (first 100 elements): 24570.02 requests per second
LRANGE_300 (first 300 elements): 11185.68 requests per second
LRANGE_500 (first 450 elements): 6997.90 requests per second
LRANGE_600 (first 600 elements): 6930.01 requests per second
MSET (10 keys): 37735.85 requests per second

Then the same run over the Unix socket, for less overhead:

(Remember to change the path to your socket location!)
root@xxxx:/var/www/kontekst# redis-benchmark -q -n 10000 -s /.../.../.../redis.sock

PING_INLINE: 108695.65 requests per second
PING_BULK: 112359.55 requests per second
SET: 109890.11 requests per second
GET: 111111.11 requests per second
INCR: 90090.09 requests per second
LPUSH: 128205.12 requests per second
LPOP: 129870.13 requests per second
SADD: 67114.09 requests per second
SPOP: 61728.39 requests per second
LPUSH (needed to benchmark LRANGE): 60606.06 requests per second
LRANGE_100 (first 100 elements): 23752.97 requests per second
LRANGE_300 (first 300 elements): 9149.13 requests per second
LRANGE_500 (first 450 elements): 8361.20 requests per second
LRANGE_600 (first 600 elements): 5896.23 requests per second
MSET (10 keys): 56497.18 requests per second

So for MSET, the socket connection gives 56497.18 - 37735.85 = 18761.33 more requests per second!
This is completely free and takes two lines of configuration... so why not do it? Even if your Redis handles its I/O fine today, you will of course put less strain on CPU and memory too.

$5 VPS rocks the boat!

I'm fairly impressed by the $5 VPS from Digital Ocean (DO), which gives you real SSD storage, 512 MB RAM, 1 core and 1 TB of outbound traffic. The fact that I got $10 for free after signing up is also nice.

Digital Ocean has a lot of guides about everything you can imagine and they have a place where the community helps out. I even figured out they pay people to write guides!

You can get a lot done on Linux even with half a gigabyte of RAM. In a couple of days I've set up Ubuntu Server 14.04 (LTS), NGINX, PHP7, MySQL, Redis and grade A+ HTTPS using Let's Encrypt (letsencrypt.org).

A+ with Let's Encrypt :-)
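For reference, the Let's Encrypt client (now called Certbot) can fetch and install a certificate for an NGINX vhost with something like the following (the domain is a placeholder, and this assumes the certbot package with NGINX support is installed):

```shell
sudo certbot --nginx -d example.com
```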

Redis caching WordPress objects
Redis caches all objects from my WordPress instance, and it also caches my API calls to third parties (though with expiration). NGINX handles the static file cache as well as gzip compression etc.
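The read-through pattern behind the API caching is small. A sketch (the helper name is mine, and the client is assumed to have redis-py-style get/setex methods):

```python
import json

def cached_api_call(client, key, fetch, ttl=600):
    """Return the cached JSON value for `key`; on a miss, call
    `fetch()`, cache the result for `ttl` seconds, and return it."""
    hit = client.get(key)
    if hit is not None:
        return json.loads(hit)
    data = fetch()
    # The expiration keeps third-party data from going stale in the cache.
    client.setex(key, ttl, json.dumps(data))
    return data
```

With redis-py, the client would be `redis.Redis(unix_socket_path='/var/run/redis/redis.sock')`, so these cached reads also get the socket speedup from the previous post.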

Compared to my previous web host, I can now do "what I want" (within reason). Say I want a failover cluster: I just create a new "droplet" (virtual server) and do some configuration. I moved my domain to their DNS, so I have full control in one place.

NGINX is a great server, but I would also like to try this setup:
NGINX (reverse proxy) > OpenLiteSpeed > Redis

OR:
NGINX (reverse proxy) > Apache > Redis
In the latter setup I would want to use mod_pagespeed in Apache, since there it is pluggable. In NGINX you have to compile it yourself, which means extra work every time you upgrade NGINX and have to rebuild the module and its dependencies.

I'm running NGINX with HTTP/2, compression, cache etc. It flies! It's so much faster than regular hosting deals. Of course it's more work to get it up and running than just uploading files to a shared webhost.

If you want to try Digital Ocean today, please use my referral link :-)

>> Digital Ocean (DO) <<

Also remember to find the region with the lowest ping:
http://speedtest-sfo1.digitalocean.com/

For me, AMS2 was fastest and had the lowest ping. Latency is one of the most important factors for internet services: a high ping will make a page feel slow even though it's very light and optimized to fly.