Wednesday, July 19, 2017

 

Postgresql Streaming Replication recipe


This is a quick recipe for configuring PostgreSql 9.x Streaming Replication between a Master and a Slave server, based on https://www.howtoforge.com/tutorial/postgresql-replication-on-ubuntu-15-04/

Master: IP 192.168.1.249

Everything should be done as the postgres user.
su - postgres
psql
CREATE USER replica REPLICATION LOGIN ENCRYPTED PASSWORD 'replicauser@';
# ctrl + d to exit
cd /etc/postgresql/9.4/main
vim postgresql.conf
listen_addresses = 'localhost,192.168.1.249'   # the master listens on its own address
wal_level = hot_standby
checkpoint_segments = 8
archive_mode = on
archive_command = 'cp -i %p /var/lib/postgresql/9.4/main/archive/%f'
max_wal_senders = 3
wal_keep_segments = 8
# save and close
mkdir -p /var/lib/postgresql/9.4/main/archive/
vim pg_hba.conf
host    replication     replica      192.168.1.248/32            md5
# 192.168.1.248 is the slave server's IP address (use a /24 mask if you want to allow the whole subnet)
# save and close
sudo service postgresql restart
Master server done.
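
Optionally, before moving on to the slave, a quick sanity check (my own suggestion, not part of the original tutorial) is to confirm inside psql that the replica role really has the REPLICATION attribute:

-- run inside psql on the master
SELECT rolname, rolreplication
FROM pg_roles
WHERE rolname = 'replica';
-- rolreplication should show t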

Slave: IP: 192.168.1.248

Again, everything as postgres user.
su - postgres
cd /etc/postgresql/9.4/main/
vim postgresql.conf
listen_addresses = 'localhost,192.168.1.248'   # the slave listens on its own address
wal_level = hot_standby
checkpoint_segments = 8
max_wal_senders = 3
wal_keep_segments = 8
hot_standby = on
# save and close
sudo service postgresql stop
# pg_basebackup needs an empty target directory, so move the old data directory out of the way first
mv /var/lib/postgresql/9.4/main /var/lib/postgresql/9.4/main_old
pg_basebackup -h 192.168.1.249 -D /var/lib/postgresql/9.4/main -U replica -v -P
cd /var/lib/postgresql/9.4/main/
vim recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=192.168.1.249 port=5432 user=replica password=replicauser@'
restore_command = 'cp /var/lib/postgresql/9.4/main/archive/%f %p'
trigger_file = '/tmp/postgresql.trigger.5432'
# save and close
sudo service postgresql start
Done!
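
To verify that replication is actually streaming (again my own check, not part of the original tutorial), run these inside psql on each server:

-- on the master: the slave should show up here, with state 'streaming'
SELECT client_addr, state, sync_state FROM pg_stat_replication;

-- on the slave: should return t while it is running as a standby
SELECT pg_is_in_recovery();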

Sunday, March 27, 2016

 

PostgreSql tablespaces


This is my first post after more than two years; I can’t believe how much time has passed since my last article. In this period I was still programming with FPC/Lazarus and Delphi (only from time to time, fixing bugs and adding features to old apps), doing some work in C++ (using the DCMTK dicom library) and, of course, JavaScript/CSS/HTML5/AngularJs. The common denominator of all this work has been the wonderful, marvellous, spectacular PostgreSql database. Yes, it’s awesome; I agree with those who call it a developer-friendly database, it really is.
Now I’ve decided to start writing about solutions to not-so-common problems involving PostgreSql, but don’t worry, I’ll still write about FPC ;).

Reaching free disk space limit


Imagine this situation: you installed PostgreSql on an Ubuntu server using sudo apt-get install postgresql and populated your database, and after some time you find it is growing more than you initially expected. The solution is easy: add more disk space… yes, but how you can add it depends on the underlying hardware and operating system. The easiest option is to add an extra hard disk with more capacity and move the PGDATA directory there, but sometimes you can’t do that.
Let’s continue with an example. The server has one 1Tb disk with two partitions: the first is 800Gb and the second is 200Gb. You install PostgreSql on the first partition using all the defaults, so your PGDATA directory is /var/lib/postgresql/9.x/main. Everything is fine, until you find the first partition is running out of space.

Tablespaces to the rescue


According to the PostgreSql manual, Tablespaces allow database administrators to define locations in the file system where the files representing database objects can be stored. Once created, a tablespace can be referred to by name when creating database objects.
So, one solution to our problem is to move one or more tables from one partition to the other. To do that, first determine the current on-disk size of your biggest tables, using this query:

SELECT nspname || '.' || relname AS "relation",
  pg_size_pretty(pg_relation_size(C.oid)) AS "size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_relation_size(C.oid) DESC
LIMIT 20;
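
Note that pg_relation_size counts only the main table files; if you want indexes and TOAST data included too (usually a more realistic picture of what a move would free), a variant using pg_total_relation_size and restricted to ordinary tables could look like this:

SELECT nspname || '.' || relname AS "relation",
  pg_size_pretty(pg_total_relation_size(C.oid)) AS "total_size"
FROM pg_class C
LEFT JOIN pg_namespace N ON (N.oid = C.relnamespace)
WHERE nspname NOT IN ('pg_catalog', 'information_schema')
  AND C.relkind = 'r'
ORDER BY pg_total_relation_size(C.oid) DESC
LIMIT 20;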

For example, imagine we decide to move a 100Gb table called “documents” to a new tablespace on the partition mounted on /mnt/partition2. The process is very easy:
  1. Create a directory in /mnt/partition2 where our data will reside:

    sudo mkdir /mnt/partition2/pgdata

  2. Assign postgres owner:

    sudo chown postgres:postgres /mnt/partition2/pgdata

  3. Create the new tablespace (from now on inside psql or PGAdminIII):

    create tablespace part2 location '/mnt/partition2/pgdata';

  4. Change the tablespace on documents table (this can take a while):

    alter table documents set tablespace part2;

That’s it, the documents table is now stored in a different location and our main partition has 100Gb more free space!
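One caveat: ALTER TABLE ... SET TABLESPACE moves the table itself, but its indexes stay in their old tablespace, so if you want them on the new partition too you have to move them explicitly (the index name below is just an example):

  -- move an index explicitly; documents_pkey is a hypothetical name
  alter index documents_pkey set tablespace part2;

  -- check which tablespace each relation now lives in (empty means the default one)
  select c.relname, t.spcname
  from pg_class c
  left join pg_tablespace t on t.oid = c.reltablespace
  where c.relname like 'documents%';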
As you can imagine, tablespaces can be used in a lot more situations than this. For example, imagine you have two servers, one with screaming-fast SSD storage and another with slower but bigger SATA storage: you can keep the most recent, most needed data on the fast one and older data on the slow one. In my next article I’ll show you how.



Thursday, March 06, 2014

 

Improving Performance with Zeos and PostgreSql


Last month I was asked whether one of our client-server applications could run from a remote site over the Internet. Sure, why not? Just open a port on the router, point the application's connection to the external address, and done.

A couple of days later, the same customer called again, telling me that the application was working, but slow as a turtle, even on a 2Mb link. Ok, I closed Hacker News and started thinking about how to debug this.

Logging PostgreSql queries


The first thing I did was get a backup of that customer's database (well, a very old one, because now it's 0.5Tb) and restore it on my PostgreSql 9.2 instance. I did this because, as I wanted to log everything, I needed just one client connected; otherwise I'd get mixed queries from different PCs.

To log everything just open your postgresql.conf and set this:

logging_collector = on 
log_min_duration_statement = 0 
log_statement = all

Save and restart the postgres service; on Debian-based systems just do "sudo service postgresql restart". Then start using the application and go to the pg_log directory, by default /var/lib/postgresql/9.2/main/pg_log, and you'll see postgresql-YYYY-MM-DD_######.log files.
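
If you want to double-check that the new settings are active after the restart, they can be read back from pg_settings in any psql session (just a convenience check):

-- confirm the logging-related settings are in effect
SELECT name, setting
FROM pg_settings
WHERE name IN ('logging_collector', 'log_statement', 'log_min_duration_statement');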

Ok, once logging_collector was set up, I opened my application, went to one option that just lists data (a grid), double-clicked on a row to open a dialog containing further data, then closed the application. I thought just a couple of queries would be enough for this task, but I was way off.

Let me tell you something about this app. It's an old application started by a small team seven years ago. Initially the team was very focused on design patterns (remember the time when Java was everywhere? We used Delphi, but the Java way). After those initial years we kept adding features, the team changed over time, and the quality control disappeared (after the experience I'm writing about here, things will change).

Going back to the main topic, what I found in the log files was that the app was doing not two queries, but forty, yes 40! And that wasn't the only strange thing I found in the logs: there were queries we didn't write, and some warnings I had to get rid of (I'll explain this at the end of this article).

The first step of the improvement process was to remove the unwanted queries, which looked like this:

SELECT n.nspname, c.relname, a.attname, a.atttypid, a.attnotnull, a.atttypmod,
       a.attlen, a.attnum, pg_get_expr(def.adbin, def.adrelid) AS adsrc,
       dsc.description
FROM pg_catalog.pg_namespace n
JOIN pg_catalog.pg_class c ON (c.relnamespace = n.oid)
JOIN pg_catalog.pg_attribute a ON (a.attrelid = c.oid)
LEFT JOIN pg_catalog.pg_attrdef def ON (a.attrelid = def.adrelid AND a.attnum = def.adnum)
LEFT JOIN pg_catalog.pg_description dsc ON (c.oid = dsc.objoid AND a.attnum = dsc.objsubid)
LEFT JOIN pg_catalog.pg_class dc ON (dc.oid = dsc.classoid AND dc.relname = 'pg_class')
LEFT JOIN pg_catalog.pg_namespace dn ON (dc.relnamespace = dn.oid AND dn.nspname = 'pg_catalog')
WHERE a.attnum > 0 AND NOT a.attisdropped
  AND c.relname LIKE E'sysconfig' AND a.attname LIKE E'%'
ORDER BY nspname, relname, attnum


I asked in the Zeos Forum and was told TZConnection has a UseMetadata property set to True by default, so setting it to False fixed the issue. Now the query count was about 30, roughly 25% less.

The next step was to look for some repeating patterns in the log file, and I found many queries similar to this:

select varvalue from config where varname='REFRESH_TIME';
select varvalue from config where varname='MAX_DISPLAY_ROWS';
...


As you can see, we store configuration values in the "config" table and query them on demand. This sounds ok, but is it a smart way of doing it? Do config params change in real time? Surely not, so there must be a better way.

Luckily all the configuration was encapsulated in a class, with properties like TConfig.VarValue, so all the changes to be made were there, inside this class.

The solution to this problem was to maintain a cache of variables: we added an internal TCollection that is loaded with all the config when the application starts, plus a LastAccessTime property containing the last time a variable was accessed; if more than 10 minutes have passed, we reload the collection.
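
In practice this means the cache can be primed, and later refreshed, with a single round trip instead of one query per variable; assuming the config table has only the varname/varvalue pair shown above, the priming query is as simple as:

select varname, varvalue from config order by varname;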

This single change removed more than 15 queries; now there were only 15 queries left to improve.

Our application has a very neat feature: per-grid column configuration, which allows an administrator to configure which columns (and their titles, widths, etc.) must be displayed. This information was loaded every time the grid was refreshed: once when the window is opened, then every 30 seconds. Usually each screen has one main grid and two or three subgrids, each with this functionality, and all of them are refreshed at the same time. We removed the re-loading at refresh time, eliminating three queries (for this screen only); now there were 12 queries left.

Some time ago we implemented "pagination" in our grids to limit the number of records retrieved from the database; as some queries loaded more than 10k records, this was a must-have for us. The way we implemented it was this:

1º Having a query, for example "select * from customers", first we needed the total count:

   select count(*) from
      (select * from customers) as foo;


   This allowed us to get the count of any query, very neat at the time.

2º Execute the real query with a Limit and Offset:

   select * from customers limit 100 offset 101;

This way, we used two queries to display "XXXX records of YYYYY". It was a big improvement over what we had initially, but it forced us to run two queries for every grid.

Since PostgreSql 8.4, a very neat feature called Window Functions is available, allowing tasks that were impossible in former versions; one of those tasks is getting the total record count even when a "limit" is imposed, for example:

   select
      count(*) over() as Total_Count,
      c.*
   from customers c
   limit 100 offset 101;


After removing the old behavior, we dropped the count query on each grid; on the screen we used for logging there are three grids, hence 3 queries were removed. Now only 9 queries were made.

The last step didn't remove any query, but allowed us to speed up those with Blob fields, and let us get rid of some nasty warnings.

Looking at the log files, we found many Warnings similar to this:

2014-02-25 09:53:57 ART WARNING:  nonstandard use of \\ in a string literal at character 42
2014-02-25 09:53:57 ART HINT:  Use the escape string syntax for backslashes, e.g., E'\\'.
2014-02-25 09:53:57 ART STATEMENT:  update ....'


This tells us everything was being sent as backslash-escaped string literals; since our blob fields were encoded that way, they were full of backslashes. If we get rid of this encoding, the amount of data sent over the wire will be significantly smaller.

Again, asking in the Zeos Forum, I was told what to do: just add doPreferPrepared to the TZQuery objects and done! This only works if you use parametrized queries; luckily we use them in all of our queries.

Final words


The optimization process didn't finish here, but, so as not to force the customer to wait any longer, we branched the app and delivered this quicker version. I must admit, we are very happy with the results. Surely we could have created a web version, but as our customer's budget is thin and they needed to use the application within no more than two weeks, I think we did the right thing.


