Hi,
Using a single transaction solves the problem - but that's not what I really want.
On Friday, September 21, 2012 03:47:56 PM Jagdish Motwani wrote:
Recently I upgraded my kernel from 2.6.29.6 to 2.6.35.14.
After upgrading I got very poor performance on my Postgres database.
My test.sql contains 10,000 Postgres INSERT queries.
Linux 2.6.29.6
time psql -U user -d database -f test.sql > /dev/null
real 0m 7.23s
user 0m 0.38s
sys 0m 0.12s
Linux 2.6.35.14
# time psql -U user -d database -f test.sql > /dev/null
real 1m 4.05s
user 0m 0.44s
sys 0m 0.12s
How do the results look if you use psql -1/--single-transaction?
Thanks Andres,
I even tried Linux 3.5.4 and got similar results.
Using git bisect, I got commit ab0a9735e06914ce4d2a94ffa41497dbc142fe7f.
Is it a behavior change or am I missing something? Are there any
workarounds for this?
I guess you're using some form of virtualization? I think what you're observing
is just that access via raw devices previously lied about consistency. As the
commit observes, several virtualization solutions can use raw device access.
If all those 10,000 inserts above happen in individual transactions - which
is what happens if you're not using transactions explicitly - each and every one
of them will cause a single disk write if they are executed sequentially.
A typical rotating disk can do between 80 and 160 such writes per second. If you
divide 10k transactions by 150 synchronous writes a second you get ~66s, which
pretty nicely fits your time above.
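That is also why psql -1/--single-transaction helps: it wraps the whole file in one transaction, so only a single synchronous commit happens at the end instead of 10,000. The same thing can be done inside the SQL file itself - a minimal sketch, using a hypothetical table t (the real test.sql presumably targets your own schema):

```sql
-- All inserts share one transaction, so only COMMIT forces a disk flush.
BEGIN;
INSERT INTO t (v) VALUES (1);
INSERT INTO t (v) VALUES (2);
-- ... the remaining inserts ...
COMMIT;  -- one synchronous disk write for the whole batch
```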
If you don't care about losing a very small amount of transactions (up to a
second's worth with the default settings), you can disable the synchronous_commit
setting in Postgres. No earlier commits/changes will be lost/corrupted.
You can change that setting per transaction, per session/connection, per user,
per database, or globally.
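For example, each of those scopes looks like this (sketches; someuser and somedb are placeholder names):

```sql
SET LOCAL synchronous_commit = off;                  -- current transaction only
SET synchronous_commit = off;                        -- current session/connection
ALTER ROLE someuser SET synchronous_commit = off;    -- per user
ALTER DATABASE somedb SET synchronous_commit = off;  -- per database
-- Globally: set "synchronous_commit = off" in postgresql.conf and reload.
```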
Greetings,
Andres