Really slow backup times: pg_dump

I am backing up my database as I want to migrate data from a develop branch into my main branch. I have about 80 GB of data and it is taking >24 hours with autoscaling from 2 to 4 CPUs. This seems long… Is it connected to an IO bottleneck with Neon?

I am using pg_dump to do so.

I would like to be able to back up my database to a .sql file so I can own the restoration process. I would also like an easy way of transitioning from dev to prod data. Is there a more “Neon” approach?
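For concreteness, a sketch of the kind of command I mean (the `$NEON_CONNECTION_STRING` placeholder and file names are illustrative, not my exact invocation):

```bash
# Plain-format dump to a single .sql file (this format is single-threaded)
pg_dump "$NEON_CONNECTION_STRING" \
  --format=plain \
  --file=backup.sql

# Directory format allows parallel jobs -- would this be the recommended
# way to speed things up against a Neon endpoint?
pg_dump "$NEON_CONNECTION_STRING" \
  --format=directory \
  --jobs=4 \
  --file=backup_dir
```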


Hi,

It definitely shouldn’t take that long. I just tried on a fixed-size 1 CPU endpoint, and 30 GB dumped in 3 minutes. First of all, how far is the machine running pg_dump from the endpoint’s region? Have you tried multiple times, and does it repeat? Do you have an idea of how loaded the endpoint is during the operation?
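(For a rough latency check from the machine running pg_dump, something like this is enough; `$NEON_CONNECTION_STRING` is a placeholder for your endpoint’s connection string:)

```bash
# Times one connection plus a trivial query round trip, including TLS setup,
# from the client machine to the endpoint
time psql "$NEON_CONNECTION_STRING" -c "SELECT 1;"
```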

If that’s not enough, please share the endpoint name (you can file a support request) so we can check the metrics.

– arseny

Is there a more “Neon” approach?

Well, if the dev data is in a branch, you can just declare that branch as ‘primary’:
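For example, with the Neon CLI (a sketch, assuming `neonctl` is installed and authenticated; the branch name and project ID are placeholders, and on newer CLI versions the subcommand is `set-default` rather than `set-primary`):

```bash
# Promote the dev branch to be the project's primary branch
neonctl branches set-primary develop --project-id <your-project-id>
```

The same switch can also be made from the Neon console.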

The endpoint name would still need to be updated, though, I suppose.

The endpoint isn’t loaded at all. I am in Europe so not too far away…

Happy to share the endpoint. It is taking longer than 24 hours to back up my database, and it crashes at the end.

I can’t delete the root main branch, though, so I can’t replace/rename it.