Fine, but I really do not think the problem is at my end. The database held TLE (two-line element) datasets from https://www.celestrak.com/NORAD/elements/, converted to CSV files; these are public, time-indexed records of orbital position measurements in a compact format. That came to around half a gigabyte, stored in a few thousand tables, one table per trackable object.

Then I cycled through every day of the last few years, and for each day I defined a view that extracted that day's data from the whole dataset and queried the view as a test. This is where something went wrong. Yes, it is a lot of views - one per day for several years, each involving a search of every table in the database. Still, that should not have caused this much downtime, and it certainly should not have created 3GB of extra data: even if one treats each view as a temp table, every row belongs to exactly one day, so the daily extracts together can total at most the size of the database (and in reality far less, because my script did not run to the end).
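
To make the shape of the loop concrete, here is a minimal sketch of what the script did. It is a reconstruction, not the actual script: the file name, the 'epoch' column, and the date range are all hypothetical, and I use Python's sqlite3 module only to keep the sketch self-contained.

import sqlite3
from datetime import date, timedelta

conn = sqlite3.connect("tle.db")  # hypothetical file name

# One table per trackable object; list them all once up front.
tables = [name for (name,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]

day, end = date(2010, 1, 1), date(2013, 1, 1)  # hypothetical range
while day < end:
    # One view per day, searching every table in the database.
    # 'epoch' is a hypothetical column holding the measurement time.
    union = " UNION ALL ".join(
        f"SELECT * FROM \"{t}\" WHERE date(epoch) = '{day}'"
        for t in tables)
    view = f"day_{day.strftime('%Y_%m_%d')}"
    conn.execute(f"CREATE VIEW {view} AS {union}")
    conn.execute(f"SELECT count(*) FROM {view}").fetchone()  # the test query
    day += timedelta(days=1)

conn.commit()
conn.close()

The views are never dropped, so they accumulate, one per day over the whole range - that is the "lot of views" described above.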