If you have ever had to set up and maintain test environments for your production app, you know what a pain in the neck it can be. Manual DB imports are heavy, and automated ones are easily prone to security issues. In this talk I will describe the approach we have been using to keep large databases automatically updated on dynamic test environments for testing web apps.

A test database that is decoupled from the production one (essentially a clone of it) can easily become outdated, and continued testing against stale data can make your development process harder. With the simple approach we designed, fresh databases are imported every night, and a UI lets us choose which database the app uses for testing. I will explain how we used different tools to split, process, purge, seed, and prepare large DB dumps for import overnight, completely automatically, so we never have to worry about them.
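As a rough illustration of the "purge" step mentioned above, here is a minimal Python sketch (the regex, the placeholder value, and the function names are my own assumptions, not the speaker's actual rules) that masks e-mail addresses while streaming a SQL dump line by line, so multi-GB files never need to fit in memory:

```python
import re

# Hypothetical anonymization rule: replace every e-mail-looking literal
# with a fixed placeholder so no real addresses reach the test database.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize_line(line: str) -> str:
    """Mask e-mail addresses in one line of a SQL dump."""
    return EMAIL_RE.sub("user@example.com", line)

def anonymize_dump(src_path: str, dst_path: str) -> None:
    """Stream the dump line by line; works for dumps far larger than RAM."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            dst.write(anonymize_line(line))
```

A real pipeline would chain several such passes (purging, seeding, splitting) before the nightly import, but the streaming pattern stays the same.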

Comments


Lovro at 16:59 on 18 Mar 2017

Great insight into the problem domain and one of the possible solutions. I liked the step-by-step approach the lecturer took, building the solution over a few iterations. That turned out to be very useful for a general understanding of the talk.

Outstanding, honest delivery and a good discussion afterwards made this talk A+.

Neven M at 21:22 on 20 Mar 2017

This talk demonstrated a great approach to solving a DB-related problem. The Q&A session afterwards offered some valuable alternative solutions to the same problem. The lecturer presented the problem and his custom solution in a clear fashion.

Tomo Ε ala at 21:04 on 22 Mar 2017

Zoran gave a great talk, outlining the process he used to solve the problem he was facing - manipulating large data sets and using them to seed the test environment for the service his team was building.

Clearly describing the background and giving the audience context, the presenter set the scene for a more detailed walk-through of the processes he and his team had to implement to overcome obstacles that were no laughing matter - from manipulating DB dumps several GBs in size, originating from DBs they had no control over, and processing them to preserve and protect sensitive private data, to handling and dramatically optimizing the import of dump files into the testing environment, cutting the time required from over 24 hours down to just a few.

The talk inspired a great discussion afterwards, with all of the participants offering their own views on the matter and sharing invaluable advice, drawn from their own similar experiences, on how best to solve problems at the scale of the one Zoran presented.