Drupal today is not simply a CMS; it is a large and sophisticated framework that stands behind many enterprise-level websites.

Drupal is used successfully for government, higher education, and healthcare websites that store large volumes of data. Often these organizations need their data updated overnight, monthly, or annually. The job can be as simple as stock information updates performed once an hour, or as large as a refresh of every program and course offered by a college or university, including all program details and costs.

Large-scale imports bring their own challenges: processing time and hosting resources, parsing algorithms, and varied source formats such as XML, JSON, and CSV. In my talk I'll cover problems a developer may face while building imports of data from external sources into Drupal, as well as approaches that can make the task easier.
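As a taste of one such approach, here is a minimal sketch of streaming a large XML feed record by record rather than loading the whole document into memory; the feed URL and the <program> element structure are hypothetical:

```php
<?php

// Minimal sketch: stream a large XML feed with XMLReader instead of
// loading the entire document via simplexml_load_file(). The URL and
// the <program>/<title>/<cost> structure are assumptions.
$reader = new \XMLReader();
$reader->open('https://example.com/feeds/programs.xml');

while ($reader->read()) {
  if ($reader->nodeType === \XMLReader::ELEMENT && $reader->localName === 'program') {
    // Expand only the current record into a SimpleXML node.
    $program = new \SimpleXMLElement($reader->readOuterXml());
    $record = [
      'title' => (string) $program->title,
      'cost' => (string) $program->cost,
    ];
    // Hand the record off for import (e.g., queue it — see below).
  }
}
$reader->close();
```

Streaming keeps memory usage flat regardless of feed size, which matters when a source file runs to hundreds of megabytes.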

The audience will learn how to build imports on a solid architecture, so they run into fewer performance problems and are not limited by cron job time. I will also cover the Batch and Queue APIs in Drupal 8 and touch on continuous integration tools such as Jenkins.
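By way of illustration, a minimal sketch of the Drupal 8 Queue API pattern in question: a QueueWorker plugin that processes one record per queue item, a little at a time on each cron run, so the import as a whole is never bound by a single cron run's time limit. The module name my_import, the queue ID, and the field names are assumptions:

```php
<?php

namespace Drupal\my_import\Plugin\QueueWorker;

use Drupal\Core\Queue\QueueWorkerBase;
use Drupal\node\Entity\Node;

/**
 * Processes one imported record per queue item during cron runs.
 *
 * @QueueWorker(
 *   id = "my_import_program",
 *   title = @Translation("Program import worker"),
 *   cron = {"time" = 30}
 * )
 */
class ProgramImportWorker extends QueueWorkerBase {

  /**
   * {@inheritdoc}
   */
  public function processItem($data) {
    // $data is one parsed record, e.g. ['title' => ..., 'cost' => ...].
    $node = Node::create([
      'type' => 'program',
      'title' => $data['title'],
      'field_cost' => $data['cost'],
    ]);
    $node->save();
  }

}
```

Parsed records would be enqueued elsewhere (for example, from the streaming parser above) with \Drupal::queue('my_import_program')->createItem($record);.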

The talk is for intermediate- to advanced-level back-end developers.

Attendees should know basic module development concepts and understand cron jobs, as well as some basic concepts of the Batch API and Queue API.
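For readers brushing up on that last prerequisite, a minimal sketch of the Batch API side, which spreads an import across successive requests with progress feedback instead of one long page load; the callback names, the loader function, and the chunk size are assumptions:

```php
<?php

/**
 * Builds and starts a batch that imports records in chunks of 50.
 * my_import_load_records() is a hypothetical parser returning an array.
 */
function my_import_run_batch() {
  $operations = [];
  foreach (array_chunk(my_import_load_records(), 50) as $chunk) {
    $operations[] = ['my_import_process_chunk', [$chunk]];
  }
  batch_set([
    'title' => t('Importing records'),
    'operations' => $operations,
    'finished' => 'my_import_batch_finished',
  ]);
  // When triggered outside a form submit handler, follow with batch_process().
}

/**
 * Batch operation callback: imports one chunk of records.
 */
function my_import_process_chunk(array $chunk, array &$context) {
  foreach ($chunk as $record) {
    // Create or update the corresponding Drupal entity here.
  }
  $context['results'][] = count($chunk);
}
```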



The concepts and infrastructure recommendations were spot-on based on my own experience. I think the presentation could better showcase the data and metadata transitions with a flowchart comparing the Feeds module against the custom methods outlined.