Revisited: The Webstack in 2015



This is very familiar territory to me, so it was great to find that I was very much on the same wavelength as Arne. I was concerned that some might feel out of their depth in a systems area, but there were some great follow-up questions that suggested that was not the case.

Good talk, good (experienced) speaker. Worth a 5-star rating.

I would have appreciated more advanced details in this talk; it was a bit superficial. But for novice users, this was probably a great introduction to scalable PHP stacks.

Anonymous at 17:16 on 28 Jun 2015

Great talk. Rather short. Information slightly dated and for beginners.

Please extend the talk to explore current 2015 technologies such as asynchronous PHP (which handles more connections), multi-location data centers (active-active and active-passive), and shared file servers. I would also like to hear more about how the "Alexa Top 500" stacks handle caching, CDNs, and server outages.

Great talk as an intro. I would have liked to go more in depth into the nitty-gritty and 'stories from the trenches'-type stuff. In any case, a great speaker (he has clearly done this before) and, again, a great talk!

Very informative. It gave me some great ideas for setting up a test lab in a VMware LAN Segment to try out, and maybe build on, the suggested setup. I'm halfway through and it looks like a very good setup: little to no changes to the default config files of Nginx, PHP5-FPM, Redis, or MySQL, and it gives you a very scalable and redundant stack. The only thing I would like to see is a concrete solution for shared storage where all the PHP servers could read and write files.

This talk is really for beginners who have not looked much at servers. I was expecting more detail, maybe some numbers or other analysis, to give the talk more "meat". The "2015" in the title made me think it would be a bit more bleeding edge; the technologies were well explained but have all been in use for a couple of years. "A modern web stack" would fit better as a title.