Once an API ships, it doesn't matter how it should behave; how it actually behaves is what counts. Users depend on the existing behavior, and we need a way to ensure it doesn't change. Behat is a tool that was built to help design software, but it's also a great tool for capturing existing behavior. We've used it to gain the confidence to refactor apps more than five years old by capturing their existing behavior before making changes. I want to share the secrets we learned with you.
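To make the idea concrete, here is a minimal sketch of what a characterization scenario might look like in Behat's Gherkin syntax. The endpoint, status code, and message are hypothetical placeholders, and the `When`/`Then` steps assume a custom context class (or an extension such as Behat's WebApiExtension) that implements them; this is not an example from the talk itself.

```gherkin
# Characterization scenario: it records what the API does today,
# not what it should do. The endpoint and values are hypothetical.
Feature: Existing user lookup behavior
  In order to refactor safely
  As a maintainer
  I need the current behavior of the user endpoint pinned down

  Scenario: Fetching a user that does not exist
    When I send a GET request to "/users/9999"
    Then the response status code should be 404
    And the response should contain "User not found"
```

The point of a test like this is that it can be deleted once the refactoring is done; it exists to freeze today's behavior, warts and all, not to document the ideal design.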



Philip Sharp at 11:08 on 16 Nov 2017

Clear presentation of the problem and solution. He took the time to explain additional tests that can be written for legacy code, as well as progressively more detailed Behat testing. Excellent delivery.

Bobby Pearson at 13:58 on 16 Nov 2017

Took his time, explained the challenges of legacy code and both the need for characterization tests and the methodology. His examples were clear and relevant without being too dumbed-down. He answered my questions afterwards.

Still a 5/5, but the acoustics in the main hall are quite bad and speakers need to adjust accordingly: slower, louder, and project! This talk should have been in a smaller room.

The audio was bad, and that was not Michael's fault, but I still felt he spoke too quietly. The presentation and advice were good.

I really enjoyed this talk: it was good to see how to tackle a problem by first defining the surrounding behavior, which gives you safety before you start refactoring. It was also nice to see how these tests can be a temporary measure that doesn't need to live forever, only until you can get from point A to point B.

Sandy Smith at 14:18 on 20 Nov 2017

Excellent structure to the talk.