Once an API ships, it doesn't matter how it should behave - how it actually behaves is the important part. Users depend on the existing behaviour, and we need a way to ensure that it doesn't change.

Behat is a tool that was built to help design software, but it’s actually a great tool for capturing existing behaviour too.

We’ve used these tools to gain the confidence to refactor 5+ year-old apps by capturing their existing behaviour before making changes. I want to share the secrets we learned with you.
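As a rough sketch (not taken from the talk itself), a characterisation test for an existing API could be expressed as a Behat scenario like the one below. The endpoint, the snapshot file and the step wording are all hypothetical, and the steps would need to be backed by definitions in your own FeatureContext (or a community extension):

Feature: Existing order API behaviour
  Pin down how the API responds today, not how we think it should respond.

  Scenario: Fetching an existing order
    # Hypothetical endpoint; request and assertion steps are assumed to live in a custom FeatureContext
    When I send a GET request to "/orders/42"
    Then the response status code should be 200
    And the response body should match the stored snapshot "orders/42.json"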

Comments


Bastian at 11:54 on 8 Jun 2018

A clear and structured talk.

Learned a lot as someone with zero BDD or Behat experience. Clear talk. I liked how the focus was not just 'hah look at these cool tools!', but there was some backstory to illustrate the context and when/why you should (not) use them.

Personally I would have enjoyed watching some tests run live. It is a satisfying feeling ;)

All in all, good talk.

Very informative. Nice approach to first describe the current behavior in order to refactor the code more confidently. A bit dry, but that comes with the subject, I guess.

Christiaan Bye at 12:33 on 8 Jun 2018

The subject of testing can be a bit stale at times, but Michael presented the subject in a very clear, informative and even enjoyable way. Very good talk!

I think the situation of “application or workspace without tests” is all too common. I therefore reckon that this talk could be converted to a tutorial format and would have great reach within the community.

Nice to learn more about Behat! I would like to use it more, so learning about snapshots and mountebank was really interesting!

Nic Wortel at 23:05 on 8 Jun 2018

Inheriting an untested legacy codebase and having to maintain and improve it can be off-putting even in a best-case scenario. This talk shows, with practical examples and tips, how to use Behat to write system tests that improve your confidence to refactor a piece of legacy code. It includes topics like API tests, mocking other (REST) APIs, and automating browser tests. I think it would have been nice to add some hints about how to factor the database into the equation, but other than that, it is a great talk - especially for those who are new to Behat and system testing.

Xavier Vidal at 23:51 on 8 Jun 2018

The talk was good.
If I'm not wrong it was billed as "medium level" but it was completely a "beginner's level" talk; I still stayed, though.

Harold Claus at 10:24 on 9 Jun 2018

Thank you for the clear & structured talk about Behat. Picked up a lot!

Steve Winter at 11:42 on 9 Jun 2018

Well structured - liked the context used and the pragmatic approach taken. Would have liked to have seen some of the tests run.

Mark Hamstra at 12:20 on 9 Jun 2018

As someone new to Behat, I decided to look it up and look at some examples prior to the talk. Sometimes talks about specific tools can be boring and just a rehashing of the documentation, but Michael gave a great introduction to why you would do this and also practical advice on what (not!) to do. Definitely looking forward to applying this in upcoming projects.

Robert Basic at 09:38 on 11 Jun 2018

As a heavy Behat user even I learned a couple of new things about it! The talk was nicely structured, well presented. I think the introduction to the material was spot on.

For me personally this was very informative and the style of the talk was good.

A good introduction to characterisation tests.

I'd love to see a version of this talk or even a workshop that goes into more detail, especially with nitty gritty, like code that makes database calls.

Really nicely put how to tackle any legacy thing that you get.

Arnout Boks at 18:54 on 14 Jun 2018

Great talk, and well-delivered. Nice that you're addressing the trade-offs that come with characterisation tests.