DEV Community

Discussion on: My Tests Are Being Maintained by Artificial Intelligence

awstahl

What happens when the AI finds a workaround for a bug, and that design path wasn't intended and won't be obvious to users? How do you verify that the "successes" it finds match expectations? Is there summary data against which to assert?

Klaus • Edited

That is an interesting question, and I do have an answer.
The platform learns more about your elements as you run your tests.
For example, if you tell your test to find the element with the "add_to_cart_button" ID, the platform will also find another 20-30 ways to locate that element and rank them by reliability.
If that element gets a new ID, your test would normally fail. But if you run it with the Self-Healing option, it will detect the anomaly and fall back to the alternative ways it remembered for identifying that element.
Based on those rankings, and on how many of the alternatives return a positive or negative answer, it can clearly differentiate between a change and a bug. It is extremely unlikely to cover up a bug.
And every time it makes a change like that, it is written clearly in the logs. It's not doing anything behind your back.
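To make the mechanism concrete, here is a minimal Python sketch of the idea described above: each element keeps a ranked set of alternative locators, and when the primary one fails, the alternatives "vote" on whether the element merely changed (heal and log) or is genuinely missing (flag a possible bug). All names, locator strings, and reliability weights are illustrative assumptions, not the platform's real API.

```python
# Hypothetical self-healing locator sketch. Reliability ranks (0..1) and
# the weighted-vote rule are assumptions for illustration only.
from dataclasses import dataclass, field


@dataclass
class SelfHealingLocator:
    primary: str                                       # e.g. 'id=add_to_cart_button'
    alternatives: dict = field(default_factory=dict)   # locator -> reliability rank

    def resolve(self, dom: set) -> tuple:
        """Return (locator_used, verdict) for a simulated DOM (a set of
        locator strings that would currently match an element)."""
        if self.primary in dom:
            return self.primary, "ok"
        # Primary failed: let the ranked alternatives vote.
        hits = [(loc, rank) for loc, rank in self.alternatives.items() if loc in dom]
        misses = [(loc, rank) for loc, rank in self.alternatives.items() if loc not in dom]
        if hits and sum(r for _, r in hits) > sum(r for _, r in misses):
            # Most reliable evidence says the element still exists: heal,
            # switch to the best-ranked working locator, and log the change.
            best = max(hits, key=lambda pair: pair[1])[0]
            return best, "healed"
        # The alternatives mostly fail too: likely a real bug, not a change.
        return None, "possible-bug"


# Simulated DOM after a release renamed the button's ID:
dom = {"css=.cart-btn", "xpath=//button[text()='Add to cart']"}
locator = SelfHealingLocator(
    primary="id=add_to_cart_button",
    alternatives={
        "css=.cart-btn": 0.9,
        "xpath=//button[text()='Add to cart']": 0.8,
        "id=old_cart_id": 0.3,
    },
)
print(locator.resolve(dom))  # falls back to the top-ranked alternative
```

The "change vs. bug" verdict here is just a weighted majority over the alternatives, which also illustrates the reply below: with few or correlated locators, that vote can still produce false positives and false negatives.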

13steinj

"Based on rankings and on how many of those alternative ways return a positive or negative answer, it will clearly differentiate between a change and a bug."

There are still decent chances of false positives and false negatives. You can't claim "it will be smart" just because it is AI; you have to back that up with actual statistics from real-world cases.