I was riding home on the highway last night when our car came to a stop because of traffic from an accident ahead. Waze had been predicting how long it would take to get home, using crowdsourced reports and Google's aggregate traffic data; its estimates are usually pretty accurate. But this time, when Waze tried to keep us informed, it had no idea.
We came to a stop and the program pushed back its estimate by about 10 minutes, then 20, then 30, then 40. Soon enough it was expecting a wait of about 90 minutes. This didn't make my fellow passengers very happy, but by that point the program was useless anyway.
Waze had no clue. It can't predict the future; it can only extrapolate from aggregated data, and it has no access to the specifics behind this particular delay. Waze has no idea how long the tow trucks will take, or what might be holding up the ambulance or the police investigation. It is completely in the dark about what will actually get these cars moving again.
And yet Waze kept giving predictions. This is a bug. The most accurate prediction in this case is "I have no clue".
As technology becomes "smarter", it encroaches on a concept previously left to humans: nuance. Software-driven interfaces need to be able to handle nuance if they're going to be responsible for situations like this one. Devs are very familiar with the idea of faking things. We fake progress bars when we have no clue how long an operation will take, only that it will probably be shortish. But we don't always do a great job of telling users when our programs have lost any grounds for predicting an outcome.
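One way to do better is to make "I don't know" a first-class answer in the code itself. Here's a minimal sketch in TypeScript; the `Eta` type, the `reportEta` function, and the `MAX_CREDIBLE_DELAY` threshold are all hypothetical illustrations, not anything Waze actually does:

```typescript
// A hypothetical ETA type where "no clue" is a valid result,
// not just an ever-growing number.
type Eta =
  | { kind: "estimate"; minutes: number; confidence: number }
  | { kind: "unknown"; reason: string };

// Sketch: once the observed delay grows past what the model has any
// data for, stop inflating the estimate and admit uncertainty instead.
function reportEta(modelMinutes: number, observedDelayMinutes: number): Eta {
  const MAX_CREDIBLE_DELAY = 15; // assumed threshold; tune per domain

  if (observedDelayMinutes > MAX_CREDIBLE_DELAY) {
    return {
      kind: "unknown",
      reason: "Unexpected stoppage; no data to predict when traffic will move.",
    };
  }
  return {
    kind: "estimate",
    minutes: modelMinutes + observedDelayMinutes,
    confidence: 1 - observedDelayMinutes / MAX_CREDIBLE_DELAY,
  };
}

// Usage: the UI renders "I don't know" instead of a number that keeps climbing.
const eta = reportEta(35, 40);
console.log(eta.kind === "unknown" ? `ETA unknown: ${eta.reason}` : `${eta.minutes} min`);
```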
When the answer is unknown or nuanced, an assertive "I don't know" beats an inaccurate prediction.