During the past several years, our data science teams at Waze have been building data-driven products, from AI to BI, and putting them into the hands of our 150 million MAUs. In this post, I will tell you about a recent machine learning model for ETA prediction which is now in production for most of Waze's user base, including the United States, Canada, France, Israel, Brazil and Malaysia.
When a user types a destination into Waze, our system predicts how long the route will take - the ETA (estimated time of arrival).
Producing an accurate ETA boils down to knowing the current speeds of nearby segments, understanding the historical speeds of segments farther away, properly combining those two sources, and summing it all up while accounting for other contextual and personalized aspects of the trip and potential error adjustments.
An ETA prediction always starts with a route request from A to B on our map, which is represented as a graph. From there, Waze's routing algorithm enters the scene. Our routing is based on A*, a search algorithm over weighted graphs. The routing algorithm returns up to 10 best routes, which are the input to the ETA prediction model; the output is the expected travel time for each route. On top of that, every few minutes we re-compute the ETA to account for ongoing traffic changes.
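Our production routing is of course far more elaborate, but the core idea of A* can be sketched in a few lines. The graph representation and the zero heuristic in the usage example below are illustrative assumptions, not Waze's actual implementation:

```python
import heapq

def a_star(graph, start, goal, heuristic):
    """A* search on a weighted graph.

    graph: dict mapping node -> list of (neighbor, edge_cost)
    heuristic: admissible estimate of the remaining cost to goal
    Returns (total_cost, path), or (float('inf'), None) if unreachable.
    """
    # Priority queue ordered by f = g (cost so far) + h (heuristic)
    frontier = [(heuristic(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbor, cost in graph.get(node, []):
            new_g = g + cost
            if new_g < best_g.get(neighbor, float('inf')):
                best_g[neighbor] = new_g
                heapq.heappush(
                    frontier,
                    (new_g + heuristic(neighbor), new_g, neighbor, path + [neighbor]),
                )
    return float('inf'), None
```

With a zero heuristic this degenerates to Dijkstra's algorithm; a real router would use a distance- or travel-time-based heuristic to prune the search.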
So this is a supervised learning prediction task. What, then, is our supervision data? The mobile client of every driver who uses Waze reports her GPS coordinates to Waze's servers.
For training purposes, we need to convert those coordinates into a collection of segments on the map, along with how much time it took each driver to pass each segment. This is called the "cross time" of a segment. What is important to know is that there is no straightforward method for converting those GPS samples into cross times. Preparing the supervision dataset is therefore not easy.
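As a toy illustration of the cross-time concept - assuming the GPS fixes have already been map-matched to road segments, which is itself the hard part - one could derive per-segment cross times from segment entry times:

```python
def cross_times(matched_samples):
    """Naive cross-time extraction from map-matched GPS samples.

    matched_samples: list of (timestamp_sec, segment_id), ordered in time,
    where each GPS fix has already been snapped to a road segment.
    Returns {segment_id: cross_time_sec}, taking the time between entering
    a segment and entering the next one as the segment's cross time.
    """
    times = {}
    entry_time, entry_seg = matched_samples[0]
    for ts, seg in matched_samples[1:]:
        if seg != entry_seg:
            times[entry_seg] = ts - entry_time  # time spent on the segment
            entry_time, entry_seg = ts, seg
    return times
```

Real pipelines must handle noisy fixes, sparse sampling, U-turns, and stops, which is exactly why this conversion is not straightforward.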
During 2022 we released a deep learning model for these ETA predictions, based on an LSTM. The model is called "Smartsum". Its input is the collection of segments of a route; the model estimates the expected cross time of each segment and sums them up in a smart way. That's why it is called Smartsum.
The model takes many signals into consideration. Let's go over some of them.
Most importantly, traffic data - cross times from both real-time and historical samples. For example, given a segment: how long did it take to cross it right now, 15 minutes ago, 2 hours ago, and on different days in the past, grouped by day type.
The second family of features is spatial info - the physics of the road. For example: is this a highway or a small road, how many lanes does it have, is there an HOV lane, etc.
The third family is temporal info (day, night, weekend, etc.). These signals are critical for the entire route, not only for a specific segment. We will see how this happens in the model's architecture.
We have other signals that we use today or intend to use - for example, community reports, public events like football games, and personalized adjustments of the ETA. All these features are presented in the image below.
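To make these feature families concrete, here is a hypothetical sketch of what a per-segment feature vector and the route-level context might contain. All field names and groupings are assumptions for illustration, not Waze's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SegmentFeatures:
    """Illustrative per-segment features (hypothetical schema)."""
    # Traffic signals: real-time and lagged cross times, in seconds
    realtime_cross_time: float
    cross_time_15m_ago: float
    cross_time_2h_ago: float
    historical_cross_times: list  # past cross times grouped by day type
    # Spatial signals: the physics of the road
    road_type: str                # e.g. "highway", "street"
    num_lanes: int
    has_hov: bool

@dataclass
class RouteContext:
    """Illustrative route-level context, shared by all segments."""
    hour_of_day: int
    is_weekend: bool
```

The key structural point is the split: traffic and spatial signals live at the segment level, while temporal context applies to the route as a whole.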
We use many metrics to evaluate our ETA prediction accuracy, but above all of them we rely on one called the "Bad ETA Session". Every trip at Waze is measured in terms of a bad/good ETA session.
The X axis is how long this trip actually takes.
The Y axis is the difference between predicted and actual time for this trip.
If the deviation is negative, it means we were too optimistic - we told the user the trip would take less time than it actually did. If, on the other hand, we were pessimistic, the error is positive.
The penalty function is harsher when we are too optimistic and when the route is longer. That is based on UX research we conducted, which shows users are more sensitive to being late than to arriving early at their destination.
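A minimal sketch of such an asymmetric classification, with purely illustrative tolerance fractions (the real cost function and thresholds are not described here):

```python
def is_bad_eta_session(predicted_sec, actual_sec,
                       optimistic_frac=0.10, pessimistic_frac=0.20):
    """Classify a trip as a bad ETA session (illustrative thresholds).

    deviation = predicted - actual: negative means we were too optimistic
    (the trip took longer than promised). The tolerance band scales with
    the actual trip time and is tighter on the optimistic side, mirroring
    users' stronger sensitivity to arriving late.
    """
    deviation = predicted_sec - actual_sec
    if deviation < 0:  # too optimistic: harsher penalty
        return -deviation > optimistic_frac * actual_sec
    return deviation > pessimistic_frac * actual_sec
```

For example, under these made-up thresholds, promising 500 seconds on a trip that took 600 is a bad session, while promising 700 on the same trip is still within tolerance.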
Every drive that falls outside the tolerance bounds of the Bad ETA Session cost function is considered a bad ETA session. This metric is a great example of our guiding principles at the data guild, which rely heavily on product analytics and user-understanding metrics even for our ML models - a data scientist actually sits with product analysts and UX researchers to interview users.
Now we’ve reached the heart of the model.
The input is structured around segments. Each segment has its feature vector.
Each vector includes historical and real-time cross times. Naturally, real-time data is weighted more heavily for nearby segments, while historical information usually counts more for segments the user is still far from.
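One simple way to realize that distance-dependent weighting is a linear blend. The decay shape and the 30-minute horizon below are illustrative assumptions, not the model's actual mechanism (in practice the network can learn the weighting itself):

```python
def blended_cross_time(realtime_ct, historical_ct, minutes_until_arrival,
                       horizon_min=30.0):
    """Blend real-time and historical cross times for one segment.

    Real-time data dominates for segments the driver will reach soon;
    historical averages dominate far ahead. Linear decay over a
    30-minute horizon is an illustrative assumption.
    """
    w = max(0.0, 1.0 - minutes_until_arrival / horizon_min)
    return w * realtime_ct + (1.0 - w) * historical_ct
```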
In the next phase of the architecture, we feed those segment-level cross times into an RNN cell.
Recall that RNNs (recurrent neural networks) model the sequential nature of the segments and the relationships between them. For example, if there is important information about unusual traffic or a traffic buildup happening right now in a single segment, such an architecture can carry that information over to additional segments via the memory cell. (The exact choice of architecture - GRUs, bi-directional LSTMs, etc. - determines the extent and structure of that memory across the network.)
The next phase produces segment embeddings, or states.
On the side, in yellow, we also use contextual data that is not at the segment level - for example, time of day. We apply two layers here as well and arrive at a state of the trip as a whole.
From there, we concatenate the entire trip's state to each segment's state.
We then apply two additional dense layers.
The output, for each segment, is its expected cross time.
The sum of all these cross times is the output of the model - the ETA prediction.
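Putting the pieces together, here is a hedged PyTorch re-creation of the architecture as described above: an LSTM over segment features, a two-layer MLP over route-level context, the trip state concatenated to each segment state, two dense layers per segment, and a final sum. Layer sizes, activations, and the positivity constraint on the output are my assumptions, not Smartsum's actual hyperparameters:

```python
import torch
import torch.nn as nn

class SmartsumSketch(nn.Module):
    """Illustrative sketch of the Smartsum architecture as described
    in this post. All dimensions and activations are assumptions."""

    def __init__(self, seg_feat_dim, ctx_feat_dim, hidden=64):
        super().__init__()
        # RNN over the sequence of segment feature vectors
        self.rnn = nn.LSTM(seg_feat_dim, hidden, batch_first=True)
        # Two layers over route-level context -> trip state
        self.ctx_mlp = nn.Sequential(
            nn.Linear(ctx_feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Two dense layers per segment -> expected cross time
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),  # cross times are positive
        )

    def forward(self, seg_feats, ctx_feats):
        # seg_feats: (batch, num_segments, seg_feat_dim)
        # ctx_feats: (batch, ctx_feat_dim)
        seg_states, _ = self.rnn(seg_feats)          # per-segment states
        trip_state = self.ctx_mlp(ctx_feats)         # whole-trip state
        trip_state = trip_state.unsqueeze(1).expand(-1, seg_feats.size(1), -1)
        joint = torch.cat([seg_states, trip_state], dim=-1)
        cross_times = self.head(joint).squeeze(-1)   # (batch, num_segments)
        return cross_times.sum(dim=-1), cross_times  # ETA = "smart sum"
```

Note how the route-level context reaches every segment through the concatenation step - this is how trip-wide signals like time of day influence each per-segment prediction.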
Now, a few notes about the results (in the image below, the Y axis is the percentage improvement in bad ETA sessions and the X axis is local time). What we can see is that during rush hours (7-9 and 15-18) we have managed to reduce bad ETA sessions by more than 40%. But that's not everything - as I have mentioned, at the data guild we rely heavily on user-centric metrics even for our ML models.
This notification is something users really hate - an ETA update during the drive. For example, suppose a user starts a drive and stumbles into a traffic jam. We might then tell the user there is a 10-minute update to the route - meaning something meaningful has happened along the route which our initial ETA prediction did not account for.
Smartsum reduces those ETA updates by 50%, which is great for user experience.
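The decision of when to surface such an update could be as simple as a drift threshold between the ETA the user was shown and the periodically re-computed one; the 5-minute value here is an illustrative assumption:

```python
def should_notify_eta_update(shown_eta_min, recomputed_eta_min,
                             threshold_min=5.0):
    """Fire an in-drive ETA-update notification only when the re-computed
    ETA drifts meaningfully from what the user was told. The threshold
    is an illustrative assumption, not Waze's actual policy."""
    return abs(recomputed_eta_min - shown_eta_min) >= threshold_min
```

A more accurate initial prediction means the re-computed ETA stays inside the threshold more often, which is exactly why fewer updates fire.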
How does it all relate to Waze’s data strategy?
I showed you only one example of a meaningful ML model in production.
But the data guild actually covers many different aspects of the Waze product. We need to build models that account for:
Next drive prediction
Which advertisement to present to a user on a given route.
A data strategy is fundamentally composed of 3 things:
A problem we are trying to solve as a data org
The guiding principles that we as a guild all rely on when making decisions in research and productionizing models and insights
And the resources allocated to work properly based on those principles.
Here are the guiding principles we apply at Waze's data guild - from AI to BI:
First - we rely on product-oriented metrics: Data scientists and Product Analysts work together within each product area, and each team owns their metrics and opens their data assets to other teams at Waze in a data mesh format.
Second, data scientists work in close proximity to the backend systems, so that log data and feature engineering are identical from research to production. There is no separation between thinkers and builders.
And third - data scientists are guided to focus on sequential models in most of the problems I mentioned earlier.
These 3 guiding principles have important implications for how we are structured, our MLOps, and the unified cloud solutions we all work on.
The slide above summarizes our view of how a modern, quick-to-deliver data science and product analytics org should function, from AI to BI and back. Over the past few years, together with other engineering and product teams, we have been implementing these solutions to deliver quality services into the hands of our 150 million users.
If you have any questions about what I have presented, or in general about how we work at the data guild at Waze, please feel free to contact me.
* This blog post covers the work of amazing Data Scientists and Data Engineers working as part of the data guild at Waze, including Ilan Orlov, Amit Kagian, Amir Bar, Avia Ratzon, Danny Rosenstein, Philippe Adjiman, Shay Oved, Yevgeniya Gimelfarb, Nir Ben Yaacov, Johnny Shehade & Alex Ohayon, in addition to many other engineers and product managers at Waze.