Predicting flight delays with regression analysis
Let’s try to predict flight delays by using the sample flight data. We want to use information such as weather, the location of the destination and origin, flight distance, and carrier to predict the number of minutes each flight is delayed. Because the number of delay minutes is a continuous numeric variable, we’ll use regression analysis to make the prediction.
We have chosen this dataset as an example because it is easily accessible for Kibana users and the use case is relevant. However, the data has been manually created and contains some inconsistencies. For example, a flight can be both delayed and canceled. Please remember that the quality of your input data will affect the quality of results.
Each document in the dataset contains details for a single flight, so this data is ready for analysis as it is already in a two-dimensional entity-based data structure (data frame). In general, you often need to transform the data into an entity-centric index before you analyze the data.
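As an illustration only (the flight data is already entity-based, so this step is not needed here), a minimal sketch of such a transformation using the transform API might look like the following. The transform id, destination index name, and the choice of grouping by carrier are made up for this sketch, and the exact endpoint can differ between versions:

PUT _transform/flights-by-carrier-example
{
  "source": {
    "index": "kibana_sample_data_flights"
  },
  "dest": {
    "index": "flights-by-carrier-example"
  },
  "pivot": {
    "group_by": {
      "Carrier": {
        "terms": {
          "field": "Carrier"
        }
      }
    },
    "aggregations": {
      "avg_delay_minutes": {
        "avg": {
          "field": "FlightDelayMin"
        }
      }
    }
  }
}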
This is an example source document from the dataset:
{ "_index": "kibana_sample_data_flights", "_type": "_doc", "_id": "S-JS1W0BJ7wufFIaPAHe", "_version": 1, "_seq_no": 3356, "_primary_term": 1, "found": true, "_source": { "FlightNum": "N32FE9T", "DestCountry": "JP", "OriginWeather": "Thunder & Lightning", "OriginCityName": "Adelaide", "AvgTicketPrice": 499.08518599798685, "DistanceMiles": 4802.864932998549, "FlightDelay": false, "DestWeather": "Sunny", "Dest": "Chubu Centrair International Airport", "FlightDelayType": "No Delay", "OriginCountry": "AU", "dayOfWeek": 3, "DistanceKilometers": 7729.461862731618, "timestamp": "2019-10-17T11:12:29", "DestLocation": { "lat": "34.85839844", "lon": "136.8049927" }, "DestAirportID": "NGO", "Carrier": "ES-Air", "Cancelled": false, "FlightTimeMin": 454.6742272195069, "Origin": "Adelaide International Airport", "OriginLocation": { "lat": "-34.945", "lon": "138.531006" }, "DestRegion": "SE-BD", "OriginAirportID": "ADL", "OriginRegion": "SE-BD", "DestCityName": "Tokoname", "FlightTimeHour": 7.577903786991782, "FlightDelayMin": 0 } }
Regression is a supervised machine learning analysis and therefore needs to train on data that contains the ground truth for the dependent_variable that we want to predict. In this example, the ground truth is available in each document as the actual value of FlightDelayMin. In order to be analyzed, a document must contain at least one field with a supported data type (numeric, boolean, text, keyword, or ip) and must not contain arrays with more than one item.
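To check which fields in the source index have a supported data type before you configure the analysis, you can inspect the field mappings, for example with the field capabilities API (a sketch):

GET kibana_sample_data_flights/_field_caps?fields=*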
If your source data consists of some documents that contain a dependent_variable and some that do not, the model is trained on the training_percent of the documents that contain ground truth. However, predictions are made against all of the data. The current implementation of regression analysis supports a single batch analysis for both training and predictions.
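After the analysis in this example has run, one way to see how many documents were used for training versus prediction only is a terms aggregation on the ml.is_training field in the destination index (a sketch, using the destination index and default results field of the job defined below):

GET df-flight-delays/_search
{
  "size": 0,
  "aggs": {
    "training_split": {
      "terms": {
        "field": "ml.is_training"
      }
    }
  }
}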
Creating a regression model
To predict the number of minutes delayed for each flight:
- Create a data frame analytics job. Use the create data frame analytics jobs API as you can see in the following example:
PUT _ml/data_frame/analytics/model-flight-delays
{
  "source": {
    "index": [
      "kibana_sample_data_flights"
    ],
    "query": {
      "range": {
        "DistanceKilometers": {
          "gt": 0
        }
      }
    }
  },
  "dest": {
    "index": "df-flight-delays"
  },
  "analysis": {
    "regression": {
      "dependent_variable": "FlightDelayMin",
      "training_percent": 90
    }
  },
  "analyzed_fields": {
    "includes": [],
    "excludes": [
      "Cancelled",
      "FlightDelay",
      "FlightDelayType"
    ]
  },
  "model_memory_limit": "100mb"
}
  - source.index: The source index to analyze.
  - source.query: This query removes erroneous data from the analysis to improve its quality.
  - dest.index: The index that will contain the results of the analysis; it consists of a copy of the source index data in which each document is annotated with the results.
  - dependent_variable: Specifies the continuous variable we want to predict with the regression analysis.
  - training_percent: Specifies the approximate proportion of data that is used for training. In this example, we randomly select 90% of the source data for training.
  - analyzed_fields.excludes: Specifies fields to be excluded from the analysis. It is recommended to exclude fields that either contain erroneous data or describe the dependent_variable.
  - model_memory_limit: Specifies a memory limit for the job. If the job requires more than this amount of memory, it fails to start. This makes it possible to prevent job execution if the available memory on the node is limited.
- Start the job. Use the start data frame analytics jobs API; the job stops automatically when the analysis is complete. (If you ever need to stop it manually, see the sketch after these steps.)
POST _ml/data_frame/analytics/model-flight-delays/_start
The job takes a few minutes to run. Runtime depends on the local hardware and also on the number of documents and fields that are analyzed. The more fields and documents, the longer the job runs.
- Check the job stats to follow the progress by using the get data frame analytics jobs statistics API:
GET _ml/data_frame/analytics/model-flight-delays/_stats
The API call returns the following response:
{ "count" : 1, "data_frame_analytics" : [ { "id" : "model-flight-delays", "state" : "stopped", "progress" : [ { "phase" : "reindexing", "progress_percent" : 100 }, { "phase" : "loading_data", "progress_percent" : 100 }, { "phase" : "analyzing", "progress_percent" : 100 }, { "phase" : "writing_results", "progress_percent" : 100 } ] } ] }
The job has four phases. When all the phases have completed, the job stops and the results are ready to view and evaluate.
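If you ever need to stop a running job manually (for example, to change its configuration before rerunning it), a sketch using the stop data frame analytics jobs API:

POST _ml/data_frame/analytics/model-flight-delays/_stop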
Viewing regression results
Now you have a new index that contains a copy of your source data with predictions for your dependent variable. Use the standard Elasticsearch search command to view the results in the destination index:
GET df-flight-delays/_search
Each document in the destination index contains the original fields plus an ml object that holds the annotated results. The snippet below sketches what these annotations look like; only the relevant fields are shown and the prediction value is a placeholder:
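{
  "_source": {
    "FlightDelayMin": 0,
    "ml": {
      "FlightDelayMin_prediction": 12.34,
      "is_training": false
    }
  }
}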
Evaluating results
The results can be evaluated for documents that contain both the ground truth field and the prediction. In the example below, FlightDelayMin contains the ground truth and the prediction is stored as ml.FlightDelayMin_prediction.
- Use the data frame analytics evaluate API to evaluate the results. First, we want to know the training error that represents how well the model performed on the training dataset:
POST _ml/data_frame/_evaluate
{
  "index": "df-flight-delays",
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "ml.is_training": true
          }
        }
      ]
    }
  },
  "evaluation": {
    "regression": {
      "actual_field": "FlightDelayMin",
      "predicted_field": "ml.FlightDelayMin_prediction",
      "metrics": {
        "r_squared": {},
        "mean_squared_error": {}
      }
    }
  }
}
  - index: The destination index, which is the output of the analysis job.
  - query: We calculate the training error by evaluating only the training data.
  - actual_field: The ground truth label.
  - predicted_field: The predicted value.
Next, we calculate the generalization error that represents how well the model performed on previously unseen data:
POST _ml/data_frame/_evaluate
{
  "index": "df-flight-delays",
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "ml.is_training": false
          }
        }
      ]
    }
  },
  "evaluation": {
    "regression": {
      "actual_field": "FlightDelayMin",
      "predicted_field": "ml.FlightDelayMin_prediction",
      "metrics": {
        "r_squared": {},
        "mean_squared_error": {}
      }
    }
  }
}
The evaluate data frame analytics API returns the following response:
{ "regression" : { "mean_squared_error" : { "error" : 3759.7242253334207 }, "r_squared" : { "value" : 0.5853159777330623 } } }
For more information about the evaluation metrics, see Measuring model performance.
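As a quick reference, these are the standard definitions of the two metrics, where $y_i$ is the actual delay, $\hat{y}_i$ the predicted delay, and $\bar{y}$ the mean actual delay over the $n$ evaluated documents:

$$\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2, \qquad R^2 = 1 - \frac{\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}{\sum_{i=1}^{n}(y_i - \bar{y})^2}$$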
If you don’t want to keep the data frame analytics job, you can delete it by using the delete data frame analytics job API. When you delete data frame analytics jobs, the destination indices remain intact.
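For example, a sketch of deleting the job created in this example:

DELETE _ml/data_frame/analytics/model-flight-delays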