Put data frame analytics jobs API

Creates a new data frame analytics job.
The API accepts a PutDataFrameAnalyticsRequest object as a request and returns a
PutDataFrameAnalyticsResponse.
Put data frame analytics jobs request

A PutDataFrameAnalyticsRequest requires the following argument:
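A minimal sketch of constructing the request, assuming it simply wraps the
DataFrameAnalyticsConfig built in the next section:

PutDataFrameAnalyticsRequest request =
    new PutDataFrameAnalyticsRequest(config); // the configuration of the data frame analytics job to create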
Data frame analytics configuration

The DataFrameAnalyticsConfig object contains all the details about the data frame
analytics job configuration and accepts the following arguments:

DataFrameAnalyticsConfig config = DataFrameAnalyticsConfig.builder()
    .setId("my-analytics-config")                               // the data frame analytics job ID
    .setSource(sourceConfig)                                    // the source index and query from which to gather data
    .setDest(destConfig)                                        // the destination index
    .setAnalysis(outlierDetection)                              // the analysis to be performed
    .setAnalyzedFields(analyzedFields)                          // the fields to be included in / excluded from the analysis
    .setModelMemoryLimit(new ByteSizeValue(5, ByteSizeUnit.MB)) // the memory limit for the model created as part of the analysis process
    .setDescription("this is an example description")           // optionally, a human-readable description
    .build();
SourceConfig

The index and the query from which to collect data.

DataFrameAnalyticsSource sourceConfig = DataFrameAnalyticsSource.builder() // constructing a new DataFrameAnalyticsSource
    .setIndex("put-test-source-index") // the source index
    .setQueryConfig(queryConfig)       // the query from which to gather the data; if not set, a match_all query is used by default
    .build();
QueryConfig

The query with which to select data from the source.

QueryConfig queryConfig = new QueryConfig(new MatchAllQueryBuilder());
DestinationConfig

The index to which data should be written by the data frame analytics job.
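A minimal sketch, assuming DataFrameAnalyticsDest exposes a builder analogous to
DataFrameAnalyticsSource (the index name is illustrative):

DataFrameAnalyticsDest destConfig = DataFrameAnalyticsDest.builder()
    .setIndex("put-test-dest-index") // the destination index
    .build();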
Analysis

The analysis to be performed. Currently, the supported analyses are OutlierDetection
and Regression.

Outlier detection

OutlierDetection analysis can be created in one of two ways: with default values for
all parameters, or with parameters set explicitly, as sketched below.
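A minimal sketch of both variants, assuming OutlierDetection offers a createDefault()
factory and a builder (the parameter values are illustrative):

// with default values for all parameters
DataFrameAnalysis outlierDetection = OutlierDetection.createDefault();

or

// with explicitly set parameters
DataFrameAnalysis outlierDetectionCustomized = OutlierDetection.builder()
    .setMethod(OutlierDetection.Method.DISTANCE_KNN) // the method used to perform the analysis
    .setNNeighbors(5)                                // the number of neighbors taken into account during the analysis
    .build();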
Regression

Regression analysis requires setting which field is the dependent_variable and
has a number of other optional parameters:

DataFrameAnalysis regression = Regression.builder("my_dependent_variable") // constructing a new Regression builder object with the required dependent variable
    .setLambda(1.0)              // the lambda regularization parameter; a non-negative double
    .setGamma(5.5)               // the gamma regularization parameter; a non-negative double
    .setEta(0.5)                 // the applied shrinkage; a double in [0.001, 1]
    .setMaximumNumberTrees(50)   // the maximum number of trees the forest is allowed to contain; an integer in [1, 2000]
    .setFeatureBagFraction(0.4)  // the fraction of features used when selecting a random bag for each candidate split; a double in (0, 1]
    .setPredictionFieldName("my_prediction_field_name") // the name of the prediction field in the results object
    .setTrainingPercent(50.0)    // the percentage of training-eligible rows to be used in training; defaults to 100%
    .build();
Analyzed fields

A FetchSourceContext object containing the fields to be included in / excluded from
the analysis:

FetchSourceContext analyzedFields = new FetchSourceContext(
    true,
    new String[] { "included_field_1", "included_field_2" }, // fields included in the analysis
    new String[] { "excluded_field" });                      // fields excluded from the analysis
Synchronous execution

When executing a PutDataFrameAnalyticsRequest in the following manner, the client
waits for the PutDataFrameAnalyticsResponse to be returned before continuing with
code execution:

PutDataFrameAnalyticsResponse response =
    client.machineLearning().putDataFrameAnalytics(request, RequestOptions.DEFAULT);

Synchronous calls may throw an IOException when the high-level REST client fails to
parse the REST response, when the request times out, or in similar cases where no
response comes back from the server.

In cases where the server returns a 4xx or 5xx error code, the high-level client
instead tries to parse the error details from the response body and then throws a
generic ElasticsearchException, adding the original ResponseException to it as a
suppressed exception.
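A minimal sketch of handling both failure modes described above (the messages
printed are illustrative):

try {
    PutDataFrameAnalyticsResponse response =
        client.machineLearning().putDataFrameAnalytics(request, RequestOptions.DEFAULT);
} catch (ElasticsearchException e) {
    // the server returned a 4xx/5xx error; the original ResponseException
    // is attached as a suppressed exception
    System.err.println("server-side error: " + e.getDetailedMessage());
} catch (IOException e) {
    // no response came back (e.g. timeout) or the response could not be parsed
    System.err.println("transport-level failure: " + e.getMessage());
}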
Asynchronous execution

Executing a PutDataFrameAnalyticsRequest can also be done in an asynchronous fashion
so that the client can return directly. Users need to specify how the response or
potential failures will be handled by passing the request and a listener to the
asynchronous put-data-frame-analytics method:
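A minimal sketch, assuming the usual asynchronous variant putDataFrameAnalyticsAsync
of the high-level client:

client.machineLearning().putDataFrameAnalyticsAsync(
    request,                // the PutDataFrameAnalyticsRequest to execute
    RequestOptions.DEFAULT,
    listener);              // the ActionListener to use when the execution completes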
The asynchronous method does not block and returns immediately. Once it is completed,
the ActionListener is called back using the onResponse method if the execution
successfully completed or using the onFailure method if it failed. Failure scenarios
and expected exceptions are the same as in the synchronous execution case.

A typical listener for put-data-frame-analytics looks like:
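A minimal sketch, following the standard ActionListener pattern of the high-level
client:

ActionListener<PutDataFrameAnalyticsResponse> listener =
    new ActionListener<PutDataFrameAnalyticsResponse>() {
        @Override
        public void onResponse(PutDataFrameAnalyticsResponse response) {
            // called when the execution is successfully completed
        }

        @Override
        public void onFailure(Exception e) {
            // called when the whole PutDataFrameAnalyticsRequest fails
        }
    };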
Response

The returned PutDataFrameAnalyticsResponse contains the newly created data frame
analytics job:

DataFrameAnalyticsConfig createdConfig = response.getConfig(); // the created configuration