Preview a datafeed
Generally available; Added in 5.4.0
This API returns the first "page" of search results from a datafeed. You can preview an existing datafeed or provide configuration details for a datafeed and anomaly detection job in the API. The preview shows the structure of the data that will be passed to the anomaly detection engine.

IMPORTANT: When Elasticsearch security features are enabled, the preview uses the credentials of the user that called the API. However, when the datafeed starts it uses the roles of the last user that created or updated the datafeed. To get a preview that accurately reflects the behavior of the datafeed, use the appropriate credentials. You can also use secondary authorization headers to supply the credentials.
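For illustration, here is a minimal sketch using the Elasticsearch Python client (the connection details are placeholders; the datafeed name matches the request examples at the end of this page):

from elasticsearch import Elasticsearch

# Placeholder endpoint and API key; substitute your own.
client = Elasticsearch("http://localhost:9200", api_key="...")

# Preview an existing datafeed. The preview runs with the credentials of the
# caller, not of the user who created or last updated the datafeed.
preview = client.ml.preview_datafeed(datafeed_id="datafeed-high_sum_total_sales")

# The response body is the first "page" of documents, structured as the
# anomaly detection engine would receive them.
print(preview)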
Required authorization
- Index privileges: read
- Cluster privileges: manage_ml
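For example, a role granting exactly these privileges could be sketched with the Python client; the role name and index pattern below are illustrative, not requirements of this API:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection

# Illustrative role with manage_ml at the cluster level and read on the
# indices that the datafeed searches.
client.security.put_role(
    name="datafeed-previewer",
    cluster=["manage_ml"],
    indices=[{"names": ["kibana_sample_data_ecommerce"], "privileges": ["read"]}],
)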
Query parameters
- start: The start time from which the datafeed preview should begin.
- end: The end time at which the datafeed preview should stop.
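For example, a sketch that limits the preview to a fixed time window (assuming the 8.x Python client, which exposes start and end as keyword arguments; the timestamps are placeholders):

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection

# Preview only the data inside the given time window. Timestamps may be
# ISO 8601 strings or milliseconds since the epoch.
preview = client.ml.preview_datafeed(
    datafeed_id="datafeed-high_sum_total_sales",
    start="2025-01-01T00:00:00Z",
    end="2025-01-02T00:00:00Z",
)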
Body
- datafeed_config (object):
- If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.
- chunking_config (object):
- delayed_data_check_config (object):
- A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- indices_options (object): Controls how to deal with unavailable concrete indices (closed or missing), how wildcard expressions are expanded to actual indices (all, closed, or open indices), and how to deal with wildcard expressions that resolve to no indices.
- If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
- If true, missing or closed indices are not included in the response.
- If true, concrete, expanded or aliased indices are ignored when frozen.
- If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops itself and closes its associated job after this many real-time searches that return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped.
- An Elasticsearch Query DSL (Domain Specific Language) object that defines a query. See the Elasticsearch Query DSL documentation.
- A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- runtime_mappings (object):
- * (object):
- For type composite.
- For type lookup.
- A custom format for date type runtime fields.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- script (object):
- source (string | object):
- Defines the aggregations that are run as part of the search request.
- If true, the request returns detailed information about score computation as part of a hit.
- Configuration of search extensions defined by Elasticsearch plugins.
- The starting document offset, which must be non-negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
- Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.
- Boost the _score of documents from specified indices. The boost value is the factor by which scores are multiplied. A boost value greater than 1.0 increases the score. A boost value between 0 and 1.0 decreases the score.
- An array of wildcard (*) field patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response.
- The minimum _score for matching documents. Documents with a lower _score are not included in search results or results collected by aggregations.
- An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
- Set to true to return detailed timing information about the execution of individual components in a search request. NOTE: This is a debugging tool and adds significant overhead to search execution.
- An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
- Retrieve a script evaluation (based on different fields) for each hit.
- A field value.
- The number of hits to return, which must not be negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after property.
- An array of wildcard (*) field patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.
- The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. IMPORTANT: Use with caution. Elasticsearch applies this property to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this property for requests that target data streams with backing indices across multiple data tiers. If set to 0 (default), the query does not terminate early.
- The period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.
- If true, calculate and return document scores, even if the scores are not used for sorting.
- If true, the request returns the document version as part of a hit.
- If true, the request returns the sequence number and primary term of the last modification of each hit.
- The stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.
- Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.
- Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.
- script_fields (object): Specifies scripts that evaluate custom expressions and return script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.
- * (object):
- script (object):
- source (string | object):
- Defines the aggregations that are run as part of the search request.
- If true, the request returns detailed information about score computation as part of a hit.
- Configuration of search extensions defined by Elasticsearch plugins.
- The starting document offset, which must be non-negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after parameter.
- Number of hits matching the query to count accurately. If true, the exact number of hits is returned at the cost of some performance. If false, the response does not include the total number of hits matching the query. Defaults to 10,000 hits.
- Boost the _score of documents from specified indices. The boost value is the factor by which scores are multiplied. A boost value greater than 1.0 increases the score. A boost value between 0 and 1.0 decreases the score.
- An array of wildcard (*) field patterns. The request returns doc values for field names matching these patterns in the hits.fields property of the response.
- The minimum _score for matching documents. Documents with a lower _score are not included in search results or results collected by aggregations.
- An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
- Set to true to return detailed timing information about the execution of individual components in a search request. NOTE: This is a debugging tool and adds significant overhead to search execution.
- An Elasticsearch Query DSL (Domain Specific Language) object that defines a query.
- Retrieve a script evaluation (based on different fields) for each hit.
- A field value.
- The number of hits to return, which must not be negative. By default, you cannot page through more than 10,000 hits using the from and size parameters. To page through more hits, use the search_after property.
- An array of wildcard (*) field patterns. The request returns values for field names matching these patterns in the hits.fields property of the response.
- The maximum number of documents to collect for each shard. If a query reaches this limit, Elasticsearch terminates the query early. Elasticsearch collects documents before sorting. IMPORTANT: Use with caution. Elasticsearch applies this property to each shard handling the request. When possible, let Elasticsearch perform early termination automatically. Avoid specifying this property for requests that target data streams with backing indices across multiple data tiers. If set to 0 (default), the query does not terminate early.
- The period of time to wait for a response from each shard. If no response is received before the timeout expires, the request fails and returns an error. Defaults to no timeout.
- If true, calculate and return document scores, even if the scores are not used for sorting.
- If true, the request returns the document version as part of a hit.
- If true, the request returns the sequence number and primary term of the last modification of each hit.
- The stats groups to associate with the search. Each group maintains a statistics aggregation for its associated searches. You can retrieve these stats using the indices stats API.
- Specifies any named parameters that are passed into the script as variables. Use parameters instead of hard-coded values to decrease compile time.
- The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.
- job_config (object):
- Advanced configuration option. Specifies whether this job can open when there is insufficient machine learning node capacity for it to be immediately assigned to a node.
- analysis_config (object):
- A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- categorization_analyzer (string | object):
- One or more character filters. In addition to the built-in character filters, other plugins can provide more character filters. If this property is not specified, no character filters are applied prior to categorization. If you are customizing some other aspect of the analyzer and you need to achieve the equivalent of categorization_filters (which are not permitted when some other aspect of the analyzer is customized), add them here as pattern replace character filters.
- One or more token filters. In addition to the built-in token filters, other plugins can provide more token filters. If this property is not specified, no token filters are applied prior to categorization.
- tokenizer (object | string): The name or definition of the tokenizer to use after character filters are applied. This property is compulsory if categorization_analyzer is specified as an object. Machine learning provides a tokenizer called ml_standard that tokenizes in a way that has been determined to produce good categorization results on a variety of log file formats for logs in English. If you want to use that tokenizer but change the character or token filters, specify "tokenizer": "ml_standard" in your categorization_analyzer. Additionally, the ml_classic tokenizer is available, which tokenizes in the same way as the non-customizable tokenizer in old versions of the product (before 6.2). ml_classic was the default categorization tokenizer in versions 6.2 to 7.13, so if you need categorization identical to the default for jobs created in these versions, specify "tokenizer": "ml_classic" in your categorization_analyzer.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- If categorization_field_name is specified, you can also define optional filters. This property expects an array of regular expressions. The expressions are used to filter out matching sequences from the categorization field values. You can use this functionality to fine-tune the categorization by excluding sequences from consideration when categories are defined. For example, you can exclude SQL statements that appear in your log files. This property cannot be used at the same time as categorization_analyzer. If you only want to define simple regular expression filters that are applied prior to tokenization, setting this property is the easiest method. If you also want to customize the tokenizer or post-tokenization filtering, use the categorization_analyzer property instead and include the filters as pattern_replace character filters. The effect is exactly the same.
- detectors (array of objects): Detector configuration objects specify which data fields a job analyzes. They also specify which analytical functions are used. You can specify multiple detectors for a job. If the detectors array does not contain at least one detector, no analysis can occur and an error is returned.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- custom_rules: Custom rules enable you to customize the way detectors operate. For example, a rule may dictate conditions under which results should be skipped. Kibana refers to custom rules as job rules.
- The set of actions to be triggered when the rule applies. If more than one action is specified, the effects of all actions are combined. Values are skip_result or skip_model_update.
- An array of numeric conditions when the rule applies. A rule must either have a non-empty scope or at least one condition. Multiple conditions are combined together with a logical AND.
- A scope of series where the rule applies. A rule must either have a non-empty scope or at least one condition. By default, the scope includes all series. Scoping is allowed for any of the fields that are also specified in by_field_name, over_field_name, or partition_field_name.
- A description of the detector.
- A unique identifier for the detector. This identifier is based on the order of the detectors in the analysis_config, starting at zero. If you specify a value for this property, it is ignored.
- Values are all, none, by, or over.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- The analysis function that is used. For example, count, rare, mean, min, max, or sum.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- Defines whether a new series is used as the null series when there is no value for the by or partition fields.
- A comma-separated list of influencer field names. Typically these can be the by, over, or partition fields that are used in the detector configuration. You might also want to use a field name that is not specifically named in a detector, but is available as part of the input data. When you use multiple detectors, the use of influencers is recommended as it aggregates results for each influencer entity.
- A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- This functionality is reserved for internal use. It is not supported for use in customer environments and is not subject to the support SLA of official GA features. If set to true, the analysis will automatically find correlations between metrics for a given by field value and report anomalies when those correlations cease to hold. For example, suppose CPU and memory usage on host A is usually highly correlated with the same metrics on host B. Perhaps this correlation occurs because they are running a load-balanced application. If you enable this property, anomalies will be reported when, for example, CPU usage on host A is high and the value of CPU usage on host B is low. That is to say, you'll see an anomaly when the CPU of host A is unusual given the CPU of host B. To use the multivariate_by_fields property, you must also specify by_field_name in your detector.
- per_partition_categorization (object):
- To enable this setting, you must also set the partition_field_name property to the same value in every detector that uses the keyword mlcategory. Otherwise, job creation fails.
- This setting can be set to true only if per-partition categorization is enabled. If true, both categorization and subsequent anomaly detection stop for partitions where the categorization status changes to warn. This setting makes it viable to have a job where it is expected that categorization works well for some partitions but not others; you do not pay the cost of bad categorization forever in the partitions where it works badly.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- analysis_limits (object):
- The maximum number of examples stored per category in memory and in the results data store. If you increase this value, more examples are available; however, it requires that you have more storage available. If you set this value to 0, no examples are stored. NOTE: The categorization_examples_limit applies only to analysis that uses categorization.
- A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- Custom metadata about the job.
- Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies a period of time (in days) after which only the first snapshot per day is retained. This period is relative to the timestamp of the most recent snapshot for this job.
- data_description (object):
- Only JSON format is supported at this time.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- The time format, which can be epoch, epoch_ms, or a custom pattern. The value epoch refers to UNIX or Epoch time (the number of seconds since 1 Jan 1970). The value epoch_ms indicates that time is measured in milliseconds since the epoch. The epoch and epoch_ms time formats accept either integer or real values. Custom patterns must conform to the Java DateTimeFormatter class. When you use date-time formatting patterns, it is recommended that you provide the full date, time, and time zone. For example: yyyy-MM-dd'T'HH:mm:ssX. If the pattern that you specify is not sufficient to produce a complete timestamp, job creation fails.
- datafeed_config (object):
- If set, the datafeed performs aggregation searches. Support for aggregations is limited and should be used only with low cardinality data.
- chunking_config (object):
- delayed_data_check_config (object):
- A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- indices_options (object): Controls how to deal with unavailable concrete indices (closed or missing), how wildcard expressions are expanded to actual indices (all, closed, or open indices), and how to deal with wildcard expressions that resolve to no indices.
- If false, the request returns an error if any wildcard expression, index alias, or _all value targets only missing or closed indices. This behavior applies even if the request targets other open indices. For example, a request targeting foo*,bar* returns an error if an index starts with foo but no index starts with bar.
- If true, missing or closed indices are not included in the response.
- If true, concrete, expanded or aliased indices are ignored when frozen.
- If a real-time datafeed has never seen any data (including during any initial training period), it automatically stops itself and closes its associated job after this many real-time searches that return no documents. In other words, it stops after frequency times max_empty_searches of real-time operation. If not set, a datafeed with no end time that sees no data remains started until it is explicitly stopped.
- An Elasticsearch Query DSL (Domain Specific Language) object that defines a query. See the Elasticsearch Query DSL documentation.
- A duration. Units can be nanos, micros, ms (milliseconds), s (seconds), m (minutes), h (hours), and d (days). Also accepts "0" without a unit and "-1" to indicate an unspecified value.
- runtime_mappings (object):
- * (object):
- For type composite.
- For type lookup.
- A custom format for date type runtime fields.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- script (object):
- Values are boolean, composite, date, double, geo_point, geo_shape, ip, keyword, long, or lookup.
- script_fields (object): Specifies scripts that evaluate custom expressions and return script fields to the datafeed. The detector configuration objects in a job can contain functions that use these script fields.
- * (object):
- The size parameter that is used in Elasticsearch searches when the datafeed does not use aggregations. The maximum value is the value of index.max_result_window, which is 10,000 by default.
- A description of the job.
- A list of job groups. A job can belong to no groups or many.
- Reserved for future use, currently set to anomaly_detector.
- model_plot_config (object):
- If true, enables calculation and storage of the model change annotations for each entity that is being analyzed.
- If true, enables calculation and storage of the model bounds for each entity that is being analyzed.
- Path to field or array of paths. Some APIs support wildcards in the path to select multiple fields.
- Advanced configuration option, which affects the automatic removal of old model snapshots for this job. It specifies the maximum period of time (in days) that snapshots are retained. This period is relative to the timestamp of the most recent snapshot for this job. The default value is 10, which means snapshots ten days older than the newest snapshot are deleted.
- Advanced configuration option. The period over which adjustments to the score are applied, as new data is seen. The default value is the longer of 30 days or 100 bucket_spans.
- Advanced configuration option. The period of time (in days) that results are retained. Age is calculated relative to the timestamp of the latest bucket result. If this property has a non-null value, once per day at 00:30 (server time), results that are the specified number of days older than the latest bucket result are deleted from Elasticsearch. The default value is null, which means all results are retained. Annotations generated by the system also count as results for retention purposes; they are deleted after the same number of days as results. Annotations added by users are retained forever.
Request examples

Console:
GET _ml/datafeeds/datafeed-high_sum_total_sales/_preview

Python:
resp = client.ml.preview_datafeed(
    datafeed_id="datafeed-high_sum_total_sales",
)

JavaScript:
const response = await client.ml.previewDatafeed({
  datafeed_id: "datafeed-high_sum_total_sales",
});

Ruby:
response = client.ml.preview_datafeed(
  datafeed_id: "datafeed-high_sum_total_sales"
)

PHP:
$resp = $client->ml()->previewDatafeed([
    "datafeed_id" => "datafeed-high_sum_total_sales",
]);

curl:
curl -X GET -H "Authorization: ApiKey $ELASTIC_API_KEY" "$ELASTICSEARCH_URL/_ml/datafeeds/datafeed-high_sum_total_sales/_preview"
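To preview a datafeed that has not been created yet, the configuration can be supplied in the request body instead of a datafeed ID. A minimal sketch with the Python client follows; the index, field names, and job settings are illustrative only:

from elasticsearch import Elasticsearch

client = Elasticsearch("http://localhost:9200", api_key="...")  # placeholder connection

# Preview an unsaved configuration by sending datafeed_config and job_config
# in the body rather than referencing an existing datafeed by ID.
preview = client.ml.preview_datafeed(
    datafeed_config={
        "indices": ["kibana_sample_data_ecommerce"],  # illustrative index
        "query": {"match_all": {}},
    },
    job_config={
        "analysis_config": {
            "bucket_span": "1h",
            "detectors": [{"function": "sum", "field_name": "taxful_total_price"}],
        },
        "data_description": {"time_field": "order_date"},
    },
)
print(preview)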