This article takes a detailed look at paginating Elasticsearch aggregation results, including a close look at the ES aggregation pagination API. It also brings together related material on sorting Elasticsearch query and aggregation results, fetching an oversized aggregation result set in batches, ElasticSearch 5.x aggregation queries, and running an aggregation on top of another aggregation's results. We hope you find it helpful.
Contents:
- Paginating Elasticsearch aggregation results (ES aggregation pagination API)
- elasticsearch: sorting query and aggregation results
- elasticsearch: fetching an oversized aggregation result set in batches
- ElasticSearch 5.x practice (day04_01): ElasticSearch aggregation queries
- ElasticSearch: aggregating on the results of another aggregation
Paginating Elasticsearch aggregation results (ES aggregation pagination API)
I want to paginate the results of an Elasticsearch aggregation query using the 'size' and 'from' properties.
Is this possible?
So far I only know that setting size = 0 returns unlimited results.
Answer 1

Pagination within aggregations is not implemented. What you can do is combine the size parameter with the exclude feature of terms facets/aggregations.
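That said, newer Elasticsearch releases (6.1+) added the composite aggregation, which does support paging through buckets via an after key. A minimal sketch, assuming a hypothetical index and a category.keyword field:

POST /my_index/_search
{
  "size": 0,
  "aggs": {
    "paged_buckets": {
      "composite": {
        "size": 100,
        "sources": [
          { "category": { "terms": { "field": "category.keyword" } } }
        ]
      }
    }
  }
}

Each response includes an after_key; echoing it back as "after": { "category": "<last key>" } inside the composite block returns the next page of buckets.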
elasticsearch: sorting query and aggregation results
I. Sorting query results by a field
{
  "size" : 5,
  "query" : {
    "bool" : {
      "filter" : [
        {
          "range" : {
            "startTime" : {
              "from" : 1517046960000,
              "to" : 1517048760000
            }
          }
        }
      ]
    }
  },
  "sort" : [
    { "startTime" : { "order" : "desc" } }
  ]
}

// Java API: set the result size and sort by orderStr in ascending order
SearchRequestBuilder srb = this.createSearchRequestBuilder(new Date(begin), new Date(end));
srb.setQuery(queryBuilder)
   .setSize(queryParam.getPageIndex() * queryParam.getPageSize())
   .addSort(orderStr, SortOrder.ASC);
II. Sorting aggregation results
1. By matched document count
To sort buckets by the number of matching documents in descending order, use: "order" : { "_count" : "desc" }
{
  "size" : 0,
  "query" : {
    "bool" : {
      "filter" : [
        { "range" : {
            "startTime" : {
              "from" : 1515655800000,
              "to" : 1516865400000
            }
        } },
        { "term" : {
            "type" : {
              "value" : "URL",
              "boost" : 1.0
            }
        } }
      ]
    }
  },
  "aggregations" : {
    "CATEGORY" : {
      "terms" : {
        "field" : "errorCode",
        "size" : 5,
        "order" : {
          "_count" : "desc"
        }
      }
    }
  }
}

// In Java you cannot reference "_count" directly; order the terms aggregation like this instead
TermsAggregationBuilder termsAggBuilder = AggregationBuilders.terms(AggAlias.CATEGORY.getValue()).field(cateGoryFieldName);
termsAggBuilder.order(Terms.Order.count(false)).size(5);
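As a side note, Terms.Order was removed from later Java clients; from 6.x onward the equivalent ordering (to my knowledge) uses BucketOrder:

// 6.x+ Java client equivalent of the ordering above (same hypothetical builder variables)
termsAggBuilder.order(BucketOrder.count(false)).size(5);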
2. By a sub-aggregation metric
Buckets can also be ordered by a metric from a nested aggregation, e.g. "order" : { "responseTime.avg" : "asc" }:

{
  "query" : {
    "bool" : {
      "filter" : [
        {
          "term" : {
            "type" : {
              "value" : "URL",
              "boost" : 1.0
            }
          }
        }
      ]
    }
  },
  "aggregations" : {
    "CATEGORY" : {
      "terms" : {
        "field" : "name",
        "size" : 5,
        "order" : { "responseTime.avg" : "asc" }
      },
      "aggregations" : {
        "responseTime" : {
          "extended_stats" : {
            "field" : "durationInMillis",
            "sigma" : 2.0
          }
        },
        "error" : {
          "sum" : {
            "script" : {
              "inline" : "def errorTemp = doc['status'].value; if (errorTemp == '0') { return 0; } else { return 1; }",
              "lang" : "painless"
            }
          }
        },
        "apdex" : {
          "avg" : {
            "script" : {
              "inline" : "def responseTemp = doc['durationInMillis'].value; if (responseTemp > params.threshold) { return 0.5; } else { return 1; }",
              "lang" : "painless",
              "params" : {
                "threshold" : 20.0
              }
            }
          }
        },
        "errorRate" : {
          "percentile_ranks" : {
            "script" : {
              "inline" : "def errorTemp = doc['status'].value; if (errorTemp == '0') { return 1; } else { return 0; }",
              "lang" : "painless"
            },
            "values" : [ 0.0 ],
            "keyed" : true,
            "tdigest" : {
              "compression" : 100.0
            }
          }
        }
      }
    }
  }
}

// Note: for the statement above, the parameters passed in should be orderStr = "responseTime.avg" and ascOrder = true
AggregationBuilders.terms(AggAlias.CATEGORY.getValue()).field(categoryfieldName).order(Terms.Order.aggregation(orderStr, ascOrder)).size(size);
elasticsearch: fetching an oversized aggregation result set in batches
I need to retrieve the full result set of an aggregation, but it is so large (on the order of 100,000 buckets) that the JVM throws an OOM error. How can I fetch the aggregation results in batches? As far as I know, neither deep pagination nor the scroll API seems to solve this problem.
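The question is left unanswered in the source, but one common workaround is to split a terms aggregation into partitions so that each request returns only a slice of the key space. A sketch, with the field name and partition count as illustrative assumptions:

POST /my_index/_search
{
  "size": 0,
  "aggs": {
    "batch": {
      "terms": {
        "field": "userId.keyword",
        "include": { "partition": 0, "num_partitions": 20 },
        "size": 5000
      }
    }
  }
}

Running this once for each partition from 0 to 19 walks the full bucket set in bounded chunks; the composite aggregation with its after key, sketched earlier, is the other standard option.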
ElasticSearch 5.x practice (day04_01): ElasticSearch aggregation queries
Aggregations give you the ability to group your data and extract statistics from it. The easiest way to think about aggregations is to roughly equate them to SQL GROUP BY and SQL aggregate functions. In Elasticsearch, you can execute a search that returns hits and, in the same response, also returns aggregation results. This is powerful and efficient: with one concise API call and a single network round trip, you can run a query and multiple aggregations at once.
First, this example groups all accounts by state and returns the top 10 states sorted by document count in descending order.
POST /bank/_search
{
  "aggs": {
    "group_by_state": {
      "terms": {
        "field": "state.keyword"
      }
    }
  },
  "size": 0
}
Conceptually, the aggregation above is similar to this SQL:
SELECT state, COUNT(*) FROM bank GROUP BY state ORDER BY COUNT(*) DESC
{ "_shards": { "failed": 0, "successful": 5, "total": 5 }, "aggregations": { "group_by_state": { "buckets": [ { "doc_count": 27, "key": "ID" }, { "doc_count": 27, "key": "TX" }, { "doc_count": 25, "key": "AL" }, { "doc_count": 25, "key": "MD" }, { "doc_count": 23, "key": "TN" }, { "doc_count": 21, "key": "MA" }, { "doc_count": 21, "key": "NC" }, { "doc_count": 21, "key": "ND" }, { "doc_count": 20, "key": "ME" }, { "doc_count": 20, "key": "MO" } ], "doc_count_error_upper_bound": 20, "sum_other_doc_count": 770 } }, "hits": { "hits": [], "max_score": 0.0, "total": 1000 }, "timed_out": false, "took": 29 }
We can see that there are 27 accounts in ID (Idaho), followed by 27 accounts in TX (Texas), then 25 accounts in AL (Alabama), and so on.
Note that we set size to 0 so that the search hits are not shown, because we only want to see the aggregation results in the response.
Building on the previous aggregation, the next example computes the average account balance for each state (again only for the top 10 states, sorted by count in descending order).
POST /bank/_search
{
  "aggs": {
    "group_by_state": {
      "aggs": {
        "average_balance": {
          "avg": {
            "field": "balance"
          }
        }
      },
      "terms": {
        "field": "state.keyword"
      }
    }
  },
  "size": 0
}
Note how we nested the average_balance aggregation inside the group_by_state aggregation. This is a common pattern for all aggregations: you can nest aggregations inside aggregations arbitrarily to extract whatever summaries you need from your data.
Building on the previous aggregation, let's now sort the states by average balance in descending order.
POST /bank/_search
{
  "aggs": {
    "group_by_state": {
      "aggs": {
        "average_balance": {
          "avg": {
            "field": "balance"
          }
        }
      },
      "terms": {
        "field": "state.keyword",
        "order": {
          "average_balance": "desc"
        }
      }
    }
  },
  "size": 0
}
This example demonstrates grouping by age bracket (ages 20-29, 30-39, and 40-49), then by gender, and finally computing the average account balance per age bracket, per gender.
POST /bank/_search
{
  "aggs": {
    "group_by_age": {
      "aggs": {
        "group_by_gender": {
          "aggs": {
            "average_balance": {
              "avg": {
                "field": "balance"
              }
            }
          },
          "terms": {
            "field": "gender.keyword"
          }
        }
      },
      "range": {
        "field": "age",
        "ranges": [
          { "from": 20, "to": 30 },
          { "from": 30, "to": 40 },
          { "from": 40, "to": 50 }
        ]
      }
    }
  },
  "size": 0
}
ElasticSearch: aggregating on the results of another aggregation
There is a list of conversations, and each conversation has a list of messages. Each message has several fields, including an action field. Within one conversation, the first message might use action A, a few messages later action A.1, then after a while A.1.1, and so on (there is a list of chatbot intents).
Grouping a conversation's message actions would look like: A > A > A > A.1 > A > A.1 > A.1.1 ...
The problem:
Using ElasticSearch, I need to create a report that returns the actions group of each conversation; next, I need to group the similar actionsGroups together and add a count, which will ultimately produce a Map<actionsGroup, count> such as 'A > A.1 > A > A.1 > A.1.1', 3.
When building each actions group, I need to eliminate consecutive duplicates: instead of A > A > A > A.1 > A > A.1 > A.1.1 I need to end up with A > A.1 > A > A.1 > A.1.1.
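The consecutive-duplicate removal itself is straightforward; here is a standalone Painless-style sketch (the actions list is a hypothetical input, not tied to any particular aggregation context):

// Collapse consecutive duplicates:
// [A, A, A, A.1, A, A.1, A.1.1] -> [A, A.1, A, A.1, A.1.1]
List dedup = new ArrayList();
for (def action : actions) {
  if (dedup.isEmpty() || !dedup.get(dedup.size() - 1).equals(action)) {
    dedup.add(action);
  }
}
return dedup;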
The steps I have taken so far:
{ "collapse":{ "field":"context.conversationId", "inner_hits":{ "name":"logs", "size": 10000, "sort":[ { "@timestamp":"asc" } ] } }, "aggs":{ },}
What I need next:
- I need to map the collapsed results into a single result like A > A.1 > A > A.1 > A.1.1. I have seen that an aggregation can use scripts and could build the list of actions I need, but it would operate over all messages, not only over the ones grouped by the collapse. Is it possible to run an aggregation inside the collapse, or is there a similar solution?
- I need to group the A > A.1 > A > A.1 > A.1.1 values across all the collapsed results, add a count, and arrive at a Map<actionsGroup, count>.
Or alternatively:
- Group the conversation messages by the conversationId field using an aggregation (I don't know how to do this).
- Use a script to iterate over all values and build the actions group for each conversation (not sure this is possible).
- Use another aggregation over all the values, group the duplicates, and return a Map<actionsGroup, count>.
Update 1: adding some more details.
The mapping:
"mappings":{ "properties":{ "@timestamp":{ "type":"date", "format": "epoch_millis" } "context":{ "properties":{ "action":{ "type":"keyword" }, "conversationId":{ "type":"keyword" } } } }}
Sample documents for the conversations:
Conversation 1:
{
  "@timestamp": 1579632745000,
  "context": { "action": "A", "conversationId": "conv_id1" }
},
{
  "@timestamp": 1579632745001,
  "context": { "action": "A.1", "conversationId": "conv_id1" }
},
{
  "@timestamp": 1579632745002,
  "context": { "action": "A.1.1", "conversationId": "conv_id1" }
}

Conversation 2:
{
  "@timestamp": 1579632745000,
  "context": { "action": "A", "conversationId": "conv_id2" }
},
{
  "@timestamp": 1579632745001,
  "context": { "action": "A.1", "conversationId": "conv_id2" }
},
{
  "@timestamp": 1579632745002,
  "context": { "action": "A.1.1", "conversationId": "conv_id2" }
}

Conversation 3:
{
  "@timestamp": 1579632745000,
  "context": { "action": "B", "conversationId": "conv_id3" }
},
{
  "@timestamp": 1579632745001,
  "context": { "action": "B.1", "conversationId": "conv_id3" }
}
Expected result (in this or any similar format):
{
  "A -> A.1 -> A.1.1": 2,
  "B -> B.1": 1
}
Since I am new to Elasticsearch, every hint is welcome.
Answer 1

I solved it with Elasticsearch's scripted_metric aggregation. Note that the index structure was changed from the initial state shown above.
The script:
{ "size": 0, "aggs": { "intentPathsCountAgg": { "scripted_metric": { "init_script": "state.messagesList = new ArrayList();", "map_script": "long currentMessageTime = doc[''messageReceivedEvent.context.timestamp''].value.millis; Map currentMessage = [''conversationId'': doc[''messageReceivedEvent.context.conversationId.keyword''], ''time'': currentMessageTime, ''intentsPath'': doc[''brainQueryRequestEvent.brainQueryRequest.user_data.intentsHistoryPath.keyword''].value]; state.messagesList.add(currentMessage);", "combine_script": "return state", "reduce_script": "List messages = new ArrayList(); Map conversationsMap = new HashMap(); Map intentsMap = new HashMap(); String[] ifElseWorkaround = new String[1]; for (state in states) { messages.addAll(state.messagesList);} messages.stream().forEach((message) -> { Map existingMessage = conversationsMap.get(message.conversationId); if(existingMessage == null || message.time > existingMessage.time) { conversationsMap.put(message.conversationId, [''time'': message.time, ''intentsPath'': message.intentsPath]); } else { ifElseWorkaround[0] = ''''; } }); conversationsMap.entrySet().forEach(conversation -> { if (intentsMap.containsKey(conversation.getValue().intentsPath)) { long intentsCount = intentsMap.get(conversation.getValue().intentsPath) + 1; intentsMap.put(conversation.getValue().intentsPath, intentsCount); } else {intentsMap.put(conversation.getValue().intentsPath, 1L);} }); return intentsMap.entrySet().stream().map(intentPath -> [intentPath.getKey().toString(): intentPath.getValue()]).collect(Collectors.toSet()) " } } }}
The same script, formatted for readability (as embedded in a .ts file):
scripted_metric: {
  init_script: 'state.messagesList = new ArrayList();',
  map_script: `
    long currentMessageTime = doc['messageReceivedEvent.context.timestamp'].value.millis;
    Map currentMessage = [
      'conversationId': doc['messageReceivedEvent.context.conversationId.keyword'],
      'time': currentMessageTime,
      'intentsPath': doc['brainQueryRequestEvent.brainQueryRequest.user_data.intentsHistoryPath.keyword'].value
    ];
    state.messagesList.add(currentMessage);`,
  combine_script: 'return state',
  reduce_script: `
    List messages = new ArrayList();
    Map conversationsMap = new HashMap();
    Map intentsMap = new HashMap();
    boolean[] ifElseWorkaround = new boolean[1];
    for (state in states) {
      messages.addAll(state.messagesList);
    }
    messages.stream().forEach(message -> {
      Map existingMessage = conversationsMap.get(message.conversationId);
      if (existingMessage == null || message.time > existingMessage.time) {
        conversationsMap.put(message.conversationId, ['time': message.time, 'intentsPath': message.intentsPath]);
      } else {
        ifElseWorkaround[0] = true;
      }
    });
    conversationsMap.entrySet().forEach(conversation -> {
      if (intentsMap.containsKey(conversation.getValue().intentsPath)) {
        long intentsCount = intentsMap.get(conversation.getValue().intentsPath) + 1;
        intentsMap.put(conversation.getValue().intentsPath, intentsCount);
      } else {
        intentsMap.put(conversation.getValue().intentsPath, 1L);
      }
    });
    return intentsMap.entrySet().stream().map(intentPath -> [
      'path': intentPath.getKey().toString(),
      'count': intentPath.getValue()
    ]).collect(Collectors.toSet())`
}
The response:
{ "took": 2, "timed_out": false, "_shards": { "total": 5, "successful": 5, "skipped": 0, "failed": 0 }, "hits": { "total": { "value": 11, "relation": "eq" }, "max_score": null, "hits": [] }, "aggregations": { "intentPathsCountAgg": { "value": [ { "smallTalk.greet -> smallTalk.greet2 -> smallTalk.greet3": 2 }, { "smallTalk.greet -> smallTalk.greet2 -> smallTalk.greet3 -> smallTalk.greet4": 1 }, { "smallTalk.greet -> smallTalk.greet2": 1 } ] } }}
That concludes today's introduction to paginating Elasticsearch aggregation results and the ES aggregation pagination API. Thank you for reading; more information on sorting Elasticsearch query and aggregation results, fetching oversized aggregation result sets in batches, ElasticSearch 5.x aggregation queries, and aggregating on the results of another aggregation can be found on this site.