
Merge bytebuffer and memory stores into a single memory store options #22

Closed
kimchy opened this issue Feb 17, 2010 · 1 comment

Comments

@kimchy
Member

kimchy commented Feb 17, 2010

The bytebuffer store name is really bad. It exposes the user to a Java internal detail: how direct memory allocation (outside the JVM heap) is done.

Instead, there should be a single memory store, with the option to choose its "location", which can be either "heap" or "direct", with "direct" being the default.

This does mean that if someone was configured to use the bytebuffer store, things will break and they will need to change to the memory type.
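For illustration only, a minimal sketch of how such a setting could look. The option name and values follow the proposal above, but the exact keys are assumptions, not final syntax:

```yaml
# Hypothetical index settings sketch based on the proposal above.
index:
  store:
    type: memory          # single store type replacing the separate "bytebuffer" type
    memory:
      location: direct    # "direct" (off-heap, the proposed default) or "heap"
```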

@kimchy
Member Author

kimchy commented Feb 17, 2010

Merge bytebuffer and memory stores into a single memory store options, closed by 8727815.

dadoonet added a commit that referenced this issue Jun 5, 2015
With #2784, we can now add the plugin version in the `es-plugin.properties` file.

It will only be used with elasticsearch 1.0.0 and later. There is no need to push it to the 1.x branch.

Closes #22.
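As a rough illustration, an `es-plugin.properties` file carrying a version could look like the sketch below; the plugin class and version number are placeholders, only the idea of a `version` entry comes from this change:

```properties
# Sketch of a plugin descriptor carrying a version (values are placeholders).
plugin=org.elasticsearch.plugin.example.ExamplePlugin
version=2.3.0
```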
dadoonet added a commit that referenced this issue Jun 5, 2015
The Beider-Morse encoder does not support the "replace" option: only new tokens will be returned.
One side effect is that highlighting will not work.
This is because Lucene's Beider-Morse filter does not support this option.
Please consider updating the documentation to specify which encoders support the `"replace": false` option.

Closes #22.

(cherry picked from commit c307877)
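For context, this is the kind of analysis configuration affected. A minimal sketch of index settings (assuming the phonetic analysis plugin is installed) where `"replace": false` is silently ignored for the `beider_morse` encoder:

```javascript
{
    "settings": {
        "analysis": {
            "filter": {
                "my_bm_filter": {
                    "type": "phonetic",
                    "encoder": "beider_morse",
                    "replace": false
                }
            },
            "analyzer": {
                "my_bm_analyzer": {
                    "tokenizer": "standard",
                    "filter": ["lowercase", "my_bm_filter"]
                }
            }
        }
    }
}
```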
dadoonet added a commit that referenced this issue Jun 5, 2015
When no tags exist on other running instances and we try to filter by tag, we get the following error:

```
[2014-05-19 16:17:37,377][DEBUG][discovery.gce            ] [Theresa Cassidy] start building nodes list using GCE API
[2014-05-19 16:17:37,378][INFO ][cloud.gce                ] [Theresa Cassidy] starting GCE discovery service
[2014-05-19 16:17:37,592][TRACE][discovery.gce            ] [Theresa Cassidy] gce instance hadoop1 with status RUNNING found.
[2014-05-19 16:17:37,597][TRACE][discovery.gce            ] [Theresa Cassidy] start filtering instance hadoop1 with tags [elasticsearch, dev].
[2014-05-19 16:17:37,597][TRACE][discovery.gce            ] [Theresa Cassidy] comparing instance tags null with tags filter [elasticsearch, dev].
[2014-05-19 16:17:37,597][WARN ][discovery.gce            ] [Theresa Cassidy] Exception caught during discovery java.lang.NullPointerException : null
[2014-05-19 16:17:37,597][TRACE][discovery.gce            ] [Theresa Cassidy] Exception caught during discovery
java.lang.NullPointerException
    at org.elasticsearch.discovery.gce.GceUnicastHostsProvider.buildDynamicNodes(GceUnicastHostsProvider.java:157)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing.sendPings(UnicastZenPing.java:245)
    at org.elasticsearch.discovery.zen.ping.unicast.UnicastZenPing$2.run(UnicastZenPing.java:176)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:744)
[2014-05-19 16:17:37,598][DEBUG][discovery.gce            ] [Theresa Cassidy] 0 node(s) added
[2014-05-19 16:17:37,598][DEBUG][discovery.gce            ] [Theresa Cassidy] using dynamic discovery nodes []
```

Closes #22.
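A minimal sketch of the kind of guard that avoids the NullPointerException when an instance carries no tags; the names below are illustrative, not the actual plugin code:

```java
import java.util.Collections;
import java.util.List;

// Illustrative sketch only: treat a missing tag list as empty so the tag filter
// never dereferences null (the real fix lives in GceUnicastHostsProvider).
final class TagFilter {
    static boolean matchesAllTags(List<String> instanceTags, List<String> tagsFilter) {
        List<String> tags = instanceTags == null ? Collections.<String>emptyList() : instanceTags;
        return tags.containsAll(tagsFilter);
    }
}
```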
dadoonet added a commit that referenced this issue Jun 5, 2015
Looks like `WordTokenFilter` has been [deprecated in Lucene 4.8](http://lucene.apache.org/core/4_8_0/analyzers-smartcn/org/apache/lucene/analysis/cn/smart/WordTokenFilter.html) and, looking at the javadoc, it seems that only the [HMMChineseTokenizer](http://lucene.apache.org/core/4_8_0/analyzers-smartcn/org/apache/lucene/analysis/cn/smart/HMMChineseTokenizer.html) will be supported.

We need to deprecate `smartcn_word` and `smartcn_sentence`.
We add `smartcn_tokenizer`, which does both.

 Closes #22.

(cherry picked from commit 64dcb9b)
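By way of example, index settings using the merged tokenizer could look like the sketch below; the analyzer name is a placeholder:

```javascript
{
    "settings": {
        "analysis": {
            "analyzer": {
                "my_smartcn_analyzer": {
                    "type": "custom",
                    "tokenizer": "smartcn_tokenizer"
                }
            }
        }
    }
}
```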
dadoonet added a commit that referenced this issue Jun 5, 2015
were deprecated in 2.2.0 by #22.

Closes #24.

(cherry picked from commit 2bab6e0)
dadoonet added a commit that referenced this issue Jun 9, 2015
Related to #21.
Closes #22.
(cherry picked from commit c3964ad)
dadoonet added a commit that referenced this issue Jun 9, 2015
It sounds like Jython 2.5.3 is leaking some threads.

Jython 2.5.4.rc1 has the same issue.

Jython 2.7-b3 fixes it.

Typical error when running tests:

```
ERROR   0.00s J2 | PythonScriptEngineTests (suite) <<<
   > Throwable #1: com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE scope at org.elasticsearch.script.python.PythonScriptEngineTests:
   >    1) Thread[id=12, name=org.python.google.common.base.internal.Finalizer, state=WAITING, group=TGRP-PythonScriptEngineTests]
   >         at java.lang.Object.wait(Native Method)
   >         at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
   >         at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
   >         at org.python.google.common.base.internal.Finalizer.run(Finalizer.java:127)
   >    at __randomizedtesting.SeedInfo.seed([7A5ECFD8D0474383]:0)
   > Throwable #2: com.carrotsearch.randomizedtesting.ThreadLeakError: There are still zombie threads that couldn't be terminated:
   >    1) Thread[id=12, name=org.python.google.common.base.internal.Finalizer, state=WAITING, group=TGRP-PythonScriptEngineTests]
   >         at java.lang.Object.wait(Native Method)
   >         at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:135)
   >         at java.lang.ref.ReferenceQueue.remove(ReferenceQueue.java:151)
   >         at org.python.google.common.base.internal.Finalizer.run(Finalizer.java:127)
   >    at __randomizedtesting.SeedInfo.seed([7A5ECFD8D0474383]:0)
```

Closes #22.
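For the plugin build, the practical change is bumping the Jython dependency. A sketch of the Maven fragment follows; the artifact coordinates are an assumption, check the plugin's pom.xml for the actual ones:

```xml
<!-- Sketch only: move to Jython 2.7-b3, which no longer leaks the Finalizer thread. -->
<dependency>
    <groupId>org.python</groupId>
    <artifactId>jython-standalone</artifactId>
    <version>2.7-b3</version>
</dependency>
```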
rmuir pushed a commit to rmuir/elasticsearch that referenced this issue Nov 8, 2015
If you define some specific mapping for your file content, such as the following:

```javascript
{
    "person": {
        "properties": {
            "file": {
                "type": "attachment",
                "path": "full",
                "fields": {
                    "date": { "type": "string" }
                }
            }
        }
    }
}
```

Then, if you ask for the mapping back, you get:

```javascript
{
   "person":{
      "properties":{
         "file":{
            "type":"attachment",
            "path":"full",
            "fields":{
               "file":{
                  "type":"string"
               },
               "author":{
                  "type":"string"
               },
               "title":{
                  "type":"string"
               },
               "name":{
                  "type":"string"
               },
               "date":{
                  "type":"date",
                  "format":"dateOptionalTime"
               },
               "keywords":{
                  "type":"string"
               },
               "content_type":{
                  "type":"string"
               }
            }
         }
      }
   }
}
```

All your settings have been overwritten by the mapper plugin.

See also elastic#22, where the issue was found.

Closes elastic#39.
ywelsch pushed a commit to ywelsch/elasticsearch that referenced this issue Apr 24, 2018
Formerly, there was no explicit notion of failure of a PublicationTarget. This
change makes the state transitions of each target explicit and adds a FAILED 
state.

Additionally, we formerly sent ApplyCommit messages to all PublicationTargets
when a value was committed, but some of them might not even have received the
PublishRequest and so were rejecting the ApplyCommit, which was hard to
distinguish from actual failures. This change alters the behaviour so that
ApplyCommit messages are only sent to nodes that have definitely received the
corresponding PublishRequest, so a rejection of an ApplyCommit always
represents a genuine failure.

Finally, it's possible to be sure that this publication will fail if the
remaining non-failed targets do not form a quorum, and in this situation the
most sensible response is for the leader to stand down. Formerly we waited for
the publication to time out before doing so, but this change alters the
behaviour to do so explicitly.
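A rough sketch of the explicit per-target state machine described above; the names are illustrative and not the actual Elasticsearch class:

```java
// Illustrative sketch of explicit publication-target states, including the new FAILED state.
// ApplyCommit is only sent to targets that have already acknowledged the PublishRequest,
// so a rejection of ApplyCommit from such a target represents a genuine failure.
enum PublicationTargetState {
    NOT_STARTED,
    SENT_PUBLISH_REQUEST,
    WAITING_FOR_QUORUM,
    SENT_APPLY_COMMIT,
    APPLIED_COMMIT,
    FAILED
}
```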
ClaudioMFreitas pushed a commit to ClaudioMFreitas/elasticsearch-1 that referenced this issue Nov 12, 2019
palesz pushed a commit that referenced this issue Mar 11, 2021
…9765)

Previously we did not resolve the attributes recursively which meant that if a field or expression was re-aliased multiple times (through multiple levels of subqueries), the aliases were only resolved one level down. This led to failed query translation because `ReferenceAttribute`s were pointing to non-existing attributes during query translation.

For example the query

```sql
SELECT i AS j FROM ( SELECT int AS i FROM test) ORDER BY j
```

failed during translation because the `OrderBy` resolved the `j` ReferenceAttribute to another `i` ReferenceAttribute that was later removed by an Optimization:

```
OrderBy[[Order[j{r}#4,ASC,LAST]]]                                             ! OrderBy[[Order[i{r}#2,ASC,LAST]]]
\_Project[[j]]                                                                = \_Project[[j]]
  \_Project[[i]]                                                              !   \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..]
    \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..] ! 
```

By resolving the `Attributes` recursively both `j{r}` and `i{r}` will resolve to `test.int{f}` above:

```
OrderBy[[Order[test.int{f}#22,ASC,LAST]]]                                     = OrderBy[[Order[test.int{f}#22,ASC,LAST]]]
\_Project[[j]]                                                                = \_Project[[j]]
  \_Project[[i]]                                                              !   \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..]
    \_EsRelation[test][date{f}#6, some{f}#7, some.string{f}#8, some.string..] ! 
 ```

The scope of recursive resolution depends on how the `AttributeMap` is constructed and populated.

Fixes #67237
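A minimal sketch of what resolving recursively means here (illustrative only; the real logic lives in `AttributeMap` and the analyzer): follow alias/reference links until a concrete attribute is reached, e.g. `j -> i -> test.int`.

```java
import java.util.Map;

// Illustrative sketch: resolve an attribute through a chain of aliases instead of
// stopping after a single lookup. Assumes the alias map contains no cycles.
final class AliasResolver {
    static String resolveRecursively(String attribute, Map<String, String> aliases) {
        String current = attribute;
        while (aliases.containsKey(current)) {  // keep following the reference chain
            current = aliases.get(current);
        }
        return current;                         // e.g. "j" -> "i" -> "test.int"
    }
}
```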
palesz pushed a commit to palesz/elasticsearch that referenced this issue Mar 11, 2021
…astic#69765)

Previously we did not resolve the attributes recursively which meant that if a field or expression was re-aliased multiple times (through multiple levels of subqueries), the aliases were only resolved one level down. This led to failed query translation because `ReferenceAttribute`s were pointing to non-existing attributes during query translation.

For example the query

```sql
SELECT i AS j FROM ( SELECT int AS i FROM test) ORDER BY j
```

failed during translation because the `OrderBy` resolved the `j` ReferenceAttribute to another `i` ReferenceAttribute that was later removed by an Optimization:

```
OrderBy[[Order[j{r}elastic#4,ASC,LAST]]]                                             ! OrderBy[[Order[i{r}elastic#2,ASC,LAST]]]
\_Project[[j]]                                                                = \_Project[[j]]
  \_Project[[i]]                                                              !   \_EsRelation[test][date{f}elastic#6, some{f}elastic#7, some.string{f}elastic#8, some.string..]
    \_EsRelation[test][date{f}elastic#6, some{f}elastic#7, some.string{f}elastic#8, some.string..] ! 
```

By resolving the `Attributes` recursively both `j{r}` and `i{r}` will resolve to `test.int{f}` above:

```
OrderBy[[Order[test.int{f}elastic#22,ASC,LAST]]]                                     = OrderBy[[Order[test.int{f}elastic#22,ASC,LAST]]]
\_Project[[j]]                                                                = \_Project[[j]]
  \_Project[[i]]                                                              !   \_EsRelation[test][date{f}elastic#6, some{f}elastic#7, some.string{f}elastic#8, some.string..]
    \_EsRelation[test][date{f}elastic#6, some{f}elastic#7, some.string{f}elastic#8, some.string..] ! 
 ```

The scope of recursive resolution depends on how the `AttributeMap` is constructed and populated.

Fixes elastic#67237
palesz pushed a commit that referenced this issue Mar 11, 2021
…9765) (#70325)

palesz pushed a commit that referenced this issue Mar 11, 2021
…9765) (#70322)

cbuescher pushed a commit to cbuescher/elasticsearch that referenced this issue Oct 2, 2023
With this commit we ensure the nightly benchmarks start with the
"maximum" configuration, which is at the moment a three-node benchmark.
This is necessary because night_rally enforces a build on the first step
but skips it afterwards. We could (and maybe also will) implement detection
logic to determine whether a build is needed, but for now the pragmatic
choice is just to start with the maximum configuration.

Relates elastic#22
This issue was closed.