SPARK-1150: fix repo location in create script (re-open) #52

Closed
wants to merge 1 commit

Conversation

CodingCat
Contributor

@AmplabJenkins

Can one of the admins verify this patch?

@pwendell
Contributor

pwendell commented Mar 2, 2014

Thanks, merged

asfgit closed this in fe195ae Mar 2, 2014
jhartlaub referenced this pull request in jhartlaub/spark May 27, 2014
Add an optional closure parameter to HadoopRDD instantiation to use when creating local JobConfs.

Having HadoopRDD accept this optional closure eliminates the need for the HadoopFileRDD added earlier. It makes the HadoopRDD more general, in that the caller can specify any JobConf initialization flow.

(cherry picked from commit 9979690)
Signed-off-by: Reynold Xin <[email protected]>
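
A minimal standalone sketch of the pattern described above, assuming a hypothetical `LocalJobConfFactory` rather than Spark's actual `HadoopRDD` internals: the caller supplies an optional `JobConf => Unit` closure that is applied whenever a local JobConf is created.

```
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.mapred.JobConf

// Hypothetical helper mirroring the commit's idea: an optional closure that
// customizes each locally created JobConf, so callers can specify any
// JobConf initialization flow without a dedicated file-specific RDD subclass.
class LocalJobConfFactory(
    baseConf: Configuration,
    initLocalJobConfFuncOpt: Option[JobConf => Unit] = None) {

  /** Builds a fresh JobConf and lets the caller-provided closure customize it. */
  def newLocalJobConf(): JobConf = {
    val jobConf = new JobConf(baseConf)
    initLocalJobConfFuncOpt.foreach(f => f(jobConf))
    jobConf
  }
}

// Example: the input path is set only when a local JobConf is actually built.
object LocalJobConfFactoryExample extends App {
  val factory = new LocalJobConfFactory(
    new Configuration(),
    Some(conf => conf.set("mapreduce.input.fileinputformat.inputdir", "/tmp/input")))
  println(factory.newLocalJobConf().get("mapreduce.input.fileinputformat.inputdir"))
}
```
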
clockfly added a commit to clockfly/spark that referenced this pull request Aug 30, 2016
## What changes were proposed in this pull request?

This is the second part of the closure translation feature, which translates the Node tree returned from ByteCodeParser into Spark SQL expressions.

For example, the input Node tree for a filter operation:

```
  Arithmetic[Z](>)
    Argument[I]
    Constant[I](0)
```

is translated to the expression:

```
GreaterThan
  ColumnField("value")
  Literal(0)
```

After translation, the expression may be further flattened to Seq[Expression] if its type
contains sub-fields. This is consistent with the behavior of the Dataset typed Map operation.

```
// If the type U is a case class, then all fields of type U are flattened. The result Dataset
// may contain multiple fields.
dataset.map(func: T => U)
```
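
A rough sketch of this translation step, using a simplified hypothetical `Node` ADT in place of ByteCodeParser's real output and Catalyst's `UnresolvedAttribute` in place of the patch's `ColumnField`:

```
import org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute
import org.apache.spark.sql.catalyst.expressions.{Expression, GreaterThan, Literal}

// Simplified, hypothetical Node ADT; the real parser output carries type tags
// such as [I] and [Z] and many more node kinds.
sealed trait Node
case class Arithmetic(op: String, left: Node, right: Node) extends Node
case class Argument(fieldName: String) extends Node
case class Constant(value: Any) extends Node

object NodeTranslator {
  // Translate a Node tree into a Catalyst expression, mirroring the example
  // above: Arithmetic(>) over an Argument and a Constant becomes
  // GreaterThan(column, literal).
  def translate(node: Node): Expression = node match {
    case Arithmetic(">", left, right) => GreaterThan(translate(left), translate(right))
    case Argument(name)               => UnresolvedAttribute(Seq(name))
    case Constant(value)              => Literal(value)
    case other =>
      throw new UnsupportedOperationException(s"Cannot translate node: $other")
  }
}

// NodeTranslator.translate(Arithmetic(">", Argument("value"), Constant(0)))
// yields GreaterThan(UnresolvedAttribute(Seq("value")), Literal(0))
```
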

### Design doc

https://docs.google.com/document/d/1JZ0-lZfjGTMZto7Oxg_yOkZh3G6VDo4lznVMSD0EM6k/edit

## How was this patch tested?

Unit tests.

Author: Sean Zhong <[email protected]>

Closes apache#52 from clockfly/closure_parser_part2.
robert3005 pushed a commit to robert3005/spark that referenced this pull request Jan 12, 2017
tnachen pushed a commit to tnachen/spark that referenced this pull request Jan 27, 2017
* Use "extraTestArgLine" to pass extra options to scalatest.

Because the "argLine" option of scalatest is set in pom.xml and we can't
overwrite it from the command line.

Ref apache-spark-on-k8s#37

* Added a default value for extraTestArgLine

* Use a better name.

* Added a tip for this in the dev docs.
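
The property added in the commit above would normally be supplied as a Maven system property on the command line; the exact pom.xml wiring is not shown here, so the invocation below is only an assumed usage sketch.

```
# Assumed usage: forward extra JVM options to scalatest through the new
# extraTestArgLine property, since argLine itself is fixed in pom.xml.
# (spark.test.exampleFlag is a made-up example option.)
build/mvn test -DextraTestArgLine="-Dspark.test.exampleFlag=true"
```
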
lins05 added a commit to lins05/spark that referenced this pull request Apr 23, 2017
* Use "extraTestArgLine" to pass extra options to scalatest.

Because the "argLine" option of scalatest is set in pom.xml and we can't
overwrite it from the command line.

Ref apache-spark-on-k8s#37

* Added a default value for extraTestArgLine

* Use a better name.

* Added a tip for this in the dev docs.
erikerlandson pushed a commit to erikerlandson/spark that referenced this pull request Jul 28, 2017
* Use "extraTestArgLine" to pass extra options to scalatest.

Because the "argLine" option of scalatest is set in pom.xml and we can't
overwrite it from the command line.

Ref apache-spark-on-k8s#37

* Added a default value for extraTestArgLine

* Use a better name.

* Added a tip for this in the dev docs.
jlopezmalla pushed a commit to jlopezmalla/spark that referenced this pull request Nov 3, 2017
gcz2022 pushed a commit to gcz2022/spark that referenced this pull request Jul 30, 2018
…pache#52)

* Fix exception: Child of ShuffleQueryStage must be a ShuffleExchange

* top ShuffleExchange of QueryStage should not be removed anyway

* remove unnecessary parentheses

* check top shuffle exchange for ShuffleQueryStage only

* minor comments

* improve topShuffleCheck

* simplify code
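
A sketch of the invariant these bullets enforce, using hypothetical, heavily simplified plan-node classes rather than the fork's real adaptive-execution types:

```
// Hypothetical, simplified plan nodes; the real classes in the adaptive
// execution fork carry far more state.
sealed trait PlanNode
case class Scan(table: String) extends PlanNode
case class ShuffleExchange(child: PlanNode) extends PlanNode
case class ShuffleQueryStage(child: PlanNode) extends PlanNode

object TopShuffleCheck {
  // The invariant from the first bullet: a ShuffleQueryStage's child must be
  // a ShuffleExchange, so the top exchange must not be optimized away.
  def check(stage: ShuffleQueryStage): Unit = stage.child match {
    case _: ShuffleExchange => // invariant holds
    case other =>
      throw new IllegalStateException(
        s"Child of ShuffleQueryStage must be a ShuffleExchange, found: $other")
  }
}

// TopShuffleCheck.check(ShuffleQueryStage(ShuffleExchange(Scan("t"))))  // ok
// TopShuffleCheck.check(ShuffleQueryStage(Scan("t")))                   // throws
```
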
luzhonghao pushed a commit to luzhonghao/spark that referenced this pull request Dec 11, 2018
hejian991 pushed a commit to growingio/spark that referenced this pull request Jun 24, 2019
bzhaoopenstack pushed a commit to bzhaoopenstack/spark that referenced this pull request Sep 11, 2019
wangyum pushed a commit that referenced this pull request May 26, 2023
* index syntax

* index build

* index prune

* index metrics

* index ut

* [CARMEL-3157] index pruning - upgrade to 3.0

* remove UT for index treated as an unsupported feature

* fix conflict

* fix conflict

* fix style