[SPARK-25347][ML][DOC] Spark datasource for image/libsvm user guide #22675

Closed · wants to merge 6 commits
2 changes: 2 additions & 0 deletions docs/_data/menu-ml.yaml
@@ -1,5 +1,7 @@
- text: Basic statistics
url: ml-statistics.html
- text: Data sources
url: ml-datasource.html
- text: Pipelines
url: ml-pipeline.html
- text: Extracting, transforming and selecting features
108 changes: 108 additions & 0 deletions docs/ml-datasource.md
@@ -0,0 +1,108 @@
---
layout: global
title: Data sources
displayTitle: Data sources
Member

Should it be "Datasource" or "Data sources"? I am asking because there looks to be a mismatch with the menu above.

Contributor Author

Data sources.

---

In this section, we introduce how to use data sources in ML to load data.
Besides general data sources such as Parquet, CSV, JSON, and JDBC, we also provide some data sources specific to ML.
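
As a brief illustration (a sketch added here for clarity, not part of the page itself), both kinds of sources are read through the same `DataFrameReader` API, for example a general-purpose format such as CSV versus the ML-specific `libsvm` format:

{% highlight scala %}
// Sketch only: a general-purpose source and an ML-specific source share the same read API.
// "path/to/data.csv" is a hypothetical path; the libsvm sample file ships with Spark.
val csvDF    = spark.read.format("csv").option("header", "true").load("path/to/data.csv")
val libsvmDF = spark.read.format("libsvm").load("data/mllib/sample_libsvm_data.txt")
{% endhighlight %}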

**Table of Contents**

* This will become a table of contents (this text will be scraped).
Contributor

Is it a convention to have this text here in the table of contents? "* This will become a table of contents (this text will be scraped)."

Contributor Author

Yes. This keeps it the same as the other ML algorithm pages.

Contributor

ah, ok, great

{:toc}

## Image data source

The image data source is used to load image files from a directory. It can load compressed images (jpeg, png, etc.) into their raw representation via `ImageIO` in the Java library.
The loaded DataFrame has one `StructType` column, "image", containing image data stored using the image schema.
The schema of the `image` column is:
- origin: `StringType` (represents the file path of the image)
- height: `IntegerType` (height of the image)
- width: `IntegerType` (width of the image)
- nChannels: `IntegerType` (number of image channels)
- mode: `IntegerType` (OpenCV-compatible type)
- data: `BinaryType` (Image bytes in OpenCV-compatible order: row-wise BGR in most cases)
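
For reference, here is a minimal sketch (added for illustration; output abbreviated) of what this nested schema looks like when printed:

{% highlight scala %}
// Minimal sketch: load the sample images and print the nested "image" schema.
// Output comments are abbreviated; nullability flags are omitted.
val df = spark.read.format("image").load("data/mllib/images/origin/kittens")
df.printSchema()
// root
//  |-- image: struct
//  |    |-- origin: string
//  |    |-- height: integer
//  |    |-- width: integer
//  |    |-- nChannels: integer
//  |    |-- mode: integer
//  |    |-- data: binary
{% endhighlight %}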


<div class="codetabs">
<div data-lang="scala" markdown="1">
[`ImageDataSource`](api/scala/index.html#org.apache.spark.ml.source.image.ImageDataSource)
implements a Spark SQL data source API for loading image data as a DataFrame.

{% highlight scala %}
scala> val df = spark.read.format("image").option("dropInvalid", true).load("data/mllib/images/origin/kittens")
df: org.apache.spark.sql.DataFrame = [image: struct<origin: string, height: int ... 4 more fields>]

scala> df.select("image.origin", "image.width", "image.height").show(truncate=false)
+-----------------------------------------------------------------------+-----+------+
|origin |width|height|
+-----------------------------------------------------------------------+-----+------+
|file:///spark/data/mllib/images/origin/kittens/54893.jpg |300 |311 |
|file:///spark/data/mllib/images/origin/kittens/DP802813.jpg |199 |313 |
|file:///spark/data/mllib/images/origin/kittens/29.5.a_b_EGDP022204.jpg |300 |200 |
|file:///spark/data/mllib/images/origin/kittens/DP153539.jpg |300 |296 |
+-----------------------------------------------------------------------+-----+------+
{% endhighlight %}
</div>

<div data-lang="java" markdown="1">
[`ImageDataSource`](api/java/org/apache/spark/ml/source/image/ImageDataSource.html)
Member

Out of curiosity, why did we put the image source inside of Spark, rather than a separate module? (see also #21742 (comment)). Avro was put in a separate module.

Member

cc @mengxr as well

Contributor

Usually it depends on how important the use case is. For example, CSV was created as an external data source and later merged into Spark. See https://issues.apache.org/jira/browse/SPARK-21866?focusedCommentId=16148268&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16148268.

Member @HyukjinKwon (Oct 9, 2018)

I meant that (external) Avro was merged into external/... in Apache Spark as a separate module for the reason above. The image data source is merged into Spark's main code rather than a separate module. I don't object to bringing an external data source into Apache Spark, and I don't doubt your judgement; +1 for bringing this in, actually.

My point is that I was wondering why this lives in Spark's main code when the ideal approach is to put such sources under external/....

Member

cc @cloud-fan and @gatorsmile, am I missing something?

Member

I sympathize with the comment, but I think it makes some sense tucked into ML rather than a standalone module.

implements a Spark SQL data source API for loading image data as a DataFrame.

{% highlight java %}
Dataset<Row> imageDF = spark.read().format("image").option("dropInvalid", true).load("data/mllib/images/origin/kittens");
imageDF.select("image.origin", "image.width", "image.height").show(false);
/*
Will output:
+-----------------------------------------------------------------------+-----+------+
|origin |width|height|
+-----------------------------------------------------------------------+-----+------+
|file:///spark/data/mllib/images/origin/kittens/54893.jpg |300 |311 |
|file:///spark/data/mllib/images/origin/kittens/DP802813.jpg |199 |313 |
|file:///spark/data/mllib/images/origin/kittens/29.5.a_b_EGDP022204.jpg |300 |200 |
|file:///spark/data/mllib/images/origin/kittens/DP153539.jpg |300 |296 |
+-----------------------------------------------------------------------+-----+------+
*/
{% endhighlight %}
</div>

<div data-lang="python" markdown="1">
Member

how about SQL syntax? I think we can use CREATE TABLE tableA USING LOCATION 'data/image.png'

Contributor Author

This looks like a SQL feature that applies to all data sources. Putting it in the Spark SQL doc would be better.
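
For concreteness, a hedged sketch of what the suggestion could look like (the table name is invented, and SQL support for the image source is assumed here rather than documented), using the generic `CREATE TABLE ... USING <source>` form:

{% highlight scala %}
// Hypothetical sketch of the reviewer's suggestion; "kittens" is an invented table name
// and this SQL path for the image source is an assumption, not documented behavior.
spark.sql(
  """CREATE TABLE kittens
    |USING image
    |OPTIONS (path 'data/mllib/images/origin/kittens')""".stripMargin)
spark.sql("SELECT image.origin, image.width, image.height FROM kittens").show(false)
{% endhighlight %}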

Member

Shall we add an example for R as well then? It wouldn't be too difficult to add the equivalent examples. Also, I don't think we should put the equivalent examples for different languages on different pages.

In PySpark we provide a Spark SQL data source API for loading image data as a DataFrame.

{% highlight python %}
>>> df = spark.read.format("image").option("dropInvalid", True).load("data/mllib/images/origin/kittens")
>>> df.select("image.origin", "image.width", "image.height").show(truncate=False)
+-----------------------------------------------------------------------+-----+------+
|origin |width|height|
+-----------------------------------------------------------------------+-----+------+
|file:///spark/data/mllib/images/origin/kittens/54893.jpg |300 |311 |
|file:///spark/data/mllib/images/origin/kittens/DP802813.jpg |199 |313 |
|file:///spark/data/mllib/images/origin/kittens/29.5.a_b_EGDP022204.jpg |300 |200 |
|file:///spark/data/mllib/images/origin/kittens/DP153539.jpg |300 |296 |
+-----------------------------------------------------------------------+-----+------+
{% endhighlight %}
</div>

<div data-lang="r" markdown="1">
In SparkR we provide a Spark SQL data source API for loading image data as a DataFrame.

{% highlight r %}
> df = read.df("data/mllib/images/origin/kittens", "image")
> head(select(df, df$image.origin, df$image.width, df$image.height))

1 file:///spark/data/mllib/images/origin/kittens/54893.jpg
2 file:///spark/data/mllib/images/origin/kittens/DP802813.jpg
3 file:///spark/data/mllib/images/origin/kittens/29.5.a_b_EGDP022204.jpg
4 file:///spark/data/mllib/images/origin/kittens/DP153539.jpg
width height
1 300 311
2 199 313
3 300 200
4 300 296

{% endhighlight %}
</div>


</div>
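
Once loaded, the image DataFrame can be processed with the ordinary DataFrame API. The short sketch below (an added illustration using the `df` from the Scala example above; the 200-pixel threshold is an arbitrary example value) keeps only images that are at least 200 pixels wide and tall:

{% highlight scala %}
// Illustrative follow-up: filter the loaded images by size with plain DataFrame operations.
import org.apache.spark.sql.functions.col

val bigEnough = df.filter(col("image.width") >= 200 && col("image.height") >= 200)
bigEnough.select("image.origin", "image.width", "image.height").show(truncate = false)
{% endhighlight %}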
@@ -19,14 +19,17 @@ package org.apache.spark.ml.source.image

/**
* `image` package implements Spark SQL data source API for loading image data as `DataFrame`.
* The loaded `DataFrame` has one `StructType` column: `image`.
* It can load compressed image (jpeg, png, etc.) into raw image representation via `ImageIO`
* in Java library.
* The loaded `DataFrame` has one `StructType` column: `image`, containing image data stored
* as image schema.
* The schema of the `image` column is:
* - origin: String (represents the file path of the image)
* - height: Int (height of the image)
* - width: Int (width of the image)
* - nChannels: Int (number of the image channels)
* - mode: Int (OpenCV-compatible type)
* - data: BinaryType (Image bytes in OpenCV-compatible order: row-wise BGR in most cases)
* - origin: `StringType` (represents the file path of the image)
* - height: `IntegerType` (height of the image)
* - width: `IntegerType` (width of the image)
* - nChannels: `IntegerType` (number of image channels)
* - mode: `IntegerType` (OpenCV-compatible type)
* - data: `BinaryType` (Image bytes in OpenCV-compatible order: row-wise BGR in most cases)
*
* To use image data source, you need to set "image" as the format in `DataFrameReader` and
* optionally specify the data source options, for example: