[MLLIB] [SPARK-2222] Add multiclass evaluation metrics #1155

Closed
wants to merge 15 commits into from
@@ -0,0 +1,134 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.mllib.evaluation

import org.apache.spark.rdd.RDD
import org.apache.spark.Logging
import org.apache.spark.SparkContext._

/**
* Evaluator for multiclass classification.
Contributor: Insert ::Experimental:: to the beginning of the doc to make it show up in the generated doc.

* NB: type Double both for prediction and label is retained
* for compatibility with model.predict that returns Double
* and MLUtils.loadLibSVMFile that loads class labels as Double
Contributor: It is not necessary to mention loadLibSVMFile in particular here. This is a "global" assumption in MLlib.

*
* @param predictionsAndLabels an RDD of (prediction, label) pairs.
Contributor: nit: It is a collection of (prediction, label) pairs. Should we call it predictionAndLabels? predictionsAndLabels sounds to me like an instance of (RDD[Double], RDD[Double]). This is minor, but we should be consistent across the codebase. I prefer predictionAndLabels and I saw you used it in the multilabel PR as well.

*/
class MulticlassMetrics(predictionsAndLabels: RDD[(Double, Double)]) extends Logging {
Contributor: Please mark new methods @experimental.


/* class = category; label = instance of class; prediction = instance of class */
Contributor: What is this comment for?


private lazy val labelCountByClass = predictionsAndLabels.values.countByValue()
Contributor: Please write an explicit type if it is not primitive.

private lazy val labelCount = labelCountByClass.foldLeft(0L){case(sum, (_, count)) => sum + count}
Contributor: Change it to labelCountByClass.values.sum.

private lazy val tpByClass = predictionsAndLabels.map{ case (prediction, label) =>
(label, if(label == prediction) 1 else 0) }.reduceByKey{_ + _}.collectAsMap

Contributor: Please follow the Spark Code Style Guide and the code style used in the code base:

.map { case (prediction, label) =>
  (label, if (label == prediction) 1 else 0)
}.reduceByKey(_ + _)
.collectAsMap()

1. new line after "=>"
2. space after "if"
3. change "{ }" to "( )"
4. use "()" for an action
private lazy val fpByClass = predictionsAndLabels.map{ case (prediction, label) =>
(prediction, if(prediction != label) 1 else 0) }.reduceByKey{_ + _}.collectAsMap
Contributor: same style issue
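The tp/fp counting above can be tried out with plain Scala collections, no Spark required. This is only an illustrative sketch of the same logic: the sample pairs are hypothetical, and groupBy stands in for the RDD's map + reduceByKey + collectAsMap pipeline.

```scala
// Illustrative stand-in for the RDD pipeline; the sample pairs are hypothetical.
val predictionAndLabels: Seq[(Double, Double)] =
  Seq((0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (2.0, 2.0), (0.0, 1.0))

// True positives keyed by label: pairs where prediction == label.
val tpByClass: Map[Double, Int] = predictionAndLabels
  .groupBy { case (_, label) => label }
  .map { case (label, pairs) => (label, pairs.count { case (p, l) => p == l }) }

// False positives keyed by prediction: pairs where prediction != label.
val fpByClass: Map[Double, Int] = predictionAndLabels
  .groupBy { case (prediction, _) => prediction }
  .map { case (pred, pairs) => (pred, pairs.count { case (p, l) => p != l }) }
```

For the five sample pairs this yields one true positive per class, and one false positive each for classes 0.0 and 1.0.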


/**
* Returns Precision for a given label (category)
Contributor: "Precision" -> "precision"

* @param label the label.
* @return Precision.
Contributor: ditto

*/
def precision(label: Double): Double = if(tpByClass(label) + fpByClass.getOrElse(label, 0) == 0) 0
else tpByClass(label).toDouble / (tpByClass(label) + fpByClass.getOrElse(label, 0)).toDouble

Contributor: Make a closure for this function and make the code align better:

... : Double = {
  val tp = tpByClass(label)
  val fp = fpByClass.getOrElse(label, 0)
  if (tp + fp == 0) 0 else tp.toDouble / (tp + fp)
}

/**
* Returns Recall for a given label (category)
Contributor: "Recall" -> "recall"

* @param label the label.
* @return Recall.
Contributor: ditto

Contributor: Usually, if the doc says "returns ...", it is not necessary to have "@return".

*/
def recall(label: Double): Double = tpByClass(label).toDouble / labelCountByClass(label).toDouble
Contributor: tpByClass(label).toDouble / labelCountByClass(label) (the second toDouble is not necessary)


/**
* Returns F1-measure for a given label (category)
* @param label the label.
* @return F1-measure.
*/
def f1Measure(label: Double): Double ={
Contributor: Could you change the method to fMeasure(label: Double) and fMeasure(label: Double, beta: Double)? The former computes F1 while the latter computes F_beta.

={ -> = {

val p = precision(label)
val r = recall(label)
if((p + r) == 0) 0 else 2 * p * r / (p + r)
Contributor: space after if

(p + r) == 0 -> p + r == 0

}
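The reviewer's fMeasure(label, beta) suggestion amounts to the standard F_beta formula, of which F1 is the beta = 1 special case. A standalone sketch (the function names here are illustrative, not the PR's API):

```scala
// Illustrative F_beta: beta > 1 weights recall higher, beta < 1 weights precision higher.
def fBeta(precision: Double, recall: Double, beta: Double): Double = {
  val beta2 = beta * beta
  if (beta2 * precision + recall == 0) 0.0
  else (1 + beta2) * precision * recall / (beta2 * precision + recall)
}

// beta = 1.0 recovers the F1 measure computed by f1Measure above.
def f1(precision: Double, recall: Double): Double = fBeta(precision, recall, 1.0)
```

For example, fBeta(0.5, 0.25, 1.0) gives 2 * 0.5 * 0.25 / 0.75 = 1/3, matching the 2pr/(p + r) form in the PR.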

/**
* Returns micro-averaged Recall
* (equals to microPrecision and microF1measure for multiclass classifier)
* @return microRecall.
*/
lazy val microRecall: Double =
tpByClass.foldLeft(0L){case (sum,(_, tp)) => sum + tp}.toDouble / labelCount

Contributor: This is not useful. It gives you the global precision and the method name "micro" is confusing. We can simply call it precision() and remove the micro* methods.

Contributor: tpByClass.values.sum / labelCount


/**
* Returns micro-averaged Precision
* (equals to microPrecision and microF1measure for multiclass classifier)
* @return microPrecision.
*/
lazy val microPrecision: Double = microRecall
Contributor: remove


/**
* Returns micro-averaged F1-measure
* (equals to microPrecision and microRecall for multiclass classifier)
* @return microF1measure.
*/
lazy val microF1Measure: Double = microRecall
Contributor: remove


/**
* Returns weighted averaged Recall
* @return weightedRecall.
*/
lazy val weightedRecall: Double = labelCountByClass.foldLeft(0.0){case(wRecall, (category, count)) =>
wRecall + recall(category) * count.toDouble / labelCount}

Contributor: This looks better to me:

weightedRecall: Double = labelCountByClass.map { case (category, count) =>
    recall(category) * count / labelCount
  }.sum

/**
* Returns weighted averaged Precision
* @return weightedPrecision.
*/
lazy val weightedPrecision: Double =
labelCountByClass.foldLeft(0.0){case(wPrecision, (category, count)) =>
wPrecision + precision(category) * count.toDouble / labelCount}

/**
* Returns weighted averaged F1-measure
* @return weightedF1Measure.
*/
lazy val weightedF1Measure: Double =
labelCountByClass.foldLeft(0.0){case(wF1measure, (category, count)) =>
wF1measure + f1Measure(category) * count.toDouble / labelCount}

/**
* Returns map with Precisions for individual classes
* @return precisionPerClass.
*/
lazy val precisionPerClass =
labelCountByClass.map{case (category, _) => (category, precision(category))}.toMap

Contributor: Instead of having those methods, I think it is nice to add "lazy val labels" that returns the labels. Then users can easily chain labels and precision to get all precisions.
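The "lazy val labels" idea the reviewer sketches can be illustrated with plain maps. In this sketch, labelCountByClass and precision are simplified stand-ins for the class members, with hypothetical values:

```scala
// Stand-ins for the class members (the values and formula are hypothetical).
val labelCountByClass: Map[Double, Long] = Map(0.0 -> 4L, 1.0 -> 4L, 2.0 -> 1L)
def precision(label: Double): Double = 1.0 / (label + 1.0) // placeholder

// The suggestion: expose the labels once...
lazy val labels: Seq[Double] = labelCountByClass.keys.toSeq.sorted

// ...and let callers recover the per-class map in one line,
// instead of the class shipping precisionPerClass, recallPerClass, etc.
val precisionForAll: Map[Double, Double] = labels.map(l => (l, precision(l))).toMap
```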

Contributor: Remove empty line

/**
* Returns map with Recalls for individual classes
* @return recallPerClass.
*/
lazy val recallPerClass =
labelCountByClass.map{case (category, _) => (category, recall(category))}.toMap

/**
* Returns map with F1-measures for individual classes
* @return f1MeasurePerClass.
*/
lazy val f1MeasurePerClass =
labelCountByClass.map{case (category, _) => (category, f1Measure(category))}.toMap
}
@@ -0,0 +1,71 @@
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

package org.apache.spark.mllib.evaluation

import org.apache.spark.mllib.util.LocalSparkContext
import org.scalatest.FunSuite
Contributor: organize imports into groups


class MulticlassMetricsSuite extends FunSuite with LocalSparkContext {
test("Multiclass evaluation metrics") {
/*
* Confusion matrix for 3-class classification with total 9 instances:
Contributor: the alignment is wrong

* |2|1|1| true class0 (4 instances)
* |1|3|0| true class1 (4 instances)
* |0|0|1| true class2 (1 instance)
*
*/
val scoreAndLabels = sc.parallelize(
Contributor: to be consistent: `predictionAndLabels`

Seq((0.0, 0.0), (0.0, 1.0), (0.0, 0.0), (1.0, 0.0), (1.0, 1.0),
(1.0, 1.0), (1.0, 1.0), (2.0, 2.0), (2.0, 0.0)), 2)
val metrics = new MulticlassMetrics(scoreAndLabels)

val delta = 0.00001
Contributor: Use a smaller delta.

val precision0 = 2.0 / (2.0 + 1.0)
val precision1 = 3.0 / (3.0 + 1.0)
val precision2 = 1.0 / (1.0 + 1.0)
val recall0 = 2.0 / (2.0 + 2.0)
val recall1 = 3.0 / (3.0 + 1.0)
val recall2 = 1.0 / (1.0 + 0.0)
val f1measure0 = 2 * precision0 * recall0 / (precision0 + recall0)
val f1measure1 = 2 * precision1 * recall1 / (precision1 + recall1)
val f1measure2 = 2 * precision2 * recall2 / (precision2 + recall2)

assert(math.abs(metrics.precision(0.0) - precision0) < delta)
assert(math.abs(metrics.precision(1.0) - precision1) < delta)
assert(math.abs(metrics.precision(2.0) - precision2) < delta)
assert(math.abs(metrics.recall(0.0) - recall0) < delta)
assert(math.abs(metrics.recall(1.0) - recall1) < delta)
assert(math.abs(metrics.recall(2.0) - recall2) < delta)
assert(math.abs(metrics.f1Measure(0.0) - f1measure0) < delta)
assert(math.abs(metrics.f1Measure(1.0) - f1measure1) < delta)
assert(math.abs(metrics.f1Measure(2.0) - f1measure2) < delta)

assert(math.abs(metrics.microRecall -
(2.0 + 3.0 + 1.0) / ((2.0 + 3.0 + 1.0) + (1.0 + 1.0 + 1.0))) < delta)
assert(math.abs(metrics.microRecall - metrics.microPrecision) < delta)
assert(math.abs(metrics.microRecall - metrics.microF1Measure) < delta)
assert(math.abs(metrics.microRecall - metrics.weightedRecall) < delta)
assert(math.abs(metrics.weightedPrecision -
((4.0 / 9.0) * precision0 + (4.0 / 9.0) * precision1 + (1.0 / 9.0) * precision2)) < delta)
assert(math.abs(metrics.weightedRecall -
((4.0 / 9.0) * recall0 + (4.0 / 9.0) * recall1 + (1.0 / 9.0) * recall2)) < delta)
assert(math.abs(metrics.weightedF1Measure -
((4.0 / 9.0) * f1measure0 + (4.0 / 9.0) * f1measure1 + (1.0 / 9.0) * f1measure2)) < delta)
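The expected values asserted above can be cross-checked directly from the confusion matrix in the test comment. This plain-Scala sketch (no Spark, no MulticlassMetrics) recomputes them from the per-class counts:

```scala
// Per-class counts read off the 3x3 confusion matrix in the test comment.
val tp = Seq(2.0, 3.0, 1.0)       // diagonal entries
val fp = Seq(1.0, 1.0, 1.0)       // column sums minus the diagonal
val support = Seq(4.0, 4.0, 1.0)  // true instances per class (row sums)
val n = support.sum               // 9 instances in total

val microRecall = tp.sum / n                                   // 6/9
val precisions = tp.zip(fp).map { case (t, f) => t / (t + f) } // 2/3, 3/4, 1/2
val recalls = tp.zip(support).map { case (t, s) => t / s }     // 1/2, 3/4, 1/1
val weightedPrecision = precisions.zip(support).map { case (p, s) => p * s / n }.sum
val weightedRecall = recalls.zip(support).map { case (r, s) => r * s / n }.sum
```

Note that weightedRecall comes out to (2 + 3 + 1)/9, the same value as microRecall, which is exactly what the `metrics.microRecall - metrics.weightedRecall` assertion exercises.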

Contributor: remove empty line

}
}