MNC Identifier is a service for identifying and verifying consumers with the help of AI.
Liveness Detection uses ML Kit face detection to verify that a live person is present at the point of capture.
- Min SDK 21
build.gradle (root)

```groovy
repositories {
    ...
    maven { url 'https://jitpack.io' }
}
```
build.gradle (app)

```groovy
dependencies {
    implementation "com.github.mncinnovation.mnc-identifiersdk-android:core:1.0.1"
    implementation "com.github.mncinnovation.mnc-identifiersdk-android:face-detection:1.0.1"
}
```
AndroidManifest.xml

```xml
<application ...>
    ...
    <meta-data
        android:name="com.google.mlkit.vision.DEPENDENCIES"
        android:value="face" />
</application>
```
Start the liveness detection activity

```kotlin
startActivityForResult(MNCIdentifier.getLivenessIntent(this), LIVENESS_DETECTION_REQUEST_CODE)

companion object {
    const val LIVENESS_DETECTION_REQUEST_CODE = xxxx
}
```
Get the liveness result

```kotlin
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (resultCode == RESULT_OK) {
        when (requestCode) {
            LIVENESS_DETECTION_REQUEST_CODE -> {
                // get liveness result
                val livenessResult = MNCIdentifier.getLivenessResult(data)
                livenessResult?.let { result ->
                    if (result.isSuccess) { // check if liveness detection succeeded
                        // get image result
                        val bitmap = result.getBitmap(this, DetectionMode.SMILE)
                    } else { // liveness detection error
                        // get error message
                        val errorMessage = result.errorMessage
                        // get error type (OOM / Exception)
                        val errorType = result.errorType
                    }
                }
            }
        }
    }
}
```
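The bitmap returned by `getBitmap` is a regular Android `Bitmap`, so it can be persisted like any other image. This is a minimal sketch, not part of the SDK: the helper name, file name, and compression settings are our own choices.

```kotlin
import android.graphics.Bitmap
import java.io.File
import java.io.FileOutputStream

// Minimal sketch: persist a liveness result bitmap to app-private storage.
// Pass `filesDir` (a standard Context property) as `dir`; the file name is arbitrary.
fun saveLivenessBitmap(bitmap: Bitmap, dir: File): File {
    val file = File(dir, "liveness_smile.jpg")
    FileOutputStream(file).use { out ->
        // JPEG at quality 90 keeps the file small; adjust to taste
        bitmap.compress(Bitmap.CompressFormat.JPEG, 90, out)
    }
    return file
}
```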
The default detection sequence is HOLD_STILL > BLINK > OPEN_MOUTH > SHAKE_HEAD > SMILE. You can customize the detection sequence using the following method:
```kotlin
// the first boolean value indicates whether the given detection sequence should be shuffled
MNCIdentifier.setDetectionModeSequence(
    false, listOf(
        DetectionMode.HOLD_STILL,
        DetectionMode.BLINK,
        DetectionMode.OPEN_MOUTH,
        DetectionMode.SMILE,
        DetectionMode.SHAKE_HEAD
    )
)
```
If the available memory is lower than 50 MB (the default threshold), a warning popup appears before the image capture or object detection process runs. You can customize the threshold, or disable the warning by setting it to 0.
```kotlin
MNCIdentifier.setLowMemoryThreshold(50) // for face detection
```
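As noted above, passing 0 disables the warning entirely:

```kotlin
// Disable the low-memory warning popup (value is in MB)
MNCIdentifier.setLowMemoryThreshold(0)
```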
Optical Character Recognition (OCR) uses ML Kit text recognition to detect text at the point of capture.
- Min SDK 21
build.gradle (root)

```groovy
repositories {
    ...
    maven { url 'https://jitpack.io' }
}
```
build.gradle (app)

```groovy
dependencies {
    implementation "com.github.mncinnovation.mnc-identifiersdk-android:core:1.1.0"
    implementation "com.github.mncinnovation.mnc-identifiersdk-android:ocr:1.1.0"
}
```
AndroidManifest.xml

```xml
<application ...>
    ...
    <meta-data
        android:name="com.google.mlkit.vision.DEPENDENCIES"
        android:value="ocr" />
</application>
```
If you use both face detection and OCR, AndroidManifest.xml

```xml
<application ...>
    ...
    <meta-data
        android:name="com.google.mlkit.vision.DEPENDENCIES"
        android:value="face, ocr" />
</application>
```
Optional configuration to show a flash button on the camera activity and to show a camera-only screen:
- Default value of `withFlash` is `false`.
- Default value of `cameraOnly` is `false`.
- Default value of `lowMemoryThreshold` is `50`.
```kotlin
// call this function before startCapture
MNCIdentifierOCR.config(withFlash = true, cameraOnly = true, lowMemoryThreshold = 70)
```
Start the OCR capture activity

```kotlin
// start directly
MNCIdentifierOCR.startCapture(this@MainActivity)

// or with a requestCode value
MNCIdentifierOCR.startCapture(this@MainActivity, CAPTURE_EKTP_REQUEST_CODE)

companion object {
    const val CAPTURE_EKTP_REQUEST_CODE = xxxx
}
```
Get the OCR capture result

```kotlin
override fun onActivityResult(requestCode: Int, resultCode: Int, data: Intent?) {
    super.onActivityResult(requestCode, resultCode, data)
    if (resultCode == RESULT_OK) {
        when (requestCode) {
            CAPTURE_EKTP_REQUEST_CODE -> {
                val captureOCRResult = MNCIdentifierOCR.getOCRResult(data)
                captureOCRResult?.let { result ->
                    if (result.isSuccess) {
                        result.getBitmapImage(this)?.let {
                            // get image result
                            binding.ivKtpCapture.setImageBitmap(it)
                        }
                        // show all of the data result
                        binding.tvCaptureKtp.text = result.ktpModel.toString()
                    } else {
                        // get error message
                        val errorMessage = result.errorMessage
                        // get error type (OOM / Exception)
                        val errorType = result.errorType
                    }
                }
            }
        }
    }
}
```
Another option is to use registerForActivityResult:

```kotlin
private val resultLauncherOcr =
    registerForActivityResult(ActivityResultContracts.StartActivityForResult()) { result ->
        if (result.resultCode == Activity.RESULT_OK) {
            val data = result.data
            val captureOCRResult = MNCIdentifierOCR.getOCRResult(data)
            captureOCRResult?.let { ocrResult ->
                if (ocrResult.isSuccess) {
                    ocrResult.getBitmapImage(this)?.let {
                        binding.ivKtp.setImageBitmap(it)
                    }
                    // show all of the data result
                    binding.tvScanKtp.text = ocrResult.ktpModel.toString()
                } else {
                    // get error message
                    val errorMessage = ocrResult.errorMessage
                    // get error type (OOM / Exception)
                    val errorType = ocrResult.errorType
                }
            }
        }
    }

MNCIdentifierOCR.startCapture(this@MainActivity, resultLauncherOcr)
```
Get the OCR result by using the extract-data function. MNCIdentifier receives the input image from your app.

```kotlin
// call this function to extract data from image URIs
MNCIdentifierOCR.extractDataFromUri(
    uriList,
    this@MainActivity,
    object : ExtractDataOCRListener {
        override fun onStart() {
            Log.d("TAGAPP", "onStart Process Extract")
        }

        override fun onFinish(result: OCRResultModel) {
            result.getBitmapImage()?.let { bitmap ->
                binding.ivKtp.setImageBitmap(bitmap)
            }
            binding.tvScanKtp.text = result.toString()
        }

        override fun onError(message: String?, errorType: ResultErrorType?) {
            // handle error here
        }
    })
```
Extract-data OCR input options:
- `Uri`: an image file URI
- `List<Uri>`: a list of image file URIs
- `String`: an image file path
- `List<String>`: a list of image file paths
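For example, the list overload can be fed with URIs built from local files. This is a minimal sketch: the file paths are placeholders, and `listener` stands for an `ExtractDataOCRListener` implementation like the one shown above.

```kotlin
import android.net.Uri
import java.io.File

// Minimal sketch: build a list of URIs from local image files (paths are placeholders)
val uriList = listOf(
    Uri.fromFile(File("/sdcard/Download/ktp_front.jpg")),
    Uri.fromFile(File("/sdcard/Download/ktp_back.jpg"))
)

// `listener` is an ExtractDataOCRListener, as in the example above
MNCIdentifierOCR.extractDataFromUri(uriList, this@MainActivity, listener)
```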