I'm using CameraX with the Firebase ML Kit barcode reader to detect barcodes. The application identifies barcodes without a problem, but I'm trying to add a bounding box that shows the area of the barcode in the CameraX preview in real time. The bounding box information is retrieved from the barcode detector, but it has neither the right position nor the right size, as you can see below.
This is the layout of my activity.
<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".MainActivity">

    <Button
        android:id="@+id/camera_capture_button"
        android:layout_width="100dp"
        android:layout_height="100dp"
        android:layout_marginBottom="50dp"
        android:scaleType="fitCenter"
        android:text="Take Photo"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        app:layout_constraintBottom_toBottomOf="parent"
        android:elevation="2dp" />

    <SurfaceView
        android:id="@+id/overlayView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

    <androidx.camera.view.PreviewView
        android:id="@+id/previewView"
        android:layout_width="match_parent"
        android:layout_height="match_parent" />

</androidx.constraintlayout.widget.ConstraintLayout>
The SurfaceView is used to draw the rectangle. Barcode detection happens in the BarcodeAnalyzer class, which implements ImageAnalysis.Analyzer. Inside the overridden analyze function I retrieve the barcode data like below.
@SuppressLint("UnsafeExperimentalUsageError")
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    val rotationDegrees = degreesToFirebaseRotation(imageProxy.imageInfo.rotationDegrees)
    if (mediaImage != null) {
        val analyzedImageHeight = mediaImage.height
        val analyzedImageWidth = mediaImage.width
        val image = FirebaseVisionImage
            .fromMediaImage(mediaImage, rotationDegrees)
        detector.detectInImage(image)
            .addOnSuccessListener { barcodes ->
                for (barcode in barcodes) {
                    val bounds = barcode.boundingBox
                    val corners = barcode.cornerPoints
                    val rawValue = barcode.rawValue
                    if (::barcodeDetectListener.isInitialized && rawValue != null && bounds != null) {
                        barcodeDetectListener.onBarcodeDetect(
                            rawValue,
                            bounds,
                            analyzedImageWidth,
                            analyzedImageHeight
                        )
                    }
                }
                imageProxy.close()
            }
            .addOnFailureListener {
                Log.e(tag, "Barcode Reading Exception: ${it.localizedMessage}")
                imageProxy.close()
            }
            .addOnCanceledListener {
                Log.e(tag, "Barcode Reading Canceled")
                imageProxy.close()
            }
    }
}
barcodeDetectListener is a reference to an interface I created to communicate this data back to my activity.
interface BarcodeDetectListener {
    fun onBarcodeDetect(code: String, codeBound: Rect, imageWidth: Int, imageHeight: Int)
}
In my main activity, I send this data to OverlaySurfaceHolder, which implements SurfaceHolder.Callback. This class is responsible for drawing the bounding box on the overlaid SurfaceView.
override fun onBarcodeDetect(code: String, codeBound: Rect, analyzedImageWidth: Int,
                             analyzedImageHeight: Int) {
    Log.i(TAG, "barcode : $code")
    overlaySurfaceHolder.repositionBound(codeBound, previewView.width, previewView.height,
        analyzedImageWidth, analyzedImageHeight)
    overlayView.invalidate()
}
As you can see, here I'm sending the on-screen preview width and height for the calculation in the OverlaySurfaceHolder class.
OverlaySurfaceHolder.kt
class OverlaySurfaceHolder : SurfaceHolder.Callback {
    var previewViewWidth: Int = 0
    var previewViewHeight: Int = 0
    var analyzedImageWidth: Int = 0
    var analyzedImageHeight: Int = 0
    private lateinit var drawingThread: DrawingThread
    private lateinit var barcodeBound: Rect
    private val tag = OverlaySurfaceHolder::class.java.simpleName

    override fun surfaceChanged(holder: SurfaceHolder?, format: Int, width: Int, height: Int) {
    }

    override fun surfaceDestroyed(holder: SurfaceHolder?) {
        var retry = true
        drawingThread.running = false
        while (retry) {
            try {
                drawingThread.join()
                retry = false
            } catch (e: InterruptedException) {
            }
        }
    }

    override fun surfaceCreated(holder: SurfaceHolder?) {
        drawingThread = DrawingThread(holder)
        drawingThread.running = true
        drawingThread.start()
    }

    fun repositionBound(codeBound: Rect, previewViewWidth: Int, previewViewHeight: Int,
                        analyzedImageWidth: Int, analyzedImageHeight: Int) {
        this.barcodeBound = codeBound
        this.previewViewWidth = previewViewWidth
        this.previewViewHeight = previewViewHeight
        this.analyzedImageWidth = analyzedImageWidth
        this.analyzedImageHeight = analyzedImageHeight
    }

    inner class DrawingThread(private val holder: SurfaceHolder?) : Thread() {
        var running = false

        private fun adjustXCoordinates(valueX: Int): Float {
            return if (previewViewWidth != 0) {
                (valueX / analyzedImageWidth.toFloat()) * previewViewWidth.toFloat()
            } else {
                valueX.toFloat()
            }
        }

        private fun adjustYCoordinates(valueY: Int): Float {
            return if (previewViewHeight != 0) {
                (valueY / analyzedImageHeight.toFloat()) * previewViewHeight.toFloat()
            } else {
                valueY.toFloat()
            }
        }

        override fun run() {
            while (running) {
                if (::barcodeBound.isInitialized) {
                    val canvas = holder!!.lockCanvas()
                    if (canvas != null) {
                        synchronized(holder) {
                            canvas.drawColor(Color.TRANSPARENT, PorterDuff.Mode.CLEAR)
                            val myPaint = Paint()
                            myPaint.color = Color.rgb(20, 100, 50)
                            myPaint.strokeWidth = 6f
                            myPaint.style = Paint.Style.STROKE
                            val refinedRect = RectF()
                            refinedRect.left = adjustXCoordinates(barcodeBound.left)
                            refinedRect.right = adjustXCoordinates(barcodeBound.right)
                            refinedRect.top = adjustYCoordinates(barcodeBound.top)
                            refinedRect.bottom = adjustYCoordinates(barcodeBound.bottom)
                            canvas.drawRect(refinedRect, myPaint)
                        }
                        holder.unlockCanvasAndPost(canvas)
                    } else {
                        Log.e(tag, "Cannot draw onto the canvas as it's null")
                    }
                    try {
                        sleep(30)
                    } catch (e: InterruptedException) {
                        e.printStackTrace()
                    }
                }
            }
        }
    }
}
Can anyone point out what I'm doing wrong?
I am no longer working on this project. However, I recently worked on a camera application that uses the Camera2 API. In that application, there was a requirement to detect objects using the ML Kit object detection library and show a bounding box like this on top of the camera preview. I faced the same issue at first and finally managed to get it to work. I'll leave my approach here; it might help someone.
Any detection library does its detection on a lower-resolution image compared to the camera preview image. When the detection library returns the coordinates of the detected object, we need to scale them up to show the box in the right position; the ratio we scale by is called the scale factor. To make the calculation easy, it's better to select the analysis image size and the preview image size with the same aspect ratio.
You can use the functions below to get the aspect ratio of any size.
fun gcd(a: Long, b: Long): Long {
    return if (b == 0L) a else gcd(b, a % b)
}

fun asFraction(a: Long, b: Long): Pair<Long, Long> {
    val gcd = gcd(a, b)
    return Pair(a / gcd, b / gcd)
}
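For example (the sizes here are just hypothetical, for illustration), asFraction reduces both dimensions by their greatest common divisor, so aspect ratios can be compared directly:
// Hypothetical sizes, just to illustrate the helper:
val previewRatio = asFraction(1920, 1080)  // Pair(16, 9)
val analyzeRatio = asFraction(640, 480)    // Pair(4, 3) -> different ratio, so this size would be rejected
println("preview $previewRatio, analysis $analyzeRatio")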
After getting the camera preview image's aspect ratio, select the analysis image size like below.
val previewFraction = DisplayUtils
    .asFraction(previewSize!!.width.toLong(), previewSize!!.height.toLong())
val analyzeImageSize = characteristics
    .get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP)!!
    .getOutputSizes(ImageFormat.YUV_420_888)
    .filter { DisplayUtils.asFraction(it.width.toLong(), it.height.toLong()) == previewFraction }
    .sortedBy { it.height * it.width }
    .first()
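The question itself uses CameraX rather than Camera2. I haven't verified it on that exact setup, but the same idea of keeping both streams at one aspect ratio can be expressed with CameraX's builders; a minimal sketch, assuming a CameraX version where setTargetAspectRatio is available:
// Sketch only: request the same target aspect ratio for both use cases,
// so the analysis image and the preview share an aspect ratio (their sizes may still differ).
val preview = Preview.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
    .build()

val imageAnalysis = ImageAnalysis.Builder()
    .setTargetAspectRatio(AspectRatio.RATIO_16_9)
    .build()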
Finally, when you have these two sizes, you can calculate the scale factor like below.
val scaleFactor = previewSize.width / analyzedSize.width.toFloat()
Before the bounding box is drawn, multiply each point by the scale factor to get the correct screen coordinates.
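As a minimal sketch (assuming scaleFactor from above, and reusing the question's barcodeBound, canvas and myPaint names), that multiplication could look like this:
// Sketch: map the detector's Rect from analysis-image coordinates to preview coordinates.
fun scaleRect(boundingBox: Rect, scaleFactor: Float): RectF {
    return RectF(
        boundingBox.left * scaleFactor,
        boundingBox.top * scaleFactor,
        boundingBox.right * scaleFactor,
        boundingBox.bottom * scaleFactor
    )
}

// Usage inside the drawing code:
val refinedRect = scaleRect(barcodeBound, scaleFactor)
canvas.drawRect(refinedRect, myPaint)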