How to find bounding boxes coordinates in Tensorflow Object Detection API

By : Abdul Saleh
Date : October 17 2020, 06:10 PM
The values in output_dict['detection_boxes'] are indeed in normalized format. The values in the array you provided are all between 0 and 1, so they are reasonable.
There are 100 boxes because the model always outputs the same number of bounding boxes (equal to max_total_detections in the config file). Not all of them are meaningful, though; you need to filter boxes out according to the confidence scores stored in output_dict['detection_scores'].
code :
import numpy as np

boxes = np.squeeze(output_dict['detection_boxes'])
scores = np.squeeze(output_dict['detection_scores'])
#set a min thresh score, say 0.8
min_score_thresh = 0.8
bboxes = boxes[scores > min_score_thresh]

#get image size
im_width, im_height = image.size
final_box = []
for box in bboxes:
    ymin, xmin, ymax, xmax = box
    # convert normalized coordinates to pixels; each entry is [xmin, xmax, ymin, ymax]
    final_box.append([xmin * im_width, xmax * im_width, ymin * im_height, ymax * im_height])
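
If you then want to visualize the detections, the pixel boxes can be used directly with PIL. A minimal sketch, assuming image is the same PIL.Image used above and final_box holds the [xmin, xmax, ymin, ymax] pixel values built above (the drawing call, color and output filename are illustrative, not part of the original answer):
code :
from PIL import ImageDraw

draw = ImageDraw.Draw(image)
for xmin, xmax, ymin, ymax in final_box:
    # PIL's rectangle takes (left, top, right, bottom) corners in pixels
    draw.rectangle([(xmin, ymin), (xmax, ymax)], outline='red', width=3)
image.save('detections_with_boxes.png')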


How to choose coordinates of bounding boxes for object detection from tensorflow

By : Ivan Josa
Date : March 29 2020, 07:55 AM
Ideally, you want your predicted boxes to perfectly overlap the ground truth boxes. This means that if A = [y_min, x_min, y_max, x_max] is the ground truth box, you want the predicted box B to be equal to A, i.e. A = B.
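In practice the overlap is measured with intersection-over-union (IoU): the area of A ∩ B divided by the area of A ∪ B, which equals 1 only when A = B. A minimal sketch of that metric (not from the original answer), using the same [y_min, x_min, y_max, x_max] box order:
code :
def iou(box_a, box_b):
    # Boxes are [y_min, x_min, y_max, x_max]; returns intersection-over-union.
    y_min = max(box_a[0], box_b[0])
    x_min = max(box_a[1], box_b[1])
    y_max = min(box_a[2], box_b[2])
    x_max = min(box_a[3], box_b[3])
    intersection = max(0.0, y_max - y_min) * max(0.0, x_max - x_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return intersection / (area_a + area_b - intersection)

print(iou([0.1, 0.1, 0.5, 0.5], [0.1, 0.1, 0.5, 0.5]))  # 1.0 for identical boxes
print(iou([0.0, 0.0, 0.2, 0.2], [0.5, 0.5, 0.9, 0.9]))  # 0.0 for disjoint boxes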
Return coordinates for bounding boxes Google's Object Detection API

By : Tapas kar
Date : March 29 2020, 07:55 AM
The Google Object Detection API returns bounding boxes in the format [ymin, xmin, ymax, xmax] and in normalized form (full explanation here). To find the (x, y) pixel coordinates we need to multiply the results by the width and height of the image. First get the width and height of your image:
code :
width, height = image.size

# i is the index of the detection to convert; boxes come from the detection output
ymin = boxes[0][i][0] * height
xmin = boxes[0][i][1] * width
ymax = boxes[0][i][2] * height
xmax = boxes[0][i][3] * width
print('Top left')
print(xmin, ymin)
print('Bottom right')
print(xmax, ymax)
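To convert every detection rather than a single index i, you can loop over the outputs. A rough sketch under the usual tutorial assumptions that boxes, scores and num_detections are the batched arrays returned by the session run (these variable names are assumptions, not part of the original answer):
code :
width, height = image.size
for i in range(int(num_detections[0])):
    if scores[0][i] < 0.5:  # skip low-confidence detections
        continue
    ymin, xmin, ymax, xmax = boxes[0][i]
    print('detection %d: top-left (%.1f, %.1f), bottom-right (%.1f, %.1f)'
          % (i, xmin * width, ymin * height, xmax * width, ymax * height))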
Get the bounding box coordinates in the TensorFlow object detection API tutorial

By : Kevin Korir
Date : March 29 2020, 07:55 AM
The following fixes the issue. The question was: "I tried printing output_dict['detection_boxes'] but I am not sure what the numbers mean." The numbers are normalized [ymin, xmin, ymax, xmax] coordinates, and the API's visualization code converts them to pixel coordinates like this:
code :
(left, right, top, bottom) = (xmin * im_width, xmax * im_width, 
                              ymin * im_height, ymax * im_height)
def draw_bounding_box_on_image(image,
                           ymin,
                           xmin,
                           ymax,
                           xmax,
                           color='red',
                           thickness=4,
                           display_str_list=(),
                           use_normalized_coordinates=True):
  """Adds a bounding box to an image.
  Bounding box coordinates can be specified in either absolute (pixel) or
  normalized coordinates by setting the use_normalized_coordinates argument.
  Each string in display_str_list is displayed on a separate line above the
  bounding box in black text on a rectangle filled with the input 'color'.
  If the top of the bounding box extends to the edge of the image, the strings
  are displayed below the bounding box.
  Args:
    image: a PIL.Image object.
    ymin: ymin of bounding box.
    xmin: xmin of bounding box.
    ymax: ymax of bounding box.
    xmax: xmax of bounding box.
    color: color to draw bounding box. Default is red.
    thickness: line thickness. Default value is 4.
    display_str_list: list of strings to display in box
                      (each to be shown on its own line).
    use_normalized_coordinates: If True (default), treat coordinates
      ymin, xmin, ymax, xmax as relative to the image.  Otherwise treat
      coordinates as absolute.
  """
  draw = ImageDraw.Draw(image)
  im_width, im_height = image.size
  if use_normalized_coordinates:
    (left, right, top, bottom) = (xmin * im_width, xmax * im_width,
                                  ymin * im_height, ymax * im_height)
  else:
    (left, right, top, bottom) = (xmin, xmax, ymin, ymax)
  draw.line([(left, top), (left, bottom), (right, bottom),
             (right, top), (left, top)], width=thickness, fill=color)
  # ... (the rest of the library helper draws display_str_list above the box)
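For completeness, calling the helper on one normalized detection might look like the following. This is only a sketch; it assumes the full draw_bounding_box_on_image helper from the API's visualization_utils is importable, that output_dict comes from the tutorial's inference step, and the image path and label string are illustrative:
code :
from PIL import Image

image = Image.open('test.jpg')  # any test image
ymin, xmin, ymax, xmax = output_dict['detection_boxes'][0]  # first (normalized) box
draw_bounding_box_on_image(image, ymin, xmin, ymax, xmax,
                           color='red', thickness=4,
                           display_str_list=['detection'],
                           use_normalized_coordinates=True)
image.save('annotated.jpg')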
Get rid of overlapping bounding boxes across different classes in Tensorflow Object Detection API

By : delete
Date : March 29 2020, 07:55 AM
The question was: "I am using the Tensorflow Object Detection API to train my own vehicle detector. When I tested my model using the object detection tutorial, I found instances where a truck is detected as both a car and a truck, with two overlapping bounding boxes around it. I only want to keep the one with the highest detection score." You can use non_max_suppression over all classes:
code :
import tensorflow as tf
from object_detection.core import box_list
from object_detection.core import box_list_ops

corners = tf.constant(boxes, tf.float32)
boxesList = box_list.BoxList(corners)
boxesList.add_field('scores', tf.constant(scores))

iou_thresh = 0.1
max_output_size = 100

sess = tf.Session()
nms = box_list_ops.non_max_suppression(
    boxesList, iou_thresh, max_output_size)
boxes = sess.run(nms.get())
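If you prefer not to depend on the box_list utilities, plain TensorFlow provides tf.image.non_max_suppression, which takes [ymin, xmin, ymax, xmax] boxes plus per-box scores and returns the indices of the boxes to keep. A rough sketch in the same TF1 session style, where boxes and scores are assumed to be the original class-agnostic arrays passed into the snippet above:
code :
import tensorflow as tf

selected = tf.image.non_max_suppression(
    tf.constant(boxes, tf.float32),
    tf.constant(scores, tf.float32),
    max_output_size=100,
    iou_threshold=0.1)

with tf.Session() as sess:
    keep_indices = sess.run(selected)
kept_boxes = [boxes[i] for i in keep_indices]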
Tensorflow Object Detection API Data Augmentation Bounding Boxes

By : Nadeem
Date : March 29 2020, 07:55 AM
Yes, the bounding boxes are affected in the same way. Specifically for random_horizontal_flip, you can verify it by looking at the function, which also receives the boxes; flipping of the bounding boxes is performed inside it. Note that not all augmentation options need to alter the bounding boxes, but those that do alter them accordingly.
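For intuition, a horizontal flip mirrors the normalized x coordinates around the image centre while leaving y untouched. A small illustrative sketch (not the API's internal implementation), using the usual [ymin, xmin, ymax, xmax] order:
code :
def flip_box_horizontally(box):
    # Mirror a normalized [ymin, xmin, ymax, xmax] box across the vertical axis.
    ymin, xmin, ymax, xmax = box
    return [ymin, 1.0 - xmax, ymax, 1.0 - xmin]

print(flip_box_horizontally([0.2, 0.1, 0.6, 0.4]))  # [0.2, 0.6, 0.6, 0.9]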
Related Posts :
  • Fine tuning last x layers of BERT
  • How to use the PASCAL VOC dataset in the xml format to build the model in tensorflow
  • SSD mobilenet model does not detect objects at longer distances
  • How does BERT utilize TPU memories?
  • How to deploy cnn file
  • AttributeError: module 'tensorflow' has no attribute 'ConfigProto'
  • Can I aggregate over gradients in tensorflow-federated?
  • Reduce console verbosity
  • How to extract data/labels back from TensorFlow dataset
  • how to see tensor value of a layer output in keras
  • Is it possible to use Keras to optimize the coefficients of a mathematical function?
  • Converting Python Keras NLP Model to Tensorflowjs
  • How to decode float32 encoded png to tensor?
  • Tesorflow Custom Layer in High level API: throws object has no attribute '_expects_mask_arg' error
  • Partitioned matrix multiplication in tensorflow or pytorch
  • What is a fused kernel (or fused layer) in deep learning?
  • Tensorflow tf.data AUTOTUNE
  • Convert a TensorFlow model in a format that can be served
  • how to properly saving loaded h5 model to pb with TF2
  • How can I deploy a model that i trained on amazon sagemaker locally?
  • Not able to import tensorflow_datasets module in jupyter notebook
  • cuda and cudnn not working after successful installation
  • How to increase the accuracy of my cnn model?
  • tensorflow installation setup tools requirements error
  • How to detect if object is missing in Image using Tensorflow?
  • Nightly TF / Cloned TFX - how to manage Image for Kubeflow?
  • Google Cloud AI Platform Notebook Instance won't use GPU with Jupyter
  • Unable to Enable Tensorflows Eager execution
  • How to change a learning rate for Adam in TF2?
  • TensorFlow - Using class_weights in fit_generator causes memory leak
  • How can I improve f1-score of cnn?
  • How do I add a new feature column to a tf.data.Dataset object?
  • "Model not quantized" even after post-training quantization
  • Write tf.dataset back to TFRecord
  • Can CUDA 10.0 and 10.1 be on the same system?
  • How to export tensorflow models on datalab to use in bigquery?
  • od_graph_def = tf.GraphDef() AttributeError: module 'tensorflow' has no attribute 'GraphDef'
  • AttributeError: 'Sequential' object has no attribute 'run_eagerly'
  • Sequential' object has no attribute '_ckpt_saved_epoch' error when trying to save my model using callback on Keras
  • How to uninstall bazel 0.29.0 in order to install 0.26.1 because of tensorflow
  • Which model (GPT2, BERT, XLNet and etc) would you use for a text classification task? Why?
  • Bayesian Model does not learn with tensorflow probability and keras
  • How to use Transformers for text classification?
  • How can I resolve this error for conda tensorflow installation error?
  • How to understand masked multi-head attention in transformer
  • Tensorflow 2.0 can't use GPU, something wrong in cuDNN? :Failed to get convolution algorithm. This is probably because c
  • How to set the input of a keras subclass model in tensorflow?
  • How to train a simple neural network to implement median filter?
  • No module named 'tensorflow.contrib' while importing tflearn
  • Very Low Accuracy With LSTM
  • Reusable block in Keras' functional API
  • How can I change the following code from pytorch to tensorflow?
  • TensorFlow keeps consuming system memory and stuck during training
  • get_weights is slow with every iteration
  • How to use TPU in TensorFlow custom training loop?
  • python3 recognizes tensorflow, but doesn't recognize any of its attributes
  • OpenVino model optimizer error(FusedBatchNormV3)
  • Keras model gives different prediction on the same input during fit() and predict()
  • Keras: TPU models must have constant shapes for all operations
  • How to deal with correlation between classes in deep learning classification?