
Inference time using Tensorflow Object Detection

By : merlon
Date : November 17 2020, 07:01 PM
Hope this helps. As previous answers stated, you should indeed send multiple requests, because tf-serving incurs some overhead the first time(s). You can prevent this by using a warm-up script (see the sketch after the code block below).
To add some extra options:
code :
from keras.applications.inception_v3 import InceptionV3
from keras.models import Model
from keras.layers import Input, Lambda
import tensorflow as tf

# Base network: InceptionV3 with ImageNet weights, cut at the 'avg_pool'
# layer so it outputs a feature vector instead of class probabilities.
base_model = InceptionV3(
                weights='imagenet',
                include_top=True)

model = Model(
    inputs=base_model.input,
    outputs=base_model.get_layer('avg_pool').output)


def prepare_image(image_str_tensor):
    # Decode a single JPEG byte string into a float32 image tensor.
    image_str_tensor = tf.cast(image_str_tensor, tf.string)
    image = tf.image.decode_jpeg(image_str_tensor, channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    return image


def prepare_image_batch(image_str_tensor):
    # Decode a whole batch of JPEG byte strings.
    return tf.map_fn(prepare_image, image_str_tensor, dtype=tf.float32)


# Accept raw JPEG byte strings as input, which keeps request payloads to
# tf-serving small: drop the original input layer and prepend a decoding step.
model.layers.pop(0)
print(model.layers[0])

input_img = Input(dtype=tf.string,
                  name='string_input',
                  shape=())
outputs = Lambda(prepare_image_batch)(input_img)
outputs = model(outputs)
inception_model = Model(input_img, outputs)
inception_model.compile(optimizer='sgd', loss='categorical_crossentropy')
weights = inception_model.get_weights()
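
For the warm-up script, TensorFlow Serving can replay recorded requests from an assets.extra/tf_serving_warmup_requests file placed inside the exported SavedModel version directory. The snippet below is only a minimal sketch; the model name, signature name, input key, sample image path, and export path are assumptions you would replace with your own.
code :
import tensorflow as tf
from tensorflow_serving.apis import model_pb2, predict_pb2, prediction_log_pb2

# Assumed export layout: <export_dir>/<version>/assets.extra/tf_serving_warmup_requests
warmup_path = 'export/1/assets.extra/tf_serving_warmup_requests'

with open('sample.jpg', 'rb') as f:  # assumed sample image
    image_bytes = f.read()

with tf.python_io.TFRecordWriter(warmup_path) as writer:
    # One dummy request shaped like real traffic; tf-serving replays it at load time.
    request = predict_pb2.PredictRequest(
        model_spec=model_pb2.ModelSpec(name='inception',  # assumed model name
                                       signature_name='serving_default'),
        inputs={'string_input': tf.make_tensor_proto([image_bytes], dtype=tf.string)})
    log = prediction_log_pb2.PredictionLog(
        predict_log=prediction_log_pb2.PredictLog(request=request))
    writer.write(log.SerializeToString())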


Training time of Tensorflow Object Detection API on MSCOCO

By : Kevin Brian Pradipta
Date : March 29 2020, 07:55 AM
Hope this helps. I have never trained a model on COCO using a single GPU; we typically train using ~10 K40 GPUs with asynchronous SGD, which takes 3-4 days to converge on COCO. SSD and R-FCN take about the same amount of time.
Modify and combine two different frozen graphs generated using tensorflow object detection API for inference

By : user3550050
Date : March 29 2020, 07:55 AM
This may help you. Thanks @matt and @Vedanshu for responding. Here is the updated code that works fine for my requirement. Please give suggestions if it needs any improvement, as I am still learning.
code :
# Dependencies
import tensorflow as tf
import numpy as np


# load graphs using pb file path
def load_graph(pb_file):
    graph = tf.Graph()
    with graph.as_default():
        od_graph_def = tf.GraphDef()
        with tf.gfile.GFile(pb_file, 'rb') as fid:
            serialized_graph = fid.read()
            od_graph_def.ParseFromString(serialized_graph)
            tf.import_graph_def(od_graph_def, name='') 
    return graph


# returns tensor dictionaries from graph
def get_inference(graph, count=0):
    with graph.as_default():
        ops = tf.get_default_graph().get_operations()
        all_tensor_names = {output.name for op in ops for output in op.outputs}
        tensor_dict = {}
        for key in ['num_detections', 'detection_boxes', 'detection_scores',
                    'detection_classes', 'detection_masks', 'image_tensor']:
            tensor_name = key + ':0' if count == 0 else key + '_{}:0'.format(count)
            if tensor_name in all_tensor_names:
                tensor_dict[key] = tf.get_default_graph().\
                                        get_tensor_by_name(tensor_name)
        return tensor_dict


# rename the while_context frame names, because each imported graph has its own while loop
# open issue at https://github.com/tensorflow/tensorflow/issues/22162  
def rename_frame_name(graphdef, suffix):
    for n in graphdef.node:
        if "while" in n.name:
            if "frame_name" in n.attr:
                n.attr["frame_name"].s = str(n.attr["frame_name"]).replace("while_context",
                                                                           "while_context" + suffix).encode('utf-8')


if __name__ == '__main__':

    # your pb file paths
    frozenGraphPath1 = '...replace_with_your_path/some_frozen_graph.pb'
    frozenGraphPath2 = '...replace_with_your_path/some_frozen_graph.pb'

    # new file name to save combined model
    combinedFrozenGraph = 'combined_frozen_inference_graph.pb'

    # loads both graphs
    graph1 = load_graph(frozenGraphPath1)
    graph2 = load_graph(frozenGraphPath2)

    # get tensor names from first graph
    tensor_dict1 = get_inference(graph1)

    with graph1.as_default():

        # getting tensors to add crop and resize step
        image_tensor = tensor_dict1['image_tensor']
        scores = tensor_dict1['detection_scores'][0]
        num_detections = tf.cast(tensor_dict1['num_detections'][0], tf.int32)
        detection_boxes = tensor_dict1['detection_boxes'][0]

        # I had to add NMS because my SSD model outputs 100 detections, and the resulting huge tensor shape runs out of memory
        selected_indices = tf.image.non_max_suppression(detection_boxes, scores, 5, iou_threshold=0.5)
        selected_boxes = tf.gather(detection_boxes, selected_indices)

        # intermediate crop-and-resize step, whose output becomes the input for the second model (FRCNN)
        cropped_img = tf.image.crop_and_resize(image_tensor,
                                               selected_boxes,
                                               tf.zeros(tf.shape(selected_indices), dtype=tf.int32),
                                               [300, 60] # resize to 300 X 60
                                               )
        cropped_img = tf.cast(cropped_img, tf.uint8, name='cropped_img')


    gdef1 = graph1.as_graph_def()
    gdef2 = graph2.as_graph_def()

    g1name = "graph1"
    g2name = "graph2"

    # renaming while_context in both graphs
    rename_frame_name(gdef1, g1name)
    rename_frame_name(gdef2, g2name)

    # This combines both models and saves them as one
    with tf.Graph().as_default() as g_combined:

        x, y = tf.import_graph_def(gdef1, return_elements=['image_tensor:0', 'cropped_img:0'])

        z, = tf.import_graph_def(gdef2, input_map={"image_tensor:0": y}, return_elements=['detection_boxes:0'])

        tf.train.write_graph(g_combined, "./", combinedFrozenGraph, as_text=False)
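
As a follow-up, here is a minimal, hedged sketch of how the combined pipeline could be exercised right after it is built, continuing inside the __main__ block above; the test image path is an assumption, and x/z are the tensors returned by the import calls.
code :
    # Quick sanity check of the combined pipeline (assumed test image path).
    import cv2

    test_img = cv2.cvtColor(cv2.imread('test_image.jpg'), cv2.COLOR_BGR2RGB)

    with tf.Session(graph=g_combined) as sess:
        # Feed the first model's image_tensor (x) and fetch the second model's boxes (z).
        boxes = sess.run(z, feed_dict={x: np.expand_dims(test_img, 0)})
        print('detection_boxes from the second model:', boxes.shape)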
Tensorflow Object Detection API has slow inference time with tensorflow serving

By : Anuraw
Date : March 29 2020, 07:55 AM
I wish this helps you. I am unable to match the inference times reported by Google for models released in their model zoo; specifically, I am trying out their faster_rcnn_resnet101_coco model, where the reported inference time is 106 ms on a Titan X GPU. I was able to solve the two problems by
is there a version of the inference example of the Tensorflow Object detection API that can run on batches of images simultaneously

By : James Wells
Date : March 29 2020, 07:55 AM
Hope that helps. Instead of passing just one numpy array of size (1, image_height, image_width, 3), you can pass a numpy array holding your image batch, of size (batch_size, image_height, image_width, 3), to the sess.run command:
code :
output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image_batch})
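For a fuller picture, below is a minimal sketch of building such a batch from disk and running it through a frozen detection graph; the graph path, image file names, and tensor names follow the usual object-detection export and are assumptions here.
code :
import numpy as np
import tensorflow as tf
import cv2

# Assumed path to a standard object-detection export.
PATH_TO_FROZEN_GRAPH = 'frozen_inference_graph.pb'

# Load the frozen graph.
detection_graph = tf.Graph()
with detection_graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
        graph_def.ParseFromString(fid.read())
    tf.import_graph_def(graph_def, name='')

with detection_graph.as_default(), tf.Session() as sess:
    image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
    tensor_dict = {key: detection_graph.get_tensor_by_name(key + ':0')
                   for key in ['detection_boxes', 'detection_scores',
                               'detection_classes', 'num_detections']}

    # All images must share the same height/width so they can be stacked into
    # one array of shape (batch_size, image_height, image_width, 3).
    paths = ['img1.jpg', 'img2.jpg', 'img3.jpg']  # assumed file names
    image_batch = np.stack(
        [cv2.cvtColor(cv2.imread(p), cv2.COLOR_BGR2RGB) for p in paths])

    output_dict = sess.run(tensor_dict, feed_dict={image_tensor: image_batch})
    # Each output now has a leading batch dimension, e.g.
    # output_dict['detection_boxes'].shape == (3, max_detections, 4)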
Tensorflow real time object detection

By : nabil horen
Date : March 29 2020, 07:55 AM
Hope this one helps.
1) With tensorflow you can start with 150-200 images of each class to begin testing, with some decent initial results. You may have to increase the number of images based on the results.
2) Yes.