Unable to test and deploy a deeplabv3-mobilenetv2 tensorflow-lite segmentation model for inference

We are trying to run a semantic segmentation model on Android using DeepLabV3 and MobileNetV2. We followed the official TensorFlow Lite conversion procedure using TOCO and tflite_convert with the help of Bazel. The source frozen graph was obtained from the official TensorFlow DeepLab Model Zoo.



We were able to successfully convert the model with the following command:




CUDA_VISIBLE_DEVICES="0" toco --output_file=toco256.tflite \
  --graph_def_file=path/to/deeplab/deeplabv3_mnv2_pascal_trainval/frozen_inference_graph.pb \
  --input_arrays=ImageTensor --output_arrays=SemanticPredictions \
  --input_shapes=1,256,256,3 --inference_input_type=QUANTIZED_UINT8 \
  --inference_type=FLOAT --mean_values=128 --std_dev_values=127 \
  --allow_custom_ops --post_training_quantize

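The equivalent conversion can also be expressed through the Python converter API. This is only a sketch under the assumption that it mirrors the flags above (in TF 1.12 the class is tf.contrib.lite.TFLiteConverter; in TF 1.11 it is still called TocoConverter); we have not verified that it behaves any differently from the command line:

    import tensorflow as tf

    # Mirror of the toco command line above: frozen graph in, fixed
    # 1x256x256x3 uint8 input, float inference, post-training quantization.
    converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
        "path/to/deeplab/deeplabv3_mnv2_pascal_trainval/frozen_inference_graph.pb",
        input_arrays=["ImageTensor"],
        output_arrays=["SemanticPredictions"],
        input_shapes={"ImageTensor": [1, 256, 256, 3]})
    converter.inference_input_type = tf.uint8  # QUANTIZED_UINT8
    converter.quantized_input_stats = {"ImageTensor": (128.0, 127.0)}  # (mean, std_dev)
    converter.allow_custom_ops = True
    converter.post_training_quantize = True
    with open("toco256.tflite", "wb") as f:
        f.write(converter.convert())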



The size of the tflite file was around 2.25 MB. But when we tried to test the model using the official benchmark tool, it failed with the following error report:



bazel run -c opt tensorflow/contrib/lite/tools/benchmark:benchmark_model -- --graph=`realpath toco256.tflite`
INFO: Analysed target //tensorflow/contrib/lite/tools/benchmark:benchmark_model (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/contrib/lite/tools/benchmark:benchmark_model up-to-date:
bazel-bin/tensorflow/contrib/lite/tools/benchmark/benchmark_model
INFO: Elapsed time: 0.154s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/tools/benchmark/benchmark_model '--graph=path/to/deeplab/venINFO: Build completed successfully, 1 total action
STARTING!
Num runs: [50]
Inter-run delay (seconds): [-1]
Num threads: [1]
Benchmark name:
Output prefix:
Warmup runs: [1]
Graph: path/to/venv/tensorflow/toco256.tflite]
Input layers:
Input shapes:
Use nnapi : [0]
Loaded model path/to/venv/tensorflow/toco256.tflite
resolved reporter
Initialized session in 45.556ms
Running benchmark for 1 iterations
tensorflow/contrib/lite/kernels/pad.cc:96 op_context.dims != 4 (3 != 4)
Node number 24 (PAD) failed to prepare.

Failed to invoke!
Aborted (core dumped)
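
The same failure can be reproduced from Python, which also makes it easy to check what the converter actually produced. A minimal sketch, assuming the TF 1.x interpreter API: reading the tensor metadata works without preparing the graph, while allocate_tensors() runs the same prepare step as the benchmark tool and should surface the PAD error:

    import tensorflow as tf

    interpreter = tf.contrib.lite.Interpreter(model_path="toco256.tflite")
    # Tensor metadata (names, shapes, dtypes) is readable without allocation.
    print(interpreter.get_input_details())   # expected: uint8 [1, 256, 256, 3]
    print(interpreter.get_output_details())  # expected: SemanticPredictions
    # allocate_tensors() runs each op's Prepare(); for this model it is where
    # the "PAD ... failed to prepare" error should show up.
    interpreter.allocate_tensors()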


We also tried the same command without the 'allow_custom_ops' and 'post_training_quantize' options, and even used the model's native input size of 1,513,513,3, but the result was the same.



This issue seems to be similar to the following GitHub issue: https://github.com/tensorflow/tensorflow/issues/21266. However, in the latest version of TensorFlow the issue is supposed to be fixed.



Model: http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz
TensorFlow version: 1.11
Bazel version: 0.17.2
OS: Ubuntu 18.04



Also, the Android application was not able to load the model properly (tflite interpreter).



So, how can we properly convert a segmentation model to the tflite format so that it can be used for inference on an Android device?



UPDATE:



Using TensorFlow 1.12, we got a new error:



$ bazel run -c opt tensorflow/lite/tools/benchmark:benchmark_model -- --graph=`realpath /path/to/research/deeplab/venv/tensorflow/toco256.tflite`

tensorflow/lite/kernels/depthwise_conv.cc:99 params->depth_multiplier * SizeOfDimension(input, 3) != SizeOfDimension(filter, 3) (0 != 32)
Node number 30 (DEPTHWISE_CONV_2D) failed to prepare.


Also, while using a newer version of the same model (a 3 MB .pb file) with depth_multiplier=0.5 from the TensorFlow DeepLab model zoo, we got a different error:



F tensorflow/lite/toco/graph_transformations/propagate_fixed_sizes.cc:116] Check failed: dim_x == dim_y (3 vs. 32)Dimensions must match


In this case we used the same aforementioned command for the tflite conversion, but we were not even able to produce a .tflite file as output. It seems to be an issue with the depth multiplier values (we even tried passing the depth_multiplier parameter as an argument at conversion time).

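One workaround we are considering, based on the discussion in the issues above and in the comments below, is to convert only the fixed-shape core of the network and to reimplement the dynamic pre-processing (resizing/normalizing the [1, ?, ?, 3] ImageTensor) and the post-processing on the Android side. A sketch, with the caveat that the boundary node names below ("sub_7", "ResizeBilinear_2") are only illustrative and have to be looked up in the actual frozen graph (e.g. with Netron):

    import tensorflow as tf

    # Convert only the section of the graph between the dynamic pre-processing
    # and the post-processing; both boundary tensors have static shapes.
    converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
        "frozen_inference_graph.pb",
        input_arrays=["sub_7"],              # hypothetical: first node after pre-processing
        output_arrays=["ResizeBilinear_2"],  # hypothetical: last node before the ArgMax/resize
        input_shapes={"sub_7": [1, 256, 256, 3]})
    with open("deeplab_core.tflite", "wb") as f:
        f.write(converter.convert())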









android tensorflow tensorflow-lite deeplab

edited Nov 15 at 15:44
asked Nov 9 at 15:47 by user3177661
1 Answer
I have the same issue. From https://github.com/tantara/JejuNet I can see that the author successfully converted the model to tflite. I messaged him for help, but unfortunately there has been no response so far.

answered Nov 23 at 1:25 by Alpha
• Actually, we were also trying the same thing, and we even had a discussion with the same person. Unfortunately, we still couldn't resolve the issue. It looks like the problem lies in the initial layers of the frozen graph, where some preprocessing operations related to size or dimension adjustment are carried out (the input is [1, ?, ?, 3], to accept arbitrary sizes). Some of these may not be supported by TensorFlow Lite, which expects fixed-size inputs. Maybe it would work if we removed or skipped these; otherwise we may have to retrain the network after modifying it.
  – user3177661, Nov 24 at 3:51

• Also, refer to github.com/tensorflow/tensorflow/issues/23747
  – user3177661, Nov 24 at 3:53

• I dumped the JejuNet tflite; it's very different from the one we converted from the official model. Its input node is MobilenetV2/MobilenetV2/input with type uint8[1,256,256,3], and it has only 71 ops in total, while ours has 156.
  – Alpha, Nov 26 at 2:11

• Yes, we can inspect the whole network using Netron!
  – user3177661, Nov 26 at 9:16