Unable to test and deploy a deeplabv3-mobilenetv2 tensorflow-lite segmentation model for inference
We are trying to run a semantic segmentation model on Android using DeepLabv3 and MobileNetV2. We followed the official TensorFlow Lite conversion procedure using TOCO and tflite_convert with the help of Bazel. The source frozen graph was obtained from the official TensorFlow DeepLab Model Zoo.
We were able to successfully convert the model with the following command:
CUDA_VISIBLE_DEVICES="0" toco --output_file=toco256.tflite
--graph_def_file=path/to/deeplab/deeplabv3_mnv2_pascal_trainval/frozen_inference_graph.pb
--input_arrays=ImageTensor --output_arrays=SemanticPredictions --input_shapes=1,256,256,3 --inference_input_type=QUANTIZED_UINT8 --inference_type=FLOAT --mean_values=128 --std_dev_values=127 --allow_custom_ops --post_training_quantize
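For reference, the equivalent conversion through the Python API looks roughly like the sketch below. This is only a sketch, assuming the TF 1.x tf.contrib.lite.TFLiteConverter interface (the exact module path and class name vary slightly across 1.x releases); paths are placeholders.

import tensorflow as tf

# Sketch of the same conversion via the Python API (assumed TF 1.x contrib.lite interface).
converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="path/to/deeplab/deeplabv3_mnv2_pascal_trainval/frozen_inference_graph.pb",
    input_arrays=["ImageTensor"],
    output_arrays=["SemanticPredictions"],
    input_shapes={"ImageTensor": [1, 256, 256, 3]})
converter.inference_input_type = tf.uint8                          # --inference_input_type=QUANTIZED_UINT8
converter.quantized_input_stats = {"ImageTensor": (128.0, 127.0)}  # (mean, std_dev)
converter.allow_custom_ops = True
converter.post_training_quantize = True
tflite_model = converter.convert()
with open("toco256.tflite", "wb") as f:
    f.write(tflite_model)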
The size of the resulting .tflite file was around 2.25 MB. But when we tried to test the model using the official benchmark tool, it failed with the following error report:
bazel run -c opt tensorflow/contrib/lite/tools/benchmark:benchmark_model -- --graph=`realpath toco256.tflite`
INFO: Analysed target //tensorflow/contrib/lite/tools/benchmark:benchmark_model (0 packages loaded).
INFO: Found 1 target...
Target //tensorflow/contrib/lite/tools/benchmark:benchmark_model up-to-date:
bazel-bin/tensorflow/contrib/lite/tools/benchmark/benchmark_model
INFO: Elapsed time: 0.154s, Critical Path: 0.00s
INFO: 0 processes.
INFO: Build completed successfully, 1 total action
INFO: Running command line: bazel-bin/tensorflow/contrib/lite/tools/benchmark/benchmark_model '--graph=path/to/deeplab/venINFO: Build completed successfully, 1 total action
STARTING!
Num runs: [50]
Inter-run delay (seconds): [-1]
Num threads: [1]
Benchmark name:
Output prefix:
Warmup runs: [1]
Graph: path/to/venv/tensorflow/toco256.tflite]
Input layers:
Input shapes:
Use nnapi : [0]
Loaded model path/to/venv/tensorflow/toco256.tflite
resolved reporter
Initialized session in 45.556ms
Running benchmark for 1 iterations
tensorflow/contrib/lite/kernels/pad.cc:96 op_context.dims != 4 (3 != 4)
Node number 24 (PAD) failed to prepare.
Failed to invoke!
Aborted (core dumped)
We also tried the same command without the 'allow_custom_ops' and 'post_training_quantize' options, and even used an input size of 1,513,513,3; but the result was the same.
This issue seems similar to the following GitHub issue: https://github.com/tensorflow/tensorflow/issues/21266. However, in the latest version of TensorFlow the issue is supposed to be fixed.
Model: http://download.tensorflow.org/models/deeplabv3_mnv2_pascal_trainval_2018_01_29.tar.gz
Tensorflow version: 1.11
Bazel version: 0.17.2
OS: Ubuntu 18.04
Also, the Android application was not able to load the model properly (TFLite interpreter).
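(For completeness, a minimal Python-side check with the TF Lite interpreter can be used to reproduce such failures outside Android. This is only a sketch, assuming the tf.contrib.lite.Interpreter API available in TF 1.x and a dummy input:)

import numpy as np
import tensorflow as tf

# Sketch: load the converted model and feed a dummy frame; "failed to prepare"
# errors typically surface at allocate_tensors()/invoke() rather than at load time.
interpreter = tf.contrib.lite.Interpreter(model_path="toco256.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
print(input_details)

dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]).shape)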
So, how can we properly convert a segmentation model to the TFLite format so that it can be used for inference on an Android device?
UPDATE:
Using TensorFlow 1.12, we got a new error:
$ bazel run -c opt tensorflow/lite/tools/benchmark:benchmark_model -- --graph=`realpath /path/to/research/deeplab/venv/tensorflow/toco256.tflite`
tensorflow/lite/kernels/depthwise_conv.cc:99 params->depth_multiplier * SizeOfDimension(input, 3) != SizeOfDimension(filter, 3) (0 != 32)
Node number 30 (DEPTHWISE_CONV_2D) failed to prepare.
Also, while using a newer version of the same model (a 3 MB .pb file) with depth_multiplier=0.5 from the TensorFlow DeepLab model zoo, we got a different error:
F tensorflow/lite/toco/graph_transformations/propagate_fixed_sizes.cc:116] Check failed: dim_x == dim_y (3 vs. 32)Dimensions must match
In this case we used the same conversion command as above, but we were not even able to produce a .tflite file as output. It seems to be an issue with the depth multiplier value (we even tried passing the depth_multiplier parameter as an argument at conversion time).
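(One way to double-check what the converter actually sees, in terms of node names, dtypes, and shapes, is to dump the input placeholders and candidate output nodes from the frozen graph. A sketch using the TF 1.x GraphDef API; the output node names other than SemanticPredictions are only guesses and need to be verified, e.g. with Netron:)

import tensorflow as tf

# Sketch: list input placeholders and a few likely output nodes of the frozen graph,
# to verify the names/shapes passed to toco/tflite_convert.
graph_def = tf.GraphDef()
with tf.gfile.GFile("path/to/frozen_inference_graph.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

for node in graph_def.node:
    if node.op == "Placeholder":
        print("input:", node.name, node.attr["dtype"].type, node.attr["shape"].shape)
    if node.name in ("SemanticPredictions", "ArgMax", "ResizeBilinear_2"):  # guessed names
        print("candidate output:", node.name, node.op)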
android tensorflow tensorflow-lite deeplab
edited Nov 15 at 15:44
asked Nov 9 at 15:47
user3177661
1 Answer
I have the same issue. From https://github.com/tantara/JejuNet I see that he was able to convert the model to tflite successfully. I PMed him for help, but unfortunately there is no response so far.
answered Nov 23 at 1:25
Alpha
Actually, we were also trying the same thing and we even had a discussion with the same person. Unfortunately, we still couldn't resolve the issue. But it looks like the problem lies in the initial layers of the frozen graph, where some preprocessing operations related to size or dimension adjustment are carried out (the input is [1, ?, ?, 3], to accept arbitrary sizes). Some of these may not be supported by TensorFlow Lite, which expects fixed-size inputs. Maybe if we remove or skip them it will work; otherwise we may have to retrain the network after modifying it.
– user3177661
Nov 24 at 3:51
Also, refer to github.com/tensorflow/tensorflow/issues/23747
– user3177661
Nov 24 at 3:53
I dumped the JejuNet tflite; it is very different from what we converted from the official model. Its input node is MobilenetV2/MobilenetV2/input with type uint8[1,256,256,3], and it has only 71 ops in total, while ours has 156.
– Alpha
Nov 26 at 2:11
Yes, we can inspect the whole network using Netron!
– user3177661
Nov 26 at 9:16
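Following up on the comments above: one workaround that is often suggested for these DeepLab graphs is to skip the dynamic preprocessing ops entirely by converting from an internal node, and to do the resize/normalization and the final ArgMax in the application instead. The command below is only a sketch; the node names sub_7 and ResizeBilinear_2 are assumptions about this particular frozen graph and should be confirmed with Netron before use.

tflite_convert \
  --output_file=deeplab_mnv2_256.tflite \
  --graph_def_file=path/to/deeplab/deeplabv3_mnv2_pascal_trainval/frozen_inference_graph.pb \
  --input_arrays=sub_7 \
  --output_arrays=ResizeBilinear_2 \
  --input_shapes=1,256,256,3 \
  --inference_type=FLOAT

With this kind of conversion the app has to resize the frame to 256x256, normalize it to roughly (pixel - 127.5) / 127.5 (the MobileNetV2 preprocessing), and take the argmax over the class dimension of the output itself.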