Standard Deviation Pooling with Keras

I am trying to implement a standard deviation pooling layer using Keras. The idea is to build a layer with functionality similar to AveragePooling1D, but computing the standard deviation of each window instead of the average.



My first course of action was to try implementing this as a Lambda layer. It should take a 3D tensor of shape (batch_size, time, features) and a stride integer (indicating the size of the window), and return a tensor of shape (batch_size, time, features).
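
To make the intended behaviour concrete, this is roughly what I want the layer to compute, sketched in plain NumPy with non-overlapping windows (window size == stride); the function name and the example stride are just illustrative:

import numpy as np

def stdev_pool_reference(data, stride):
    # Split the time axis into non-overlapping windows of length `stride`
    # and take the standard deviation of each window.
    batch, time, features = data.shape
    num_windows = time // stride                      # any trailing partial window is dropped in this sketch
    trimmed = data[:, :num_windows * stride, :]
    windows = trimmed.reshape(batch, num_windows, stride, features)
    return windows.std(axis=2)                        # shape: (batch, num_windows, features)

x = np.arange(1000, dtype='float32').reshape(1, -1, 10)
print(stdev_pool_reference(x, 20).shape)              # (1, 5, 10)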



My implementation follows:



import tensorflow
import keras
from keras.layers import Dense, TimeDistributed, Lambda, Input
import numpy as np
import keras.backend as K

def stdev_pooling(inputs):
    data, stride = inputs

    stride = K.cast(stride, dtype='int32')

    print K.dtype(stride), K.dtype(data), '---'

    num_windows = K.shape(data)[1] / stride

    idxs = K.arange(num_windows) * stride

    windows = K.map_fn(lambda w: data[:, w: (w + stride), :], idxs, dtype=K.floatx())

    windows = K.permute_dimensions(windows, (1,0,2,3))

    stds = K.map_fn(lambda w: K.std(w, axis=1), windows)

    return stds

ipt = Input(shape=(None,10))
d = TimeDistributed(Dense(10))(ipt)
out = Lambda(stdev_pooling)([d, K.variable(20, dtype='int32', name='stride_var')])

m = keras.Model(inputs=ipt, outputs=out)
x = np.arange(1000).reshape(1,-1,10)
m.predict(x).shape


However, my output (which shows the data types of the stride and data tensors, in that order) is this:



int32 float32 ---
float32 float32 ---


And the stack trace is this:





---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
in ()
1 ipt = Input(shape=(None,10))
2 d = TimeDistributed(Dense(10))(ipt)
----> 3 out = Lambda(stdev_pooling)([d,K.variable(20, dtype='int32', name='stride_var')])

/home/juliano/.local/lib/python2.7/site-packages/keras/engine/base_layer.pyc in __call__(self, inputs, **kwargs)
472 if all([s is not None
473 for s in to_list(input_shape)]):
--> 474 output_shape = self.compute_output_shape(input_shape)
475 else:
476 if isinstance(input_shape, list):

/home/juliano/.local/lib/python2.7/site-packages/keras/layers/core.pyc in compute_output_shape(self, input_shape)
643 if isinstance(input_shape, list):
644 xs = [K.placeholder(shape=shape) for shape in input_shape]
--> 645 x = self.call(xs)
646 else:
647 x = K.placeholder(shape=input_shape)

/home/juliano/.local/lib/python2.7/site-packages/keras/layers/core.pyc in call(self, inputs, mask)
680 if has_arg(self.function, 'mask'):
681 arguments['mask'] = mask
--> 682 return self.function(inputs, **arguments)
683
684 def compute_mask(self, inputs, mask=None):

in stdev_pooling(inputs)
5 print K.dtype(stride), K.dtype(data), '---'
6
----> 7 num_windows = K.shape(data)[1] / stride
8
9 idxs = K.arange(num_windows-1) * stride

/home/juliano/.local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.pyc in binary_op_wrapper(x, y)
848 with ops.name_scope(None, op_name, [x, y]) as name:
849 if isinstance(x, ops.Tensor) and isinstance(y, ops.Tensor):
--> 850 return func(x, y, name=name)
851 elif not isinstance(y, sparse_tensor.SparseTensor):
852 try:

/home/juliano/.local/lib/python2.7/site-packages/tensorflow/python/ops/math_ops.pyc in _div_python2(x, y, name)
972 with ops.name_scope(name, "div", [x, y]) as name:
973 x = ops.convert_to_tensor(x, name="x")
--> 974 y = ops.convert_to_tensor(y, name="y", dtype=x.dtype.base_dtype)
975 x_dtype = x.dtype.base_dtype
976 y_dtype = y.dtype.base_dtype

/home/juliano/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in convert_to_tensor(value, dtype, name, preferred_dtype)
996 name=name,
997 preferred_dtype=preferred_dtype,
--> 998 as_ref=False)
999
1000

/home/juliano/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in internal_convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, ctx)
1092
1093 if ret is None:
-> 1094 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1095
1096 if ret is NotImplemented:

/home/juliano/.local/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in _TensorTensorConversionFunction(t, dtype, name, as_ref)
929 raise ValueError(
930 "Tensor conversion requested dtype %s for Tensor with dtype %s: %r" %
--> 931 (dtype.name, t.dtype.name, str(t)))
932 return t
933

ValueError: Tensor conversion requested dtype int32 for Tensor with dtype float32: 'Tensor("lambda_9/Placeholder_1:0", shape=(), dtype=float32)'



Interestingly, as far as I understand, it says that the stride variable is a float32 which should be converted to int32, even though it was declared as an int32 variable: K.variable(20, dtype='int32', name='stride_var').



What is wrong here? Any help would be much appreciated! Thanks!



EDIT:



As @BlackBear suggested, I added an explicit cast to stride and it seems to have solved part of the problem:



def stdev_pooling(inputs):
    data, stride = inputs

    stride = K.cast(stride, dtype='int32')

    print K.dtype(stride), K.dtype(data), '---'

    num_windows = K.shape(data)[1] / stride

    idxs = K.arange(num_windows) * stride

    windows = K.map_fn(lambda w: data[:, w: (w + stride), :], idxs, dtype=K.floatx())

    windows = K.permute_dimensions(windows, (1,0,2,3))

    stds = K.map_fn(lambda w: K.std(w, axis=1), windows)

    return stds


output:



int32 float32 ---
int32 float32 ---


However, now I get a new error, and I have no idea where it comes from!



Here is the stack trace:





---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
in ()
3 x = np.arange(2000).reshape(2,-1,10)
4
----> 5 m = keras.Model(inputs=ipt, outputs=out)
6
7 m.predict(x).shape

/home/juliano/.local/lib/python2.7/site-packages/keras/legacy/interfaces.pyc in wrapper(*args, **kwargs)
89 warnings.warn('Update your `' + object_name +
90 '` call to the Keras 2 API: ' + signature, stacklevel=2)
---> 91 return func(*args, **kwargs)
92 wrapper._original_function = func
93 return wrapper

/home/juliano/.local/lib/python2.7/site-packages/keras/engine/network.pyc in __init__(self, *args, **kwargs)
91 'inputs' in kwargs and 'outputs' in kwargs):
92 # Graph network
---> 93 self._init_graph_network(*args, **kwargs)
94 else:
95 # Subclassed network

/home/juliano/.local/lib/python2.7/site-packages/keras/engine/network.pyc in _init_graph_network(self, inputs, outputs, name)
235 # Keep track of the network's nodes and layers.
236 nodes, nodes_by_depth, layers, layers_by_depth = _map_graph_network(
--> 237 self.inputs, self.outputs)
238 self._network_nodes = nodes
239 self._nodes_by_depth = nodes_by_depth

/home/juliano/.local/lib/python2.7/site-packages/keras/engine/network.pyc in _map_graph_network(inputs, outputs)
1351 layer=layer,
1352 node_index=node_index,
-> 1353 tensor_index=tensor_index)
1354
1355 for node in reversed(nodes_in_decreasing_depth):

/home/juliano/.local/lib/python2.7/site-packages/keras/engine/network.pyc in build_map(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index)
1338 tensor_index = node.tensor_indices[i]
1339 build_map(x, finished_nodes, nodes_in_progress, layer,
-> 1340 node_index, tensor_index)
1341
1342 finished_nodes.add(node)

/home/juliano/.local/lib/python2.7/site-packages/keras/engine/network.pyc in build_map(tensor, finished_nodes, nodes_in_progress, layer, node_index, tensor_index)
1310 ValueError: if a cycle is detected.
1311 """
-> 1312 node = layer._inbound_nodes[node_index]
1313
1314 # Prevent cycles.

AttributeError: 'NoneType' object has no attribute '_inbound_nodes'



EDIT: I've updated my stdev_pooling function and it now returns the correct output. However, I'm still getting the AttributeError: 'NoneType' object has no attribute '_inbound_nodes' error...

  • Seems like keras calls your function twice: the first time stride is an int32, but the second time it is a float32. I cannot explain why this happens, but adding an explicit cast to int32 could help
    – BlackBear
    Nov 8 at 17:10

  • Yup! It seems like so. I'll try that. Thanks!
    – Juliano Foleiss
    Nov 8 at 17:12

  • @JulianoFoleiss Not related to the error you get: are you sure your implementation is correct? Window size and stride are two different things and I don't see any variable representing window size in your code?
    – today
    Nov 8 at 17:17

  • @today You are right. In this case I am considering that window size == stride, in other words there's no overlap between windows.
    – Juliano Foleiss
    Nov 8 at 17:37

  • @BlackBear I added the explicit cast to int32 right after the beginning of the function and now the error is gone. Thanks! The fact that the function is called twice is interesting. However, now I have another error. I'll edit the question accordingly. Take a look if you want =D
    – Juliano Foleiss
    Nov 8 at 17:45

python python-2.7 tensorflow keras






asked Nov 8 at 16:55 by Juliano Foleiss, edited Nov 8 at 20:47

1 Answer (accepted)

After fiddling a bit more with the code and reading about how Keras interacts with TensorFlow (in many different places, including the source code of both), I figured out what was wrong.



First of all, here's a minimal working example of what I wanted to do:





import tensorflow
import keras
from keras.layers import Dense, TimeDistributed, Lambda, Input
import numpy as np
import keras.backend as K


def stdev_pooling(inputs, stride):

    data = inputs

    padding = K.shape(data)[1] % stride

    data = K.switch(padding > 0, K.temporal_padding(data, padding=(0, stride - padding)), data)

    num_windows = K.shape(data)[1] / stride

    idxs = K.arange(num_windows) * stride

    windows = K.map_fn(lambda w: data[:, w: (w + stride), :], idxs, dtype=K.floatx())

    windows = K.permute_dimensions(windows, (1,0,2,3))

    stds = K.map_fn(lambda w: K.std(w, axis=1), windows)

    return stds

ipt = Input(shape=(None,10))
d = TimeDistributed(Dense(10))(ipt)
# stride is an argument to stdev_pooling, not a signal coming from
# a previous layer. Thus it must be passed in the `arguments`
# dictionary of the `Lambda` layer.
out = Lambda(stdev_pooling, arguments={'stride': 15})(d)

x = np.arange(2000).reshape(2,-1,10)
m = keras.Model(inputs=ipt, outputs=out)
y = m.predict(x)
print y
print y.shape


The problem stemmed from the line out = Lambda(stdev_pooling)([d,K.variable(20, dtype='int32', name='stride_var')]) in the previous code.



When a tensor such as K.variable(...) is passed as an input to the Lambda layer, Keras expects it to be the output of another layer, ultimately traceable back to an Input layer. A raw backend variable has no originating layer, so graph construction fails with AttributeError: 'NoneType' object has no attribute '_inbound_nodes'.
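
So the only real difference between the broken call and the working one is how stride reaches the function:

# Fails: K.variable(...) is not the output of a Keras layer, so graph
# construction cannot trace it back to an Input.
out = Lambda(stdev_pooling)([d, K.variable(20, dtype='int32', name='stride_var')])

# Works: stride is a plain Python argument, stored in the layer's config
# and passed to the function at call time.
out = Lambda(stdev_pooling, arguments={'stride': 15})(d)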



The solution was simply to provide the stride argument through the arguments dictionary of the Lambda layer constructor:



out = Lambda(stdev_pooling, arguments={'stride':15})(d)
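
With the example above, the 100 time steps per sample should get padded to 105 and pooled into 7 non-overlapping windows of 15, so the final shape check should give:

m = keras.Model(inputs=ipt, outputs=out)
y = m.predict(np.arange(2000).reshape(2, -1, 10))  # 2 samples, 100 time steps, 10 features
print y.shape                                      # expected: (2, 7, 10)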



I hope this code helps anyone trying to build some sort of pooling layer in Keras. When I have some time I shall write it as a proper pooling Layer. For now, this Lambda version should do.
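
In case it is useful, here is a rough, untested sketch of what that "proper" layer could look like as a Layer subclass. The class name and details are my own additions and simply repackage the Lambda logic above, assuming Keras 2.x:

import keras
import keras.backend as K


class StdevPooling1D(keras.layers.Layer):
    """Hypothetical non-overlapping std-dev pooling over the time axis.
    Window size == stride, mirroring the Lambda version above."""

    def __init__(self, stride, **kwargs):
        super(StdevPooling1D, self).__init__(**kwargs)
        self.stride = stride

    def call(self, inputs):
        data = inputs
        # Pad the time axis so its length is a multiple of `stride`.
        padding = K.shape(data)[1] % self.stride
        data = K.switch(padding > 0,
                        K.temporal_padding(data, padding=(0, self.stride - padding)),
                        data)
        num_windows = K.shape(data)[1] // self.stride
        idxs = K.arange(num_windows) * self.stride
        # One slice of length `stride` per window, then std over each window.
        windows = K.map_fn(lambda w: data[:, w: (w + self.stride), :],
                           idxs, dtype=K.floatx())
        windows = K.permute_dimensions(windows, (1, 0, 2, 3))
        return K.map_fn(lambda w: K.std(w, axis=1), windows)

    def compute_output_shape(self, input_shape):
        batch, time, features = input_shape
        pooled = None if time is None else (time + self.stride - 1) // self.stride
        return (batch, pooled, features)

    def get_config(self):
        config = super(StdevPooling1D, self).get_config()
        config['stride'] = self.stride
        return config

Usage would then be something like out = StdevPooling1D(stride=15)(d) in place of the Lambda call.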






answered Nov 9 at 13:15 by Juliano Foleiss
