Conversation
- @register_torch_op(torch_alias=["conv2d"])
+ @register_torch_op(torch_alias=["conv1d", "conv2d", "conv3d", "conv_transpose1d", "conv_transpose2d", "conv_transpose3d"])
Since we already have passing unit tests for conv1d, I don't think we should add an alias for it here. PyTorch must be lowering it to other ops which we do support.
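This claim can be checked directly: the graphs captured by tracing and scripting may contain different op kinds, which would explain why a scripted model hits a missing conv1d alias while a traced one does not. A minimal sketch (the module `M` and the input shapes are illustrative, not from this PR):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(3, 3, 3)

    def forward(self, x):
        return self.conv(x)

m = M().eval()
example = torch.rand(1, 3, 7)

# Tracing records the ops actually dispatched at run time; scripting
# compiles the Python source, so ops appear as written in the code.
# inlined_graph flattens submodule calls so the aten ops are visible.
traced_kinds = {n.kind() for n in torch.jit.trace(m, example).inlined_graph.nodes()}
scripted_kinds = {n.kind() for n in torch.jit.script(m).inlined_graph.nodes()}

print("traced:  ", sorted(traced_kinds))
print("scripted:", sorted(scripted_kinds))
```

Comparing the two printed sets shows which convolution op each capture path emits, and therefore which torch op name the converter must have registered.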
Can you then help me understand why I get the following?
the following model ops are IMPLEMENTED:
add
atan2
clamp
complex
constant
constantchunk
cos
exp
gelu
layer_norm
linear
mul
sin
tensor
transpose
the following model ops are MISSING:
conv1d
floatimplicit
istft
Traceback (most recent call last):
File "/home/aalvarez/Projects/main/apps/tts-train/vocoder/export.py", line 73, in <module>
main()
File "/home/aalvarez/Projects/main/apps/tts-train/vocoder/export.py", line 54, in main
model.to_coreml(
File "/home/aalvarez/.virtualenvs/tts-train-XZ1ykfT_-py3.9/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/home/aalvarez/Projects/main/apps/tts-train/vocoder/vocos/__init__.py", line 502, in to_coreml
coreml_mdl = ct.convert(
File "/home/aalvarez/Projects/coremltools/coremltools/converters/_converters_entry.py", line 553, in convert
mlmodel = mil_convert(
File "/home/aalvarez/Projects/coremltools/coremltools/converters/mil/converter.py", line 188, in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
File "/home/aalvarez/Projects/coremltools/coremltools/converters/mil/converter.py", line 212, in _mil_convert
proto, mil_program = mil_convert_to_proto(
File "/home/aalvarez/Projects/coremltools/coremltools/converters/mil/converter.py", line 286, in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
File "/home/aalvarez/Projects/coremltools/coremltools/converters/mil/converter.py", line 108, in __call__
return load(*args, **kwargs)
File "/home/aalvarez/Projects/coremltools/coremltools/converters/mil/frontend/torch/load.py", line 75, in load
return _perform_torch_convert(converter, debug)
File "/home/aalvarez/Projects/coremltools/coremltools/converters/mil/frontend/torch/load.py", line 122, in _perform_torch_convert
raise e
File "/home/aalvarez/Projects/coremltools/coremltools/converters/mil/frontend/torch/load.py", line 114, in _perform_torch_convert
prog = converter.convert()
File "/home/aalvarez/Projects/coremltools/coremltools/converters/mil/frontend/torch/converter.py", line 484, in convert
convert_nodes(self.context, self.graph)
File "/home/aalvarez/Projects/coremltools/coremltools/converters/mil/frontend/torch/ops.py", line 88, in convert_nodes
raise RuntimeError(
RuntimeError: PyTorch convert function for op 'conv1d' not implemented.
Are you calling torch.jit.trace prior to conversion?
No, I'm using torch.jit.script only, and I guess that's where the discrepancy lies.
In my experience, using torch.jit.script is much more reliable because of:
- data-dependent models with conditionals (the main point)
- the ability to add extra functions
- no example-input boilerplate
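The first point can be illustrated with a toy model (the `Gate` module below is hypothetical, not from this PR): scripting preserves both branches of a data-dependent conditional, while tracing bakes in whichever branch the example input happened to take.

```python
import torch

class Gate(torch.nn.Module):
    def forward(self, x):
        # Data-dependent control flow: the branch depends on input values.
        if x.sum() > 0:
            return x + 1
        return x - 1

pos = torch.ones(3)
neg = -torch.ones(3)

scripted = torch.jit.script(Gate())
traced = torch.jit.trace(Gate(), pos)  # emits a TracerWarning: branch is baked in

print(scripted(neg).tolist())  # [-2.0, -2.0, -2.0]  (takes the else branch)
print(traced(neg).tolist())    # [0.0, 0.0, 0.0]     (replays x + 1 regardless)
```

The traced module silently computes the wrong result on negative inputs, which is why scripting matters for models like vocoders with conditional paths.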
@alealv - thanks for the pull request. In order to merge this, we need unit tests. Please add unit tests for these new aliases.
@alealv does the
I'm also facing the same issue, but with
@katelyn-chen - I don't know. That is a PyTorch issue. I suggest you ask in a PyTorch forum.
I just tried with tracing, as you suggested. Here are the differences.
@alealv - if you want to add
Regarding your
force-pushed: 906f052 to f31b8de
I just updated the convolution tests so that they are also run with torch scripting.
if padding == "same" and stride != 1:
    return
  class FunctionalConv1D(nn.Module):
+     def __init__(self):
+         super(FunctionalConv1D, self).__init__()
+         self.stride = stride
+         self.padding = padding
+         self.groups = groups

      def forward(self, input_data, weights):
          return nn.functional.conv1d(
-             input_data, weights, stride=stride, padding=padding
+             input_data, weights, stride=self.stride, padding=self.padding, groups=self.groups
          )

- model = DynamicConv()
+ model = FunctionalConv1D().eval()
With this code I get the following error:
coremltools/converters/mil/frontend/torch/test/test_torch_ops.py:1710:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
coremltools/converters/mil/frontend/torch/test/testing_utils.py:286: in run_compare_torch
model_spec, mlmodel, coreml_inputs, coreml_results = convert_and_compare(
coremltools/converters/mil/frontend/torch/test/testing_utils.py:195: in convert_and_compare
mlmodel = convert_to_mlmodel(model_spec, input_data, backend=backend,
coremltools/converters/mil/frontend/torch/test/testing_utils.py:120: in convert_to_mlmodel
return ct_convert(model_spec, inputs=inputs, convert_to=backend,
coremltools/converters/mil/testing_utils.py:466: in ct_convert
mlmodel = converter(
coremltools/converters/_converters_entry.py:583: in convert
mlmodel = mil_convert(
coremltools/converters/mil/converter.py:188: in mil_convert
return _mil_convert(model, convert_from, convert_to, ConverterRegistry, MLModel, compute_units, **kwargs)
coremltools/converters/mil/converter.py:212: in _mil_convert
proto, mil_program = mil_convert_to_proto(
coremltools/converters/mil/converter.py:286: in mil_convert_to_proto
prog = frontend_converter(model, **kwargs)
coremltools/converters/mil/converter.py:108: in __call__
return load(*args, **kwargs)
coremltools/converters/mil/frontend/torch/load.py:80: in load
return _perform_torch_convert(converter, debug)
coremltools/converters/mil/frontend/torch/load.py:99: in _perform_torch_convert
prog = converter.convert()
coremltools/converters/mil/frontend/torch/converter.py:519: in convert
convert_nodes(self.context, self.graph)
coremltools/converters/mil/frontend/torch/ops.py:89: in convert_nodes
add_op(context, node)
coremltools/converters/mil/frontend/torch/ops.py:729: in listconstruct
_array_construct(context, node, array_type=list)
coremltools/converters/mil/frontend/torch/ops.py:702: in _array_construct
inputs = _get_inputs(context, node)
coremltools/converters/mil/frontend/torch/ops.py:224: in _get_inputs
inputs = get_bindings(node.inputs)
coremltools/converters/mil/frontend/torch/ops.py:210: in get_bindings
results.append(context[i])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = %input_data.1 : (1, 3, 7, fp32)
%weights.1 : (3, 3, 3, fp32)
%3 : (1,int32)*
%4 : None
, torch_name = 'stride'
def __getitem__(self, torch_name: str) -> Var:
"""
Lookup a name in the context. Note that since nested blocks must be
able to access anything that was defined before them, we have to
search all contexts for a name, starting with the most local scope.
"""
for idx in reversed(range(len(self._current_graph))):
current_graph = self._current_graph[idx]
if torch_name in current_graph:
return self._current_graph[idx][torch_name]
> raise ValueError(f"Torch var {torch_name} not found in context {self.name}")
E ValueError: Torch var stride not found in context
coremltools/converters/mil/frontend/torch/converter.py:251: ValueError

Can someone help me fix this? I don't know coremltools deeply enough to solve it.
If I try to follow how it was done previously:
class FunctionalConv1D(nn.Module):
    def forward(self, input_data, weights):
        return nn.functional.conv1d(
            input_data, weights, stride=stride, padding=padding, groups=groups
        )

model = FunctionalConv1D().eval()
input_shape = [
    (1, in_channels, width),
    (out_channels, int(in_channels / groups), kernel_size),
]

I get:
coremltools/converters/mil/frontend/torch/test/test_torch_ops.py:1705:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
coremltools/converters/mil/frontend/torch/test/testing_utils.py:267: in run_compare_torch
model_spec = torch.jit.script(model)
envs/coremltools-dev-py3.9/lib/python3.9/site-packages/torch/jit/_script.py:1324: in script
return torch.jit._recursive.create_script_module(
envs/coremltools-dev-py3.9/lib/python3.9/site-packages/torch/jit/_recursive.py:559: in create_script_module
return create_script_module_impl(nn_module, concrete_type, stubs_fn)
envs/coremltools-dev-py3.9/lib/python3.9/site-packages/torch/jit/_recursive.py:636: in create_script_module_impl
create_methods_and_properties_from_stubs(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
concrete_type = <torch.ConcreteModuleType object at 0x7f65d8224970>
method_stubs = [ScriptMethodStub(resolution_callback=<function createResolutionCallbackFromEnv.<locals>.<lambda> at 0x7f65d88a2b80>, ...l_method=<bound method T
estFunctionalConv.test_convolution1d.<locals>.FunctionalConv1D.forward of FunctionalConv1D()>)]
property_stubs = []
def create_methods_and_properties_from_stubs(
concrete_type, method_stubs, property_stubs
):
method_defs = [m.def_ for m in method_stubs]
method_rcbs = [m.resolution_callback for m in method_stubs]
method_defaults = [get_default_args(m.original_method) for m in method_stubs]
property_defs = [p.def_ for p in property_stubs]
property_rcbs = [p.resolution_callback for p in property_stubs]
> concrete_type._create_methods_and_properties(
property_defs, property_rcbs, method_defs, method_rcbs, method_defaults
)
E RuntimeError:
E python value of type 'int' cannot be used as a value. Perhaps it is a closed over global variable? If so, please consider passing it in as an argument or use a local variable instead.:
E File "/root/coremltools/coremltools/converters/mil/frontend/torch/test/test_torch_ops.py", line 1697
E def forward(self, input_data, weights):
E return nn.functional.conv1d(
E input_data, weights, stride=stride, padding=padding, groups=groups
E ~~~~~~ <--- HERE
E )
envs/coremltools-dev-py3.9/lib/python3.9/site-packages/torch/jit/_recursive.py:469: RuntimeError
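This RuntimeError is TorchScript refusing to close over Python globals in forward. The fix adopted in the diff earlier in this thread — capturing the values as module attributes in __init__, which runs eagerly in Python — can be sketched end to end (the hyperparameter values and shapes here are arbitrary, chosen for illustration):

```python
import torch
import torch.nn as nn

stride, padding, groups = 1, 0, 1  # module-level values forward() must not close over

class FunctionalConv1D(nn.Module):
    def __init__(self):
        super().__init__()
        # __init__ executes as ordinary Python, so reading the globals here
        # is fine; forward then only touches self.* attributes, which
        # TorchScript can compile.
        self.stride = stride
        self.padding = padding
        self.groups = groups

    def forward(self, input_data, weights):
        return nn.functional.conv1d(
            input_data, weights, stride=self.stride,
            padding=self.padding, groups=self.groups,
        )

scripted = torch.jit.script(FunctionalConv1D().eval())
out = scripted(torch.rand(1, 3, 7), torch.rand(3, 3, 3))
print(out.shape)  # torch.Size([1, 3, 5])
```

With length 7, kernel 3, stride 1, and no padding, the output length is 7 - 3 + 1 = 5.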
These look like errors related to trying to convert PyTorch models which haven't been traced. Do you only get these errors when use_scripting=True? That causes the model not to get traced.
Yes, and that's exactly what I want: I want to add tests for scripting.
I'm trying to figure out how the bias input is handled. The test fails with:

E ValueError: Torch var bias not found in context

I'm getting this problem only when scripting. Here is the output graph:

graph(%self : __torch__.torch.nn.modules.conv.Conv1d,
%input.1 : Tensor):
%weight : Tensor = prim::GetAttr[name="weight"](%self)
%bias : Tensor? = prim::GetAttr[name="bias"](%self)
%4 : int = prim::Constant[value=1]() # /root/coremltools/envs/coremltools-dev-py3.9/lib/python3.9/site-packages/torch/nn/modules/conv.py:306:45
%5 : int = prim::Constant[value=0]() # /root/coremltools/envs/coremltools-dev-py3.9/lib/python3.9/site-packages/torch/nn/modules/conv.py:307:24
%6 : int = prim::Constant[value=3]() # /root/coremltools/envs/coremltools-dev-py3.9/lib/python3.9/site-packages/torch/nn/modules/conv.py:307:38
%7 : int[] = prim::ListConstruct(%4)
%8 : int[] = prim::ListConstruct(%5)
%9 : int[] = prim::ListConstruct(%6)
%10 : Tensor = aten::conv1d(%input.1, %weight, %bias, %7, %8, %9, %4) # /root/coremltools/envs/coremltools-dev-py3.9/lib/python3.9/site-packages/torch/nn/modules/conv.py:306:15
return (%10)

So my understanding is that we have only one input, the input tensor; weight and bias come from prim::GetAttr nodes. How does the converter handle the bias? Inspecting it, I see:

Node: %bias : Tensor? = prim::GetAttr[name="bias"](%self)
Type: <class 'torch.Node'>
Is tensor: False
Is quantize tensor: False
prefix: bias
Module: None

And we have the following, where if it's a tensor it does nothing:

def _lower_graph_block(graph):
    for node in list(graph.nodes()):
        ...
        is_tensor = _check_is_tensor(node, module)
        is_quantized_tensor = _check_is_quantized_tensor(node, module)
        if is_tensor or is_quantized_tensor:
            ...

def _check_is_tensor(node, module):
    if not isinstance(module, torch.Tensor):
        return False
    if str(node.output().type()) not in ("Tensor", "Optional[Tensor]"):
        raise TypeError(f'Type "{node.output().type()}" not supported')
    return True

Can anyone help me understand this?
try:
    results.append(context[i])
except ValueError:
    results.append(None)
This was the magic trick to continue when bias is None, because bias is an optional parameter.
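The pattern can be sketched in plain Python (simplified names, not the actual coremltools classes): a lookup that raises ValueError for unbound names is wrapped so that missing optional inputs, such as a bias of None, come back as None instead of aborting the conversion.

```python
class Context:
    """Toy stand-in for the TorchScript-to-MIL name context."""

    def __init__(self, bindings):
        self._bindings = bindings

    def __getitem__(self, torch_name):
        if torch_name in self._bindings:
            return self._bindings[torch_name]
        raise ValueError(f"Torch var {torch_name} not found in context")


def get_bindings(context, names):
    results = []
    for name in names:
        try:
            results.append(context[name])
        except ValueError:
            # Optional graph inputs (e.g. a conv bias of None) have no
            # binding; map them to None instead of failing the conversion.
            results.append(None)
    return results


ctx = Context({"input": "x", "weight": "W"})
print(get_bindings(ctx, ["input", "weight", "bias"]))  # ['x', 'W', None]
```

One caveat of this catch-all approach is that a genuinely mis-wired input would also silently become None, so the op implementation still has to validate which of its inputs are allowed to be optional.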
@register_torch_op(torch_alias=["conv2d", "convolution"])
# NOTE: This function is also an alias of: ["conv_transpose1d", "conv_transpose2d", "conv_transpose3d"] but we lack tests for those
Unfortunately, I failed to add tests for conv_transposeXd because it asks for output_size as an input in the computational graph, which I don't know how to solve. The function should support the operation, though.
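For context on why output_size ends up in the graph: a transposed convolution's output length is ambiguous when stride > 1, so nn.ConvTranspose1d.forward takes an optional output_size argument to disambiguate, and under scripting that optional argument survives into the graph. A small sketch (channel counts and lengths chosen arbitrarily):

```python
import torch
import torch.nn as nn

deconv = nn.ConvTranspose1d(in_channels=3, out_channels=3, kernel_size=3, stride=2)
x = torch.rand(1, 3, 7)

# Default output length: (7 - 1) * 2 + (3 - 1) + 1 = 15
y = deconv(x)
print(y.shape)  # torch.Size([1, 3, 15])

# With stride=2, length 16 is an equally valid inverse of a stride-2 conv;
# output_size selects it (internally via output_padding).
y16 = deconv(x, output_size=[16])
print(y16.shape)  # torch.Size([1, 3, 16])
```

Since tracing never surfaces this optional argument but scripting does, a scripted conv_transpose test has to account for output_size in the captured graph, which matches the difficulty described above.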
When trying to convert a Torch Conv1d layer, I get a "PyTorch convert function for op 'conv1d' not implemented" error. Although the op itself seems to be supported, the aliases aren't registered, hence the error.

A similar situation happens with conv_transpose: it also seems to be supported but fails for the same reason. In this case, though, some code had to be added to match the order of inputs used by Torch.

I hope this helps to improve the code.

Note: I'm using a torch.script model.

This should also close #1753