default_fused_wt_fake_quant
- torch.quantization.fake_quantize.default_fused_wt_fake_quant
alias of functools.partial(torch.ao.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize, observer=torch.ao.quantization.observer.MovingAverageMinMaxObserver, quant_min=-128, quant_max=127, dtype=torch.qint8, qscheme=torch.per_tensor_symmetric)
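A minimal sketch of how this alias can be used: calling it constructs a FusedMovingAvgObsFakeQuantize configured for symmetric per-tensor qint8 weight quantization, and it is typically supplied as the weight fake-quantize constructor of a QConfig for quantization-aware training. The pairing with default_fused_act_fake_quant below is illustrative, not prescribed by this entry.

```python
import torch
from torch.ao.quantization import QConfig
from torch.ao.quantization.fake_quantize import (
    default_fused_act_fake_quant,
    default_fused_wt_fake_quant,
)

# Calling the alias instantiates a FusedMovingAvgObsFakeQuantize with the
# settings listed above (quant_min=-128, quant_max=127, dtype=torch.qint8,
# qscheme=torch.per_tensor_symmetric, MovingAverageMinMaxObserver).
wt_fake_quant = default_fused_wt_fake_quant()
print(wt_fake_quant)

# Illustrative QConfig: this alias as the weight fake-quantize,
# default_fused_act_fake_quant for activations.
qconfig = QConfig(
    activation=default_fused_act_fake_quant,
    weight=default_fused_wt_fake_quant,
)
print(qconfig)
```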