Reversible Deep Neural Network Watermarking: Matching the Floating-point Weights

05/29/2023
by Junren Qin, et al.

Static deep neural network (DNN) watermarking embeds watermarks into the weights of a DNN model by irreversible methods, which causes permanent damage to the watermarked model and cannot meet the requirements of integrity authentication. For these reasons, reversible data hiding (RDH) appears more attractive for the copyright protection of DNNs. This paper proposes a novel RDH-based static DNN watermarking method that improves on the non-reversible quantization index modulation (QIM). Targeting the floating-point weights of DNNs, the core idea of our RDH method is to add a scaled quantization error back to the cover object, so that the original weights can be restored exactly after watermark extraction. Two schemes are designed to realize the integrity protection and legitimate authentication of DNNs. Simulation results on training loss and classification accuracy demonstrate the superior feasibility, effectiveness and adaptability of the proposed method compared with histogram shifting (HS).
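To make the "add a scaled quantization error back" idea concrete, the following Python sketch shows one way a reversible variant of QIM on floating-point weights could work. It is only an illustration under stated assumptions, not the paper's construction: the step size DELTA, the error-scaling factor ALPHA and all function names are hypothetical choices.

```python
import numpy as np

# Illustrative sketch of reversible QIM on a floating-point weight.
# NOT the paper's exact method: DELTA, ALPHA and the function names are assumptions.

DELTA = 0.01   # quantization step (assumed)
ALPHA = 0.25   # error scale; must be < 0.5 so the embedded bit still decodes (assumed)

def _lattice(w, bit):
    """Nearest point of the dithered lattice used for `bit` (standard binary QIM)."""
    dither = 0.0 if bit == 0 else DELTA / 2.0
    return np.round((w - dither) / DELTA) * DELTA + dither

def embed(w, bit):
    """Quantize to the bit's lattice, then add the scaled quantization error back."""
    q = _lattice(w, bit)
    e = w - q                   # quantization error, roughly in [-DELTA/2, DELTA/2]
    return q + ALPHA * e        # watermarked weight stays closest to the correct lattice point

def extract_and_restore(w_marked):
    """Decode the bit from the nearest lattice point, then undo the scaled error."""
    q0, q1 = _lattice(w_marked, 0), _lattice(w_marked, 1)
    bit = 0 if abs(w_marked - q0) <= abs(w_marked - q1) else 1
    q = q0 if bit == 0 else q1
    w_original = q + (w_marked - q) / ALPHA   # invert the scaling -> reversibility
    return bit, w_original

# Quick self-check on a single weight
w = 0.12345
wm = embed(w, 1)
bit, w_rec = extract_and_restore(wm)
assert bit == 1 and abs(w_rec - w) < 1e-12
```

Because ALPHA is below 1/2, the watermarked weight remains nearer to its own lattice point than to any point of the other lattice, so the bit decodes correctly; dividing the residual by ALPHA then recovers the original quantization error up to floating-point rounding, which is what makes the embedding reversible.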
