深圳大学论坛

Title: Compression with Neural Networks (Neural Network Compression Algorithms)

Author: 深大选课指南    Time: 2017-8-16 14:23
Title: Compression with Neural Networks (Neural Network Compression Algorithms)
This paper presents a set of full-resolution lossy image compression methods based on neural networks. Each of the architectures we describe can provide variable compression rates during deployment without requiring retraining of the network: each network need only be trained once. All of our architectures consist of a recurrent neural network (RNN)-based encoder and decoder, a binarizer, and a neural network for entropy coding. We compare RNN types (LSTM, associative LSTM) and introduce a new hybrid of GRU and ResNet. We also study "one-shot" versus additive reconstruction architectures and introduce a new scaled-additive framework. We compare to previous work, showing improvements of 4.3%-8.8% AUC (area under the rate-distortion curve), depending on the perceptual metric used. As far as we know, this is the first neural network architecture that is able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset images, with and without the aid of entropy coding.
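
To make the idea concrete, below is a minimal, framework-free Python sketch of the iterative encode/binarize/decode loop the abstract describes. The linear "RNN" cells, the fixed random weights, and the gain constant are hypothetical stand-ins for the paper's trained LSTM / associative-LSTM / GRU-ResNet layers; the point is the control flow that lets a single trained network serve many bitrates simply by running more or fewer iterations.

[code]
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes: an 8x8 grayscale patch flattened to 64 values, compressed to
# 8 bits per iteration. All weights are random stand-ins for the learned
# encoder/decoder RNNs in the paper.
PATCH, CODE, HIDDEN = 64, 8, 32

W_enc_in  = rng.normal(0, 0.1, (HIDDEN, PATCH))
W_enc_rec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_enc_out = rng.normal(0, 0.1, (CODE, HIDDEN))
W_dec_in  = rng.normal(0, 0.1, (HIDDEN, CODE))
W_dec_rec = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
W_dec_out = rng.normal(0, 0.1, (PATCH, HIDDEN))

def binarize(codes):
    # Deterministic sign binarizer producing {-1, +1}; the paper trains a
    # stochastic variant so gradients can pass through this step.
    return np.where(codes >= 0.0, 1.0, -1.0)

def encode_step(residual, h):
    h = np.tanh(W_enc_in @ residual + W_enc_rec @ h)
    return binarize(np.tanh(W_enc_out @ h)), h

def decode_step(bits, h):
    h = np.tanh(W_dec_in @ bits + W_dec_rec @ h)
    return np.tanh(W_dec_out @ h), h

def compress(patch, iterations, mode="additive", gain=0.7):
    """Iterative encode/binarize/decode loop.

    `mode` mimics the reconstruction variants named in the abstract:
      "one-shot"        - each decoder output is the full reconstruction
      "additive"        - decoder outputs are summed
      "scaled-additive" - decoder outputs are summed with a gain
                          (a fixed constant here; learned in the paper)
    """
    h_enc, h_dec = np.zeros(HIDDEN), np.zeros(HIDDEN)
    recon, residual = np.zeros(PATCH), patch.copy()
    sent = []
    for _ in range(iterations):
        bits, h_enc = encode_step(residual, h_enc)
        sent.append(bits)                      # CODE bits transmitted
        out, h_dec = decode_step(bits, h_dec)
        if mode == "one-shot":
            recon = out
        elif mode == "additive":
            recon = recon + out
        else:                                  # "scaled-additive"
            recon = recon + gain * out
        residual = patch - recon               # encode only what is still missing
    return recon, np.concatenate(sent)

# More iterations -> more bits -> (with trained weights) better quality.
# With the random weights above the errors are meaningless; only the
# control flow matters for this sketch.
patch = rng.uniform(-1.0, 1.0, PATCH)
for iters in (1, 4, 16):
    recon, bits = compress(patch, iters)
    print(f"{len(bits):4d} bits  MSE={np.mean((patch - recon) ** 2):.4f}")
[/code]

With trained weights, each extra iteration spends another CODE bits on whatever the previous iterations failed to reconstruct, which is what makes the rate variable at deployment time without retraining.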
TensorFlow: http://www.tensorflownews.com/


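The 4.3%-8.8% figures quoted above are improvements in AUC, the area under the rate-distortion curve (quality as a function of bits per pixel, measured with a perceptual metric such as MS-SSIM). As a rough illustration of that comparison, here is a small Python sketch computing the area with the trapezoidal rule; the (bpp, quality) values are invented for demonstration only and are not measurements from the paper.

[code]
import numpy as np

# Hypothetical (bits-per-pixel, quality) measurements for two codecs.
# These numbers are made up purely to show the computation.
bpp       = np.array([0.25, 0.50, 0.75, 1.00, 1.50, 2.00])
quality_a = np.array([0.880, 0.920, 0.940, 0.955, 0.970, 0.980])  # codec A
quality_b = np.array([0.860, 0.900, 0.930, 0.945, 0.965, 0.975])  # codec B

def rd_auc(bpp, quality):
    """Area under the rate-distortion curve (trapezoidal rule)."""
    order = np.argsort(bpp)                 # integrate over increasing bitrate
    x, y = bpp[order], quality[order]
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

auc_a, auc_b = rd_auc(bpp, quality_a), rd_auc(bpp, quality_b)
print(f"AUC A = {auc_a:.4f}, AUC B = {auc_b:.4f}, "
      f"relative gain = {(auc_a - auc_b) / auc_b:.1%}")
[/code]

A higher AUC means better quality on average across the bitrate range, which is how the paper summarizes an entire rate-distortion curve in a single number.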


https://arxiv.org/abs/1608.05148




