What does "lossless coding" mean in Chinese?

lossless coding - explanation
無損失碼 (lossless code)
  1. The high entropy of the real and imaginary parts of SAR raw data makes lossless-coding compression techniques unsuitable for SAR raw data. Chapter 4 analyzes and discusses several compression algorithms for SAR raw data, such as BAQ, UPQ, BAVQ, and wavelet-transform subband coding, and puts forward an improved unrestricted polar quantizer (UPQ) that enhances quantizer performance.

    Chapter 4 analyzes and studies the block adaptive quantization (BAQ), unrestricted polar quantization (UPQ), block adaptive vector quantization (BAVQ), and wavelet-transform subband coding algorithms, and discusses their application in engineering practice in detail.
  2. Based on the statistical properties of the coefficients in all sub-bands, this paper gives a lossless image coding scheme combining DPCM, Huffman coding, and run-length coding. The scheme reduces computational complexity and works efficiently.

    For the statistical characteristics of the sub-band coefficients after integer wavelet decomposition of an image, a lossless image coding method combining DPCM, Huffman coding, and run-length coding is proposed; the method is computationally simple, fast, and easy to implement in hardware.
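    The pipeline named in this example (prediction, then coding of the residuals) can be illustrated with a minimal sketch; this is not the cited implementation, and a real coder would also Huffman-code the run/value pairs:

      import itertools

      def dpcm_encode(samples):
          # Predict each sample from the previous one; keep only the residual.
          prev, residuals = 0, []
          for s in samples:
              residuals.append(s - prev)
              prev = s
          return residuals

      def dpcm_decode(residuals):
          prev, samples = 0, []
          for r in residuals:
              prev += r
              samples.append(prev)
          return samples

      def run_length_encode(values):
          # Collapse runs of equal values into (value, count) pairs.
          return [(v, len(list(g))) for v, g in itertools.groupby(values)]

      data = [10, 10, 10, 12, 15, 15, 15, 15, 14]
      residuals = dpcm_encode(data)
      packed = run_length_encode(residuals)      # e.g. (0, 3) for a run of three zero residuals
      assert dpcm_decode(residuals) == data      # the round trip is exact, i.e. lossless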
  3. Drawing on the theory of lossless grayscale image compression using the integer DCT transform, the dissertation combines it with a reversible integer color-space transform and reversible integer DPCM prediction, adopts Huffman coding, and implements, in Visual C++, lossless compression of both grayscale and color images.

    Drawing on the theoretical results of lossless grayscale image compression based on the integer DCT transform, this thesis combines them with a reversible integer color-space transform and reversible integer DPCM prediction, adopts Huffman coding, and implements lossless compression from grayscale to color images in VC (Visual C++) programs.
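    The example mentions a reversible integer color-space transform without spelling it out; as an assumption, the sketch below uses the well-known reversible color transform (RCT) of the kind used in lossless JPEG 2000, which maps RGB to luminance/chrominance components with exact integer inversion:

      def rct_forward(r, g, b):
          # Forward reversible color transform: integer arithmetic only.
          y  = (r + 2 * g + b) // 4
          cb = b - g
          cr = r - g
          return y, cb, cr

      def rct_inverse(y, cb, cr):
          # Exact inverse thanks to the matching floor division.
          g = y - (cb + cr) // 4
          r = cr + g
          b = cb + g
          return r, g, b

      for rgb in [(255, 0, 0), (12, 200, 31), (0, 0, 0), (255, 255, 255)]:
          assert rct_inverse(*rct_forward(*rgb)) == rgb   # lossless round trip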
  4. The adaptation processing includes adaptation of the linear prediction coefficients and of the quantization step size for the residual signals. Based on G.726, we adopt a Huffman coder to exploit the probability statistics of the bit cascade covering every n (n > 1) samples generated by ADPCM, in order to further reduce the bit rate. Since Huffman coding is lossless entropy coding, the speech quality of our improved algorithm is the same as that of the G.726 standard.

    Our research and improvements include: studying an optimal non-uniform adaptive quantizer and its adaptation algorithm; studying the waveform prediction function and the adaptation algorithms for its zeros and poles; and, based on the probability statistics of the symbols corresponding to every n (n > 1) samples, applying Huffman coding to the quantized prediction residuals to further reduce the bit rate.
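    As a sketch of the entropy-coding step described here, the following builds a Huffman code from the frequencies of quantized residual symbols and encodes them; the symbol values and counts are purely illustrative, not taken from the cited work:

      import heapq
      from collections import Counter

      def huffman_code(counts):
          # Build a prefix code by repeatedly merging the two least frequent subtrees.
          heap = [[freq, i, [sym, ""]] for i, (sym, freq) in enumerate(counts.items())]
          heapq.heapify(heap)
          if len(heap) == 1:                                   # degenerate one-symbol input
              return {heap[0][2][0]: "0"}
          while len(heap) > 1:
              lo, hi = heapq.heappop(heap), heapq.heappop(heap)
              for pair in lo[2:]:
                  pair[1] = "0" + pair[1]
              for pair in hi[2:]:
                  pair[1] = "1" + pair[1]
              heapq.heappush(heap, [lo[0] + hi[0], lo[1], *lo[2:], *hi[2:]])
          return {sym: code for sym, code in heap[0][2:]}

      residual_symbols = [0, 0, 0, 1, -1, 0, 2, 0, -1, 0, 1, 0]   # hypothetical ADPCM residuals
      codebook = huffman_code(Counter(residual_symbols))
      bitstream = "".join(codebook[s] for s in residual_symbols)  # frequent symbols get short codes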
  5. Research on a lossless image compression method based on adaptive bit-level arithmetic coding

    Research on a lossless image compression algorithm based on binary adaptive arithmetic coding
  6. In this paper, Chapter 1 gives a comprehensive introduction to digital image compression, including its current status, technical standards, and classification worldwide. Chapter 2 briefly introduces the idea and procedure of vector quantization, and describes the LBG algorithm and vector quantization based on the SOFM neural network. Chapter 3 discusses predictive coding in its lossy and lossless aspects, analyzes adaptive predictive coding based on the BP neural network, and introduces the evaluation of neural-network algorithms in image compression. Chapter 4 discusses the applications of mathematical transforms in image compression, carries out related experiments, and analyzes strategies for image coding in the transform domain. In Chapter 5, images are decomposed and represented by the wavelet transform; the characteristics and effects of wavelet functions in image compression are discussed, and the wavelet coefficients of the decomposed images are analyzed. Based on the theory and analysis of the preceding chapters, the paper presents an image compression scheme and gives results. The test results show that the scheme is practical, adapts to the local content of images to remove redundancy, and therefore achieves satisfactory compression results.

    The scheme first uses the multiresolution property of the wavelet transform to decompose the image, statistically analyzes the wavelet coefficients of each sub-image after decomposition, and, according to the characteristics of each sub-image's coefficients, applies a different compression method to each: adaptive predictive coding based on a neural network for the low-frequency sub-image, and vector quantization coding based on a neural network for the high-frequency sub-images, thereby compressing the image data. Chapter 1 of this thesis introduces the current state of digital image compression at home and abroad, together with its technical standards and classification. Chapter 2 introduces the mathematical idea and procedure of vector quantization for digital images, and describes and analyzes the LBG algorithm and vector quantization based on the SOFM neural network.
  7. Neural networks are used more often in lossy data coding than in general lossless data coding, because standard neural networks must be trained off-line and are too slow to be practical. In this thesis, statistical language models based on maximum entropy and neural networks are discussed in detail, and an arithmetic coding algorithm based on maximum entropy and neural networks is then proposed.

    Traditional artificial-neural-network data coding algorithms require off-line training and code slowly, so they are mostly used in special-purpose lossy coding fields such as audio and image coding, and rarely in lossless data coding. In view of this, this thesis studies in detail the respective characteristics of the maximum-entropy statistical language model and of neural-network algorithms, and on that basis proposes an arithmetic coding method based on neural networks and the maximum-entropy principle; it is an adaptive, online-learning algorithm with a compact network structure.
  8. Adaptive predictive coding based on image segmentation for lossless compression of ultrasonic well-logging images

    Lossless compression coding of ultrasonic well-logging images based on block-wise adaptive prediction
  9. (1) The lowest-frequency sub-image (LL4) is given lossless compression coding; (2) the highest-frequency diagonal sub-image (HH1) is discarded and not coded, because its coefficients are very likely to be zero and it has little effect on visual quality.

    This mainly includes coding the lowest-frequency sub-band separately and losslessly, and discarding the highest-frequency diagonal sub-band without coding it. The remaining sub-bands are each allocated different numbers of bits according to their visual characteristics, quantized with zerotree quantization accordingly, and finally run-length coded.
  10. A selection method for SERM factorizations of linear transforms is presented. It is found that near-optimal results are almost everywhere, and that when the factorization error is small, the closer the permutation matrices, the closer the results. Based on this fact, a local-search-based near-optimal factorization method is proposed, which can obtain near-optimal results. Moreover, the method converges very fast and usually obtains useful results within a very limited number of iterations. The selected factorizations were tested in lossless image coding, and good results were obtained.

    Extensive experiments show that near-optimal SERM factorization results exist in large numbers and are widely scattered; moreover, when the error measure of a factorization is small, factorizations with similar permutation matrices have similar errors. Based on this observation, a near-optimal SERM factorization method based on local search is given, which yields results very close to the optimal factorization.
  11. This paper mainly implements the integer wavelet transform using the lifting scheme, then studies and compares several embedded coding algorithms. Based on an analysis of the image coefficients, we propose an improved EZW (embedded zerotree wavelet) algorithm, aimed at increasing the compression ratio in lossless image compression and the subjective quality in lossy image compression.

    This paper mainly uses the lifting scheme to implement the integer wavelet transform of images, then studies and compares several classical embedded image coding algorithms; after a statistical analysis of the wavelet coefficients of transformed images, an improved embedded image compression coding algorithm based on EZW (embedded zerotree wavelets) is proposed.
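    The lifting scheme mentioned here can be illustrated with the simplest integer wavelet, the S (integer Haar) transform; this is only a one-level sketch under that assumption, not the wavelet actually used in the cited work:

      def s_transform_forward(x):
          # One level of the integer Haar (S) transform on an even-length sequence.
          low, high = [], []
          for i in range(0, len(x), 2):
              d = x[i + 1] - x[i]          # detail (high-pass) coefficient
              a = x[i] + (d // 2)          # integer average (low-pass) coefficient
              high.append(d)
              low.append(a)
          return low, high

      def s_transform_inverse(low, high):
          x = []
          for a, d in zip(low, high):
              first = a - (d // 2)
              x.extend([first, first + d])
          return x

      signal = [5, 7, 3, 4, 10, 10, 2, 9]
      low, high = s_transform_forward(signal)
      assert s_transform_inverse(low, high) == signal   # integer-to-integer, perfectly invertible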
  12. Third, the quantization of wavelet coefficients based on the embedded zerotree wavelet (EZW) algorithm is discussed and some improvements are made to it. Finally, this paper puts forward a scalable image coding method based on the integer wavelet transform (IWT) and the improved zerotree quantization, which realizes lossless-to-lossy image compression with simpler computation and higher-quality reconstructed images than coding methods based on the traditional wavelet transform. The experimental results are satisfactory.

    On this basis, a scalable image coding method based on the integer wavelet transform and an improved zerotree quantization method is proposed. The method can compress images anywhere from completely lossless to lossy; compared with methods based on the traditional wavelet transform and zerotree quantization, it is computationally simple, fast, and yields high-quality reconstructed images, with satisfactory results.
  13. Information technology - Lossy/lossless coding of bi-level images

    Information technology - lossy and lossless coding of bi-level images
  14. Lossless and lossy compression, embedded lossy-to-lossless coding, progressive transmission by pixel accuracy and by resolution, robustness to bit errors, and region-of-interest coding are some of its representative features.

    For entropy coding, JPEG 2000 adopts context-based adaptive binary arithmetic coding and achieves high error resilience through techniques such as using the code block as the coding unit.
  15. HLZ: an adaptive lossless coding algorithm with a hybrid dictionary

    An adaptive lossless coding algorithm using a hybrid dictionary
  16. Therefore, the main subject of this paper is to design a universal, low-complexity, lossless and near-lossless compression algorithm for biomedical signals. With a series of techniques, including context modeling, adaptive prediction, and Golomb coding, our algorithm obtains satisfactory results on various kinds of biomedical signals with low implementation complexity.

    From the above considerations, this work designs a universal, low-complexity lossless and near-lossless compression algorithm for biomedical signals. By adopting a series of techniques such as context modeling, adaptive prediction, and Golomb coding, the compression algorithm achieves good compression on all kinds of biomedical signals and meets the design requirements of universality and low complexity.
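    As an illustration of the Golomb coding step named here, the sketch below uses Golomb-Rice codes (the power-of-two special case of Golomb coding) on zig-zag-mapped prediction residuals; the residual values and the parameter k are invented for the example:

      def zigzag(residual):
          # Map signed residuals to non-negative integers: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4.
          return (residual << 1) if residual >= 0 else ((-residual << 1) - 1)

      def rice_encode(value, k):
          # Golomb-Rice code (k >= 1): unary-coded quotient, then k-bit binary remainder.
          q, r = value >> k, value & ((1 << k) - 1)
          return "1" * q + "0" + format(r, "b").zfill(k)

      residuals = [0, -1, 2, 0, 5, -3]       # hypothetical prediction residuals
      k = 2                                  # illustrative Rice parameter
      bitstream = "".join(rice_encode(zigzag(r), k) for r in residuals)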
  17. Text is a very common kind of resource in digital libraries, and lossless techniques play an important role in compressing text. Starting from Shannon's entropy theory, we analyze lossless compression algorithms and implement an arithmetic coding algorithm in C. In the experiments, we compare four different lossless compression algorithms in terms of compression ratio, the trend of the compression ratio with data length, stability, and complexity, using 35 groups of data series of four different lengths.

    Starting from Shannon's entropy theorem in information theory, this paper systematically analyzes lossless compression techniques, implements the arithmetic coding algorithm in the C language, uses it to compress 35 groups of data series of four different lengths, and gives the experimental results; it then analyzes and compares it with three other compression algorithms, LZW, LZ77, and RLE, in four respects: compression ratio, the trend of the compression ratio with string length, algorithm stability, and algorithm complexity.
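    Since this example starts from Shannon's entropy theory, a small sketch of the underlying quantity may help: the empirical entropy of a string gives the lower bound, in bits per symbol, that any lossless code can reach on average for a memoryless source; the sample string is only an illustration:

      import math
      from collections import Counter

      def shannon_entropy(text):
          # H = -sum p_i * log2(p_i), in bits per symbol.
          counts, total = Counter(text), len(text)
          return -sum((c / total) * math.log2(c / total) for c in counts.values())

      sample = "abracadabra"
      h = shannon_entropy(sample)              # about 2.04 bits per symbol
      lower_bound = h * len(sample)            # no lossless code beats this on average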
  18. Lossless still-image compression using the integer wavelet transform and improved SPIHT coding

    An image coding algorithm based on the integer wavelet transform
  19. This paper compares the set of features offered by JPEG 2000 with the current still-image compression and coding standards. The study concentrates on aspects such as functionality, lossy and lossless compression efficiency, region-of-interest coding, error resilience, and complexity. From this, a conclusion on how to choose a compression and coding standard is drawn.

    Using implementations including the J2K model, this paper also analyzes and compares JPEG 2000 with the existing still-image compression and coding standards, contrasting their similarities and differences in the functions provided, lossy and lossless compression efficiency, ROI coding, error recovery, and complexity, and draws a conclusion on how to choose a compression coding standard.