Snappy, a fast compressor/decompressor.
Snappy is a compression/decompression library. It does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. (For more information, see “Performance”, below.)
Snappy has the following properties:

 * Fast: compression speeds of 250 MB/sec and beyond, with no assembler code. See “Performance” below.
 * Stable: over the last few years, Snappy has compressed and decompressed petabytes of data in Google's production environment. The Snappy bitstream format is stable and will not change between versions.
 * Robust: the Snappy decompressor is designed not to crash in the face of corrupted or malicious input.
 * Free and open source software: Snappy is licensed under a BSD-type license. For more information, see the included COPYING file.
Snappy has previously been called “Zippy” in some Google presentations and the like.
Snappy is intended to be fast. On a single core of a Core i7 processor in 64-bit mode, it compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more. (These numbers are for the slowest inputs in our benchmark suite; others are much faster.) In our tests, Snappy usually is faster than algorithms in the same class (e.g. LZO, LZF, QuickLZ, etc.) while achieving comparable compression ratios.
Typical compression ratios (based on the benchmark suite) are about 1.5-1.7x for plain text, about 2-4x for HTML, and of course 1.0x for JPEGs, PNGs and other already-compressed data. Similar numbers for zlib in its fastest mode are 2.6-2.8x, 3-7x and 1.0x, respectively. More sophisticated algorithms are capable of achieving yet higher compression rates, although usually at the expense of speed. Of course, compression ratio will vary significantly with the input.
Although Snappy should be fairly portable, it is primarily optimized for 64-bit x86-compatible processors, and may run slower in other environments. In particular:

 * Snappy uses 64-bit operations in several places to process more data at once than would otherwise be possible.
 * Snappy assumes unaligned 32- and 64-bit loads and stores are cheap. On some platforms, these have to be emulated with single-byte loads and stores, which is much slower.
 * Snappy assumes little-endian throughout, and needs to byte-swap data in several places if running on a big-endian platform.
Experience has shown that even heavily tuned code can be improved. Performance optimizations, whether for 64-bit x86 or other platforms, are of course most welcome; see “Contact”, below.
You need the CMake version specified in CMakeLists.txt or later to build:
git submodule update --init
mkdir build
cd build && cmake ../ && make
Note that Snappy, both the implementation and the main interface, is written in C++. However, several third-party bindings to other languages are available; see the home page for more information. Also, if you want to use Snappy from C code, you can use the included C bindings in snappy-c.h.
To use Snappy from your own C++ program, include the file “snappy.h” from your calling file, and link against the compiled library.
There are many ways to call Snappy, but the simplest possible is
snappy::Compress(input.data(), input.size(), &output);
and similarly
snappy::Uncompress(input.data(), input.size(), &output);
where “input” and “output” are both instances of std::string.
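A complete round trip through this string-based API might look like the following minimal sketch (assuming snappy.h is on the include path and the program is linked against the compiled Snappy library; the sample input is arbitrary):

#include <iostream>
#include <string>

#include "snappy.h"

int main() {
  // Arbitrary sample input; highly repetitive data compresses well.
  const std::string input(1000, 'a');

  // Compress into a std::string; the output is resized as needed.
  std::string compressed;
  snappy::Compress(input.data(), input.size(), &compressed);

  // Uncompress returns false if the compressed data is corrupted.
  std::string uncompressed;
  if (!snappy::Uncompress(compressed.data(), compressed.size(), &uncompressed) ||
      uncompressed != input) {
    std::cerr << "round trip failed\n";
    return 1;
  }

  std::cout << input.size() << " -> " << compressed.size() << " bytes\n";
  return 0;
}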
There are other interfaces that are more flexible in various ways, including support for custom (non-array) input sources. See the header file for more information.
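For instance, snappy.h also declares a lower-level interface that works on caller-allocated buffers. The sketch below uses those raw-buffer functions; error handling is kept intentionally minimal for illustration:

#include <string>
#include <vector>

#include "snappy.h"

// Compress and then decompress `input` with the raw-buffer interface,
// returning the recovered data (or an empty string on failure).
std::string RoundTripRaw(const std::string& input) {
  // MaxCompressedLength() bounds the worst-case compressed size.
  std::vector<char> compressed(snappy::MaxCompressedLength(input.size()));
  size_t compressed_length = 0;
  snappy::RawCompress(input.data(), input.size(), compressed.data(),
                      &compressed_length);

  // The compressed stream stores the uncompressed size up front.
  size_t uncompressed_length = 0;
  if (!snappy::GetUncompressedLength(compressed.data(), compressed_length,
                                     &uncompressed_length)) {
    return std::string();
  }

  std::string output(uncompressed_length, '\0');
  if (!snappy::RawUncompress(compressed.data(), compressed_length,
                             &output[0])) {
    return std::string();
  }
  return output;
}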
When you compile Snappy, the following binaries are compiled in addition to the library itself. You do not need them to use the compressor from your own library, but they are useful for Snappy development.
 * snappy_benchmark contains microbenchmarks used to tune compression and decompression performance.
 * snappy_unittests contains unit tests, verifying correctness on your machine in various scenarios.
 * snappy_test_tool can benchmark Snappy against a few other compression libraries (zlib, LZO, LZF, and QuickLZ), if they were detected at configure time. To benchmark using a given file, give the compression algorithm you want to test Snappy against (e.g. --zlib) and then a list of one or more file names on the command line.

If you want to change or optimize Snappy, please run the tests and benchmarks to verify you have not broken anything.
The testdata/ directory contains the files used by the microbenchmarks, which should provide a reasonably balanced starting point for benchmarking. (Note that baddata[1-3].snappy are not intended as benchmarks; they are used to verify correctness in the presence of corrupted data in the unit test.)
Snappy is distributed through GitHub. For the latest version and other information, see https://github.com/google/snappy.