Questions tagged [snappy]

Snappy is a compression algorithm for byte streams and a library implementing this algorithm. The standard distribution includes bindings for C and C++; there are third-party bindings for many other languages.

Snappy does not aim for maximum compression, or compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more.

Snappy is widely used inside Google, in everything from BigTable and MapReduce to its internal RPC systems.
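For orientation, here is a minimal sketch of round-tripping a byte string through the python-snappy binding (assuming the `python-snappy` package is installed; it is imported as `snappy`):

```python
# Minimal round trip through the python-snappy binding (pip install python-snappy).
import snappy

original = b"Snappy trades compression ratio for speed." * 100
compressed = snappy.compress(original)      # raw "block" format
restored = snappy.decompress(compressed)

assert restored == original
print(len(original), "->", len(compressed), "bytes")
```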

366 questions
9 votes · 1 answer

module 'snappy' has no attribute 'decompress'

I'm trying to use kafka-python. It requires Snappy to be installed, so I installed it with pip install snappy and with pip install python_snappy-0.5.2-cp36-cp36m-win_amd64.whl. Both ways, Snappy installs successfully, but in both cases, when I try to run python…
GihanDB
  • 591
  • 2
  • 6
  • 23
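A common cause of the error above is that the PyPI package named `snappy` is an unrelated project, while the Snappy binding is published as `python-snappy` (and is imported as `snappy`). A minimal sanity check, assuming that diagnosis:

```python
# Check whether the importable "snappy" module is actually the python-snappy binding.
import snappy

if not hasattr(snappy, "decompress"):
    raise ImportError(
        "this 'snappy' module is not python-snappy; "
        "try: pip uninstall snappy && pip install python-snappy"
    )

print(snappy.decompress(snappy.compress(b"hello")))  # b'hello' once the binding is in place
```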
9 votes · 2 answers

Snappy & wkhtmltopdf : page numbering in footer

I would like to have the page number in the footer of every page generated with Snappy and wkhtmltopdf, but I haven't found any clue about it. I can set footer text (with the 'footer-center' option), but how do I put in the page number?
qdelettre
  • 1,873
  • 5
  • 25
  • 36
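The question is about the PHP Snappy wrapper, but the page-numbering behaviour comes from wkhtmltopdf itself, which substitutes tokens such as [page] and [topage] in its footer options. As a language-neutral illustration, here is a sketch that calls wkhtmltopdf directly (assuming it is on PATH; the URL and output name are placeholders); the same option string can be handed to the wrapper:

```python
# Sketch: pass a footer-center template to wkhtmltopdf; [page] and [topage] are
# replaced on each page as it is rendered.
import subprocess

subprocess.run(
    [
        "wkhtmltopdf",
        "--footer-center", "Page [page] of [topage]",
        "https://example.com",        # placeholder input
        "out.pdf",                    # placeholder output
    ],
    check=True,
)
```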
8 votes · 1 answer

'remote write receiver' HTTP API request in Prometheus

I am trying to find a working example of how to use the remote write receiver in Prometheus. Link: https://prometheus.io/docs/prometheus/latest/querying/api/#remote-write-receiver I am able to send a request to the endpoint (POST /api/v1/write)…
amolgautam
  • 897
  • 10
  • 20
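Remote write expects a snappy-compressed (block format, not the framed stream format) protobuf WriteRequest body with specific headers, and the receiver has to be enabled on the Prometheus side via its remote-write-receiver feature flag. Below is a sketch of the HTTP half only; the serialized protobuf is left as a placeholder because building it requires the Prometheus protobuf definitions:

```python
# Transport half of a remote-write request; write_request_bytes is assumed to be
# a serialized prometheus.WriteRequest protobuf message (not constructed here).
import requests
import snappy  # python-snappy

write_request_bytes = b"..."  # placeholder

resp = requests.post(
    "http://localhost:9090/api/v1/write",
    data=snappy.compress(write_request_bytes),  # block compression, not framed
    headers={
        "Content-Encoding": "snappy",
        "Content-Type": "application/x-protobuf",
        "X-Prometheus-Remote-Write-Version": "0.1.0",
    },
    timeout=10,
)
resp.raise_for_status()
```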
8 votes · 2 answers

R arrow: Error: Support for codec 'snappy' not built

I have been using the latest R arrow package (arrow_2.0.0.20201106) that supports reading and writing from AWS S3 directly (which is awesome). I don't seem to have issues when I write and read my own file (see below): write_parquet(iris,…
Mike.Gahan
  • 4,565
  • 23
  • 39
8 votes · 1 answer

error with snappy while importing fastparquet in python

I have installed the following modules on my EC2 server, which already has Python (3.6) and Anaconda installed: snappy, pyarrow, s3fs, fastparquet. Everything except fastparquet imports fine. When I try to import fastparquet it…
stormfield
  • 1,696
  • 1
  • 14
  • 26
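In setups like the one above, the fastparquet import usually fails because the snappy codec binding is missing or broken rather than because of fastparquet itself; at the time of that question fastparquet relied on python-snappy (newer releases use a different codec backend). A sketch of reading a snappy-compressed Parquet file once the binding imports cleanly; the file name is illustrative:

```python
# Assumes python-snappy is installed; fastparquet needs it only when column
# chunks are snappy-compressed.
import snappy                      # fails fast if the binding is missing or broken
from fastparquet import ParquetFile

pf = ParquetFile("data.snappy.parquet")   # illustrative path
df = pf.to_pandas()
print(df.head())
```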
7 votes · 3 answers

How can I open a .snappy.parquet file in python?

How can I open a .snappy.parquet file in Python 3.5? So far I have used this code: import numpy import pyarrow filename = "/Users/T/Desktop/data.snappy.parquet" df = pyarrow.parquet.read_table(filename).to_pandas() But it gives this…
user9439906
  • 433
  • 2
  • 7
  • 17
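The snag in the excerpt above is that `import pyarrow` alone does not make the `pyarrow.parquet` submodule available; it has to be imported explicitly (pyarrow wheels are typically built with snappy support, so no extra codec install is needed). A minimal sketch using the path from the question:

```python
# The parquet submodule must be imported explicitly; snappy decoding is handled
# by pyarrow itself.
import pyarrow.parquet as pq

filename = "/Users/T/Desktop/data.snappy.parquet"
df = pq.read_table(filename).to_pandas()
print(df.head())
```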
7 votes · 1 answer

How to decompress mongo journal files

From what I have explored, journal files created by MongoDB are compressed using the snappy compression algorithm, but I am not able to decompress such a journal file. Trying to decompress it gives an error: Error stream missing snappy…
stackMonk
  • 1,033
  • 17
  • 33
7 votes · 3 answers

spark returns error libsnappyjava.so: failed to map segment from shared object: Operation not permitted

I have just extracted and set up Spark 1.6.0 in an environment that has a fresh install of Hadoop 2.6.0 and Hive 0.14. I have verified that Hive, Beeline, and MapReduce work fine on examples. However, as soon as I run sc.textfile() within spark-shell,…
paolov
  • 2,139
  • 1
  • 34
  • 43
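This "failed to map segment" error is commonly reported when snappy-java extracts its native library into a temp directory mounted noexec. One workaround, sketched here in PySpark form (the question itself uses spark-shell), is to point the extraction at an executable directory via snappy-java's org.xerial.snappy.tempdir system property; the path below is an assumption:

```python
# Sketch: redirect snappy-java's native-library extraction away from a noexec /tmp.
from pyspark.sql import SparkSession

tempdir_opt = "-Dorg.xerial.snappy.tempdir=/home/me/tmp"   # any writable, executable dir

spark = (
    SparkSession.builder
    .config("spark.driver.extraJavaOptions", tempdir_opt)
    .config("spark.executor.extraJavaOptions", tempdir_opt)
    .getOrCreate()
)
```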
7 votes · 6 answers

hadoop mapreduce: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z

I am trying to write a snappy block-compressed sequence file from a map-reduce job. I am using hadoop 2.0.0-cdh4.5.0 and snappy-java 1.0.4.1. Here is my code: package jinvestor.jhouse.mr; import java.io.ByteArrayOutputStream; import…
msknapp
  • 1,595
  • 7
  • 22
  • 39
7 votes · 2 answers

cassandra 1.2 fails to init snappy in freebsd

ERROR [WRITE-/10.10.35.30] 2013-06-19 23:15:56,907 CassandraDaemon.java (line 175) Exception in thread Thread[WRITE-/10.10.35.30,5,main] java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy at…
sushil
  • 165
  • 1
  • 9
6 votes · 1 answer

org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] no native library is found for os.name=Mac and os.arch=aarch64

I'm building a CDC pipeline to read the MySQL binlog through Maxwell and put the events into Kafka; my compression type is snappy in the Maxwell config. But at the consumer end, in my Spring project, I'm getting this error: org.xerial.snappy.SnappyError:…
jss
  • 199
  • 2
  • 13
6 votes · 1 answer

About a java.lang.NoClassDefFoundError: Could not initialize class org.xerial.snappy.Snappy

I get an error when I try to compile, test, and run a JUnit test. I want to load a local Avro file using DataFrames but I am getting an exception: org.xerial.snappy.SnappyError: [FAILED_TO_LOAD_NATIVE_LIBRARY] null. I am not using Cassandra at all,…
aironman
  • 837
  • 5
  • 26
  • 55
6 votes · 1 answer

Why is querying Parquet files slower than text files in Hive?

I decided to use Parquet as the storage format for Hive tables, and before actually implementing it in my cluster, I decided to run some tests. Surprisingly, Parquet was slower in my tests, contrary to the general notion that it is faster than plain text…
Rahul
  • 2,354
  • 3
  • 21
  • 30
6 votes · 4 answers

pyspark how to load compressed snappy file

I have compressed a file using python-snappy and put it in my HDFS store. I am now trying to read it in like so, but I get the following traceback. I can't find an example of how to read the file in so that I can process it. I can read the text file…
Levi Pierce
  • 63
  • 1
  • 1
  • 4
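A file written with python-snappy is in snappy's own block (or framed) format, not the Hadoop snappy-codec container that Spark's text readers expect, so one workaround is to read the raw bytes and decompress them inside the job. A sketch assuming the files were written with snappy.compress and are small enough to decompress per file; the HDFS path is illustrative:

```python
# Workaround: read whole files as bytes and decompress with python-snappy.
import snappy
from pyspark import SparkContext

sc = SparkContext(appName="read-python-snappy")

lines = (
    sc.binaryFiles("hdfs:///data/*.snappy")        # illustrative path
      .mapValues(snappy.decompress)                # bytes -> decompressed bytes
      .flatMap(lambda kv: kv[1].decode("utf-8").splitlines())
)
print(lines.take(5))
```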
6 votes · 0 answers

Hadoop native library and snappy not loaded

I'm trying to enable the Hadoop native library and the snappy library for compression in Hadoop 2.2.0, but I always end up with: ./hadoop/bin/hadoop checknative -a Native library checking: hadoop: false zlib: false snappy: false lz4: …
yvesonline
  • 4,609
  • 2
  • 21
  • 32