From Wikipedia, the free encyclopedia
 


Apache Parquet

Initial release: 13 March 2013
Stable release: 2.9.0 / 6 October 2021[1]
Written in: Java (reference implementation)[2]
Operating system: Cross-platform
Type: Column-oriented DBMS
License: Apache License 2.0
Website: parquet.apache.org

Apache Parquet is a free and open-source column-oriented data storage format in the Apache Hadoop ecosystem. It is similar to RCFile and ORC, the other columnar-storage file formats in Hadoop, and is compatible with most of the data-processing frameworks around Hadoop. It provides efficient data compression and encoding schemes with enhanced performance for handling complex data in bulk.

History


The open-source project to build Apache Parquet began as a joint effort between Twitter[3] and Cloudera.[4] Parquet was designed as an improvement on the Trevni columnar storage format created by Doug Cutting, the creator of Hadoop. The first version, Apache Parquet 1.0, was released in July 2013. Since April 27, 2015, Apache Parquet has been a top-level Apache Software Foundation (ASF)-sponsored project.[5][6]

Features


Apache Parquet is implemented using the record-shredding and assembly algorithm,[7] which accommodates the complex data structures that can be used to store data.[8] The values in each column are stored in contiguous memory locations, providing the following benefits:[9]

- Column-wise compression is efficient and saves storage space
- Compression and encoding techniques specific to the type of data in each column can be applied
- Queries that fetch specific column values need not read the entire row, which improves performance
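The column-contiguous layout can be sketched in a few lines of Python. This is only a toy illustration of shredding row-oriented records into per-column value lists (the record names and values are invented), not Parquet's actual on-disk format, which also tracks repetition and definition levels for nested data:

```python
# Row-oriented records, as an application might produce them.
rows = [
    {"name": "a", "qty": 1},
    {"name": "b", "qty": 2},
    {"name": "a", "qty": 3},
]

# Shred the rows into one contiguous list per column, so all values of
# a column sit next to each other and can be encoded together.
columns = {key: [row[key] for row in rows] for key in rows[0]}

print(columns)  # {'name': ['a', 'b', 'a'], 'qty': [1, 2, 3]}
```

With the values of each column stored together, type-specific encoding and compression can then be applied per column.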

Apache Parquet is implemented using the Apache Thrift framework, which increases its flexibility; it can work with a number of programming languages such as C++, Java, Python, and PHP.[10]

As of August 2015,[11] Parquet supports big-data-processing frameworks including Apache Hive, Apache Drill, Apache Impala, Apache Crunch, Apache Pig, Cascading, Presto, and Apache Spark. It is also one of the external data formats supported by pandas, the Python data manipulation and analysis library.

Compression and encoding


In Parquet, compression is performed column by column, which enables different encoding schemes to be used for text and integer data. This strategy also keeps the door open for newer and better encoding schemes to be implemented as they are invented.
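As a rough sketch of the per-column idea, the following snippet compresses each column independently, using Python's zlib as a stand-in for a real columnar codec (the column names and data are invented for illustration). A repetitive text column and an integer column each get their own, independently compressed buffer:

```python
import struct
import zlib

columns = {
    "city": ["amsterdam"] * 500,   # highly repetitive text
    "temp": list(range(500)),      # small integers
}

# Compress each column independently. A real Parquet writer first applies
# a per-column encoding (dictionary, bit packing, RLE) and only then a
# general-purpose codec such as Snappy, gzip, or zstd.
compressed = {
    "city": zlib.compress("".join(columns["city"]).encode()),
    "temp": zlib.compress(struct.pack("<500i", *columns["temp"])),
}

for name, blob in compressed.items():
    print(name, len(blob))
```

Because each column is a separate buffer, a newer or better codec can be adopted for one column type without touching the others.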

Dictionary encoding

Parquet has automatic dictionary encoding, enabled dynamically for data with a small number of unique values (i.e., below 10⁵), which enables significant compression and boosts processing speed.[12]
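Dictionary encoding can be sketched in plain Python as follows. This is a simplified illustration of the technique; Parquet's actual dictionary pages and their layout are defined by its format specification:

```python
def dictionary_encode(values):
    """Replace each value with a small integer index into a dictionary
    of the distinct values, in first-seen order."""
    dictionary = []
    index_of = {}
    indices = []
    for v in values:
        if v not in index_of:
            index_of[v] = len(dictionary)
            dictionary.append(v)
        indices.append(index_of[v])
    return dictionary, indices

dictionary, indices = dictionary_encode(["red", "blue", "red", "red", "blue"])
print(dictionary)  # ['red', 'blue']
print(indices)     # [0, 1, 0, 0, 1]
```

The small integer indices are then much cheaper to store than the repeated values themselves, and they combine well with bit packing and run-length encoding.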

Bit packing


Integers are usually stored with a dedicated 32 or 64 bits each. For small integers, packing multiple integers into the same space makes storage more efficient.[12]
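The idea can be sketched in Python as follows. This is a simplified illustration; Parquet's actual bit-packed encoding is defined by its format specification:

```python
def bit_pack(values, width):
    """Pack non-negative integers that each fit in `width` bits into a
    compact little-endian byte buffer."""
    packed = 0
    for i, v in enumerate(values):
        packed |= v << (i * width)
    return packed.to_bytes((len(values) * width + 7) // 8, "little")

# Four values below 2**3 need only 3 bits each: 12 bits round up to
# 2 bytes, instead of 16 bytes as four ordinary 32-bit integers.
data = bit_pack([5, 1, 7, 2], width=3)
print(len(data))  # 2
```

Each value must fit in the chosen width; a real writer picks the width from the largest value (or dictionary index) in the column chunk.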

Run-length encoding (RLE)

To optimize storage of multiple occurrences of the same value, a single value is stored once along with the number of occurrences.[12]

Parquet implements a hybrid of bit packing and RLE, in which the encoding switches based on which produces the best compression results. This strategy works well for certain types of integer data and combines well with dictionary encoding.[12]
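Run-length encoding on its own can be sketched in Python as follows. This is a simplified illustration; Parquet's actual hybrid encoding interleaves RLE runs with bit-packed groups according to its format specification:

```python
from itertools import groupby

def rle_encode(values):
    """Store each run of identical values as a (value, count) pair."""
    return [(v, len(list(group))) for v, group in groupby(values)]

def rle_decode(pairs):
    """Expand (value, count) pairs back into the original sequence."""
    return [v for v, count in pairs for _ in range(count)]

runs = rle_encode([7, 7, 7, 7, 0, 0, 3])
print(runs)  # [(7, 4), (0, 2), (3, 1)]
assert rle_decode(runs) == [7, 7, 7, 7, 0, 0, 3]
```

Long runs compress to a single pair, while a hybrid scheme falls back to bit packing when values alternate too often for runs to pay off.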

Comparison


Apache Parquet is comparable to the RCFile and Optimized Row Columnar (ORC) file formats: all three fall under the category of columnar data storage within the Hadoop ecosystem. All three offer better compression and encoding, with improved read performance, at the cost of slower writes. In addition to these features, Apache Parquet supports limited schema evolution,[citation needed] i.e., the schema can be modified according to changes in the data. It also provides the ability to add new columns and to merge schemas that do not conflict.

Apache Arrow is designed as an in-memory complement to on-disk columnar formats like Parquet and ORC. The Arrow and Parquet projects include libraries that allow for reading and writing between the two formats.[citation needed]

See also


References

  1. ^ "Apache Parquet – Releases". Apache.org. Archived from the original on 22 February 2023. Retrieved 22 February 2023.
  2. ^ "Parquet-MR source code". GitHub. Archived from the original on 11 June 2018. Retrieved 2 July 2019.
  3. ^ "Release Date". Archived from the original on 2016-10-20. Retrieved 2016-09-12.
  4. ^ "Introducing Parquet: Efficient Columnar Storage for Apache Hadoop - Cloudera Engineering Blog". 2013-03-13. Archived from the original on 2013-05-04. Retrieved 2018-10-22.
  5. ^ "Apache Parquet paves the way for better Hadoop data storage". 28 April 2015. Archived from the original on 31 May 2017. Retrieved 21 May 2017.
  6. ^ "The Apache Software Foundation Announces Apache™ Parquet™ as a Top-Level Project : The Apache Software Foundation Blog". 27 April 2015. Archived from the original on 20 August 2017. Retrieved 21 May 2017.
  7. ^ "The striping and assembly algorithms from the Google-inspired Dremel paper". GitHub. Archived from the original on 26 October 2020. Retrieved 13 November 2017.
  8. ^ "Apache Parquet Documentation". Archived from the original on 2016-09-05. Retrieved 2016-09-12.
  9. ^ "Apache Parquet Cloudera". Archived from the original on 2016-09-19. Retrieved 2016-09-12.
  10. ^ "Apache Thrift". Archived from the original on 2021-03-12. Retrieved 2016-09-14.
  11. ^ "Supported Frameworks". Archived from the original on 2015-02-02. Retrieved 2016-09-12.
  12. ^ a b c d "Announcing Parquet 1.0: Columnar Storage for Hadoop | Twitter Blogs". blog.twitter.com. Archived from the original on 2016-10-20. Retrieved 2016-09-14.
    Retrieved from "https://en.wikipedia.org/w/index.php?title=Apache_Parquet&oldid=1230368827"

    This page was last edited on 22 June 2024, at 09:27 (UTC).

    Text is available under the Creative Commons Attribution-ShareAlike License 4.0; additional terms may apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia® is a registered trademark of the Wikimedia Foundation, Inc., a non-profit organization.


