Google File System
Google File System (GFS) is a proprietary distributed file system developed by Google Inc. for its own use.[1] It is designed to provide efficient, reliable access to data using large clusters of commodity hardware.
Design
GFS is optimized for Google's core data storage and usage needs (primarily the search engine), which generate enormous amounts of data that must be retained.[2] Google File System grew out of an earlier Google effort, "BigFiles", developed by Larry Page and Sergey Brin in the early days of Google, while it was still based at Stanford.[2] Files are divided into chunks of 64 megabytes, which are only extremely rarely overwritten or shrunk; files are usually appended to or read. GFS is also designed and optimized to run on Google's computing clusters, whose nodes consist of cheap "commodity" computers; precautions must therefore be taken against the high failure rate of individual nodes and the consequent data loss. Other design decisions favor high data throughput, even at the cost of latency.
The nodes are divided into two types: one Master node and a large number of Chunkservers. Chunkservers store the data files, with each individual file broken up into fixed-size chunks (hence the name) of about 64 megabytes,[3] similar to clusters or sectors in regular file systems. Each chunk is assigned a unique 64-bit label, and logical mappings from files to their constituent chunks are maintained. Each chunk is replicated several times throughout the network, with a minimum of three replicas, and more for files that are in high demand or need greater redundancy.
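The fixed-size chunking described above can be sketched in a few lines. This is an illustrative example only, assuming the 64-megabyte chunk size from the text; the function and variable names (`locate`, `chunk_index`) are not from GFS itself.

```python
# Map a byte offset within a file to its chunk, assuming 64 MB chunks
# as described above. Names here are illustrative, not GFS API.

CHUNK_SIZE = 64 * 1024 * 1024  # 64 megabytes

def locate(byte_offset: int) -> tuple[int, int]:
    """Return (chunk index within the file, offset within that chunk)."""
    return byte_offset // CHUNK_SIZE, byte_offset % CHUNK_SIZE

# Byte 150,000,000 of a file falls in the third chunk (index 2).
index, offset = locate(150_000_000)
```

Because chunks are large relative to typical block sizes, a file's chunk list stays short, which keeps the Master's metadata small.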
The Master server does not usually store the actual chunks, but rather all the metadata associated with them: the tables mapping the 64-bit labels to chunk locations and to the files they make up, the locations of the copies of each chunk, which processes are reading or writing a particular chunk, and whether a chunk is being "snapshotted" for replication (usually at the instigation of the Master server when, due to node failures, the number of copies of a chunk has fallen below the set minimum). All this metadata is kept current by the Master server periodically receiving updates from each chunkserver ("heartbeat messages").
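The Master's re-replication decision can be sketched as follows. This is a simplified model, not GFS code: the replica map, server identifiers, and function name are all hypothetical, and only the three-replica minimum comes from the text.

```python
# Sketch of the Master's re-replication check: when heartbeat reports
# show that a chunk's live replica count has fallen below the minimum
# of three, the Master schedules new copies. All names are hypothetical.

MIN_REPLICAS = 3

def chunks_needing_replication(replica_map: dict[int, set[str]]) -> list[int]:
    """replica_map: 64-bit chunk handle -> set of live chunkserver ids."""
    return [handle for handle, servers in replica_map.items()
            if len(servers) < MIN_REPLICAS]

# Chunk 0x2B lost two of its three replicas to node failures.
replicas = {0x1A: {"cs1", "cs2", "cs3"}, 0x2B: {"cs1"}}
under_replicated = chunks_needing_replication(replicas)
```

In the real system this check is driven by the heartbeat messages, which tell the Master which chunkservers (and hence which replicas) are still alive.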
Permissions for modifications are handled by a system of time-limited, expiring "leases": the Master server grants permission to a process for a finite period of time, during which no other process will be granted permission by the Master server to modify the chunk. The chunkserver holding the lease, designated the primary chunk holder, then propagates the changes to the chunkservers with the backup copies. The changes are not saved until all chunkservers acknowledge them, thus guaranteeing the completion and atomicity of the operation.
Programs access the chunks by first querying the Master server for the locations of the desired chunks; if the chunks are not being operated on (if there are no outstanding leases), the Master replies with the locations, and the program then contacts and receives the data from the chunkserver directly (similar to Kazaa and its supernodes).
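The two-step read path above can be sketched as follows. The Master and chunkservers are stand-in dictionaries and every name is hypothetical; a real client would use RPCs, but the division of labor is the same: the Master serves only metadata, and the data itself flows directly from a chunkserver.

```python
# Sketch of the GFS read path, assuming 64 MB chunks: step 1 asks the
# master for chunk locations, step 2 fetches bytes from a chunkserver.
# All identifiers and the sample data are hypothetical.

CHUNK_SIZE = 64 * 1024 * 1024

# (file path, chunk index) -> (chunk handle, replica locations)
master = {("/logs/web.log", 0): ("chunk-42", ["cs1", "cs2", "cs3"])}
# chunkserver id -> {chunk handle: chunk contents}
chunkservers = {"cs1": {"chunk-42": b"GET /index.html ..."}}

def read(path: str, offset: int, length: int) -> bytes:
    index = offset // CHUNK_SIZE
    handle, locations = master[(path, index)]   # step 1: metadata only
    data = chunkservers[locations[0]][handle]   # step 2: data, directly
    start = offset % CHUNK_SIZE
    return data[start:start + length]
```

Keeping bulk data off the Master is what lets a single Master scale: it answers small metadata queries while the many chunkservers carry the actual I/O load.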
Unlike most other file systems, GFS is not implemented in the kernel of an operating system, but is instead provided as a userspace library.
See also
- BigTable
- MapReduce
- Fossil, the native file system of Plan 9
- List of Google products
- Hadoop and its "Hadoop Distributed File System" (HDFS), an open-source Java project similar to GFS
- CloudStore
- Cloud storage
References
- ↑ "All this analysis requires a lot of storage. Even back at Stanford, the Web document repository alone was up to 148 gigabytes, reduced to 54 gigabytes through file compression, and the total storage required, including the indexes and link database, was about 109 gigabytes. That may not sound like much today, when you can buy a Dell laptop with a 120-gigabyte hard drive, but in the late 1990s commodity PC hard drives maxed out at about 10 gigabytes." "How Google Works"
- ↑ "The files managed by the system typically range from 100 megabytes to several gigabytes. So, to manage disk space efficiently, the GFS organizes data into 64-megabyte "chunks," which are roughly analogous to the "blocks" on a conventional file system--the smallest unit of data the system is designed to support. For comparison, a typical Linux block size is 4,096 bytes. It's the difference between making each block big enough to store a few pages of text, versus several fat shelves full of books." "How Google Works"
- "The Google File System" (PDF), Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung; pub. 19th ACM Symposium on Operating Systems Principles, Lake George, NY, October, 2003.
External links
- A Google-published research paper about GFS
- Google File System Eval: Part I at StorageMojo
- "How Google Works"
- ZDnet article on GFS
- "GFS: Evolution on Fast-forward"
- Recordings from a Course about Distributed Systems by Google, which also features a Lecture on GFS