Saves the content of the DataFrame in Parquet format at the specified path.
Syntax
parquet(path, mode=None, partitionBy=None, compression=None)
Parameters
| Parameter | Type | Description |
|---|---|---|
| path | str | The path in any Hadoop-supported file system. |
| mode | str, optional | The behavior when data already exists. Accepted values are 'append', 'overwrite', 'ignore', and 'error' or 'errorifexists' (default). |
| partitionBy | str or list, optional | Names of partitioning columns. |
| compression | str, optional | The compression codec to use. |
Returns
None
Examples
Write a DataFrame into a Parquet file and read it back.
import tempfile

with tempfile.TemporaryDirectory(prefix="parquet") as d:
    spark.createDataFrame(
        [{"age": 100, "name": "Alice"}]
    ).write.parquet(d, mode="overwrite")
    spark.read.format("parquet").load(d).show()
    # +---+-----+
    # |age| name|
    # +---+-----+
    # |100|Alice|
    # +---+-----+