csv (DataFrameWriter)

Saves the content of the DataFrame in CSV format at the specified path.

Syntax

csv(path, mode=None, compression=None, sep=None, quote=None, escape=None,
    header=None, nullValue=None, escapeQuotes=None, quoteAll=None,
    dateFormat=None, timestampFormat=None, ignoreLeadingWhiteSpace=None,
    ignoreTrailingWhiteSpace=None, charToEscapeQuoteEscaping=None,
    encoding=None, emptyValue=None, lineSep=None)

Parameters

path : str
    The path in any Hadoop-supported file system.
mode : str, optional
    Specifies the behavior when data already exists at the path. Accepted
    values are 'append', 'overwrite', 'ignore', and 'error' or
    'errorifexists' (default).

Returns

None

Examples

Write a DataFrame into a CSV file and read it back. The read sets nullValue to "Alice", so that literal string is parsed as NULL, which is why the name column shows NULL below.

import tempfile
with tempfile.TemporaryDirectory(prefix="csv") as d:
    df = spark.createDataFrame([{"age": 100, "name": "Alice"}])
    df.write.csv(d, mode="overwrite")

    spark.read.schema(df.schema).format("csv").option(
        "nullValue", "Alice").load(d).show()
    # +---+----+
    # |age|name|
    # +---+----+
    # |100|NULL|
    # +---+----+