partitionedBy

Partitions the output table created by create, createOrReplace, or replace using the given columns or transforms. When specified, the table data is stored by these values for efficient reads.

For example, when a table is partitioned by day, it may be stored in a directory layout like:

  • table/day=2019-06-01/
  • table/day=2019-06-02/

Partitioning is one of the most widely used techniques to optimize physical data layout. It provides a coarse-grained index for skipping unnecessary data reads when queries have predicates on the partitioned columns. For partitioning to work well, the number of distinct values in each column should typically be less than tens of thousands.

col and cols support only the following transform functions:

  • pyspark.sql.functions.years
  • pyspark.sql.functions.months
  • pyspark.sql.functions.days
  • pyspark.sql.functions.hours
  • pyspark.sql.functions.bucket

Syntax

partitionedBy(col, *cols)

Parameters

Parameter   Type                     Description
col         Column or str            The first partitioning column or transform.
*cols       Column or str, optional  Additional partitioning columns or transforms.

Returns

DataFrameWriterV2