Create a multi-dimensional cube for the current DataFrame using the specified columns, allowing aggregations to be performed on them.
Syntax
cube(*cols: "ColumnOrName")
Parameters
| Parameter | Type | Description |
|---|---|---|
| `cols` | list, str, int or Column | The columns to cube by. Each element can be a column name (`str`), a `Column` expression, a 1-based column ordinal (`int`), or a list of these. |
Returns
GroupedData: Cube of the data based on the specified columns.
Notes
A column ordinal starts from 1, unlike the 0-based indexing used by `__getitem__`.
Examples
df = spark.createDataFrame([("Alice", 2), ("Bob", 5)], schema=["name", "age"])
df.cube("name").count().orderBy("name").show()
# +-----+-----+
# | name|count|
# +-----+-----+
# | NULL| 2|
# |Alice| 1|
# | Bob| 1|
# +-----+-----+
df.cube("name", df.age).count().orderBy("name", "age").show()
# +-----+----+-----+
# | name| age|count|
# +-----+----+-----+
# | NULL|NULL| 2|
# | NULL| 2| 1|
# | NULL| 5| 1|
# |Alice|NULL| 1|
# |Alice| 2| 1|
# | Bob|NULL| 1|
# | Bob| 5| 1|
# +-----+----+-----+