In Apache Spark, "with as" (the SQL WITH ... AS common table expression) and cache() are both techniques for speeding up execution, but they differ in usage and effect, so the right choice depends on the specific application scenario. Both are illustrated below.
First, build a small DataFrame and rename a column with withColumnRenamed:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, IntegerType

spark = SparkSession.builder.appName('example').getOrCreate()

# Define an explicit schema with three integer columns.
schema = StructType([
    StructField('num1', IntegerType(), True),
    StructField('num2', IntegerType(), True),
    StructField('num3', IntegerType(), True)
])
data = [(1, 2, 3), (4, 5, 6), (7, 8, 9)]
df = spark.createDataFrame(data, schema=schema)

# Rename the first column; this returns a new DataFrame.
df = df.withColumnRenamed('num1', 'new_num1')
df.show()
The output is:
+--------+----+----+
|new_num1|num2|num3|
+--------+----+----+
| 1| 2| 3|
| 4| 5| 6|
| 7| 8| 9|
+--------+----+----+
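The example above only renames a column; the "with as" mentioned at the start refers instead to Spark SQL's WITH ... AS common table expression (CTE), which names an intermediate result so it can be referenced within a single query. A minimal sketch, assuming the df built above is registered under the made-up view name nums:

df.createOrReplaceTempView('nums')
result = spark.sql("""
    WITH doubled AS (
        SELECT new_num1, num2 * 2 AS num2_doubled
        FROM nums
    )
    SELECT * FROM doubled
    WHERE num2_doubled > 4
""")
result.show()

Unlike cache(), a CTE is simply expanded into the query that uses it; it does not persist data across separate actions, which is the main practical difference between the two.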
By contrast, cache() keeps a DataFrame's data around after it is first computed, so later actions can reuse it instead of recomputing it:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, IntegerType

spark = SparkSession.builder.appName('example').getOrCreate()

schema = StructType([
    StructField('name', StringType(), True),
    StructField('age', IntegerType(), True),
    StructField('gender', StringType(), True)
])
data = [("John", 22, "male"), ("Jane", 32, "female"), ("Bob", 45, "male")]
df = spark.createDataFrame(data, schema=schema)

# Mark the DataFrame for caching (MEMORY_AND_DISK by default for DataFrames).
df.cache()
df.show()
The output is:
+----+---+------+
|name|age|gender|
+----+---+------+
|John| 22|  male|
|Jane| 32|female|
| Bob| 45|  male|
+----+---+------+
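Two behaviors of cache() are worth noting: it is lazy (nothing is stored until an action actually computes the DataFrame), and cached data should be released once it is no longer needed. A short usage sketch, continuing from the df above:

# cache() only marks the DataFrame; the first action materializes the cache.
df.count()                      # computes the data and fills the cache
df.filter(df.age > 30).show()   # reuses the cached rows instead of recomputing
df.unpersist()                  # frees the cached blocks when done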