How to change a DataFrame column from String type to Double type in PySpark

I have a dataframe with a column of String type. I want to change the column type to Double type in PySpark.

Following is the way I did it:

    from pyspark.sql.functions import UserDefinedFunction
    from pyspark.sql.types import DoubleType

    toDoublefunc = UserDefinedFunction(lambda x: x, DoubleType())
    changedTypedf = joindf.withColumn("label", toDoublefunc(joindf['show']))

Just wanted to know, is this the right way to do it? While running it through Logistic Regression I am getting some error, so I wonder whether this is the reason for the trouble.

There is no need for a UDF here. Column already provides the cast method with a DataType instance:

    from pyspark.sql.types import DoubleType

    changedTypedf = joindf.withColumn("label", joindf["show"].cast(DoubleType()))

or with a short string:

    changedTypedf = joindf.withColumn("label", joindf["show"].cast("double"))
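
Note that cast never fails on malformed values: a string that cannot be parsed as a double simply becomes null, which can later surface as errors in MLlib stages such as Logistic Regression. A minimal sketch illustrating this (the SparkSession setup and sample data are assumptions; only the joindf / show / label names come from the question):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()

    # Toy frame standing in for joindf; "show" holds numeric strings plus one bad value
    joindf = spark.createDataFrame([("1.5",), ("2.0",), ("oops",)], ["show"])

    # col("show") is equivalent to joindf["show"]; unparseable strings become null
    changedTypedf = joindf.withColumn("label", col("show").cast("double"))
    changedTypedf.show()
    # +----+-----+
    # |show|label|
    # +----+-----+
    # | 1.5|  1.5|
    # | 2.0|  2.0|
    # |oops| null|
    # +----+-----+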

These canonical string names (other variations can be supported as well) correspond to the simpleString value of the corresponding type. So for atomic types:

    from pyspark.sql import types 

    for t in ['BinaryType', 'BooleanType', 'ByteType', 'DateType',
              'DecimalType', 'DoubleType', 'FloatType', 'IntegerType',
              'LongType', 'ShortType', 'StringType', 'TimestampType']:
        print(f"{t}: {getattr(types, t)().simpleString()}")
    BinaryType: binary
    BooleanType: boolean
    ByteType: tinyint
    DateType: date
    DecimalType: decimal(10,0)
    DoubleType: double
    FloatType: float
    IntegerType: int
    LongType: bigint
    ShortType: smallint
    StringType: string
    TimestampType: timestamp
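
Any of these names can be passed to cast directly, and parameterized types take their arguments inside the string. A small sketch, reusing the hypothetical joindf from above:

    # Equivalent to joindf["show"].cast(DecimalType(10, 2)), written as a simpleString
    changedTypedf = joindf.withColumn("label", joindf["show"].cast("decimal(10,2)"))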

And the simpleString values for complex types look like this, for example:

    types.ArrayType(types.IntegerType()).simpleString()
    'array<int>'
    types.MapType(types.StringType(), types.IntegerType()).simpleString()
    'map<string,int>'
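
The same string form can be used to cast nested columns, at least in recent Spark versions. A small sketch (the DataFrame and column names below are made up):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical frame with an array of numeric strings
    df = spark.createDataFrame([(["1", "2", "3"],)], ["xs"])

    # Cast the whole column using the simpleString form of the target type
    df.withColumn("xs_int", col("xs").cast("array<int>")).printSchema()
    # root
    #  |-- xs: array (nullable = true)
    #  |    |-- element: string (containsNull = true)
    #  |-- xs_int: array (nullable = true)
    #  |    |-- element: integer (containsNull = true)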

From: stackoverflow.com/q/32284620
