I am converting my legacy Python code to Spark using PySpark.

I would like to get a PySpark equivalent of:

usersofinterest = actdataall[actdataall['ORDValue'].isin(orddata['ORDER_ID'].unique())]['User ID']

Both actdataall and orddata are Spark dataframes.

I don't want to use the toPandas() function, given the drawbacks associated with it.
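
For reference, here is a minimal, hypothetical setup that the snippets in the answer below can run against (the column names come from the question; the SparkSession and sample values are assumptions):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("isin-example").getOrCreate()
sc = spark.sparkContext  # used later for broadcasting

# Hypothetical sample data; column names match the question
actdataall = spark.createDataFrame([(1, "u1"), (2, "u2"), (4, "u4")], ["ORDValue", "User ID"])
orddata = spark.createDataFrame([(1,), (2,), (3,)], ["ORDER_ID"])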



1 Answer

  • If both dataframes are big, you should consider using an inner join, which will work as a filter:

    First let's create a dataframe containing the order IDs we want to keep:

    orderid_df = orddata.select(orddata.ORDER_ID.alias("ORDValue")).distinct()
    

    Now let's join it with our actdataall dataframe:

    usersofinterest = actdataall.join(orderid_df, "ORDValue", "inner").select('User ID').distinct()
    
  • If your target list of order IDs is small, you can use the pyspark.sql isin function, as mentioned in furianpandit's post. Don't forget to broadcast your variable before using it; Spark will copy the object to every node, making their tasks a lot faster (the isin filtering step itself is sketched after this list):

    # Collect the distinct order IDs to the driver as a Python list
    orderid_list = orddata.select('ORDER_ID').distinct().rdd.flatMap(lambda x: x).collect()
    # Broadcast the list so every executor gets a copy
    orderid_broadcast = sc.broadcast(orderid_list)
    
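
To complete the second approach, the broadcast value can then feed isin directly. A minimal sketch, assuming the orderid_broadcast variable from the snippet above (isin accepts the plain Python list stored in the broadcast's .value):

    # Filter with isin against the broadcast list, then keep the distinct user IDs
    usersofinterest = (actdataall
                       .filter(actdataall['ORDValue'].isin(orderid_broadcast.value))
                       .select('User ID')
                       .distinct())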
