
Running word count using PySpark with PyCharm on Windows

I have Python 3.7, spark-2.4.0-bin-hadoop2.7, the latest JDK (11.0.1), and PyCharm installed on my Windows machine. I am trying to run a word count in PyCharm with the following code:

from pyspark import SparkConf, SparkContext

sc = SparkContext(master="local", appName="Spark Demo")
contentRDD = sc.textFile("file:\\C:\\Users\\Desktop\\deckofcards.txt")
# Keep only non-empty lines, then split each line into words.
nonempty_lines = contentRDD.filter(lambda x: len(x) > 0)
words = nonempty_lines.flatMap(lambda x: x.split(" "))
# Count each word, then flip (word, count) to (count, word) to sort by count.
wordcount = (words.map(lambda x: (x, 1))
             .reduceByKey(lambda x, y: x + y)
             .map(lambda x: (x[1], x[0]))
             .sortByKey(False))
for word in wordcount.collect():
    print(word)
wordcount.saveAsTextFile("C:\\Users\\Desktop\\output")

But it is giving me an error. py4j-0.10.7-src.zip is present at C:\Installation\spark-2.4.0-bin-hadoop2.7\python\lib, and there are no permission issues while reading the file. The error I am facing is below:

File "C:\Installation\spark-2.4.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2263, in _defaultReducePartitions return self.getNumPartitions()

File "C:\Installation\spark-2.4.0-bin-hadoop2.7\python\pyspark\rdd.py", line 2517, in getNumPartitions return self._prev_jrdd.partitions().size()

File "C:\Installation\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\java_gateway.py", line 1257, in call

File "C:\Installation\spark-2.4.0-bin-hadoop2.7\python\lib\py4j-0.10.7-src.zip\py4j\protocol.py", line 328, in get_return_value

py4j.protocol.Py4JJavaError: An error occurred while calling o21.partitions. : org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: file:/C:/Users/srini/Desktop/deckofcards.txt at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:287) at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229) at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315) at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:204) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.rdd.RDD.partitions(RDD.scala:251) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:49) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:253) at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:251) at scala.Option.getOrElse(Option.scala:121) at org.apache.spark.rdd.RDD.partitions(RDD.scala:251) at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:61) at org.apache.spark.api.java.AbstractJavaRDDLike.partitions(JavaRDDLike.scala:45) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.base/java.lang.reflect.Method.invoke(Method.java:566) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.GatewayConnection.run(GatewayConnection.java:238) at java.base/java.lang.Thread.run(Thread.java:834)

Process finished with exit code 1
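
For reference, a minimal check of whether plain Python can see the file before it is handed to Spark might look like the sketch below (the path is the hypothetical one from the textFile() call above; adjust it to the real file):

import os

# Hypothetical path copied from the textFile() call above; adjust to the real file.
local_path = r"C:\Users\Desktop\deckofcards.txt"

# If this prints False, Spark's "Input path does not exist" error is expected,
# since local-mode Spark reads through the ordinary Windows filesystem.
print(os.path.exists(local_path))

# The same path in the forward-slash file URI form commonly passed to textFile().
print("file:///" + local_path.replace("\\", "/"))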
