PythonException: An exception was thrown from the Python worker. Please see the stack trace below.
'TypeError: float() argument must be a string or a number, not 'NoneType'', from , line 3.
Full traceback below:
Traceback (most recent call last):
  File "", line 3, in
TypeError: float() argument must be a string or a number, not 'NoneType'
How do we handle `None` in the Spark DataFrame? I'd like to identify which rows/columns contain `None` and drop them.
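To make the goal concrete, here is a minimal sketch of one way to count nulls per column and then drop the rows that contain them, assuming the DataFrame is called `df` (the example data below is hypothetical, just to make the snippet self-contained):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical example data; the real df comes from your own pipeline.
df = spark.createDataFrame(
    [(1, 2.0), (2, None), (None, 3.5)],
    ["a", "b"],
)

# Count nulls per column to see which columns are affected.
null_counts = df.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns]
)
null_counts.show()

# Inspect the rows that contain a null in any column.
df.filter(" OR ".join(f"{c} IS NULL" for c in df.columns)).show()

# Drop rows that contain a null in any column
# (na.drop also accepts how="all" or a subset= list of columns).
cleaned = df.na.drop()
cleaned.show()
```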
Hope you are well. Just wanted to see if you were able to find an answer to your question, and whether you'd like to mark an answer as best. It would be really helpful for the other members too.
Cheers!
Even though the error surfaces during the write, the root cause can be in earlier code, because Spark is lazily evaluated. The error "TypeError: float() argument must be a string or a number, not 'NoneType'" generally occurs when a value passed to float() is None at that point. I would recommend checking your code for such cases.