Just to report that my first experience with SNAPPY went south. I have the equivalent of a gpt graph for GRD Calibration + Terrain-Correction implemented in SNAPPY, but it takes 32 min instead of 6 min 11 s (same box, same GRD file).
java_max_mem: 43G (which seems woefully wasteful for processing a 1 GB zip file).
I like the overall idea of Python integration, and would even expect some performance penalty for calling Java from Python, but a 5x slowdown is well beyond reasonable expectation.
My final goal is to do SLC processing, and a 5x penalty in that case would be unworkable. Back to gpt for now, but looking forward to solutions in this space.
Guido
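For context, a snappy version of such a chain typically looks like the sketch below. This is a minimal sketch, not Guido's actual script: it assumes a working SNAP + snappy installation, the file paths are placeholders, and only a couple of illustrative parameters are set (`Calibration` and `Terrain-Correction` are the standard SNAP operator names; check `gpt -h <operator>` for the full parameter lists).

```python
# Minimal snappy sketch of a GRD Calibration + Terrain-Correction chain.
# Assumes a working SNAP + snappy install; file paths are placeholders.
import snappy
from snappy import GPF, ProductIO

# java.util.HashMap is used to pass operator parameters
HashMap = snappy.jpy.get_type('java.util.HashMap')

# Read the GRD product (the zip can be passed directly)
product = ProductIO.readProduct('S1_GRD_product.zip')

# Radiometric calibration (same operator name as in a gpt graph)
cal_params = HashMap()
cal_params.put('outputSigmaBand', True)
calibrated = GPF.createProduct('Calibration', cal_params, product)

# Range-Doppler terrain correction
tc_params = HashMap()
tc_params.put('demName', 'SRTM 3Sec')
terrain_corrected = GPF.createProduct('Terrain-Correction', tc_params, calibrated)

# Write the result; writing is where most of the processing time is spent
ProductIO.writeProduct(terrain_corrected, 'S1_GRD_cal_tc', 'BEAM-DIMAP')
```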
A 5x performance penalty does not sound too surprising to me, but thank you for verifying it; this is why I tell everyone to do their pre-processing with gpt graphs. SNAP is multi-threaded and multi-core while SNAPpy isn’t, and programmers will tell you there are many technical reasons why Python will probably never run as fast as Java:
softwareengineering.stackexchange.com
We can perhaps do something about the multi-core issue in the future but I think it’s quite safe to predict that SNAPpy will always perform worse than native SNAP.
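For completeness, the gpt-graph equivalent of such a chain is an XML file of chained operator nodes, run with `gpt graph.xml`. A minimal sketch (file names are placeholders and most parameter blocks are elided; the operator names are the standard SNAP ones):

```xml
<graph id="Graph">
  <version>1.0</version>
  <node id="Read">
    <operator>Read</operator>
    <sources/>
    <parameters>
      <file>S1_GRD_product.zip</file>
    </parameters>
  </node>
  <node id="Calibration">
    <operator>Calibration</operator>
    <sources>
      <sourceProduct refid="Read"/>
    </sources>
  </node>
  <node id="Terrain-Correction">
    <operator>Terrain-Correction</operator>
    <sources>
      <sourceProduct refid="Calibration"/>
    </sources>
  </node>
  <node id="Write">
    <operator>Write</operator>
    <sources>
      <sourceProduct refid="Terrain-Correction"/>
    </sources>
    <parameters>
      <file>S1_GRD_cal_tc.dim</file>
      <formatName>BEAM-DIMAP</formatName>
    </parameters>
  </node>
</graph>
```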
PS: if you want to increase performance and have lots of RAM, you can run everything from a RAM disk.
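On Linux, a RAM disk is a one-line tmpfs mount; a sketch (the mount point and size are placeholders, and mounting requires root):

```shell
# Create a mount point and mount a 48 GB tmpfs RAM disk (requires root)
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=48G tmpfs /mnt/ramdisk

# Copy the input product there and point gpt/snappy at it
cp S1_GRD_product.zip /mnt/ramdisk/
```

Note that tmpfs contents disappear on reboot, so copy results back to persistent storage.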
Hello,
I am a bit confused by this post.
My understanding was that snappy still calls the efficient Java code, just without the multi-core option. In that case, shouldn’t the 5x speed difference simply come from using 5 cores vs 1 core?
For me the main disadvantage of snappy remains its memory use (Snappy not freeing memory).
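On the memory point, the workaround usually discussed is to release each product explicitly once you are done with it. A minimal sketch, again assuming a working snappy install (`dispose()` is the SNAP `Product` method exposed through snappy; the paths and loop are placeholders):

```python
# Release SNAP product resources explicitly when looping over many scenes,
# instead of waiting for Python/JVM garbage collection.
import snappy
from snappy import ProductIO

for path in ['scene1.zip', 'scene2.zip']:
    product = ProductIO.readProduct(path)
    # ... process and write the product here ...
    product.dispose()  # free the Java-side resources for this product

# Optionally nudge the JVM garbage collector as well
System = snappy.jpy.get_type('java.lang.System')
System.gc()
```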