I have included a self-contained code sample to reproduce the problem, i.e. it's possible to run it as `python bug.py`.
This will be a somewhat sparse bug report, but I'm putting it here early in case anyone recognizes this. We can do more work to replicate the environment if not.

In our upstream tests, xarray tests nightly against unreleased versions of our dependencies to get ahead of any incompatible code. Occasionally it also catches bugs in upstream libraries.

Here, we're getting an error importing some items from `numba.np.ufunc`, including the somewhat confusing `SystemError: initialization of _internal failed without raising an exception` at the bottom of the stack trace.
Thanks for the report @max-sixty. Do you know what version of Python, Numba and NumPy (and where from, e.g. conda-forge or pip or ...) were present when this a) failed and b) was last successful?

I think:

`SystemError: initialization of _internal failed without raising an exception`

is saying that the module failed to initialize because it returned an error code, but no exception was raised to go with the failure result (e.g. the function `PyInit_<module name>` returned NULL but an exception wasn't set).
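A small illustration of why this failure mode can slip past import guards (this snippet is mine, not from the thread): `SystemError` is not a subclass of `ImportError`, so the common `except ImportError` guard around an optional import will not catch it.

```python
# SystemError (raised when a C extension's PyInit_<module> returns NULL
# without setting an exception) is NOT an ImportError subclass, so the
# usual `except ImportError` guard around optional imports misses it.
print(issubclass(SystemError, ImportError))  # False
print(issubclass(SystemError, Exception))    # True

try:
    raise SystemError(
        "initialization of _internal failed without raising an exception"
    )
except ImportError:
    print("caught as ImportError")
except SystemError:
    print("caught as SystemError")  # this branch runs
```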
> Do you know what version of Python, Numba and NumPy (and where from, e.g. conda-forge or pip or ...) were present when this a) failed and b) was last successful?

This is the first day it's failed with this error, so it is likely to be a change somewhere in the past day.

Though I realize "somewhere" isn't that helpful, and it now seems it may not be a numba-specific error given that numba hasn't changed. Do you possibly have any similar tests which run on the latest branch of numpy etc.? If those pass, then this is more likely to be something specific to xarray / numbagg, even though that didn't look likely at first glance.
> is saying that the module failed to initialize as it returned an error code, but there was no exception raised to go with the failure result (e.g. function `PyInit_<module name>` returned NULL but an exception wasn't set).

Ah great, that makes much more sense.
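For future reports, here is a minimal sketch of how the requested version info could be gathered in CI (my illustration, not an existing helper; `importlib.metadata` is stdlib from Python 3.8, and the package names are simply the ones under discussion here):

```python
import sys
from importlib import metadata

# Collect the interpreter version plus whichever of the relevant
# packages are installed, without failing when one is absent.
versions = {"python": sys.version.split()[0]}
for pkg in ("numpy", "numba", "numbagg"):
    try:
        versions[pkg] = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        versions[pkg] = "not installed"

for name, ver in versions.items():
    print(f"{name}: {ver}")
```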
> This is the first day it's failed with this error, so it is likely to be a change somewhere in the past day.

That's not quite true: it started failing 5 days ago, but a bug in our CI caused that to go unnoticed. It probably doesn't change too much, though.
> Do you know what version of Python, Numba and NumPy (and where from, e.g. conda-forge or pip or ...) were present when this a) failed and b) was last successful?
>
> This is the first day it's failed with this error, so it is likely to be a change somewhere in the past day.
>
> Though I realize "somewhere" isn't that helpful, and it now seems that it may not be a numba-specific error given that hasn't changed. Do you possibly have any similar tests which run on the latest branch of numpy etc? If those pass, then this is more likely to be something specific to xarray / numbagg, even though that didn't look likely at first glance.
Thanks for this information. Numba doesn't automatically test against NumPy latest/mainline, but it does test NumPy alphas/pre-releases and provides feedback on these. Once a NumPy package becomes more generally available across a number of distribution mechanisms, support for that release of NumPy is added to Numba. As a result, released versions of Numba have explicit version constraints with respect to packages such as NumPy. As it's quite common for updates to e.g. NumPy to break something in Numba, Numba tries really hard to provide predictable failure modes (e.g. it won't install, or it raises a RuntimeError when an unsupported NumPy is found).

Do you think it'd be possible to diff the logs between the last working run and now? That might hint at the problem.
> is saying that the module failed to initialize as it returned an error code, but there was no exception raised to go with the failure result (e.g. function `PyInit_<module name>` returned NULL but an exception wasn't set).
>
> Ah great, that makes much more sense.
Great, glad that's clearer, it's quite a "low level" problem.
I just checked, and as far as I can tell the only differences are some I/O libraries (unrelated, I think) and the OS version. We use the `ubuntu-latest` tag to run on GitHub Actions, and apparently that changed from 20.04.5 to 22.04.1. Not sure if that should make a difference? Edit: the version of `__glibc` changes as well: `__glibc=2.31=0` → `__glibc=2.35=0`.

Edit: For reference, here are the logs: passing (we have other errors, but they're definitely unrelated) and failing.
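As an aside (my illustration, not from the thread): the stdlib can report the libc the interpreter is running against, which is one quick way to confirm the glibc version seen in a CI log.

```python
import platform

# On glibc-based Linux this returns e.g. ('glibc', '2.35'); on macOS or
# Windows it returns ('', ''), since there is no glibc to detect.
lib, version = platform.libc_ver()
print(lib, version)
```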
Thanks for checking @keewis. I'm going to make some guesses based on Numba internals... I think the following:

1. The issue is unlikely to be glibc, Python or Python C-API related: to get to the point of importing `numba.np.ufunc._internal`, Numba would have had to import other C extension modules with similar glibc linkage and C-API use, and these evidently worked "OK" as no errors were raised.
2. Whilst Numba guards against using unsupported NumPy versions, current NumPy nightlies are likely to be reporting that they are 1.23-series builds (as IIRC 1.24 hasn't been tagged), so Numba is effectively using something that hasn't been tested.
3. Following some discussion at the weekly triage meeting, @gmarkall noted that there's some potential for susceptibility to changes in NumPy ufunc definitions in the `init_ufunc_dispatch` function in `_internal.c`. NumPy has recently merged a change to help with "Update ufunc loop signature resolution to use NumPy" (#22422; see also #8538), which may well be the cause of the problem if 2. is also correct.
Numba relies on NumPy infrastructure for e.g. ufunc creation, and so there is a very tight coupling in this respect, between Numba and NumPy (the code is all in C and uses Python/NumPy C-API).
To debug and potentially fix the issue reported, maybe first try installing a supported released version of NumPy? If that fixes it, it narrows the problem down to what is suspected in the above.
Hope this helps.
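To sketch the point above about version guards and nightlies: a major.minor check only helps if the version string actually reports an unsupported series. This is a hypothetical guard of my own for illustration, not Numba's actual code:

```python
def numpy_supported(version_str, max_exclusive=(1, 24)):
    """Hypothetical guard: accept only NumPy versions below max_exclusive.

    Parses just the leading major.minor and ignores .dev/local suffixes,
    which is why a nightly reporting itself as a 1.23-series build would
    slip past a guard like this even if it contains 1.24 changes.
    """
    parts = version_str.split(".")
    major, minor = int(parts[0]), int(parts[1])
    return (major, minor) < max_exclusive

print(numpy_supported("1.23.5"))                       # True: allowed
print(numpy_supported("1.24.0.dev0+1120.gf30af6acd"))  # False: rejected
```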
> To debug and potentially fix the issue reported, maybe first try installing a supported released version of NumPy? If that fixes it, it narrows the problem down to what is suspected in the above.

Our tests are still very much passing on the newest released versions of libraries, so this is only an issue with the versions from HEADs. I just tried installing numpy from main — I get the same failure with numba 0.56.3, but it works with numba 0.56.4. I notice that's consistent with the declared dependencies of numba, so it seems numba is doing everything it should be...

Possibly over at xarray we should at least ensure we're on the latest numba for this test. I did a PR to do that earlier: pydata/xarray#7311 (review). Let us know if there's some pre-compiled version of dependencies (pydata/xarray#7311 (comment)), otherwise we'll just use pip.

Feel free to close, and thanks for your attention @stuartarchibald!
For me, numba=0.56.4 from conda-forge fails locally with numpy=1.24.0.dev0+1120.gf30af6acd, so I'm not sure what I'm doing wrong? @max-sixty, was that with the numba head?

And pip does indeed complain that numba requires numpy<1.24, but it installs it anyways (as intended in this case, because we do actually want to test with the most recent version, even if that breaks something). So while point 2 from #8615 (comment) does not apply, point 3 may very well be the cause of the problem.
Redoing the test, I also get a failure with:

- both that nightly wheel command, and `pip install .` on numpy's HEAD
- both numba 0.56.3 and 0.56.4

So I think my prior success with 0.56.4 must have been a mistake (I notice that pip reinstalls a lower version of numpy when installing a new version of numba, so that may have been it, mea culpa).
...so I'm not sure what we should do from Xarray's POV — I guess we can try running tests without numba? But that misses some of the utility of these upstream tests.
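One option for the upstream CI, sketched here as my own assumption about how the tests could be arranged (not an existing xarray helper): since the failure is a `SystemError` rather than an `ImportError`, any "skip if numba is unavailable" guard needs to catch more than `ImportError`.

```python
import importlib

def can_import(name):
    """Return True if `name` imports cleanly.

    Catches Exception rather than just ImportError, because a broken
    C extension can raise SystemError at import time, as in this issue.
    """
    try:
        importlib.import_module(name)
        return True
    except Exception:
        return False

# Tests depending on numba could be skipped when the import itself breaks:
run_numba_tests = can_import("numba")
print("numba importable:", run_numba_tests)
```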
(The issue title was changed from "SystemError: initialization of _internal failed without raising an exception" to "Error on import with numpy HEAD" on Nov 23, 2022.)
> ...so I'm not sure what we should do from Xarray's POV — I guess we can try running tests without numba? But that misses some of the utility of these upstream tests.
I'm not sure what is best for the Xarray project, however this information might help inform that decision.

Numba's releases are restricted for use against specific versions of NumPy, for reasons as demonstrated in this issue. It is often the case that Numba might "work" on other versions of NumPy, but this cannot be guaranteed, as they might not be binary compatible etc.

Numba's main branch has no restrictions on the version of NumPy, but it also doesn't track NumPy HEAD. The folks who maintain Numba do an upgrade of the supported NumPy version, often in a single patch, as pre-release packages become available for testing. I'll raise this issue with the other maintainers at the next triage meeting, but historically the view has been:

- There are simply not the resources (human, packages, or compute) for Numba to track NumPy HEAD.
- Development of Numba is considerably easier in stable environments. There are already many variations:
  - Python versions (the bytecode from which Numba compiles changes in literally every version)
  - NumPy versions (algorithms, APIs, etc. change in pretty much every version)
  - OS/architecture (Numba supports at least 7, each with their own individual issues)
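To illustrate the point about Python versions (my example, not from the thread): the bytecode Numba consumes really does shift between CPython releases, which the stdlib disassembler makes easy to see. The exact opcode names depend on the interpreter you run this under.

```python
import dis
import sys

def add_one(x):
    return x + 1

# The opcode sequence for even this trivial function varies by CPython
# release (e.g. 3.10 and 3.11 name the addition instruction differently),
# which is why a bytecode-consuming compiler must track each version.
print(sys.version_info[:2])
ops = [ins.opname for ins in dis.Bytecode(add_one)]
print(ops)
```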
The following quick-n-dirty patch gets around the SystemError with NumPy main:

```diff
diff --git a/numba/np/ufunc/_internal.c b/numba/np/ufunc/_internal.c
index 98a643788..c83b4f403 100644
--- a/numba/np/ufunc/_internal.c
+++ b/numba/np/ufunc/_internal.c
@@ -326,10 +326,13 @@ init_ufunc_dispatch(int *numpy_uses_fastcall)
         } else if (strncmp(crnt_name, "reduceat", 9) == 0) {
             ufunc_dispatch.ufunc_reduceat =
                 (PyCFunctionWithKeywords)crnt->ml_meth;
+        } else if (strncmp(crnt_name, "resolve_dtypes", 15) == 0) {
         } else {
             result = -1;
             break;
+        case '_':
+            break;
         default:
             result = -1; /* Unknown method */
```

However, I have not tried much beyond that yet!
The following is also needed because the deprecated MachAr is now removed:
With this I'm able to start running the test suite; as things pass by, everything appears OK so far, but I'll update here if the test run completes.
Thanks @stuartarchibald — that makes sense. I think we'll explore excluding anything that runs numba in these tests if this crops up again
> There are simply not the resources (human, packages, or compute) for Numba to track NumPy HEAD.
Totally empathize! FWIW, we've found that the upstream HEAD tests reduce the maintainer burden, since we can solve problems at a slower pace — no need to rush when there's an upstream release that breaks something — and we can provide feedback on breakages where there's a mistaken breaking change. But numba is a more technically complex project than Xarray, so possibly the learning is not transferable.
Thanks for sharing this. I can see that for some projects, aspects of the described approach would certainly be beneficial in terms of maintenance burden.
The Numba maintainers do track changes to dependencies and work with other projects if issues are anticipated; they also often "try out" the code base against newer versions of projects that impact Numba (CPython, LLVM, NumPy, etc.) to assess the level of effort needed to do the necessary updates. I think the learning from the Numba side is probably that, in developing a compiler, having a stable and predictable environment in which to develop makes it easier to isolate problems. This is in part because problems are often very niche, e.g. a single Python+NumPy+OS+CPU combination might exhibit a particular issue!
Just a heads up that I think I'm hitting this error in CIs now that NumPy 1.24 has been released:

```
________________________ ERROR collecting test session _________________________
../../../micromamba/envs/test/lib/python3.8/importlib/__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
<frozen importlib._bootstrap>:991: in _find_and_load
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
<frozen importlib._bootstrap>:671: in _load_unlocked
../../../micromamba/envs/test/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:168: in exec_module
    exec(co, module.__dict__)
pandas/tests/window/conftest.py:93: in <module>
    @pytest.fixture(params=[pytest.param("numba", marks=td.skip_if_no("numba")), "cython"])
pandas/util/_test_decorators.py:192: in skip_if_no
    not safe_import(package, min_version=min_version), reason=msg
pandas/util/_test_decorators.py:95: in safe_import
    mod = __import__(mod_name)
../../../micromamba/envs/test/lib/python3.8/site-packages/numba/__init__.py:43: in <module>
    from numba.np.ufunc import (vectorize, guvectorize, threading_layer,
../../../micromamba/envs/test/lib/python3.8/site-packages/numba/np/ufunc/__init__.py:3: in <module>
    from numba.np.ufunc.decorators import Vectorize, GUVectorize, vectorize, guvectorize
../../../micromamba/envs/test/lib/python3.8/site-packages/numba/np/ufunc/decorators.py:3: in <module>
    from numba.np.ufunc import _internal
E   SystemError: initialization of _internal failed without raising an exception
```

We can pin numpy to 1.23.5 for now, no issue.
Thanks, this helps a lot. I reinstalled numpy with `pip install numpy==1.23.0`, and the error on `import numba` was resolved.
Linked issues:

- import datashader fails with `SystemError: initialization of _internal failed without raising an exception` (movingpandas/movingpandas-examples#22)
- Remove pinned numpy version when the problem is fixed (UMAP not working with numpy>1.23.5) (BiAPoL/napari-clusters-plotter#215)
- Limit version of numpy to <1.24, to fix issue with numba and numpy 1.24 (see numba/numba#8615)