I keep getting the following error:
ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
I've checked everything and can't work out why it's happening.
I have Docker containers running for Redis and the Celery worker, and I can see that Celery accepts tasks but can't connect to Redis, so the tasks are never handed over for execution.
Here's my __init__.py file, as per the Celery docs:
from .celery import app as celery_app
__all__ = ('celery_app',)
In celery.py I have the following code:
import os

from celery import Celery

# Set the default Django settings module for the 'celery' program.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

app = Celery('Notification')

# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
#   should have a `CELERY_` prefix.
app.config_from_object('django.conf:settings', namespace='CELERY')

app.conf.task_default_queue = 'notification'

# Load task modules from all registered Django app configs.
app.autodiscover_tasks()
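For reference, with namespace='CELERY' the worker reads settings whose names start with CELERY_ from the Django settings module, strips the prefix, and lowercases the rest. A sketch of the corresponding settings.py fragment (values are examples, not taken from the project):

```python
# settings.py fragment: with namespace='CELERY', Celery maps
# CELERY_BROKER_URL -> app.conf.broker_url, and so on.
CELERY_BROKER_URL = "redis://redis:6379/0"      # becomes broker_url
CELERY_TASK_DEFAULT_QUEUE = "notification"      # becomes task_default_queue
```
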
This is my Redis connection in setting_model.py:
class RedisConfig(BaseModel):
    host: str = "redis"  # name of the Redis container
    port: int = 6379
    db: int = 0

    class Config:
        arbitrary_types_allowed = True

    def get_redis_connection_str(self) -> str:
        """Return the Redis connection string."""
        return f"redis://{self.host}:{self.port}/{self.db}"
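The connection-string logic can be sanity-checked in isolation. A minimal sketch using a plain dataclass as a stand-in for the pydantic model (same fields and method, no pydantic dependency):

```python
from dataclasses import dataclass

# Stand-in for the pydantic BaseModel above, just to exercise the URL logic.
@dataclass
class RedisConfig:
    host: str = "redis"   # container name on the docker network
    port: int = 6379
    db: int = 0

    def get_redis_connection_str(self) -> str:
        return f"redis://{self.host}:{self.port}/{self.db}"

print(RedisConfig().get_redis_connection_str())                  # redis://redis:6379/0
print(RedisConfig(host="localhost").get_redis_connection_str())  # redis://localhost:6379/0
```
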
In settings.py I have the following:
# Celery
CELERY_BROKER_URL = env.str(
    "REDIS_URL", default=RedisConfig().get_redis_connection_str()
)
Here’s my docker-compose file:
notification-celery:
  build:
    context: ./notification
    dockerfile: Dockerfile
  command: [ "celery", "-A", "project", "worker", "-l", "info", "-Q", "notification" ]
  volumes:
    - ./volumes/notification/static:/app/static
    - ./volumes/notification/media:/app/media
  env_file: 'index.env'
  networks:
    - qoovee
  depends_on:
    - redis
  restart: always

redis:
  image: redis:6.0.9
  restart: always
  ports:
    - "6379:6379"
  expose:
    - 6379
  healthcheck:
    test: [ "CMD", "redis-cli", "-h", "localhost", "ping" ]
    interval: 1m
    timeout: 5s
  networks:
    - qoovee
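Since both services are on the qoovee network, the service name redis should resolve via Docker's DNS from inside the Celery container. A small helper to check that (an assumption about the setup; run it inside the container, for example via docker compose exec):

```python
import socket
from typing import Optional

def resolve(host: str) -> Optional[str]:
    """Return the IPv4 address `host` resolves to in this network namespace, or None."""
    try:
        return socket.gethostbyname(host)
    except socket.gaierror:
        return None

# Inside the celery container this should print the redis container's IP;
# outside the compose network it will print None.
print(resolve("redis"))
```
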
I checked everything and don't understand why it doesn't work.
I searched the Internet as well, but none of the advice helped, so I hope I can find an answer here!
Where does the idea for this come from? I'm not familiar with anything using this type of structure for a Redis configuration.
I've only ever used the broker_url and result_backend settings, along with the older capitalized versions CELERY_BROKER_URL and CELERY_RESULT_BACKEND.
See Redis backend settings.
It's pydantic settings; it was like this before I arrived. I'm new on the project and trying not to change much, just the current targets.
The tricky part is that it doesn't work locally, yet on the demo server everything is up and running without a problem…
KenWhitesell
Maybe there's something wrong with the Dockerfile? Maybe I should add a port binding? I tried that and it didn't work, but maybe I should do it a different way?
I tried this in the docker-compose file:

ports:
  - "8000:8005"
Didn’t work though.
You would have needed to change it, because Redis isn't at 127.0.0.1 relative to the Celery container.
The ports settings on the containers don't matter, because you've got both containers running in the same Docker network.
Do you have any other code running in that container that is accessing Redis? I'd check to see how that might be configured. (You show that CACHE_URL setting - what is using it, if anything?) If something is using it, that would also need to be changed to reference the redis host name in the URL.
You may also want to verify what is being set for CELERY_BROKER_URL. (A print statement in your settings file will show up in the docker logs.)
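A minimal version of that debug print, placed at the very bottom of settings.py (temporary; the assignment line here stands in for the definition earlier in the file):

```python
# settings.py, at the very bottom (temporary debug aid): this print shows up
# in `docker logs` for every process that imports the settings, worker included.
import sys

CELERY_BROKER_URL = "redis://redis:6379/0"   # stands in for the earlier definition
print(f"CELERY_BROKER_URL = {CELERY_BROKER_URL!r}", file=sys.stderr)
```
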
I have the following task that is trying to connect to Redis:
def send_email(request):
    data = {
        'emails': request.data.get('emails'),
        'context': request.data.get('context', {}),
        'template': request.data.get('template'),
        **request.data.get('sub')
    }
    nots = notify.delay('send_mail', **data)
    logger.info(f'NOTS: {nots}')
    return SuccessResponse()


@shared_task
def notify(action, service=None, *args, **kwargs):
    try:
        service = Service.objects.first()
    except Service.DoesNotExist as ex:
        logger.error(f'{ex}')
        return
    notification = Notification(service)
    result = getattr(notification, action, lambda **i: print(i))(*args, **kwargs)
    logger.info(f'SENDING EMAIL: {result}')
I even get a task ID from Celery, but it never reaches Redis.
But this error doesn’t necessarily need to come from celery - it could be any part of your code that tries to connect to the redis server.
Do a global search through your code for either/both of the strings ‘127.0.0.1’ or ‘localhost’. Do not limit your search to just the celery components. Also check for settings and related settings files that may also have those references. Identify everything with either of those strings.
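That global search can be scripted; a sketch that walks the tree from the repository root (file extensions are assumptions, adjust them to the project layout):

```python
import os
import re

# Matches the two strings Ken suggests searching for.
PATTERN = re.compile(r"localhost|127\.0\.0\.1")

def find_hardcoded_hosts(root="."):
    """Return (path, line number, line) for every match under `root`."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".py", ".env", ".yml", ".yaml")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    for lineno, line in enumerate(fh, 1):
                        if PATTERN.search(line):
                            hits.append((path, lineno, line.strip()))
            except OSError:
                pass  # unreadable file; skip it
    return hits

for path, lineno, line in find_hardcoded_hosts():
    print(f"{path}:{lineno}: {line}")
```
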
Well, I didn't find anything…
Everything seems fine. Even when I push to the demo server it connects to Redis, just not locally…
On the demo we have Linux, but locally I have Windows 10 with containers.
I thought maybe there's something I need to add on the host side?
This is the only thing I’m getting:
2023-02-22 10:16:07 -------------- celery@83ba26f14872 v4.1.0 (latentcall)
2023-02-22 10:16:07 ---- **** -----
2023-02-22 10:16:07 --- * *** * -- Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-debian-8.10 2023-02-22 16:16:07
2023-02-22 10:16:07 -- * - **** ---
2023-02-22 10:16:07 - ** ---------- [config]
2023-02-22 10:16:07 - ** ---------- .> app: Notification:0x7f8928231748
2023-02-22 10:16:07 - ** ---------- .> transport: redis://localhost:6379/0
2023-02-22 10:16:07 - ** ---------- .> results: disabled://
2023-02-22 10:16:07 - *** --- * --- .> concurrency: 4 (prefork)
2023-02-22 10:16:07 -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
2023-02-22 10:16:07 --- ***** -----
2023-02-22 10:16:07 -------------- [queues]
2023-02-22 10:16:07 .> notification exchange=notification(direct) key=notification
2023-02-22 10:16:07
2023-02-22 10:16:07
2023-02-22 10:16:07 [tasks]
2023-02-22 10:16:07 . webapp.tasks.notify
2023-02-22 10:16:07
2023-02-22 10:16:07
2023-02-22 10:16:07 Please specify a different user using the -u option.
2023-02-22 10:16:07
2023-02-22 10:16:07 User information: uid=0 euid=0 gid=0 egid=0
2023-02-22 10:16:07
2023-02-22 10:16:07 uid=uid, euid=euid, gid=gid, egid=egid,
2023-02-22 10:16:07 [2023-02-22 16:16:07,665: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
2023-02-22 10:16:07 Trying again in 2.00 seconds...
2023-02-22 10:16:07
2023-02-22 10:16:09 [2023-02-22 16:16:09,669: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
2023-02-22 10:16:09 Trying again in 4.00 seconds...
2023-02-22 10:16:09
2023-02-22 10:16:13 [2023-02-22 16:16:13,675: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
2023-02-22 10:16:13 Trying again in 6.00 seconds...
I noticed that even if I change the code, it doesn't affect the connection: whether I put 127.0.0.1:6379/0 or the container name, it still tries to connect to redis://localhost:6379/0. I tried building a new image after changing the code, and inside the container I can see the change, but it still tries to connect to localhost. I can't find how to change it. Maybe there are other settings I can apply in the container, or something to add to the docker-compose file?
What does your Dockerfile look like for this compared to the Dockerfile for your regular Django application?
Your container clearly isn't running with your settings. Either some environment variable or other setting is interfering with what you're trying to do.
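One common way an environment variable wins out: the default passed to env.str only applies when the variable is absent. A sketch mimicking that lookup (a simplified stand-in, not the real django-environ library):

```python
import os

def env_str(name: str, default: str) -> str:
    """Simplified stand-in for django-environ's env.str: the env var wins over the default."""
    return os.environ.get(name, default)

os.environ.pop("REDIS_URL", None)
print(env_str("REDIS_URL", "redis://redis:6379/0"))    # default used: redis://redis:6379/0

# e.g. a stray value leaked in from a base image, shell, or stale env file:
os.environ["REDIS_URL"] = "redis://localhost:6379/0"
print(env_str("REDIS_URL", "redis://redis:6379/0"))    # env wins: redis://localhost:6379/0
```
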
Hello Ken, sorry I couldn't answer earlier.
Here’s my Dockerfile:
FROM qoovee/notification:image
ENV PROJECT_NAME=notification
WORKDIR /app
ADD requirements/ requirements/
RUN pip install --upgrade pip && pip install --no-cache-dir -r requirements/dev.txt
COPY . .
RUN chown app:app -R .
EXPOSE 8000
And this is the docker-compose service that starts the application:
notification-celery:
  build:
    context: ./notification
    dockerfile: Dockerfile
  command: [ "celery", "-A", "project", "worker", "-l", "info", "-Q", "notification" ]
  volumes:
    - ./volumes/notification/static:/app/static
    - ./volumes/notification/media:/app/media
  env_file: 'index.env'
  networks:
    - qoovee
  expose:
    - 8000
  depends_on:
    - redis
  restart: always
In the .env file I have this line: REDIS_URL=redis://redis:6739/0
When I inspect the container I see it picks up REDIS_URL as set in the .env file, but I still get this error:
2023-02-28 12:34:58 /usr/local/lib/python3.6/site-packages/environ/environ.py:615: UserWarning: /app/project/settings/.env doesn't exist - if you're not configuring your environment separately, create one.
2023-02-28 12:34:58 "environment separately, create one." % env_file)
2023-02-28 12:35:00 /usr/local/lib/python3.6/site-packages/celery/platforms.py:795: RuntimeWarning: You're running the worker with superuser privileges: this is
2023-02-28 12:35:00 absolutely not recommended!
2023-02-28 12:35:00
2023-02-28 12:35:00 Please specify a different user using the -u option.
2023-02-28 12:35:00
2023-02-28 12:35:00 User information: uid=0 euid=0 gid=0 egid=0
2023-02-28 12:35:00
2023-02-28 12:35:00 uid=uid, euid=euid, gid=gid, egid=egid,
2023-02-28 12:35:00
2023-02-28 12:35:00 -------------- celery@70b79a0f3b1e v4.1.0 (latentcall)
2023-02-28 12:35:00 ---- **** -----
2023-02-28 12:35:00 --- * *** * -- Linux-5.10.16.3-microsoft-standard-WSL2-x86_64-with-debian-8.10 2023-02-28 10:35:00
2023-02-28 12:35:00 -- * - **** ---
2023-02-28 12:35:00 - ** ---------- [config]
2023-02-28 12:35:00 - ** ---------- .> app: Notification:0x7f08cea2e748
2023-02-28 12:35:00 - ** ---------- .> transport: redis://localhost:6379/0
2023-02-28 12:35:00 - ** ---------- .> results: disabled://
2023-02-28 12:35:00 - *** --- * --- .> concurrency: 4 (prefork)
2023-02-28 12:35:00 -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
2023-02-28 12:35:00 --- ***** -----
2023-02-28 12:35:00 -------------- [queues]
2023-02-28 12:35:00 .> notification exchange=notification(direct) key=notification
2023-02-28 12:35:00
2023-02-28 12:35:00
2023-02-28 12:35:00 [tasks]
2023-02-28 12:35:00 . webapp.tasks.notify
2023-02-28 12:35:00
2023-02-28 12:35:00 [2023-02-28 10:35:00,566: ERROR/MainProcess] consumer: Cannot connect to redis://localhost:6379/0: Error 99 connecting to localhost:6379. Cannot assign requested address..
2023-02-28 12:35:00 Trying again in 2.00 seconds...