
Configuring and Running Django + Celery in Docker Containers


Justyna Ilczuk  Oct 25, 2016


After reading this blog post, you will be able to configure Celery with Django, PostgreSQL, Redis, and RabbitMQ, and then run everything in Docker containers.

Today, you'll learn how to set up a distributed task processing system for quick prototyping. You will configure Celery with Django, PostgreSQL, Redis, and RabbitMQ, and then run everything in Docker containers. You'll need some working knowledge of Docker for this tutorial, which you can get in one of my previous posts here.

Django is a well-known Python web framework, and Celery is a distributed task queue. You'll use PostgreSQL as a regular database to store jobs, RabbitMQ as the message broker, and Redis as the task result backend.

Motivation

When you build a web application, sooner or later you'll have to implement some kind of offline task processing.

Example:

Alice wants to convert her cat photos from .jpg to .png or create a .pdf from her collection of .jpg cat files. Doing either of these tasks in one HTTP request will take too long to execute and will unnecessarily burden the web server - meaning we can't serve other requests at the same time. The common solution is to execute the task in the background - often on another machine - and poll for the result.

A simple setup for offline task processing could look like this:

1. Alice uploads a picture.  
2. Web server schedules job on worker.  
3. Worker gets job and converts photo.  
4. Worker creates some result of the task (in this case, a converted photo).  
5. Web browser polls for the result.  
6. Web browser gets the result from the server.  

This setup looks clear, but it has a serious flaw - it doesn't scale well. What if Alice has so many cat pictures that one server isn't enough to process them all at once? Or what if one very big job blocks every other job behind it? Does she care whether all of the images are processed at once? What if processing fails at some point?

Fortunately, there is a solution that won't kill your machine every time you get a bigger batch of images. You need something between the web server and the worker: a broker. The web server would schedule new tasks by communicating with the broker, and the broker would communicate with workers to actually execute these tasks. You probably also want to buffer your tasks, retry them if they fail, and monitor how many of them were processed.

You would have to create queues for tasks with different priorities, or for those suitable for different kinds of workers.

All of this can be greatly simplified by using Celery - an open-source, distributed task queue. It works like a charm after you configure it - as long as you do so correctly.

How Celery is built

Celery consists of:

  • Tasks, as defined in your app
  • A broker that routes tasks to workers and queues
  • Workers doing the actual work
  • A storage backend

You can watch a more in-depth introduction to Celery here or jump straight to Celery's getting started guide.
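To make those pieces concrete, here is a minimal, standalone sketch (not part of the project we are about to build); the broker and backend URLs are placeholders for whatever RabbitMQ and Redis instances you have available, and a worker started with celery worker -A minimal_celery must be running for the last line to return.

# minimal_celery.py - a standalone illustration, independent of Django.
from celery import Celery

app = Celery(
    'minimal',
    broker='amqp://guest:guest@localhost//',   # RabbitMQ routes tasks to workers
    backend='redis://localhost:6379/0',        # Redis stores task results
)

@app.task
def add(x, y):
    """A trivial task executed by a worker process."""
    return x + y

if __name__ == '__main__':
    # Schedule the task through the broker and wait for a worker's result.
    print(add.delay(2, 3).get(timeout=10))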

Your setup

Start with the standard Django project structure. It can be created with django-admin, by running this in a shell:

$ django-admin startproject myproject

This creates the following project structure:

.
└── myproject
    ├── manage.py
    └── myproject
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py

At the end of this tutorial, it'll look like this:

.
├── Dockerfile
├── docker-compose.yml
├── myproject
│   ├── manage.py
│   └── myproject
│       ├── celeryconf.py
│       ├── __init__.py
│       ├── models.py
│       ├── serializers.py
│       ├── settings.py
│       ├── tasks.py
│       ├── urls.py
│       ├── views.py
│       └── wsgi.py
├── requirements.txt
├── run_celery.sh
└── run_web.sh

Creating containers

Since we are working with Docker 1.12, we need a proper Dockerfile to specify how our image will be built.

Custom container

Dockerfile

# use base python image with python 2.7
FROM python:2.7

# add requirements.txt to the image
ADD requirements.txt /app/requirements.txt

# set working directory to /app/
WORKDIR /app/

# install python dependencies
RUN pip install -r requirements.txt

# create unprivileged user
RUN adduser --disabled-password --gecos '' myuser

Our dependencies are:

requirements.txt

Django==1.9.8  
celery==3.1.20  
djangorestframework==3.3.1  
psycopg2==2.5.3  
redis==2.10.5  

I've frozen the versions of the dependencies to make sure that you will have a working setup. If you wish, you can update any of them, but it's not guaranteed to work.

Choosing images for services

Now we only need to set up RabbitMQ, PostgreSQL, and Redis. Since Docker introduced its official library, I use its official images whenever possible. However, even these can be broken sometimes. When that happens, you'll have to use something else.

Here are images I tested and selected for this project:

  • Official PostgreSQL image
  • Official Redis image
  • Official RabbitMQ image

Using docker-compose to set up a multi-container app

Now you'll use docker-compose to combine your own containers with the ones we chose in the last section.

docker-compose.yml

version: '2'

services:  
  # PostgreSQL database
  db:
    image: postgres:9.4
    hostname: db
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
    ports:
      - "5432:5432"

  # Redis
  redis:
    image: redis:2.8.19
    hostname: redis

  # RabbitMQ
  rabbit:
    hostname: rabbit
    image: rabbitmq:3.6.0
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=mypass
    ports:
      - "5672:5672"  # we forward this port because it''s useful for debugging
      - "15672:15672"  # here, we can access rabbitmq management plugin

  # Django web server
  web:
    build:
      context: .
      dockerfile: Dockerfile
    hostname: web
    command: ./run_web.sh
    volumes:
      - .:/app  # mount current directory inside container
    ports:
      - "8000:8000"
    # set up links so that web knows about db, rabbit and redis
    links:
      - db
      - rabbit
      - redis
    depends_on:
      - db

  # Celery worker
  worker:
    build:
      context: .
      dockerfile: Dockerfile
    command: ./run_celery.sh
    volumes:
      - .:/app
    links:
      - db
      - rabbit
      - redis
    depends_on:
      - rabbit

Configuring the web server and worker

You've probably noticed that both the worker and the web server run startup scripts. Here they are (make sure they're executable):

run_web.sh

#!/bin/sh

# wait for PSQL server to start
sleep 10

cd myproject  
# prepare init migration
su -m myuser -c "python manage.py makemigrations myproject"  
# migrate db, so we have the latest db schema
su -m myuser -c "python manage.py migrate"  
# start development server on public ip interface, on port 8000
su -m myuser -c "python manage.py runserver 0.0.0.0:8000"  

run_celery.sh

#!/bin/sh

# wait for RabbitMQ server to start
sleep 10

cd myproject  
# run the Celery worker for our project myproject with the Celery configuration stored in celeryconf.py
su -m myuser -c "celery worker -A myproject.celeryconf -Q default -n default@%h"  

The first script, run_web.sh, will migrate the database and start the Django development server on port 8000.
The second one, run_celery.sh, will start a Celery worker listening on the default queue.

At this stage, these scripts won't work as we'd like them to because we haven't yet configured them. Our app still doesn't know that we want to use PostgreSQL as the database, or where to find it (in a container somewhere). We also have to configure Redis and RabbitMQ.

But before we get to that, there are some useful Celery settings that will make your system perform better. Below are the complete settings of this Django app.

myproject/settings.py

import os

from kombu import Exchange, Queue


BASE_DIR = os.path.dirname(os.path.dirname(__file__))

# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'megg_yej86ln@xao^+)it4e&ueu#!4tl9p1h%2sjr7ey0)m25f'

# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []

# Application definition

INSTALLED_APPS = (
    'rest_framework',
    'myproject',
    'django.contrib.sites',
    'django.contrib.staticfiles',

    # required by Django 1.9
    'django.contrib.auth',
    'django.contrib.contenttypes',
)

MIDDLEWARE_CLASSES = (
)

REST_FRAMEWORK = {
    'DEFAULT_PERMISSION_CLASSES': ('rest_framework.permissions.AllowAny',),
    'PAGINATE_BY': 10
}

ROOT_URLCONF = 'myproject.urls'

WSGI_APPLICATION = 'myproject.wsgi.application'

# Localization and timezone settings

TIME_ZONE = 'UTC'
USE_TZ = True

CELERY_ENABLE_UTC = True
CELERY_TIMEZONE = "UTC"

LANGUAGE_CODE = 'en-us'
USE_I18N = True
USE_L10N = True

# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.7/howto/static-files/
STATIC_URL = '/static/'

# Database configuration
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': os.environ.get('DB_ENV_DB', 'postgres'),
        'USER': os.environ.get('DB_ENV_POSTGRES_USER', 'postgres'),
        'PASSWORD': os.environ.get('DB_ENV_POSTGRES_PASSWORD', 'postgres'),
        'HOST': os.environ.get('DB_PORT_5432_TCP_ADDR', 'db'),
        'PORT': os.environ.get('DB_PORT_5432_TCP_PORT', ''),
    },
}

# Redis

REDIS_PORT = 6379
REDIS_DB = 0
REDIS_HOST = os.environ.get('REDIS_PORT_6379_TCP_ADDR', 'redis')

RABBIT_HOSTNAME = os.environ.get('RABBIT_PORT_5672_TCP', 'rabbit')

if RABBIT_HOSTNAME.startswith('tcp://'):
    RABBIT_HOSTNAME = RABBIT_HOSTNAME.split('//')[1]

BROKER_URL = os.environ.get('BROKER_URL', '')
if not BROKER_URL:
    BROKER_URL = 'amqp://{user}:{password}@{hostname}/{vhost}/'.format(
        user=os.environ.get('RABBIT_ENV_USER', 'admin'),
        password=os.environ.get('RABBIT_ENV_RABBITMQ_PASS', 'mypass'),
        hostname=RABBIT_HOSTNAME,
        vhost=os.environ.get('RABBIT_ENV_VHOST', ''))

# We don't want dead connections piling up on RabbitMQ, so we negotiate them away using heartbeats
BROKER_HEARTBEAT = '?heartbeat=30'
if not BROKER_URL.endswith(BROKER_HEARTBEAT):
    BROKER_URL += BROKER_HEARTBEAT

BROKER_POOL_LIMIT = 1
BROKER_CONNECTION_TIMEOUT = 10

# Celery configuration

# configure queues, currently we have only one
CELERY_DEFAULT_QUEUE = 'default'
CELERY_QUEUES = (
    Queue('default', Exchange('default'), routing_key='default'),
)

# Sensible settings for celery
CELERY_ALWAYS_EAGER = False
CELERY_ACKS_LATE = True
CELERY_TASK_PUBLISH_RETRY = True
CELERY_DISABLE_RATE_LIMITS = False

# By default we will ignore results
# If you want to see results and try out tasks interactively, change it to False
# Or change this setting on the task level
CELERY_IGNORE_RESULT = True
CELERY_SEND_TASK_ERROR_EMAILS = False
CELERY_TASK_RESULT_EXPIRES = 600

# Set Redis as the Celery result backend
CELERY_RESULT_BACKEND = 'redis://%s:%d/%d' % (REDIS_HOST, REDIS_PORT, REDIS_DB)
CELERY_REDIS_MAX_CONNECTIONS = 1

# Don't use pickle as the serializer, JSON is much safer
CELERY_TASK_SERIALIZER = "json"
CELERY_ACCEPT_CONTENT = ['application/json']

CELERYD_HIJACK_ROOT_LOGGER = False
CELERYD_PREFETCH_MULTIPLIER = 1
CELERYD_MAX_TASKS_PER_CHILD = 1000

Those settings will configure the Django app so that it can discover the PostgreSQL database, the Redis result backend, and the RabbitMQ broker.

Now, it's time to connect Celery to the app. Create a file called celeryconf.py and paste in this code:

myproject/celeryconf.py

import os

from celery import Celery  
from django.conf import settings

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

app = Celery('myproject')

CELERY_TIMEZONE = 'UTC'

app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

That should be enough to connect Celery to our app, so the run_X scripts will work. You can read more about first steps with Django and Celery here.
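If you want to sanity-check the wiring, one way (a sketch, assuming the containers from docker-compose are up) is to open a Django shell inside the web container and inspect the Celery app:

# For example: docker-compose run web su -m myuser -c "python myproject/manage.py shell"
from myproject.celeryconf import app

print(app.conf.BROKER_URL)             # should point at the rabbit container
print(app.conf.CELERY_RESULT_BACKEND)  # should point at the redis container

# Ask running workers to respond; returns an empty list if no worker is up yet.
print(app.control.ping(timeout=1.0))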

Defining tasks

Celery looks for tasks inside the tasks.py file in each Django app. Usually, tasks are created either with a decorator or by subclassing the Celery Task class.

Here's how you can create a task using the decorator:

@app.task
def power(n):  
    """Return 2 to the n''th power"""
    return 2 ** n

And here's how you can create a task by inheriting from the Celery Task class:

class PowerTask(app.Task):
    def run(self, n):
        """Return 2 to the n'th power"""
        return 2 ** n

Both are fine and good for slightly different use cases.
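However you define them, tasks are not called directly - they are sent to the broker and picked up by a worker. A small usage sketch for the decorator-based task above (it assumes a worker is running; fetching the value with .get() also requires a result backend and CELERY_IGNORE_RESULT = False):

# .delay() pushes a message to the broker and returns an AsyncResult immediately.
result = power.delay(10)

# apply_async gives more control, e.g. choosing the queue or delaying execution.
result = power.apply_async(args=(10,), queue='default', countdown=5)

# With results enabled you can block until a worker has computed the value.
print(result.get(timeout=10))  # -> 1024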

myproject/tasks.py

from functools import wraps

from myproject.celeryconf import app  
from .models import Job

# decorator to avoid code duplication

def update_job(fn):  
    """Decorator that will update Job with result of the function"""

    # wraps will make the name and docstring of fn available for introspection
    @wraps(fn)
    def wrapper(job_id, *args, **kwargs):
        job = Job.objects.get(id=job_id)
        job.status = 'started'
        job.save()
        try:
            # execute the function fn
            result = fn(*args, **kwargs)
            job.result = result
            job.status = 'finished'
            job.save()
        except Exception:
            # record the failure on the Job instead of propagating it
            job.result = None
            job.status = 'failed'
            job.save()
    return wrapper


# two simple numerical tasks that can be computationally intensive

@app.task
@update_job
def power(n):  
    """Return 2 to the n''th power"""
    return 2 ** n


@app.task
@update_job
def fib(n):
    """Return the n'th Fibonacci number."""
    if n < 0:
        raise ValueError("Fibonacci numbers are only defined for n >= 0.")
    return _fib(n)


def _fib(n):  
    if n == 0 or n == 1:
        return n
    else:
        return _fib(n - 1) + _fib(n - 2)

# mapping from names to tasks

TASK_MAPPING = {
    'power': power,
    'fibonacci': fib
}
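The TASK_MAPPING dictionary lets other code schedule work by name; the Job model below uses exactly this pattern. A quick sketch of such a dispatch (assuming a Job row with id 1 already exists):

from myproject.tasks import TASK_MAPPING

# Look the task up by its public name and schedule it through the broker.
# Because of the update_job wrapper, the first argument is the Job id and the
# remaining keyword arguments are forwarded to the wrapped function.
task = TASK_MAPPING['fibonacci']
task.delay(job_id=1, n=25)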

Building an API for scheduling tasks

If you have tasks in your system, how do you run them? In this section, you'll create a user interface for job scheduling. In a backend application, the API will be your user interface. Let's use the Django REST Framework for your API.

To make it as simple as possible, your app will have one model and only one ViewSet (endpoint with many HTTP methods).

Create your model, called Job, in myproject/models.py.

from django.db import models


class Job(models.Model):  
    """Class describing a computational job"""

    # currently, available types of job are:
    TYPES = (
        ('fibonacci', 'fibonacci'),
        ('power', 'power'),
    )

    # list of statuses that job can have
    STATUSES = (
        ('pending', 'pending'),
        ('started', 'started'),
        ('finished', 'finished'),
        ('failed', 'failed'),
    )

    type = models.CharField(choices=TYPES, max_length=20)
    status = models.CharField(choices=STATUSES, max_length=20)

    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
    argument = models.PositiveIntegerField()
    result = models.IntegerField(null=True)

    def save(self, *args, **kwargs):
        """Save model and if job is in pending state, schedule it"""
        super(Job, self).save(*args, **kwargs)
        if self.status == 'pending':
            from .tasks import TASK_MAPPING
            task = TASK_MAPPING[self.type]
            task.delay(job_id=self.id, n=self.argument)
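Because scheduling happens inside save(), creating a Job row is all it takes to enqueue work. A sketch from the Django shell (assuming migrations have been applied and a worker is running):

from myproject.models import Job

# Saving a pending job triggers task.delay(...) in Job.save()
job = Job.objects.create(type='fibonacci', status='pending', argument=25)

# A little later a worker should have updated the row.
job.refresh_from_db()
print(job.status, job.result)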

Then create a serializer, a view, and a URL configuration to access it.

myproject/serializers.py

from rest_framework import serializers

from .models import Job


class JobSerializer(serializers.HyperlinkedModelSerializer):  
    class Meta:
        model = Job
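The pinned DRF 3.3 still accepts a bare Meta like this, but newer Django REST Framework releases require an explicit fields declaration. If you upgrade the pinned versions, a sketch like the following (field names taken from the Job model above) keeps the serializer valid:

class JobSerializer(serializers.HyperlinkedModelSerializer):
    class Meta:
        model = Job
        # listing fields explicitly is mandatory in newer DRF releases
        fields = ('url', 'type', 'status', 'created_at', 'updated_at',
                  'argument', 'result')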

myproject/views.py

from rest_framework import mixins, viewsets

from .models import Job  
from .serializers import JobSerializer


class JobViewSet(mixins.CreateModelMixin,  
                 mixins.ListModelMixin,
                 mixins.RetrieveModelMixin,
                 viewsets.GenericViewSet):
    """
    API endpoint that allows jobs to be viewed or created.
    """
    queryset = Job.objects.all()
    serializer_class = JobSerializer

myproject/urls.py

from django.conf.urls import url, include  
from rest_framework import routers

from myproject import views


router = routers.DefaultRouter()
# register job endpoint in the router
router.register(r'jobs', views.JobViewSet)

# Wire up our API using automatic URL routing.
# Additionally, we include login URLs for the browsable API.
urlpatterns = [
    url(r'^', include(router.urls)),
    url(r'^api-auth/', include('rest_framework.urls', namespace='rest_framework'))
]

For completeness, there is also myproject/wsgi.py, defining WSGI config for the project:

import os  
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

from django.core.wsgi import get_wsgi_application  
application = get_wsgi_application()  

and manage.py:

#!/usr/bin/env python
import os  
import sys

if __name__ == "__main__":  
    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "myproject.settings")

    from django.core.management import execute_from_command_line

    execute_from_command_line(sys.argv)

Leave __init__.py empty.

That's all. Uh... lots of code. Luckily, everything is on GitHub, so you can just fork it.

Running the setup

Since everything is run from Docker Compose, make sure you have both Docker and Docker Compose installed before you try to start the app:

$ cd /path/to/myproject/where/is/docker-compose.yml
$ docker-compose build
$ docker-compose up

The last command will start five different containers, so just start using your API and have some fun with Celery in the meantime.

Accessing the API

Navigate in your browser to 127.0.0.1:8000 to browse your API and schedule some jobs.
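Besides the browsable API, you can schedule jobs programmatically. A sketch using the requests library (it assumes the docker-compose stack is running and that requests is installed on your host; the field values match the Job model defined earlier):

import time

import requests

API = 'http://127.0.0.1:8000/jobs/'

# Creating a job makes Job.save() enqueue the matching Celery task.
response = requests.post(API, data={'type': 'power', 'status': 'pending', 'argument': 10})
job_url = response.json()['url']

# Poll until a worker has finished (or failed) the job.
for _ in range(10):
    job = requests.get(job_url).json()
    if job['status'] in ('finished', 'failed'):
        break
    time.sleep(1)
print(job['status'], job['result'])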

Scale things out

Currently, we have only one instance of each container. We can get information about our group of containers with the docker-compose ps command.

$ docker-compose ps
           Name                          Command               State                                        Ports                                      
------------------------------------------------------------------------------------------------------------------------------------------------------
dockerdjangocelery_db_1       /docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp  
dockerdjangocelery_rabbit_1   /docker-entrypoint.sh rabb ...   Up      0.0.0.0:15672->15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp  
dockerdjangocelery_redis_1    /entrypoint.sh redis-server      Up      6379/tcp  
dockerdjangocelery_web_1      ./run_web.sh                     Up      0.0.0.0:8000->8000/tcp  
dockerdjangocelery_worker_1   ./run_celery.sh                  Up  

Scaling out a container with docker-compose is extremely easy. Just use the docker-compose scale command with the service name and the desired number of instances:

$ docker-compose scale worker=5
Creating and starting dockerdjangocelery_worker_2 ... done  
Creating and starting dockerdjangocelery_worker_3 ... done  
Creating and starting dockerdjangocelery_worker_4 ... done  
Creating and starting dockerdjangocelery_worker_5 ... done  

The output says that docker-compose just created four additional worker containers for us. We can double-check it with the docker-compose ps command again:

$ docker-compose ps
           Name                          Command               State                                        Ports                                      
------------------------------------------------------------------------------------------------------------------------------------------------------
dockerdjangocelery_db_1       /docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp  
dockerdjangocelery_rabbit_1   /docker-entrypoint.sh rabb ...   Up      0.0.0.0:15672->15672/tcp, 25672/tcp, 4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp  
dockerdjangocelery_redis_1    /entrypoint.sh redis-server      Up      6379/tcp  
dockerdjangocelery_web_1      ./run_web.sh                     Up      0.0.0.0:8000->8000/tcp  
dockerdjangocelery_worker_1   ./run_celery.sh                  Up  
dockerdjangocelery_worker_2   ./run_celery.sh                  Up  
dockerdjangocelery_worker_3   ./run_celery.sh                  Up  
dockerdjangocelery_worker_4   ./run_celery.sh                  Up  
dockerdjangocelery_worker_5   ./run_celery.sh                  Up  

You'll see five Celery workers there. Nice!
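You can also confirm from inside the application that all five workers are alive. A sketch using Celery's inspection API (run it from a Django shell in the web container; hostnames like default@<container id> come from the -n default@%h option in run_celery.sh):

from myproject.celeryconf import app

# Each running worker answers the ping; with worker=5 you should see five replies.
replies = app.control.ping(timeout=2.0)
print(len(replies), replies)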

Summary

Congrats! You just married Django with Celery to build a distributed asynchronous computation system. I think you'll agree it was pretty easy to build an API, and even easier to scale workers for it! However, life isn't always so nice to us, and sometimes we have to troubleshoot.

Contribution

Original article written by Justyna Ilczuk, updated by Michał Kobus.


 

c# – container.RegisterWebApiControllers(GlobalConfiguration.Configuration) causes InvalidOperationException

In my integration tests, I use the same SimpleInjector Container that is built in the Web API project under test.

But this line in the composition root class:

container.RegisterWebApiControllers(GlobalConfiguration.Configuration);

causes an exception:

System.TypeInitializationException : The type initializer for 'MyProject.Api.Test.Integration.HttpClientFactory' threw an exception.
---- System.InvalidOperationException : This method cannot be called during the application's pre-start initialization phase.
Result StackTrace:
at MyProject.Api.Test.Integration.HttpClientFactory.Create()
   at MyProject.Api.Test.Integration.Controllers.ProductControllerIntegrationTest.<GetProductBarcode_Should_Return_Status_BadRequest_When_Barcode_Is_Empty>d__0.MoveNext() in d:\Projects\My\MyProject.Api.Test.Integration\Controllers\ProductControllerIntegrationTest.cs:line 26
----- Inner Stack Trace -----
   at System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled()
   at System.Web.Compilation.BuildManager.GetReferencedAssemblies()
   at System.Web.Http.WebHost.WebHostAssembliesResolver.System.Web.Http.Dispatcher.IAssembliesResolver.GetAssemblies()
   at System.Web.Http.Dispatcher.DefaultHttpControllerTypeResolver.GetControllerTypes(IAssembliesResolver assembliesResolver)
   at System.Web.Http.WebHost.WebHostHttpControllerTypeResolver.GetControllerTypes(IAssembliesResolver assembliesResolver)
   at SimpleInjector.SimpleInjectorWebApiExtensions.GetControllerTypesFromConfiguration(HttpConfiguration configuration)
   at SimpleInjector.SimpleInjectorWebApiExtensions.RegisterWebApiControllers(Container container, HttpConfiguration configuration)
   at MyProject.Api.ContainerConfig.RegisterTypes(Container container) in d:\Projects\My\MyProject.Api\App_Start\ContainerConfig.cs:line 128
   at MyProject.Api.ContainerConfig.CreateWebApiContainer() in d:\Projects\My\MyProject.Api\App_Start\ContainerConfig.cs:line 63
   at MyProject.Api.Test.Integration.HttpClientFactory..cctor() in d:\Projects\My\MyProject.Api.Test.Integration\HttpClientFactory.cs:line 17

With that line commented out, everything works fine - both the web application itself and the tests.

So the questions are:

  • What is the cause of the exception?
  • (Is this method call really needed?)

Here is the code of HttpClientFactory (a helper class for creating an HttpClient with the proper headers, such as an API key or authorization):

internal static class HttpClientFactory
{
    private static readonly Container _container = ContainerConfig.CreateWebApiContainer();

    public static HttpClient Create()
    {
        var client = new HttpClient { BaseAddress = GetUrl() };
        //...
        return client;
    }
}

Solution

If we look closely at the stack trace, we can see exactly what is happening here. The RegisterWebApiControllers extension method calls GetControllerTypes on the IHttpControllerTypeResolver instance obtained from the HttpConfiguration, passing in the IAssembliesResolver that is also retrieved from the configuration. The GetControllerTypes call on the WebHostHttpControllerTypeResolver invokes DefaultHttpControllerTypeResolver's GetControllerTypes, which eventually calls GetReferencedAssemblies on the System.Web.Compilation.BuildManager class.

However, System.Web.Compilation.BuildManager cannot be called early in the ASP.NET pipeline, nor outside the context of ASP.NET. Since you are running tests, BuildManager throws the exception you are seeing.

So the solution (or 'trick') here is to replace the default IAssembliesResolver when unit testing. Such a resolver could look like this:

public class TestAssembliesResolver : IAssembliesResolver
{
    public ICollection<Assembly> GetAssemblies()
    {
        return AppDomain.CurrentDomain.GetAssemblies();
    }
}

[TestMethod]
public void TestMethod1()
{
    // Replace the original IAssembliesResolver.
    GlobalConfiguration.Configuration.Services.Replace(typeof(IAssembliesResolver), new TestAssembliesResolver());

    var container = SimpleInjectorWebApiInitializer.BuildContainer();

    container.Verify();
}

It is a bit unfortunate that you have to deal with this, especially since Simple Injector is designed to be testable. We seem to have overlooked this point by integrating the RegisterWebApiControllers extension method so deeply with Web API. We will have to take a step back and think about how to make it easier to verify the Web API configuration from inside a unit test.

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Starting Docker fails with the error:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

systemctl status docker

Fix:
vim /etc/docker/daemon.json

{
 "registry-mirrors": ["https://registry.docker-cn.com"]
}

systemctl restart docker.service
Docker now starts normally.

celery worker startup error: django.core.exceptions.ImproperlyConfigured: Requested setting INSTALLED_APPS, bu...

The problem is actually quite simple, but it took me an entire afternoon to solve. First I browsed all kinds of blog posts, with no luck; then I went to the official Celery documentation, still nothing, and I was close to despair. Finally I read the code carefully and found the problem (see below). Such is the efficiency of a self-taught programmer...

Below is the code from my task.py:

# Using Celery
from django.conf import settings
from celery import Celery
from django.template import loader, RequestContext
from goods.models import GoodsType, IndexGoodsBanner, IndexPromotionBanner, IndexTypeGoodsBanner
import os


# Add these lines on the task-worker side
import os
import django
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dailyfresh.settings")
django.setup()

# Create an instance of the Celery class
app = Celery('celery_tasks.tasks', broker='redis://127.0.0.1:6379/1')

# Define the task function
@app.task
def generate_static_index_html():
    """Generate the static index page"""
    # Get the product categories
    types = GoodsType.objects.all()

    # Get the index page carousel banners
    goods_banners = IndexGoodsBanner.objects.all().order_by('index')

    # Get the index page promotion banners
    promotion_banners = IndexPromotionBanner.objects.all().order_by('index')

    # Get the per-category product display information for the index page
    for type in types:  # GoodsType
        # Get the image banners for this category on the index page
        image_banners = IndexTypeGoodsBanner.objects.filter(type=type, display_type=1).order_by('index')
        # Get the title banners for this category on the index page
        title_banners = IndexTypeGoodsBanner.objects.filter(type=type, display_type=0).order_by('index')

        # Dynamically attach the image and title banners to the type object
        type.image_banners = image_banners
        type.title_banners = title_banners

    # Build the template context
    context = {'types': types,
               'goods_banners': goods_banners,
               'promotion_banners': promotion_banners}

    # Use the template
    # 1. Load the template file and return a template object
    temp = loader.get_template('static_index.html')
    # 2. Render the template
    static_index_html = temp.render(context)

    # Write the static file for the index page
    save_path = os.path.join(settings.BASE_DIR, 'static/index.html')
    with open(save_path, 'w') as f:
        f.write(static_index_html)

When starting the worker with celery -A celery_tasks.tasks worker -l info, the error shown in the title appears.

Cause of the error:

from goods.models import GoodsType, IndexGoodsBanner, IndexPromotionBanner, IndexTypeGoodsBanner

This line has to come after the environment setup; otherwise the goods models are imported before Django is configured and the interpreter cannot resolve the goods module. The corrected code is as follows:

# Using Celery
from django.conf import settings
from celery import Celery
from django.template import loader, RequestContext

# Add these lines on the task-worker side
import os
import django
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dailyfresh.settings")
django.setup()

from goods.models import GoodsType, IndexGoodsBanner, IndexPromotionBanner, IndexTypeGoodsBanner

# Create an instance of the Celery class
app = Celery('celery_tasks.tasks', broker='redis://127.0.0.1:6379/1')

# Define the task function
@app.task
def generate_static_index_html():
    """Generate the static index page"""
    # Get the product categories
    types = GoodsType.objects.all()

    # Get the index page carousel banners
    goods_banners = IndexGoodsBanner.objects.all().order_by('index')

    # Get the index page promotion banners
    promotion_banners = IndexPromotionBanner.objects.all().order_by('index')

    # Get the per-category product display information for the index page
    for type in types:  # GoodsType
        # Get the image banners for this category on the index page
        image_banners = IndexTypeGoodsBanner.objects.filter(type=type, display_type=1).order_by('index')
        # Get the title banners for this category on the index page
        title_banners = IndexTypeGoodsBanner.objects.filter(type=type, display_type=0).order_by('index')

        # Dynamically attach the image and title banners to the type object
        type.image_banners = image_banners
        type.title_banners = title_banners

    # Build the template context
    context = {'types': types,
               'goods_banners': goods_banners,
               'promotion_banners': promotion_banners}

    # Use the template
    # 1. Load the template file and return a template object
    temp = loader.get_template('static_index.html')
    # 2. Render the template
    static_index_html = temp.render(context)

    # Write the static file for the index page
    save_path = os.path.join(settings.BASE_DIR, 'static/index.html')
    with open(save_path, 'w') as f:
        f.write(static_index_html)

Now celery -A celery_tasks.tasks worker -l info starts the worker normally.

Django-Docker containerized deployment: Django + Docker + MySQL + Nginx + Gunicorn cloud deployment

In the previous chapter we added a MySQL database to the Docker setup, but the development server we used, while convenient, has poor performance and low reliability and cannot be used in production.

So in this chapter we will put together a Docker + Django + MySQL + Nginx + Gunicorn project and complete the final server deployment.

Docker beginners who jumped straight into this chapter are advised to go back to the first chapter of the tutorial, otherwise some of the content will be hard to follow. Readers who have no idea about Django deployment at all may first want to read my post on deploying a Django project to a server.

Docker-compose

Before deploying to a server, let's first try a local deployment.

Building on the previous chapter, keep modifying the docker-compose.yml configuration:

version: "3"

services:
  app:
    restart: always
    build: .
    command: bash -c "python3 manage.py collectstatic --no-input && python3 manage.py migrate && gunicorn --timeout=30 --workers=4 --bind :8000 django_app.wsgi:application"
    volumes:
      - .:/code
      - static-volume:/code/collected_static
    expose:
      - "8000"
    depends_on:
      - db
    networks:
      - web_network
      - db_network
  db:
    image: mysql:5.7
    volumes:
      - "./mysql:/var/lib/mysql"
    ports:
      - "3306:3306"
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=mypassword
      - MYSQL_DATABASE=django_app
    networks:
      - db_network
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - "8000:8000"
    volumes:
      - static-volume:/code/collected_static
      - ./config/nginx:/etc/nginx/conf.d
    depends_on:
      - app
    networks:
      - web_network
      
networks:
  web_network:
    driver: bridge
  db_network:
    driver: bridge
    
volumes:
  static-volume:

It looks a bit complex. Here is the overall idea:

  • Three containers are defined: app, db and nginx. The containers communicate with each other over the defined ports.
  • Two networks are defined: web_network and db_network. Only containers on the same network can talk to each other. Different networks are isolated; even if they use the same port, they cannot communicate.
  • One volume is defined: static-volume. Volumes are very well suited to sharing the same data between several containers, and you can see that both app and nginx use it.
  • Both expose and ports can expose a container's ports; the difference is that expose exposes the port only to other containers, while ports exposes it to other containers and to the host.

This may still be hard to digest, so let's break it down further.

Networks

Docker lets you define the network each container works on, and only containers on the same network can communicate. You can see that the nginx container is on the web_network network, while the db container is on the db_network network, so the two cannot talk to each other - and in fact they do not need to. The app container sits on both web_network and db_network, acting as a bridge that connects all three containers.

Defining networks isolates the containers' network environments and also lets an operator see the logical relationships of the network at a glance.

Volumes

The mapping of a host directory to a container directory that we saw earlier is actually called a bind mount; the newly introduced static-volume is what is properly called a volume. It is used like this: static-volume:/code/collected_static, where the part after the colon is still a directory inside the container, but the part before the colon is not a host directory - it is just the name of the volume. In essence a volume also maps a host location into the container, but the volume is managed by Docker, and you do not even need to know where on the host it is stored.

Compared with bind mounts, the advantage of volumes is that, because Docker manages them uniformly, there are no mounting problems caused by insufficient permissions, and you do not have to specify different paths on different servers; the drawback is that they are not well suited to mapping a single configuration file.

Like a bind mount, a volume has a lifecycle independent of the container: after the container is deleted, the volume still exists. The next time you build the image, just refer to the volume by name and keep using it.

Since Docker manages volumes, deleting one is also very easy. I won't tell you the command here - do not get trigger-happy in production. Backing up your data regularly is a good habit.

Volumes have one very important property: if the volume is empty at startup, everything in the container directory it maps is copied into the volume. In other words, once the volume has been initialized, the container's original collected_static directory is no longer used; newly added files exist only in the volume and are not present in the container.

In practice, persistent storage of static files (and media files) can be achieved with either bind mounts or volumes; which one to use is a matter of taste, so choose for yourself.

For reasons of space, this tutorial does not cover media files, but their setup is exactly the same as for static files; a sketch is shown below.
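As an illustration only, a minimal sketch of the analogous media settings might look like this; the directory name media and the extra volume mapping mentioned in the comments are assumptions, not part of the original tutorial.

# django_app/settings.py - hypothetical media configuration mirroring the static setup.
# You would also add a matching volume (e.g. media-volume:/code/media) to the app and
# nginx services in docker-compose.yml, plus a "location /media/" block in the Nginx config.
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')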

Other configuration

First modify the Nginx configuration file, i.e. config/nginx/django_app.conf, which is mapped into the nginx container:

upstream app {
  ip_hash;
  server app:8000;
}

server {
  listen 8000;
  server_name localhost;
  
  location /static/ {
    autoindex on;
    alias /code/collected_static/;
  }
  
  location / {
    proxy_pass http://app/;
  }
}

With this configuration, Nginx listens on the container's port 8000 and forwards the requests it receives to the app container (except for static file requests).

Add the gunicorn library to the requirements.txt file:

django==2.2
mysqlclient==1.3.14
gunicorn==19.9.0

Finally, modify django_app/settings.py with the allowed hosts and the static file directory configuration:

...

ALLOWED_HOSTS = ['*']

...

STATIC_ROOT = os.path.join(BASE_DIR, 'collected_static')
STATIC_URL = '/static/'

That completes the configuration.

This tutorial uses an empty Django project and, for demonstration purposes, DEBUG was not set to False. If you are testing with your own project, remember to set it to False.

Testing

Testing takes just one command:

$ docker-compose up

Visit 127.0.0.1:8000 in your browser and you will see the familiar Django rocket again.

As in the previous chapter, the first time you start the containers you may get an error saying MySQL cannot be reached. This is because, although the db container has started, its initialization has not yet finished; restart the containers and everything works. If it still fails after several restarts, the cause is something else, so check carefully.
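If that race bothers you, one common workaround (a sketch, not part of the original tutorial; the file name wait_for_db.py and the service name db are assumptions matching the compose file above) is to wait for the database port before running migrations, for example by calling a small helper from the app service's startup command:

# wait_for_db.py - hypothetical helper; run it before "manage.py migrate".
# It simply retries a TCP connection to the db service until MySQL accepts it.
import socket
import time

HOST, PORT = 'db', 3306  # service name and port from docker-compose.yml

for attempt in range(30):
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            print('database is accepting connections')
            break
    except OSError:
        print('waiting for database...')
        time.sleep(2)
else:
    raise SystemExit('database never became available')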

Once the local deployment succeeds, the next step is server deployment.

Server deployment

With the local deployment experience behind you, server deployment becomes very, very simple.

As before, install Docker, Docker Compose, Python 3 and the other tools on the server, and clone the project onto the server with Git.

Next, update settings.py, config/nginx/django_app.conf and requirements.txt in the relevant places following the steps above, and copy docker-compose.yml and the Dockerfile to the server.

Since HTTP requests default to port 80, a small change to docker-compose.yml is needed so that the site can accept requests from the public internet:

version: "3"

services:
  app:
    ...
    command: bash -c "... your_project_name.wsgi:application"  # change to your project name
    ...
  db:
    ...
  nginx:
    ...
    ports:
      - "80:8000"  # 监听 80 端口
    ...
      
networks:
  ...
    
volumes:
  ...

This changes the project name that Gunicorn binds to, and makes the host listen on port 80, the default port for public HTTP traffic.

You also need to modify config/nginx/django_app.conf:

upstream your_domain_name {
  ip_hash;
  server app:8000;
}

server {
  ...
  
  location / {
    proxy_pass http://your_domain_name/;
  }
}

This change is mainly to accommodate the callback addresses of various third-party logins (without it, GitHub and Weibo third-party logins will fail). If you have no such requirement, you can leave it unchanged. For example, the author's own site is www.dusaiphoto.com, so your_domain_name here would be changed to www.dusaiphoto.com.

Finally, remember to fix the DEBUG setting in settings.py:

# DEBUG = True  # comment this out
DEBUG = False

And that's it! Build the image and start the containers:

 docker-compose up

You can now access your site normally in the browser.

Summary

You can now deploy a containerized Django project to production - congratulations!

If this tutorial helped you, please give it a star on GitHub, and feel free to read my Django blog-building tutorial as well.

Old friends, see you in the next tutorial!


  • If you have questions, please leave a comment on Dusai's personal website and I will reply as soon as possible.
  • Tutorial sample code: django-docker-tutorial
  • Or email me directly: dusaiphoto@foxmail.com

