Work around Celery job failures on Python 3.6
The slightly odd hack that we used to integrate historical class-based
tasks with Celery's modern preferences (see e.g.
https://docs.celeryq.dev/en/stable/history/whatsnew-4.0.html#the-task-base-class-no-longer-automatically-register-tasks)
no longer works on Python 3.6.
`RunJob` and `CeleryRunJob` are partly "overriding general behaviour",
which makes sense to do in a subclass of `Task`, but they also implement
their own `run` methods. `celery_app.task` creates a new `Task`
instance using the decorated function as its `run` method, and so
persuading tasks created that way to use `CeleryRunJob.run` instead
took some effort.
However, with Python 3.6 this fails because the running task instance
doesn't seem to be an instance of `CeleryRunJob` according to `super()`.
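The failure mode can be sketched without Celery at all. The following is a rough simulation, not real Celery code: the stand-in `task` decorator mimics `celery_app.task` by building a fresh `Task` subclass around the decorated function, so the running instance is never a `CeleryRunJob`, and `super()`'s check that `self` is an instance of the named class then fails.

```python
class Task:
    """Stand-in for celery.Task (assumption: not the real class)."""


class CeleryRunJob(Task):
    def run(self, job_id):
        # In the real code this wanted to call super().run(job_id).
        return job_id


def task(run_func):
    # Rough sketch of what celery_app.task does: create a brand-new
    # Task subclass whose run() is the decorated function, and return
    # an instance of that subclass.
    new_cls = type(run_func.__name__, (Task,),
                   {"run": staticmethod(run_func)})
    return new_cls()


@task
def run_job(job_id):
    return job_id


# The running task is a Task but not a CeleryRunJob, so
# super(CeleryRunJob, self) rejects it at run time.
print(isinstance(run_job, Task))          # True
print(isinstance(run_job, CeleryRunJob))  # False
```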
Switching to `celery_app.register_task` avoids some unnecessary
complexity, but the core problem remains. For now, I've just switched
to the old-style way of calling the superclass using `RunJob.run(self,
job_id)`; inelegant though it is, it works on both Python 3.5 and 3.6.
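A minimal sketch of that calling convention (the class bodies here are invented; only the style of the superclass call mirrors the real change):

```python
class RunJob:
    def run(self, job_id):
        return "ran job %s" % job_id


class CeleryRunJob(RunJob):
    def run(self, job_id):
        # Old-style explicit superclass call: unlike super(), this
        # performs no check that self is an instance of CeleryRunJob,
        # so it still works when self is some dynamically generated
        # Task class instead.
        return RunJob.run(self, job_id)


print(CeleryRunJob().run(42))  # ran job 42
```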
I suspect that eventually we'll need to rethink how `lazr.jobrunner`'s
Celery integration works based on modern Celery best practices in order
to fix this properly.