Chunked transfer encoding is usually used when the content length is not known at the moment the sender starts transmitting the data. The receiver can process each chunk while the sender is still producing new ones.
This implies that the server is sending data the whole time. It doesn't make much sense to send "I'm still working | I'm still working | I'm still working | ..." in chunks, and as far as I know chunked transfer encoding is handled transparently by most application servers anyway: they switch to it automatically when the response grows bigger than a certain size.
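For illustration only, a chunked response on the wire looks roughly like this (each chunk is preceded by its size in hexadecimal, and a zero-sized chunk terminates the body; the body text is made up):

HTTP/1.1 200 OK
Transfer-Encoding: chunked
Content-Type: text/plain

7
chunk 1
7
chunk 2
0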
A common pattern for your use case looks like this:
The client triggers a bulk operation:
POST /batch-jobs HTTP/1.1
The server creates a resource which describes the status of the job and returns the URI in the Location header:
HTTP/1.1 202 Accepted
Location: /batch-jobs/stats/4711
The client checks this resource and receives a 200:
GET /batch-jobs/stats/4711 HTTP/1.1
The following example uses JSON, but you could also return plain text or add caching headers that tell the client how long to wait before the next poll.
HTTP/1.1 200 OK
Content-Type: application/json
{ "status" : "running", "nextAttempt" : "3000ms" }
Once the job is done, the server should answer with a 303 and the URI of the resource it has created:
HTTP/1.1 303 See Other
Location: /batch-jobs/4711
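Putting it together, a minimal client-side sketch of this flow could look like the following (Python with the requests library; the host name, the nextAttempt parsing, and the fallback interval are assumptions for the sketch, not part of any fixed API):

import time
import requests  # third-party HTTP client, used only to keep the sketch short

BASE = "http://api.example.com"  # hypothetical host

# 1. Trigger the bulk operation; the server answers 202 + Location.
resp = requests.post(f"{BASE}/batch-jobs")
status_url = BASE + resp.headers["Location"]  # e.g. /batch-jobs/stats/4711

# 2. Poll the status resource until the server redirects to the result.
while True:
    resp = requests.get(status_url, allow_redirects=False)
    if resp.status_code == 303:
        result_url = BASE + resp.headers["Location"]  # e.g. /batch-jobs/4711
        break
    status = resp.json()  # {"status": "running", "nextAttempt": "3000ms"}
    wait_ms = int(status.get("nextAttempt", "3000ms").rstrip("ms"))
    time.sleep(wait_ms / 1000)

# 3. Fetch the finished resource.
result = requests.get(result_url)
print(result.status_code, result.text)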