The new feature is a "storage" for jobs, so jobs can be created with multiple "iterations".
It works like this:
1. In the first iteration, bounties are submitted normally. All bounty submissions have a "storage" value attached - this is basically data that the work author wants to "keep". The storage is filled from within the Elastic PL program.
2. When the maximum number of bounties is reached, the job moves into iteration 2 (then 3, 4, ...), where the job can reuse the "storage" from earlier iterations.
3. The job is closed when ITERATIONS * MAX_BOUNTIES bounties have been reached.
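The lifecycle above can be sketched roughly as follows. This is an illustrative model only, not the actual implementation: the `Job` class, the `submit_bounty` and `prior_storage` names, and the constants are all made up for the sketch.

```python
MAX_BOUNTIES = 10   # assumed per-iteration bounty cap
ITERATIONS = 3      # assumed number of iterations per job

class Job:
    """Toy model of a job with per-iteration bounty storage."""

    def __init__(self):
        self.iteration = 1
        self.storage = {}      # iteration number -> list of stored values
        self.closed = False

    def submit_bounty(self, storage_value):
        """Record a bounty submission together with its attached storage value."""
        if self.closed:
            raise RuntimeError("job is closed")
        self.storage.setdefault(self.iteration, []).append(storage_value)
        # Once the cap is reached, either advance to the next iteration
        # or close the job after the final one.
        if len(self.storage[self.iteration]) == MAX_BOUNTIES:
            if self.iteration == ITERATIONS:
                self.closed = True  # ITERATIONS * MAX_BOUNTIES reached
            else:
                self.iteration += 1

    def prior_storage(self):
        """Storage from earlier iterations, reusable by the current one."""
        return [v for it, vals in self.storage.items()
                if it < self.iteration for v in vals]

job = Job()
for i in range(MAX_BOUNTIES):
    job.submit_bounty(f"result-{i}")
# After 10 bounties the job is in iteration 2, with 10 reusable values.
```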
Why do we need this? For Simulated Annealing, Genetic Algorithms, and any other optimization technique that is iteration-based and needs information from the prior iteration.
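To illustrate why prior-iteration data matters, here is a toy simulated-annealing example where a second round resumes from the result stored by the first instead of restarting from scratch. The `anneal` function and the objective are purely illustrative, not anything from the actual Elastic PL programs.

```python
import math
import random

def anneal(start, steps=200, temp=1.0, cooling=0.95, seed=0):
    """Minimize the toy objective f(x) = (x - 3)^2 starting from `start`."""
    rng = random.Random(seed)
    f = lambda x: (x - 3.0) ** 2
    x, best = start, start
    for _ in range(steps):
        cand = x + rng.uniform(-1.0, 1.0)
        # Accept improvements always; accept worse moves with a
        # temperature-dependent probability (standard Metropolis rule).
        if f(cand) < f(x) or rng.random() < math.exp((f(x) - f(cand)) / temp):
            x = cand
        if f(x) < f(best):
            best = x
        temp *= cooling
    return best

# Iteration 1: start from scratch; the result is the "storage" value.
stored = anneal(start=-50.0)
# Iteration 2: a new worker seeds itself from the stored result,
# so progress carries over instead of being thrown away.
refined = anneal(start=stored, seed=1)
```

Without the storage mechanism, every bounty would have to start from scratch, exactly the situation the second call avoids.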
I am not sure whether the 10-bounty limit is too low at this point. I will finish the implementation tomorrow, and then we can test it!
Great update! Also, a very thoughtful piece of functionality that really opens up usability.