Revision b0e599e82f502c2c46d0ffc4ce65e7a0062f5c8f

Committed on 04/09/2018 2:31 am by André R <andre.romcke@gmail.com>

Optimized content bulk loading for use with larger batch sizes (#2429)

Several issues were identified with the current bulk loading logic that slow it down on larger batches (see the sketch after this list):
1. The SQL becomes too big as we build up a large OR expression, which also means we repeat the language filtering per item.
2. Version filtering was not done on the join, meaning MySQL had to do a lot more work to filter out the correct items.
3. loadVersionedNameData was given a lot of duplicated entries, which slowed down that query considerably as well.
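To make points 1 and 2 concrete, below is a minimal, purely illustrative sketch of how the WHERE clause grows with the old per-item approach compared to flat filtering over the whole batch. The table aliases, column names and the per-item shape (id, versionNo, languages) are assumptions made for the example; this is not the actual gateway code.

```php
<?php
// Old style (sketch): one OR branch per requested item, so the statement
// grows with the batch size and the language filter is repeated per item.
function buildPerItemWhere(array $items): string
{
    $conditions = [];
    foreach ($items as $item) {
        $languageFilter = sprintf(
            "l.language_code IN ('%s')",
            implode("','", $item['languages'])
        );
        $conditions[] = sprintf(
            '(c.id = %d AND v.version = %d AND %s)',
            $item['id'],
            $item['versionNo'],
            $languageFilter
        );
    }

    return implode(' OR ', $conditions);
}

// New style (sketch): a single IN() over the id list, with the join
// restricted to the currently published version, so the generated SQL stays
// small and MySQL can filter versions cheaply on the join.
function buildFlatWhere(array $contentIds): string
{
    return sprintf(
        'c.id IN (%s) AND v.version = c.current_version',
        implode(', ', array_map('intval', $contentIds))
    );
}
```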

So, since:
- the version property on LoadStruct was not used, and its use was not advised anyway (to make sure you get the published version)
- the languages property causes a lot of SQL to be generated

it was decided / suggested to drop LoadStruct and instead do a bit dumber filtering, which allows the storage engine to handle larger batches much better.
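As a rough sketch of what that dumber filtering can look like, the snippet below assumes everything for the requested ids is loaded in one simple query, and only afterwards drops translations the caller did not ask for, in PHP. The row keys and the helper name are assumptions made for the example, not the actual Legacy gateway/mapper code.

```php
<?php
// Sketch: filter already-fetched rows by the requested language codes in PHP,
// instead of encoding a per-item language filter into the SQL itself.
// Each $row is assumed to carry a 'language_code' key from the batched query.
function filterRowsByTranslations(array $rows, array $translations): array
{
    if (empty($translations)) {
        // No translation filter requested: keep all languages.
        return $rows;
    }

    return array_values(array_filter(
        $rows,
        static function (array $row) use ($translations): bool {
            return in_array($row['language_code'], $translations, true);
        }
    ));
}
```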

Changes:

* Deprecated LoadStruct;

* Refactored eZ\Publish\SPI\Persistence\Content\Handler::loadContentList to use separate lists of Content Ids and translations (language codes) instead of the deprecated LoadStruct (see the usage sketch after this list);

* Improved performance of eZ\Publish\Core\Persistence\Legacy\Content\Gateway::loadContentList on large lists of Content Ids.
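For illustration, a minimal caller-side sketch of the refactored SPI method, assuming it now takes a plain list of Content ids plus a list of language codes and always loads the currently published version (the exact parameter order and return indexing are assumptions based on the description above):

```php
<?php
use eZ\Publish\SPI\Persistence\Content\Handler;

/** @var Handler $contentHandler A persistence content handler obtained from the SPI layer. */

// Before (deprecated): one LoadStruct per item, each repeating version and
// language information.
// $contentList = $contentHandler->loadContentList($loadStructs);

// After (sketch): one id list plus one shared list of language codes for the
// whole batch.
$contentList = $contentHandler->loadContentList(
    [4, 10, 14, 42],       // Content ids (hypothetical values)
    ['eng-GB', 'nor-NO']   // translations (language codes) to load
);

foreach ($contentList as $content) {
    // Each $content is an SPI Content value object for the published version.
}
```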