If you train a complex model on its own output, you get a phenomenon that has been termed model collapse: over successive iterations the model concentrates more and more on the most common or typical patterns. Measures of performance may even improve initially, but the model incrementally loses the ability to represent edge cases and rare events. Eventually performance collapses entirely, with the model producing similar, and often inappropriate, output for every input.
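The mechanism can be illustrated without any neural network at all. In the toy sketch below (a hypothetical example, not tied to any particular model or study), each "generation" fits an empirical token distribution to a finite sample drawn from the previous generation's distribution. A rare token that fails to appear in a sample gets probability zero and can never come back, so diversity only ever shrinks:

```python
import random
from collections import Counter

def train_on_own_output(vocab_weights, sample_size, generations, seed=0):
    """Toy model collapse: each generation's 'model' is the empirical
    distribution of a sample drawn from the previous generation.
    Returns the vocabulary size after each generation."""
    rng = random.Random(seed)
    weights = dict(vocab_weights)
    history = [len(weights)]
    for _ in range(generations):
        tokens = list(weights)
        probs = [weights[t] for t in tokens]
        sample = rng.choices(tokens, weights=probs, k=sample_size)
        counts = Counter(sample)
        # Tokens absent from the sample vanish permanently: the new
        # model assigns them zero probability, so they can't be drawn
        # in any later generation.
        weights = {t: c / sample_size for t, c in counts.items()}
        history.append(len(weights))
    return history

# Zipf-like vocabulary: a few common tokens, a long tail of rare ones.
vocab = {f"tok{i}": 1.0 / (i + 1) for i in range(200)}
sizes = train_on_own_output(vocab, sample_size=300, generations=30)
```

Running this, the vocabulary size drops generation after generation: the tail of rare tokens disappears first, while the head of common tokens survives, mirroring how a collapsed model keeps only its most typical patterns.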
