The words "statistically significant" don't belong together, because there's nothing objective about significance. A result is statistically significant when it would be unlikely to occur by chance alone, assuming there's no real underlying difference. But for most people, "statistical" simply translates into "scientific," leading the whole expression to mean, "scientists think it's important." If a sample of 30 men and 30 women shows that men have 120 points of X and women have 125 points, a magazine article could say that the difference is statistically significant. Technically this means there is likely a population difference between men and women, but to the average reader, it means Men Are from Mars, Women Are from Venus, ushering in a new social order for a new generation.
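To see what "statistically significant" actually amounts to in the men-vs-women example, here is a minimal sketch of the underlying two-sample t-test. The means (120 vs. 125) and sample sizes (30 each) come from the example above; the standard deviation of 8 points is a hypothetical assumption, since the example doesn't supply one.

```python
import math

# Sample statistics: means and sizes from the example in the text;
# the standard deviation is an assumed, illustrative value.
mean_men, mean_women = 120.0, 125.0
n = 30
sd = 8.0  # assumption, not from the text

# Standard error of the difference between two independent sample means
se = math.sqrt(sd**2 / n + sd**2 / n)

# t-statistic: how many standard errors apart the two sample means are
t = (mean_women - mean_men) / se

# With 58 degrees of freedom, |t| above roughly 2.0 corresponds to
# p < 0.05, the conventional threshold for "statistical significance."
print(f"t = {t:.2f}")  # about 2.42, clearing the 0.05 threshold
```

Note that "significant" here is just this threshold-crossing; a 5-point gap could be trivial in practice even when the test passes.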
Software releases that follow the principle of incremental improvement may still carry legacy habits from major release cycles. The major release cycle was a byproduct of an unconnected era, when all energy had to be poured into pressing one big CD to distribute to stores. Patches were Band-Aids that, with luck, not everybody had to apply for their software to work correctly. But the problem with alternating minor and major releases is that the major releases trade the promise of brand-new features for a dip in user experience. The first half of the dip is the suspension of patches and bug fixes in the run-up to the major release: all the quality-assurance testing is tied up with the major release, so software companies don't have time to test bug fixes in both the old version and the new one. The second half of the dip is the bug-fixing phase, where the public becomes extended beta-testers. And the more features that are packed into a major release, the faster the number of potential issues grows, roughly with the square of the feature count, since features can interact in pairs.
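The quadratic claim can be made concrete with a small sketch: if every pair of features can interact and thus misbehave, the number of interactions to test grows with the square of the feature count. The feature counts below are illustrative, not from the text.

```python
from math import comb

# Each pair of features is a potential interaction bug to test,
# so the count grows quadratically: n*(n-1)/2 pairs for n features.
for n in (5, 10, 20, 40):
    print(f"{n} features -> {comb(n, 2)} pairwise interactions")
# Doubling the feature count roughly quadruples the pairs to test.
```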
Most of the time, bundling features together, such as an operating-system face-lift plus some functionality upgrades, is purely so the marketing guys can have a "big splash." One day, the minor/major release dichotomy will be obsolete in favor of constant-sized releases. Or some company will figure out how to subvert the extended beta-test. But even that would only cover the post-release dip. The pre-release dip would still occur as the company announces, "This will be fixed in version 2.0."
Consider a DVD of the movie Fight Club. If someone put it in a DVD player, they'd see Ed Norton punch Brad Pitt in the face. If they didn't put the DVD in the player, it would still be true that Ed punches Brad. If all the DVD players in the world were destroyed, but we retained the DVDs, it would still be true that in Fight Club, Ed punches Brad. The absence of the players doesn't invalidate the truth about that movie.
If all the DVDs were destroyed, people would still remember that Ed punches Brad. There would be no proof, but one could say that if we were to go back in time, we would find that it is indeed the case. This thought experiment could go further and further, asking questions like, "What if everyone's memory was erased?" Which raises the question: Is the medium of an event's existence necessary for the event's existence? Can existence be predicated solely on information, or do zeroes and ones have to be etched into a disc?
Likewise, it's possible that we are in a similar movie, a movie that isn't playing anywhere. It doesn't exist in the memory of any being; it's simply what happens in this particular, abstract, and grand sequence.
Entelechy, the principle that the purpose of things inheres in their design, leads to a tyranny of the majority. People have to agree on the purpose and the design of something, and therefore the most popular opinion prevails. If one instead takes a pluralistic view, that everything has multiple purposes and multiple designs, then minority uses of the human body, such as homosexuality, wouldn't be suppressed.