Most sources measure studio performance by comparing studio metrics to industry averages, taking a “one-size-fits-all” approach (our own research (Max Pog, 2023) shares this flaw). We understand the logic: it is faster and cheaper to use readily available data sources that are assumed to be reliable than to build one’s own sample of research subjects.
Yet this method makes it hard to control (or at least to know) the underlying definitions and data-processing choices, which are essential when studying such loosely defined phenomena as “a new venture”, “a startup”, and “a venture studio”. For instance, many sources use startups’ average time to various funding rounds and exit events (e.g., Burris, Mohammadi and Maiocco, 2023; Kannan and Peterman, 2022; Mohammadi and Maiocco, 2022; Zasowski, 2022) to claim that studio-supported startups evolve faster than “traditional” ones (whatever that means). Consider, for instance, the reported difference in raising a Seed round, which studios’ startups are said to do two to three times faster than “traditional” ones (Zasowski, 2022; Max Pog, 2023).
However, timing variables may vary widely due to factors such as the influence of startup-supporting organizations (perhaps accelerators and VC funds increase growth speed comparably?), the startup’s industry (in some industries, e.g., biotech, pharma, and hardware, external funding may take longer to secure), and external shocks (COVID, economic crises, wars, etc.).
We believe that conclusions drawn from averaged data may weaken the results: researchers may (and probably do) miss the influence of critical confounding factors and, as a result, obtain false-positive or false-negative findings.
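To make this concern concrete, consider a minimal synthetic simulation of the industry confounder alone. All numbers here (the per-industry mean times to Seed, the portfolio mixes, and the noise term) are assumptions chosen for illustration, not measured values: they show only that pooled averages can make studios look markedly faster even when, by construction, studio support has zero effect within every industry.

```python
import random

random.seed(0)

# Assumed within-industry mean time to Seed (months); identical for
# studio-backed and "traditional" startups by construction.
TIME_TO_SEED = {"software": 12, "biotech": 36}

def avg_time_to_seed(industry_mix, n=10_000):
    """Average time to Seed for a portfolio with a given industry mix.

    Timing depends only on industry, never on studio status.
    """
    times = []
    for _ in range(n):
        industry = random.choices(list(industry_mix),
                                  weights=list(industry_mix.values()))[0]
        times.append(random.gauss(TIME_TO_SEED[industry], 4))
    return sum(times) / len(times)

# Assumed portfolio compositions: studios skew toward fast-raising
# software startups; the comparison pool skews toward slow biotech.
studio_avg = avg_time_to_seed({"software": 0.8, "biotech": 0.2})
traditional_avg = avg_time_to_seed({"software": 0.3, "biotech": 0.7})

print(f"studio avg time to Seed:      {studio_avg:5.1f} months")
print(f"traditional avg time to Seed: {traditional_avg:5.1f} months")
# Pooled averages come out near 16.8 vs 28.8 months (~1.7x "faster"
# for studios), although studio status has no within-industry effect.
```

Under these assumptions, a naive comparison of pooled averages reproduces a headline of the “studios raise Seed faster” kind purely from portfolio composition, which is exactly the false-positive risk described above.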