The games we play: critical complexity improves machine learning
When mathematical modelling is applied to a complex system, multiple models are often created, each characterizing a different aspect of that system. A model at one level may produce predictions that contradict those of a model at another level, yet both models are accepted because both are useful. Rather than aiming to build a single unified model of a complex system, the modeller acknowledges the infinity of ways of capturing the system of interest, while offering their own specific insight. We refer to this pragmatic applied approach to complex systems – one which acknowledges that they are incompressible, dynamic, nonlinear, historical, contextual, and value-laden – as Open Machine Learning (Open ML). In this paper we define Open ML and contrast it with two forms of grand ML narrative: 1) Closed ML, which emphasizes learning with minimal human input (e.g. Google's AlphaZero), and 2) Partially Open ML, which is used to parameterize existing models. To achieve this, we use theories of critical complexity both to evaluate these grand narratives and to contrast them with the Open ML approach. Specifically, we deconstruct grand ML 'theories' by identifying thirteen 'games' played in the ML community. These games lend false legitimacy to models, contribute to over-promise and hype about the capabilities of artificial intelligence, reduce wider participation in the subject, lead to models that exacerbate inequality and cause discrimination, and ultimately stifle creativity in research. We argue that best practice in ML should be more consistent with critical complexity perspectives than with rationalist grand narratives.