Wednesday 1 April 2015

The Limits of Rationality (an important mathematical oversight)

The following discussion involves an enquiry into some of the properties of the natural numbers, and entails a critique of the conventional definition of an integer as a stable index of numeric value. The concern arises from the fact that digital information systems require numeric values to be represented across a range of numerical radices (decimal, binary, octal, hexadecimal, etc.), and to serve not only as an index of quantity, but also, ultimately, as the basis of the machine-code for a multitude of operational processing instructions upon data. Later in the discussion I develop a critique of digital information systems, insofar as those systems are judged to be characterised by a tendency towards inherent logical inconsistency.

Rationality and Proportion in the Natural Numbers

In the following I use both the terms 'number' and 'integer' interchangeably, although I should point out that this enquiry is predominantly concerned with the set of the natural numbers, i.e., those positive whole numbers (including zero) we conventionally employ to count. Natural numbers are a subset of the category integers, as the latter also includes negative whole numbers, with which I am unconcerned. I am concerned however with the technical definition of an integer – i.e., as an entity in itself, whose properties are generally understood to be self-contained ('integral') – a definition hence inherited by the sub-category natural numbers.
We are familiar with the term 'irrational numbers' in maths – referring to examples such as √2 or π – which implies that the figure cannot be expressed exactly as the ratio of any two whole numbers (22/7, for instance, is only a close rational approximation to π), and therefore does not resolve to a finite number of decimal places, nor to a settled pattern of recurring digits following the decimal point. Irrational numbers have played a decisive role in the history of mathematics because, as they are impossible to define as discrete and finite magnitudes by means of number, they cannot be represented proportionally without resorting to geometry; while much of the modern development of mathematics has involved the shift from an emphasis upon Classical geometry as its foundation to one based upon abstract algebraic notation. The motivation towards abstraction was therefore determined to a great extent by the need to represent the irrational numbers without requiring their explanation in terms of continuous geometric magnitudes.1
The abstract representation of irrational numbers within algebra allows them to enter into calculations which also involve the 'rational' quantities of discrete integers and finite fractions, thereby combining elements that were previously considered incommensurable in terms of their proportion. The effect of this was to subvert the Classical distinction between discrete and continuous forms of magnitude. In the Greek understanding of discrete magnitudes, number always related to the being in existence of "a definite number of definite objects",2 and so the idea of their proportion was similarly grounded in the idea of a number of separable objects in existence. There is a different mode of proportion that applies to geometrical objects such as lines and planes – one that involves a continuously divisible scale, in comparison to the 'staggered' scale that would apply in the case of a number of discrete objects.
In terms of abstract algebraic notation, the Cartesian coordinates (x,y), for instance, might commonly stand in for any unknown integer value; but they may also take on the value of irrationals such as √2, which in the Greek tradition could only be represented diagrammatically. Therefore, as a complement to this newly empowered, purely intellectual form of mathematical discourse (epitomised in Descartes' project for a mathesis universalis), the idea of proportion (similarly, of logic) demanded a comparable abstraction, so that proportion is no longer seen to derive ecologically according to the forms of distribution of the objects under analysis, and instead becomes applied axiomatically – from without.
In the received definition of an integer as a measure of abstract quantity, the basis of an integer's 'integrity' (hence also that of the natural numbers) is no longer dependent upon its relation to the phenomenal identity of objects in existence, but to the purely conceptual identity that inheres in the unit '1' – a value which is nevertheless understood to reside intrinsically and invariably within the concept.3 Hence, integers also acquire an axiomatic definition, and any integer will display proportional invariance with respect to its constituent units (to say '5' is for all purposes equivalent to saying '1+1+1+1+1' – the former is simply a more manageable expression); and by virtue of this we can depend upon them as signifiers of pure quantity, untroubled by issues of quality. The difficulty with this received understanding is that the proportional unit '1', as an abstract entity, is only ever a symbolic construct, derived under the general concept of number, and which stands in, by a sort of tacit mental agreement, as an index for value. As such it is a character that lacks a stable substantial basis, unless, that is, we assert that certain mental constructs possess transcendental objectivity. If we consider the unit '1' in the context of binary notation, for instance, we perceive that in addition to its quantitative value it has also come to acquire an important syntactical property – it is now invested with the quality of 'positivity', it being the only alternative character to the 'negative' '0'. Can such syntactic properties be contained within the transcendental objectivity of the unit '1', considering that they do not similarly apply to '1' in decimal? In this case clearly not, as the property arises only as a condition of the restrictive binary relationship between the two digits within that particular system of notation.
Hence, somewhat antithetically to received understanding, it appears as a necessary conclusion that there are dynamic, context-specific attributes associated with particular integers which are not absolute or fixed (intrinsic), but variable, and which are determined extrinsically, according to the relative frequency of individual elements within the restricted range of available characters circumscribed by the terms of the current working radix (0-9 in decimal, 0-7 in octal, for instance). These attributes inevitably impose certain syntactical dependencies upon the characters within those notations. In that case, the proportionality that we are accustomed to apply axiomatically to the set of the natural numbers as self-contained entities should be reconsidered as a characteristic which rather depends exclusively upon the system of their notation within the decimal rational schema – one which will not automatically transfer as a given property to the same values when transcribed across alternative numerical radices. The conditions of proportionality that obtain between integers in a decimal system will be inconsistent with those obtaining between their corresponding (numerically equal) values in an alternative radix. As far as I am aware, this problem is one that has not been previously reported amongst mathematicians and so it will help at this stage to be able to refer to some empirical proof.
The page Radical Affinity and Variant Proportion in Natural Numbers (and associated pdf file) presents a series of numerical datasets of the decimal exponential series x^0, [...], x^10, beginning with the decimal value x=10 (extended for x=(2, [...], 9) in the pdf), in comparison with corresponding series from all number bases from binary to nonary (base-9). It then displays tables and graphs of the values of the logarithmic differences between successive exponential values in each series; i.e., employing the derived radical logarithms (log_b) for each respective radix (base-b). In each case, with a few exceptions, the graphs reveal a failure of logical consistency. The ratios between successive exponentials of, for instance, 12_8 (=10_10) when treated as octal logarithms, display a series that cannot be determined on any rational principles. The problem arises due to the fact that octal logarithms (log_8) are derived from 'common' or decimal logarithms (log_10 – also written as 'Log'), according to the formula: log_8(x) = log_10(x) / log_10(8). If one performs the same exercise for successive exponential values in the decimal series, and produces a series of graphs showing the distributions of values for constant values of x, with the exponential index z occupying the horizontal axis, the results are a series of horizontal straight lines at y = Log(x). In the examples for the radical series described above however, horizontal straight lines occur only in a limited number of cases4. The distributions revealed are mostly irregular series of variegated peaks and troughs displaying proportional inconsistency, for example:
Graph to show logarithmic differences between octal correspondents of sequential exponentials of x=10 (decimal).

r = (log_8 x^z) – (log_8 x^(z-1)), for x = 12_8
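For readers wishing to reproduce the general shape of this calculation, the following minimal Python sketch may be of use. The function names are illustrative only, and the sketch assumes that each base-b correspondent of x^z is read back as a plain digit string before the radical logarithm log_b is applied; the authoritative account of the procedure remains the Radical Affinity page itself.

```python
import math

def to_base(n, b):
    """Return the base-b digit string of a non-negative integer n (b <= 10)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, d = divmod(n, b)
        digits.append(str(d))
    return "".join(reversed(digits))

def log_differences(x, base, max_exp=10):
    """Differences r = log_b(v_z) - log_b(v_(z-1)), where v_z is the base-b
    correspondent of x**z read back as a plain digit string (an assumption
    about the treatment described in the Radical Affinity page)."""
    correspondents = [int(to_base(x ** z, base)) for z in range(max_exp + 1)]
    logs = [math.log(v, base) for v in correspondents]
    return [round(logs[z] - logs[z - 1], 4) for z in range(1, max_exp + 1)]

# The decimal control case: a horizontal straight line at y = Log(10) = 1.
print(log_differences(10, 10))
# The octal case graphed above (x = 10 decimal, i.e. 12 octal).
print(log_differences(10, 8))
```

Under those assumptions the decimal control case yields a constant difference of 1, while the differences in the octal case do not remain constant.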

These findings therefore pose problems for use of the logarithmic function as logarithms express common ratios of proportion, and logarithms for diverse number bases (log_b) are conventionally assumed to be perfectly derivable from 'common' logarithms (log_10). If the logarithmic differences between successive exponentials in, for instance, the octal series: 12_8^z (derived where the decimal value of x=10 – see graph above, and the Octal section of Radical Affinity etc.) do not produce a horizontal straight line, then these values are not proportionally consistent with their corresponding values in the decimal series: 10_10^z, whose logarithmic differences do produce a horizontal straight line. This disclosure of a failure in consistency through use of the logarithmic function undermines the principle of rational proportionality conventionally understood to hold between diverse numerical radices and indicates that rationality operates effectively only under formally circumscribed limits, where previously, in terms of conventional mathematical understanding, no such limits had been perceived or considered.
These empirical findings confirm in principle that there are qualitative (or 'behavioural') properties that arise out of the relational (group) characteristics of particular integers; otherwise, the restrictive proportional rules that appear to be 'native' to individual numerical radices would be empirically impossible, or absurd, and therefore this must undermine the standard assumption of absolute proportional invariance between integers. However, I feel that it would be a mistake to consider such behavioural properties as inhering mysteriously as intrinsic properties of integers themselves. Contrary to the standard definition of an integer (i.e., as an 'integral whole', or entity in itself), numbers are primarily constructions of the intellect, and as such do not really have the status of phenomenal objects capable of holding any intrinsic properties, aside from their notional quantities. Therefore, if they also exhibit empirical behavioural properties, it is likely these arise out of the sequential relationships between numerical characters (digits) with respect to their relative frequency as members of a limited group of available characters. The fact that in binary, for instance, the available characters are limited to '0' and '1', means that an instance of '1' in binary is quite differently potentiated from the same instance in decimal, even though the values 1_2 and 1_10 are quantitatively identical.
The logical 'either/or' (positive/negative) characteristic of binary notation noted earlier is of course what enables digital computer systems to employ binary code principally to convey a series of processing instructions, rather than serving merely as an index of quantity. The behavioural properties of individual digits according to the system of their notation may then be extrapolated in principle from the binary example, so that the factor of the relative frequency of individual digits according to the range of available digits within their respective radix (binary, octal, decimal, hexadecimal, etc.) comes to determine the logical potential of those digits uniquely in accordance with the rules that organise each respective radix, and which distinguish it from all alternative radices.
This analysis leads us to conclude that the exercise of rational proportionality (proportional invariance) in terms of quantitative understanding, as a governing principle, with universal applicability (therefore across diverse numerical radices), entails a basic technical misapprehension: it fails to perceive that the ratios of proportion obtaining in any quantitative system will depend implicitly on the terms of a signifying regime (i.e., the restrictive array of select digits at our disposal); the proportional rules of which will vary according to the range of available signifying elements, and the relative frequency (or 'logical potentiality') of individual elements therein.

An Inconvenient Truth Revealed

It is unfortunate that this recognition of the principle of variant proportionality between numerically equal integer values when expressed across diverse number radices (which has so far gone entirely unremarked by mathematicians and information scientists alike) was not made prior to the emergence in the late 20th Century of digital computing and digital information systems, for, as I will attempt to show in what follows, the issue has serious consequences for the logical consistency of data produced within those systems.
Information Science tends to treat the translation and recording of conventional analogue information into digital format unproblematically. The digital encoding of written, spoken, or visual information is seen to have little effect on the representational content of the message; the process is taken to be neutral, faithful, transparent. The assessment of quantitative and qualitative differences at the level of the observable world retains its accuracy despite at some stage involving a reduction, at the level of machine code, to the form of a series of simple binary (or 'logical') distinctions between '1' and '0' – positive and negative. This idea relies upon a tacit assumption that there exists such a level of fine-grained logical simplicity as the basis of a hierarchy of logical relationships, and which transcends all systems of conventional analogue (or indeed sensory) representation (be they linguistic, visual, sonic, or whatever); and that therefore we may break down these systems of representation to this level – the digital level – and then re-assemble them, as it were, without corruption.
However, as should now be clear from the analysis indicated above, the logical relationship between '1' and '0' in a binary system (which equates in quantitative terms with what we understand as their proportional relationship) is derived specifically from their membership of a uniquely defined group of digits (in the case of binary, limited to two members). It does not derive from a set of transcendent logical principles arising elsewhere and having universal applicability (a proposition that may well be unfamiliar, and perhaps unwelcome, to many mathematicians and information scientists alike).
It follows that the proportional relationships affecting quantitative expressions within binary, being uniquely and restrictively determined, cannot be assumed to apply (with proportional consistency) to translations of the same expressions into decimal (or into any other number radix, such as octal or hexadecimal). By extension, the logical relationships within a binary (and hence digital) system of codes, being subject to the same restrictive determinations, cannot therefore be applied with logical consistency to conventional analogue representations of the observable world, as this would be to invest binary code with a transcendent logical potential that it simply cannot possess – they may be applied to such representations, and the results may appear to be internally consistent, but there is insufficient reason to expect that they will be logically consistent with the world of objects.
The issue of a failure of logical consistency is one which concerns the relationships between data objects – it does not concern the specific accuracy or internal content of data objects themselves (just as the variation in proportion across radices concerns the dynamic relations between integers, rather than their specific 'integral' numerical values). This means that, from a conventional scientific-positivist perspective, which generally relies for its raw data upon information derived from discrete acts of measurement, the problem will be difficult to recognise or detect (as the data might well appear to possess internal consistency). One will however experience the effects of the failure (while being rather mystified as to its causes) in the lack of a reliable correspondence between expectations derived from data analyses, and real-world events.

Logical Inconsistency is Inherent in Digital Information Systems

The extent of the problem of logical inconsistency is not limited however to that of the effects upon data arising from transformations of existing analogue information into digital format. Nor, unfortunately, is it a saving feature of digital information systems that, while not quite fully consistent with traditional analogue means of speaking about and depicting the world, they nevertheless give rise to a novel, digitally-enhanced view through which they are able to maintain their own form of technologically-informed consistency. Rather, logical inconsistency is a recurrent and irremediable condition of data derived out of digital information processes, once that data is treated in isolation from the specific algorithmic processes under which it has been derived.
The principle that it is possible to encode information from a variety of non-digital sources into digital format and to reproduce that information with transparency depends implicitly on the idea that logic (i.e., proportionality) transcends the particular method of encoding logical values, implying that the rules of logic operate universally and are derived from somewhere external to the code. According to the analysis indicated above however, it is suggested that the ratios between numeric values expressed in any given numerical radix will be proportionally inconsistent with the ratios between the same values when expressed in an alternative radix, due to the fact that the rules of proportionality, understood correctly, are in fact derived uniquely and restrictively according to the internal characterological requirements of the specific codebase employed. This tells us that the principle widely employed in digital information systems5 – that of the seamless correspondence of logical values whether they are expressed as decimal, octal, hexadecimal, or as binary values – is now revealed as a mathematical error-in-principle.
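A minimal Python illustration of the convention in question (the treatment of decimal, octal, hexadecimal and binary notations as interchangeable labels for one and the same value) runs as follows; the example is mine, offered only to make the assumption explicit:

```python
# The conventional assumption under discussion: 255 decimal, 0xFF hexadecimal,
# 0o377 octal and 0b11111111 binary are treated as one and the same value.
value = 255
assert value == 0xFF == 0o377 == 0b11111111
print(format(value, 'd'), format(value, 'X'), format(value, 'o'), format(value, 'b'))
# -> 255 FF 377 11111111
```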
As I have indicated in the previous section above, this makes problematic the assumption of consistency and transparency in the conversion of analogue information into digital format. However, the problem of logical inconsistency as a consequence of the non-universality of the rules of the various codebases employed is not limited to the (mostly unseen) machine-level translation of strictly numerical values from decimal or hexadecimal values back and forth into binary ones. The issue also has a bearing at the programming level – the level at which data objects are consciously selected and manipulated, and at which computational algorithms are constructed. Even at this level – at which most of the design and engineering component of digital information processing takes place – there is an overriding assumption that the logic of digital processes derives from a given repository of functional objects that possess universal logical potential, and that the resulting algorithmic procedures are merely instantiations of (rather than themselves constituting unique constructions of) elements of a system of logic that is preordained in the design of the various programming languages and programming interfaces.
But there is no universal programming language, and, in addition to that, there are no universal rules for the formulation of computational procedures, and hence of algorithms; so that each complete and functional algorithm must establish its own unique set of rules for the manipulation of its requisite data objects. Therefore, the data that is returned as the result of any algorithmic procedure (program) owes its existence and character to the unique set of rules established by the algorithm, from which it exclusively derives; which is to say that the returned data is qualitatively determined by those rules (rather than by some non-existent set of universal logical principles arising elsewhere) and has no absolute value or significance considered independently of that qualification.
To clarify these statements, we should consider what exactly is implied in the term 'algorithm', in order to understand why any particular algorithmic procedure must be considered as comprising a set of rules that are unique, and why its resultant data should therefore be understood as non-transferable. That is to say, when considered independently from the rules under which it is derived, the resultant data possesses no universally accessible logical consistency.
Not all logical or mathematical functions are computable6, but those which are computable are referred to as 'algorithms', and are exactly those functions defined as recursive functions. A recursive function is one in which the definition of the function includes an instance of the function 'nested' within itself. For instance, the set of natural numbers is subject to a recursive definition: "Zero is a natural number" defines the base case as the nested instance of the function – its functional properties being given a priori as a) wholeness; b) serving as an index of quantity; and c) having a successor. The remainder of the natural numbers are then defined as the (potentially infinite) succession of each member by another (sharing identical functional properties) in an incremental series. It is the recursive character of the function that makes it computable (that is, executable by a hypothetical machine, or Turing machine). In an important (simplified) sense then, computable functions (algorithms), as examples of recursive functions, are directly analogous in principle to the recursive function that defines the set of the natural numbers.7
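A minimal sketch of such a recursive definition, in Python (the class and names are illustrative, not any standard construction): zero is given as the base case, and every further natural number is defined through a nested instance of the same construction.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Nat:
    """A natural number defined recursively: zero, or the successor of a Nat."""
    pred: Optional["Nat"] = None   # None marks the base case (zero)

    def value(self) -> int:
        # The function is defined in terms of a nested instance of itself.
        return 0 if self.pred is None else 1 + self.pred.value()

zero = Nat()
three = Nat(Nat(Nat(zero)))        # the successor applied three times
print(three.value())               # -> 3
```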
The nested function has the property of being discrete and isolable – these characteristics being transferable, by definition, to each other instance of the function. In these terms, the function defining the natural numbers has the (perhaps paradoxical) characteristic of 'countable infinity' – as each instance of the function is discrete, there is the possibility of identifying each individual instance by giving it a unique name. In spite however of its potential in theory to proceed, as in the case of the natural numbers, to infinity, a computable function must at some stage know when to stop and return a result (as there is no appreciable function served by an endlessly continuous computation). At that point then the algorithm must know how to name its product, i.e., to give it a value; and therefore must have a system of rules for the naming of its products, and one that is uniquely tailored according to the actions the algorithm is designed to perform on its available inputs.
What is missing from the definition given above for the algorithm defining the natural numbers? We could not continue to count the natural numbers (potentially to infinity) without the ability to give each successive integer its unique identifier. However, neither could we continue to count them on the basis of absolutely unique identifiers, as it would be impossible to remember them all, and we would be unable to tell at a glance the scalar location of any particular integer in relation to the series as a whole. Therefore, we must have a system of rules which 'recycles' the names in a cascading series of registers (for example, in the series: 5, 25, 105, 1005, etc.); and that set of rules is exactly the one pertaining to the radix (or 'base') of the number system, which defines the set of available digits in which the series may be written, including the maximum writable digit for any single register, before that register must 'roll over' to zero, and either spawn a new register to the left with the value '1', or increment the existing register to the left by 1. We can consider each distinct number radix (e.g., binary, ternary, octal, hexadecimal etc.) as a distinct computable function, each requiring its own uniquely tailored set of rules, analogously with our general definition of computable functions given above.8
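The 'roll-over' rule can be made operationally explicit in a short sketch (again in Python, with illustrative names): incrementing a numeral written in an arbitrary radix b, where the maximum writable digit is b-1.

```python
def increment(digits, base):
    """Increment a numeral given as a list of digits (most significant first),
    applying the roll-over rule of the given radix."""
    digits = digits[:]                    # work on a copy
    i = len(digits) - 1
    while i >= 0:
        if digits[i] < base - 1:          # room left in this register
            digits[i] += 1
            return digits
        digits[i] = 0                     # maximum writable digit reached: roll over
        i -= 1
    return [1] + digits                   # spawn a new register to the left

n = [0]
for _ in range(10):                       # count from 0 up to 10 in octal
    n = increment(n, 8)
print(n)                                  # -> [1, 2]  (i.e. 12 in octal = 10 decimal)
```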
For most everyday counting purposes, and particularly in terms of economics and finance, we naturally employ the decimal (or 'denary') system of notation in the counting of natural numbers. The algorithmic rules that define the decimal system are therefore normally taken for granted – we do not need to state them explicitly. However, the rules are always employed implicitly – they may not be abandoned or considered as irrelevant, or our system of notation would then become meaningless. If, for instance, we were performing a series of translations of numerical values between different radices, we would of course need to make explicit the relevant radix in the case of each written value, including those in base-10, to avoid confusion. The essential point is that, when considering expressions of value (numerical or otherwise) as the returned results of algorithmic functions (such as that of the series of natural numbers, or indeed any other computable function), the particular and unique set of rules that constitute each distinct algorithmic procedure, and through which data values are always exclusively derived, are indispensable to and must always be borne in mind in any proportionate evaluation of the data – they may not be left behind and considered as irrelevant, or the data itself will become meaningless and a source only of confusion.
It is important to emphasise in this analysis that, in accordance with the definition of recursive functions outlined above, a computational algorithm is functionally defined by reference to itself, through a nested instance of the function, rather than by reference to any universally available functional definition. In the broad context of data derived through digital information processes, it is essential therefore to the proportionate evaluation of all resultant data, that the data be qualified with respect to the particular algorithmic procedures through which it has been derived. There is no magical property of external logical consistency that accrues to the data simply because it has been derived through a dispassionate mechanical procedure – the data is consistent only with respect to the rules under which it has been processed, and which therefore must be made explicit in all quotations or comparisons of the data, to avoid confusion and disarray.
Such qualifications however are rarely made these days in the context of the general mêlée of data sharing that accompanies our collective online activity. Consider the Internet as an essentially unregulated aggregation of information from innumerable sources where there are no established standards or guidelines that specifically require any contributor to make explicit qualifications for its data with respect to the rules that define it and give it its unique and potent existence. We should not be overly surprised therefore if this laxity should contribute, as an unintended consequence, to the problem of society appearing progressively to lose any reliable criteria of objective truth with regard to information made available through it to the public domain.
The alacrity with which data tends to be 'mined', exchanged, and reprocessed, reflects a special kind of feverish momentum that belongs to a particular category of emerging commodity – much like that attached to oil and gold at various stages in the history of the United States. Our contemporary 'data rush' is really concerned with but a limited aspect of most data – its brute exchangeability – which implies symptomatically that those who gain from the merchandising of data are prone to suppress any obligation to reflect upon or to evaluate the actual relevance of the data they seek to market to its purported real-world criteria.

Conclusion

It was stated above that computable functions (algorithms) performed upon data values are defined as recursive functions, and are analogous, as a matter of principle, to the recursive function that defines the set of natural numbers. Logical consistency in digital information processes is therefore directly analogous to proportional consistency in the set of the natural numbers, which the preceding analysis now reveals as a principle that depends locally upon the rules (i.e., the restrictive array of available writable digits) governing the particular numerical radix we happen to be working in, and cannot be applied with consistency across alternative numerical radices. We should then make the precautionary observation that the logical consistency of data in a digital information system must likewise arise as a unique product of the particular algorithmic rules governing the processing of that data. It should not be taken for granted that two independent sets of data produced under different algorithmic rules, but relating to the same real-world criteria, will be logically consistent with each other merely by virtue of their shared ontological content. That is to say that the sharing of referential criteria between independent sets of data is always a notional one – one that requires each set of data to be qualified with respect to the rules under which the data has been derived.
Nevertheless, since the development of digital computing, and most significantly for the last three decades, computer science has relied upon the assumption of logical consistency as an integral, that is to say, as a given, transcendent property of data produced by digital means; and as one ideally transferable across multiple systems. It has failed to appreciate logical consistency as a property conditional upon the specific non-universal rules under which data is respectively processed. This technical misapprehension derives ultimately from a mathematical oversight, under which it has been assumed that the proportional consistency of a decimal system might be interpreted as a governing universal principle, applicable across diverse number radices. The analysis presented here indicates rather that the adoption of decimal notation as the universal method of numerical description is an arbitrary choice, and that the limited and restrictive proportional rules which govern that system can no longer be tacitly assumed as having any universal applicability.
  1. There are numerous references that might be cited here, but the distinction in Greek mathematical thought between 'discrete' and 'continuous' (or 'homogeneous') magnitudes and its influence upon modern algebra is discussed at length in two articles by Daniel Sutherland: Kant on Arithmetic, Algebra, and the Theory of Proportions, Journal of the History of Philosophy, vol.44, no.4 (2006), pp.533-558; and: Kant's Philosophy of Mathematics and the Greek Mathematical Tradition, The Philosophical Review, Vol. 113, No. 2 (April 2004), pp.157-201. See also: Renaissance notions of number and magnitude, Malet, A., Historia Mathematica, 33 (2006), pp.63-81. See also Jacob Klein's influential book on the development of algebra and the changing conception of number in the pre-modern period: Greek Mathematical Thought and the Origin of Algebra, Brann, E. (tr.), Cambridge: MIT, 1968 (repr. 1992). [back]
  2. Klein, ibid., Ch.6, pp.46-60. [back]
  3. Even in complete abstraction from the material world of objects therefore, integers somehow retain a trace of their Classical role, through a reification of the idea of intrinsic value, which still inheres as it were 'magically' in our system of notation – that which, as a methodological abstraction, is now derived purely under the concept of number. The reification of abstract numerical quantities, particularly during the early modern period, may also be viewed as a form of psychic recompense for the fact that in England during the 17th Century the cash base of society lost its essential intrinsic value in gold and silver, due in part to a relentless debasing of the coinage by 'clipping' and counterfeiting, and hence began to be replaced by paper notes and copper coinage around the turn of the century, following Sir Isaac Newton's stewardship of the Royal Mint during the period 1696-1727 (see: Newton and the Counterfeiter, Levenson, T., Mariner Books, 2010). This fundamental shift in the conception of monetary value from one based upon the intrinsic value of an amount of bullion necessarily present in any exchange relationship, to one that merely represented that value as existing elsewhere by way of a promise to pay, thus significantly enhancing the liquidity of finance and exchange in the motivation of trade, was one that occurred in parallel with the progressive realisation of Descartes' (and Leibniz's) project for a universal mathesis based upon abstract formal notation. [back]
  4. In the resulting distributions horizontal straight lines are found to occur only where the decimal value of x (prior to its conversion to base-b) is equal to the value b, or to b^2 or b^3 (also, it is assumed, by extension to b^n) – see Comments on pages 31 & 51 of the extended pdf version of Radical Affinity and Variant Proportion in Natural Numbers. [back]
  5. In terms of the largely unseen hardware-instruction (machine-code) level, digital information systems have made extensive use of octal and hexadecimal (base-16), in place of decimal, as the radices for conversions of strings of binary code into more manageable quantitative units. Historically, in older 12- or 24-bit computer architectures, octal was employed because the relationship of octal to binary is more hardware-efficient than that of decimal, as each octal digit is easily converted into a maximum of three binary digits, while decimal requires four. More recently, it has become standard practice to express a string of eight binary digits (a byte) by dividing it into two groups of four, and representing each group by a single hexadecimal digit (e.g., the binary 10011111 is split into 1001 and 1111, and represented as 9F in hexadecimal – corresponding to 9 and 15 in decimal). [back]
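    A quick check of this grouping (an illustrative Python one-liner):

```python
print(int('1001', 2), int('1111', 2), format(0b10011111, 'X'))   # -> 9 15 9F
```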
  6. One explanation given for this is that, while the set of the natural numbers is 'countably infinite', the number of possible functions upon the natural numbers is uncountable. Any computable function may be represented in the form of a hypothetical Turing machine, and, as individual Turing machines may be represented as unique sequences of coded instructions in binary notation, those binary sequences may be converted into their decimal correspondents, so that every possible computable function is definable as a unique decimal serial number. The number of possible Turing machines is therefore clearly countable, and as the number of possible functions on the natural numbers is uncountable, the number of possible functions is by definition greater than the number of computable ones. For further elaboration and specific proof of this principle see: Section 5 of: Barker-Plummer, D., Turing Machines, The Stanford Encyclopedia of Philosophy, Summer 2013 Edition, Edward N. Zalta (ed.): http://plato.stanford.edu/archives/sum2013/entries/turing-machine/ (accessed 09/12/2014).

    Turing's formulation of the Turing machine hypothesis, in his 1936 paper: On Computable Numbers..., was largely an attempt to answer the question of whether there was in principle some general mechanical procedure that could be employed as a method of resolving all mathematical problems. The question became framed in terms of whether there exists a general algorithm (i.e., Turing machine) which would be able to determine whether another Turing machine T_n ever stops (i.e., computes a result) for a given input m. This question is now known as the "Halting problem", and Turing's negative answer to it was his route to showing that Hilbert's "Entscheidungsproblem" (decision problem) admits no general mechanical solution. Turing's conclusion was that there is no such algorithm, and from that conclusion it follows that there are mathematical problems for which there exists no computational (i.e., mechanical) solution. See: On Computable Numbers, with an Application to the Entscheidungsproblem; Proceedings of the London Mathematical Society, 2 (1937) 42: 230-65: http://somr.info/lib/Turing_paper_1936.pdf. See also: Ch. 2, pp.45-83 of: Penrose, R., The Emperor's New Mind, OUP, 1989; as well as pp.168-177 of the same, with reference to Diophantine equations and other examples of non-recursive mathematics. [back]

  7. The principle of recursion is nicely illustrated by the characteristics of a series of Russian Dolls. It is important to recognise that not all of the properties of the base case are transferable – for instance, zero is unique amongst the natural numbers in not having a predecessor. [back]
  8. For a discussion of these criteria in relation to Turing machines, see the section: Turing Machines & Logical Inconsistency, in the page: Mind: Before & Beyond Computation. [back]

Friday 19 December 2014

Mind: Before & Beyond Computation

A Sceptical View of the Cognitive Sciences

As a scientist (speaking hypothetically), I am unable to think entirely outside of my own thought processes, or, for that matter, inside the thought processes of another. Therefore, considering that whatever analyses we make of human thinking will inevitably be conditioned by an identical thought process, and hence be constrained by the subjectivism of the very processes under investigation, without the safeguard of an independent control for comparison, it is ambitious to expect to be able to arrive at a fully objective, or value-free, understanding of those processes. To what extent therefore might it be implicitly dangerous (in that it might entail unforeseen and irreversible consequences) for us to attempt to model those processes of thought, to fashion them as a 'technology', with a view to exporting them to inanimate objects or machines, with the potential to assume certain critical decision-making roles? Considering that such efforts to model human intellectual processes have been the driving force behind much of the technological innovation of the last half-century, it may be more prescient to enquire: Why were these problematic objections not raised a good deal earlier; or are we so blinded by technological optimism that we must remain inured to all its negative and disruptive consequences?
Due to the inherent difficulties in approaching the subject from a purely empirical perspective, I do not subscribe to the hard-empiricist position of the cognitive sciences, which views all aspects of thought and of language in terms of computational systems, and which limits the scope of the enquiry to explanation in terms of functional problem-solving mechanisms. There is more to mind than functional problem-solving. That seems a far too reductive approach to an enquiry which can only realistically proceed on the basis of intuition (since empiricism alone cannot provide all the answers). The dominant tendency of 20th Century debates on the philosophy of mind was that of the physicalist identification of 'mind' with 'brain', as a further symptom of this wholesale reductionism in the approach to the study of human intellectual processes. It is as if the concept of mind (a modern descendant of the metaphysical concept of soul), with its promise of independence for all who possessed it, were something of an anachronism for post-19th Century science, and one which it sought to 'bring into the laboratory', failing to anticipate that such an attempt to 'lock down' the object of attention would in effect be to deprive us of an important context for discussing issues having little to do with functional computation, or with individuals' discrete physical organs and biological processes.
This insistence on rendering all aspects of psychical and linguistic operations to the level of functionally describable physical or biological systems is predicated on an assumption that an adequate understanding of these operations is possible on the basis of current or future knowledge of neurophysical and neurochemical criteria in the brain, and deductions thereof, together with the use of advanced scanning techniques. In other words, there is little of consequence to learn about mental activity, other than what may eventually be revealed at the internal empirical level. In the first place, this overestimates the scope of current and future methods and tools of observation in representing with adequacy the biological systems under investigation; and seems blind to the probability that, whatever the current state of knowledge about the brain, science may well be forever committed to a greater or lesser degree of hypothesis and speculation over the subject. Secondly, why would one insist upon such a stultifying and restrictive set of analytic criteria, over-estimating the efficacy of empirical knowledge, if one were not already predisposed to constructing, as a standalone technological artefact, a synthetic model of intellectual operations, as a subset of those actual operations, and one which could be made appropriable to a process of mechanisation – that is, through the application of approximate cybernetic models?
Perhaps then it is not quite the case that the wholesale theoretical reduction of mental operations to the level of the physical and the biological results in a more objective, empirically-evidenced, and value-free, understanding of them. It rather helps to define, in strictly mechanistic terms, what we might need to extract as the computational 'essence' of cerebral processes, in order to provide the 'blueprints' for a set of radical instrumental and technological ambitions. Science does not develop in a vacuum, and the emergence of the information and cognitive sciences during the mid-20th Century gained its primary impetus in the context of two devastating world wars, and hence from the need to develop new forms of sophisticated weapons technology, and to enhance the computational power of military code-breaking systems.
In this context it is interesting to note that Noam Chomsky's 1950s research into computational linguistics – which lay much of the theoretical groundwork for the project of Artificial Intelligence in its approach to natural language processing – was undertaken with the financial support of the US Army Signal Corps; the US Air Force Office of Scientific Research; and the US Navy Office of Naval Research.1
Alongside these clear military incentives promoting research and development into the computational aspects of intelligence and linguistic processing, developments in semi-conductors, solid-state electronics, and integrated circuits led to the incursion into the mass-market during the 1970s of new and revolutionary brain-saving devices: the ubiquitous pocket-calculator, and various future-oriented digital timepieces, courtesy of light-emitting diodes and liquid-crystal displays. Indeed, if the gadget-buying public was not encouraged to reflect upon the generative impulse behind all this exciting new technology, it might ideally position itself as the principal target and beneficiary of this incipient technological revolution.
Professor of biology Steven Rose:
"Science cannot happen without major public or private expenditure but its goals are set at least as much by the market and the military as by the disinterested pursuit of knowledge. This is why neuroscientists have a responsibility to make their subject and its potentials as transparent as possible, and why the voices of concerned citizens should be heard not 'downstream' when the technologies are already fully formed, but 'upstream' while the science is still in progress. We have to find ways of ensuring that such voices are listened through the cacophony of slogans about 'better brains' – and the power of the military and the market."2
In the early 1990s, at the time of the first widespread influx of mobile telephones into the market, there was an enormous amount of personal resistance to the adoption of this new communications strategy. I recall that overtly using a mobile phone in public was rather frowned upon, as if it were a sign of excessively brash and showy behaviour. It was also extremely difficult to get business contacts to accept a mobile number as a contact detail without in addition offering a 'respectable' landline number. Mobile phones were surrounded by an aura of bad taste, associated with the image of the itinerant pushy businessman, or the hipster cocaine dealer. Over a number of years however, that resistance was gradually worn down through the relentless marketing takeover of the telecoms companies. If the telcos had not had the advantage of unrestricted advertising, and had been obliged to put it to a public vote in the early stages ('upstream' in Steven Rose's terminology), for instance with the question: "Do you accept the more-or-less obligatory round-the-clock use of a mobile phone in your life?"; the proposal would, without any doubt, have been pre-emptively rejected.
More contemporaneously, the advent of 'smartphones' into the market did not face the same kind of hurdle. The telcos easily capitalised on their earlier marketing coup, the population having become naturalised to the need to carry around small pocketable communication devices. However, a similar kind of resistance does now seem to affect the reception of such technological advances as Google Glass into the marketplace. Wearing the Glass, it is no longer possible to maintain the pretence of undivided attention to the person directly in my midst, and it represents a decisively new kind of intervention of technology into the social sphere. Perhaps eventually this resistance too will be successfully overcome by advertising, and we will all be walking around with digital prostheses routinely strapped to our eyeballs. Or is there a threshold beyond which technological incursions on our bodies, rather than merely into our pockets, become morally or aesthetically intolerable?
Or perhaps we have just ceased to be intrigued by the innovations that technocracy, in its endless need to service growth in the economy, continues to throw at us – we are no longer wooed by the prospect of gadgets possessed of *artificial intelligence* because experience shows that, for the most part, they are not quite fully responsive to the nuances of our day-to-day requirements, and the inevitable further trade-off against sociability with a product like the Glass is unjustifiable. Or is it the case that we have simply become frustrated because the 'sentient being' we expect may be lurking in the machine is unable to understand a joke?

Some Assumptions of Computational Linguistics

Whatever it is that forms the kernel of our resistance, for many theorists in the cognitive sciences, the failure of current incarnations of machine intelligence to reach any kind of parity with human intelligence (for instance in the tendency of products like Siri to make glaring inferential errors in response to the most mundane queries; or its failure to apply context intuitively in order to resolve ambiguities which follow from the polysemous nature of certain words) is due principally to limitations in current hardware capacity, and such shortcomings will be overcome following projected exponential improvements in hardware design and capacity. So, as our minds seem to be uniquely interwoven in our personal and emotional experience, is all that is preventing us from forming satisfying interpersonal relationships with our digital devices simply the problem that computers are just not yet able to do computation fast enough? That seems to be the implication of recent narrative excursions into the domain of artificial intelligence as exemplified by the movie Her, where the protagonist, at some imagined not-too-distant future time, enters into just such a relationship with the 'OS' of his personal computer (nonetheless, the voice he falls in love with is the disembodied voice of Scarlett Johansson – reading from a script, written by another actual human – rather than that of an inanimate machine responding to its own self-instructions).
The expectation that such spirited congress of humans with machines might become realisable at any time in the near future is predicated on an assumption that both brain and mind (including language and emotion) may be fully describable within the terms of the current state of scientific knowledge – that is, according to the 'known laws of physics' (which underpin all the other sciences). The brain is understood as a biological organ whose cognitive functions are rooted in computational processes. Computation implies a linear sequence of logical operations on data values, with predictive, or algorithmic, properties. Hence, it is envisaged that the entirety of the brain's cognitive functions might be reproduced in the form of commercial electronics. In this projected model the role of 'mind' tends to be represented as the equivalent of a collection of programmable software running on the 'hardware' of the brain. Hence, on the provisos that everything relating to the brain's intellectual operations can be reduced to the level of computation, and that mind can be understood as a collection of algorithms, contemporary shortcomings in the practical implementation of this model of intelligence can be interpreted as a merely quantitative deficiency in the underlying mechanism.
The validation of this last principle will depend upon whether it is possible to determine a computational basis for language, for while computers may seem capable of conducting most routine computational tasks with consummate speed and accuracy, they are beset by recondite problems in interpretation and usage in their attempts at natural language processing.
Chomsky's 1950s research, which I mentioned earlier, can be viewed as an attempt at a quantitative analysis of natural language, specifically that of English, in terms of its grammatical 'phrase structure'. It was an attempt at 'predictive enumeration', that is, to analyse the logical relationships between a finite set of observed sentences, and a projected infinite set of possible 'legal' sentences, in such a way that the natural language could be modelled in ways conformable to an automated computational process, commonly represented in the form of a Turing machine.3 A Turing machine is a hypothetical machine model which cognitive and computer scientists employ to decide upon the computability of functions. It stands as the technical model for all computer algorithms, as a means of representing functions in a form suitable for processing by potential digital computers. Such computable functions are defined as recursive functions. Recursive functions are those in which the definition of the function includes an instance of the function 'nested' within itself – that is, they are defined self-referentially.4
For Chomsky, the phrase structure of natural language permits analysis in terms of a recursive function – as sentences may be constructed, for example, in the form of: Alice thinks that size is everything; the smaller component grammatical sentence size is everything is a case of a discrete function nested recursively within a larger (self-same) sentence function. Chomsky et al5 identified an analogy between recursive sentence construction and the set of natural numbers, which they refer to as "discrete infinity". The set of the natural numbers is subject to a recursive definition: 0 is a natural number defines the nested base case as a discrete whole number, and the remainder of the series is defined as the succession of each natural number by another whole number formed by adding '1'. The resulting set of discrete whole numbers is an infinite one. Analogously:
"Sentences are built up of discrete units: there are 6-word sentences and 7-word sentences, but no 6.5-word sentences. There is no longest sentence (any candidate sentence can be trumped by, for example, embedding it in "Mary thinks that . . ."), and there is no nonarbitrary upper bound to sentence length. In these respects, language is directly analogous to the natural numbers..."6
The quotation is from an article published in 2002, and the use of words as the unit division does not really convey the substance of Chomsky's 1950s research, which was generally addressed at "phrase structure grammar", with phrases, or subsidiary sentences, forming the unitary divisions. For a sentence to be infinitely extendable, the minimal unit must be a phrase (Mary thinks that..), as successively adding single words will not result in successive grammatical sentences. As a means of analysing recursive sentence construction (i.e., sentences within sentences) we cannot describe words as 'units' because an individual word does not have the grammatical integrity of a sentence.
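The kind of recursion at issue is easily sketched (a toy Python illustration of phrase-level embedding only, not a rendering of Chomsky's formal grammar):

```python
def embed(sentence, depth):
    """Recursively embed a sentence inside the phrase 'Mary thinks that ...'."""
    if depth == 0:
        return sentence
    return "Mary thinks that " + embed(sentence, depth - 1)

print(embed("size is everything", 2))
# -> Mary thinks that Mary thinks that size is everything
```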
The choice of words as the minimal units in the above quotation, while somewhat misleading, seems to have been made with the aim of simplifying the demonstration, because single words exhibit greater apparent integrity as units than do the several words that constitute a phrase. Generally speaking, the attribution of 'unity' to any object implies that the object is 'integral with itself', capable of functioning independently of its specific location, with no unresolved external dependencies. In terms of recursively defined series, if we refer to elements which are nested within the larger series as 'units', we end up with units inside other units, which implies a contradiction.7 In terms of linguistic constructions specifically, the attribution of unity to internally nested phrases implies that the 'unit' has at least quasi-independence from determinations of external syntax and of context, a consequence which does not really have any ecological validity with respect to the communicative content of natural language utterances. Hence the emphasis upon isolable functional units within language suggests that the units themselves exert a principal causal or intentional influence upon meaning, and encourages the tendency for both context and global syntax to appear as concatenated effects of language, rather than, respectively, as its structural conditions and motivation.
To make this point more explicitly, consider the following examples with a view to analysing their meanings in relation to context. Take the sample sentence already referred to above: Alice thinks that size is everything. We can compare this sentence with another one – for instance: When estimating the total yield from an oil field, size is everything. Both examples contain the identical subsidiary grammatical sentence: size is everything. The first example might have appeared, for instance, in a commentary on the tale Alice in Wonderland; the second one perhaps in an article discussing geological surveillance techniques. In terms of the role of the subsidiary sentence within each of these larger narratives, there is little that is shared between the two complete sentences, except for a degree of hyperbole (whatever the relevance of size in either case, it is unlikely to be literally 'everything'). In the second example, the meanings of each of the nouns size and everything can be inferred locally from the preceding phrase. In the first example, however, the meanings of the nouns are quite ambiguous. Are we to infer that size relates to Alice's own bodily proportions (an inference in conformance with what we already know about the story), or is the writer implying that Alice has made some kind of philosophical abstraction from her own experiences about the nature of 'things' in general? We can only know what is intended by these words with reference to the larger narrative of the commentary and possibly to the original story itself. If while reading the commentary we came across the sentence Alice thinks that size is everything, we would most likely have already been prepared for the meaning; which is to say that the meaning does not reside integrally within the sentence, but in a larger non-linear narrative space. In Chomsky's terms, both instances of size is everything are functionally equivalent (though not necessarily identical), because the semantic potential of the subsidiary sentence is understood to be a factor of its discrete grammatical integrity. In this view meaning is seen to derive from the presence of meaning-full units within a linear arrangement of formal grammatical units, rather than from contextual references within a network of idiomatic associations, such as the non-linear associations suggested above for the example Alice thinks that size is everything. In the case of natural language utterances such as this, a functionalist analysis of grammatical phrase structure, in terms of its amenability to a computational structure, is insensitive to context, and therefore will not provide a framework for the accurate parsing of meanings. The discrete integrity of the phrase is not a key to its meaning and, furthermore, will not enable a computational device to distinguish instances of literalism from the instances of hyperbole exhibited in either of the two examples given above.
There is a clear tendency within the cognitive sciences to describe linguistic subdivisions as discrete functional units, in spite of the fact that notions of functionality in this sense have no real relevance to the construction of meaning in natural language. I suggest that this tendency is dictated by the requirement for the analysis to conform to the structure of the established and preferred model of the Turing machine, and therefore to render the analysis amenable to a computational structure, rather than by any ontological correspondence such a model may have to organic natural language processes.

Turing Machines and Logical Inconsistency

The Turing machine operates on the basis of discrete data values, represented by variable strings of '1's separated by non-data-bearing '0's, arranged in discrete cells upon a one-dimensional tape. The machine's repertoire of actions (read/write/move-left/move-right/stop), and the set of states it may occupy, are likewise finite and discrete. One cannot make language computable unless it also conforms to this structure. But the assumption that something functionally corresponding to this array of mechanical logical procedures must lie at the root of cerebral linguistic processes is entirely a deductive inference, without any empirical evidence in support of it. After all, how could one possibly arrange an experimental scenario, involving molecular examination of brains engaged in language production, which could provide any such empirical evidence?
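By way of illustration only, the following sketch (in Python, rather than in the formal notation of Turing's paper) renders the structure just described: a sparse tape of discrete cells, a finite table of rules held apart from the tape, and the elementary read/write/move/halt actions. The function name and the rule-table format are my own assumptions, introduced purely for demonstration.

```python
# A minimal sketch of the machine described above; illustrative only.
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Run a machine whose table of rules is supplied separately from its tape.

    `rules` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (left), +1 (right) or 0 (stay in place).
    """
    cells = dict(enumerate(tape))            # discrete cells of a one-dimensional tape
    for _ in range(max_steps):
        symbol = cells.get(head, "0")        # unwritten cells behave as blank '0's
        if (state, symbol) not in rules:     # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += move
        if state == "halt":
            break
    return "".join(cells[i] for i in sorted(cells))
```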
Unlike the semantic elements of natural language, data values in the Turing machine are static and functionally invariable – logical consistency demands that, according to the current state of the machine, the action it will take (or the value it returns) upon reading, for example, a '0' following a string of five '1's in succession, is fixed and invariable wherever the sequence may appear on a particular machine's memory tape. On the basis that Turing machine computability relates to recursive functions, this is analogous to saying that, in the set of natural numbers, the value of the integer '4' (represented as a string of five '1's, in the unary convention in which the integer n is written as n+1 strokes) is proportionally consistent (i.e., by definition, logically consistent) in its relation to all other natural numbers. This much appears to be uncontroversial. However, for a particular Turing machine, its instruction to act in a certain way following a string of five '1's is dependent on the specific machine's table of rules (its program), which does not reside on the memory tape, but somewhere external to it.8 That is to say that logical consistency is not an integral feature of the data itself, as it is dependent upon a system of rules necessarily located remotely from the data. Those rules however must be unique for each individual Turing machine (algorithm), so that the data residing on a machine’s tape acquires its logical consistency only by virtue of an explicit or implicit reference to that particular machine's table of rules.
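To make the dependence upon the external table of rules concrete, the following usage of the sketch above supplies one and the same tape (a block of five '1's bounded by blanks) to two hypothetical machines differing only in their rule tables; the tape acquires a quite different import in each case. Both rule tables are invented for the purpose of illustration.

```python
# One tape, two hypothetical tables of rules (uses run_turing_machine from the sketch above).
tape = "0111110"    # a block of five '1's bounded by non-data-bearing '0's

# Machine A: on reaching the end of the block of '1's, extends it by one further '1'.
rules_a = {
    ("start", "0"): ("0", +1, "scan"),
    ("scan",  "1"): ("1", +1, "scan"),
    ("scan",  "0"): ("1",  0, "halt"),
}

# Machine B: erases the block entirely, overwriting each '1' with a blank.
rules_b = {
    ("start", "0"): ("0", +1, "scan"),
    ("scan",  "1"): ("0", +1, "scan"),
    ("scan",  "0"): ("0",  0, "halt"),
}

print(run_turing_machine(rules_a, tape))    # -> 0111111
print(run_turing_machine(rules_b, tape))    # -> 0000000
```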
Natural numbers are commonly represented in decimal notation, but they may also be represented in any other number-base: binary, octal, duodecimal (base-12), etc. If one wished to design a Turing machine with the task of outputting the sequence of natural numbers between '0' and '10' in decimal, its table of rules would need to make explicit the rule that the maximum writable digit is '9' by specifying exactly nine iterations of its incremental function, before that digit must 'roll-over' and revert to a '0', and a new digit be spawned to the left with the value '1'. In general, when working in decimal, there is no need to state the rules explicitly – they are just assumed, as decimal is the conventional default system of numerical notation. However, making those rules operationally explicit, in conformance with the requirements of the Turing machine model, helps to clarify that the logic of those rules must be unique. Hence, it follows by definition that the proportionality of numerical values expressed in decimal must also be considered as a unique property that accrues to those values by exclusive virtue of the fact that they are expressed in decimal notation, and hence that proportionality cannot be considered to be freely transferable across diverse numerical radices.9
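As a rough sketch of that explicit roll-over rule (expressed here as an ordinary Python procedure rather than as a literal Turing machine table, and generalised by a base parameter of my own devising):

```python
def increment(digits, base=10):
    """Increment a little-endian list of digit values, making the roll-over
    rule explicit: each position may only hold the values 0 .. base-1
    (in decimal, a maximum writable digit of '9')."""
    i = 0
    while i < len(digits):
        if digits[i] < base - 1:
            digits[i] += 1
            return digits
        digits[i] = 0            # the digit rolls over and reverts to '0' ...
        i += 1                   # ... carrying into the next position
    digits.append(1)             # a new digit is spawned with the value '1'
    return digits

# Counting from '0' up to '10' in decimal, as in the example above:
n = [0]
for _ in range(10):
    n = increment(n, base=10)
print("".join(str(d) for d in reversed(n)))    # -> 10
```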
The choice of decimal as the default system of notation is made on the basis of an external arbitrary rule (analogous to the external location of the tables of rules for specific Turing machines) – we could be using any other numerical radix as a default; and indeed we do in fact use a combination of sexagesimal (base-60), duodecimal, and octal when representing divisions in time. Some native Meso-American cultures (e.g., the Pamean in Mexico) employ octal rather than decimal for everyday counting purposes. The notes of a musical scale form an octal series, as do the separable colours of white light. What is important to emphasise at this point is that the rules which define these various notations are incompatible – which is to say that they are logically and proportionally inconsistent with each other. What has not yet been acknowledged, not only by previous mathematicians, but also by information scientists devising Turing machines, is that the proportional consistency of values in a decimal series is a unique product of the external rule governing the system of available writable digits in decimal notation; and that therefore it cannot be assumed that the relations of proportionality pertaining between values when expressed in decimal will be seamlessly transferable to their numerically equal values when expressed as, say, octal, or as binary, or as hexadecimal values. That prevailing assumption is therefore an error-in-principle.
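For illustration, the hypothetical helper below renders one and the same quantity under the differing digit rules of several radices. It shows only that each radix imposes its own restricted set of writable digits and its own roll-over points – the precondition of the argument above – and is not offered as a verdict upon the question of proportional consistency itself.

```python
def render(n, base, digits="0123456789ABCDEF"):
    """Render a natural number using only the digits the given radix allows
    (a hypothetical helper, for illustration only; supports bases 2-16)."""
    if n == 0:
        return digits[0]
    out = ""
    while n > 0:
        out = digits[n % base] + out
        n //= base
    return out

for base in (10, 8, 2, 16):
    print(base, render(42, base))
# -> 10 42 / 8 52 / 2 101010 / 16 2A
```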
Logical consistency in the Turing machine corresponds to proportional consistency in the set of natural numbers, which according to the analysis above cannot be considered as freely transferable across diverse numerical radices. Therefore, analogously we can say that the logical import of data values in a Turing machine arises uniquely out of the relationship between the specific machine's table of rules and its memory tape, and cannot be exported to another Turing machine operating upon a different set of rules (however the corresponding data values are translated and represented in the new machine) without consequently incurring a failure in logical consistency.10
The upshot of this for natural language processing is that there can be no logically consistent universal computational algorithm suitable for encoding even the English language into machine-readable form, for the reason that language is never functionally transparent (likewise, the data on a Turing machine's tape is not transparent when viewed in isolation from the machine's unique table of rules). Analogous to this is the fact that the logic of any specific natural language utterance will be determined by non-universal discourse-specific rules according to the cultural, academic, or professional affiliations of its users.11

"Universal Computation" as a Grandiose Conceit

"... I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted."12
These prophetic words are those of Alan Turing, written in 1950. What is important to note is that in the same article Turing had already abandoned any serious enquiry into the nature and speciality of human thinking, deciding that to pose the question 'Can machines think?' to a contemporary public consensus would have been "meaningless", and tantamount to an invitation to ridicule. Turing's bold prediction therefore anticipates a radical change – less perhaps in the capabilities of machines themselves than, more crucially, in the use and definition of fundamental words and concepts.13
The meaning of the word 'thinking' (or its linguistic antecedents) had probably not undergone much substantial change at all for centuries, or perhaps even for millennia; so for Turing to anticipate that the idea of a 'thinking machine' could undergo such a seismic alteration (from the ridiculous to the sublime) within a relatively brief span of fifty years, was to place an inordinate degree of faith in the power of technological advancement – a confidence that has since reached the status of a virtual hegemony amongst a majority of cognitive, computer, and neuroscientists around the world.
After the passage of seventy years, are we now any closer to the realisation of Turing's self-fulfilling prophecy?
It is true that the general notion of 'intelligence' (as a corollary of 'thinking') has changed somewhat, so that we now frequently find the word being used to indicate the mere possession of, or access to, valuable recorded information, with less emphasis upon the traditionally human processes of reflective or creative understanding – a shift from a dynamic, intellect-based, and performative definition of intelligence, to a static, digitised, information-based one. Even so, the school of Artificial Intelligence has recently decided that it is appropriate to invoke a two-tier qualitative distinction in its program, between 'AI' – which has become familiar to us to the point of banality in the various commercial implementations of 'smart' technology (largely an engineering project, sustained by virtue of the evolved, static notion of intelligence) – and 'AGI' ('artificial general intelligence'), which aims at the creation of machines possessed of creative and reflective aspects of conscious self-awareness comparable to those of the human mind.
Even amongst experts in the field however, the question of how such AGIs will become technologically feasible remains rather vague and ill-defined. Remarking upon the fact that AI has made no progress whatever (towards AGI that is) during the six decades of its existence, Professor David Deutsch, a physicist at Oxford University, wrote:
"Despite this long record of failure, AGI must be possible. That is because of a deep property of the laws of physics, namely the universality of computation. It entails that everything that the laws of physics require physical objects to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory."14
I am not aware that "the universality of computation" had been identified as a "deep property of the laws of physics" prior to Turing's formalisation of the principle of computability in his 1936 paper (see note 3). Since Turing, computation has come to be understood as a property of information processing systems, in so far as those systems are designed to process solutions to problems identified in the management and assessment of information. It involves the establishment of certain well-defined functional procedures, and therefore depends upon the recognition of certain values or conditions which, when subjected to such a predefined and finite procedure, will result in a new set of values or conditions, at which point, having arrived at a solution, the computation terminates.
It seems that Prof. Deutsch's statement can be interpreted in two ways. Either he is intending to say that computation is a universal feature of human (i.e., physicists') understanding of the laws of physics in action, as they may be observed in a series of finite scenarios; or (which is a much bolder statement) he is suggesting that computation is actually a bona fide physical principle in itself, part of the very structure of the universe, as it were independently of any human process of observation and understanding of physical events. This ambiguity is implicit in Prof. Deutsch's statement, and I believe that he is quite unaware that the ambiguity exists.
To be clear, when Prof. Deutsch refers to "the laws of physics" he means the known laws of physics which, as we are all well aware, are a creation of human scientific endeavour, and are subject to historical revisions. For instance, the quantum mechanical properties of light and the value of the speed of light as a fundamental measurement constant were not a part of the known laws of physics until after Einstein's theory of Special Relativity. It is not out of the question that future developments in physics will lead to revisions or reversions in the established laws, in the same way that quantum mechanics leads to a theory about light which includes principles incompatible with either the wave theory of light which preceded it, or Newtonian corpuscular theory before that.15 In this sense, Prof. Deutsch's phrase: "everything that the laws of physics require physical objects to do" tends to put the cart before the horse, as it is the historically noncontinuous 'known laws of physics' that have followed from observations made upon physical events, and which in any given period are constrained within the limits of contemporary understanding.
I have no objection in principle to the first interpretation mentioned above. Computational algorithms may indeed have become an indispensable tool for physicists in their understanding and interpretation of natural and physical phenomena, based upon their set of finite observations of those phenomena. However, in order to undertake such computational analysis it is first necessary to establish a finite procedure, which will describe the transition from one physical state of affairs to another. There is no such thing as a continuous computation (one which did not terminate would merely result in a closed infinite loop). The identification of discrete physical states of affairs (of ends and beginnings) is purely a factor of human understanding, and the need for a function to finitely resolve the transition between a beginning and an end state does not occur in nature. To ascribe computation to natural physical laws as a "deep property", independently of the human need to understand those laws, is therefore teleological, as it imputes to nature notions of purpose and required ends, which can only be justified by an appeal to some sort of divine will.
It is incorrect therefore to claim that AGI must be possible by an appeal to the metaphysical notion of the "universality of computation", which ignores the fact that there are no universal rules of computation. Such a statement does little more than arbitrarily perpetuate the wish-fulfilling and self-fulfilling prophecy initiated by Turing.
The ambiguity in Prof. Deutsch's statement tends to blur the distinction between the inherently limited and contingent need for human understanding of the laws of physics in action on the one hand, and the absolute, transcendent, and universal explanatory power of the laws so perceived on the other; the effect of which is to grant to physics an authority over knowledge which it could never and should never justly deserve. To adopt such a position is all very well for a physicist, as it gives to physicists as a community the grandiose privilege of explaining the universe to us mere mortals, through exclusively reductive functionalist principles, in obeisance to which authority all other systems of knowledge must ultimately pale into insignificance.
By ambiguously incorporating computation into the laws of physics, and by implication into the structure of the universe, Prof. Deutsch is actually implying something along the lines of: 'Computation is everything', or at least: 'Everything can be understood through computation'. But the computational procedures employed in the understanding of exemplary physical scenarios, let's say at the quantum mechanical level, have no universal applicability – they are meaningful in terms of quantum mechanics alone, as they include references to entities as 'units' which have logical import only in the field of quantum mechanics. They cannot be applied with logical consistency to the computations of Newtonian mechanics for instance, which still retain explanatory value at the macro scale. As the rules of computation therefore do not apply consistently even across all the sub-domains of physics, computation considered as a 'universal property', i.e., considered independently from its required unique set of governing rules and entity definitions, becomes a rather empty term, as all that the residual term implies is that there are identifiable problems that require identifiable solutions – which indeed we might recognise as a universal characteristic of all kinds of human situations. To describe such a universal characteristic in terms of a "deep property of the laws of physics" seems however an exercise in hyperbole.

Enlightenment Reason as a Cipher for Metaphysics

In the previous section, I identified an instance of a principle (the supposed "universality of computation") being incorporated into the laws of physics, by a Professor of Physics who clearly has an abiding preference in favour of such a validation, in order to provide authoritative support for a speculative hypothesis about the capabilities of future machine technology – a hypothesis which derives solely from metaphysics (i.e., not from any empirical method of proof). Moreover, Prof. Deutsch does not even acknowledge, and seems to all intents and purposes to be quite unaware of, the difference between a metaphysical principle and an empirical observation.
This failure to recognise an implicit dependence upon metaphysical principles within scientific deliberation is not an isolated instance however. Both mathematics and physics, along with the other natural sciences to the extent that they rely upon fundamental mathematical and physical principles, routinely employ metaphysical principles in their patterns of explanation, in order to establish the ground rules for their respective practices, while usually insisting upon the empirical validity of experimental findings – and they do so in a way which systematically avoids (because it cannot be conducted empirically) a scientific approach to an understanding of metaphysics as a system of thought.
In the doctrines of scientific method, one will frequently come across appeals made to scientific Reason, as a governing principle in the formation of scientific judgements. Reason usually implies the employment of the principles of logic, proportion, and rationality, in assessments made upon experimental data, as a safeguard against the intrusion of subjective bias, or even of superstition, into scientific deliberation. During the 17th and 18th Centuries, at the time of great and systemic changes within European Enlightenment Science (or 'natural philosophy' as it was then known), Reason became allied principally to the evidence provided from sense-data: in particular the visual sense. Under the influence chiefly of Francis Bacon's inductive methodology16, which established radically new approaches to the collection and interpretation of empirical data, a systematic attempt was made to rid scientific method of centuries-old habits of metaphysical (or 'syllogistic') reasoning by eliminating judgements based on intuition – in effect an attempt to preserve the 'objectivity' of raw sense-data from interpretations by the mind. Intuition, in so far as it tended to generate metaphysical conclusions, became identified as a source of error, or misguidance, in the practical applications of science. For the Greeks however, who were the progenitors of the concept of Reason inherited by Enlightenment scientists, intuition had been an essential component in the application of Reason, without which there could be no certain knowledge of Nature.
To the extent that all the sciences rely upon universal rules established within mathematics and physics, it is unlikely that such an attempt to eradicate intuition from scientific judgements could ever have been carried to completion. It is indisputable that mathematics, at least, relies upon core principles which cannot be derived empirically – a priori logical concepts such as number, function, relation, infinity, equality, etc.; which concepts therefore must be admitted into the canons of science as the pure forms of logic. The logically pure concepts of mathematics must be exercised through the intuition – the concept of number (in general) is formed without reference to any specific (empirical) instance of quantity and is applied intuitively. The important consideration is how such concepts arise in the mind, since their stability is unaffected by experience, or the information gathered from sense-data. It appears at times that experience must even be mitigated to conform to intuitions which arise out of the core principles of mathematics and physics. Therefore, if intuition might conceivably play an important role in the regulation of sensory experience, how reasonable was it for Bacon and his adherents to repudiate intuition as the chief source of error in pre-Enlightenment science?
The English empiricist philosophers John Locke and David Hume had arrived at the conclusion that human understanding prior to any sensory experience was impossible17, and this encouraged the idea that the contents of the mind could therefore be considered wholly as the accumulated results of experience. Intuition, although acknowledged by Locke only in the later parts of his treatise on human understanding as that faculty upon which "depends all the certainty and evidence of all our knowledge"18, then appears as a learned capacity, which is derived a posteriori to experience. Hence intuition, at one stage removed from direct experience, can also appear as a potential source of error, since the more reliable route to greater accuracy and utility in practical knowledge would seem to lie in the restless expansion of data acquired through direct observation of nature.

Empiricists were keen to dispel the theory that human understanding develops on the basis of certain innate ideas – a philosophy which derives from Plato, and which was popular amongst European rationalists. For the empiricists, the belief in innate ideas was a source of mysticism, and tended to reinforce metaphysical principles dogmatically, resulting in inertia and stagnation in scientific thinking. Hence Locke begins his Essay Concerning Human Understanding (1690) with the premise that there are no innate principles or ideas in the mind prior to its reception of the data from sensory experience. The mind of a newborn child was essentially a tabula rasa (Locke used the analogy of a sheet of blank paper) upon which would be written the "simple ideas of sensation", and around which the principles of the understanding would be subsequently constructed. The initial condition of the mind is therefore perceived to be one of pure receptivity.
In dissatisfaction with this argument, Kant had proposed (nearly a century after Locke's Essay..) that on the basis of experience alone one could not explain the formation of the categories of reason which distinguish between necessary and contingent truths, without which it would be impossible to arrive at the idea of a universal law – such laws must have transcendental potential, and be capable of being applied a priori to experience.19 Similarly, the principle of causality entails the idea of an effect being "posited by and through the cause and resulting from it", according to the principle of necessity, which could not be arrived at by empirical induction, as this would only show an effect as "merely annexed to the cause", i.e., contingently. Regardless of the frequency with which one might witness the same relationship between comparable events, one could not acquire merely by numerical addition the dignity of a necessity required to transcribe the relationship as a universal law – it requires a 'leap of faith', rather than simply an increment to experience.20
For Kant, all concepts of pure reason, exemplified by the pure a priori logical concepts of mathematics must, by definition, have the capability to transcend experience; otherwise we would be continually faced with the prospect of experience undermining reason, and there would be no grounds for certainty. There must therefore be a primary mode of pre-cognition (intuition), which is not determined by experience (through sensory perception), but which nevertheless continually seeks to prove itself (to represent itself) in relation to experience:
"The "I think" must accompany all my representations, for otherwise something would be represented in me which could not be thought; in other words, the representation would either be impossible, or at least be, in relation to me, nothing. That representation which can be given previously to all thought is called intuition. All the diversity or manifold content of intuition, has, therefore, a necessary relation to the "I think," in the subject in which this diversity is found. But this representation, "I think," is an act of spontaneity; that is to say, it cannot be regarded as belonging to mere sensibility."21
Kant is suggesting that intuition is not, as it appears in the final parts of Locke's Essay.., a post-hoc refinement of a matured (or perhaps misguided) understanding, but rather a faculty which operates at the roots of the understanding, a priori to all sensory experience. The key principles underpinning the subject's capacity for representing experience to itself are the primary intuitions of space and time; which are not to be conceived, as is perhaps customary amongst physicists, as concepts which may be deduced purely empirically (since there are no sensible material properties belonging to either space or time in themselves), but rather as the "pure forms of sensible intuition", which form the 'seat of consciousness' (a synthesis of internal and external apperceptive states with respect: a) to time, as the internal experience of succession; and: b) to space, as the external condition for the perception of objects), and unaccompanied by which no experience could ever assume form as a coherent representation for the subject.22
Kant maintains a necessary twofold distinction between cognitions of objects through means of sensory perception, and non-sensible ideas of 'things in themselves', where the latter are apprehended purely intellectually. As the objects of sensory experience are apprehended by us necessarily through the 'manifold of the intuitions of space and time', he argues that we cannot acquire any speculative knowledge of objects as things in themselves, but only as phenomena conditioned through the intuitions of time and space, and consequently as modes of mental representation. It is through these intuitions that we understand the principle of causality in nature, and all objects existing as material phenomena are subject to determination by external causes. Conversely, the mind cannot be apprehended through sensory perception, but only immanently, from within. Therefore, if the mind is to be understood as possessing the capacity for free will, it can only be so conceived as a thing in itself, that is, intellectually. If we do not maintain a categorical distinction between sensible objects as material phenomena and ideas of things in themselves, and attempt to view the mind as a phenomenon like any other, this must be to subject mind to external causal determinations – to make it a mere effect of experience – and which will negate its capacity for freedom.23
In Locke's analysis of human cognition, sensory perception, together with reflections upon ideas derived from sensory experience, are the formative principles of all understanding. The "simple ideas" of objects (or of their attributes) derived from sense-impressions are distinct "positive" or "absolute" ideas of "things in themselves"24; and are contained in the mind in abstraction from the causal relationships in which, as empirical objects, they are necessarily embedded. To understand the relations between objects, or between objects and properties, such as the relations of causality, is to "superinduce" something extraneous onto "the real existence of things"25, which otherwise have a kind of free-floating independence in the mind, as positive ideas of things in themselves. This is in contradistinction to Kant's view, in which it appears impossible to form any positive cognition of things in themselves, but only as phenomena conditioned through modes of representation, and hence also the determinations of causality. For Kant, with regard to the objects of sensory experience, the unconditioned 'thing in itself' cannot be thought without contradiction.26 Thus, the granting of transcendental positive truth-value to the simple ideas derived from sensory experience in Locke's analysis is illegitimate, as it fails to appreciate that the conditions for the reception of such impressions are a priori faculties of the understanding, in particular the intuitions of space and time; such that it is fair to say that the understanding operates, in some degree, as the author of its own experience. To the extent that any understanding of empirical relations is grounded upon the metaphysical synthesis of the intuitions of space and time, analyses of human understanding which exclude metaphysical considerations must remain indifferent to their own non-empirical foundations.
The history of empiricism from the 17th Century onwards is the history of this indifference. The simple ideas of objects and of their attributes, acquired through sense-perception, serve as the basic units for an instrumental interpretation of the world according to a new system of logic. While certainty in natural knowledge had previously depended upon a contractual acknowledgement of the limits of knowledge ultimately derived through intuition, this began to appear as an impediment to advancement of Science in its practical mastery over nature. Empiricism was to invigorate scientific method by the implementation of a new system of logic whereby certainty is granted instead through the direct correspondence of distinct ideas with their empirical referents. By breaking down the structure of human understanding to its discrete positive components, that is, to those elements that could be assured to derive purely from 'unmediated' sense-perception, concerns over subjectivism or over the non-empirical antecedents of pure reason, were effectively abrogated. While the formative principles of the new system of logic may have continued to be scrutinised within philosophy, and within the arts (in particular by Romanticism), the post-Enlightenment schism of science from philosophy meant that, in instrumental terms, the empirical sciences became institutionally immune to the need to reflect upon, or even to comprehend, their own metaphysical foundations.

Conclusion: There is No Algorithm

Although empiricism has its origins in Aristotle and Stoic philosophy, Locke is generally acknowledged as the founder of British empiricism in its modern form. It would be difficult to overestimate Locke's influence, not only upon the Sciences, but also upon social, political, and economic theory since the end of the 17th Century. If one wished to identify a single most important contributor to modern secular-humanist and libertarian Capitalist thinking, and to contemporary definitions of wealth, property rights, and individual liberty, that person would be Locke. He is also cited as a significant influence upon the American Declaration of Independence and on the formation of the United States Constitution. He is described by Thomas Jefferson, together with Bacon and Newton, as:
"[One of] the three greatest men that have ever lived, without any exception, and as having laid the foundation of those superstructures which have been raised in the Physical & Moral sciences [...]".27
There is a more or less unbroken line of influence following from Locke's philosophical positivism, through Hume and Berkeley in the 18th Century, to Mill's utilitarianism in the 19th, to the analytical philosophy and logical positivism of Wittgenstein, Russell, and the Vienna Circle in the 20th. However, it is the attempts at radical formalisation of thought and of language by logical positivists in which Locke's influence is most poignant, and which furnished the key epistemological premises informing 20th Century developments in the information and cognitive sciences. For example, in Turing's early speculations upon the criteria for designing machines with the capacity to imitate human intelligence, he wrote:
"Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer's. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed."28
"Rather little mechanism, and lots of blank sheets", which adopts exactly Locke's analogy for describing the initial state of pure receptivity of the child's brain, so that the ideas the child receives are here conceived as direct positive imprints from a reality pre-given to sensory experience. It is significant in this context that Turing apparently ignores the fact that the brain of a child must already be replete with highly sophisticated mechanisms for controlling its diverse bodily functions, even from a point in time well before its birth; yet at the same time, he supposes, it is mysteriously vacant of mechanisms with respect to its intellectual faculties.
Turing clearly thinks it expedient to the acquisitive task of 'obtaining the adult brain'29 to ignore the "rather little mechanism" (whatever that might consist of), while actually having no idea of its complexity or its structure. This attitude arises from the premise that the mind can be conceived as a collection of procedures ('mechanisms') which develop as a posteriori solutions to problems in the adjudication of sense-data; so that the mind tends to be interpreted as an assemblage of ad hoc puzzle-solutions. But if this is how we are to understand how the mind develops, what then gives it the organisational will to derive meaningful solutions out of the disorder of its earliest sense-impressions?
To return to the shared analogy of the blank sheets of paper, it matters little which particular analogy one employs here, so long as one bears in mind that even a blank sheet of paper exhibits both a structure and a set of properties, that is to say it is a medium, which therefore mediates (rather than simply 'reflects') whatever is transcribed upon it. Apparently, neither Locke nor Turing can conceive of a mind without structure and properties prior to its first sense-impressions, hence their need for the analogy; yet both are content to assume that understanding (or thinking) owes nothing at all to these factors, and that it arises purely as an accumulation of a set of reflections of the data acquired from unmediated sense-impressions.
The problem for Turing, and for the cognitive sciences generally, is that whatever the "rather little mechanism" might consist in, it is not something that might conceivably be investigated materially, or empirically, since there is no way to define the physical limits of a consciousness. In the ensuing project to design an inanimate machine with something-akin-to-a-capacity-for-thinking, the only way to approach the problem is from the somewhat subjective perspective of the contents of thoughts, or of perceptions, followed by an attempt to 'reverse-engineer' the thinking apparatus through the designed complexity of multiply parallel logical procedures (algorithms) performed upon static data values. There is the expectation that some form of analogue of consciousness, or at least the appearance of thinking, will simply arise therefrom, as a kind of 'accident' of inbuilt complexity. That expectation relies essentially upon a tenuous functional analogy between the contents of human thoughts and data objects stored electronically – an analogy which finds its philosophical precedent most poignantly in Locke's positivist epistemology regarding sense data.
This projection about the appearance of thinking, and how to model it, includes the premise that all meaningful thinking bears a positive, ontological correspondence with objective reality, and can be reduced to statements of propositional logic, for example of the kind: "The Prime Minister is not bald", where the meaning is verifiable by some appeal to direct sensory observation; and that logical analysis may be applied to the language of thought to arrive at judgements of truth and falsity in much the same way that pure logic is applied to mathematical statements. Thus, the computational theory of mind addresses the domain of thought exclusively in terms of its functional role in assessing truth statements about the observable world. Logical positivism treats statements of the kind: "There is honour among thieves", for example, in which there is no logically derivable truth value, as metaphysical pseudo-statements, which are 'meaningless', since there is no entity corresponding to the concept of 'honour' which may be positively identified from sense-data. It may be helpful to point out that there is likewise no observable entity corresponding to the idea of 'mind' – that which the computational theory of mind nevertheless takes as its object.
The problem of the computational approach therefore is that it takes as its sole domain for the genesis of ideas the data provided from sensory experience, because it is committed to a positivist epistemological perspective (outlined in the previous section) which states categorically that this is the only and original source of all human ideas. Hence it excludes from the constructions of thought the possibility of a synthesis of meaning by elements which have no derivation in sense experience, but whose origin must instead be credited to intuition. The question returns again therefore upon the origin and the relative status of intuition in the hierarchy of cognitive processes.
Intuition is that which Kant refers to as a non-empirical source of knowledge, or rather of understanding. Kant gives it a pre-eminence in the structure of human understanding as a necessary precondition for the internal sense of self, and for the external perception of objects – one which has tended to be frowned upon for the past 300 years or so, less so in the field of philosophy, but particularly within the empirical sciences. As to why the faculty of intuition should have become so deprecated, in spite of the fact that much of human discourse and reasoning depends implicitly upon it (even within the sciences themselves), the explanation is likely to be that its priority and place in the order of mental operations was simply inapprehensible to a positivist epistemology prepared to accept as knowledge only that given to it in the form of empirical sense data – while if the intuition is to be understood as Kant would have it, as the very foundation of all empirical knowledge, then by definition it must be in itself both before and beyond the scope of being known empirically.
  1. See: Chomsky, N., Three Models for the Description of Language, MIT, Cambridge, Massachusetts, 1956: http://somr.info/lib/Chomsky_1956.pdf; and: On Certain Formal Properties of Grammars, MIT, Cambridge, Massachusetts, 1959: http://somr.info/lib/Chomsky_1959.pdf. [back]
  2. Rose, S., We are moving ever closer to the era of mind control, The Observer, 5 February 2006: http://www.theguardian.com/science/2006/feb/05/comment.themilitary (accessed 06/12/2014). [back]
  3. Turing machines are hypothetical devices employed as a means of deciding which functions are computable by a potential digital computer. The 'machine' typically consists of the idea of an infinite length of tape marked into squares, on each of which may be printed a symbol, together with a scanning/printing device which may stop at any square on the tape to read or write its content. A square may contain only one symbol, and only a single square may be read at any one time. The functional properties of the Turing machine consist in: a predefined list of discrete states and the ability to change from one state to another; the read/write functions; and the move functions (one square only and in either left or right direction). In addition a Turing machine depends upon a table of rules, which defines the sequence of operations involved in moving from a start-state to a halt-state. In most examples of Turing machines (i.e. those appropriate to the function of digital computing) the set of symbols which may be recorded on the tape is restricted to {0,1} – which is a unary, rather than a binary, notation – the '0's having the property of 'blanks' or spacing-elements between blocks of '1's – the latter signifying 'meaningful' segments of the tape according to the length of the blocks. Turing had referred to his initial proposal of the model as the "universal computing machine" in his 1936 paper: On Computable Numbers, with an Application to the Entscheidungsproblem; Proceedings of the London Mathematical Society, 2 (1937) 42: 230-65: http://somr.info/lib/Turing_paper_1936.pdf. [back]
  4. As a recursive function is defined essentially by reference to itself, through a nested instance of the function, this means that computable functions do not in principle invoke any universally available functional definition – each algorithm is therefore, functionally speaking, unique. Turing's designation of the Turing machine model as the "universal computing machine" is therefore open to some misinterpretation, for the term "universal" relates only to the adaptability of the model to act as a theoretical host for any number of diverse routines, by successive digital encoding. Importantly, the set of rules that define a particular machine's operations upon the data in its memory tape is always necessarily unique, and hence possesses no universal functional applicability. For a detailed description of the Turing machine model and its application to examples of simple functions, see: Barker-Plummer, D., Turing Machines, The Stanford Encyclopedia of Philosophy, Summer 2013 Edition, Edward N. Zalta (ed.): http://plato.stanford.edu/archives/sum2013/entries/turing-machine/ (accessed 09/12/2014). [back]
  5. Chomsky, N.; Fitch, W. T.; Hauser, M. D., The Faculty of Language: What Is It, Who Has It, and How Did It Evolve?, pp.1571-3, Science, Vol.298, 22 November 2002, http://somr.info/lib/Chomsky_et_al_2002.pdf. [back]
  6. Ibid., p.1571. [back]
  7. I feel that the attribution of 'unity' to the items under analysis is a kind of intellectual luxury, and a form of idealism, which permits an understanding of items through artificial abstraction, in isolation from their structural contexts, in which however, in actuality, they are always necessarily embedded. I have made a comparable critique applied to the series of the natural numbers, i.e., with respect to the definition of integers as 'integral wholes' (rather than, as I feel they ought to be considered, 'relative indices of numerical value') and the consequences of this critique for expectations of proportionality in quantitative systems – see: The Limits of Rationality; and: Integers & Proportion; as well as: Radical Affinity & Variant Proportion in Natural Numbers. [back]
  8. Although, in a universal Turing machine (cf. the "universal computing machine", as specified in Turing's 1936 paper (op cit., note3 above), which serves as a theoretical model for what we have come to know as the digital computer), it is possible to encode the instructions for subsidiary individual Turing machines ('programs') hosted in discrete sections at the beginning of the master machine's memory tape (see: Section 4 of: Barker-Plummer, D., op cit.: http://plato.stanford.edu/archives/sum2013/entries/turing-machine/ (accessed 09/12/2014)); the parent machine must still require its own master table of rules, which tells it how to operate upon the encoded child machines. Clearly, the master table of rules cannot itself be located on the universal machine's tape, or the machine would not be able to read its own rules. In my estimation, this appears as a serious oversight in the design of the Turing machine hypothesis, for it is simply taken for granted that the machine 'just knows' the operational instructions in its table of rules, without any provision for how the machine actually accesses those instructions (bearing in mind that the rules do not have universal applicability and must be unique to each machine, or class of machine – the term "universal" in the machine's designation relating to its capacity for encoding subsidiary machines, not the universality of its particular logical mechanism). This assumption that Turing machines are somehow 'divinely' instructed suggests an analogy with the way in which we customarily take for granted the rules of the decimal system as the universally appropriate rules for representing the natural numbers. The choice of decimal is in fact quite an arbitrary one; and in general there is a failure to appreciate that the proportionality attributed to the natural numbers is a unique property of that system – one deriving exclusively out of the rules that define the restrictive array of digits available to decimal notation, and which are therefore inconsistent with those defining alternative numerical radices. This issue is discussed further in subsequent paragraphs. [back]
  9. I have shown elsewhere (see the page: Radical Affinity & Variant Proportion in Natural Numbers) that with respect to the decimal exponential series: 10^z, for z = (0, [...], 10), if one represents that same series across a range of alternative numerical radices (I have used those from binary to nonary – base-9) and then calculates the logarithmic differences between successive integers in each series (i.e., using the derived radical logarithms log_b), the differences are found to be proportionally inconsistent in each case with those in the decimal series (log_10 – where the logarithmic difference between successive exponentials is equal to 1). The logarithmic function is intended to express common ratios of proportion, and radical logarithms (e.g., log_8 in the case of octal) are conventionally derived from 'common' logarithms (log_10) according to the formula: log_8(x) = log_10(x)/log_10(8). The graph of the results for the decimal series is clearly a horizontal straight line at y=1, and if the ratios of proportion were indeed 'common' for the same values when expressed across diverse radices, we should expect to see horizontal straight lines in the graphs for each radical series. The resulting graph in the case of each radical series however displays a series of variegated peaks and troughs, indicating proportional inconsistency. These results confirm beyond doubt that the rules of proportion pertaining between values within decimal notation are inconsistent with those between numerically equal values when expressed in alternative radices – a point which appears to have escaped the attention of mathematicians since the invention of logarithms 400 years ago (for our purposes these findings are of particular significance in the case of both binary and octal notations). [back]
  10. In terms of the practical application of information technologies, the vulnerability to such a failure would relate to all data procedures involving the passing of more than one item of associated data between one digital application and another; for example where a web-server passes data to-and-from a database server. For the most part, where cases are considered in isolation, the effects of such a failure in logical consistency would be unnoticeable (bearing in mind that the issue relates to variations in the comparative relations between data across systems, rather than to phenomenal changes in data elements themselves). However, in general terms, the conglomerate effect would be to undermine the representative value of data processed in this way, in particular where the processing may involve comparative assessments upon quantitative data. [back]
  11. With particular reference to language of scientific communities, see Thomas S. Kuhn's discussion of the breakdowns in communication within scientific communities over competing scientific theories, at moments of paradigm change within the natural sciences. Kuhn makes the point that where 'translation' and conversion to new scientific models are required, the process of persuasion is impeded by the fact that disputants have no recourse to a neutral language by which competing theories may resolve their differences – the terminology through which a scientific community sustains a prevailing paradigm arises implicitly out of its commitment to certain exemplars, or special cases, which are exactly the items thrown into question during moments of revolutionary change within the natural sciences: Exemplars, Incommensurability, and Revolutions, Section 5 of the Postscript to his The Structure of Scientific Revolutions, Chicago UP, 1996, pp.198-204.

    "The commitments that govern normal science specify not only what sorts of entities the universe does contain, but also, by implication, those that it does not. It follows [...] that a discovery like that of oxygen or X-rays does not simply add one more item to the population of the scientist's world. Ultimately it has that effect, but not until the professional community has re-evaluated traditional experimental procedures, altered its conception of entities with which it has long been familiar, and, in the process, shifted the network of theory through which it deals with the world." Ibid., p.7. [back]

  12. Turing, A., Computing Machinery and Intelligence (October 1950), Mind LIX (236), p.442: http://somr.info/lib/Mind-1950-TURING-433-60.pdf. [back]
  13. Turing had instead recast the problem in terms of what he called "the imitation game", whereby he proposed a thought experiment in which a human addresses a set of questions to a remote computer and unwittingly mistakes the responses to those questions as having been given by a human rather than a machine. The scenario has since become known as the Turing Test. For further discussion and criticism of Turing's proposition, see my Is Artificial Intelligence a Fallacy? [back]
  14. Deutsch, D., Philosophy will be the key that unlocks artificial intelligence, The Guardian, 3 October 2012: http://www.theguardian.com/science/2012/oct/03/philosophy-artificial-intelligence (accessed 30/12/2014). [back]
  15. Kuhn, op cit., pp.12-13. See also: pp.14-15 of: Heisenberg, W., Planck's Discovery and the Philosophical Problems of Atomic Physics (lecture delivered Sept. 4, 1958); published in: On Modern Physics, Collier Books, New York, 1962, pp.9-28. [back]
  16. Bacon, F., Novum Organum, Or True Directions Concerning the Interpretation of Nature (1620), Constitution Society: http://www.constitution.org/bacon/nov_org.htm (accessed 18/01/2015). [back]
  17. Locke, J., An Essay Concerning Human Understanding (1690), Penn State's Electronic Classics Series: http://www2.hn.psu.edu/faculty/jmanis/locke/humanund.pdf (accessed 18/01/2015). Hume, D., A Treatise of Human Nature (1739), Project Gutenberg Ebooks: http://www.gutenberg.org/ebooks/4705 (accessed 18/01/2015). [back]
  18. Locke, J., ibid., Book IV, Chapter 2, Of the Degrees of our Knowledge, p.521. The word 'intuition' does not appear in Locke's treatise until Book IV (the final book – 'intuitive knowledge' appears for the first time in Book III, Ch.8, p.462). Locke had by that point already undertaken a thorough discussion of the concepts of sensation, perception, reflection, ideas, complex ideas, association, cause & effect, the modes of thinking, etc.; which suggests that he had intentionally avoided the subject of intuition, until the later sections, in spite of the fact that in Book IV (p.521) he then declares (extemporaneously) that "bare intuition; without the intervention of any other idea" is the source of the clearest form of human knowledge! [back]
  19. Kant, I., The Critique of Pure Reason (1787), Meiklejohn, J. M. D. (trans.), Chapter II, Of the Deduction of the Pure Concepts of the Understanding – SS 10. Transition to the Transcendental Deduction of the Categories. Project Gutenberg Ebooks: http://www.gutenberg.org/ebooks/4280 (accessed 18/01/2015). [back]
  20. Ibid., Chapter II, SS 9. Of the Principles of a Transcendental Deduction in general. [back]
  21. Ibid., Chapter II, SS 12. Of the Originally Synthetical Unity of Apperception. [back]
  22. Prior to the implementations of thought and of logic associated with the cognition of objects, there are, in Kant's view, two modes of primary sensible intuition through which the subject apprehends empirical reality. These are space and time, which, having no palpable physical properties or appearance in themselves, are to be understood not as items known empirically but as a priori intuitive representations of the external and internal senses (respectively), which provide the necessary conditions for the reception of empirical phenomena, externally, in space, and the subject's internal relations to those phenomena, in time:

    "By means of the external sense (a property of the mind), we represent to ourselves objects as without us, and these all in space. Herein alone are their shape, dimensions, and relations to each other determined or determinable. The internal sense, by means of which the mind contemplates itself or its internal state, gives, indeed, no intuition of the soul as an object; yet there is nevertheless a determinate form, under which alone the contemplation of our internal state is possible, so that all which relates to the inward determinations of the mind is represented in relations of time. Of time we cannot have any external intuition, any more than we can have an internal intuition of space." (Ibid., Introduction – Transcendental Doctrine of Elements. First Part. Transcendental Aesthetic – Section 1. Of Space – SS2. Metaphysical Exposition of this Conception). [back]

  23. Ibid., Preface to the Second Edition. [back]
  24. "Whatsoever doth or can exist, or be considered as one thing is positive: and so not only simple ideas and substances, but modes also, are positive beings: though the parts of which they consist are very often relative one to another: but the whole together considered as one thing, and producing in us the complex idea of one thing, which idea is in our minds, as one picture, though an aggregate of divers parts, and under one name, it is a positive or absolute thing, or idea." (Locke, op cit., Book II, Ch. 25, Of Relation, p.304. – my emphasis). [back]
  25. "This further may be considered concerning relation, that though it be not contained in the real existence of things, but something extraneous and superinduced, yet the ideas which relative words stand for are often clearer and more distinct than of those substances to which they do belong." (Ibid., p.305 – my emphasis). [back]
  26. Kant, op cit., Preface to the Second Edition. [back]
  27. Letter to Richard Price Paris, January 8, 1789, The Letters of Thomas Jefferson: http://www.let.rug.nl/usa/presidents/thomas-jefferson/letters-of-thomas-jefferson/jefl74.php (accessed 02/01/2015). [back]
  28. Turing, A., Computing Machinery and Intelligence, Mind LIX (236), October 1950, p.456: http://somr.info/lib/Mind-1950-TURING-433-60.pdf [back]
  29. The task of 'obtaining the adult brain' would also require, seventeen years following Turing's article, the illicit procurement of at least one, but probably several children (including myself, aged five), as sacrificial research subjects in a program of covert neurosurgical experimentation, conducted within the British National Health Service. See: Special Operations in Medical Research for my exposition of this medical crime. [back]