Anticipation in constructing and interrogating Natural Information Systems: from Classical through Nano to Quantum Computing

 

B. Nick Rossiter, Informatics, Northumbria University, Newcastle upon Tyne, UK, NE1 8ST, nick.rossiter@unn.ac.uk

M. A. Heather, Sutherland Building, Northumbria University, NE1 8ST, m.heather@unn.ac.uk

 

Keywords: quantum databases, natural computing, nano-computing, search algorithms.

 

Abstract

 

Information systems anticipate the real world. Databases store, organise and search collections of real-world data. The conceptual organisation of data according to its inherent logical structure as a database was recognised early in computing as an efficient method for the storage and retrieval of information. In terms of anticipatory systems, information systems as databases constructed using classical methods are weak anticipatory. This is because of the reductionism and normalisation needed to operate databases on the familiar but idealised machines with von Neumann architectures, consisting of fixed instructions between bit cells. The newly developing areas are quantum computation, exploiting the principles of quantum mechanics in physics; nanoscale chemistry; and bio- and molecular computing, as in genetics [1]. Natural computing is real-world processing and does not rely on any model. Data can be input raw, without any reductionist pre-processing. The corresponding information system or database using natural computing should therefore be strong anticipatory.

 

Databases have traditionally employed the multilevel ANSI/SPARC architecture. Three layers and the mappings between them are typically defined: the internal schema addresses the layout of data on disk, the conceptual schema describes the data in terms of types, and the external schema describes views derived for end-users from the conceptual schema. The mappings between schemas are algebraic or calculus expressions. Relationships are often modelled in a separate process such as Entity-Relationship Modelling or the Unified Modelling Language. Normalisation is needed to verify schema design, particularly to relate key and non-key attributes. The levels, mappings and relationships all have to be integrated into a consistent database design.
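The three-level separation can be illustrated with a minimal Python sketch; the relation, attribute names and data here are hypothetical, chosen only to show the derivations between levels:

```python
# Illustrative sketch of the ANSI/SPARC three-level architecture.
# All names and data are hypothetical; the point is the mappings between levels.

# Internal schema: physical layout, here rows keyed by record identifier
internal = {
    1: ("Smith", "Sales", 30000),
    2: ("Jones", "Research", 35000),
}

# Conceptual schema: the same data described in terms of typed attributes
conceptual = [
    {"name": n, "dept": d, "salary": s} for (n, d, s) in internal.values()
]

# External schema: a view derived for end-users from the conceptual level,
# here hiding the salary attribute
external = [{"name": r["name"], "dept": r["dept"]} for r in conceptual]
```

Each level is obtained from the one below by a mapping (here a comprehension), standing in for the algebraic or calculus expressions between schemas.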

 

Formally a database design is a topos and representable as a Dolittle diagram subsuming the pullback/pushout relationships as:

[Dolittle diagram: pullback/pushout square relating limits and colimits (figure not reproduced)]
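One minimal rendering of such a square, a sketch rather than the authors' original figure, can be given in LaTeX. The labels f, g and the apex object are assumptions for illustration:

```latex
% Sketch of a pullback/pushout square: the same commuting square read
% towards the corner A \times_C B as a pullback (limit) and in the
% opposite direction, towards C, as a pushout (colimit).
\[
\begin{array}{ccc}
A \times_C B & \longrightarrow & B \\
\big\downarrow & & \big\downarrow g \\
A & \xrightarrow{\;\;f\;\;} & C
\end{array}
\]
```

Read one way the apex is the limit of the diagram A → C ← B; read the other way, C is the colimit of the pair of arrows out of the apex.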

Here f* is essentially an examination and re-indexing functor. It organises the data into a key for storage and applies a query for interrogation of the database. In ANSI/SPARC it constructs a key by concatenation, as in the relational model. For retrieval, f* looks up information by inspecting the key. In quantum theory the key is entanglement; in genetics it is a DNA strand. In terms of the Dolittle diagram, therefore, f* is the same operation in classical and natural computation. What then corresponds to the database schema in natural computing?
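The two roles of f* at the ANSI/SPARC level can be sketched in Python; the key format, function names and data are hypothetical:

```python
# Hypothetical sketch of f* in its two roles: re-indexing data into a
# composite key for storage (concatenation, as in the relational model)
# and inspecting the key to answer a query on retrieval.

def make_key(*attributes):
    """Re-index a tuple by concatenating its key attributes."""
    return "|".join(str(a) for a in attributes)

store = {}

def insert(dept, emp, record):
    """Storage: f* organises the data under a concatenated key."""
    store[make_key(dept, emp)] = record

def lookup(dept, emp):
    """Interrogation: f* looks up information by inspecting the key."""
    return store.get(make_key(dept, emp))

insert("Sales", "Smith", {"salary": 30000})
```

The same functor-like operation thus serves both construction and interrogation, which is the sense in which f* is claimed to be common to classical and natural computation.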

 

There is a need to re-examine the various interpretations found in quantum theory to build an operational system. The Dolittle diagram relates binary categorial limits (×) and colimits (+) for types and includes Heyting implication in intuitionistic logic. The pullback functor (f*) looks for a particular sub-limit to represent a relationship, thus emulating the join operation of databases (*). Other arrows represent projection (π), existential (∃) and universal (∀) quantification, and at a higher level even originality and style with, respectively, the unit η and counit ε of adjunction. The Dolittle diagram is the universal relation with all possible relationships in parallel [3,4].
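The claim that the pullback emulates the join can be made concrete with a small sketch: given two relations projecting onto a shared attribute, the pullback is the set of pairs that agree on it, which is exactly the natural join. The data and helper names below are hypothetical:

```python
# Sketch: the natural join of two relations as a pullback. Given
# projections r: R -> C and s: S -> C onto a shared attribute C, the
# pullback is the set of pairs agreeing on C -- the sub-limit that f*
# selects to represent the relationship.

def pullback(R, S, r, s):
    """Pairs (x, y) with r(x) == s(y)."""
    return [(x, y) for x in R for y in S if r(x) == s(y)]

# Two toy relations sharing a 'dept' attribute (hypothetical data)
emps = [("Smith", "Sales"), ("Jones", "Research")]
depts = [("Sales", "London"), ("Research", "Paris")]

joined = pullback(emps, depts, lambda e: e[1], lambda d: d[0])
# Each employee is paired with the department tuple it agrees with.
```

The comprehension makes the limit property visible: every pair in the result commutes with the two projections onto the shared attribute.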

 

Recently Grover [2] has developed the idea of using quantum algorithms for faster searching of databases, and Selinger [5] has produced a collection of operations at such a level that they could form the basis of a quantum programming language. Both offer potential for the development of quantum databases. However, databases in general require this conceptual level for the representation, querying and updating of data. The present approach is not yet at the f* conceptual level but relies on low-level operations analogous to classical methods, such as the CNOT (controlled-NOT) gate, where the control qubit A and target qubit B are combined by exclusive-or and the result B ⊕ A stored in the target qubit. As a kind of f*, Grover makes use of the 'oracle', treated as a black box and used for collapsing the wave function, that is, to determine when a solution has been found. However, this treats the oracle as a local set-theoretic module lacking the non-locality properties required of quantum computation.
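These low-level operations can be sketched numerically. Below, a minimal classical model of CNOT and a single Grover iteration over four basis states; the oracle is deliberately modelled as a black-box diagonal sign flip on one index, precisely the local, set-theoretic module the text criticises. The marked index is an arbitrary choice for illustration:

```python
import numpy as np

def cnot(control, target):
    """CNOT on classical basis bits: the target becomes control XOR target."""
    return control, control ^ target

# One Grover iteration over N = 4 basis states.
N = 4
marked = 2  # hypothetical index the oracle recognises

state = np.full(N, 1 / np.sqrt(N))   # uniform superposition

oracle = np.eye(N)
oracle[marked, marked] = -1          # black box: flip the marked amplitude's sign

# Diffusion operator: inversion of every amplitude about the mean
diffusion = 2 * np.full((N, N), 1 / N) - np.eye(N)

state = diffusion @ oracle @ state
# For N = 4, a single iteration concentrates all amplitude on the marked state.
```

Note that nothing in this simulation is non-local: the oracle acts as an indexed lookup on one component, which is the set-theoretic reading of the oracle discussed above.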

 

There is a fundamental question of status for natural computing as a strong anticipatory system. There is some evidence [3] that nano-computation is a separate transitional phase between, and distinct from, classical and quantum processing. The issue therefore remains whether natural, nano and quantum computing are all facets of the same operation. The anticipatory systems theory latent in the Dolittle diagram suggests they are.

 

References

 

[1] Adleman, L, On constructing a molecular computer, in: Lipton, R, & Baum, E, (eds), DNA Based Computers, DIMACS, American Mathematical Society, 1-21 (1996).

 

[2] Grover, L K, A fast quantum mechanical algorithm for database search, Proc 28th Annual ACM Symposium on Theory of Computing, 212-219 (1996).

 

[3] Heather, M A, & Rossiter, B N, Locality, Weak or Strong Anticipation and Quantum Computing I. Non-locality in Quantum Theory, International Journal Computing Anticipatory Systems 13 307-326 (2002).

 

[4] Heather, M A, & Rossiter, B N, Locality, Weak or Strong Anticipation and Quantum Computing. II. Constructivism with Category Theory, International Journal Computing Anticipatory Systems 13 327-339 (2002).

 

[5] Selinger, P, Towards a Quantum Programming Language, 42pp, http://quasar.mathstat.uottawa.ca/~selinger/papers.html#qpl (2002).