Data streams are a fundamental part of what makes tornado different from other OS architectures. A data stream is a formal interface for passing data between entities derived from TKernel_DataStreamableObject. Data is passed between these objects in the form of TKernel_DataStreamData objects.
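The real Tornado headers are not reproduced here, but the relationship between the two classes might be sketched like this. Only the names TKernel_DataStreamableObject and TKernel_DataStreamData come from the text above; every member, and the UpperCaser example, is an illustrative assumption.

```cpp
#include <vector>

// Hypothetical payload container: an opaque unit of data passed
// between streamable objects (the real layout is not documented here).
struct TKernel_DataStreamData {
    std::vector<unsigned char> bytes;
};

// Hypothetical base class for anything that can sit in a data stream.
class TKernel_DataStreamableObject {
public:
    virtual ~TKernel_DataStreamableObject() {}
    // Receive one unit of data from the stream's single input and
    // produce the corresponding output.
    virtual TKernel_DataStreamData Process(const TKernel_DataStreamData &in) = 0;
};

// Example derived object: upper-cases ASCII letters in the payload.
class UpperCaser : public TKernel_DataStreamableObject {
public:
    TKernel_DataStreamData Process(const TKernel_DataStreamData &in) override {
        TKernel_DataStreamData out = in;
        for (unsigned char &c : out.bytes)
            if (c >= 'a' && c <= 'z') c -= 32;
        return out;
    }
};
```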

Data streams can have only one input, but they can have many outputs. This facilitates a tornado feature called "work conglomeration": if the same data needs to be processed by two different recipients, the processing is done only once and the result is sent to both.
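The one-input/many-outputs shape can be sketched as a fan-out. Everything here is an assumption for illustration (FanOutStream and the squaring transform are not part of the real API); the point is that the shared computation runs once per input, however many outputs are attached.

```cpp
#include <functional>
#include <vector>

// Hypothetical stream with one input (Send) and many outputs.
class FanOutStream {
public:
    void AttachOutput(std::function<void(int)> receiver) {
        outputs.push_back(receiver);
    }
    // The single input: the transform runs once, and the one result
    // is delivered to every attached output ("work conglomeration").
    void Send(int value) {
        ++work_done;                 // counts how often the work ran
        int result = value * value;  // stand-in for the shared computation
        for (auto &out : outputs) out(result);
    }
    int work_done = 0;
private:
    std::vector<std::function<void(int)>> outputs;
};
```

With two outputs attached, a single Send() still increments work_done only once, which is the whole saving the feature provides.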

Data streams are inherently multitasking. When you send data into a data stream, the call returns immediately; the data is placed on an output queue, which is flushed asynchronously at the rate each receiver processes it. As a result, your data stream object must be thread-safe on a per-object basis - ie; no more than one thread will run your DataStreamInterface code per object, but as many threads may run as there are instances of that object.
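A minimal sketch of that asynchronous flush, assuming a class of our own invention (AsyncStream is not the real API): Send() only enqueues and returns, while a single worker thread per object drains the queue. Because there is exactly one worker per object, the processing step never runs concurrently with itself on the same object, which is the per-object thread-safety guarantee described above.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical asynchronous stream: one worker thread per object.
class AsyncStream {
public:
    explicit AsyncStream(std::vector<int> &sink)
        : out(sink), worker(&AsyncStream::Drain, this) {}
    ~AsyncStream() {
        {
            std::lock_guard<std::mutex> l(m);
            done = true;
        }
        cv.notify_one();
        worker.join();  // remaining queued data is processed before exit
    }
    // Returns immediately: the data only goes onto the queue here.
    void Send(int v) {
        {
            std::lock_guard<std::mutex> l(m);
            q.push(v);
        }
        cv.notify_one();
    }
private:
    // The single worker: drains the queue at its own rate.
    void Drain() {
        std::unique_lock<std::mutex> l(m);
        for (;;) {
            cv.wait(l, [this] { return done || !q.empty(); });
            while (!q.empty()) {
                int v = q.front();
                q.pop();
                l.unlock();
                out.push_back(v + 1);  // stand-in for the real processing
                l.lock();
            }
            if (done) return;
        }
    }
    std::vector<int> &out;
    std::mutex m;
    std::condition_variable cv;
    std::queue<int> q;
    bool done = false;
    std::thread worker;  // declared last: started after the other members
};
```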

Data stream objects must also be deterministic - what goes in must always produce the same output. Remember this, as tornado optimises data flow by automatically reorganising it on the basis of this assumption.
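The determinism rule is what makes such reorganisation safe. One kind of optimisation it permits can be sketched as memoisation - if the same input always yields the same output, an earlier result can be reused instead of recomputed. This class and its transform are invented for illustration, not taken from tornado.

```cpp
#include <map>

// Hypothetical memoising wrapper around a deterministic stream object.
class MemoisedStream {
public:
    int Process(int in) {
        auto it = cache.find(in);
        if (it != cache.end()) return it->second;  // reuse earlier result
        ++evaluations;                             // the real work runs here
        int out = in * 3;                          // stand-in transform
        cache[in] = out;
        return out;
    }
    int evaluations = 0;  // how many times the work actually ran
private:
    std::map<int, int> cache;
};
```

Feeding the same input twice performs the work only once - which would be unsound if the object kept hidden state between calls.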




(C) 1998 The Tornado II programming team (Last updated: 15 March 2009 19:01:54 -0000)