The Pipeline Design Pattern

Allan Vermeulen, Gabe Beged-Dov, Patrick Thompson
Copyright Rogue Wave Software, Inc., 1995

Intent
Build data pipelines in a configurable and typesafe manner.

Motivation
Data pipelines are ubiquitous in computer graphics applications, in both hardware and software, and in any application that must perform a series of transformations on a stream of data. Ideally one would like to configure and reconfigure these pipelines from reusable, autonomous components.

Consider, for example, a World Wide Web browser. The primary job of a browser is to convert a stream of bytes into an intelligible display of text and graphics. A series of processing elements may be involved in this transformation. A network interface reads bytes off the communication channel and passes them on to a document interpreter. The document interpreter adds structure to the stream of bytes, the result of which is an HTML tree; it passes the HTML tree on to a document formatter. The document formatter creates a display tree, which is passed on to the document displayer for display on the output device, usually a computer display monitor. Some Web browsers use a streaming pipeline, similar to this, to incrementally process data as it comes across the wire. As soon as information is available anywhere in the pipeline it is processed and passed on to the next element. This allows the user to start reading the document before it is fully transmitted, resulting in a quicker response time.

We differentiate between data flow and control flow in the processing pipeline. For example, the Web browser pipeline can be constructed in various ways depending on the control flow policy. The data flow, however, is constant: the information which passes from one processing element to another remains unchanged, regardless of the control flow model. To better understand and discuss the control flow behavior of a pipeline, we categorize each processing element interface as being either active or passive, and as either a reader or a writer. A processing element which is an endpoint has just one pipeline interface; interior pipeline processing elements have two pipeline interfaces. When two processing elements are connected in a pipeline, one side must be a reader and the other a writer; one side must be active and the other passive. A writer is an element that generates messages; a reader receives messages. The side which initiates the message transfer is called the active side; the other side is said to be passive.

The data that is passed between processing elements may have complex structure, and it is often not efficient to pass it by value, but we will assume that there is no shared data structure. This is a key point in the Pipes and Filters pattern [Meunier], where no shared data is allowed between filters. We don't feel that so strong an admonition is necessary, but we use it in the motivating example in order to clarify the key points of the pattern.

Given the above definitions, when looking at the connection between two processing elements, you will always see either an active writer connected to a passive reader or an active reader connected to a passive writer. An active writer initiates the transfer of data by calling the receive method on the passive reader; the data is passed as a method parameter. We call this a push flow. Here the control flow is left to right, as is the data flow:
[Figure: push flow between two elements. Active Writer --(1: send(data), 2: receive(data))--> Passive Reader]

An active reader initiates the data transfer by calling the supply method on the passive writer; the data is passed back to the active reader from the passive writer via a return value. This is a pull flow. Here the control flow is left to right; the data flow is just the opposite, right to left:
[Figure: pull flow between two elements. Active Reader --(1: data = demand(), 2: data = supply())--> Passive Writer; data flows right to left]

Interior pipeline processing elements are characterized by their two pipeline interfaces as being either active-reader/active-writer, passive-reader/active-writer, active-reader/passive-writer, or passive-reader/passive-writer. Based on this model we can describe a number of possible configurations for our Web browser pipeline. First, let's consider a browser pipeline which employs a push flow model:
[Figure: push flow browser pipeline.
  NetIntrface (ActWr) --(1: send(Bytes), 2: receive(Bytes))--> DocInterp (PasRd/ActWr)
  --(3: send(HTMLtree), 4: receive(HTMLtree))--> DocFormat (PasRd/ActWr)
  --(5: send(FormatTree), 6: receive(FormatTree))--> DocDisplay (PasRd)]

Here we have the four processing elements which we introduced earlier, arranged in a push flow pipeline. The network interface processing element is an active writer. It initiates the pipeline control flow by calling receive on the document interpreter and passing a buffer of bytes. The document interpreter is a passive reader on the network interface side and an active writer on the document formatter side. When its receive member function is called it builds up an HTML tree and passes it to the document formatter in a receive call. The document formatter is a passive reader on the document interpreter side and an active writer on the document display side. It takes the HTML tree passed in by the document interpreter and creates a format tree from it. The format tree is passed on to the document display through its receive method. The document display is a passive reader. It displays the document on the display device, at which point control returns to the network interface at the beginning of the pipeline.
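To make the push protocol concrete, here is a small, self-contained sketch in C++. The class names are illustrative stand-ins, not the pattern classes themselves (those appear in the Structure section and the Appendix); the interpreter's receive plays the passive-reader role, and its direct call to the next element's receive stands in for the active-writer send.

#include <iostream>
#include <string>

typedef std::string Bytes;      // stand-ins for the real data types
typedef std::string HTMLtree;

class DocFormatterStub {        // downstream passive reader (stub)
public:
    void receive(HTMLtree t) { std::cout << "formatting: " << t << "\n"; }
};

class DocInterpStub {           // passive reader upstream, active writer downstream
public:
    DocInterpStub(DocFormatterStub* next) : next_(next) {}
    void receive(Bytes b) {                         // called by the upstream active writer
        HTMLtree tree = "<html>" + b + "</html>";   // build the HTML tree
        next_->receive(tree);                       // push it on to the next element
    }
private:
    DocFormatterStub* next_;
};

int main() {
    DocFormatterStub fmt;
    DocInterpStub interp(&fmt);
    interp.receive("bytes from the network");  // the network interface would do this
    return 0;
}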

A pull flow browser pipeline would look as follows:


[Figure: pull flow browser pipeline.
  DocDisplay (ActRd) --(1: FormatTree = demand(), 2: FormatTree = supply())--> DocFormat (ActRd/PasWr)
  --(3: HTMLtree = demand(), 4: HTMLtree = supply())--> DocInterp (ActRd/PasWr)
  --(5: Bytes = demand(), 6: Bytes = supply())--> NetIntrface (PasWr)]

Here we have the same processing elements, yet the flow of control starts at the document display. The document display in this case is an active reader. It initiates the pulling of data through the pipeline by calling the supply method on the document formatter; once it receives the format tree it renders the document. The document formatter is a passive writer on the document display side, and an active reader on the document interpreter side. It creates the format tree and returns it to the document display. In order to create the format tree it must first get an HTML tree from the document interpreter; it does this by calling supply on the document interpreter. The document interpreter is a passive writer on the document formatter side and an active reader on the network interface side. It creates the HTML tree and returns it to the document formatter, but in order to create the tree it must first call supply on the network interface to get a byte buffer. The network interface is a passive writer. It responds to calls to supply by returning the next buffer of bytes from the network. This starts the cascading of data and control back to the document display.
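For comparison, a correspondingly minimal sketch of the pull protocol, again with illustrative stand-in names: each supply plays the passive-writer role, and the upstream call it makes stands in for the active-reader demand.

#include <iostream>
#include <string>

typedef std::string Bytes;      // stand-ins for the real data types
typedef std::string HTMLtree;

class NetInterfaceStub {        // passive writer: hands out the next buffer on request
public:
    Bytes supply() { return "bytes from the network"; }
};

class DocInterpStub {           // active reader upstream, passive writer downstream
public:
    DocInterpStub(NetInterfaceStub* prev) : prev_(prev) {}
    HTMLtree supply() {                 // called by the downstream active reader
        Bytes b = prev_->supply();      // demand the next buffer from upstream
        return "<html>" + b + "</html>";
    }
private:
    NetInterfaceStub* prev_;
};

int main() {
    NetInterfaceStub net;
    DocInterpStub interp(&net);
    std::cout << "pulled: " << interp.supply() << "\n";  // the displayer would do this
    return 0;
}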

The previous two examples are well suited to single-threaded or multi-threaded environments. The following two examples are tailored specifically for environments which support multiple threads.

The following example shows a push-pull hybrid. The flow of control starts simultaneously from both endpoints. The two threads exchange data through a queue which is represented in the pipeline model as a passive-reader/passive-writer.
[Figure: push-pull hybrid pipeline with a synchronizing queue.
  Push side: NetIntrface (ActWr) --(1: send(Bytes), 2: receive(Bytes))--> DocInterp (PasRd/ActWr)
  --(3: send(HTMLtree), 4: receive(HTMLtree))--> Queue<HTMLtree> (PasRd/PasWr).
  Pull side: DocDisplay (ActRd) --(1: FormatTree = demand(), 2: FormatTree = supply())--> DocFormat (ActRd/PasWr)
  --(3: HTMLtree = demand(), 4: HTMLtree = supply())--> Queue<HTMLtree>]

From the network interface to the queue we have a push model of active writers connecting to passive readers. From the document display to the queue we have a pull model of active readers connecting to passive writers. The queue, being a passive-reader/passive-writer, plugs the two models together and acts as a synchronization element between the two. The ultimate manifestation of this push-pull hybrid concept is to run each processing element in its own thread, connecting it to its surrounding processing elements through synchronizing queues:
[Figure: fully threaded pipeline with synchronizing queues between every pair of active elements.
  NetIntrface (ActWr) --(send(Bytes), receive(Bytes))--> Queue<Bytes> (PasRd/PasWr)
  --(Bytes = demand(), Bytes = supply())--> DocInterp (ActRd/ActWr)
  --(send(HTMLtree), receive(HTMLtree))--> Queue<HTMLtree> (PasRd/PasWr)
  --(HTMLtree = demand(), HTMLtree = supply())--> DocFormat (ActRd/ActWr)
  --(send(FormatTree), receive(FormatTree))--> Queue<FormatTree> (PasRd/PasWr)
  --(FormatTree = demand(), FormatTree = supply())--> DocDisplay (ActRd)]

Each processing element, except the queues, has its own thread of control. The network interface is an active writer. It is in an infinite loop reading bytes off the network and pushing them to the Bytes queue. The document interpreter is an active-reader/active-writer. It is in an infinite loop pulling bytes from the queue, formatting the bytes into an HTML tree, and then pushing the HTML tree to the HTMLtree queue.

The document formatter is also an active-reader/active-writer. It is in an infinite loop pulling HTML trees from the HTML tree queue, creating formatted document trees and pushing them to the format tree queue. The document displayer is an active reader. It is in a loop pulling format trees from the format tree queue and rendering them to the display.

This example clearly shows how the queue processing elements play the simultaneous role of data flow and control flow synchronizer. They act not just as data buffers, but as passive mediators between active processing elements.
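The queue processing element itself does not appear in the sample code later in the paper, so the following is a minimal sketch of how such a synchronizing queue might be written on top of the PassiveReaderPassiveWriter base class defined in the Appendix. The class name SyncQueuePE is ours, and the use of std::mutex and std::condition_variable is an assumption; the original implementation would have used whatever synchronization primitives its environment provided.

#include <deque>
#include <mutex>
#include <condition_variable>

// Sketch only: assumes the PassiveReaderPassiveWriter<In,Out> template from
// the Appendix is in scope. A hypothetical synchronizing queue element.
template <class X>
class SyncQueuePE : public PassiveReaderPassiveWriter<X,X> {
public:
    // Push side: called by the connected ActiveWriter.
    virtual void receive(X x) {
        std::lock_guard<std::mutex> lock(mutex_);
        items_.push_back(x);
        notEmpty_.notify_one();
    }
    // Pull side: called by the connected ActiveReader; blocks until data is available.
    virtual X supply() {
        std::unique_lock<std::mutex> lock(mutex_);
        while (items_.empty())
            notEmpty_.wait(lock);
        X x = items_.front();
        items_.pop_front();
        return x;
    }
private:
    std::deque<X>           items_;
    std::mutex              mutex_;
    std::condition_variable notEmpty_;
};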

Applicability
Use the Pipeline pattern when you need to build a configurable data-flow pipeline from reusable processing elements, when the type of data passed between elements should be checked at compile time, or when you want to vary the control flow model (push, pull, or a threaded mix of the two) without changing the processing elements themselves.

Structure
The following Booch class diagram illustrates the inheritance hierarchy in the Pipeline pattern.

[Booch class diagram: the Pipeline inheritance hierarchy.
  RWReference
    PEInterface
      Reader
        ActiveReader<In>    (In demand())
        PassiveReader<In>   (receive(In))
      Writer
        ActiveWriter<Out>   (send(Out))
        PassiveWriter<Out>  (Out supply())
  Interior-node classes combine one reader interface and one writer interface:
    ActiveReaderActiveWriter<In,Out>
    PassiveReaderActiveWriter<In,Out>
    ActiveReaderPassiveWriter<In,Out>
    PassiveReaderPassiveWriter<In,Out>]

This diagram shows how concrete classes relate to the Pipeline hierarchy. It also illustrates the relationship between the ActiveReader and PassiveWriter classes, and between the ActiveWriter and the PassiveReader classes.
[Booch class diagram: concrete processing elements and their base classes.
  Concrete ActRd<In>           : ActiveReader<In>            (In demand())
  Concrete PasRd<In>           : PassiveReader<In>           (receive(In))
  Concrete ActWr<Out>          : ActiveWriter<Out>           (send(Out))
  Concrete PasWr<Out>          : PassiveWriter<Out>          (Out supply())
  Concrete ActRdActWr<In,Out>  : ActiveReaderActiveWriter<In,Out>
  Concrete PasRdActWr<In,Out>  : PassiveReaderActiveWriter<In,Out>
  Concrete ActRdPasWr<In,Out>  : ActiveReaderPassiveWriter<In,Out>
  Concrete PasRdPasWr<In,Out>  : PassiveReaderPassiveWriter<In,Out>]

Participants
RWReference - Used for reference counting.
PEInterface - Base class for all processing elements. Keeps track of connection information.
Reader - Base class for all Reader processing elements. Used primarily to type interfaces which can only accept readers.
Writer - Base class for all Writer processing elements. Used primarily to type interfaces which can only accept writers.
ActiveReader - Base class for ActiveReader processing elements. Implements a protected member function called demand which calls supply on the connected PassiveWriter to get the next data item. The demand function is called by the derived class implementation to pull data from the pipeline. The ActiveReader class is parameterized by the data type which is passed between the ActiveReader and the PassiveWriter to which it connects.
PassiveReader - Base class for PassiveReader processing elements. Declares a pure virtual function called receive which is implemented by the derived type to accept a data item. The receive function is called by an associated ActiveWriter. The PassiveReader class is parameterized by the data type which is passed from the ActiveWriter to the PassiveReader.
ActiveWriter - Base class for ActiveWriter processing elements. Implements a protected member function called send which calls receive on the connected PassiveReader to pass along the next data item. The implementor of the ActiveWriter derivative calls send to push data through the pipeline. The ActiveWriter class is parameterized by the data type which is passed between the ActiveWriter and the PassiveReader to which it connects.
PassiveWriter - Base class for PassiveWriter processing elements. Declares a pure virtual function called supply which is implemented by the derived type to return the next data item. The supply function is called by an associated ActiveReader. The PassiveWriter class is parameterized by the data type which is passed between it and the connected ActiveReader.
ActiveReaderActiveWriter - Combines an ActiveReader and an ActiveWriter to create an interior pipeline node.
PassiveReaderActiveWriter - Combines a PassiveReader and an ActiveWriter to create an interior pipeline node.
ActiveReaderPassiveWriter - Combines an ActiveReader and a PassiveWriter to create an interior pipeline node.
PassiveReaderPassiveWriter - Combines a PassiveReader and a PassiveWriter to create an interior pipeline node.
ConcreteActiveReader - Usually used as the head of a pull pipeline. The thread of control of the pull pipeline originates here. The ConcreteActiveReader pulls the next data item from the pipeline by calling demand, which is implemented in the ActiveReader base class.
ConcretePassiveReader - Usually used as the tail of a push pipeline. It must implement the pure virtual function receive, which is defined in the PassiveReader base class. The receive function takes the next data item as a parameter.
ConcreteActiveWriter - Usually used as the head of a push pipeline. The thread of control of the push pipeline originates here. The ConcreteActiveWriter pushes the next data item into the pipeline by calling send, which is implemented in the ActiveWriter base class.
ConcretePassiveWriter - Usually used as the tail of a pull pipeline. It implements the supply pure virtual function, which is defined in the PassiveWriter base class. The supply function returns the next data item.
ConcreteActiveReaderActiveWriter - An internal processing element. Must have its own thread of control. It is usually in a loop calling demand (defined in its ActiveReader base class), processing the data, and then sending the transformed data out by calling send (defined in its ActiveWriter base class).
ConcretePassiveReaderActiveWriter - Usually used as an internal node in a push pipeline. It must implement the receive pure virtual function defined in its PassiveReader base. The receive implementation will usually process the incoming data and then pass it along by calling the send member function implemented in its ActiveWriter base.
ConcreteActiveReaderPassiveWriter - Usually used as an internal node in a pull pipeline. It must implement the supply pure virtual function defined in its PassiveWriter base. The supply implementation usually calls the demand function in its ActiveReader base to get the next data item, processes the data, and then returns the transformed data to the supply caller.
ConcretePassiveReaderPassiveWriter - Usually used as an internal processing element to synchronize data and control flow between two active processing element interfaces. Must implement the supply pure virtual function defined in its PassiveWriter base, and the receive pure virtual function defined in its PassiveReader base. Instead of immediately passing along the (possibly transformed) data, as the other internal processing element types do, it must buffer the data in some way.

Collaborations
Concrete processing elements inherit from ActiveReader, PassiveReader, ActiveWriter, PassiveWriter, ActiveReaderActiveWriter, PassiveReaderActiveWriter, ActiveReaderPassiveWriter, or PassiveReaderPassiveWriter. ActiveWriter interfaces must be matched (connected) with PassiveReader interfaces, and PassiveWriter interfaces with ActiveReader interfaces. ActiveWriter processing elements call the receive function on their associated PassiveReader, passing in a data element. ActiveReader interfaces call the supply function on their associated PassiveWriter, which returns a data element.

A function in the PEInterface base class is used to connect processing elements. This function connects PEInterface* objects. Type-safe template functions wrap this call to enforce the PE connection rules (i.e., a PassiveWriter may only be connected to an ActiveReader, and an ActiveWriter only to a PassiveReader).

Consequences
The Pipeline pattern allows us to construct data flow pipelines from objects in a type-safe and pluggable manner. By pluggable, think of the analogy of connecting two garden hoses: a female connection on one must be matched with a male connection on the other, otherwise they can't be connected. The advantage of this is that we can configure the control flow model of the pipeline in a predictable manner. Type-safe means that we can define and enforce the type of data that is passed between processing elements. This type might change from connection to connection within the pipeline, which is exactly what one would expect in a series of data filters. The Pipeline pattern in its current incarnation has the drawback that it only supports single-flow pipelines: it precludes multiplexing of input and/or output flows to/from a processing element. This may be just an implementation issue, or the pattern itself may need to be refined.

Implementation
1. Interface conformance. Processing elements should only be allowed to be connected with complementary processing elements. Ideally, this should be enforced at compile time. This can be achieved in C++ by defining parameterized connect and disconnect functions which only support legal connections. These functions, in the implementation presented here, call connect and disconnect functions in the PEInterface base class. The PEInterface connect and disconnect functions connect PEInterface* objects.
2. Type safety. Connected objects must agree on the type of the data which is transferred over the connection. This is accomplished in C++ by parameterizing endpoint processing element classes with an input data type or an output data type, and internal processing element nodes with both the input and output types. (The sketch following this list illustrates the kind of mismatch that points 1 and 2 reject at compile time.)
3. Multi-threading. The implementation should support multi-threading, but it shouldn't define the threading model. The pattern, and thus the implementation, supports synchronization processing elements; i.e., queues which present passive-writer/passive-reader interfaces. Any processing element which has an active interface may be run in its own thread.
4. Flexibility. The implementation should be such that the pipeline can be easily configured and reconfigured. Since the processing element implementations must inherit from the pipeline hierarchy, this goal is not directly met. We address this problem by combining the Pipeline pattern with the Functor idiom [Cope92]. This technique is illustrated in the next section.
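As a quick illustration of points 1 and 2, the fragment below shows the kind of connections the template connect wrappers accept and reject at compile time. It assumes the typedefs and functors defined in the Sample Code section that follows; the function name illustrateTypeSafety is ours.

// Sketch only: uses the NetInterfacePE / DocInterpPE / DocFormatPE typedefs
// and the functor classes defined in the Sample Code section.
void illustrateTypeSafety()
{
    NetInterfacePE *net    = new NetInterfacePE(NetworkInterface());
    DocInterpPE    *interp = new DocInterpPE(DocInterpreter());
    DocFormatPE    *format = new DocFormatPE(DocFormatter());

    connect(net, interp);      // OK: ActiveWriter<Bytes>    -> PassiveReader<Bytes>
    connect(interp, format);   // OK: ActiveWriter<HTMLtree> -> PassiveReader<HTMLtree>

    // connect(net, format);   // would not compile: net writes Bytes,
                               // but format reads HTMLtree
    // connect(interp, net);   // would not compile: a writer interface cannot
                               // be connected to another writer interface
}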

Sample Code and Usage


The following sample implementation illustrates how the WWW browser example could be built in C++. We will concentrate on the implementation of the browser itself, rather than going into the Pipeline pattern implementation in detail; for that discussion, please refer to the Appendix.

To use the Pipeline pattern to implement our WWW browser pipeline, we must implement concrete processing elements which inherit from one of the pipeline processing element types. For the sake of this example, let's say we want to build a push pipeline. Thus we are going to have to implement the following concrete processing elements: NetInterfacePE, the head node PE, inheriting from ActiveWriter; DocInterpPE, an internal PE which inherits from PassiveReaderActiveWriter; DocFormatPE, an internal PE which inherits from PassiveReaderActiveWriter; and DocDisplayPE, the tail node PE, inheriting from PassiveReader.

To actually implement these processing elements we are going to use the C++ Functor idiom (a la Coplien). We will write our processing element functionality in functor classes and pass these into the concrete elements, which use the functor in their implementation. There are two reasons for doing this: 1) it allows us to reuse the concrete processing elements to build different types of processing elements, and 2) it allows us to use the PE functionality in different types of processing elements; that is, we can reconfigure our pipeline, for example from a push flow model to a pull flow model, without changing the basic functionality of each PE.

For our WWW browser push flow pipeline we will define three of these general concrete processing elements: an ActiveWriterEndpoint, a PassiveReaderActiveWriterXform, and a PassiveReaderEndpoint.

template <class In, class Out, class Fn>
class PassiveReaderActiveWriterXform : public PassiveReaderActiveWriter<In,Out> {
private:
    Fn f_;   // The functor which actually does the work
public:
    PassiveReaderActiveWriterXform(Fn f) : f_(f) {}
    virtual void receive(In in) { send(f_(in)); }
};

template <class Out, class Fn>
class ActiveWriterEndpoint : public ActiveWriter<Out> {
private:
    Fn f_;   // The functor which actually does the work
public:
    ActiveWriterEndpoint(Fn f) : f_(f) {}
    void go() { while (TRUE) send(f_()); }
};

template <class In, class Fn>
class PassiveReaderEndpoint : public PassiveReader<In> {
private:
    Fn f_;   // The functor which actually does the work
public:
    PassiveReaderEndpoint(Fn f) : f_(f) {}
    void receive(In in) { f_(in); }
};

So now it's just a matter of implementing our WWW browser functors and building up the pipeline.

class NetworkInterface {   // Functor
public:
    NetworkInterface() {}
    Bytes operator()() { return getBytesFromNet(); }
private:
    Bytes getBytesFromNet();
};

class DocInterpreter {     // Functor
public:
    DocInterpreter() {}
    HTMLtree operator()(Bytes buf) { return HTMLtree(buf); }
};

class DocFormatter {       // Functor
public:
    DocFormatter() {}
    FormatTree operator()(HTMLtree html) { return FormatTree(html); }
};

class DocDisplayer {       // Functor
public:
    DocDisplayer() {}
    void operator()(FormatTree t) { /* render t */ }
};

// typedefs because these names can get quite long
typedef ActiveWriterEndpoint<Bytes,NetworkInterface>                     NetInterfacePE;
typedef PassiveReaderActiveWriterXform<Bytes,HTMLtree,DocInterpreter>    DocInterpPE;
typedef PassiveReaderActiveWriterXform<HTMLtree,FormatTree,DocFormatter> DocFormatPE;
typedef PassiveReaderEndpoint<FormatTree,DocDisplayer>                   DocDisplayPE;

int main()
{
    // build the processing elements
    NetworkInterface ni;  NetInterfacePE *netInterfacePE = new NetInterfacePE(ni);
    DocInterpreter   di;  DocInterpPE    *docInterpPE    = new DocInterpPE(di);
    DocFormatter     df;  DocFormatPE    *docFormatPE    = new DocFormatPE(df);
    DocDisplayer     dd;  DocDisplayPE   *docDisplayPE   = new DocDisplayPE(dd);

    // create handles to the ones we need to keep
    netInterfacePE->addReference();

    // build the pipeline
    connect(netInterfacePE, docInterpPE);
    connect(docInterpPE, docFormatPE);
    connect(docFormatPE, docDisplayPE);

    // set the wheels in motion
    netInterfacePE->go();

    // dismantle the pipeline
    disconnect(netInterfacePE, docInterpPE);
    disconnect(docInterpPE, docFormatPE);
    disconnect(docFormatPE, docDisplayPE);
    delete netInterfacePE;
    return 0;
}
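To illustrate the flexibility point from the Implementation section, the same four functors can be reassembled into a pull flow pipeline simply by plugging them into the pull-oriented templates from the Appendix (PassiveWriterEndpoint, ActiveReaderPassiveWriterXform, ActiveReaderEndpoint). A sketch, with hypothetical typedef names of our own:

// Same functors, different control flow: a pull pipeline driven by the displayer.
typedef PassiveWriterEndpoint<Bytes,NetworkInterface>                     PullNetPE;
typedef ActiveReaderPassiveWriterXform<Bytes,HTMLtree,DocInterpreter>     PullInterpPE;
typedef ActiveReaderPassiveWriterXform<HTMLtree,FormatTree,DocFormatter>  PullFormatPE;
typedef ActiveReaderEndpoint<FormatTree,DocDisplayer>                     PullDisplayPE;

int runPullBrowser()
{
    PullNetPE     *net     = new PullNetPE(NetworkInterface());
    PullInterpPE  *interp  = new PullInterpPE(DocInterpreter());
    PullFormatPE  *format  = new PullFormatPE(DocFormatter());
    PullDisplayPE *display = new PullDisplayPE(DocDisplayer());

    display->addReference();   // keep a handle on the element that drives the loop

    connect(net, interp);      // PassiveWriter<Bytes>      -> ActiveReader<Bytes>
    connect(interp, format);   // PassiveWriter<HTMLtree>   -> ActiveReader<HTMLtree>
    connect(format, display);  // PassiveWriter<FormatTree> -> ActiveReader<FormatTree>

    display->go();             // control starts at the tail and pulls data through

    disconnect(net, interp);
    disconnect(interp, format);
    disconnect(format, display);
    delete display;
    return 0;
}

Here the thread of control originates in the document displayer, which pulls data through the pipeline exactly as described in the Motivation section; none of the functor classes change.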


Known Uses
Unix pipes support a similar model. Unix shells allow filter applications to be connected by pipes, whereby the output of one application becomes the input of the other. Unix pipes support buffering between applications.

The Pipeline pattern is also similar to the CORBAservices (COSS) Event Service. The Event Service has PushConsumer, PushSupplier, PullConsumer, and PullSupplier interfaces; these parallel the PassiveReader, ActiveWriter, ActiveReader, and PassiveWriter classes. The Consumers and Suppliers communicate through an EventChannel which mediates between the two. The event channel can be used to multiplex input and output flows and to buffer data. The event channel can also be used to connect incompatible interfaces; for example, a PushSupplier can be connected to a PullConsumer. In order to build a pipeline of processing elements using the Event Service, one would have to separate each processing element in the chain with an event channel. Interior nodes in the pipeline would have to inherit from both consumer and supplier interfaces, for example PushConsumer and PullSupplier.

Related Patterns
In [CS95] several patterns are presented under the overall banner of the layered service composition pattern. These patterns cover territory similar to the pattern described in this paper, but with different areas of focus and emphasis. The Streams pattern [Edwards] is at a higher level than the Pipeline pattern and explicitly avoids the issue of control flow. The Pipes and Filters pattern [Meunier] also does not deal explicitly with control flow, but does touch more on variations of pipes that imply different control flow models.

References
[Meunier] Regine Meunier. "The Pipes and Filters Architecture." Ch. 22 in Pattern Languages of Program Design. Addison-Wesley, 1995.
[CS95] James O. Coplien and Douglas C. Schmidt, editors. Pattern Languages of Program Design. Addison-Wesley, 1995.
[Cope92] James O. Coplien. Advanced C++ Programming Styles and Idioms. Addison-Wesley, 1992.
[Edwards] Stephen H. Edwards. "Streams: A Pattern for Pull-Driven Processing." Ch. 21 in Pattern Languages of Program Design. Addison-Wesley, 1995.


Appendix: Implementation Details


This appendix discusses the Pipeline implementation in more detail than was deemed appropriate for the sample code section. The PEInterface base class could be implemented as follows:

class PEInterface : virtual public RWReference {
protected:
    PEInterface *connectedTo_;
    PEInterface *otherSide_;   // null for heads and tails
public:
    static void connect(PEInterface*, PEInterface*);
    static void disconnect(PEInterface*, PEInterface*);
public:
    PEInterface(PEInterface *otherSide = 0);
    virtual ~PEInterface();
    RWBoolean    isConnected() const;
    RWBoolean    isEnd() const;
    PEInterface* connectedTo() const;
    PEInterface* otherSide() const;
};

The connectedTo_ member variable keeps track of what the processing element is connected to. The otherSide_ member variable is a pointer to the other half of an internal node; it takes advantage of the fact that combined processing elements (e.g., an ActiveReaderPassiveWriter) contain two copies of the PEInterface base, since we are using non-virtual multiple inheritance. The connect and disconnect functions are implemented as follows:

void PEInterface::connect(PEInterface* x, PEInterface* y)
{
    RWPRECONDITION(x != 0);
    RWPRECONDITION(y != 0);
    if (x->isConnected()) { RWTHROW(AlreadyConnectedErr(*x)); }
    if (y->isConnected()) { RWTHROW(AlreadyConnectedErr(*y)); }
    x->connectedTo_ = y;
    y->connectedTo_ = x;
    x->addReference();
    y->addReference();
}

void PEInterface::disconnect(PEInterface* x, PEInterface* y)
{
    RWPRECONDITION(x != 0);
    RWPRECONDITION(y != 0);
    if (x->connectedTo_ != y) { RWTHROW(NotConnectedErr(*x)); }
    if (y->connectedTo_ != x) { RWTHROW(NotConnectedErr(*y)); }
    x->connectedTo_ = y->connectedTo_ = 0;
    if (x->removeReference() == 0) { delete x; }
    if (y->removeReference() == 0) { delete y; }
}

The PEInterface connect and disconnect member functions are meant to be called by template wrapper functions. We would have liked to make the connect and disconnect member functions private and declare the template wrapper functions as friends; however, we don't think C++ will let us do this. The template connect and disconnect wrappers are declared as follows:

template <class X> void connect(PassiveWriter<X>*, ActiveReader<X>*);
template <class X> void connect(ActiveWriter<X>*, PassiveReader<X>*);
template <class X> void disconnect(PassiveWriter<X>*, ActiveReader<X>*);
template <class X> void disconnect(ActiveWriter<X>*, PassiveReader<X>*);

The Reader and Writer classes really only exist so that we can type interfaces. Thus their implementation is quite simple:

class Reader : public PEInterface {
protected:
    Reader(Writer *otherSide = 0);
};

class Writer : public PEInterface {
protected:
    Writer(Reader *otherSide = 0);
};

The endpoint processing element nodes are defined as follows. The passive variants declare pure virtual functions which must be implemented by each processing element implementation. The active variants implement functions which are meant to be called by the processing element implementations. These functions communicate with the connected processing element to send or ask for (demand) the next data element. The interaction between active processing element functions and passive processing element functions defines and enforces the protocol between processing elements.

template <class X>
class PassiveWriter : public Writer {
public:
    PassiveWriter(Reader *otherSide = 0) : Writer(otherSide) {}
    virtual X supply() = 0;        // return a data element
};

template <class X>
class PassiveReader : public Reader {
public:
    PassiveReader(Writer *otherSide = 0) : Reader(otherSide) {}
    virtual void receive(X) = 0;   // accept a data element
};

template <class X>
class ActiveWriter : public Writer {
public:
    ActiveWriter(Reader *otherSide = 0) : Writer(otherSide) {}
    virtual void send(X) const;    // see below
};

template <class X>
void ActiveWriter<X>::send(X x) const
{
    if (!isConnected()) { RWTHROW(NotConnectedErr(*this)); }
    PassiveReader<X> *connectedTo =
        DYNAMIC_CAST(PassiveReader<X>*, connectedTo_);
    RWPOSTCONDITION(connectedTo != 0);
    connectedTo->receive(x);
}

template <class X>
class ActiveReader : public Reader {
public:
    ActiveReader(Writer *otherSide = 0) : Reader(otherSide) {}
    virtual X demand() const;      // see below
};


template <class X>
X ActiveReader<X>::demand() const
{
    if (!isConnected()) { RWTHROW(NotConnectedErr(*this)); }
    PassiveWriter<X> *connectedTo =
        DYNAMIC_CAST(PassiveWriter<X>*, connectedTo_);
    RWPOSTCONDITION(connectedTo != 0);
    return connectedTo->supply();
}

The internal processing elements are naturally a combination of the simpler single-interface nodes. They are defined as follows:

template <class In, class Out>
class ActiveReaderActiveWriter : public ActiveReader<In>, public ActiveWriter<Out> {
public:
    ActiveReaderActiveWriter();
};

template <class In, class Out>
class PassiveReaderActiveWriter : public PassiveReader<In>, public ActiveWriter<Out> {
public:
    PassiveReaderActiveWriter();
};

template <class In, class Out>
class ActiveReaderPassiveWriter : public ActiveReader<In>, public PassiveWriter<Out> {
public:
    ActiveReaderPassiveWriter();
};

template <class In, class Out>
class PassiveReaderPassiveWriter : public PassiveReader<In>, public PassiveWriter<Out> {
public:
    PassiveReaderPassiveWriter();
};

In order to make concrete processing elements easy to implement and reconfigure, we have combined the Pipeline pattern with the Functor idiom. That is, we define classes which inherit from the processing elements and take a functor as a template parameter. We can then plug any functor class into the predefined processing element implementations, given of course that the functor supports the expected interface. This way we can define our functional elements once and use them in various pipeline configurations. It will be clearer when you see the code.

template <class In, class Out, class Fn>
class PassiveReaderActiveWriterXform : public PassiveReaderActiveWriter<In,Out> {
private:
    Fn f_;   // The functor which actually does the work
public:
    PassiveReaderActiveWriterXform(Fn f) : f_(f) {}
    virtual void receive(In in) { send(f_(in)); }
};


template <class In, class Out, class Fn>
class ActiveReaderPassiveWriterXform : public ActiveReaderPassiveWriter<In,Out> {
private:
    Fn f_;   // The functor which actually does the work
public:
    ActiveReaderPassiveWriterXform(Fn f) : f_(f) {}
    virtual Out supply() { return f_(demand()); }
};

template <class In, class Out, class Fn>
class ActiveReaderActiveWriterXform : public ActiveReaderActiveWriter<In,Out> {
private:
    Fn f_;   // The functor which actually does the work
public:
    ActiveReaderActiveWriterXform(Fn f) : f_(f) {}
    void go() { while (TRUE) send(f_(demand())); }
};

template <class Out, class Fn>
class ActiveWriterEndpoint : public ActiveWriter<Out> {
private:
    Fn f_;   // The functor which actually does the work
public:
    ActiveWriterEndpoint(Fn f) : f_(f) {}
    void go() { while (TRUE) send(f_()); }
};

template <class Out, class Fn>
class PassiveWriterEndpoint : public PassiveWriter<Out> {
private:
    Fn f_;   // The functor which actually does the work
public:
    PassiveWriterEndpoint(Fn f) : f_(f) {}
    Out supply() { return f_(); }
};

template <class In, class Fn>
class ActiveReaderEndpoint : public ActiveReader<In> {
private:
    Fn f_;   // The functor which actually does the work
public:
    ActiveReaderEndpoint(Fn f) : f_(f) {}
    void go() { while (TRUE) f_(demand()); }
};

template <class In, class Fn>
class PassiveReaderEndpoint : public PassiveReader<In> {
private:
    Fn f_;   // The functor which actually does the work
public:
    PassiveReaderEndpoint(Fn f) : f_(f) {}
    void receive(In in) { f_(in); }
};
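Finally, as a sketch of how the fully threaded browser configuration from the Motivation section might be assembled from these classes, the fragment below combines the endpoint and Xform templates above with the hypothetical SyncQueuePE sketched earlier, the functors and data types from the Sample Code section, and std::thread for the threads of control. The threading interface is our assumption; the pattern itself deliberately leaves the threading model open.

#include <thread>

// Sketch only: assumes the Bytes/HTMLtree/FormatTree types and the functors
// from the Sample Code section, the templates above, and the hypothetical
// SyncQueuePE. The typedef and function names below are ours.
typedef ActiveWriterEndpoint<Bytes,NetworkInterface>                    NetPE;
typedef ActiveReaderActiveWriterXform<Bytes,HTMLtree,DocInterpreter>    InterpPE;
typedef ActiveReaderActiveWriterXform<HTMLtree,FormatTree,DocFormatter> FormatPE;
typedef ActiveReaderEndpoint<FormatTree,DocDisplayer>                   DisplayPE;

void runThreadedBrowser()
{
    NetPE     *net     = new NetPE(NetworkInterface());
    InterpPE  *interp  = new InterpPE(DocInterpreter());
    FormatPE  *format  = new FormatPE(DocFormatter());
    DisplayPE *display = new DisplayPE(DocDisplayer());

    SyncQueuePE<Bytes>      *bytesQ  = new SyncQueuePE<Bytes>;
    SyncQueuePE<HTMLtree>   *htmlQ   = new SyncQueuePE<HTMLtree>;
    SyncQueuePE<FormatTree> *formatQ = new SyncQueuePE<FormatTree>;

    // Every active interface connects to a passive queue interface.
    connect(net,     bytesQ);    // ActiveWriter<Bytes>       -> PassiveReader<Bytes>
    connect(bytesQ,  interp);    // PassiveWriter<Bytes>      -> ActiveReader<Bytes>
    connect(interp,  htmlQ);     // ActiveWriter<HTMLtree>    -> PassiveReader<HTMLtree>
    connect(htmlQ,   format);    // PassiveWriter<HTMLtree>   -> ActiveReader<HTMLtree>
    connect(format,  formatQ);   // ActiveWriter<FormatTree>  -> PassiveReader<FormatTree>
    connect(formatQ, display);   // PassiveWriter<FormatTree> -> ActiveReader<FormatTree>

    // One thread of control per active processing element; the go() loops never return.
    std::thread t1([net]     { net->go(); });
    std::thread t2([interp]  { interp->go(); });
    std::thread t3([format]  { format->go(); });
    std::thread t4([display] { display->go(); });
    t1.join(); t2.join(); t3.join(); t4.join();
}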
