Manual: Trace and Monitoring Support

From nsnam
Revision as of 16:25, 7 February 2010 by Lobsang (explaining how to define the flow id for an agent)


Chapter 26

Previous Chapter "Mathematical Support" | Index | Next Chapter "Test Suite Support"

The procedures and functions described in this chapter can be found in ~ns/trace/trace.{cc, h}, ~ns/tcl/lib/ns-trace.tcl, ~ns/tools/queue-monitor.{cc, h}, ~ns/tcl/lib/ns-link.tcl, ~ns/common/packet.h, ~ns/tools/, and ~ns/classifier/

There are a number of ways of collecting output or trace data from a simulation. Generally, trace data is either displayed directly during execution of the simulation, or (more commonly) stored in a file to be post-processed and analyzed. There are two primary but distinct types of monitoring capabilities currently supported by the simulator. The first, called traces, records each individual packet as it arrives at, departs from, or is dropped at a link or queue. Trace objects are configured into a simulation as nodes in the network topology, usually with a Tcl "Channel" object hooked to them, representing the destination of collected data (typically a trace file in the current directory). The second, called monitors, records counts of various interesting quantities such as packet and byte arrivals, departures, etc. Monitors can track counts associated with all packets, or on a per-flow basis using a flow monitor.

To support traces, there is a special common header included in each packet (this format is defined in ~ns/common/packet.h as hdr_cmn). It presently includes a unique identifier on each packet, a packet type field (set by agents when they generate packets), a packet size field (in bytes, used to determine the transmission time for packets), and an interface label (used for computing multicast distribution trees).

Monitors are supported by a separate set of objects that are created and inserted into the network topology around queues. They provide a place where arrival statistics and times are gathered, and they make use of the class Integrator to compute statistics over time intervals.

Trace Support

The trace support in OTcl consists of a number of specialized classes visible in OTcl but implemented in C++, combined with a set of Tcl helper procedures and classes defined in the ns library.

All of the following OTcl classes are supported by underlying C++ classes defined in ~ns/trace/. Objects of the following types are inserted directly in-line in the network topology:

Trace/Hop trace a “hop” (XXX what does this mean exactly; it is not really used XXX)
Trace/Enque a packet arrival (usually at a queue)
Trace/Deque a packet departure (usually at a queue)
Trace/Drop packet drop (packet delivered to drop-target)
Trace/Recv packet receive event at the destination node of a link
SnoopQueue/In on input, collect a time/size sample (pass packet on)
SnoopQueue/Out on output, collect a time/size sample (pass packet on)
SnoopQueue/Drop on drop, collect a time/size sample (pass packet on)
SnoopQueue/EDrop on an "early" drop, collect a time/size sample (pass packet on)

Objects of the following types are added in the simulation and are referenced by the objects listed above. They are used to aggregate statistics collected by the SnoopQueue objects:

QueueMonitor receive and aggregate collected samples from snoopers
QueueMonitor/ED queue-monitor capable of distinguishing between “early” and standard packet drops
QueueMonitor/ED/Flowmon per-flow statistics monitor (manager)
QueueMonitor/ED/Flow per-flow statistics container
QueueMonitor/Compat a replacement for a standard QueueMonitor when ns v1 compatibility is in use

OTcl Helper Functions

The following helper functions may be used within simulation scripts to help in attaching trace elements (see ~ns/tcl/lib/ns-lib.tcl); they are instance procedures of the class Simulator:

flush-trace {} flush buffers for all trace objects in simulation
create-trace { type file src dst } create a trace object of type type between the given src and dest nodes. If file is non-null, it is interpreted as a Tcl channel and is attached to the newly-created trace object. The procedure returns the handle to the newly created trace object.
trace-queue { n1 n2 file } arrange for tracing on the link between nodes n1 and n2. This function calls create-trace, so the same rules apply with respect to the file argument.
trace-callback{ ns command } arranges to call command when a line is to be traced. The procedure treats command as a string and evaluates it for every line traced. See ~ns/tcl/ex/callback_demo.tcl for additional details on usage.
monitor-queue { n1 n2 qtrace [Sample_interval]} calls the init-monitor function on the link between nodes n1 and n2. qtrace is a file handle opened with open to which data are recorded every Sample_interval (default: 0.1s).
drop-trace { n1 n2 trace } the given trace object is made the drop-target of the queue associated with the link between nodes n1 and n2.

The create-trace{} procedure is used to create a new Trace object of the appropriate kind and attach a Tcl I/O channel to it (typically a file handle). The src_ and dst_ fields are used by the underlying C++ object for producing the trace output file, so that trace output can include the node addresses defining the endpoints of the link being traced. Note that they are not used for matching. Specifically, these values in no way relate to the packet header src and dst fields, which are also displayed when tracing. See the description of the Trace class below.

The trace-queue function enables Enque, Deque, and Drop tracing on the link between nodes n1 and n2. The Link trace procedure is described below.

The monitor-queue function is constructed similarly to trace-queue. By calling the link’s init-monitor procedure, it arranges for the creation of objects (SnoopQueue and QueueMonitor objects) which can, in turn, be used to ascertain time-aggregated queue statistics.

The drop-trace function provides a way to specify a Queue’s drop target without having a direct handle of the queue.

Library support and examples

The Simulator procedures described above require the trace and init-monitor methods associated with the OTcl Link class. Several subclasses of link are defined, the most common of which is called SimpleLink. Thus, the trace and init-monitor methods are actually part of the SimpleLink class rather than the Link base class. The trace function is defined as follows (in ns-link.tcl):

# Build trace objects for this link and
# update the object linkage
SimpleLink instproc trace { ns f } {
  $self instvar enqT_ deqT_ drpT_ queue_ link_ head_ fromNode_ toNode_
  $self instvar drophead_

  set enqT_ [$ns create-trace Enque $f $fromNode_ $toNode_]
  set deqT_ [$ns create-trace Deque $f $fromNode_ $toNode_]
  set drpT_ [$ns create-trace Drop $f $fromNode_ $toNode_]

  $drpT_ target [$drophead_ target]
  $drophead_ target $drpT_
  $queue_ drop-target $drpT_

  $deqT_ target [$queue_ target]
  $queue_ target $deqT_

  if { [$head_ info class] == "networkinterface" } {
     $enqT_ target [$head_ target]
     $head_ target $enqT_
     # puts "head is i/f"
  } else {
     $enqT_ target $head_
     set head_ $enqT_
     # puts "head is not i/f"
  }
}
This function establishes Enque, Deque, and Drop traces in the simulator $ns and directs their output to I/O handle $f. The function assumes a queue has been associated with the link. It operates by first creating three new trace objects and inserting the Enque object before the queue, the Deque object after the queue, and the Drop object between the queue and its previous drop target. Note that all trace output is directed to the same I/O handle.

This function performs one additional task: it checks whether the link contains a network interface, and if so, leaves it as the first object in the chain of objects in the link; otherwise, it inserts the Enque object as the first one.

The following functions, init-monitor and attach-monitors, are used to create a set of objects used to monitor queue sizes of a queue associated with a link. They are defined as follows:

SimpleLink instproc attach-monitors { insnoop outsnoop dropsnoop qmon } {
   $self instvar queue_ head_ snoopIn_ snoopOut_ snoopDrop_
   $self instvar drophead_ qMonitor_

   set snoopIn_ $insnoop
   set snoopOut_ $outsnoop
   set snoopDrop_ $dropsnoop

   $snoopIn_ target $head_
   set head_ $snoopIn_

   $snoopOut_ target [$queue_ target]
   $queue_ target $snoopOut_

   $snoopDrop_ target [$drophead_ target]
   $drophead_ target $snoopDrop_

   $snoopIn_ set-monitor $qmon
   $snoopOut_ set-monitor $qmon
   $snoopDrop_ set-monitor $qmon
   set qMonitor_ $qmon
}

# Insert objects that allow us to monitor the queue size
# of this link. Return the name of the object that
# can be queried to determine the average queue size.
SimpleLink instproc init-monitor { ns qtrace sampleInterval} {
    $self instvar qMonitor_ ns_ qtrace_ sampleInterval_

    set ns_ $ns
    set qtrace_ $qtrace
    set sampleInterval_ $sampleInterval
    set qMonitor_ [new QueueMonitor]

    $self attach-monitors [new SnoopQueue/In] \
        [new SnoopQueue/Out] [new SnoopQueue/Drop] $qMonitor_

    set bytesInt_ [new Integrator]
    $qMonitor_ set-bytes-integrator $bytesInt_
    set pktsInt_ [new Integrator]
    $qMonitor_ set-pkts-integrator $pktsInt_
    return $qMonitor_
}

These functions establish queue monitoring on the SimpleLink object in the simulator ns. Queue monitoring is implemented by constructing three SnoopQueue objects and one QueueMonitor object. The SnoopQueue objects are linked in around a Queue in a way similar to Trace objects. The SnoopQueue/In (Out) object monitors packet arrivals (departures) and reports them to an associated QueueMonitor object. In addition, a SnoopQueue/Drop object is used to accumulate packet drop statistics to an associated QueueMonitor object. For init-monitor the same QueueMonitor object is used in all cases. The C++ definitions of the SnoopQueue and QueueMonitor classes are described below.

The C++ Trace Class

Underlying C++ objects are created in support of the interface specified in this section and are linked into the network topology as network elements. The single C++ Trace class is used to implement the OTcl classes Trace/Hop, Trace/Enque, Trace/Deque, and Trace/Drop. The type_ field is used to differentiate among the various types of traces any particular Trace object might implement. Currently, this field may contain one of the following symbolic characters: + for enque, - for deque, h for hop, and d for drop. The overall class is defined as follows in ~ns/trace/trace.h:

class Trace : public Connector {
   int type_;
   nsaddr_t src_;
   nsaddr_t dst_;
   Tcl_Channel channel_;
   int callback_;
   char wrk_[256];
   void format(int tt, int s, int d, Packet* p);
   void annotate(const char* s);
   int show_tcphdr_; // bool flags; backward compat
   Trace(int type);
   int command(int argc, const char*const* argv);
   void recv(Packet* p, Handler*);
   void dump();
   inline char* buffer() { return (wrk_); }
};

The src_ and dst_ internal state is used to label trace output and is independent of the corresponding field names in packet headers. The main recv() method is defined as follows:

void Trace::recv(Packet* p, Handler* h)
{
   format(type_, src_, dst_, p);
   dump();
   /* hack: if trace object not attached to anything, free packet */
   if (target_ == 0)
      Packet::free(p);
   else
      send(p, h); /* Connector::send() */
}

The function merely formats a trace entry using the source, destination, and particular trace type character. The dump function writes the formatted entry out to the I/O handle associated with channel_. The format function, in effect, dictates the trace file format.

Trace File Format

The Trace::format() method defines the trace file format used in trace files produced by the Trace class. It is constructed to maintain backward compatibility with output files in earlier versions of the simulator (i.e., ns v1) so that ns v1 post-processing scripts continue to operate. The important pieces of its implementation are as follows:

// this function should retain some backward-compatibility, so that
// scripts don’t break.
void Trace::format(int tt, int s, int d, Packet* p)
{
   hdr_cmn *th = (hdr_cmn*)p->access(off_cmn_);
   hdr_ip *iph = (hdr_ip*)p->access(off_ip_);
   hdr_tcp *tcph = (hdr_tcp*)p->access(off_tcp_);
   hdr_rtp *rh = (hdr_rtp*)p->access(off_rtp_);
   packet_t t = th->ptype();
   const char* name =;

   if (name == 0)
      abort();

   int seqno;
   /* XXX */
   /* CBR’s now have seqno’s too */
   if (t == PT_RTP || t == PT_CBR)
      seqno = rh->seqno();
   else if (t == PT_TCP || t == PT_ACK)
      seqno = tcph->seqno();
   else
      seqno = -1;
   ...
   if (!show_tcphdr_) {
      sprintf(wrk_, "%c %g %d %d %s %d %s %d %d.%d %d.%d %d %d",
         tt,
         Scheduler::instance().clock(),
         s,
         d,
         name,
         th->size(),
         flags,
         iph->flowid() /* was p->class_ */,
         iph->src() >> 8, iph->src() & 0xff, // XXX
         iph->dst() >> 8, iph->dst() & 0xff, // XXX
         seqno,
         th->uid() /* was p->uid_ */);
   } else {
      sprintf(wrk_, "%c %g %d %d %s %d %s %d %d.%d %d.%d %d %d %d 0x%x %d",
         tt,
         Scheduler::instance().clock(),
         s,
         d,
         name,
         th->size(),
         flags,
         iph->flowid() /* was p->class_ */,
         iph->src() >> 8, iph->src() & 0xff, // XXX
         iph->dst() >> 8, iph->dst() & 0xff, // XXX
         seqno,
         th->uid(), /* was p->uid_ */
         tcph->ackno(),
         tcph->flags(),
         tcph->hlen());
   }
}
This function is somewhat inelegant, primarily due to the desire to maintain backward compatibility. It formats the source, destination, and type fields defined in the trace object (not in the packet headers) and the current time, along with various packet header fields including the type of packet (as a name), size, flags (symbolically), flow identifier, source and destination packet header fields, sequence number (if present), and unique identifier. The show_tcphdr_ variable indicates whether the trace output should append TCP header information (ack number, flags, header length) at the end of each output line. This is especially useful for simulations using FullTCP agents. An example of a trace file (without the TCP header fields) might appear as follows:

+ 1.84375 0 2 cbr 210 ------- 0 0.0 3.1 225 610
- 1.84375 0 2 cbr 210 ------- 0 0.0 3.1 225 610
r 1.84471 2 1 cbr 210 ------- 1 3.0 1.0 195 600
r 1.84566 2 0 ack 40 ------- 2 3.2 0.1 82 602
+ 1.84566 0 2 tcp 1000 ------- 2 0.1 3.2 102 611
- 1.84566 0 2 tcp 1000 ------- 2 0.1 3.2 102 611
r 1.84609 0 2 cbr 210 ------- 0 0.0 3.1 225 610
+ 1.84609 2 3 cbr 210 ------- 0 0.0 3.1 225 610
d 1.84609 2 3 cbr 210 ------- 0 0.0 3.1 225 610
- 1.8461 2 3 cbr 210 ------- 0 0.0 3.1 192 511
r 1.84612 3 2 cbr 210 ------- 1 3.0 1.0 196 603
+ 1.84612 2 1 cbr 210 ------- 1 3.0 1.0 196 603
- 1.84612 2 1 cbr 210 ------- 1 3.0 1.0 196 603
+ 1.84625 3 2 cbr 210 ------- 1 3.0 1.0 199 612

Here we see 14 trace entries, five enque operations (indicated by “+” in the first column), four deque operations (indicated by “-”), four receive events (indicated by “r”), and one drop event. (This had better be a trace fragment, or some packets would have just vanished!) The simulated time (in seconds) at which each event occurred is listed in the second column. The next two fields indicate between which two nodes tracing is happening. The next field is a descriptive name for the type of packet seen. The next field is the packet’s size, as encoded in its IP header.

The next field contains the flags, which are not used in this example. The flags are defined in the flags[] array in ~ns/trace/ Four of the flags are used for ECN: “E” for Congestion Experienced (CE) and “N” for ECN-Capable-Transport (ECT) indications in the IP header, and “C” for ECN-Echo and “A” for Congestion Window Reduced (CWR) in the TCP header. Of the other flags, “P” is for priority and “F” is for TCP Fast Start.

The next field gives the IP flow identifier field as defined for IP version 6. <ref>In ns v1, each packet included a class field, which was used by CBQ to classify packets. It then found additional use to differentiate between “flows” at one trace point. In ns v2, the flow ID field is available for this purpose, but any additional information (which was commonly overloaded into the class field in ns v1) should be placed in its own separate field, possibly in some other header.</ref> The subsequent two fields indicate the packet’s source and destination node addresses, respectively. The following field indicates the sequence number.<ref>In ns v1, all packets contained a sequence number, whereas in ns v2 only those Agents interested in providing sequencing will generate sequence numbers. Thus, this field may not be useful in ns v2 for packets generated by agents that have not filled in a sequence number. It is used here to remain backward compatible with ns v1.</ref> The last field is a unique packet identifier. Each new packet created in the simulation is assigned a new, unique identifier.

Packet Types

Each packet contains a packet type field used by Trace::format to print out the type of packet encountered. The type field is defined in the TraceHeader class, and is considered to be part of the trace support; it is not interpreted elsewhere in the simulator. Initialization of the type field in packets is performed by the Agent::allocpkt(void) method. The type field is set to integer values associated with the definition passed to the Agent constructor (Section 10.6.3). The currently-supported definitions, their values, and their associated symbolic names are as follows (defined in ~ns/common/packet.h):

enum packet_t {
/* simple signalling messages */
PT_LIVE,// packet from live network
PT_TELNET,// not needed: telnet use TCP
/* new encapsulator */
/* CMU/Monarch’s extnsions */
// insert new packet types here
PT_NTYPE // This MUST be the LAST one
};

The constructor of class p_info glues these constants with their string values:

p_info() {
name_[PT_TCP]= "tcp";
name_[PT_UDP]= "udp";
name_[PT_CBR]= "cbr";
name_[PT_AUDIO]= "audio";
name_[PT_NTYPE]= "undefined";
}

See also section 12.2.2 for more details.

Queue Monitoring

Queue monitoring refers to the capability of tracking the dynamics of packets at a queue (or other object). A queue monitor tracks packet arrival/departure/drop statistics, and may optionally compute averages of these values. Monitoring may be applied to all packets (aggregate statistics), or per-flow statistics (using a Flow Monitor).

Several classes are used in supporting queue monitoring. When a packet arrives at a link where queue monitoring is enabled, it generally passes through a SnoopQueue object when it arrives and leaves (or is dropped). These objects contain a reference to a QueueMonitor object.

A QueueMonitor is defined as follows (see ~ns/tools/queue-monitor.h):

class QueueMonitor : public TclObject {
   QueueMonitor() : bytesInt_(NULL), pktsInt_(NULL), delaySamp_(NULL),
   size_(0), pkts_(0),
   parrivals_(0), barrivals_(0),
   pdepartures_(0), bdepartures_(0),
   pdrops_(0), bdrops_(0),
   srcId_(0), dstId_(0), channel_(0) {
       bind("size_", &size_);
       bind("pkts_", &pkts_);
       bind("parrivals_", &parrivals_);
       bind("barrivals_", &barrivals_);
       bind("pdepartures_", &pdepartures_);
       bind("bdepartures_", &bdepartures_);
       bind("pdrops_", &pdrops_);
       bind("bdrops_", &bdrops_);
       bind("off_cmn_", &off_cmn_);
   }
   int size() const { return (size_); }
   int pkts() const { return (pkts_); }
   int parrivals() const { return (parrivals_); }
   int barrivals() const { return (barrivals_); }
   int pdepartures() const { return (pdepartures_); }
   int bdepartures() const { return (bdepartures_); }
   int pdrops() const { return (pdrops_); }
   int bdrops() const { return (bdrops_); }
   void printStats();
   virtual void in(Packet*);
   virtual void out(Packet*);
   virtual void drop(Packet*);
   virtual void edrop(Packet*) { abort(); }; // not here
   virtual int command(int argc, const char*const* argv);
};

// packet arrival to a queue
void QueueMonitor::in(Packet* p)
{
       hdr_cmn* hdr = (hdr_cmn*)p->access(off_cmn_);
       double now = Scheduler::instance().clock();
       int pktsz = hdr->size();
        barrivals_ += pktsz;
        parrivals_++;
        size_ += pktsz;
        pkts_++;
       if (bytesInt_)
           bytesInt_->newPoint(now, double(size_));
       if (pktsInt_)
           pktsInt_->newPoint(now, double(pkts_));
        if (delaySamp_)
            hdr->timestamp() = now;
        if (channel_)
            printStats();
}
// ... out() and drop() are defined similarly ...

In addition to the packet and byte counters, a queue monitor may optionally refer to objects that keep an integral of the queue size over time using Integrator objects. The Integrator class provides a simple implementation of integral approximation by discrete sums.

All bound variables beginning with p refer to packet counts, and all variables beginning with b refer to byte counts. The variable size_ records the instantaneous queue size in bytes, and the variable pkts_ records the same value in packets. When a QueueMonitor is configured to include the integral functions (on bytes or packets or both), it computes the approximate integral of the queue size (in bytes) with respect to time over the interval [t0, now], where t0 is either the start of the simulation or the last time the sum_ field of the underlying Integrator class was reset.

The QueueMonitor class is not derived from Connector, and is not linked directly into the network topology. Rather, objects of the SnoopQueue class (or its derived classes) are inserted into the network topology, and these objects contain references to an associated queue monitor. Ordinarily, multiple SnoopQueue objects will refer to the same queue monitor. Objects constructed out of these classes are linked in the simulation topology as described above and call QueueMonitor out, in, or drop procedures, depending on the particular type of snoopy queue.

Per-Flow Monitoring

A collection of specialized classes is used to implement per-flow statistics gathering. These classes include QueueMonitor/ED/Flowmon, QueueMonitor/ED/Flow, and Classifier/Hash. Typically, an arriving packet is inspected to determine to which flow it belongs. This inspection and flow mapping is performed by a classifier object. Once the correct flow is determined, the packet is passed to a flow monitor, which is responsible for collecting per-flow state. Per-flow state is contained in flow objects in a one-to-one relationship to the flows known by the flow monitor. Typically, a flow monitor will create flow objects on demand when packets arrive that cannot be mapped to an already-known flow.

Statistics of individual flows, such as cwnd_, can also be obtained by tracing variables of the flows directly.

The Flow Monitor

The QueueMonitor/ED/Flowmon class is responsible for managing the creation of new flow objects when packets arrive on previously unknown flows and for updating existing flow objects. Because it is a subclass of QueueMonitor, each flow monitor contains an aggregate count of packet and byte arrivals, departures, and drops. Thus, it is not necessary to create a separate queue monitor to record aggregate statistics. It provides the following OTcl interface:

classifier get(set) classifier to map packets to flows
attach attach a Tcl I/O channel to this monitor
dump dump contents of flow monitor to Tcl channel
flows return string of flow object names known to this monitor

The classifier function sets or gets the name of the previously-allocated object which will perform packet-to-flow mapping for the flow monitor. Typically, the type of classifier used will have to do with the notion of “flow” held by the user. One of the hash based classifiers that inspect various IP-level header fields is typically used here (e.g. fid, src/dst, src/dst/fid). Note that while classifiers usually receive packets and forward them on to downstream objects, the flow monitor uses the classifier only for its packet mapping capability, so the flow monitor acts as a passive monitor only and does not actively forward packets.

The attach and dump functions are used to associate a Tcl I/O stream with the flow monitor, and dump its contents on-demand. The file format used by the dump command is described below.

The flows function returns a list of the names of flows known by the flow monitor in a way understandable to Tcl. This allows Tcl code to interrogate a flow monitor in order to obtain handles to the individual flows it maintains.

Flow Monitor Trace Format

The flow monitor defines a trace format which may be used by post-processing scripts to determine various counts on a per-flow basis. The format is defined by the following code in ~ns/tools/

void FlowMon::fformat(Flow* f)
{
    double now = Scheduler::instance().clock();
    sprintf(wrk_, "%8.3f %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d %d",
        now,
       f->flowid(), // flowid
       0, // category
       f->ptype(), // type (from common header)
       f->flowid(), // flowid (formerly class)
       f->parrivals(), // arrivals this flow (pkts)
       f->barrivals(), // arrivals this flow (bytes)
       f->epdrops(), // early drops this flow (pkts)
       f->ebdrops(), // early drops this flow (bytes)
       parrivals(), // all arrivals (pkts)
       barrivals(), // all arrivals (bytes)
       epdrops(), // total early drops (pkts)
       ebdrops(), // total early drops (bytes)
       pdrops(), // total drops (pkts)
       bdrops(), // total drops (bytes)
       f->pdrops(), // drops this flow (pkts) [includes edrops]
        f->bdrops()); // drops this flow (bytes) [includes edrops]
}

Most of the fields are explained in the code comments. The “category” field is historical, but is used to maintain loose backward compatibility with the flow manager format in ns version 1.

The Flow Class

The class QueueMonitor/ED/Flow is used by the flow monitor for containing per-flow counters. As a subclass of QueueMonitor, it inherits the standard counters for arrivals, departures, and drops, both in packets and bytes. In addition, because each flow is typically identified by some combination of the packet source, destination, and flow identifier fields, these objects contain such fields. Its OTcl interface contains only bound variables:

src_ source address on packets for this flow
dst_ destination address on packets for this flow
flowid_ flow id on packets for this flow

Note that packets may be mapped to flows (by classifiers) using criteria other than a src/dst/flowid triple. In such circumstances, only those fields actually used by the classifier in performing the packet-flow mapping should be considered reliable.


TODO: Could someone who actually knows how to use makeflowmon etc please fill in this documentation.

To monitor the queue at a link between nodes n1 and n2, the following incantation can be used

# Specify which (existing) link to monitor
set slink [$ns link $n1 $n2]
# Create monitor
set fmon [$ns makeflowmon Fid]
$ns attach-fmon $slink $fmon

The monitor fmon then records the following statistics:

p(b)drops_ count of packets(bytes) discarded
p(b)departures_ count of packets(bytes) sent
p(b)arrivals_ count of packets(bytes) received (sent, dropped, or still in the queue)

which can be accessed by commands such as

set drop_count [ $fmon set pdrops_ ]

The per-flow information can be accessed as follows:

The flow ID can be set on an agent and then used in the following code (assume $udp is a UDP agent created previously):

# Define a unique flowid for the UDP agent:
$udp set fid_ 1
set fcl [$fmon classifier]
# The last value "1" is the flow ID previously set:
set flow [$fcl lookup auto 0 0 1]
set queue_size [$flow set pkts_ ]

Commands at a glance

Following is a list of trace related commands commonly used in simulation scripts:

$ns_ trace-all <tracefile>

This is the command used to set up tracing in ns. All traces are written to <tracefile>.

$ns_ namtrace-all <namtracefile>

This command sets up nam tracing in ns. All nam traces are written to <namtracefile>.

$ns_ namtrace-all-wireless <namtracefile> <X> <Y>

This command sets up wireless nam tracing. <X> and <Y> are the x-y co-ordinates for the wireless topology and all wireless nam traces are written into the <namtracefile>.

$ns_ nam-end-wireless <stoptime>

This tells nam the simulation stop time given in <stoptime>.

$ns_ trace-all-satlinks <tracefile>

This is a method to trace satellite links and write traces into <tracefile>.

$ns_ flush-trace

This command flushes the trace buffer and is typically called before the simulation run ends.

$ns_ get-nam-traceall

Returns the namtrace file descriptor stored as the Simulator instance variable called namtraceAllFile_.

$ns_ get-ns-traceall

Similar to get-nam-traceall. This returns the file descriptor for ns tracefile which is stored as the Simulator instance called traceAllFile_.

$ns_ create-trace <type> <file> <src> <dst> <optional:op>

This command creates a trace object of type <type> between the <src> and <dst> nodes. The traces are written into <file>. <op> is an optional argument that may be used to specify the type of trace, such as nam. If <op> is not specified, the default trace object created is for ns traces.

$ns_ trace-queue <n1> <n2> <optional:file>

This is a wrapper method for create-trace. This command creates a trace object for tracing events on the link represented by the nodes <n1> and <n2>.

$ns_ namtrace-queue <n1> <n2> <optional:file>

This is used to create a trace object for nam tracing on the link between nodes <n1> and <n2>. This method is the nam-trace counterpart of trace-queue.

$ns_ drop-trace <n1> <n2> <trace>

This command makes the given <trace> object a drop-target for the queue associated with the link between nodes <n1> and <n2>.

$ns_ monitor-queue <n1> <n2> <qtrace> <optional:sampleinterval>

This sets up a monitor that keeps track of average queue length of the queue on the link between nodes <n1> and <n2>. The default value of sampleinterval is 0.1.

$link trace-dynamics <ns> <fileID>

This traces the dynamics of this link and writes the output to the fileID filehandle. ns is an instance of the Simulator or MultiSim object that was created to invoke the simulation.

The trace file format is backward compatible with the output files of the ns version 1 simulator, so that ns-1 post-processing scripts can still be used. Trace records of traffic for link objects with Enque, Deque, Receive, or Drop tracing have the following form:



<code> <time> <hsrc> <hdst> <packet>

<code> := [hd+-r] h=hop d=drop +=enque -=deque r=receive
<time> := simulation time in seconds
<hsrc> := first node address of hop/queuing link
<hdst> := second node address of hop/queuing link
<packet> := <type> <size> <flags> <flowID> <src.sport> <dst.dport> <seq> <pktID>
<type> := tcp|telnet|cbr|ack etc.
<size> := packet size in bytes
<flags> := [CP] C=congestion, P=priority
<flowID> := flow identifier field as defined for IPv6
<src.sport> := transport address (src=node, sport=agent)
<dst.dport> := transport address (dst=node, dport=agent)
<seq> := packet sequence number
<pktID> := unique identifier for every new packet

Only those agents interested in providing sequencing will generate sequence numbers and hence this field may not be useful for packets generated by some agents. For links that use RED gateways, there are additional trace records as follows:

<code> <time> <value>


<code> := [Qap] Q=queue size, a=average queue size, p=packet dropping probability
<time> := simulation time in seconds
<value> := value

Trace records for link dynamics are of the form:

<code> <time> <state> <src> <dst>


<code> := [v]
<time> := simulation time in seconds
<state> := [link-up | link-down]
<src> := first node address of link
<dst> := second node address of link
