Code Processors

As seen in the previous sections, the main entity used to analyse programs is called a code processor and is represented by the otawa::proc::Processor class. We have also shown how to use code processors to perform WCET computation. This section presents how to extend OTAWA by developing new code processors.

As a running example, all along this section, we show how to build a code processor that counts the number of instructions in each basic block and stores the result in a property called INSTRUCTION_COUNT.

Definition

The usual way to add an analysis to OTAWA is to add a code processor. To write a code processor, one has to:

  • choose a name,
  • define the required features,
  • define the provided features,
  • implement the algorithm performing the analysis.

A code processor has to extend the class otawa::Processor (or one of its subclasses), define a registration that records the code processor in the OTAWA framework, and overload some functions to run the analysis algorithm:

#include <otawa/proc.h>
using namespace otawa;
 
class InstructionCounter: public Processor {
public:
	static p::declare reg;
	InstructionCounter();
 
protected:
	void processWorkSpace(WorkSpace *ws) override;	
};
 
InstructionCounter::InstructionCounter(): Processor(reg) {
  ...
}
 
p::declare InstructionCounter::reg = p::init("InstructionCounter", Version(1, 0, 0))
	.make<InstructionCounter>()
	.extend<Processor>()
	.require(COLLECTED_CFG_FEATURE)
	.provide(INSTRUCTION_COUNT_FEATURE);

Notice that the class declaration and the constructor definition are usually placed, respectively, in a .h header file and in a .cpp source file. The constructor often contains little or nothing: its main role is to pass the registration description to the otawa::Processor class. It is also advised to use the fully qualified C++ name as the processor name, to benefit from the OTAWA facilities that run code processors from scripts or from the operform utility.

The registration InstructionCounter::reg is a static variable that (a) is of type p::declare and (b) needs to be defined separately in the source file. The code processor name and version are passed to the p::init() constructor which then supports several configuration calls:

  • make<T>() provides the class of the code processor to the registration,
  • extend<T>() composes the registration with the one of the base class,
  • require(F) records that the code processor requires the feature F before it is run,
  • provide(F) informs that the code processor builds the feature F,
  • use(F) records that the code processor needs the feature F at analysis time but the dependency is released afterwards (see the variant below).

As many require(), provide() and use() clauses as needed can be chained.
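
For instance, a variant of the registration above could also declare a feature that is only needed while the analysis runs (LOOP_INFO_FEATURE is used here for illustration only):

p::declare InstructionCounter::reg = p::init("InstructionCounter", Version(1, 0, 0))
	.make<InstructionCounter>()
	.extend<Processor>()
	.require(COLLECTED_CFG_FEATURE)
	.use(LOOP_INFO_FEATURE)		// only needed during the analysis
	.provide(INSTRUCTION_COUNT_FEATURE);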

The function Processor::processWorkSpace() has to be overridden to implement the algorithm of the analysis as below.

void InstructionCounter::processWorkSpace(WorkSpace *ws) {  
	const CFGCollection *cfgs = COLLECTED_CFG_FEATURE.get(ws);
	for(auto g: *cfgs)
		for(auto b: *g)
			if(b->isBasic()) {
				int c = 0;
				for(auto i: *b->toBasic())
					c++;
				INSTRUCTION_COUNT(b) = c;
			}
}

In short, the algorithm retrieves the collection of CFGs composing the current task (using the interface CFGCollection provided by COLLECTED_CFG_FEATURE). It traverses the blocks composing the CFGs and, for each BB, it counts the instructions and stores the result in a property whose identifier is INSTRUCTION_COUNT.

This supposes that somewhere a feature named INSTRUCTION_COUNT_FEATURE and an identifier named INSTRUCTION_COUNT are defined:

extern p::feature INSTRUCTION_COUNT_FEATURE;
extern p::id<int>  INSTRUCTION_COUNT;
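
Their definitions, usually placed in the source file, may look like the following sketch (passing p::make<InstructionCounter>() makes InstructionCounter the default code processor of the feature, which is what allows the feature to be simply required as shown later):

p::feature INSTRUCTION_COUNT_FEATURE("INSTRUCTION_COUNT_FEATURE", p::make<InstructionCounter>());
p::id<int> INSTRUCTION_COUNT("INSTRUCTION_COUNT", 0);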

As iterating on the CFGs and their basic blocks is a very frequent and tedious task, OTAWA provides helper processors (like the class BBProcessor) that perform this traversal automatically. One only has to override a function named processBB() as below:

class InstructionCounter: public BBProcessor {
public:
	...
 
protected:
	void processBB(WorkSpace *ws, CFG *cfg, Block *b) override;
};
 
void InstructionCounter::processBB(WorkSpace *ws, CFG *cfg, Block *b) {
	if(b->isBasic()) {
		int c = 0;
		for(auto i: *b->toBasic())
			c++;
		INSTRUCTION_COUNT(b) = c;
	}
}

There exist several helper processors adapted to the different program representations. Some of them are listed below:

  • CFGProcessor: iterates on the involved CFGs (see the sketch below),
  • BBProcessor: iterates on the involved basic blocks,
  • cache::LBlockProcessor: iterates on the involved L-blocks (code blocks mapped onto cache blocks).
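
As an illustration, a per-CFG version of the instruction counter based on CFGProcessor could look like the sketch below (CFG_INSTRUCTION_COUNT is an identifier invented for this example):

class CFGInstructionCounter: public CFGProcessor {
public:
	...
 
protected:
	void processCFG(WorkSpace *ws, CFG *g) override {
		int c = 0;
		for(auto b: *g)
			if(b->isBasic())
				for(auto i: *b->toBasic())
					c++;
		// one count per CFG, stored with an identifier invented for the example
		CFG_INSTRUCTION_COUNT(g) = c;
	}
};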

There are different ways to invoke a code processor (that is, to make its provided features available). The simplest way is to use the workspace function run():

	WorkSpace *ws = ...;
	ws->run<InstructionCounter>();

If InstructionCounter is the default code processor of the feature INSTRUCTION_COUNT_FEATURE, the feature may simply be required from the workspace:

	WorkSpace *ws = ...;
	ws->require(INSTRUCTION_COUNT_FEATURE);

Code processor facilities

In addition to providing an interface that connects analyses to the OTAWA framework, code processors provide several facilities that help write and run the analyses they implement. These facilities have the form of member functions or variables that are only accessible inside the code processor:

  • isVerbose() – returns true if verbose messages are required by the configuration.
  • logFor(l) – returns true if logging is requested at level l (logging levels are sorted from least detailed to most detailed: LOG_PROC, LOG_FILE, LOG_DEPS, LOG_FUN, LOG_BLOCK, LOG_INST; see the sketch below).
  • log – output channel for logging information (cerr by default).
  • out – output channel to display working information to the user (cout by default).
  • workspace() – returns the current workspace.
  • warn(m) – displays the warning message m.
  • record(s) – records a new statistics collector s in the statistics system managed by OTAWA.
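
For example, the instruction counter could report its progress at block level and emit a warning, as in the sketch below (the warning condition is purely illustrative):

void InstructionCounter::processBB(WorkSpace *ws, CFG *cfg, Block *b) {
	if(logFor(LOG_BLOCK))
		log << "\t\tcounting instructions in " << b << io::endl;
	if(!b->isBasic())
		warn("skipping a non-basic block");
	...
}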

To raise an error that stops the analysis, one has to throw an exception of type otawa::Exception. This exception carries a message expressing the cause of the error. New exception classes may be built by extending this class.

void MyCodeProcessor::processBB(WorkSpace *ws, CFG *g, Block *b) {
	if(error_or(b))
		throw otawa::Exception("my error message");
}

Details about the Processing

As shown above, the otawa::proc::Processor class is the base class used to implement a code processor. It provides an interface to process workspaces and an interface to let actual analyses perform their work.

The classes Processor and WorkSpace are tightly coupled to work together and to let the analysis be performed. The work of the code processor is specialized by overriding member functions such as:

  • void configure(const PropList *props) – this function is called to configure the code processor with the property list passed when the analysis is invoked.
  • void setup(WorkSpace *workspace) – this function is called to set up the processor, just before the analysis: it may be used to perform resource allocation, for example.
  • void processWorkSpace(WorkSpace *workspace) – this function is called to perform the analysis on the given workspace.
  • void cleanup(WorkSpace *workspace) – this function is called just after the analysis to clean up allocated resources that are only useful for the analysis.
  • void destroy(WorkSpace *ws) – the code processor is alive as long as its provided features are alive: this function is called when a provided feature is invalidated, to release the resources and properties of the provided features.

The life cycle of a processor spans from its invocation (when it is run from the workspace, or triggered, directly or not, to provide a feature) until its invalidation (when the workspace is deleted or an invalidation is performed).

OTAWA ensures that the first four functions are always called in the following order when the analysis is invoked:

  1. configure()
  2. setup()
  3. processWorkSpace()
  4. cleanup()

When the code processor is invalidated, the following function is called:

  1. destroy()

When using a helper processor, setup() and cleanup() are good places to allocate and free resources used throughout the analysis. Notice that it is recommended to always call the configure() method of the parent class to let it initialize itself from the configuration properties. It is the parent class that, through the call to Processor::configure(), manages common services like verbosity, logging, time measurement and statistics gathering. If it is not called, these services will be unavailable.
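
The sketch below illustrates this pattern on an invented analysis: the configuration is forwarded to the parent class and a working buffer (counts, invented for the example) is allocated in setup() and released in cleanup(). The configure() signature follows the list above.

void MyAnalysis::configure(const PropList *props) {
	Processor::configure(props);	// let the parent class set up verbosity, logging, statistics...
	// read analysis-specific configuration identifiers here
}
 
void MyAnalysis::setup(WorkSpace *ws) {
	counts = new int[1024];		// hypothetical resource only needed during the analysis
}
 
void MyAnalysis::cleanup(WorkSpace *ws) {
	delete [] counts;		// released as soon as the analysis is done
	counts = nullptr;
}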

The destroy() function is a bit special and is related to the life cycle of a feature. When a feature is required, an implementing code processor is identified and created. The code processor is launched to perform the corresponding analysis and stays alive as long as the feature is provided in the workspace. As soon as the feature is invalidated, the code processor instance is destroyed and, just before that, the destroy() function is called.
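
Continuing the running example, destroy() is the natural place to remove the properties installed by the analysis, as in the sketch below:

void InstructionCounter::destroy(WorkSpace *ws) {
	for(auto g: *COLLECTED_CFG_FEATURE.get(ws))
		for(auto b: *g)
			if(b->isBasic())
				b->removeProp(INSTRUCTION_COUNT);
}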

Providing a feature with interface

The section Properties, features and code processors defines features with an interface (providing not only data but also computations). To define a code processor providing such a feature, one has to override the function Processor::interfaceFor(F), which takes as parameter the feature for which an interface is required. The general form of this function is given below:

class MyProcessor: public Processor {
public:
	...
	void *interfaceFor(const AbstractFeature &f) override {
		if(f == FEATURE1)
			return interface1;
		else if(f == FEATURE2)
			return interface2;
		...
		else
			return nullptr;
	}
};

Beware of C++ when returning a generic pointer void * that will be cast back to the actual interface type. This warning is particularly important if the interface is implemented by the code processor class itself and, therefore, this is returned. C++ implements multiple inheritance with a system of pointer offsets that is lost when the pointer is converted to void *. To prevent any issue, statically cast this to the interface type.
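
For instance, if the code processor class itself implements the interface of the provided feature (InstructionCountInterface is invented for the example), the safe form is:

void *MyProcessor::interfaceFor(const AbstractFeature& f) {
	if(f == INSTRUCTION_COUNT_FEATURE)
		// the static cast fixes the pointer offset induced by multiple inheritance
		return static_cast<InstructionCountInterface *>(this);
	else
		return nullptr;
}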

Updating/replacing a feature

Updating or replacing an existing feature raises special issues in property management: the code processor uses the properties put in place to implement a feature F and redefines these properties.

This issue is amplified as OTAWA promotes a “functional” approach in the management of properties: the result of the analysis is not modified in place but replaced by a new, improved version. This is the case of analyses changing the CFG: instead of adding or removing vertices and edges, new CFGs are rebuilt from the existing ones.

The issue is that the former provider of the replaced feature removes its properties when it is invalidated, that is, after the code processor providing the new version has performed its work and after any function involved in the analysis has been called.

To support the replacement of a feature, a specific function called commit() can be used. It is invoked just after the invalidation of the replaced feature, to let the code processor install its own properties.
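
A sketch of such a commit() function is given below for a hypothetical processor that recomputes the instruction counts of the running example; new_counts is an invented container of (block, count) pairs filled during processWorkSpace():

void InstructionRecounter::commit(WorkSpace *ws) {
	// only installed now, after the former provider has removed its own properties
	for(auto p: new_counts)
		INSTRUCTION_COUNT(p.fst) = p.snd;
	new_counts.clear();
}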

This is used a lot by CFG transformers that change the form of the CFG to improve the precision of the analysis: LoopUnroller, SubCFGBuilder, Virtualizer, DelayedBuilder, ConditionalRestructurer, etc. Moreover, a generic class, CFGTransformer, helps transform the CFG and already overrides and implements the commit() function.

Code processor plug-ins

Designing

The easiest way to extend OTAWA, that is to add new analyses, is to write code processors, and the easiest way to embed these analyses at run time in an OTAWA computation is to use plug-ins. A plug-in, in OTAWA, is just a dynamic library (as available in most desktop OSes) providing a particular hook (a function) to register the plug-in in OTAWA. Once the hook has been recognized by OTAWA, all analyses contained in the plug-in are automatically recorded in the OTAWA code processor database and ready to be used.

To retrieve plug-ins, code processors, identifiers and features, OTAWA uses the structure of their names. For example, the otawa::dcache::CATEGORY identifier of the OTAWA data cache plug-in is contained in the namespace otawa::dcache, in conformance with its name, but is also supposed to be in a plug-in named otawa/dcache (“::” is replaced by “/”). On Linux, this means that OTAWA will look in its plug-in directories for a dynamic library whose path is otawa/dcache.so.

Consequently, a very simple way to provide plug-ins to OTAWA in a consistent way is to choose a namespace for the content of a plug-in, for example my/plugin, and to declare all plug-in items inside it:

namespace my { namespace plugin {
 
	extern p::id<bool> MY_ID;
 
	extern p::feature MY_FEATURE;
 
	class MyAnalysis: public Processor {
	...
	};
 
} }

And then to group them in a plug-in whose relative path will be my/plugin. Identifiers, features and code processors then have to be named according to this structure: my::plugin::MY_ID for the identifier, my::plugin::MY_FEATURE for the feature and my::plugin::MyAnalysis for the code processor. This allows OTAWA tools like operform to automatically retrieve identifiers, features and code processors located in plug-ins.

For instance, operform can run MyAnalysis with a command like:

$ operform crc.elf process:my::plugin::MyAnalysis

The feature can also be invoked with:

$ operform crc.elf require:my::plugin::MY_FEATURE

operform can be very useful to invoke a particular sequence of analyses or features, but also to debug an analysis under development. Often, the support for a particular microarchitecture is provided as an analysis script that also uses this retrieval system to invoke the different analyses composing it. Scripts are described in the OTAWA Development Manual.

Compiling and installing

Let the sources of the plug-in be SOURCE1, SOURCE2, etc. A new source named hook.cpp is added to define the hook of the plug-in for OTAWA:

#include <otawa/proc/ProcessorPlugin.h>
using namespace otawa;
 
namespace my { namespace plugin {
 
class Plugin: public ProcessorPlugin {
public:
	Plugin(): ProcessorPlugin("my::plugin", Version(1, 0, 0), OTAWA_PROC_VERSION) { }
};
 
} }
 
static my::plugin::Plugin my_plugin;
ELM_PLUGIN(my_plugin, OTAWA_PROC_HOOK);

First, the source declares a new class for the plug-in with its name, its version and an identifier of the plug-in interface version, OTAWA_PROC_VERSION. Then a static instance of this class, my_plugin, is created and the hook is defined using an ELM macro with the code processor plug-in hook name, OTAWA_PROC_HOOK.

Compilation and installation, on Linux, can be done with a Makefile as:

NAME = plugin
NS = my
OTAWA_CONFIG = otawa-config
CXXFLAGS = $(shell $(OTAWA_CONFIG) --cflags)
LDFLAGS = $(shell $(OTAWA_CONFIG) --libs -r)
PLUG_DIR = $(shell $(OTAWA_CONFIG) --plugdir)
SOURCES = SOURCE1.cpp SOURCE2.cpp ... hook.cpp
OBJECTS = $(SOURCES:.cpp=.o)
 
all: $(NAME).so
 
$(NAME).so: $(OBJECTS)
	gcc -shared -o $(NAME).so $(OBJECTS) $(LDFLAGS)
 
install:
	mkdir -p $(PLUG_DIR)/$(NS)
	cp $(NAME).so $(PLUG_DIR)/$(NS)
 

Refer to the plugin appendix for a portable CMake plug-in builder script.

Using a plug-in in an application

Plug-ins can be used in scripts or with commands such as operform, and are interconnected to OTAWA using standard interfaces and features, but it may be useful to use a particular definition (identifier, feature, code processor or class) of a plug-in in a custom application.

For example, the following application uses the plug-in defined in the previous section:

#include <otawa/otawa.h>
#include <my/plugin.h>
using namespace otawa;
 
int main() {
	WorkSpace *ws = MANAGER.load("crc.elf");
	ws->require(COLLECTED_CFG_FEATURE);
	for(auto g: *COLLECTED_CFG_FEATURE.get(ws))
		my::plugin::MY_ID(g) = true;
	return 0;
}

Although OTAWA may be able to retrieve the plug-in automatically, the program, at startup, needs to resolve the symbol induced by the use of MY_ID and will fail if it has not been linked with my/plugin.so. To achieve this, one only has to pass the plug-in as a parameter to the otawa-config command in the Makefile:

LDFLAGS = $(shell $(OTAWA_CONFIG) --libs -r my/plugin)

The option -r is also important as it records in the executable, on OSes supporting it, the path used to link the plug-in. On OSes that do not support recording dynamic library paths (such as Windows), either the path needs to be added to a specific configuration variable (PATH on Windows), or one has to use a .eld file (described below).

.eld files

In OTAWA, plug-ins often come with an additional file named after the plug-in but with the extension .eld. This file is supported by the ELM generic plug-in system to provide additional information about the linking. As not all OSes have the same dynamic linking features, this file is used to circumvent shortcomings found on some of them.

The .eld file is a text file in INI format. For the example of my/plugin, it could be as minimalist as:

[elm-plugin]
name=my/plugin

But it may contain much more information about the plug-in (one item per line):

  • author – plug-in author name,
  • copyright – license of the plug-in,
  • description – description of the plug-in,
  • path – actual path to the dynamic library (alias),
  • site – website of the plug-in.

It also contains functional items such as deps and libs. libs is used to specify a semicolon-separated list of paths of dynamic libraries used by the plug-in. This is used on some OSes that are not able to retrieve dynamic libraries outside of the directory containing the plug-in.

The deps item is used to specify the list of plug-ins on which the current plug-in is dependent. For example, if my/plugin requires the OTAWA plug-ins otawa/dcache and otawa/trivial, the .eld file content becomes:

[elm-plugin]
name=my/plugin
deps=otawa/dcache;otawa/trivial

The .eld file is also used by otawa-config when a plug-in is compiled: in this case, there is no need to list the dependency plug-ins when otawa-config is called. The LDFLAGS line of the my/plugin Makefile becomes:

LDFLAGS = $(shell $(OTAWA_CONFIG) --libs -r -p plugin)

Statistics System

One service provided by code processors is a simplified connection with the statistics system of OTAWA. This system provides a unified interface to produce and exploit statistics about the performed analyses. It reduces the burden of managing statistics data in the analysis and allows the use of common display facilities such as otawa-stat.py.

For example, the feature ipet::WCET_FEATURE provides as statistics, for each BB, its execution time, its number of executions on the WCET path and its total execution time on the WCET path.

Definition

A statistics item is defined by:

  • a machine string identifier (for internal management),
  • a name readable by humans,
  • a list of keywords (for lookup by a human),
  • a measurement unit name – for instance cycle(s), hit(s), miss(es).

The example below shows the declaration of a statistics collector dedicated to counting the number of misses of an L1 instruction cache:

class L1ICacheMissStats: public StatCollector {
public:
	inline L1ICacheMissStats(WorkSpace *ws): _ws(ws) { }
 
	cstring id(void) const override { return "l1-icache/miss"; }
	void keywords(Vector<cstring>& kws) override {
		kws.add("cache");
		kws.add("instruction");
		kws.add("L1");
		kws.add("miss");
	}
	cstring name(void) const override { return "L1 instruction cache miss count"; }
	cstring unit(void) const override { return "miss(es)"; }
 
	...
 
private:
	WorkSpace *_ws;
};

It is advised to structure the logical identifier of a statistics item in two parts separated by a “/”: on the left, a generic category that the statistics item belongs to (l1-icache in our example) and, on the right, the statistics identifier itself (miss in our example).

Statistics Collection

To provide statistics values, four functions have to be implemented:

  • StatCollector::mergeAgreg(a, b) takes two statistics values and combines them to obtain the cumulated statistics of two pieces of code,
  • StatCollector::mergeContext(a, b) is used to combine statistics values of the same piece of code coming from two different contexts,
  • StatCollector::total() gives the amount of the current statistics over the whole program,
  • StatCollector::collect(c) is used to collect statistics for the whole set of code pieces composing the processed program.

The parameter c of StatCollector::collect() is of type Collector, which represents an interface between a statistics provider (StatCollector) and an application using the statistics. Collector is defined as:

class StatCollector {
public:
	...
	class Collector {
	public:
		virtual void collect(const Address& address, t::uint32 size, int value, const ContextualPath& path) = 0;
	};
	...
};

The StatCollector::collect() function has to call Collector::collect() for each code part that supports statistics. The parameters are:

  • address – base address of the code part,
  • size – size of the code part,
  • value – statistic value in this code part,
  • path – contextual path of this code part producing the statistic value.

This way, the statistics collector can choose the most adapted method to traverse the code parts and extract the statistics values.

Going on with the L1 instruction cache miss statistics example, we get the following implementation:

int L1ICacheMissStats::mergeContext(int v1, int v2) {
	return max(v1, v2);
}
 
int L1ICacheMissStats::mergeAgreg(int v1, int v2) {
	return v1 + v2;
}
 
void L1ICacheMissStats::collect(Collector& collector) {
	for(auto b: COLLECTED_CFG_FEATURE.get(_ws)->blocks())
		if(b->isBasic()) {
			BasicBlock *bb = b->toBasic();
			collector.collect(bb->address(), bb->size(), MISS_COUNT(bb), ContextualPath::null);
		}
}
 
int L1ICacheMissStats::total() {
	int t = 0;
	for(auto b: COLLECTED_CFG_FEATURE.get(_ws)->blocks())
		if(b->isBasic()) {
			BasicBlock *bb = b->toBasic();
			t += MISS_COUNT(bb);
		}
	return t;
}

CFGs and basic blocks are explained later in the CFG section of this document but, to sum up, the program is split into blocks of code, called basic blocks, and a number of misses is determined for each block and stored using the MISS_COUNT property1). The statistics collector just explores each basic block and calls the collect() function of the collector for each of them, without considering any context. In the same way, the total is simply the sum of MISS_COUNT over the whole program.

The aggregation merge is simply the sum: it is used, for instance, to aggregate the statistics of a complete function from the blocks that compose it. Conversely, the context merge is used to aggregate the statistics corresponding to a source line when the statistics values come from different execution configurations of that source line (call chains, unrolled loops, etc.). As adding them together would not be meaningful compared to other source lines, the maximum is taken.

Producing Statistics

Most often, as statistics are the result of an analysis performed on the program, the Processor class provides simple facilities to manage them. First, statistics are not always required during the computation of a WCET: it is the configuration that determines whether statistics are produced. Second, the statistics providers have to be declared to a main statistics database. Finally, the statistics collectors have to be removed when the analysis feature is removed from the workspace.

This complete life cycle of statistics collectors is managed by the Processor class. One just has to override a function named Processor::collectStats() that is invoked when statistics need to be generated. For each statistics collector, this function has to call the function record() to record a new statistics collector, as in the example below (continuing the L1 instruction cache miss example):

class MyL1ICacheAnalysis: public Processor {
public:
	...
protected:
 
	void collectStats(WorkSpace *ws) override {
		record(new L1ICacheMissStats(ws));
		...
	}
	...
};

As the way statistics collectors for CFGs and basic blocks are written is recurring, OTAWA provides BBStatCollector, a helper class that lets the analysis developer focus on the more specific part of statistics collection: the production of the statistics value. As in the new version of L1ICacheMissStats displayed below, one only has to override the function getStat():

class L1ICacheMissStats: public BBStatCollector {
public:
	L1ICacheMissStats(WorkSpace *ws): BBStatCollector(ws) { }
protected:
	int getStat(BasicBlock *bb) override { return MISS_COUNT(bb); }
};

Displaying the Statistics

The statistics can then be produced using standard OTAWA commands such as owcet or operform, and displayed using otawa-stat.py (provided that the corresponding code processor is invoked):

$ owcet -s trivial -D crc.elf
$ otawa-stat.py main l1-icache/miss
1)
MISS_COUNT is only introduced here for the sake of the example and is not provided by OTAWA as is.