Incarnations of Modularity
November 05, 2019 - lutz
On more occasions than I'm comfortable with I've heard that software should be modular. To be fair: I am one of the advocates of the claim that modularity is worth striving for. But what is actually meant by that? There is no universally agreed-upon definition of modularity.
In this post I’d like to write about how “modularity” was implemented and has changed over the course of projects I was involved in. Since “modularity” is so ambiguous this post is highly subjective.
The first really big™ piece of software I contributed significant amounts to is the FUmanoids software. The FUmanoids are the humanoid, football-playing robots of the Freie Universität Berlin (where I studied). If you have not heard about humanoid robot football, I pity you for your lack of knowledge; but I also envy you for the moments to come when you realize how awesome it is to live in a world where this sport exists! Robot football! Fuck Yeah!!! Unfortunately there is no electrical engineering (or engineering in general) faculty at my university, so the project members came almost entirely from a computer science background. The rules of our robot football league dictate the general layout of the robots, and they encourage custom-designed robots (there is actually a league where all teams compete with identical robots; we were not part of that). So we also developed custom hardware! That part was mostly learning by doing. Go and have a look at what we developed, but please keep in mind that we didn't have any training. What I'm trying to convey is that the FUmanoids team consisted of students. Most of the time we tried to tackle problems that were far above our level of expertise, and frankly, there was rather limited engineering supervision and guidance from the university.
However, when it comes to software we actually had training, or more accurately: we were in the middle of being trained, we were students after all. This made the FUmanoids software an interesting playground to try things out. There were some constantly recurring challenges though:
Students don’t tend to be around for long. Eventually they graduate and are released into the wilderness of the non-academic world. For the project this meant that students needed to get into the development cycle as quickly as possible. Also, the faster students could try stuff out, the faster they gained knowledge. “Trying things out” and “playing with it” are IMHO the absolute best ways to gain knowledge in any engineering discipline. But the more things are tried out, the more failed trials your project accumulates. Therefore, another necessity emerged: removing stuff has to be as easy as adding stuff. The latter is a quality trait in software that is highly underrated. In a nutshell: our development cycle got better whenever the hurdle to add code, as well as to remove code, was lowered.
Modularity
As stated above, there is no universal definition of modularity (in software) that I am aware of that all developers agree upon. However, technically speaking “modular” simply means that something is composed of “modules”. “Modularity” could just as well be understood as a set of guidelines for structuring a project. A module can be anything though: a source file, a set of source files grouped into directories, or some abstract concept within a software. Regardless of how you look at software, you’ll see that any software is modular in its own way. Developers like to organize logic, and with that some sort of modularity is implemented as a byproduct. Also, a project does not necessarily have a single concept of modularity; modularity can also be achieved as a hierarchy of rules.
Summarized: encouraging modularity in software actually means encouraging separation of logic. By dividing software into pieces (modules) following certain rules, you only need to tell a future reader those rules and thereby help her or him greatly to understand the code.
Another aspect of modularity is that it enables modules to be replaced or altered. The easier the process of replacement, the faster a software can evolve.
Some History of the FUmanoids
The FUmanoids were founded in 2006 and were (unfortunately rather forcefully) discontinued in 2017. Our robots were humanoid (two legs, two arms, one head) and had to move in a humanlike fashion (i.e., walk). Only sensors with an equivalent in humans were allowed (no GPS, radar, lidar, ultrasonic), so they had to rely on cameras as their primary input.
The Beginnings
2008 was one year after the first iPhone was released. That was way before people even thought about devices like Raspberry Pis! The FUmanoids had to work with hardware they could get their hands on and get students working with (no exotic DSPs or the like). Back in the day an AVR processor had to do. The founder and head of the team had plenty of experience with ATmega controllers: how to program them and how to build hardware that integrates them. So it was decided that an ATmega128 would do. If you know that platform then you are aware of the very limited resources of the ATmega128, but since there was experience at hand, the benefit of getting shit done™ outweighed the lack of RAM (4kB), ROM (128kB), clock speed (16MHz) and performance in arithmetic involving numbers greater than 65535 (yes, that 8-bit processor was able to natively handle 16-bit numbers! It just couldn’t handle a lot of them). In that phase of the project the software had to fit into that processor and handle image processing and movement creation. RAM was actually so scarce that certain RAM areas (buffers) were reused by multiple concurrently running pieces of the application. However, it worked. Unfortunately the code that was running then is lost to me, so all I know about it is hearsay. From what I’ve heard, the structure of the project was straightforward and thus easy to follow, even though some parts of the code were rather messy. The point is: it worked, and it enabled new students to join and contribute to the project quickly.
Modularity back then was mostly implemented on the execution level. The application was split into five threads (vision, motion, behavior, role, strategy), so the boundaries between modules were defined on a functional level. By that I mean: what was run independently belonged to an independent module. So if you were working on the behavior aspect of the code, you knew which source files to work on. The same applied to the other features of the software.
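The original AVR code is lost to me, so here is a deliberately anachronistic sketch in modern C++ of what execution-level modularity boils down to. The five names are taken from the thread list above; everything else is made up for illustration:

#include <thread>

// hypothetical stand-ins for the five aspects of the software;
// in this scheme each function "owns" its source files and runs independently
void visionLoop()   { /* grab and process camera images */ }
void motionLoop()   { /* generate joint movements */ }
void behaviorLoop() { /* decide what to do next */ }
void roleLoop()     { /* pick the current role */ }
void strategyLoop() { /* coordinate with teammates */ }

int main() {
    // the module boundary is the execution unit: one thread per aspect
    std::thread vision(visionLoop);
    std::thread motion(motionLoop);
    std::thread behavior(behaviorLoop);
    std::thread role(roleLoop);
    std::thread strategy(strategyLoop);

    vision.join();
    motion.join();
    behavior.join();
    role.join();
    strategy.join();
}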
The First Computer
Not too much time later, the first single-board computers emerged and were happily welcomed into our robots’ heads. In 2009 one of the first “kinda easily” available embeddable computers, a gumstix verdex pro (32 bits, whoopwhoop), replaced the tiny AVR. Suddenly a whole Linux system and a whopping 128MB of RAM and 32MB of flash were available! With the new platform the FUmanoids software evolved from something focused on a low RAM footprint into software as you’d expect it today. However, even that transition was not entirely painless, as a very brave student set out to first port the old AVR code to the new platform.
The code that remained wasn’t that complex; there was a main() where a Robot object was instantiated, which in turn had a representation of the body, a communication interface and image processing. There was a single implementation for each of those representations. However, some parts of the software were actually built like interchangeable blocks for certain tasks: there were specific roles for each situation, all implemented as subclasses sharing a common interface. The same applied to strategies as well as motions. Apart from those interface classes, the overall structure still followed the thread scheme from the previous section.
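A minimal sketch of what that structure might have looked like. The class names (Body, Communication, Vision, Striker, Goalie) are my guesses for illustration; the actual code from that era is lost to me:

// single, concrete implementations of the robot's representations
struct Body {};            // representation of the physical body
struct Communication {};   // team communication
struct Vision {};          // image processing

// the interchangeable parts: roles shared a common interface...
struct Role {
    virtual ~Role() {}
    virtual void act() = 0;  // what to do in the current situation
};
// ...and were implemented as subclasses, one per situation
struct Striker : Role { void act() { /* chase the ball */ } };
struct Goalie  : Role { void act() { /* guard the own goal */ } };

struct Robot {
    Body body;
    Communication comm;
    Vision vision;
    Role* role;              // strategies and motions followed the same scheme
};

int main() {
    Striker striker;
    Robot robot;
    robot.role = &striker;
    robot.role->act();
}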
From my current point of knowledge I’d say that the level of abstraction and modularity was actually a very good fit for the amount of code at the time. There were no enforced abstract classes where none were necessary, and the code was still straightforward and easy to understand.
Sudden Capacity
In the course of the next two years the embedded computer market started to boom. The variety of hardware exploded; as did our software. Hardware constraints were loosened a lot: suddenly there was almost infinite memory at hand! While the processing platform was changed twice, the amount of code went from 6k LOC to ca. 40k, where a major portion of the code was never run at all. Students came and students parted, but their code remained as if it were integral. The project that used to be easy to understand now consisted of a multitude of singletons, one for each aspect of the software. I.e., there was a singleton for the robot, the vision, the motion, the world model, the debugger, the feet(!), for every aspect of communication and so on. In other words, at this stage of the software modularity was achieved through rigorous usage of singletons that could be (and were) accessed from all over the code. Interestingly enough, there was actually very little abstraction (as in abstract base classes) happening. Note: IMHO singletons1 are not a bad design pattern, as long as the pattern is not overused or abused. For objects that can exist at most once within a piece of software and need to be accessed from disjoint locations, singletons are exactly the right design pattern.
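For reference, the singleton flavor in question (see the footnote) is as simple as this. WorldModel is just one example name from the list above; the members and the accessing function are made up:

// a Meyers singleton: the single instance is created on first use
struct WorldModel {
    static WorldModel& getInstance() {
        static WorldModel instance;  // constructed once; thread-safe only since C++11
        return instance;
    }
    // ... ball position, obstacle map, etc.
private:
    WorldModel() {}                 // nobody else can construct one
    WorldModel(WorldModel const&);  // not copyable (pre-C++11 style: declared, never defined)
};

// the convenience, and the danger: any translation unit can do this,
// which is how hidden dependencies between "modules" accumulated
void someRandomFunction() {
    WorldModel& world = WorldModel::getInstance();
    (void)world;  // ...and use it from anywhere
}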
I know first-hand how hard it was to work through the code, locate features and start to contribute. That was the time when I joined the team.
The BlackBoard Framework
In 2011 a collaboration between the NaoTH (the robot football team of the Humboldt-University Berlin) and the FUmanoids emerged. Before that we were good friends, but from then on we shared code. The NaoTH compete in the Standard Platform League, where all robots are identical. They came up with a framework that could enable software exchange on the source code level, even across teams competing in different leagues.
The idea was to have a “blackboard”, a data storage for arbitrary types, from which modules could fetch data and into which they could put data. This framework introduced a concrete concept, in terms of source code, of what a module represents and how it is incarnated in code: Module was a base class, and instances of derived classes were run in a specific order. Somewhere there was a list of the names of modules to instantiate for the cognition as well as for the motion aspect of the robot. After instantiation the modules could be inspected to determine their inputs and outputs and therefore their execution order.
In practice this looked as follows: a module accesses the data on the blackboard by requiring or providing data. With that, no module would call any method on another module; modules communicate via data only. That way of modularization enabled almost entirely disjoint source code. At this point students could come, develop a module of their own and, without prior knowledge of the overall structure of the software, have their own code executed.
Here is a simplified example of what my_module.cpp looked like (yes, it was pre-C++11):
// this macro generates something like "struct MyModuleBase : Module {"
BEGIN_DECLARE_MODULE(MyModule)
    // this generates a read-only accessor to an input: InputType const& getInputType() const;
    REQUIRE(InputType)
    // this generates an accessor to an output: OutputType& getOutputType();
    PROVIDE(OutputType)
// this generates the end-of-definition brackets "};"
END_DECLARE_MODULE(MyModule)

struct MyModule : MyModuleBase {
    virtual void execute() {
        InputType const& input = getInputType();
        OutputType& output = getOutputType();
        // do some processing
    }
};

// emit a global factory object that can produce instances of MyModule
// all those factory objects are known to a central collection of factories by their name ("MyModule" in this case)
REGISTER_MODULE(MyModule)
The execute() method of a module is called after all modules that provide something required by the current module have been executed. The first module to be executed does not require anything.
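Deriving that execution order is not magic: it boils down to a topological sort over the require/provide sets. The framework's actual code looked different, but conceptually it is something like this sketch:

#include <set>
#include <string>
#include <vector>

// a module's name plus the blackboard types it reads and writes
struct ModuleInfo {
    std::string name;
    std::set<std::string> inputs;   // what the module requires
    std::set<std::string> outputs;  // what the module provides
};

// order the modules so that every input is provided by an earlier module
std::vector<ModuleInfo> orderModules(std::vector<ModuleInfo> modules) {
    std::vector<ModuleInfo> ordered;
    std::set<std::string> available;  // everything provided so far
    while (!modules.empty()) {
        bool progress = false;
        for (auto it = modules.begin(); it != modules.end(); ++it) {
            bool runnable = true;
            for (auto const& input : it->inputs) {
                if (available.count(input) == 0) { runnable = false; break; }
            }
            if (runnable) {
                available.insert(it->outputs.begin(), it->outputs.end());
                ordered.push_back(*it);
                modules.erase(it);
                progress = true;
                break;
            }
        }
        if (!progress) break;  // unsatisfiable or circular requirements
    }
    return ordered;
}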
This framework helped a lot to clean up the code, to build tools to visualize dependencies between modules, to quickly integrate students into the workflow and so on. It also enabled us to easily remove code without breaking the whole software! Further, it defined a workflow for how to add features and a set of rules for the expected granularity of modules. It was, however, purely focused on dataflow: modules were synonymous with stages inside the data processing pipeline. But because of the nature of the FUmanoids, focusing on the data processing pipeline was exactly the right tool at the right time for the project.
Modern C++
Years passed in which the code flourished again. Eventually though, a student set out to replace the blackboard framework with something that works very much the same way but utilizes modern C++. With this rewrite we got rid of the macros and simplified a lot of code. Until 2016 the code grew to around 75k LOC while remaining easy to navigate. Modules were named by what they did, and as there was virtually no performance penalty in splitting a single module into multiple ones, some modules actually did very little.
Now a module’s code looked like this:
struct MyModule : moduleChain::Module {
    moduleChain::Require<InputType> input {"NamedInput"};
    moduleChain::Provide<OutputType> output {"NamedOutput"};

    void execute() override {
        InputType const& in = *input;
        OutputType& out = *output;
        // do some processing
    }
};
REGISTER_MODULE(MyModule)
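I can only sketch how such Require/Provide handles might be built; the actual moduleChain implementation differed (in particular, the real handles only take a name because the module registers itself with the blackboard), but the core idea is a pointer-like wrapper around a named slot:

#include <map>
#include <memory>
#include <string>

// a heavily reduced blackboard: named, type-erased storage slots
// (a sketch under my assumptions, not the actual moduleChain code)
struct Blackboard {
    std::map<std::string, std::shared_ptr<void>> slots;

    template <typename T>
    T& get(std::string const& name) {
        auto& slot = slots[name];
        if (!slot) {
            slot = std::make_shared<T>();  // create the datum on first access
        }
        return *std::static_pointer_cast<T>(slot);
    }
};

// pointer-like handle; a Provide<T> would look the same but additionally
// mark the slot as an output so the execution order can be derived
template <typename T>
struct Require {
    Blackboard* board;
    std::string name;
    T const& operator*() const { return board->get<T>(name); }
};

// usage would then look roughly like:
//   Blackboard board;
//   Require<InputType> input{&board, "NamedInput"};
//   InputType const& in = *input;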
Meng Meng (TNGL)
Eventually I left the FUmanoids, but I have continued to work on projects that were spiritually related to the FUmanoids code. What I learned from the FUmanoids was that data processing in a pipelined fashion works very well and benefits teams a lot. But some pieces of data don’t quite fit the concept of a pipeline. There were certain data types that were pushed through the pipeline but didn’t belong there, as they were not (meaningfully) mutable. E.g., the representation of the robot was passed through the pipeline even though at no stage the robot was actually produced or consumed; it was never even altered. At the same time, the representation of a robot had better not be implemented as a singleton, as that would impose limits on the software which are hard to remove after the fact. I.e., when implemented as a singleton there could only be one robot in the software at a time. I wanted a system that offers a flexibility similar to the pipeline, where modules are instantiated and hooked together at runtime, but where the modules don’t have to live within a pipeline. I also wanted that system to be inspectable like the pipeline was (we were able to generate really pretty data flow diagrams of the pipeline).
Meng Meng is not only the name of a panda in Berlin’s zoo but also the codename of my personal robot control software, the successor to the FUmanoids software.
The aspect within Meng Meng that provides modularization is called TNGL:
A TNGL (meaning: tangle; pronunciation: tingle) is a bundle of nodes (modules) that have links to other nodes within the same TNGL. A node implements an interface by whatever means applicable (virtual or normal inheritance), thus exposing certain methods to other nodes. Links between nodes are implemented as members of nodes; their inner workings are very much the same as pointers. Information about the relationship to a link’s target can also be expressed: when node A cannot work without having access to some B (it requires the existence of a B), then the TNGL will try to fulfill this necessity. If it cannot do so, then A will be torn down and not be part of the TNGL. A could also indicate that a link to a B is nice to have, but that A shall not be deleted if the link cannot be fulfilled. There are some more relationships that can be expressed; they are all rather straightforward.
However, TNGL does not enforce any requirements on nodes apart from using links. It comes with a caveat worth mentioning though: TNGL allows for circular relationships between nodes. To achieve this, the initialization process of a TNGL requires the nodes to be created first and then linked together. That means nodes need to have a method that is invoked after all nodes are hooked together. In other words: TNGL enforces a two-stage construction. It also enforces a two-stage destruction, where a method on all nodes is called prior to the destructors of all nodes.
Anyway, here is some code to demonstrate what a tngl::Node looks like:
// my_interface.h
struct MyInterface {
    virtual void foo() = 0;
    virtual ~MyInterface() = default;
};

// A.cpp
#include "my_interface.h"
namespace {
struct A : tngl::Node, MyInterface {
    A() {} // throw here to indicate this node cannot be constructed for some reason
    void foo() override {} // the functionality exposed to other nodes via MyInterface
    void initializeNode() override {} // called after all nodes are hooked together; implementation can be omitted
    void deinitializeNode() override {} // called before nodes are destroyed; implementation can be omitted
};
// a factory to produce A nodes. This factory is known to a singleton holding all application wide factories
auto builder = tngl::NodeBuilder<A>("A");
}

// B.cpp
#include "my_interface.h"
namespace {
struct B : tngl::Node {
    tngl::Link<MyInterface> interface{this, tngl::Flags::CreateRequired};
    void initializeNode() override {
        interface->foo();
    }
};
auto builder = tngl::NodeBuilder<B>("B");
}

// main.cpp
int main(int argc, char** argv) {
    B seed_node;
    tngl::TNGL tngl{seed_node}; // build the tangle starting from the seed node
    tngl.initialize();   // call initializeNode on all nodes
    tngl.deinitialize(); // call deinitializeNode on all nodes
    return 0;
}
As you can see, the code overhead to implement nodes is very limited. Also, all relationships between nodes can be inspected at runtime and pretty graphs can be generated.
When using TNGL it becomes pretty easy to add new features, as the dependencies of those features can be expressed as interfaces. Also, nodes are very testable, as you only need to implement tests against an interface; the rest (instantiation of the test node and the testee node and the actual testing) can be automated.
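To make the testing claim a bit more concrete, here is a hypothetical sketch reusing the names from the example above (the test harness details are my assumption): a test for B only needs some node implementing MyInterface.

// test_b.cpp: hypothetical sketch, not actual TNGL test code
#include "my_interface.h"

namespace {
// a fake standing in for whichever node normally provides MyInterface
struct FakeInterface : tngl::Node, MyInterface {
    int fooCalls = 0;
    void foo() override { ++fooCalls; }  // record the call instead of doing real work
};
auto fakeBuilder = tngl::NodeBuilder<FakeInterface>("FakeInterface");
}

// the test itself seeds a tangle with B; the TNGL links B's
// tngl::Link<MyInterface> to the fake, and after initialize() the test
// can assert that fooCalls is exactly 1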
What is now TNGL has been evolving over several years. For my current bigger projects it helps me a lot. Especially the abstraction of aspects of the software via interfaces helps me focus on one task at a time and get back into the software quickly when I have not touched it for some time. I also fancy the inspection features of TNGL: e.g., dependency graphs can be created automatically, as well as information about which nodes are available, which nodes are instantiated and which nodes are not instantiated and why. It also enables me to implement other aspects of modularization, like pipelining, load balancing and many more, by means of nodes. E.g., pipelining is realized by a node that has links to all other nodes implementing the interface of a pipeline module, hooking them together analogously to the pipeline of the previous section (see the sketch below).
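As an illustration of that last point: a pipeline can be built from a node that links to everything implementing a pipeline-stage interface and runs those stages in order. The link-to-many mechanism and all names below are my assumptions for this sketch, not necessarily TNGL's actual API:

// hypothetical sketch of a pipeline on top of TNGL
struct PipelineStage {
    virtual ~PipelineStage() = default;
    virtual void execute() = 0;  // one step of the data processing pipeline
};

namespace {
struct PipelineRunner : tngl::Node {
    // assumed "link to all nodes implementing PipelineStage"; the real name of
    // this facility and the ordering by require/provide are omitted here
    tngl::Links<PipelineStage> stages{this};

    void runOnce() {
        for (auto* stage : stages) {
            stage->execute();
        }
    }
};
auto builder = tngl::NodeBuilder<PipelineRunner>("PipelineRunner");
}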
Takeaway
In this post I’ve omitted the concepts of modularity from the closed source projects I’ve worked on as well. Frankly, I didn’t encounter any meaningful or different concept of modules there.
When modularization was enforced in code, it boiled down to separation of logic by imposing design rules. As a rule of thumb: whatever the choice of modularity, it has to be easy to define. If a concept of modularity cannot be described easily, it is very likely to do more harm than good to a project.
Singletons
I’d strongly advise against baking modules into singletons: I’ve seen it help code grow wild, and I’ve also seen people cleaning that up. It renders software very unscalable and makes people very unhappy.
Execution Units / Threads
From my experience, enforcing execution-driven modularization is a very fragile approach. It can be a perfectly adequate choice for smaller code bases, but when the project grows it becomes hard to force functionality to remain within the boundaries of a single execution unit. Especially the exchange of data and the synchronization between execution units can become tricky, because knowing how and what data is exchanged requires knowledge of the whole software.
Data Driven Tasks with Dependencies
If your software happens to revolve around complex data processing tasks, then this is a very nice concept of modularization. Adding, removing and trying out code comes with a very low overhead. Because the overall data processing can be split into comprehensible parts, knowledge of the entire software is not necessary to contribute. However, this is only applicable to data processing situations that can be streamlined. It also comes with the need to settle on a framework that might be hard to migrate away from.
Interface Driven Modularity
For any reasonably complex software I prefer this approach over all others. It imposes the fewest assumptions on your project and (in the case of TNGL) comes with neat tools to inspect and verify the software on multiple levels. It also plays nicely with other aspects of modularization, as some nodes might use threads or some nodes might group into something that works like a pipeline.
1. Meyers singletons, of course.