Event-Driven Processor Programming
Almost since computers were invented, interrupts have been a common programming method for dealing with real-time tasks. An interrupt causes a processor to stop the running task and execute an interrupt handler instead. The interrupt handler determines the cause of the interrupt and responds to it, whereupon control is restored to the original task. A simple example is an interrupt from a UART (a serial port) signaling that a character has been received; the interrupt handler takes the character from the UART and stores it in a queue in memory for use by the main task.

Programming with interrupts can be complex and challenging, so in this article we outline a programming method called event-driven programming that offers most of the functionality of interrupts, while being easier to reason about.
Interrupts
To show why interrupts are a challenge, consider an Ethernet time-server. This is a device connected to a high-precision clock that supplies a global reference clock using an Ethernet protocol. At the core, this device has four tasks, serviced by three interrupts (Figure 1):
- The background task responds to time requests that come in over Ethernet. It implements, for example, a protocol that estimates the latency between the Ethernet transmitter and receiver.
- On a clock interrupt: when a new clock reading is available, the clock value is incremented by one. Assuming we have a 64-bit clock value, this may involve multiple operations to implement the long addition.
- On an EthernetTX interrupt: check if there is another packet to be sent, and if so, provide it.
- On an EthernetRX interrupt: queue and timestamp the packet.
We can see here that interrupts serve two fundamental purposes to the system programmer: they offer a method to react to external stimuli (Ethernet, clock), and a method to implement semi-concurrent execution of code (between a background task and the other three tasks).
Programming this system is complex. Consider the clock interrupt, which performs an operation along the following lines (assuming a 32-bit architecture where LSW is the least significant word and MSW is the most significant word):
clock.LSW = clock.LSW + 1;
if (clock.LSW == 0)
    clock.MSW = clock.MSW + 1;
The background task may observe a time-stamp at any time by reading the value from clock.LSW and clock.MSW. There are two considerations when implementing this:
- If the clock interrupt happens just between the background task reading the LSW and the MSW, the background task may get an inconsistent view of the time (off by about 4 billion ticks). The chances of this happening are slim (1 in 4 billion), which means that this problem is very unlikely to show up during testing. It can be solved by protecting the read operation, for example by disabling interrupts around the read, as sketched after this list.
- Since the interrupt may happen at any time, the interrupt routine can make few or no assumptions about the state of the registers. It must therefore save all registers that it uses, and restore them afterwards. In addition, it must load any values it needs.
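To make the first point concrete, here is a minimal sketch of the protected read; disableInterrupts() and enableInterrupts() are hypothetical primitives standing in for whatever mechanism the target platform provides to mask the clock interrupt:

    unsigned lsw, msw;
    disableInterrupts();   // hypothetical primitive: mask the clock interrupt
    lsw = clock.LSW;       // the handler cannot run between these two reads,
    msw = clock.MSW;       // so the two words always belong together
    enableInterrupts();    // hypothetical primitive: unmask again

The cost is that the clock interrupt is delayed by a few cycles whenever the background task takes a timestamp.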
These problems are exacerbated by considering other interrupts. For example, the EthernetRX interrupt will also take a timestamp, and both Ethernet interrupts interact with the background task by means of lists of packets that have to be kept consistent (Figure 2). A final problem that must be considered is that interrupts may arrive simultaneously, and the designer has to find a means to express what happens when an interrupt arrives while another interrupt is being serviced.
Events

The basic difference between events and interrupts is that an event can only be taken at dedicated places in the code. The programmer chooses one or more places in the task and declares that those places are appropriate points from which to dispatch an event. Dispatching an event means that, depending on the source (and possibly the type) of the event, a piece of code is executed; on completion, it is the programmer's choice what happens next. In many cases the programmer may decide to dispatch another event.

Typically, when no events are available, the task will pause until an event becomes available. If programmers do not want a task to wait forever, they can ask for an event that executes time-out code after a set amount of time (say, one second or 20 microseconds), as in the sketch below.
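As an illustration, here is a minimal XC sketch of such a time-out. It assumes the 100 MHz reference timer of XMOS devices (so 100,000,000 ticks make one second), and it borrows the clockInput and tick names from the full example later in this article:

    timer t;
    unsigned now;
    t :> now;                // read the current time
    select {
        case clockInput :> tick:
            // an event arrived in time; handle it as usual
            break;
        case t when timerafter(now + 100000000) :> void:
            // one second passed without an event: run the time-out code
            break;
    }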
Using a specific place in the code dedicated to taking an event has a multitude of advantages:
- First, the code can be designed so that the program is in a known and consistent state. That is, it is not in the middle of updating a complex data structure (that could otherwise be modified by the event handler).
- Second, because the program is in a known state, any registers that need saving can be saved before the event is taken, reducing the latency of taking the event compared to an interrupt. Even more important, registers can be allocated across the background task and the event handlers, enabling more efficient use of registers and fewer accesses to memory.
- Third, an event can never slow down the program in unexpected places. An event executes from dedicated places only, and hence the time taken to handle an event is spent at places known a priori, rather than anywhere in the code. This means that estimating worst-case execution time becomes easier and the estimates become tighter.
The consequences of the first and third points are a direct advantage to the system designer; using events rather than interrupts removes the need for complex reasoning and replaces it with straightforward reasoning. This increases productivity, decreases test time, and increases the chances of a program that works in all corner cases. The second point is not a direct advantage to the programmer, but it increases the scope of the underlying system. It can potentially react faster to external events and make more efficient use of registers, resulting in shorter and faster binaries, meaning that more tasks can be slotted into a single device.
Programming with events is not magic. The programmer still has to consider what should happen when events arrive simultaneously. Compared to interrupts, however, the answer is easier to formulate since events are taken at known places.
Programming With Events
To fully appreciate how event-driven programming works, we show code both in a high-level language (XC, in this section) and in assembly language (in the next section).
The XC syntax for the Ethernet time server is shown below:
while (1) {
    select {
        case clockInput :> tick:
            clock.LSW = clock.LSW + 1;
            if (clock.LSW == 0)
                clock.MSW = clock.MSW + 1;
            break;
        case ethernetRX :> newPacket:
            processPacket(newPacket, clock.LSW, clock.MSW);
            break;
        case ethernetTX :> packetDone:
            if (!txQueueEmpty) {
                ethernetTX <: getPacketFromTxQueue();
            } else {
                ethernetTXReady = 1;
            }
            break;
    }
}

The select statement indicates that the program will wait for an event from one or more sources. Each case describes a possible source of an event (in this example there are three possible sources: clockInput, ethernetRX, and ethernetTX), and each case has a body of code that is executed if the event arrives and is dispatched. The first case states that when data is available on clockInput, the code that increments the LSW and MSW of the clock should be executed. The second case states that when a packet has arrived over Ethernet, we wish to process this packet (possibly placing packets in the output queue). The third case takes an event when a packet has just been transmitted over Ethernet, in which case the next packet is taken from the queue and transmitted.
The select statement is enclosed in a while(1) statement, which means that after the first event is dispatched, the program will wait for the second event, and so on, forever.
We can now analyze the behavior of this code. Events are taken only at the select, and hence the code inside each case statement is "safe" in the sense that it cannot be interrupted. For example, if, just after an ethernetRX event, a clockInput event comes in, the code for the Ethernet packet will first be completed, and then the clock input will be dealt with (this may not be desirable; we will get back to this). If the events arrived in the opposite order, the clock input would be processed first, and the Ethernet packet would then be served (with a clock value that is one higher). Since a case is always completely executed, there is no concern that code may be interrupted halfway.
We can calculate the worst-case execution time (WCET) for each case, and use the worst-case timings to analyze whether the program can keep up with the data, and whether any events can ever be missed. For example, the time to deal with the three cases may be 1, 20, and 3 microseconds.
First, the programmer can derive from these numbers that if all three events come in (almost) simultaneously, the last event will be completed after 24 microseconds; this number can be verified against the system requirements to work out whether 24 microseconds is acceptable. It would be difficult to establish this number in an interrupt-driven environment.
Second, the programmer can estimate the throughput of the system. Given the timings of the three cases, the system can deal with at most 1,000,000 clock inputs per second, at most 50,000 EthernetRX requests per second, and at most 333,333 EthernetTX packets per second; or a combination of those numbers. Again, these numbers can be checked against the requirements.
The numbers highlight potential problems. For example, the solution in Figure 3 works fine if the clock is a low-frequency clock, say, 32 kHz. At 32 kHz the program can guarantee to pick up all the clock events, even if three events happen simultaneously. However, if the clock rate is, say, 100 kHz, then two or more clock pulses may arrive while the program deals with a single ethernetRX event. In this case, the program will not meet its requirements.
The way around this is to modify the program so that it operates on the received packets in a separate, concurrent task. There are two methods to achieve this. One is to still receive the packets in the same select statement, and pass the processing of the packet to a separate task; the other method is to receive the packet in a separate task but then to communicate the timestamp between the threads. The two execution traces are shown in Figure 4. Note that rather than using an interrupt to implement pseudo-concurrency, a multi-core or multi-threaded processor can be used to get true concurrency.
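A minimal XC sketch of the first method follows. It assumes clockInput, ethernetRX, clock, and processPacket are declared as in the earlier listing, represents a packet as a plain unsigned buffer handle (an assumption; the article leaves the type unspecified), and uses hypothetical task names eventLoop and packetWorker. The select loop merely hands each packet, together with its timestamp, to a second task over a channel:

    void eventLoop(chanend toWorker) {
        unsigned tick, newPacket;
        while (1) {
            select {
                case clockInput :> tick:
                    clock.LSW = clock.LSW + 1;
                    if (clock.LSW == 0)
                        clock.MSW = clock.MSW + 1;
                    break;
                case ethernetRX :> newPacket:
                    // Cheap hand-off: forward the packet and its
                    // timestamp, then return to waiting immediately.
                    toWorker <: newPacket;
                    toWorker <: clock.LSW;
                    toWorker <: clock.MSW;
                    break;
            }
        }
    }

    void packetWorker(chanend fromLoop) {
        unsigned p, lsw, msw;
        while (1) {
            fromLoop :> p;               // block until the loop hands us work
            fromLoop :> lsw;
            fromLoop :> msw;
            processPacket(p, lsw, msw);  // the slow (20 us) work runs here
        }
    }

    int main(void) {
        chan c;
        par {
            eventLoop(c);
            packetWorker(c);
        }
        return 0;
    }

Because the hand-off costs only a few channel operations, the event loop is back at its select well before the next clock tick, even at 100 kHz.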
It is worth noting that when threads are waiting for an event, for example the next clock input or EthernetRX event, they are idle and do not do anything. This is not really a waste of resources. There must be enough slack in the system to cope with the worst case, which means that in a typical case the thread will have to be idle and wait. Indeed, separate threads can be used to improve latency further, for example by dedicating a thread to a single event source, which may be appropriate where an extremely low-latency response is required. This enables hardware interfaces to be implemented in software, as response times are reduced to a few instruction cycles.
Extra threads can also be used for a different purpose, specifically to increase throughput. Figure 4 shows a separate thread to implement the receiver processing, but it is still limited to the same throughput. A third thread could be deployed, enabling software to serve two requests simultaneously and increasing the bandwidth of the system.
Assembly Level Events
To show the workings of events, it is useful to look at how they can be described at assembly level. As with interrupts, each event source has an associated vector that points at the code that handles the event. To create an efficient event-driven instruction set, three basic instructions are required: enable specific event sources, wait for any enabled event, and disable one or all event sources. An example sequence that handles the three event sources above is shown below:
SETV ethRX, ethRX_vector
SETV ethTX, ethTX_vector
SETV clock, clock_vector
EEU ethRX
EEU ethTX
EEU clock
WAITEU
clock_vector:
    ...
    WAITEU
ethRX_vector:
    ...
    WAITEU
ethTX_vector:
    ...
    WAITEU
The first three instructions set up the event vectors — this needs to be done only once. The second three instructions enable all three event sources. After that, the WAITEU instruction waits for an event, and on arrival, dispatches the event by jumping to one of the three vectors. Each event handler ends with a WAITEU instruction, which will dispatch the next event (jumping to the next handler).
Because there is a known control flow that passes from the while to the select to any of the case statements, the compiler can perform full register allocation across all cases, making best use of the available registers and avoiding the need for pushing and popping registers on the stack. If interrupt routines are to be written with the same efficiency, this must be done by hand, since only the programmer knows where interrupts may be taken.
On inspecting the assembly code, the final advantage of event-driven programming becomes apparent. Event-driven programming provides a clean mechanism to stop the core when idle: any time a WAITEU instruction finds no pending events, the thread will not activate until an event arrives. When all threads are waiting in a WAITEU, the processor can enter a "sleep" mode until an external event becomes available. This gives a clean architectural interface to different levels of sleep mode.
This article was written by Henk Muller, Principal Technologist, XMOS (Bristol, UK).