Developer info to improve scheduler
The simple scheduler used now (mainloop) will be improved so that the latency of some high-priority actions is reduced. This lets us:
- move some high-priority trigger-data processing to userspace
- move some tables from SRAM to EEPROM (to free space for MMC logging and networking); there are limits on when these can be read (not during an EEPROM write).
- the EEPROM write (if there is demand for one) can only start right after all variables necessary for the fuel VE/lambda/ign calculation have been cached in SRAM (the search_table result and the 2x2 grid of each table).
- Also, the above cached table data cannot be updated during an EEPROM write (a new calculation would nevertheless be possible, with some boundary-check consideration; but it is easier to avoid it, and only recalculate a new injPW from a changed MAP if it is absolutely needed within 8.5 msec; many competition controllers cannot even finish a full calculation within that period)
- when these userspace calculations are allowed (not during an EEPROM write, and at least 4 msec passed since the last one) they can run at medium priority
- implement more convenience features
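To illustrate the caching idea above, one table lookup could be cached roughly like this (a minimal host-side sketch; table_cache_t, cache_interpolate and the 8-bit value layout are illustrative assumptions, not the actual firmware code):

```c
#include <stdint.h>
#include <assert.h>

/* Hypothetical SRAM cache of one table lookup, so the fuel/ign calc can
   re-interpolate from cached data while the table itself lives in EEPROM
   and is unreadable during an EEPROM write. */
typedef struct {
    uint8_t xi, yi;         /* search_table result: cell indices */
    uint8_t frac_x, frac_y; /* position inside the cell, 0..255 */
    uint8_t g[2][2];        /* the 2x2 grid of table values around the point */
} table_cache_t;

/* bilinear interpolation from the cached 2x2 grid; no EEPROM access */
uint8_t cache_interpolate(const table_cache_t *c)
{
    /* interpolate along x on both rows (max 255*256 fits in uint16_t) */
    uint16_t top = c->g[0][0] * (256 - c->frac_x) + c->g[0][1] * c->frac_x;
    uint16_t bot = c->g[1][0] * (256 - c->frac_x) + c->g[1][1] * c->frac_x;
    /* then along y; divide by 256*256 at the end */
    return (uint8_t)(((uint32_t)top * (256 - c->frac_y)
                      + (uint32_t)bot * c->frac_y) >> 16);
}
```

As long as the EEPROM write only starts after such a struct is filled for every table, the periodic recalculation never has to touch EEPROM.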
See the priority ideas on GenBoard/UnderDevelopment/FirmWare (TODO: delete from there)
Even though it is relatively simple, it's a good idea to model it in Java first (see package org.vemsgroup.firmware.scheduler in JTune CVS) to verify operation (and maybe tune some variables).
A similar scheduler is implemented in most real-time operating systems.
See the task-states of [an x86 RTOS].
However we don't need preemptive multitasking; cooperative is fine, so there is no need for a separate stack for each process. When a process returns, its stack is back to normal anyway. Timing-sensitive tasks must be done in an interrupt or a high-priority process.
A nice OS with non-preemptive multitasking running on the ATmega16 (GPL, and compilable with avr-gcc) can be found here: [ethernut.de]
Simple scheduler
Actually it is rather a task runner, since it just executes whatever was added with scheduler_add(). The operation depends solely on the conditions around scheduler_add().
<This was the one we decided to kill. We are now back at the original idea with the 4-queue implementation. Look at scheduler.[c|h] in HEAD. Only main_loop uses this scheduler now, and it just reschedules itself immediately after it has run.>
There are 3 queues that can starve (unless the scheduler_add() conditions are very tricky), while at most 1 queue should be allowed to starve.
scheduler_sleep() puts the AVR to sleep. I think this is very dangerous; it is the easiest thing to get wrong. For battery-powered systems it's worth it, but v3 consumes approx. 100 mA so we cannot save significant power. There is no scheduler_sleep in the new version. Sleeping was mostly for keeping the emulator from running at 100% anyway.
Any task may reschedule itself either by calling scheduler_add itself, or by using the eventqueue to schedule itself sometime in the future. Can I use the existing eventqueue for this?
We only set schedule flags from interrupt/eventqueue when we are there for another reason anyway (e.g. a trigger, or an action).
Otherwise userspace actions should not use interrupts for this. However, if you feel uncomfortable doing many false comparisons (like softelapsed does), a second heap maintained from userspace is perfect. Just like the eventqueue, but a separate heap, and actions from it are only executed when the scheduler thinks it right (not asynchronously):
- not delaying any high priority task
- and no race/locking issues.
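Such a userspace heap could be sketched as follows (a host-side model; uheap_push, uheap_run_due, the 16-bit timestamps and the heap size are all illustrative assumptions, not the real eventqueue interface). Since it is only touched from the scheduler loop, never from interrupt, there are no locking issues:

```c
#include <stdint.h>
#include <assert.h>

#define UHEAP_SIZE 8

typedef struct {
    uint16_t due;           /* timestamp when the action becomes runnable */
    void (*action)(void);
} uevent_t;

/* binary min-heap keyed on due time, maintained purely from userspace */
static uevent_t uheap[UHEAP_SIZE];
static uint8_t uheap_len;

static void uheap_push(uint16_t due, void (*action)(void))
{
    uint8_t i = uheap_len++;
    while (i > 0 && uheap[(i - 1) / 2].due > due) {  /* sift up */
        uheap[i] = uheap[(i - 1) / 2];
        i = (i - 1) / 2;
    }
    uheap[i].due = due;
    uheap[i].action = action;
}

/* run the earliest action if it is due; returns 1 if one ran.
   The wrap-safe signed comparison works across 16-bit timer overflow. */
static int uheap_run_due(uint16_t now)
{
    if (uheap_len == 0 || (int16_t)(now - uheap[0].due) < 0)
        return 0;
    void (*a)(void) = uheap[0].action;
    uevent_t last = uheap[--uheap_len];     /* pop: sift last element down */
    uint8_t i = 0;
    for (;;) {
        uint8_t c = 2 * i + 1;
        if (c >= uheap_len)
            break;
        if (c + 1 < uheap_len && uheap[c + 1].due < uheap[c].due)
            c++;
        if (uheap[c].due >= last.due)
            break;
        uheap[i] = uheap[c];
        i = c;
    }
    if (uheap_len)
        uheap[i] = last;
    a();
    return 1;
}

/* demo action, only for illustration */
static int uheap_fired;
static void uheap_demo(void) { uheap_fired++; }
```

The scheduler would call uheap_run_due() only when no high-priority task is runnable, which gives exactly the two properties listed above.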
Dispatcher actions are also independent. 16 bit is perfect; no need to save clock cycles by using 8-bit values.
Yes, we discussed this on the IRC channel. It's not in the current implementation though.
Non-starving scheduler - actually at most 1 queue (prio3) can be allowed to starve
- if there is anything runnable in prio0 (such as trigger data processing), that must be run. We take care that this work is limited, and does not starve prio1..prio3.
- else if there is nothing in prio1 and prio2, then prio3 can run (e.g. the LCD can be in prio3 with an always-runnable condition). This way prio3 can starve theoretically, but... read below
- if there is something in prio1 or prio2: prio1 and prio2 are different priorities, but prio1 cannot starve prio2:
- run at most 2 consecutive tasks from prio1 (e.g. fuel/ign calcs, comm tx/rx data; wbo2)
- then run at most 1 task from prio2. Note that prio2 always gets its turn after prio1 was checked twice (whether something from prio1 actually did run or not)
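The 2:1 interleaving rule above can be modeled in a few lines (a host-side sketch; pick_queue and PRIO1_BUDGET are made-up names for illustration, not the firmware interface):

```c
#include <stdbool.h>

/* run at most PRIO1_BUDGET consecutive prio1 tasks before prio2 gets a turn */
enum { PRIO1_BUDGET = 2 };
static int prio1_budget = PRIO1_BUDGET;

/* returns which queue gets the next timeslot: 1, 2, or 0 (nothing to run).
   runnable1/runnable2 say whether each queue currently has a runnable task. */
int pick_queue(bool runnable1, bool runnable2)
{
    if (runnable1 && prio1_budget > 0) {
        prio1_budget--;
        return 1;
    }
    prio1_budget = PRIO1_BUDGET;   /* prio2 gets its turn; start counting again */
    if (runnable2)
        return 2;
    if (runnable1) {               /* prio2 empty: prio1 may simply continue */
        prio1_budget--;
        return 1;
    }
    return 0;
}
```

Because the budget is spent on checks rather than on successful runs, prio2 can never wait for more than two prio1 slots, which is the non-starvation property claimed above.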
Runnable conditions
- flagged asynchronously, e.g. from an IRQ or elsewhere (such as trigger data available; or comm data available, or the send buffer almost empty). The number of such events is limited.
- softelapsed: a certain amount of time has passed since the last run. With some tuning, this ensures that there are times when nothing is runnable in prio0..prio2, so prio3 can run. IMHO this is the key to the nice behaviour. If every process just asks "I want to be run" right after it runs, we more or less get the mainloop behaviour back.
- always runnable (only allowed in prio3)
- other condition ???
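A wrap-safe softelapsed check might look like this (a sketch; the struct layout and the 16-bit tick source are assumptions - on the AVR, "now" would come from a timer counter):

```c
#include <stdbool.h>
#include <stdint.h>
#include <assert.h>

/* a task is runnable once a minimum period has passed since it last ran */
typedef struct {
    uint16_t last_run;   /* timestamp of last execution */
    uint16_t period;     /* minimum ticks between runs */
} softelapsed_t;

/* the unsigned subtraction is wrap-safe across 16-bit timer overflow */
static bool softelapsed_runnable(const softelapsed_t *t, uint16_t now)
{
    return (uint16_t)(now - t->last_run) >= t->period;
}
```

After a task runs, the scheduler would store the current tick into last_run, so the comparison is the only per-iteration cost of this condition type.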
If we don't use softelapsed-type runnable conditions, we basically have 2 queues (prio3 is definitely meaningless then):
- prio0 (with just "flagged async conditions")
- and a lower-priority queue. This can be divided as prio1-prio2 above, so that prio1 tasks get more timeslots altogether.
That gives us 3 usable queues.
The implementation would be trivial.
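Something along these lines, perhaps (a host-side sketch with made-up names: task_t, run_one, scheduler_step are not the actual scheduler.[c|h] interface, and the always-runnable demo tasks are only there to make the 2:1 prio1/prio2 interleaving visible - in the real scheduler, always-runnable would be allowed in prio3 only):

```c
#include <stdbool.h>
#include <stddef.h>
#include <assert.h>

#define NTASKS_PER_QUEUE 4

typedef struct {
    bool (*runnable)(void);  /* condition: async flag, softelapsed, ... */
    void (*run)(void);       /* task body; must return quickly (cooperative) */
} task_t;

/* four priority queues, prio0 highest; NULL slots are empty */
static task_t *queue[4][NTASKS_PER_QUEUE];

/* run the first runnable task in queue q; returns true if one ran */
static bool run_one(int q)
{
    for (int i = 0; i < NTASKS_PER_QUEUE; i++) {
        task_t *t = queue[q][i];
        if (t && t->runnable()) {
            t->run();
            return true;
        }
    }
    return false;
}

/* one scheduling decision; the mainloop would call this forever */
void scheduler_step(void)
{
    static int prio1_turns;
    if (run_one(0))            /* prio0 always goes first */
        return;
    if (prio1_turns < 2) {     /* at most 2 consecutive prio1 slots */
        prio1_turns++;
        if (run_one(1))
            return;
    }
    prio1_turns = 0;           /* then prio2 gets its turn */
    if (run_one(2))
        return;
    run_one(3);                /* prio3 only when nothing else is runnable */
}

/* demo tasks, purely for illustration */
static int ran1, ran2;
static bool always(void) { return true; }
static void count1(void) { ran1++; }
static void count2(void) { ran2++; }
static task_t demo1 = { always, count1 };
static task_t demo2 = { always, count2 };
```

With one always-runnable task in prio1 and one in prio2, nine steps give prio1 six slots and prio2 three, i.e. the 2:1 ratio described above.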