RFC: Partition-based model of the DPS

Urwumpe

After getting a bit up to date with the current state of the DPS implementation, I have an idea for how to move the SimpleGPC concept we currently have closer to the user interface and use cases of the Space Shuttle:

  1. GPCs are reduced to pure execution and I/O resources
  2. The central class of the DPS is the partition
  3. The partition is a memory configuration shared by a redundant set of one or more GPCs.
  4. All software in the partition is executed once per timestep, with no synchronization, as we do now in SimpleGPC.
  5. I/O is delegated to the MDM/Shuttle bus system
  6. A partition can be reconfigured; GPCs can enter or leave a partition.
    1. For example, a freeze-dried GNC GPC with the right memory configuration.
  7. We assume that all GPCs in a partition have the same memory contents.
  8. A partition without running GPCs is not executed.
  9. Every memory configuration has a major function handler that initializes process tables, etc.
  10. Every OPS has its own process handling MM transitions and ITEMs
  11. The OPS handler activates or deactivates processes.
  12. Every SPEC has its own process handling ITEMs
  13. All processes/software of a partition are derived from special superclasses defining their general behavior (see the sketch below this list):
    1. OPS/SPEC processes are plain event handlers reacting to keyboard inputs and initialization signals
    2. SOPs and RMs are executed cyclically, 12.5 or 6.25 times per second, and have automatically handled I/O - this class could also be used for autopilot background processes.
    3. Sequencers are special event handlers reacting to software events (equivalent to HAL/S events, like the "ON ... GOTO ..." in BASIC)
The main idea is to get closer to the checklists again, to make it easier to add new software functions, and to have simpler means to move one function from one major mode to another.
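To make item 13 a bit more concrete, here is a minimal C++ sketch of what the three process superclasses could look like. All names (DPSProcess, EventProcess, CyclicProcess, Sequencer) are placeholders invented for this RFC, not existing SSU classes:

Code:
#include <string>
#include <utility>

// Common base for all software running inside a partition (sketch only).
class DPSProcess {
public:
    explicit DPSProcess(std::string name) : name(std::move(name)) {}
    virtual ~DPSProcess() = default;
    virtual void OnActivate() {}    // called by the OPS handler (item 11)
    virtual void OnDeactivate() {}
    const std::string name;
};

// 13.1: OPS/SPEC processes are plain event handlers for keyboard input.
class EventProcess : public DPSProcess {
public:
    using DPSProcess::DPSProcess;
    virtual bool OnItem(int item) = 0;    // ITEM entry on this OPS/SPEC
};

// 13.2: SOPs and RMs run cyclically at 12.5 or 6.25 Hz with automatic I/O.
class CyclicProcess : public DPSProcess {
public:
    CyclicProcess(std::string name, double rateHz)
        : DPSProcess(std::move(name)), rateHz(rateHz) {}
    virtual void OnCycle(double simt) = 0;    // executed rateHz times per second
    const double rateHz;
};

// 13.3: sequencers react to software events, like HAL/S SIGNAL / ON ERROR.
class Sequencer : public DPSProcess {
public:
    using DPSProcess::DPSProcess;
    virtual void OnEvent(int eventID) = 0;
};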

Ideally, you should not need to know the whole SSU software to implement a new DPS process, but only need to know (see the example after this list):
  • Which Major Function?
  • Which Major Modes?
  • Which class of process?
  • Which I/O operations are needed?
  • Where can the I/O be accessed in GPC memory?
  • What should the process do?
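Building on the class sketch above, a new process would then boil down to answering exactly those questions in one small class. This is only an illustration with invented names (RHCSOP, the COMPOOL reading), not a real interface:

Code:
// Hypothetical new SOP; everything here is a sketch, not existing SSU code.
class RHCSOP : public CyclicProcess {
public:
    RHCSOP() : CyclicProcess("RHC SOP", 12.5) {}    // class of process + rate

    void OnActivate() override {
        // Which Major Function / Major Modes? -> declared at activation,
        // e.g. GNC, OPS 2 (details depend on the final registration API)
    }

    void OnCycle(double /*simt*/) override {
        // Which I/O is needed, and where in GPC memory?
        // -> read inputs from a COMPOOL filled by the MDM/bus system
        // What should the process do?
        // -> compute, then write the outputs back to the COMPOOL
    }
};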


Soo... that's what has been on my mind over the weekend from playing with SSU, quickly written down during the lunch break - what do you think? Good? Bad? Too complex?
 

N_Molson

The "Ultra" in SSU seems to call for "never too complex", a properly emulated GPC would sure be interesting :yes:
 

Urwumpe

The "Ultra" in SSU seems to call for "never too complex", a properly emulated GPC would sure be interesting :yes:

Well, I have researched a lot about emulating an AP-101/S, but the problem is that we would then have to write the software ourselves in AP-101/S object code for the next 25-50 years to come.

It would be fun to see one emulated, but it would not be useful in the short term.

Thus the idea to move the focus to the software architecture and not the hardware it runs on. Then we can use C++ as the sole language in SSU and only use HAL/S-FCOS-PASS concepts and patterns in the source code to simplify development for the Shuttle.
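As an example of what I mean: a HAL/S SCHEDULE statement could keep its vocabulary when mapped to C++, so the PASS structure stays recognizable in the source. The Scheduler class and its interface are invented for this sketch:

Code:
#include <functional>
#include <string>
#include <vector>

// HAL/S:  SCHEDULE GUIDANCE PRIORITY(20), REPEAT EVERY 0.08;
// A C++ counterpart could keep the same vocabulary (assumed interface):
struct ScheduledTask {
    std::string name;
    int priority;
    double intervalSec;
    std::function<void()> body;
};

class Scheduler {
public:
    void Schedule(std::string name, int priority, double intervalSec,
                  std::function<void()> body) {
        tasks.push_back({std::move(name), priority, intervalSec, std::move(body)});
    }
private:
    std::vector<ScheduledTask> tasks;   // executed by the partition per timestep
};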
 

SiameseCat

Urwumpe said:
After getting a bit up to date with the current state of the DPS implementation, I have an idea for how to move the SimpleGPC concept we currently have closer to the user interface and use cases of the Space Shuttle: [...]
I think the basic outline is good.

Are we going to have multiple GPCs performing the same computations? I think this would slow down the simulation for no real benefit. My thinking is that the partition would be fairly similar to the SimpleGPCSystem class we have now, and would be responsible for running all the currently active processes. Shared memory between processes would be very helpful, since we currently don't have a good way of communicating between processes.
 

Urwumpe

SiameseCat said:
Are we going to have multiple GPCs performing the same computations?

No, that is exactly what I want to avoid with this design. The computations and I/O are done only once, but which I/O ports of which GPC are used for commanding a bus (and thus the effect of failures there) is decided in one of the GPC objects.
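Roughly like this - a sketch with assumed names, just to show the split between computing once and commanding through one GPC:

Code:
#include <vector>

struct GPC {
    bool running = true;
    void TransmitOnBus(int busID, int cmdWord) { /* drive the MDM/bus model */ }
};

class Partition {
public:
    std::vector<GPC*> members;      // redundant set sharing one memory config
    GPC* busCommander = nullptr;    // which member commands a given bus

    // The software runs once per timestep; only the assigned GPC issues the
    // I/O, so a failure hits the commanding GPC, not a repeated computation.
    void CommandBus(int busID, int cmdWord) {
        if (busCommander && busCommander->running)
            busCommander->TransmitOnBus(busID, cmdWord);
    }
};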

SiameseCat said:
Shared memory between processes would be very helpful, since we currently don't have a good way of communicating between processes.

Yes, that will be done by adding the COMPOOL (common pool) concept of the HAL/S system. Every process can add a reference to a COMPOOL in its definition and access the variables defined there, or use a number of words of the COMPOOL as the target for DMA operations by the IOP. I will add some more reference material as soon as I have drawn it.
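Until the reference material is ready, here is a rough idea of what a COMPOOL could look like in C++ - names and layout are pure assumptions:

Code:
#include <cstddef>
#include <cstdint>

// Sketch of a COMPOOL: named variables shared between processes, plus words
// the IOP can use as a DMA target. All names are made up for this example.
struct GNCCompool {
    double rhcPitch = 0.0;             // variables accessed by name by processes
    double rhcRoll  = 0.0;
    std::uint16_t dapInput[16] = {};   // words usable as an IOP DMA target

    // called by the simulated IOP after a bus transaction
    void DMAWrite(std::size_t offset, const std::uint16_t* words,
                  std::size_t count) {
        for (std::size_t i = 0; i < count && offset + i < 16; ++i)
            dapInput[offset + i] = words[i];
    }
};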

Essentially, I want this RFC to be seen as a sanity check. I know that I might see things wrong or overlook something important. So please - no applause; better to find out where I am wrong, because I have run out of self-criticism here.
 