EPS (Experiment Planning System)
EPS Overview
From [EPSD]: The Experiment Planning System (EPS) is a software tool developed for the creation of the experiment operations for a specific time period (mission scenario). An operations timeline (ITL) is first entered, which comprises a list of experiment operations and a specific time at which each operation has to be executed. The EPS then checks for constraint violations on each of the experiments and their operations. In order to use the EPS, the experiment operational procedures/requirements and the experiment environment must first be modelled and implemented. This is done using an Experiment Description File (EDF) for the experiment model, and an Event File (EVF) for the environment model. The EDF provides a generic way to model experiment operations.
The environment model in turn generates operational boundaries, which can either affect the experiment as a whole or can be restricted to a specific experiment operation. After describing the experiments, operations can be requested through an operations timeline, known as an Input Timeline (ITL). The EPS will run the timeline checking for conflicts.
The EPS uses a modular approach for defining an experiment, and can therefore still be used even if only limited information about an experiment is provided.
EPS Configuration
Simplified Power Model
The following EPS configuration keywords are relevant for the Simplified Power Model implemented for JUICE:
Power_algorithm: JUICE_SIMPLIFIED_MODEL
# Setup JUICE Simplified Power Model
Power_algorithm: JUICE_SIMPLIFIED_MODEL
Note
There is a difference between the Solar Array Available Power calculated by OSVE/EPS and the one calculated by the geopipeline package: the difference is 22 W (SA_AUXLOSSES: SA harness losses). These losses are taken into account by EPS at battery modelling level, but not when calculating the solar array generated power, and therefore the available power is higher in OSVE. The geopipeline removes the losses from the solar array power to simulate what is effectively delivered to the PCDU. This is documented in SC6-501. In other words, geopipeline traj_data.POWER_SOLAR_ARRAY is identical to osve_power.epsAvailablePower if the 22 W power loss (SA_AUXLOSSES: SA harness losses) is removed from traj_data.POWER_SOLAR_ARRAY.
Battery Capacity and Battery State of Charge
The Battery Capacity estimation is explained in detail in [EPSP]; when considering it, we need to carefully distinguish between the battery capacity and the battery operation modes that impose an upper limit on it. The Battery Capacity provided in [EPSP] is summarized in the table below:
| EOC SETTING FOR SIM. | T0 | BOL (-14.0%) | EOL (-20.4%) | EOL+ (-22.4%) | COMMENTS |
|---|---|---|---|---|---|
| EOC1 (100%) | 6390 | 5495 | 5086 | 4959 | Long Eclipse, critical manoeuvres, etc. |
| EOC2 (90%) | — | 4946 | 4577 | 4463 | Fly-by science operation |
| EOC3 (80%) | — | 4396 | 4069 | 3967 | G500 periodic orbit cycling |
In the table, EOC1 (100%) is equivalent to the total battery capacity (WBAT). In [EPSP] it is specified that in
every instance where WBAT(t) is calculated to a result WBAT(t) > EOC, it shall always be truncated to a maximum value = EOC.
In EPS terms, this means that the configuration setting BATTERY_CAPACITY needs to be set to WBAT, i.e. EOC1 (100%), and
EOC is then equivalent to BATTERY_CAPACITY * BATTERY_SOC_MAX.
If the BATTERY_CAPACITY parameter is instead set to EOC2 or EOC3, which was the case for the 7E1, PJ12, and ORB17 detailed
scenarios, then the BATTERY_SOC_MAX parameter needs to be set to 100.
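As a consistency check against the table above (all columns behave the same way): at BOL, EOC2 = 5495 * 0.90 ≈ 4946 and EOC3 = 5495 * 0.80 = 4396, i.e. the EOC2 and EOC3 rows are simply 90% and 80% of the EOC1 (100%) row. Setting BATTERY_CAPACITY to the EOC1 value and BATTERY_SOC_MAX to 90 or 80 therefore reproduces the EOC2 or EOC3 limits respectively.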
The Battery Capacity estimate carried out in the [ORDD] by the Flight Control Team is different. To begin with, instead of considering EOC1, EOC2, and EOC3, the FCT considers a SOC MAX of 93.17% and a SOC Cruise of 50%; the battery capacity at T0 (beginning of life) is given in ampere-hours as 273.3 Ah, whereas the state of charge is evaluated in terms of a Battery Voltage (VBAT).
Although this is what is specified in the document, the FCT effectively uses a SOC of 92%,
and the S/C Parameter CFT02D7C that provides the Battery Usable Capacity has a constant value of 993600 As
(ampere-seconds), i.e. 276 Ah. This value does not change and is equivalent to SOC 100% (EoCV 25.2 V), and therefore
to a Battery Capacity of 25.2 V * 276 Ah = 6955.2 Wh.
In order to account for the battery degradation the FCT uses the following formula:
BATTERY_CAPACITY = BATTERY_BOL_CAPACITY (Ah) * BATTERY_AGEING * VBAT (V)
where BATTERY_AGEING is a dimensionless ageing factor with the following values:
BOL: 1.0
LEOP: 0.946
CRUISE: 0.9
JOI: 0.86
Europa: 0.827
GEO: 0.816
GCO: 0.816
This formula needs to be used carefully: if the VBAT value used already incorporates the cap of the operation mode, then the
BATTERY_SOC_MAX parameter needs to be set to 100.
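For illustration (assuming VBAT is taken at the end-of-charge voltage of 25.2 V quoted above): at JOI the capacity would be 273.3 Ah * 0.86 * 25.2 V ≈ 5923 Wh, and at GCO 273.3 Ah * 0.816 * 25.2 V ≈ 5620 Wh.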
There is another S/C parameter, CFT02D7C, that provides the Battery State of Charge, but its usage and usefulness need to be evaluated by the FCT.
Writing PORs
The following EPS configuration keywords modify the generation of PORs with EPS:
POR_write_action_default_params: <PARAM_ONLY|PARAM_AND_VALUE>
ITL_write_action_default_params: <NONE|PARAM_ONLY|PARAM_AND_VALUE>
For example:
POR_write_action_default_params: PARAM_AND_VALUE
ITL_write_action_default_params: NONE
With this configuration, all action parameter values will be explicitly written in the PORs sent to the MOC, while the ITL will omit default parameters (saving disk space).
Thus, the possible values are:
- NONE: no default parameters are written (only available in ITLs or JITLs)
- PARAM_ONLY: only the names of the default parameters are written, but not their values (current behaviour)
- PARAM_AND_VALUE: the names and the values of the default parameters are written
Important
OSVE uses by default the following EPS parameter: -use-obs-profiles. This parameter specifies that the power and
data volume or data rate profiles provided either via the Observation Definition (ODF) or the Observation in the ITL take
precedence over any modelling power, data volume or data rate profile (for example resultant of a given instrument
mode as defined in the EDF). If these profiles are provided both in the ODF or the ITL, the ones of the ITL
take precedence.
Derived Events
The EPS Derived Events allow the user to define events derived from already existing events by applying a number of operations.
In order to include them, a new file called derived_events.def must be present in the same directory
as the events definition file.
For example, we can define an event derived from a Europa flyby with a given time shift:
CA_EUROPA_DER = (CA_EUROPA+00:15:00)
Then we would need to add the following line in the event definition file:
5282 CA_EUROPA_DER CA_EUROPA_DER - - - FALSE - 0 GLOBAL MOMENTARY INACTIVE
In this case, it will generate the same events as CA_EUROPA but starting 15 minutes later.
Please note that derived events do not have a COUNT, and therefore activities (observations or actions) can only
be scheduled against all the instances of the derived events that have been calculated in the simulation. For example,
the following input can be used (without an event count):
"start_time": {
"name": "CA_MOON",
"delta_time": "-000.00:51:43.000"
},
and then in the EPS output timeline you will see:
# ^-- Resolved event CA_MOON (INSTANCE = 1) -00:51:43 --^
19-Aug-2024_20:23:15 SWI OFF SWITCH_MODULE ( \
CURRENT_MS = OFF [ENG])
Note that instead of COUNT = 1 you get INSTANCE = 1.
Because of this, derived events are better suited for uses not related to scheduling activities, such as constraint checks or the calculation of umbra and penumbra periods for AGM.
In order to obtain a list of the calculated derived events, EPS can be run with the following option -display-derived-events.
For complete documentation (restricted access): [Derived Events Query Language](https://s2e2.cosmos.esa.int/confluence/display/PSS/On+Events+Query+Language)
EPS Inputs
Observation Definition Files (ODF)
The maximum length of the observation label is set to 100 characters.
Instrument Timeline Files (ITL/OTL)
In the same way, the ITL file names need to start with ITL_ or OTL_.
Payload Operations Requests (POR/PDOR)
The way EPS identifies POR XML files is by prefix (it checks that the POR filename starts with POR_). The validity range is not needed when using a top-level ITL that provides start and end times. The validity range of the event timeline should always be greater than or equal to the validity range of the action timeline.
The validity range of a POR is not required, but it can be used by EPS when there are no start and end times in a top-level ITL.
During the checks of PCW3 PORs the FCT requested to deviate from the current version of the PLID: JUI-ESOC-MOC-ICD-003, Is.1, Rev.1 in order to be able to accommodate reasonable requests from PEP Hi and PEP Lo.
These requests are currently implemented in EPS and are as follows:
- Allow for a description parameter for the occurrenceList element and its occurrences (present as optional in previous versions of the PLID but removed).
- Relax the 20-character length of the uniqueId parameter to a maximum length; from the PLID it is open to interpretation whether it is set to exactly 20 characters or up to 20 characters.
EPS Simulation
The EPS simulation starts 1 second after the simulation start time; any action or observation scheduled exactly at the simulation start time will not be taken into account in the simulation.
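For example (illustrative timeline, reusing the ITL syntax shown later in this page and assuming a simulation start time of 2033-06-19T11:00:00Z), the first entry below would be ignored while the second would be simulated:
# ignored: scheduled exactly at the simulation start time
2033-06-19T11:00:00.000Z REMOTE_SENSING * SWITCH_MODE (CURRENT_MODE=CUSTOM [ENG])
# simulated: scheduled 1 second after the simulation start time
2033-06-19T11:00:01.000Z REMOTE_SENSING * SWITCH_MODE (CURRENT_MODE=CUSTOM [ENG])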
EPS data rate resources management
Reference: https://s2e2.cosmos.esa.int/confluence/spaces/PSS/pages/21078072/EPS+modelling+language
Data stores
- only mass memory devices may contain data stores
- in case data stores are defined then no local memory is allowed
- any number of data stores are allowed per mass memory
- the data store label may not be redefined
- a data store may optionally be qualified as SHARED or HK (house-keeping); in this case the name of the data store may not be experiment specific
- the data resources of HK data store flows will not be included in the total experiment science data computations
- a data store shall always be linked to a specific experiment, unless qualified as SHARED or HK
- the data store experiment may be defined implicitly, i.e. the name of the data store is the same as the experiment name, or explicitly, i.e. the experiment is given after the data store label
- the experiment qualifier is not allowed on implicit data stores
- the experiment qualifier is mandatory on explicit data stores; the data store label is not allowed to be named after an experiment here
- a data store may optionally be qualified as SELECTIVE, which is a data store that will never be downloaded, but its data shall be moved to a transfer store for actual downlink
- no overflow of data will be reported for SELECTIVE data stores
- a data store may optionally be qualified as CYCLIC, which is a data store that will never overflow
- the experiment dedicated memory size shall be a positive value
- the experiment downlink packet size shall be a positive value, 0, or -1 (the latter activating the EPS Files Layer)
- a packet size of zero is allowed, indicating that downlink is not packetised but is continuous
- a packet size of -1 is allowed, indicating that the EPS Files Layer is enabled for this datastore; if the EPS Files Layer is enabled, all data falling in the datastore shall be appended to an open file, only closed files can be downlinked, and no files can be opened if the datastore is full
- the priority level shall be from 0 (high priority) up to 99 or higher; priorities >= 99 will result in no download of the data from mass memory
- the default priority is 16
- within the same priority level, either packets shall all be defined or shall all not be defined (i.e. set to zero size)
- the identifier is used by the DATA_STORE resource to reference a specific data store for priority updates
- the SELECTIVE keyword shall not be present in the datastore definition for datastores with the EPS Files Layer enabled (packet_size = -1)
Each datastore has a downlink queue, which provides the file to be downlinked once a datastore is chosen by the round robin. This is a FIFO queue that is filled at its end each time a file is closed (only if the datastore priority != 99). The queue needs to be managed every time the priority is updated, i.e. whenever the downlink is enabled or disabled for that datastore. Once a datastore updates its priority to 99 (downlink disabled), its downlink queue will be emptied, except for the first file if its downlink has already started; this first file, having a downlink start date, will remain in the queue until it is completely downlinked (which can only happen once the datastore's priority is updated again to a non-99 value). Remember that files with a downlink start date cannot be moved or deleted; they are automatically removed once the file's downlink is completely finished. This ensures that the full file downlink is always performed from the same datastore, as an atomic but interruptible operation: once downlink starts, nothing can be done other than pausing it or finishing the downlink. If later the datastore priority goes back to a non-99 value (downlink enabled), the downlink queue will be filled again from the end with all the closed but not yet downlinked files present in the datastore.
Only files downlinked from a given datastore with the EPS Files Layer enabled will be accounted in the datastore accumulated data field; moved files, deleted files and any other case will not be accounted. Only downlinked files are taken into account.
EDF Template:
#
# <Data stores start here...>
#
Nr_of_data_stores: <d>
#
Data_store: <label [[<experiment>|SHARED|HK]]> [SELECTIVE|CYCLIC] \
<memory size [[Mbytes]]> <packet size [[bytes]]|-1 (for enabling Files Layer)> \
[<priority>] [<identifier>]
#
... (d data stores)
Dataflow definitions
- only data producing experiments may have dataflow definitions; the data shall be routed to a mass memory device with data stores
- the dataflow can in principle be routed to either local memory, to a specific data store or to a specific PID
- the routing will bypass any local memory unless explicitly indicated (i.e. with the LOCAL_TO_* types as listed below)
- the LOCAL_TO_* types can only be used when local memory is available
- the following dataflow definition types may be defined:
  - TO_LOCAL
  - LOCAL_TO_DS: <data store>
  - LOCAL_TO_EXP_DS: <experiment> <data store>
  - LOCAL_TO_PID: <PID number>
  - LOCAL_TO_EXP_PID: <experiment> <PID number>
  - TO_DS: <data store>
  - TO_EXP_DS: <experiment> <data store>
  - TO_PID: <PID number>
  - TO_EXP_PID: <experiment> <PID number>
- the dataflow definition type must define an experiment in case no experiment has been defined in the experiment dataflow, or in case the dataflow definition is used as experiment default dataflow
- the data bus is optional and not allowed on the TO_LOCAL flow type
EDF Template:
#
# <Dataflow definitions start here...>
#
Nr_of_dataflow_defs: <df>
#
Dataflow_definition: <label> <TO_LOCAL|LOCAL_TO_DS|LOCAL_TO_EXP_DS| \
LOCAL_TO_PID|LOCAL_TO_EXP_PID|TO_DS| \
TO_EXP_DS|TO_PID|TO_EXP_PID> \
[<experiment>] [<data store>] [<PID number>] \
[<data bus>]
#
... (df dataflow definitions)
Dataflow specification
- the dataflow specification is optional for the listed keywords
- if no dataflow specification is provided then the data will be routed to the DEFAULT flow, i.e. following the existing rules regarding dataflow routing and the interpretation of negative numbers
- the following dataflow types are allowed:
  - TO_LOCAL
  - LOCAL_TO_FLOW: <dataflow name>
  - LOCAL_TO_PID: <PID number>
  - TO_FLOW: <dataflow name>
  - TO_PID: <PID number>
- any local memory will be bypassed unless explicitly indicated, i.e. using the LOCAL_TO_* types as listed above, or in case a dataflow definition is referenced which has defined a LOCAL_TO_* type
- the LOCAL_TO_* types can only be used when local memory is available
- the data production keywords may be instantiated multiple times, however the dataflow specifications shall never be recurring, also with the use of the DEFAULT flow (i.e. no explicit dataflow)
- the parameter based keywords will always have precedence above the explicit data rate or data volume keywords
- only positive data rates or data volumes are allowed in case the dataflow specification is provided
- the Param_dataflow and Module_param_dataflow keywords are intended to set the experiment or module dataflow linked to a state parameter value; this state parameter shall be of type Eng_type: TEXT
EDF Template:
#
# <Dataflow specification starts here...>
#
# Used in keywords: MS_data_rate, MS_data_rate_parameter,
# Nominal_data_rate, Data_rate_parameter,
# Equivalent_data_rate,
# Data_rate_increase, Data_volume
#
<TO_LOCAL|LOCAL_TO_FLOW|LOCAL_TO_PID|TO_FLOW|TO_PID> [<dataflow>] [<PID number>]
PIDs
- any number of PIDs are allowed per experiment
- PID numbers shall be unique across all experiments
- a PID can only be used in the experiment where it was defined
- a PID default state can be defined, although it is possible to override its state using resource parameters from the timeline or PID enable flags in a specific mode
- the PID will refer to the data store as defined by its ID, as such the data store shall have this (optional) ID defined
- the PID data store will be a data store in the mass memory device using the actual data routing; typically the experiment dataflow will be used for this, however it is also possible to use a module or action dataflow with a different mass memory device referenced
EDF Template:
#
# <PID definitions start here...>
#
Nr_of_PIDs: <p>
#
PID: <PID number> <DISABLE|ENABLE> <data store ID>
#
... (p PID definitions)
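As an illustration (the PID number 120 below is hypothetical, while the data store identifier 33 corresponds to SSMM_RS_SELECTED in the JUICE example further down), a single PID routed to that data store and enabled by default could be declared as:
Nr_of_PIDs: 1
#
PID: 120 ENABLE 33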
A JUICE Example:
Example of SSMM declaration:
############################################################
# Experiment SSMM_HIGH_RES
############################################################
Experiment: SSMM_HIGH_RES "JUICE HIGH_RES images solid state mass memory "
# This is in order to include EPS File Layer features to the SSMM_HIGH_RES experiment
Include_file: "juice__spacecraft_ssmm_file_layer.edf"
# Storage capacity
# Local_memory: 1100 [Gbits]
# Data_store: <data store [[HK|SHARED|<experiment>] SELECTIVE]> <memory size [[Mbytes]]> <packet size [[bytes]]|-1 (for enabling Files Layer)> [<priority>] [<identifier>]
# DEFAULT:
# Default datastore, FAKE datastore not reflected in JUI-ESAC-SGS-TN-033
Data_store: SCIENCE [SHARED] 1 [Mbytes] 0 [bits] 4 10
# REMOTE_SENSING SSMM:
# Remote Sensing BULK datastore, datastore to store high resolution images, no downlink (priority 99)
Data_store: SSMM_RS_BULK [REMOTE_SENSING] 625 [Gbits] -1 99 31
# Remote Sensing SELECTED datastore, datastore to store selected high resolution images to downlink
Data_store: SSMM_RS_SELECTED [REMOTE_SENSING] 100 [Gbits] -1 10 33
############################################################
# Experiment SSMM_LOW_RES
############################################################
Experiment: SSMM_LOW_RES "JUICE SSMM_LOW_RES images solid state mass memory"
# This is in order to include EPS File Layer features to the SSMM_LOW_RES experiment
Include_file: "juice__spacecraft_ssmm_file_layer.edf"
# Remote Sensing SELECTOR datastore, datastore to store low resolution images
Data_store: SSMM_RS_SELECTOR [REMOTE_SENSING] 50 [Gbits] -1 10 32
Example of a data rate producing experiment declaration:
# JUICE Timeline generated
# Experiment modes for experiment REMOTE_SENSING
# Experiment definition
Experiment: REMOTE_SENSING "REMOTE_SENSING"
# Experiment dataflow definition
# The main dataflow is linked to the SSMM. Then we create two flows :
# one for high resolution data and one for low resolution data
# The observation files will describe the loading in each of them
Dataflow: TO SSMM_HIGH_RES
Dataflow_definition: RS_LOW_FLOW TO_EXP_DS SSMM_LOW_RES SSMM_RS_SELECTOR
Dataflow_definition: RS_HIGH_FLOW TO_EXP_DS SSMM_HIGH_RES SSMM_RS_BULK
Mode: OFF
Nominal_power : 0 [Watts]
Nominal_data_rate: 0.0 [bits/sec] TO_FLOW RS_LOW_FLOW
Nominal_data_rate: 0.0 [bits/sec] TO_FLOW RS_HIGH_FLOW
Mode: CUSTOM
Nominal_power : 0.00 [Watts]
Nominal_data_rate: 100.0 [bits/sec] TO_FLOW RS_LOW_FLOW
Nominal_data_rate: 5000.0 [bits/sec] TO_FLOW RS_HIGH_FLOW
Example of a downlink experiment declaration:
# Experiment definition
Experiment: KAB_LINK "JUICE Ka band link"
# Data flow
Dataflow: FROM SSMM_HIGH_RES
# Downlink modes
Mode: DISABLED "No downlink"
Nominal_power : 0 [Watts]
Nominal_data_rate: 0 [Kbits/s]
Mode: DUMP_HGA "Downlink through HGA antenna"
Nominal_data_rate: 1000 [Kbits/s]
Nominal_power : 0 [Watts]
# Experiment definition
Experiment: XB_LINK "JUICE X band link"
# Data flow
Dataflow: FROM SSMM_LOW_RES
# Downlink modes
Mode: DISABLED "No downlink"
Nominal_power : 0 [Watts]
Nominal_data_rate: 0 [Kbits/s]
Mode: DUMP_HGA "Downlink through HGA antenna"
Nominal_data_rate: 500 [Kbits/s]
Nominal_power : 0 [Watts]
EPS Datastore’s Files Layer
In order for EPS to be able to handle files stored in an EPS datastore, there is what we call the EPS Datastore's Files Layer. It is enabled by setting the packet size to -1 when defining the Data_store.
Natively, EPS can only treat data to be downlinked either as a simple data volume that is downlinked bit by bit, depending on the available downlink data rate, or as a data volume split into fixed-size packets. These packets are downlinked one by one in an atomic way: a packet is downlinked until it is finished and no other packet can be downlinked at the same time. No downlink is performed from a mass memory until the stored data reaches the packet size.
The JUICE SOC required handling files with fixed or variable size, and also modelling open and closed files (only closed files can be downlinked), so as a workaround a new resource parameter was created. The TRANSFER_FILE action parameter allows moving all the data in a source datastore (the one we call the "Open Files Datastore") and enqueuing it as a single packet with a custom packet size in the target datastore (the "Closed Files Datastore").
NOTE: After the implementation of issue SC6-522, and with -1 set for the datastore packet size, there is no need to have two data stores per experiment: open and closed files can coexist in the same datastore, because only closed files will be downlinked. There is therefore no need for a dedicated datastore for open files.
All these enqueued packets will be downlinked following a first-in, first-out rule until the queue is empty; at this point the default datastore packet size is restored.
With this TRANSFER_FILE option the JUICE SOC was able to simulate files, but it would be too hard to derive any file attribute, such as file creation or downlink dates, file status, etc.
On the other hand, the JUICE SOC required simulating the SELECTIVE downlink and needs to know the status of the SSMM at any point. For this purpose the EPS Datastore's Files Layer was created, which basically adds this file attribute information to the legacy EPS data stores implementation.
Supported file resource parameters are:
OPEN_FILE
CLOSE_FILE
MOVE_FILE
DELETE_FILE
Some considerations when using the EPS Datastore’s Files Layer:
- 1 - Setting the correct datastore packet size: In order to avoid EPS starting to downlink data from the very first opened file (first OPEN_FILE action), or from any other opened file when there is downlink capacity and nothing to downlink (which for JUICE is not allowed, since only closed-file downlink is supported), the user shall specify a packet size bigger than the expected maximum file size. This will make EPS idle the downlink of the datastore until the memory usage reaches that packet size, which will never happen because the CLOSE_FILE will be executed earlier. This approach also requires the following EPS setting in the EPS configuration file:
Setting: DUMP_ENTIRE_PACKETS TRUE
Once the CLOSE_FILE action is invoked, the only available opened file is closed and enqueued for downlink; this should only happen if the datastore priority is smaller than 99. At the next downlink pass, EPS will retrieve the first file enqueued for downlink and will set the datastore's packet size equal to the file size, so this amount of memory will be treated as a single and atomic packet downlink. Once the file is downlinked, it is removed from the downlink queue, the next file becomes the first, and the process is repeated until the queue is empty. At this point the datastore's packet size is reset to the original one, i.e. the one in the datastore's definition.
IMPORTANT NOTE: If an opened file reaches the packet size defined in the datastore definition, the packet will be downlinked without checking whether it belongs to an opened file; use a packet size bigger than the expected maximum file size.
IMPORTANT NOTE: All of this can be ignored after the implementation of issue SC6-522: after setting packet size -1 for the datastore there is no need to care about file sizes or about open files being downlinked, as only closed files will be downlinked.
- 2 - EPS will report/log the file state change events only when the logging level is set to DEBUG
to avoid extensive logs by default.
- 3 - Updating the datastore priority during simulation will not prepare for downlink the files already closed in the datastore. Files are only prepared for downlink if, at closing time, the datastore has a priority that allows downlink, i.e. smaller than 99. If required, an ACTION could be prepared to force the preparation for downlink of a given file in case it was closed at a time when the datastore priority was no-downlink (99). For the time being this is not considered necessary because the datastore priorities are not expected to change during simulation.
- 4 - There is no restriction in EPS on opening files or not at this point. If there is an open file, all data sent to the datastore will be marked as part of the file until it is closed. If there is no open file, EPS will treat the incoming data rate as simple data to be stored and transmitted following the packet size rules, i.e. the legacy behaviour. To avoid this behaviour, just set the packet size to -1, which actually enables the EPS Files Layer for that datastore, as implemented in SC6-522.
- 5 - EPS will raise an error if the packet size is -1 (Files Layer enabled) and the user tries to store data in a datastore without any open file.
- 6 - EPS will raise an error if the user tries to open a file in an overflowed datastore.
Example of an EDF for the Datastore’s Files Layer definition:
######################################################################################
#
# EPS DATASTORES FILES LAYER
#
######################################################################################
Global_actions: OPEN_FILE CLOSE_FILE MOVE_FILE DELETE_FILE
#
# OPEN_FILE Action definition
# This action will create a file with a given filename and with status OPEN in the specified datastore.
#
# Considerations:
#
# - File cannot already exist.
# - Only one opened file per datastore is allowed.
#
# E.g.: OPEN_FILE (DS_PARAM = 30 \
# FILENAME_PARAM = "File_1" )
#
Parameter: DS_PARAM
Raw_type: UINT
Resource: FILE_STORE
Parameter: FILENAME_PARAM
Raw_type: STRING
Resource: FILE_NAME
Parameter: OPEN_FILE_PARAM
Eng_type: REAL
Default_value: 0.0
Resource: OPEN_FILE
Action: OPEN_FILE
Action_parameters: DS_PARAM FILENAME_PARAM OPEN_FILE_PARAM
#
# CLOSE_FILE Action definition
# This action will close a file with a given filename and with status OPEN in the specified datastore.
#
# Considerations:
#
# - The filename shall be a valid filename and that has been initially opened with OPEN_FILE action.
# - It cannot be empty.
# - It cannot be already closed.
#
# E.g.: CLOSE_FILE (DS_PARAM = 30 \
# FILENAME_PARAM = "File_1" )
#
Parameter: CLOSE_FILE_PARAM
Eng_type: REAL
Default_value: 0.0
Resource: CLOSE_FILE
Action: CLOSE_FILE
Action_parameters: DS_PARAM FILENAME_PARAM CLOSE_FILE_PARAM
#
# MOVE_FILE Action definition
# This action will move a file with a given filename and with status CLOSED from the source datastore
# to a target datastore.
#
# Considerations:
#
# - The filename shall be a valid filename and that has been closed with CLOSE_FILE action.
# - It cannot be deleted
#
# E.g.: MOVE_FILE (SOURCE_PARAM = 30 \
# TARGET_PARAM = 31 \
# FILENAME_PARAM = "File_1" )
#
Parameter: SOURCE_PARAM
Raw_type: UINT
Resource: SOURCE_STORE
Parameter: TARGET_PARAM
Raw_type: UINT
Resource: TARGET_STORE
Parameter: MOVE_FILE_PARAM
Eng_type: REAL
Default_value: 0.0
Resource: MOVE_FILE
Action: MOVE_FILE
Action_parameters: SOURCE_PARAM TARGET_PARAM FILENAME_PARAM MOVE_FILE_PARAM
#
# DELETE_FILE Action definition
# This action will mark as deleted a file with a given filename and with status CLOSED in the specified datastore.
#
# Considerations:
#
# - The filename shall be a valid filename been closed with CLOSE_FILE action.
# - It cannot be already deleted.
# - It cannot be currently downlinking.
#
# E.g.: DELETE_FILE (DS_PARAM = 30 \
# FILENAME_PARAM = "File_1" )
#
Parameter: DELETE_FILE_PARAM
Eng_type: REAL
Default_value: 0.0
Resource: DELETE_FILE
Action: DELETE_FILE
Action_parameters: DS_PARAM FILENAME_PARAM DELETE_FILE_PARAM
#
# SET DS PRIORITY Action definition
#
Parameter: DS_ID_PARAM
Raw_type: ENUM
Eng_type: TEXT
Resource: DATA_STORE
Parameter_value: 31 DS_SSMM_RS_BULK
Parameter_value: 32 DS_SSMM_RS_SELECTOR
Parameter_value: 33 DS_SSMM_RS_SELECTED
Parameter_value: 40 DS_SSMM_GEO_BULK
Parameter_value: 41 DS_SSMM_GEO_SELECTED
Parameter_value: 42 DS_SSMM_GEO_SELECTOR
Parameter: PRIORITY_PARAM
Raw_type: UINT
Resource: PRIORITY
Action: SET_PRIORITY
Action_parameters: DS_ID_PARAM PRIORITY_PARAM
Example of an ITL using the Datastore’s Files Layer:
2033-06-19T11:00:00.000Z SSMM_HIGH_RES * OPEN_FILE (DS_PARAM = 31 FILENAME_PARAM = "File_2" )
2033-06-19T11:00:00.000Z SSMM_LOW_RES * OPEN_FILE (DS_PARAM = 32 FILENAME_PARAM = "File_2_Thumb" )
2033-06-19T11:00:00.000Z REMOTE_SENSING * SWITCH_MODE (CURRENT_MODE=CUSTOM [ENG])
2033-06-19T11:30:00.000Z REMOTE_SENSING * SWITCH_MODE (CURRENT_MODE=OFF [ENG])
2033-06-19T11:30:00.000Z SSMM_HIGH_RES * CLOSE_FILE (DS_PARAM = 31 FILENAME_PARAM = "File_2")
2033-06-19T11:30:00.000Z SSMM_LOW_RES * CLOSE_FILE (DS_PARAM = 32 FILENAME_PARAM = "File_2_Thumb")
2033-06-19T12:00:00.000Z SSMM_HIGH_RES * OPEN_FILE (DS_PARAM = 31 FILENAME_PARAM = "File_3" )
2033-06-19T12:00:00.000Z SSMM_LOW_RES * OPEN_FILE (DS_PARAM = 32 FILENAME_PARAM = "File_3_Thumb" )
2033-06-19T12:00:00.000Z REMOTE_SENSING * SWITCH_MODE (CURRENT_MODE=CUSTOM [ENG])
2033-06-19T12:30:00.000Z REMOTE_SENSING * SWITCH_MODE (CURRENT_MODE=OFF [ENG])
2033-06-19T12:30:00.000Z SSMM_HIGH_RES * CLOSE_FILE (DS_PARAM = 31 FILENAME_PARAM = "File_3")
2033-06-19T12:30:00.000Z SSMM_LOW_RES * CLOSE_FILE (DS_PARAM = 32 FILENAME_PARAM = "File_3_Thumb")
2033-06-19T12:50:00.000Z SSMM_HIGH_RES * MOVE_FILE (SOURCE_PARAM = 31 \
TARGET_PARAM = 33 \
FILENAME_PARAM = "File_2")
2033-06-19T12:55:00.000Z SSMM_HIGH_RES * MOVE_FILE (SOURCE_PARAM = 31 \
TARGET_PARAM = 33 \
FILENAME_PARAM = "File_3")
2033-06-19T15:00:00.000Z SSMM_HIGH_RES * DELETE_FILE (DS_PARAM = 33 \
FILENAME_PARAM = "File_4")
# Set SSMM_RS_BULK to priority 5 (downlink enabled)
2033-06-19T16:00:00.000Z SSMM_HIGH_RES * SET_PRIORITY ( DS_ID_PARAM = 31 [RAW] PRIORITY_PARAM = 5 )
# Set SSMM_RS_BULK to priority 99 (downlink disabled)
2033-06-19T17:00:00.000Z SSMM_HIGH_RES * SET_PRIORITY ( DS_ID_PARAM = 31 [RAW] PRIORITY_PARAM = 99 )
More information about how to access this Files Layer from OSVE Callbacks and OSVE Datapacks can be found in the OsveFiles class (OSVE Callback Datapacks) and in the EPS_DS_FILES overlay (OSVE Datapacks).
Finally, OSVE is able to report in a CSV all the files handled by all the SSMM datastores defined in the EPS model. To do so, you need to indicate the "ssmmFilesFilePath" in the "outputFiles" section of the OSVE Session File. The generated CSV will look like this example:
# SSMM Datastore Files
# ISE/OSVE version: 9.3.24_69c09def / 2.6.8a1
#
# SSMM Experiment, Datastore name, Filename, File Status, File Size (Bytes), Creation Date, Closing date, Downlink Start Date, Downlink Date, Delete Date
SSMM_HIGH_RES,SSMM_RS_SELECTED,File_2,Downlinked,1125000,2033-06-19T11:00:00,2033-06-19T11:30:00,2033-06-19T12:50:00,2033-06-20T05:13:42,2033-06-20T05:13:42
SSMM_HIGH_RES,SSMM_RS_SELECTED,File_3,Downlinked,1124999,2033-06-19T12:00:00,2033-06-19T12:30:00,2033-06-20T05:13:42,2033-06-20T05:15:52,2033-06-20T05:15:52
SSMM_HIGH_RES,SSMM_RS_SELECTED,File_1,Downlinked,1125000,2033-06-19T10:00:00,2033-06-19T10:30:00,2033-06-20T05:15:52,2033-06-20T05:18:02,2033-06-20T05:18:02
SSMM_HIGH_RES,SSMM_RS_SELECTED,File_4,Deleted,1125000,2033-06-19T13:00:00,2033-06-19T13:30:00,,,2033-06-19T15:00:00
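For reference, a minimal sketch of the OSVE session file fragment that activates this report (only the "outputFiles" section and the "ssmmFilesFilePath" key are taken from the text above; the surrounding structure and the path value are illustrative):
"outputFiles": {
    "ssmmFilesFilePath": "output/ssmm_datastore_files.csv"
}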
SSMM File Layer Initial and Final States:
EPS/OSVE can report the status of all files handled by the simulation at the end of the simulation if FINAL_STATES is present in the "Output_files:" keyword of the EPS configuration file:
Output_files: FINAL_STATES ...
EPS/OSVE will generate a "final_states.out" file in the "simOutputFilesPath" folder defined in the OSVE session file. This final states file will contain, among other things, the files handled by each datastore in the following format:
# Final DS files Settings
# (5 final init DS files)
# Init_ds_file: <mass memory label> <experiment label> <file name label> <file size[[Mbytes]]> <downlinked size[[Mbytes]]> <creation date> <closing date> <downlink start date> <downlink end date> <deleted date>
Init_ds_file: SSMM_RS_BULK SSMM_HIGH_RES SSMM_RS_BULK_31_19 0.00000000 [Gbits] 0.00000000 [Gbits] 2033-06-20T00:00:00Z 2033-06-20T00:00:02Z - - -
Init_ds_file: SSMM_GEO_BULK SSMM_HIGH_RES SSMM_GEO_BULK_40_1 0.00035400 [Gbits] 0.00035400 [Gbits] 2033-06-19T16:28:00Z 2033-06-19T18:28:00Z 2033-06-20T05:13:27Z 2033-06-20T05:13:32Z 2033-06-20T05:13:32Z
Init_ds_file: SSMM_GEO_BULK SSMM_HIGH_RES SSMM_GEO_BULK_40_2 0.00036000 [Gbits] 0.00036000 [Gbits] 2033-06-19T22:28:00Z 2033-06-20T00:28:00Z 2033-06-20T05:19:30Z 2033-06-20T05:19:35Z 2033-06-20T05:19:35Z
Init_ds_file: SSMM_GEO_BULK SSMM_HIGH_RES SSMM_GEO_BULK_40_3 0.00000600 [Gbits] 0.00000600 [Gbits] 2033-06-20T00:28:00Z 2033-06-20T00:30:02Z 2033-06-20T05:21:31Z 2033-06-20T05:21:31Z 2033-06-20T05:21:31Z
Init_ds_file: SSMM_GEO_BULK SSMM_HIGH_RES SSMM_GEO_BULK_40_4 0.00004505 [Gbits] 0.00000000 [Gbits] 2033-06-20T05:28:00Z - - - -
The user can simply use this "final_states.out" file to define the initial states of the next OSVE execution. To do so, just include this file in the top-level ITL:
Include_file: "../../output/eps_output/final_states.out"
The user can also define their own initial states, provided the syntax above is followed and the current memory of the "Init_data_store" entries is updated accordingly (usually by summing up all the file sizes of a given datastore).
In order to write dates as undefined for any "Init_ds_file" keyword, just type "-" instead of a date in the format YYYY-MM-DDTHH:mm:ssZ, as shown in the example above.
In order to preserve the OPEN FILE state parameters association from one execution to the next one, use the following Initial States settings:
# Final DS ParamIDs Settings
# (2 final init DS Param IDs)
# Init_ds_paramId: <experiment label> <mass memory label> <stateParam label> <OPEN_FILE_ELAPSED_TIME|OPEN_FILE_SIZE>
Init_ds_paramId: SSMM_HIGH_RES SSMM_GEO_BULK PARAM_GEO_OPENED_TIME_SP OPEN_FILE_ELAPSED_TIME
Init_ds_paramId: SSMM_HIGH_RES SSMM_GEO_BULK PARAM_GEO_FILESIZE_SP OPEN_FILE_SIZE
This tells EPS that the opened file SSMM_GEO_BULK_40_4 in datastore SSMM_GEO_BULK will update the state parameters PARAM_GEO_OPENED_TIME_SP and PARAM_GEO_FILESIZE_SP with the open elapsed time and with any file size update.