Conversation

@ian-ross
Member

This PR is an initial attempt to organize the performance model part of AEIC more clearly.

Changes

  • Following discussions with @WyattGiroux, a performance model is now fundamentally an object that takes an aircraft state (AEIC.performance.types.AircraftState, i.e. altitude, aircraft mass, requested airspeed, requested rate of climb/descent) plus a "flight rule", and returns performance data (AEIC.performance.types.Performance: actual achievable airspeed, actual achievable rate of climb/descent, fuel flow). "Flight rules" are things like "Mach climb", "constant ROCD climb", "constant altitude cruise", etc., although the current legacy performance model only supports "climb, cruise or descent" via the AEIC.performance.types.SimpleFlightRules type.
  • Performance models are now classes that derive from AEIC.performance.models.BasePerformanceModel. All of these classes are Pydantic models to enable easy initialization from TOML data. The base class has an evaluate method that does the performance calculations, and the derived classes have an evaluate_impl method that actually implements the evaluation (see comments in the code for the reason for splitting things like this). Performance model classes are generic in a flight rules type: this allows for different levels of flight rule flexibility in different performance models. The LegacyPerformanceModel uses a simple climb/cruise/descent enumeration (AEIC.performance.types.SimpleFlightRules), but a more complex performance model could have programmatic flight rules that a caller of the evaluate method can use to implement parameterizable flight rule behavior as required.
  • A performance model loader class (AEIC.performance.models.PerformanceModel) is used to load performance models from TOML files. This has a load method with a polymorphic return type that decides on what type of performance model to return based on a model_type field in the TOML input.
  • The legacy performance model has been refactored to split the performance table out into a separate PerformanceTable class that makes the relationship between flight level and mass and the performance model output quantities more explicit, and that uses a simpler and clearer representation of the performance table data that makes the behavior of the model more legible.
  • The legacy performance model has been simplified: all data is now taken from the input TOML file. This represents a slight change in philosophy: performance model files should be self-contained, and there should not be flags and switches that cause code to read data from alternative sources. Instead, additional executable scripts will be provided to create performance model files from whatever data sources people want to use (see below).
  • The legacy trajectory builder has been modified to use the new performance model API.
  • As part of converting the legacy performance model and trajectory builder, a couple of bugs were fixed: 1. interpolation in the performance table was not implemented correctly because of the way that table values were stored in sorted order in the old code; 2. the holdMass (now hold_mass) value in the calc_starting_mass method was wrong (it wasn't a mass at all).
  • A make-performance-model executable script has been created to generate (legacy) performance model TOML files from selected data sources. (This is not finished, and Adi and/or Wyatt will need to take a look at it. I think it's OK for the purposes of this PR, but it is definitely unfinished.)
  • A "golden test" has been added for end-to-end trajectory simulation. The intention here is to ensure that any changes to simulation results resulting from future code changes are noticed during testing. If the changes are OK, the golden test results should be recreated and committed. (This "golden test" is not from before the changes in this PR, because of the bugs in the performance model detected during this work. Instead, it's from before I did some optimization on the performance table code. I'm thus at least confident that it's possible to make meaningful code changes without breaking the test, as long as... you don't break the test!)
  • Data structures have been introduced for things like APU data, replacing dictionaries.
  • LTO data representation has been improved and the corresponding code in the emissions module simplified.
  • The PTF data parser has been improved, adding an explicit representation of the contents of a PTF file as a data structure instead of using an ad hoc dictionary.
  • Names have been largely converted to Python standards, following PEP8.
  • Units have been rationalized in many places.
  • Documentation has been updated in many places, all non-index documentation pages have been converted to Markdown, and cross references have been introduced (from class and method names) in most areas.
  • Developer documentation has been added, including a data dictionary with standardized names and units for many quantities.

Questions

  • What exactly are the constraints on the data in the legacy performance model files? I've seen comments that indicate that in climb and descent, TAS, ROCD and fuel flow depend only on flight level, and similar things, but the data in the sample performance model file doesn't seem to be consistent with all of the comments I've seen. (I'm being deliberately vague here so that someone has to actually explain the basis of what's going on, rather than just rationalizing the existing comments!)
  • Where did the sample performance model data come from? Is there a script somewhere to generate that file?
  • What are LTO files? They are not BADA standard, and I suspect they are "home-made" from some other LAE software (TASOPT?). If so, can they be in a better format, and can someone write some documentation about where they come from and what the contents actually mean? (Something at the standard of the documentation of the PTF file format in the BADA manual would be ideal.)
  • We seem to have two different functions for pressure altitude conversion, one in AEIC.utils.standard_atmosphere and one in the weather module. Can we either settle on one or document why we're using one or the other at any particular time?
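For context on that last question: both functions presumably implement some form of the ISA barometric relation, sketched below with standard ISA troposphere constants (a generic sketch, not AEIC's actual code in either module):

```python
P0 = 101325.0    # ISA sea-level pressure, Pa
T0 = 288.15      # ISA sea-level temperature, K
L = 0.0065       # tropospheric temperature lapse rate, K/m
G0 = 9.80665     # standard gravitational acceleration, m/s^2
R = 287.05287    # specific gas constant for dry air, J/(kg K)


def pressure_altitude_m(p_pa: float) -> float:
    """ISA pressure altitude; valid in the troposphere (below ~11 km)."""
    return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / G0))
```

If the two existing functions differ, it's worth documenting whether the difference is constants, the stratosphere branch, or something else.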

Requests

  • @aditeyashukla / @WyattGiroux : could you do a careful review of the logic in the legacy trajectory builder and the legacy performance model, following the flow of control through the LegacyBuilder's calc_starting_mass, fly_climb, fly_cruise and fly_descent methods? I believe that the behavior is now correct everywhere, but it would be good to have some other eyes on it. The simulation-checks.ipynb notebook contains plots of all along-trajectory quantities for a single simulation, and it looks OK to me. (There are discontinuities in airspeed at top-of-climb and beginning-of-descent as the performance model switches between the different parts of the performance table, but those are the only obvious anomalies I've seen.)
  • @aditeyashukla / @WyattGiroux : could you take a look at the make-performance-model script and tell me what's missing/wrong? In particular, can I get Foo_kN from the engine database, and what's the deal with LTO files? (see above)
  • Everyone: could you think about better names for some of the entities I've created? In particular AEIC.performance.types.Performance isn't very good (it's the result of evaluating a performance model on an aircraft state). Also, can you think of a better name for the evaluate method on performance models (this is the main method that takes an aircraft state and returns the performance data)? I would also like an accurate descriptive name for Foo_kN!

@ian-ross
Member Author

Bah. My golden test is failing because, according to the airports data we download, Denver International Airport moved between the time I made the golden test output and now... I need to figure out some way of making those things stable for testing. For the moment I'll disable the golden test.

@ian-ross
Member Author

OK, I fixed the golden test by making a test airports file that's fixed, based on the sample missions and other airports used in the test. There's a little bit of extra machinery to allow the configuration system to put that stable file "in front of" the normal airports file. Seems to work, and all the tests now pass on both platforms.
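The "in front of" arrangement is essentially a layered lookup: the stable test file is consulted first, and the normal airports file only for codes it doesn't cover. A generic sketch with collections.ChainMap, using hypothetical airport records (the actual AEIC configuration machinery will differ):

```python
from collections import ChainMap

# Hypothetical airport records keyed by ICAO code. The stable test file
# pins coordinates that might otherwise drift between data downloads.
stable_test_airports = {"KDEN": {"lat": 39.8617, "lon": -104.6732}}
normal_airports = {
    "KDEN": {"lat": 39.8561, "lon": -104.6737},  # drifts between downloads
    "EGLL": {"lat": 51.4775, "lon": -0.4614},
}

# First mapping wins: lookups hit the stable file before the normal one.
airports = ChainMap(stable_test_airports, normal_airports)
```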
