Introduction of returns to our system #70
base: main
Conversation
Returns will be used to wrap functions in containers that can be passed around on a rail; this is what the railway-oriented programming methodology will be used for.
We are no longer working with the raw value, since it is unclear at runtime whether a failure occurred or not. Instead, the raw value is encapsulated in a container that can be passed around until the intended receiver consumes it; think of a mail courier package. The state of the container can be Success or Failure, and it can be unpacked lazily.
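A minimal sketch of that container idea using the returns library; the function and values below are illustrative, not code from this PR:

```python
from returns.result import Failure, Result, Success


def parse_step_size(raw: str) -> Result[int, str]:
    """Wrap the outcome in a container instead of returning a raw value or raising."""
    try:
        value = int(raw)
    except ValueError:
        return Failure(f"not an integer: {raw!r}")
    if value <= 0:
        return Failure(f"step size must be positive, got {value}")
    return Success(value)


# The container travels along the "rail"; the receiver unpacks it lazily.
print(parse_step_size("2").map(lambda v: v * 10))   # <Success: 20>
print(parse_step_size("-1").map(lambda v: v * 10))  # <Failure: step size must be positive, got -1>
```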
Wrote module docstrings for the new modules with the assistance of Claude (AI).
Pipelines is an abstraction module that communicates with the returns layer. This means endpoints can call scratch-core functions without caring what underlying protocol is used for failures/exceptions.
Created an upload scan parameters model that holds all the parameters used by the pipelines. All parameters have defaults, and defaults are generated if no parameters are given, so nothing changes for the user or the endpoint contract (a rough sketch follows below).
deptry flagged dependencies that needed maintenance.
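A rough sketch of what such a parameters model with defaults could look like (pydantic v2 style; the class, fields, and as_dict behaviour are assumptions based on this thread, not the actual schema):

```python
from pydantic import BaseModel


class UploadScanParameters(BaseModel):
    """All fields have defaults, so omitting parameters keeps the endpoint contract unchanged."""

    step_size_x: int = 1
    step_size_y: int = 1

    def as_dict(
        self, *, include: set[str] | None = None, exclude: set[str] | None = None
    ) -> dict:
        """Return the selected fields as a plain dict; include and exclude are mutually exclusive."""
        if exclude and include:
            raise ValueError("Cannot specify both 'exclude' and 'include'")
        fields = set(self.__class__.model_fields)
        if include:
            fields &= include
        elif exclude:
            fields -= exclude
        return {name: getattr(self, name) for name in fields}


params = UploadScanParameters()  # no parameters given: defaults are used
print(params.as_dict(include={"step_size_x", "step_size_y"}))  # e.g. {'step_size_x': 1, 'step_size_y': 1}
```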
SimoneAriens
left a comment
after my comments are resolved, good to merge!
Raytesnel
left a comment
LGTM
scan_file: Path, step_size_x: int = 1, step_size_y: int = 1
) -> ScanImage:
    """
    Load a scan image from a file and optionally subsample it. Parsed values will be converted to meters (m).
I don't think we should merge the subsampling and parsing steps, since we may want to subsample conditionally on the result of the parsed data (e.g. if the parsed image is larger than X pixels we need to subsample).
My initial idea was something like this:
scan_image = load_scan_image(filepath="/some/filepath")
if scan_image.width > MAX_SIZE:
    scan_image.subsample(step_size=...)  # in-place subsampling
    scan_image = scan_image.subsample(step_size=...)  # or return a new object

Unless we are sure we will never have to do that?
I think what he wanted here (if that is the case, I can agree) is that this load_scan_image will be a top-level callable, hence the @impure_safe at the top of the function.
So in the end this function will be split up into different sub-functions that load a scan_image.
So in the end, the top-level caller only needs to think about loading a scan image; configs should handle the rest.
Software is not fixed, so I can't guarantee that we will never have to do that. But for now I think the logic here is some baby steps towards eventually having a pipeline for loading a file.
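A rough, self-contained sketch of that direction, assuming returns' @impure_safe on the top-level loader with sub-functions underneath (the byte parsing is a stand-in for the real scan format, not the actual implementation):

```python
from pathlib import Path

import numpy as np
from returns.io import IOResultE, impure_safe


def _read_raw(scan_file: Path) -> bytes:
    """Sub-step: disk IO."""
    return scan_file.read_bytes()


def _parse_raw(raw: bytes) -> np.ndarray:
    """Sub-step: pure parsing (stand-in for the real scan format)."""
    return np.frombuffer(raw, dtype=np.uint8)


@impure_safe
def load_scan_image(scan_file: Path) -> np.ndarray:
    """Top-level callable: exceptions from the sub-steps end up as a failed
    IOResult instead of propagating to the caller."""
    return _parse_raw(_read_raw(scan_file))


container: IOResultE[np.ndarray] = load_scan_image(Path("/some/filepath"))
```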
These are two separate steps, which I would expect in parse_scan_pipeline, because loading and subsampling an image are distinct:
return run_pipeline(
    scan_file,
    load_scan_image,
    partial(subsample_image, **parameters.as_dict(include={"step_size_x", "step_size_y"})),
    error_message=f"Failed to parse given scan file: {scan_file}",
)
Yeah, I didn't see parse_scan_pipeline. I thought this was a first step, to be built out later when there is more of a pipeline, but he already has one. I'll fix it.
Do not merge before checking the tests manually
Some minor changes requested, but overall looks good! :)
Diff Coverage
Diff: origin/main..HEAD, staged and unstaged changes

Summary

packages/scratch-core/src/container_models/base.py (Lines 8-16)

   8
   9
  10  def serialize_ndarray[T: number](array_: NDArray[T]) -> list[T]:
  11      """Serialize numpy array to a Python list for JSON serialization."""
! 12      return array_.tolist()
  13
  14
  15  def coerce_to_array[T: number](
  16      dtype: DTypeLike, value: Sequence[T] | NDArray[T] | None

packages/scratch-core/src/container_models/scan_image.py (Lines 17-27)

  17
  18      @property
  19      def width(self) -> int:
  20          """The image width in pixels."""
! 21          return self.data.shape[1]
  22
  23      @property
  24      def height(self) -> int:
  25          """The image height in pixels."""
! 26          return self.data.shape[0]

packages/scratch-core/src/renders/image_io.py (Lines 30-38)

  30      input_array: ScanMap2DArray, lower: float, upper: float
  31  ) -> ScanMap2DArray:
  32      """Perform min-max normalization on the input array and scale to the [0, 255] interval."""
  33      if lower >= upper:
! 34          raise ValueError(
  35              f"The lower bound ({lower}) should be smaller than the upper bound ({upper})."
  36          )
  37      return (input_array - lower) / (upper - lower) * 255.0

packages/scratch-core/src/renders/image_io.py (Lines 49-57)

  49      :param std_scaler: The multiplier for the standard deviation of the data to be clipped.
  50      :returns: A tuple containing the clipped data, the lower bound, and the upper bound of the clipped data.
  51      """
  52      if std_scaler <= 0.0:
! 53          raise ValueError("`std_scaler` must be a positive number.")
  54      mean = np.nanmean(data)
  55      std = np.nanstd(data, ddof=1) * std_scaler
  56      upper = float(mean + std)
  57      lower = float(mean - std)

packages/scratch-core/src/utils/logger.py (Lines 31-43)

  31  def log_failure(failure_message: str, failure_level: FailureLevel, error: str) -> None:
  32      logger.debug(f"{failure_message}: {error}")
  33      match failure_level:
  34          case FailureLevel.WARNING:
! 35              logger.warning(failure_message)
  36          case FailureLevel.ERROR:
  37              logger.error(failure_message)
! 38          case FailureLevel.CRITICAL:
! 39              logger.critical(failure_message)
  40
  41
  42  def _log_io_container(
  43      result: IOResult,

src/preprocessors/schemas.py (Lines 41-49)

  41      :param exclude: Set of field names to exclude
  42      :param include: Set of field names to include (mutually exclusive with exclude)
  43      """
  44      if exclude and include:
! 45          raise ValueError("Cannot specify both 'exclude' and 'include'")
  46
  47      fields = set(self.__class__.model_fields)
  48
  49      if include:
| "Failed to subsample image file", | ||
| "Successfully subsampled scan file", | ||
| ) | ||
| @impure_safe |
This is safe, not impure_safe; there is no IO call here.
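For reference, in the returns library @safe catches exceptions from a pure function into a Result, while @impure_safe is meant for IO-bound calls and yields an IOResult. A minimal sketch of the pure case (the real subsample_image signature may differ):

```python
import numpy as np
from returns.result import ResultE, safe


@safe
def subsample_image(
    data: np.ndarray, step_size_x: int = 1, step_size_y: int = 1
) -> np.ndarray:
    """Pure computation: no IO happens here, so @safe (Result) is the right wrapper;
    @impure_safe (IOResult) would wrongly suggest a side effect."""
    return data[::step_size_y, ::step_size_x]


result: ResultE[np.ndarray] = subsample_image(np.zeros((4, 4)), step_size_x=2)
```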
SimoneAriens
left a comment
There were some issues regarding the difficulty of debugging without seeing a stack trace, so check this before merging.
Introduction of returns to our system. Returns uses the railway-oriented programming methodology to deal with None values and exceptions.
The goal of this PR is to integrate the implemented library functions with the API while handling all the errors.
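As an illustration of the API-side integration (the helper below is hypothetical, not the actual endpoint code), the container coming out of a pipeline is unpacked exactly once, at the boundary:

```python
from returns.pipeline import is_successful
from returns.result import Failure, Result, Success


def scan_upload_response(result: Result[dict, str]) -> dict:
    """Hypothetical endpoint helper: unpack the container at the API boundary
    instead of sprinkling try/except through the call chain."""
    if is_successful(result):
        return {"status": "ok", "payload": result.unwrap()}
    return {"status": "error", "detail": result.failure()}


print(scan_upload_response(Success({"width": 1024, "height": 768})))
print(scan_upload_response(Failure("Failed to parse given scan file")))
```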