img_phy_sim.eval
Evaluation Utility Functions
This module provides utility functions for evaluation.
Main features:
- Calculate metrics (F1, recall, precision) between rays and a target ray-image
Typical use cases:
- Evaluation / accuracy measurement
Dependencies:
- math
- numpy
- ips -> draw rays
Example:
```python
import img_phy_sim as ips

dataset = ips.data.PhysGenDataset(mode='test', variation="sound_reflection", input_type="osm", output_type="complex_only")

f1_mean = 0
recall_mean = 0
precision_mean = 0
counter = 0
for (input_img, target_img, idx) in dataset:
    rays = ips.ray_tracing.trace_beams(rel_position=[0.5, 0.5],
                                       img_src=input_img.squeeze(0).numpy(),
                                       directions_in_degree=ips.math.get_linear_degree_range(start=0, stop=360, step_size=36),
                                       wall_values=None,
                                       wall_thickness=0,
                                       img_border_also_collide=False,
                                       reflexion_order=3,
                                       should_scale_rays=True,
                                       should_scale_img=False)
    f1, recall, precision = ips.math.calc_metrices(rays, target_img.squeeze(0).numpy(),
                                                   eval_name=f"{len(ips.math.get_linear_degree_range(start=0, stop=360, step_size=36))} Rays",
                                                   should_print=False)
    f1_mean += f1
    recall_mean += recall
    precision_mean += precision
    counter += 1

f1_mean /= counter
recall_mean /= counter
precision_mean /= counter

print(f"Baseline Accuracy: F1={f1_mean:.2f}, Recall={recall_mean:.2f}, Precision={precision_mean:.2f}")
```
Functions:
- calc_metrices(...) - Calculate F1, recall, and precision between rays (or optionally an image) and an image.
1""" 2**Evaluation Utility Functions** 3 4This module provides some Functionalities for Evaluation. 5 6Main features: 7- Calculate Metrice between rays and a target ray-image 8 9Typical use cases: 10- Evaluation / accuracy measurement 11 12Dependencies: 13- math 14- numpy 15- ips -> draw rays 16 17Example: 18```python 19import img_phy_sim as ips 20 21dataset = ips.data.PhysGenDataset(mode='test', variation="sound_reflection", input_type="osm", output_type="complex_only") 22 23f1_mean = 0 24recall_mean = 0 25precision_mean = 0 26counter = 0 27for (input_img, target_img, idx) in dataset: 28 rays = ips.ray_tracing.trace_beams(rel_position=[0.5, 0.5], 29 img_src=input_img.squeeze(0).numpy(), 30 directions_in_degree=ips.math.get_linear_degree_range(start=0, stop=360, step_size=36), 31 wall_values=None, 32 wall_thickness=0, 33 img_border_also_collide=False, 34 reflexion_order=3, 35 should_scale_rays=True, 36 should_scale_img=False) 37 f1, recall, precision = ips.math.calc_metrices(rays, nm_gt, eval_name=f"{len(ips.math.get_linear_degree_range(start=0, stop=360, step_size=i))} Rays", should_print=False) 38 f1_mean += f1 39 recall_mean += recall 40 precision_mean += precision 41 counter += 1 42 43f1_mean /= counter 44recall_mean /= counter 45precision_mean /= counter 46 47print(f"Baseline Accuracy: F1={f1_mean:.2f}, Recall={recall_mean:.02f}, Precision={precision_mean:.02f}") 48``` 49 50Functions: 51- calc_metrices(...) - Calculate F1, Recall and Precision between rays (or optinal an image) and an image. 52""" 53 54 55 56# --------------- 57# >>> Imports <<< 58# --------------- 59import math 60import numpy as np 61 62from .ray_tracing import draw_rays 63 64 65 66# ----------------- 67# >>> Functions <<< 68# ----------------- 69 70def calc_metrices(rays, noise_modelling_gt, rays_format_is_image=False, eval_name="", should_print=True): 71 """ 72 Compute Precision, Recall, and F1 score by comparing ray-based predictions 73 against a ground truth image. 
74 75 The function converts a set of rays into a binary image representation 76 (if not already provided as an image), normalizes both prediction and 77 ground truth if necessary, and evaluates their pixel-wise overlap. 78 All non-zero pixels are treated as positives. 79 80 Parameters: 81 - rays (array-like or np.ndarray):<br> 82 Ray representation or pre-rendered ray image, depending on 83 `rays_format_is_image`. 84 - noise_modelling_gt (np.ndarray):<br> 85 Ground truth image used for evaluation. 86 - rays_format_is_image (bool):<br> 87 If True, `rays` is assumed to already be an image. 88 If False, rays are rendered into an image using `draw_rays`. 89 - eval_name (str):<br> 90 Optional identifier printed alongside the evaluation results. 91 - should_print (bool):<br> 92 If True, print the computed metrics to stdout. 93 94 Returns: 95 - tuple:<br> 96 (f1, recall, precision) computed from binary pixel overlap. 97 """ 98 # Create image from rays 99 if rays_format_is_image: 100 ray_img = rays 101 else: 102 ray_img = draw_rays(rays, detail_draw=True, 103 output_format="single_image", 104 img_background=None, ray_value=1.0, ray_thickness=1, 105 img_shape=(256, 256), dtype=float, standard_value=0, 106 should_scale_rays_to_image=True, original_max_width=None, original_max_height=None) 107 108 # Normalize both (if needed) 109 if (noise_modelling_gt > 1.0).any(): 110 # raise ValueError("Noise Modelling Ground Truth Image is not normalized.") 111 noise_modelling_gt /= 255 112 113 if (ray_img > 1.0).any(): 114 # raise ValueError("Ray Image is not normalized.") 115 ray_img /= 255 116 117 # Thresholding to binary images 118 noise_modelling_gt_binary = noise_modelling_gt != 0.0 119 # numpy_info(noise_modelling_gt_binary) 120 rays_binary = ray_img != 0.0 121 122 # Recall, Precision, F1 Score 123 overlap = noise_modelling_gt_binary * rays_binary 124 125 # recall - how is the coverage towards the gt? 
126 recall = np.sum(overlap) / np.sum(noise_modelling_gt_binary) 127 128 # precision - how many rays hit the right place? 129 precision = np.sum(overlap) / np.sum(rays_binary) 130 131 # f1 132 f1 = 2*(precision*recall) / (precision+recall) 133 134 if should_print: 135 print(f"Eval {eval_name}: F1={f1:.02f}, Recall={recall:.02f}, Precision={precision:.02f}") 136 return f1, recall, precision
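The normalization step in `calc_metrices` assumes pixel data is either already in `[0, 1]` or stored as 8-bit values. A minimal standalone sketch of that convention, independent of `img_phy_sim` (the helper name `to_unit_range` is hypothetical, not part of the library):

```python
import numpy as np

def to_unit_range(img):
    """Scale an image to [0, 1] if any value exceeds 1.0 (assumes 8-bit data)."""
    img = img.astype(float)  # copy, so the caller's array is never mutated
    if (img > 1.0).any():
        img = img / 255.0
    return img

# A tiny hypothetical uint8 ground-truth image
gt_uint8 = np.array([[0, 128, 255],
                     [0,   0, 255]], dtype=np.uint8)
gt = to_unit_range(gt_uint8)
gt_binary = gt != 0.0  # all non-zero pixels count as positives
print(gt.max(), int(gt_binary.sum()))  # → 1.0 3
```

Working on a float copy (rather than `/=`) matters here: an in-place true-divide on a uint8 array raises a casting error in NumPy, and mutating the caller's ground-truth array would silently change it between evaluations.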
`def calc_metrices(rays, noise_modelling_gt, rays_format_is_image=False, eval_name="", should_print=True)`
Compute Precision, Recall, and F1 score by comparing ray-based predictions against a ground truth image.

The function converts a set of rays into a binary image representation (if not already provided as an image), normalizes both prediction and ground truth if necessary, and evaluates their pixel-wise overlap. All non-zero pixels are treated as positives.

Parameters:
- rays (array-like or np.ndarray):
  Ray representation or pre-rendered ray image, depending on `rays_format_is_image`.
- noise_modelling_gt (np.ndarray):
  Ground truth image used for evaluation.
- rays_format_is_image (bool):
  If True, `rays` is assumed to already be an image. If False, rays are rendered into an image using `draw_rays`.
- eval_name (str):
  Optional identifier printed alongside the evaluation results.
- should_print (bool):
  If True, print the computed metrics to stdout.

Returns:
- tuple:
  (f1, recall, precision) computed from binary pixel overlap.
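The overlap metrics described above reduce to a few NumPy operations. An illustrative, self-contained sketch with made-up 4x4 binary masks (not a call into `img_phy_sim`):

```python
import numpy as np

# Hypothetical binary masks: prediction (rendered ray pixels) and ground truth
pred = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 1]], dtype=bool)
gt   = np.array([[1, 1, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]], dtype=bool)

overlap = pred & gt                     # true positives: pixels set in both masks
recall = overlap.sum() / gt.sum()       # coverage of the ground truth
precision = overlap.sum() / pred.sum()  # fraction of ray pixels that hit
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
print(f"F1={f1:.2f}, Recall={recall:.2f}, Precision={precision:.2f}")
# → F1=0.75, Recall=0.75, Precision=0.75
```

Here 3 of the 4 ground-truth pixels are covered (recall 0.75) and 3 of the 4 predicted pixels land on the ground truth (precision 0.75), so F1 is also 0.75.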