Week 1 homework


1) Description of a biological engineering application or tool that you want to develop and why:

My project idea for HTGAA 2026 is to develop and implement “Hemoneubauer with artificial intelligence”, an application that uses computer vision to quantify and classify blood cells from a direct image, with the ability to display a precision grid in real time.

It is a biological engineering tool that contributes to the correct determination of cell concentration in a sample, serving as a bridge between traditional manual counting and automated analysis. In this way, it gives students and professionals access to an intuitive platform for revalidating results, promoting confidence in and deepening the understanding of hematological analyses.
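
The concentration calculation the tool would automate follows the standard improved Neubauer chamber formula: each large square holds 0.1 µL, so the mean count per square times the dilution factor times 10⁴ gives cells/mL. A minimal sketch (the function name and arguments are illustrative, not part of the application's actual API):

```python
def neubauer_concentration(total_cells, squares_counted, dilution_factor=1.0):
    """Estimate cell concentration (cells/mL) from a Neubauer chamber count.

    Each large square of an improved Neubauer chamber holds 0.1 uL
    (1 mm x 1 mm x 0.1 mm depth), hence the 1e4 factor to convert the
    mean count per square into cells per mL.
    """
    mean_per_square = total_cells / squares_counted
    return mean_per_square * dilution_factor * 1e4

# Example: 200 cells counted over 4 large squares at a 1:10 dilution
print(neubauer_concentration(200, 4, dilution_factor=10))  # → 5000000.0 cells/mL
```

This is the arithmetic the app would apply after the vision model produces per-square counts; the human reviewer can always recompute it by hand from the same grid.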

2) Governance Objectives:

♦ Main Objective 1: Ensure Safety and Clinical Reliability.

Subgoal 1: Clearly Define and Communicate the Scope and Limitations of the Tool.

• Specification: Prevent misuse in contexts for which the application was not designed or validated, which could lead to misdiagnosis.

• Interface design and documentation that explicitly specify that the tool is a diagnostic and/or educational support device, not an absolute replacement for the judgment of a qualified professional.

• Inclusion of mandatory messages in the application, e.g., “Results must be interpreted by a health professional in the patient’s complete clinical context.”

• Technical restriction on the analysis of samples that present extreme abnormal characteristics not represented in the training data set.
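
One simple way such a restriction could be enforced is a confidence gate that refuses to report an automated result when the model is unsure, routing the sample to mandatory human review instead. A minimal sketch, with an illustrative threshold and hypothetical function name:

```python
def gate_result(class_probs, threshold=0.85):
    """Reject predictions whose maximum class probability falls below a
    threshold, flagging the sample for mandatory human review instead of
    reporting an automated count. The 0.85 threshold is illustrative;
    a deployed tool would calibrate it against validation data.
    """
    best = max(class_probs.values())
    if best < threshold:
        return {"status": "needs_human_review",
                "reason": f"low confidence ({best:.2f} < {threshold})"}
    label = max(class_probs, key=class_probs.get)
    return {"status": "ok", "label": label, "confidence": best}

# An ambiguous cell image yields no confident class, so no automated result:
print(gate_result({"lymphocyte": 0.55, "neutrophil": 0.40}))
```

A confidence gate does not fully solve out-of-distribution detection, but it is a cheap first line of defense against reporting results on samples unlike the training data.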

♦ Main Objective 2: Promote Equity in Access and Professional Autonomy, preventing social harms such as widening gaps in access to health care or the erosion of human authorship and understanding in the diagnostic process.

Subgoal 1: Design for Accessibility and Reduce Technology Gaps.

• Specification: Ensure that the tool is not a privilege of high-resource laboratories, but can be deployed in medium- and low-resource settings, contributing to more equitable medicine.

• Proposed Mechanisms: Development of a lightweight mode of operation that requires minimal computational capacity and can operate on basic hardware or even offline.

• Complete and open documentation for adapting the tool to mid-range microscopes.

Subgoal 2: Preserve and Strengthen User Autonomy, Competence and Transparency.

• Proposed Mechanisms: The precision grid must be interactive, allowing an intuitive, efficient, and fluid experience focused on the user’s needs.

• Learning Mode: Include guided exercises where the app initially hides its results, allowing the user to manually count on the grid and then compare, encouraging teaching and critical verification.

• Full Audit: All results must be traceable back to the original image and the specific grid used, allowing for full human review in case of doubt.
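
The full-audit requirement above can be made concrete by emitting, with every result, a record that ties the reported counts to a cryptographic hash of the original image and the grid overlay used. A minimal sketch (field names and the `audit_record` helper are hypothetical):

```python
import datetime
import hashlib
import json

def audit_record(image_bytes, grid_id, counts, model_version):
    """Build a traceable record linking a reported count to the exact
    source image (via SHA-256 hash) and the grid overlay used, so any
    result can later be re-examined by a human reviewer."""
    return {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "grid_id": grid_id,
        "counts": counts,
        "model_version": model_version,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

record = audit_record(b"...raw image bytes...", "neubauer-3x3", {"rbc": 412}, "v0.1.0")
print(json.dumps(record, indent=2))
```

Hashing rather than storing the image in the log keeps the audit trail lightweight while still letting a reviewer verify that a retained image is the one the count came from.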

3) Description of three possible governance actions, considering the following four aspects (Purpose, Design, Assumptions, Risks of Failure and Success):

Purpose:

✓ Promote collaborative auditing, local and technical adaptation, while preventing malicious or unethical commercial use.

✓ Integrate multidisciplinary perspectives into the project lifecycle to ensure that development, deployment and updates align with ethical principles and real clinical needs.

✓ Mitigate the risks of bias in the artificial intelligence model, ensuring that its performance is equitable and robust over time and across different populations.

Design:

✓ Permanent consultative environment composed of:

  1. Clinicians (hematologists, pathologists): To validate diagnostic standards.
  2. Bioethicists: To evaluate social and privacy implications.
  3. Systems and software engineers: To advise on the design of fair algorithms.
  4. Representatives from resource-limited environments: To ensure that the design is truly accessible.

✓ Establish a clear protocol to withdraw or correct the model if significant bias or performance degradation is detected.

✓ The computer vision model will keep the image on the professional’s or student’s device, processing it locally rather than uploading it.
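
The withdraw-or-correct protocol above needs an operational trigger. One minimal sketch is a periodic check of per-population performance that flags the deployed model when any group falls below a sensitivity floor or the gap between groups exceeds a fairness tolerance (both thresholds here are illustrative placeholders, not validated values):

```python
def check_for_withdrawal(metrics_by_group, min_sensitivity=0.90, max_gap=0.05):
    """Flag a deployed model for withdrawal when any population group's
    sensitivity drops below a floor, or when the gap between the best-
    and worst-served groups exceeds a fairness tolerance."""
    values = list(metrics_by_group.values())
    worst, best = min(values), max(values)
    if worst < min_sensitivity:
        return ("withdraw", f"sensitivity {worst:.2f} below floor {min_sensitivity}")
    if best - worst > max_gap:
        return ("withdraw", f"group gap {best - worst:.2f} exceeds {max_gap}")
    return ("keep", "within thresholds")

# Example: sensitivity measured separately on two demographic cohorts
print(check_for_withdrawal({"site_A": 0.95, "site_B": 0.93}))  # → ('keep', ...)
```

The decision to actually withdraw would remain with the consultative body; the check only surfaces the evidence that the protocol requires.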

Assumptions:

✓ The scientific and developer community will contribute to improving the code and report vulnerabilities.

✓ Members will have the authority and time to review critical aspects of the project and incorporate their recommendations.

✓ Access to sufficiently diverse and representative test data sets.

Risks of failure and “success”:

✓ The environment could become a symbolic entity without real decision-making power.

✓ Extreme anonymization could unintentionally degrade the quality of data for training. Federated learning adds technical complexity and can slow development.

✓ Malicious users could hack the code to remove development limits.

✓ Council deliberations and reviews could significantly slow down the development cycle and response to urgent technical improvements, adding bureaucracy.

✓ Bugs or minor failures that could affect user confidence in the tool.

✓ Establishing such a high privacy standard could become an entry barrier for collaborators with fewer technological resources.

4) Rating of each of the governance actions according to the policy objectives rubric:

| Does the option: | Option 1 | Option 2 | Option 3 |
| --- | --- | --- | --- |
| Enhance biosecurity | | | |
| • By preventing incidents | 3 | 2 | 1 |
| • By helping respond | 3 | 1 | 2 |
| Foster lab safety | | | |
| • By preventing incidents | 3 | 2 | 1 |
| • By helping respond | 3 | 1 | 2 |
| Protect the environment | | | |
| • By preventing incidents | n/a | n/a | n/a |
| • By helping respond | n/a | n/a | n/a |
| Other considerations | | | |
| • Minimizing costs and burdens to stakeholders | 1 | 2 | 3 |
| • Feasibility | 1 | 2 | 3 |
| • Not impeding research | 1 | 2 | 3 |
| • Promoting constructive applications | 3 | 1 | 2 |

5) Description of which governance option, or combination of options, you would prioritize and why. Describe the trade-offs you considered, as well as the assumptions and uncertainties:

I would prioritize the combination of embedded explainability and update protocols, with privacy by design as the fundamental basis.

Why?

• They address the main risk: errors in the artificial intelligence algorithm going undetected and leading to incorrect results.

• They prevent defective versions of the software from being distributed.

• They allow any errors that occur to be investigated and understood, creating a continuous and transparent improvement cycle.

✓ Trade-offs:

• A slower update cycle to ensure that each new version is secure.

• The risk that users ignore the warnings and explanations that the tool provides.

✓ Assumptions:

• End users will value and demand these transparency and security features, justifying the development effort.

• The availability of diverse and representative data sets to train the models.

✓ Uncertainties:

• Can the tool provide explanations clear and useful enough for a student or professional, or will they create more confusion?

• Might automated validation alone prove insufficient, causing updates to be blocked?