Homework

Weekly homework submissions:

  • Week 1 HW: Principles and Practices

  • Week 2 HW: DNA, Read, and Write

Subsections of Homework

Week 1 HW: Principles and Practices

Cover image: SlimeMould_teaser.jpg

Question 1:

I propose to develop a Living Urban Decision Interface (LUDI): a biohybrid computational device using slime moulds as a spatial decision-making substrate for ecological design. The organism’s network-forming behaviour and electrical oscillations will be interfaced with environmental and human inputs (light, nutrients, moisture proxies) to compute spatial layouts for green corridors, gardens and urban green infrastructure. Unlike simulation models, this system allows a living organism to actively participate in planning decisions, translating ecological processes into design recommendations. Because the slime mould’s network optimisation behaviour arises from identifiable molecular signalling and transport pathways that have already been experimentally used in laboratory studies, this project also establishes a future synthetic-biology direction in which genetically tuned physiological responses could allow the organism to compute specific environmental variables (e.g., pollutants, soil conditions, or water stress). The project aims to prototype a biological planning instrument that mediates between community intention and environmental constraints, opening a new form of participatory urban design in which living systems become co-designers of cities.

Question 2:

Governance and Policy Goals

Overall goal: Ensure that a Living Urban Decision Interface (a biohybrid computing system using Physarum polycephalum) supports ecological stewardship and public participation without introducing biological harm, technocratic authority, or misleading representations of ecological knowledge.

Goal 1: Biological Safety and Environmental Containment (Non-maleficence: prevent harm to ecosystems and people). Even though Physarum polycephalum is non-pathogenic and widely present in soil, a civic-facing biohybrid device requires clear safety governance so it is not treated casually or released improperly.

Sub-goal 1.1: Containment protocols

  • Maintain the organism in closed, recoverable substrates (agar plates or contained bioreactors)
  • Prohibit intentional outdoor release during demonstrations
  • Establish a documented deactivation method (drying, freezing, or ethanol sterilisation)
  • Provide handling instructions to participants

Sub-goal 1.2: Transparency of organism status

  • Clearly communicate that the organism is alive
  • Label installations as biological systems
  • Require informed interaction (no hidden devices)
Participants should not unknowingly interact with living biological materials.

Goal 2: Epistemic Responsibility (Preventing “Bio-Authority” and the misuse of biological outputs as unquestionable truth)

The largest risk of this project is not biological; it is decision authority. People may interpret the slime mould as “nature deciding”, and planners could misuse it to justify policies.

Sub-goal 2.1: Non-deterministic interpretation

  • Present outputs as recommendations, not decisions
  • Display uncertainty (multiple possible paths, not a single answer)
  • Require human deliberation alongside organism output

Policy principle: The organism informs, it does not govern.

Sub-goal 2.2: Anti-technocratic safeguards

  • Prevent institutions from using the system to legitimise exclusionary planning
  • Document inputs used in each computation
  • Make datasets and conditions publicly visible

Otherwise a city could say “the biological system determined this neighbourhood should not be greened.” The risk here is not the slime mould but authority laundering through nature.

Goal 3: Participatory Equity and Access (Promoting constructive uses rather than extractive ones). If the tool only exists in universities or smart-city labs, it becomes another technology used on communities rather than with them.

Sub-goal 3.1: Community interpretability

  • Use legible visual outputs (maps, paths, growth patterns)
  • Allow participants to manipulate inputs (light, nutrients, etc.)
  • Provide educational explanation of how the organism computes

Goal: People should understand the system well enough to disagree with it.

Sub-goal 3.2: Open civic access

  • Publish protocols openly
  • Use low-cost hardware where possible
  • Enable community gardens and schools to run their own devices

This shifts it from: biotechnology product → civic ecological instrument.

Goal 4: Preventing Anthropomorphic or Extractive Framing (Respecting living systems as collaborators rather than tools). The project risks treating life as a novelty interface.

Sub-goal 4.1: Welfare considerations

  • Avoid life cycles solely for demonstration
  • Maintain proper humidity and feeding
  • Limit unnecessary repeated stress stimuli
Even non-sentient organisms deserve stewardship in educational contexts.

Sub-goal 4.2: Representational honesty

  • Avoid presenting the organism as “wanting” specific urban outcomes
  • Frame it as a biological process responding to constraints
  • Avoid claiming ecological knowledge beyond what the experiment measures

This prevents ecological romanticism from becoming misinformation.

Goal 5: Ecological Data Rights and Bio-Cybersecurity (Protecting the integrity and ownership of biological data generated by living systems)

The Living Urban Decision Interface converts biological processes (oscillations, growth patterns, and spatial choices of Physarum polycephalum) into digital information used for civic decision-making. This creates a new category of data: ecological behavioural data produced by a living organism rather than a human or a conventional sensor. Governance frameworks should therefore prevent appropriation, manipulation, or enclosure of these biological signals.

Sub-goal 5.1: Biological signal integrity (bio-cybersecurity)

  • Protect recorded organism signals from alteration or algorithmic manipulation
  • Document all translation steps from electrode signal → software → map
  • Maintain open logs of processing methods
  • Prevent “tuning” outputs to support predetermined planning outcomes, and prevent scientific data tampering (i.e., once digitised, a city, company, or platform could quietly modify outputs and still claim “the living system recommended this”)

Sub-goal 5.2: Ecological data stewardship (non-extractive use)

  • Treat organism-generated data as a commons rather than proprietary data
  • Prohibit exclusive ownership or patenting of specific behavioural outputs
  • Require public accessibility of datasets produced in civic contexts
The organism’s behaviour should not become a privately enclosed planning resource.

Sub-goal 5.3: Proto-rights of living computational agents

The project adopts a precautionary “nature-rights” stance in which biological participants are considered contributors rather than passive instruments.

Operational implications:

  • The organism cannot be represented as endorsing political or commercial claims
  • Its outputs cannot be used in advertising or greenwashing
  • Its participation must be disclosed in all decision contexts

This works with the notion of legal personhood and establishes a data dignity principle for non-human contributors.

The project includes governance goals to ensure the system contributes to an ethical future. First, biological safety will be addressed through containment protocols, deactivation procedures, and transparent labelling of the organism as a living material. Second, epistemic safeguards will prevent the biohybrid system from being used as an authoritative decision-maker: outputs will be presented as recommendations with documented inputs and uncertainty rather than determinations. Third, the project promotes participatory equity by making protocols open, legible, and community-operable so the tool cannot be restricted to institutional planning contexts. Fourth, the organism will be treated as a living collaborator rather than a novelty interface, with care standards and representational honesty about what the system can and cannot infer about ecological conditions.

An additional governance goal concerns ecological data rights and bio-cybersecurity. Because the device translates the behaviour of a living organism into digital planning information, it produces a novel category of data: ecological behavioural data generated by a non-human participant. The project therefore treats organism-generated signals as a shared ecological commons rather than proprietary information. Processing steps from biological signal to spatial recommendation will be documented and transparent to prevent manipulation or institutional misuse. This establishes a form of “data dignity” for non-human contributors and supports broader principles of nature-rights within civic technology systems.

Question 3:

Governance Action 1: Transparency & Disclosure Requirements

A. Purpose

What happens now:

Cities and institutions increasingly use algorithmic decision tools (AI planning models, environmental sensors, “smart city” platforms) without clearly disclosing how recommendations are produced. Biohybrid systems introduce an additional layer: a living organism whose behaviour is translated into recommendations.

Proposed change:

Require public disclosure whenever a biological computing or biohybrid system contributes to planning, environmental assessment, or civic decisions.

This is similar to:

  • AI transparency policies
  • environmental impact assessments
  • food labeling

The goal is not to restrict research, but to prevent authority laundering through biology.

B. Design

Actors involved:

  • municipal governments / planning departments
  • universities deploying installations
  • community organisations hosting devices

Implementation could include:

  • a “biohybrid disclosure notice” (like building permits)
  • documentation of inputs used (light gradients, nutrients, constraints)
  • explanation of interpretation limits

A simple standard:

Any civic recommendation derived from a living organism must include a description of how the organism’s behaviour was translated into a decision.

C. Assumptions

  • People may over-trust biological outputs.
  • Institutions might present outputs as objective evidence.
  • Civic systems lack frameworks for non-digital computation.
  • Communities may only understand it as experimental
  • The system might never be used in formal planning contexts

D. Risks of Failure & “Success”

  • Institutions ignore or weakly apply disclosure → system becomes symbolic justification for predetermined policies.
  • Safety rules may unintentionally create technological gatekeeping.

If the requirement is too bureaucratic, it may:

  • discourage community groups from using the tool
  • centralise it in universities or experimental-only contexts

Governance Action 2: Public Ecological Data Commons Licensing

Purpose

What happens now:

Environmental sensing data (air quality sensors, satellite imagery, soil sensors) is often:

  • privately owned
  • platform-controlled
  • monetised

This project produces a further step: biological behavioural data → digitised signals → planning recommendations.

Proposed change:

Treat organism-generated data as a public ecological commons rather than proprietary platform data.

Similar examples:

  • open-source software licenses
  • Creative Commons
  • open meteorological data

Design

Actors:

  • academic labs
  • civic tech organisations
  • municipalities
  • funders
  • citizens

Implementation:

Projects receiving public funding must:

  • publish raw signals
  • publish interpretation method
  • allow reuse by communities

Key rule:

No exclusive ownership of behavioural outputs derived from the organism in civic contexts. This prevents a company from building proprietary nature intelligence planning software.

Assumptions

  • Companies will want to commercialise ecological computation
  • Open access improves democratic participation
  • Maintaining open datasets requires resources
  • Communities may not have capacity to use the data

Risks of Failure & “Success”

Data becomes open but unusable → only experts benefit.

If widely adopted, developers might:

  • mass-deploy biological computing
  • treat organisms as scalable infrastructure

You could accidentally create a bio-smart-city industry that farms living systems as computation utilities.

Governance Action 3: Built-in Technical Safeguards (Design Governance)

Purpose

What happens now:

Most governance relies on rules after technology is deployed. But biohybrid systems allow something different: you can embed governance inside the device itself.

Proposed change:

Incorporate interpretability and uncertainty directly into the interface so the system cannot produce a single authoritative answer. This is a technical safeguard rather than a rule.

Design

Actors:

  • you (researcher/designer)
  • open-source hardware developers
  • academic labs

Implementation features:

  • multiple possible outputs shown simultaneously
  • visible organism state
  • display of input conditions
  • no “optimal solution” output
  • interactive participation

Question 4:

Does the option: (Option 1 / Option 2 / Option 3)

Enhance biosecurity
  • By preventing incidents: 2
  • By helping respond: 1

Foster lab safety
  • By preventing incidents: 1
  • By helping respond: 2

Protect the environment
  • By preventing incidents: 1
  • By helping respond: 1

Other considerations
  • Minimizing costs and burdens to stakeholders: 1
  • Feasibility: 2
  • Does not impede research: 1
  • Promotes constructive applications: 1

Question 5:

Prioritized Governance Approach:

Drawing on the scoring matrix and governance goals, I would prioritize transparency and disclosure requirements as the primary governance mechanism, supported by ecological data commons and stewardship licensing as a complementary policy. The scoring indicates that the most immediate risks arise from interpretation, accountability, and ownership rather than from direct biological hazards.

The scores show that this aligns best across almost every category: it supports biosecurity response, lab safety, environmental protection, minimizes burdens to stakeholders, does not impede research, and promotes constructive applications. This pattern suggests that the central governance challenge of a biohybrid system like the Living Urban Decision Interface is not controlling the organism itself, but governing how its outputs are interpreted and used in decision-making contexts.

The project introduces a new kind of system: a living organism whose behavior is translated into planning recommendations. Without disclosure, a municipality, institution, or organization could present outputs as objective ecological evidence rather than as the result of a mediated experimental process. The scoring therefore points to a governance priority focused on preventing authority laundering: situations in which a biological system is used to legitimise decisions that were effectively human choices.

A disclosure requirement would ensure that whenever a biohybrid computational system contributes to civic or environmental planning, the conditions of the experiment, the inputs provided, and the interpretive limitations are publicly documented. This improves both prevention and response: it allows communities to scrutinize how results were produced, and it provides accountability if the system is misapplied.

However, transparency alone does not address longer-term risks of enclosure and power concentration. This is where the second governance option becomes important. Although it scored more modestly in immediate safety categories, it addresses a structural issue: once biological signals are digitised, they can become proprietary datasets. Without stewardship rules, a private entity could accumulate organism-generated environmental intelligence and deploy it as a planning platform or optimisation service, turning ecological participation into a privately controlled resource.

Therefore I recommend combining these two mechanisms: transparency governs interpretation, while data stewardship governs ownership and access. Together they ensure that the organism functions as a civic ecological participant rather than as either a gimmick or a proprietary decision engine.

Intended Audience

I would direct this recommendation primarily to municipal governments, public research institutions, and citizens, such as a city planning office working with a university lab and public organising groups.

This level of governance is appropriate because the technology is likely to appear first in:

  • public demonstrations
  • participatory planning workshops
  • civic environmental pilot projects
  • community science programs

National regulation would likely be premature and overly restrictive, while purely voluntary norms would be insufficient once planning decisions begin referencing biological outputs. Municipal–institutional partnerships are therefore the most realistic and proportionate governance layer.

Implementation could take the form of a simple “biohybrid decision disclosure” requirement for publicly hosted projects, combined with grant conditions requiring open ecological data stewardship for publicly funded research.

Trade-offs Considered

The primary trade-off is between accountability and accessibility. A strong regulatory framework could improve safety but would likely prevent community organisations, schools, and small civic groups from using the system. Because the organism itself is not hazardous, heavy regulation would produce more harm than benefit by centralising experimentation in large institutions.

Goal 1 was prioritised partly because it minimizes burden while still providing accountability. It creates oversight without requiring licensing or specialized certification.

Goal 2, which focuses on open ecological data, reduces enclosure but may reduce incentives for private development and requires ongoing maintenance of shared datasets. There is also a risk that open data could be misinterpreted or reused out of context. However, the alternative of proprietary ownership of organism-generated ecological knowledge creates a larger long-term governance concern, particularly for urban planning equity.

Goal 3 (technical safeguards) presents another trade-off: embedding uncertainty into the interface may reduce misuse, but if the outputs appear too ambiguous, planners may disregard ecological input entirely. Because its effects are difficult to evaluate, I would treat this as a research and design practice rather than a primary policy mechanism at this stage.

Assumptions and Uncertainties

First, I assume the primary risk pathway is social and institutional misuse, not biological hazard. If future engineered strains were capable of environmental persistence or sensing harmful chemicals, biosafety regulation would need to be strengthened.

Second, I assume municipalities and research institutions will be early adopters. If private smart-city vendors deploy similar systems first, stronger national or procurement-level governance may be required.

Third, there is uncertainty about how people interpret biological outputs. Communities may either over-trust (“nature decided this”) or under-trust (“this is just art”). The governance framework attempts to preserve a middle ground: the organism contributes information but does not make decisions.

Finally, I assume ecological data can function as a commons. In practice, maintaining accessibility, interpretability, and long-term stewardship will require institutional commitment that may not yet exist.

Conclusion

Based on the scoring, the most effective governance strategy is not to tightly regulate the organism but to regulate the relationship between biological computation and civic authority. A combination of transparency requirements and ecological data stewardship best addresses the real risks (misinterpretation, enclosure, and inequitable decision-making) while preserving the benefits of experimentation, public participation, and research innovation.

Reflections: Ethical Concerns and Proposed Governance

During this week I began thinking less about whether the slime mould works as a computational system and more about what it means to introduce a living organism into civic decision-making and data infrastructures. Several ethical concerns emerged that were new to me.

1. Municipal misuse and “authority laundering”

One concern is how municipalities or planning institutions might use outputs from the slime mould. Because the organism is framed as representing ecological processes, its recommendations could be presented as objective or natural decisions rather than as interpretations of an experimental setup. A city could potentially justify a planning outcome (for example, where green infrastructure is or is not placed) by claiming that “the living system determined this,” even though the experiment is shaped by human-selected inputs and constraints.

This raised a new ethical issue for me: the risk is not only biological harm but epistemic harm, where decisions gain legitimacy through an appeal to nature.

Possible governance actions

  • Require public disclosure of experimental conditions and inputs whenever biohybrid systems are used in civic planning.
  • Require that outputs be presented as recommendations rather than determinations.
  • Include community interpretation sessions alongside demonstrations so results are collectively interpreted, not administratively imposed.

2. Ecological data rights and ownership

Another concern is ownership of the data produced. The slime mould produces electrical and spatial behaviour that becomes digitised and mapped. Currently, whoever builds the interface typically owns the data. However, this data originates from a living organism interacting with environmental conditions and public space. If captured by a company or platform, it could become a proprietary ecological optimisation system.

This led me to think about a new category of information: biological behavioural data generated by non-human participants. The ethical issue is about preventing enclosure of ecological knowledge.

Possible governance actions

  • Treat organism-generated datasets as an environmental commons rather than proprietary data.
  • Publish raw signals and interpretation methods for publicly funded projects.
  • Prohibit exclusive commercial claims that the organism “endorses” planning or environmental decisions.

3. Species instrumentalisation and “slime mould farming”

A concern I had not anticipated was the possibility of scaling. If slime mould computing becomes useful, institutions or companies could culture large quantities as a biological processing substrate, effectively “farming” the organism as infrastructure. Even though the organism is not sentient, mass cultivation purely for computational exploitation raises questions about how we relate to living systems. This shifts the organism from collaborator to extractive resource.

Possible governance actions

  • Establish basic organism care standards for public or educational installations (humidity, feeding, and recovery periods).
  • Limit repeated stress stimuli (light shocks, starvation cycles) used only for demonstration.
  • Encourage small-scale cultivation and prohibit industrial-scale deployments without ethical review.

4. Misinterpretation and anthropomorphism

Another issue that emerged in class discussions is that people quickly attribute intention to the slime mould (“it chose this,” “it prefers that neighborhood”). This creates a double risk: over-trusting the organism while misunderstanding what it actually measures. The slime mould does not understand cities; it responds to gradients. If misunderstood, the system could misinform ecological planning rather than enrich it.

This was new to me because the ethical challenge is not accuracy alone but representation.

Possible governance actions

  • Include interpretive explanations alongside all outputs describing what variables the organism actually responds to.
  • Display uncertainty and multiple possible outcomes rather than a single optimal plan.
  • Require facilitators or researchers to explain the mediation layer between biological behaviour and planning recommendations.

Reflection

The main shift for me this week was realising that the ethical questions are not primarily about biosafety. Slime moulds are relatively harmless biologically. Instead, the ethical risks arise from how biological processes intersect with governance, data, and authority. A living computational system changes who or what is considered a participant in decision-making, and governance therefore needs to address interpretation, ownership, and responsibility, not only containment.

Rather than restricting experimentation, the appropriate response seems to be lightweight but clear governance: transparency, shared stewardship of ecological data, and careful framing of the organism as a contributor to discussion rather than a decision-maker.

Week 2 HW: DNA, Read, and Write

Part 1

Part 3

3.1. My chosen protein: Green Fluorescent Protein (GFP)

I chose Green Fluorescent Protein (GFP) originally discovered in the jellyfish Aequorea victoria. I chose this protein because:

  • It is one of the most important tools in modern biotechnology.
  • It glows bright green under blue/UV light.
  • Scientists use it as a reporter protein to see when genes are turned on inside living cells.
  • It lets researchers literally watch biology happen.

I retrieved the sequence using UniProt (a protein database).

Example FASTA-style entry:

>sp|P42212|GFP_AEQVI Green fluorescent protein OS=Aequorea victoria
MSKGEELFTGVVPILVELDGDVNGHKFSVSGEGEGDATYGKLTLKFICTTGKLPVPWPTLVTTFGYGLQCFARYPDH
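
An entry like this can be split into header and sequence with a few lines of Python. This is a minimal sketch (a real workflow would typically use Biopython's `SeqIO`); the sequence shown here is only the first part of GFP.

```python
# Minimal FASTA parsing: first line is the header, remaining lines are sequence.
def parse_fasta(text):
    lines = [line.strip() for line in text.strip().splitlines()]
    header = lines[0].lstrip(">")      # description line without the ">"
    sequence = "".join(lines[1:])      # sequence may span several lines
    return header, sequence

entry = """>sp|P42212|GFP_AEQVI Green fluorescent protein OS=Aequorea victoria
MSKGEELFTGVVPILVELDGDVNGHKFSVSGEGEGDATYGKLTLKFICTTGKLPVPWPTLVTTFGYGLQCFARYPDH"""

header, seq = parse_fasta(entry)
print(header)    # sp|P42212|GFP_AEQVI Green fluorescent protein OS=Aequorea victoria
print(len(seq))  # 77 residues in this partial entry
```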

3.2 Reverse Translate:

ATGAGTAAAGGAGAAGAACTTTTCACTGGAGTTGTCCCAATTCTGGTTGAACTGGACGGCGATGTTAACGGCCACAAATTCAGTGTCTCAGGAGAA

This DNA sequence would produce the beginning of the GFP protein.
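
One way to sanity-check a reverse translation is to translate the DNA forward again and confirm the original amino acids come back. A minimal sketch, where the lookup table includes only the codons that actually appear in this sequence:

```python
# Forward-translate the reverse-translated DNA to confirm it encodes
# the start of GFP. Partial genetic code: only codons used here are listed.
CODON_TABLE = {
    "ATG": "M", "AGT": "S", "AAA": "K", "GGA": "G", "GAA": "E",
    "CTT": "L", "TTC": "F", "ACT": "T", "GTT": "V", "GTC": "V",
    "CCA": "P", "ATT": "I", "CTG": "L", "GAC": "D", "GGC": "G",
    "GAT": "D", "AAC": "N", "CAC": "H", "TCA": "S",
}

def translate(dna):
    # Read the sequence three bases at a time, dropping any trailing partial codon.
    codons = [dna[i:i + 3] for i in range(0, len(dna) - len(dna) % 3, 3)]
    return "".join(CODON_TABLE[c] for c in codons)

dna = ("ATGAGTAAAGGAGAAGAACTTTTCACTGGAGTTGTCCCAATTCTGGTTGAACTG"
       "GACGGCGATGTTAACGGCCACAAATTCAGTGTCTCAGGAGAA")
print(translate(dna))  # MSKGEELFTGVVPILVELDGDVNGHKFSVSGE
```

The output matches the first 32 residues of the GFP sequence above, confirming the reverse translation is consistent.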

3.3 Codon Optimisation

Codon optimisation is important in synthetic biology because different organisms prefer different codons even when they code for the same amino acid. This is called codon bias.

For example, both of the following codons code for glycine:

  • GGT → commonly used in bacteria
  • GGG → rarely used in bacteria
Codon optimization rewrites a gene using codons preferred by the host organism so its ribosomes can efficiently translate the mRNA, leading to faster translation and higher protein yield.
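
The rewriting step itself is a per-codon table lookup: decode each codon to its amino acid, then re-emit the codon the host prefers. The sketch below is a toy version; the "preferred" codons are illustrative assumptions, not measured E. coli frequencies.

```python
# Toy codon optimiser. PREFERRED maps each amino acid to one codon
# assumed to be favoured in E. coli (illustrative values only).
PREFERRED = {"G": "GGT", "E": "GAA", "K": "AAA", "F": "TTT", "M": "ATG"}

# Partial decoding table covering the codons used in this example.
TO_AA = {"GGT": "G", "GGG": "G", "GGA": "G", "GGC": "G",
         "GAA": "E", "GAG": "E", "AAA": "K", "AAG": "K",
         "TTT": "F", "TTC": "F", "ATG": "M"}

def optimise(dna):
    out = []
    for i in range(0, len(dna), 3):
        aa = TO_AA[dna[i:i + 3]]      # what the codon encodes
        out.append(PREFERRED[aa])     # swap in the host-preferred codon
    return "".join(out)

# GGG (rare glycine codon in bacteria) becomes GGT (a common one);
# the encoded protein is unchanged.
print(optimise("ATGGGGGAA"))  # ATGGGTGAA
```

Real tools choose among several synonymous codons weighted by host usage tables and also avoid unwanted secondary structure and restriction sites; this sketch only shows the core substitution idea.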

Organism chosen:

I optimised the gene for Escherichia coli (E. coli) because:

  • It is the most common laboratory host
  • Cheap and fast growing
  • Standard organism for protein production
  • Used in insulin production

After optimization, the DNA sequence changes but the amino acid sequence stays identical.

Because its genetics are well understood and it is widely used for producing proteins such as insulin and research enzymes, optimising the codons for E. coli increases the likelihood that the protein will be successfully expressed at high levels.

3.4 Next Steps After Sequence:

After obtaining and codon-optimizing the DNA sequence, the protein can be produced using recombinant protein expression technologies. The DNA does not automatically become a protein — it must first be inserted into a system that contains the cellular machinery needed for transcription and translation.

Two major approaches can be used: cell-dependent expression and cell-free expression.

Cell-dependent protein production (inside living cells)

The most common method is to express the gene inside bacteria such as E. coli using recombinant DNA technology.

Step 1: Gene synthesis and cloning

The optimized DNA sequence is chemically synthesized and inserted into a circular DNA vector called a plasmid.

The plasmid contains:

  • a promoter (turns the gene on)
  • a ribosome binding site
  • the protein coding sequence
  • a terminator
  • an antibiotic resistance marker (to select cells that received the plasmid)

This process is called molecular cloning.

Step 2: Transformation

The plasmid is introduced into bacterial cells in a process called transformation (heat shock or electroporation). Some bacteria take up the plasmid and now carry the new gene.

Step 3: Transcription

Inside the bacteria, RNA polymerase binds to the promoter and copies the DNA sequence into messenger RNA (mRNA).

DNA:

ATG GAA TTT

mRNA:

AUG GAA UUU

(Thymine (T) is replaced with Uracil (U))
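
In code terms, this step is a single substitution, since the mRNA mirrors the coding strand with uracil in place of thymine (a simplification: RNA polymerase actually reads the template strand):

```python
def transcribe(coding_strand):
    # Simplified view of transcription: the mRNA has the same bases
    # as the coding strand, with U substituted for T.
    return coding_strand.replace("T", "U")

print(transcribe("ATGGAATTT"))  # AUGGAAUUU
```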

Step 4: Translation

A ribosome attaches to the mRNA and reads it in 3-nucleotide codons. Transfer RNAs (tRNAs) bring amino acids that match each codon. The ribosome links the amino acids together into a growing polypeptide chain.

Example:

  • AUG → Methionine (start)
  • GAA → Glutamate
  • UUU → Phenylalanine

The chain continues to elongate until a stop codon is reached.
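
The codon-by-codon reading described above can be sketched as a loop that stops when a stop codon is reached. Only the codons used in this example are included; the full genetic code has 64 entries.

```python
# Minimal translation loop: read mRNA three bases at a time until a stop codon.
CODONS = {"AUG": "Met", "GAA": "Glu", "UUU": "Phe",
          "UAA": None, "UAG": None, "UGA": None}  # None marks a stop codon

def translate_mrna(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODONS[mrna[i:i + 3]]
        if aa is None:            # stop codon: release the chain
            break
        peptide.append(aa)
    return "-".join(peptide)

print(translate_mrna("AUGGAAUUUUAA"))  # Met-Glu-Phe
```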

Step 5: Folding and protein formation

After translation, the amino acid chain folds into its 3-dimensional structure. Once folded correctly, it becomes a functional protein. The bacteria now produce large quantities of the protein, which can be purified.

Cell-free protein production (in a test tube)

Instead of living cells, the protein can also be produced using a cell-free expression system. In this method, cellular machinery extracted from cells (ribosomes, enzymes, tRNAs, amino acids, and nucleotides) is mixed in a reaction tube.

When the synthesized DNA is added:

  1. RNA polymerase transcribes DNA → mRNA
  2. Ribosomes translate mRNA → protein

This system essentially recreates the central dogma outside a living organism.

Advantages:

  • faster (hours instead of overnight growth)
  • no need to maintain living genetically modified organisms
  • useful for rapidly prototyping proteins

5.1 DNA Read

(i) What DNA would you want to sequence (e.g., read) and why? This could be DNA related to human health (e.g. genes related to disease research), environmental monitoring (e.g., sewage waste water, biodiversity analysis), and beyond (e.g. DNA data storage, biobank).

I would sequence DNA from the slime mould Physarum polycephalum, specifically genes involved in its oscillatory signalling, environmental sensing, and network formation behaviour. Rather than sequencing the entire genome, I would focus on candidate gene families related to calcium signalling, cytoskeletal contraction, and chemo-sensing receptors, which are believed to underlie the organism’s ability to dynamically reorganise its network in response to environmental conditions.

The motivation is connected to my project: I am developing a biohybrid interface in which the organism participates in spatial decision-making for ecological design. Currently the system treats the slime mould as a black box: it grows, moves, and produces electrical oscillations that are interpreted externally. Sequencing specific regions of its DNA would help identify the biological mechanisms that allow the organism to detect gradients such as moisture, nutrients, and chemical repellents.

Understanding these genes would allow me to distinguish between:

  • behaviours that reflect environmental sensing
  • behaviours that are internal metabolic rhythms

This matters because the device relies on interpreting the organism’s behaviour as ecological information. If I can identify the genetic pathways associated with sensing versus internal physiological cycles, I can better calibrate which outputs meaningfully correspond to environmental variables (such as soil quality or resource distribution) and which are unrelated background activity.

More broadly, the sequencing would support a longer-term goal: developing a biological chassis for ecological sensing. Rather than engineering the organism immediately, the first step is reading and mapping the genetic basis of its distributed computation. This would establish whether the organism could, in the future, be tuned to respond more specifically to environmental variables (for example pollutants, salinity, or nutrient availability) and therefore act as a living environmental sensor and participatory planning interface.

In this way, the DNA sequencing is not only basic biological curiosity. It is a foundational step toward understanding how a non-neural organism performs spatial computation and whether its sensing behaviour can be responsibly interpreted within civic ecological design systems.

(ii) In lecture, a variety of sequencing technologies were mentioned. What technology or technologies would you use to perform sequencing on your DNA and why? Also answer the following questions: Is your method first-, second- or third-generation or other? How so? What is your input? How do you prepare your input (e.g. fragmentation, adapter ligation, PCR)? List the essential steps. What are the essential steps of your chosen sequencing technology, how does it decode the bases of your DNA sample (base calling)? What is the output of your chosen sequencing technology?

Hybrid sequencing strategy: Nanopore (long reads) + Illumina (accurate short reads).

I would use a hybrid sequencing approach combining Oxford Nanopore long-read sequencing and Illumina sequencing-by-synthesis.

The reason is that Physarum polycephalum has a large and repetitive genome and complex gene regulation. Long-read sequencing allows reconstruction of full genes and regulatory regions, while short-read sequencing provides higher base accuracy. Using both allows discovery of candidate sensing and oscillatory signaling genes that may underlie the organism’s network-forming computation.

Nanopore sequencing helps assemble the genome structure, while Illumina sequencing helps confirm base accuracy and detect smaller mutations or gene variants.

Is your method first-, second-, or third-generation?

  • Illumina sequencing-by-synthesis → Second-generation sequencing
    • massively parallel, short accurate reads
  • Oxford Nanopore sequencing → Third-generation sequencing
    • single-molecule, long reads without amplification

Second-generation sequencing reads many short fragments simultaneously, while third-generation sequencing reads individual DNA molecules directly as they pass through a pore.

What is your input and how do you prepare it?

Input

Genomic DNA extracted from cultured slime mould plasmodium.

Because Physarum is multinucleate, the sample contains many nuclei within one cell mass, so I would isolate and purify high-molecular-weight DNA to preserve long fragments.

Preparation steps (library preparation)

1. Cell lysis

  • Break open plasmodium cells
  • Release nuclei and DNA

2. DNA purification

  • Remove proteins, lipids, and RNA
  • Keep long DNA strands intact

3. Fragmentation

  • For Illumina: shear DNA into short fragments (~200–500 bp)
  • For Nanopore: keep DNA long (no fragmentation or minimal shearing)

4. Adapter ligation

  • Attach synthetic DNA adapters to fragment ends
  • These allow the sequencing machine to recognize and bind the DNA

5. (Illumina only) PCR amplification

  • Copy fragments to create enough signal for imaging

6. Load library onto sequencer

Illumina requires amplified libraries attached to a surface, while nanopore sequencing reads native single molecules directly.

Essential sequencing steps & base-calling

A. Illumina (Sequencing-by-Synthesis)

How it works:

  1. DNA fragments bind to a flow cell
  2. Fragments form clusters by bridge amplification
  3. Fluorescently labeled nucleotides are added one at a time
  4. A camera records the color signal after each cycle
  5. The color identifies A, T, C, or G

Each nucleotide has a different fluorescent dye, and the sequence is determined by tracking colour changes during repeated cycles.

Base calling:

Colour detected → nucleotide identity.

B. Nanopore Sequencing

How it works:

  1. A motor protein feeds single-stranded DNA through a nanopore protein embedded in a membrane
  2. An electric current flows through the pore
  3. Each base changes the current in a characteristic way
  4. Software interprets the signal

As DNA passes through the pore, each nucleotide causes a distinct disruption in ionic current, which is converted into sequence information.

Base calling:

Electrical signal pattern → nucleotide identity.

What is the output?

Illumina output

  • Millions of short reads (100–300 base pairs)
  • Very accurate
  • FASTQ files containing:
    • sequence
    • quality score per base

Nanopore output

  • Long reads (10,000–1,000,000+ base pairs)
  • Lower accuracy but reveals gene structure
  • FASTQ files containing sequences base-called from the raw electrical signal
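Both platforms ultimately deliver FASTQ: a plain-text format with four lines per read, where each quality character encodes a per-base Phred score. A minimal parser sketch (assumes the common Phred+33 encoding and single-line sequences, which holds for typical Illumina and Nanopore FASTQ output):

```python
def parse_fastq(text: str):
    """Yield (read_id, sequence, per-base Phred scores) from FASTQ text."""
    lines = text.strip().splitlines()
    for i in range(0, len(lines), 4):
        read_id = lines[i][1:]                        # drop the leading '@'
        seq = lines[i + 1]
        quals = [ord(c) - 33 for c in lines[i + 3]]   # Phred+33 encoding
        yield read_id, seq, quals

record = "@read1\nACGT\n+\nIIII"
for rid, seq, quals in parse_fastq(record):
    print(rid, seq, quals)  # -> read1 ACGT [40, 40, 40, 40]
```

A Phred score of 40 corresponds to an estimated error probability of 10^-4, which is why Illumina reads are described as "very accurate".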

Why this matters for my slime-mould project

The goal is to understand how biological computation happens.

Nanopore reads would reveal:

  • gene clusters
  • regulatory regions
  • large signaling genes

Illumina reads would confirm:

  • mutations
  • receptor proteins
  • ion channel genes

Together, the two read types support tracing the chain: genetic pathways → cellular oscillations → spatial decision-making behaviour.

This directly supports my biohybrid living computation interface.

5.2 DNA Write

(i) What DNA would you want to synthesize and why?

I would design and synthesize a reporter gene circuit that makes cellular signaling activity in Physarum polycephalum visible and measurable. Specifically, I would synthesize a construct that couples an activity-responsive promoter to a fluorescent or luminescent reporter gene.

The goal is not to immediately reprogram the slime mould, but to make its internal computational state observable. In my project the organism functions as a spatial decision-making system, but currently it behaves as a black box: we only see network growth or electrical oscillations after they occur. A genetic reporter would allow the organism to directly communicate its internal signaling state.

The specific biological process I want to observe is the calcium-dependent oscillatory signaling that drives cytoplasmic streaming and network reorganisation. These rhythmic contractions are believed to be the mechanism underlying the organism’s path-finding and resource distribution behavior.

Therefore I would synthesize a calcium-responsive reporter construct.

Proposed genetic construct

The DNA I would synthesise is a simple eukaryotic expression cassette composed of:

[Calcium-responsive promoter] → [reporter protein gene] → [terminator]

Function:

When intracellular signaling activity increases, the organism produces a detectable signal (light or fluorescence). This turns slime-mould computation into a readable biological output rather than requiring electrodes.

This makes the organism not just a growth-based computer, but a living sensing display.

Reporter gene choice

I would synthesize a codon-optimized reporter such as:

GFP (Green Fluorescent Protein)

or

Luciferase

Reason:

These reporters are widely used because they convert gene expression into visible output. The lecture slides emphasised how synthetic DNA enables applications across research, medicine, materials, and sensing. My project falls under biological sensing: instead of detecting disease, the construct detects the organism’s internal signaling activity.

Example DNA sequence (simplified coding region)

Below is a shortened GFP coding sequence example (not the full-length gene, but representative of what would be synthesised):

ATGGTGAGCAAGGGCGAGGAGCTGTTCACCGGGGTGGTGCCCATCCTGGTCGAGCTGGACGGCGACGTAAACGGCCACAAGTTCAGCGTGTCCGGCGAGGGCGAGGGCGATGCCACCTACGGCAAGCTGACCCTGAAGTTCATCTGCACCACCGGCAAGCTGCCCGTGCCCTGGCCCACCCTCGTGACCACCCTGACCTACGGCGTGCAGTGCTTCAGCCGCTACCCCGACCACATGAAGCAGCACGACTTCTTCAAGTCCGCCATGCCCGAAGGCTACGTCCAGGAGCGCACCATCTTCTTCAAGGACGACGGCAACTACAAGACCCGCGCCGAGGTGAAGTTCGAGGGCGACACCCTGGTGAACCGCATCGAGCTGAAGGGCATCGACTTCAAGGAGGACGGCAACATCCTGGGGCACAAGCTGGAGTACAACTACAACAGCCACAACGTCTATATCATGGCCGACAAGCAGAAGAACGGCATCAAGGTGAACTTCAAGATCCGCCACAACATCGAGGACGGCAGCGTGCAGCTCGCCGACCACTACCAGCAGAACACCCCCATCGGCGACGGCCCCGTGCTGCTGCCCGACAACCACTACCTGAGCACCCAGTCCGCCCTGAGCAAAGACCCCAACGAGAAGCGCGATCACATGGTCCTGCTGGAGTTCGTGACCGCCGCCGGGATCACTCTCGGCATGGACGAGCTGTACAAGTAA

(For an actual synthesis, the sequence would be codon-optimised for Physarum expression.)

Why this construct matters

Currently, slime-mould computing requires:

  • cameras
  • electrodes
  • external interpretation

A genetic reporter changes the relationship:

Instead of humans interpreting the organism, the organism can directly express its internal state.

This enables:

  • visualizing decision points
  • mapping signaling waves
  • correlating gene activity with spatial behavior

The result: the slime mould becomes a biological interface, not just a biological substrate.

Future extension

Once validated, additional constructs could be synthesized, such as:

  • pollutant-responsive promoters
  • moisture-responsive expression
  • nutrient sensing circuits

This would allow the organism to function as a living environmental sensor rather than a passive computational material.

Why synthesis is necessary

As discussed in the DNA synthesis lecture (Lecture 2 slides, LeProust), synthetic DNA allows researchers to design specific sequences and functions rather than relying on naturally occurring genes. In this case, synthesis enables building a custom interface between cellular signaling and human observation — a foundational step toward a programmable biohybrid ecological computing system.

(ii) What technology or technologies would you use to perform this DNA synthesis and why?

To synthesize the reporter construct, I would use solid-phase chemical DNA synthesis (phosphoramidite synthesis) combined with enzymatic gene assembly.

Rather than copying DNA from an organism, this method builds DNA from individual nucleotides in a programmable way. Synthetic DNA platforms can manufacture custom genes designed in software, which is necessary because my calcium-responsive reporter construct does not naturally exist in Physarum.

Modern synthetic DNA platforms fabricate many oligonucleotides simultaneously using microarray-based synthesis and then assemble them into a full gene. Synthetic DNA tools allow researchers to design specific sequences and produce genes for applications in research, sensing, therapeutics, and materials.

I chose this technology because:

  • I am designing a new genetic circuit
  • there is no natural template to clone
  • the gene must be codon-optimized and modular
  • the DNA must be written not copied

What are the essential steps of the chosen synthesis method?

1. Oligonucleotide chemical synthesis (base-by-base writing)

The gene is first broken into short fragments (~150–300 bp oligos). Each oligo is chemically synthesized on a solid support surface.

Solid-phase phosphoramidite synthesis proceeds in repeating chemical cycles:

  1. Attach first nucleotide to solid surface
  2. Add a protected nucleotide phosphoramidite
  3. Coupling reaction attaches base
  4. Cap unreacted strands
  5. Oxidize phosphate backbone
  6. Remove protecting group (deblock)
  7. Repeat cycle

These steps are repeated many times to build the sequence base by base, essentially writing DNA chemically.
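The practical consequence of imperfect coupling can be estimated directly: if each cycle succeeds with probability p, only about p^(n-1) of strands reach full length n. A quick illustrative calculation (the 99.5% per-step coupling efficiency is a typical ballpark figure, not a quoted spec for any particular platform):

```python
def full_length_yield(coupling_efficiency: float, length: int) -> float:
    """Fraction of strands reaching full length after (length - 1) couplings."""
    return coupling_efficiency ** (length - 1)

# At 99.5% per-step efficiency, yield drops sharply with oligo length
for n in (50, 150, 300):
    print(n, round(full_length_yield(0.995, n), 3))
```

At these numbers roughly 78% of strands reach 50 nt but only about 22% reach 300 nt, which is why long genes are assembled from shorter verified oligos rather than synthesized in one pass.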

2. Cleavage and purification

After synthesis, the oligos are:

  • released from the surface
  • chemically deprotected
  • purified

3. Gene assembly

The short oligos are then assembled into a full gene:

  • overlapping sequences are designed
  • fragments anneal
  • DNA polymerase fills gaps
  • PCR amplifies the full gene

Classical gene synthesis assembles many short oligos into a full gene through PCR-based assembly.
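The overlap-assembly idea can be sketched as a toy merge of ordered fragments. Real assembly relies on complementary overlaps between alternating strands plus polymerase fill-in; this sketch models only exact same-strand overlaps, purely to show how designed overlaps dictate the final sequence:

```python
def merge_pair(a: str, b: str, min_overlap: int = 4) -> str:
    """Join two fragments whose end/start regions overlap exactly."""
    for k in range(min(len(a), len(b)), min_overlap - 1, -1):
        if a.endswith(b[:k]):
            return a + b[k:]
    raise ValueError("no overlap found")

def assemble(fragments: list[str]) -> str:
    """Assemble ordered, overlapping fragments into one contiguous sequence."""
    result = fragments[0]
    for frag in fragments[1:]:
        result = merge_pair(result, frag)
    return result

# Hypothetical short oligos with designed 6-nt overlaps
oligos = ["ATGGAAGCT", "GAAGCTTTCGGA", "TTCGGATAA"]
print(assemble(oligos))  # -> ATGGAAGCTTTCGGATAA
```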

4. Cloning and sequence verification

The final gene is:

  • inserted into a plasmid vector
  • propagated in bacteria
  • sequence-verified (often using next-generation sequencing)

What are the limitations of this synthesis method?

Accuracy limitations

Chemical DNA synthesis is not perfect. Errors occur because each chemical addition step is slightly inefficient.

Typical issues:

  • deletion mutations
  • substitution errors
  • incomplete strands

As sequences get longer, cumulative error rates increase. Therefore longer genes must be assembled from shorter verified fragments.

Length limitations

Single oligos cannot be extremely long.

Typical constraints:

  • individual oligos: ~150–300 nucleotides
  • genes: assembled from many oligos

Highly repetitive DNA, extreme GC content, or hairpin-forming secondary structures are especially difficult to synthesize.
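Simple screens for these problem sequences are easy to sketch. Vendors apply their own (varying) thresholds, so the functions below just compute the raw metrics, GC fraction and longest single-base run, for a hypothetical input:

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in the sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def max_homopolymer(seq: str) -> int:
    """Length of the longest run of one repeated base (a synthesis trouble spot)."""
    best = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

seq = "ATGGGGGGGGCCATTA"
print(gc_content(seq), max_homopolymer(seq))  # -> 0.625 8
```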

Speed limitations

Gene synthesis is not instantaneous because it involves:

  • chemical cycles
  • purification
  • assembly
  • verification sequencing

However, modern platforms dramatically improve speed and scale; thousands of oligos can be produced simultaneously using array-based synthesis.

Scalability trade-offs

Strength:

  • extremely scalable (many genes at once)

Weakness:

  • each individual gene still requires assembly and verification
  • long custom constructs take longer than short fragments

Why this method fits the project

The purpose of my project is to create a new biological interface. Because the reporter circuit is a designed genetic construct rather than a naturally occurring gene, the appropriate technology is synthetic DNA manufacturing. Chemical oligo synthesis combined with gene assembly allows precise control over promoter, coding region, and regulatory elements, enabling the slime mould to express a visible signal corresponding to its internal computational state.

This moves the organism from being observed externally to being able to communicate biologically.

5.3 DNA Edit

(i) What DNA would you want to edit and why?

I would edit genes in the slime mould Physarum polycephalum that control its oscillatory signaling and environmental sensing behavior. My goal is not to fundamentally redesign the organism, but to make its computational behavior more interpretable and experimentally controllable.

The organism’s path-finding and network formation depend on rhythmic cytoplasmic streaming driven by intracellular calcium signaling and cytoskeletal contraction. These oscillations determine how the organism distributes resources and selects paths in response to gradients such as nutrients, moisture, and repellents. In my project, the slime mould functions as a biohybrid spatial decision-making system for ecological design, but currently its behavior can only be influenced indirectly using light, salt, or food placement.

I would therefore edit genes involved in three biological functions:

1. Editing sensory receptors (environmental responsiveness)

I would modify or insert chemosensory receptor genes so that the organism responds to specific environmental variables rather than only simple attractants (e.g., oats) or repellents (e.g., salt).

Goal:

Enable the slime mould to detect meaningful environmental conditions such as:

  • soil salinity
  • pollutants
  • nutrient concentration
  • moisture stress

Why:

Currently, the organism reacts to arbitrary laboratory proxies. By editing sensing pathways, its decisions could reflect real ecological signals instead of human-selected stimuli. This would allow the biohybrid system to function as a living environmental sensor.

Type of edit:

Targeted insertion of a sensing pathway promoter or receptor gene under native regulatory control.

2. Editing oscillatory signaling genes (control of computation speed)

The slime mould’s decision-making depends on rhythmic contraction waves. I would edit genes regulating calcium ion channels or actin-myosin cytoskeletal contraction to slightly alter oscillation frequency.

Goal:

Adjust how quickly the organism explores and stabilizes networks.

Why:

Currently, experiments can take many hours or days because the organism’s internal clock sets its computational speed. Modifying the oscillation period could make the system experimentally usable without changing its overall behavior.

Type of edit:

Regulatory modification (promoter tuning) rather than gene knockout: adjusting expression levels rather than removing function.

3. Editing reporter expression (internal state visibility)

In addition to writing a reporter gene, I would edit its genomic insertion site so the reporter is expressed only during active signaling events rather than continuously.

Goal:

Allow the organism to visibly signal when it is “making a decision” (i.e., undergoing active network reorganization).

Why:

Right now the organism’s internal state is invisible unless measured electrically. A genomic integration tied to a signaling pathway would allow the organism to communicate its activity biologically.

Type of edit:

Knock-in insertion at a signaling pathway locus (for example, a calcium-responsive gene).

Why editing rather than only writing DNA

The write step adds new functions, but editing is necessary to integrate them into the organism’s native regulatory system. Without editing, the reporter gene would operate independently of the biological processes responsible for computation.

Editing allows:

  • linking sensing → signaling → behavior
  • reducing reliance on external interpretation
  • making the organism a participant rather than an experimental object

Broader implications

The intention is not to optimise the organism for efficiency or industrial use, but to explore a new relationship between biological systems and human decision-making. The edit would allow the slime mould to mediate between environmental conditions and human planning processes, acting as a biological interface.

However, this also raises ethical considerations: editing a living organism to participate in civic decision systems changes how agency and responsibility are distributed. For that reason, I would prioritise reversible and minimal edits (regulatory changes rather than permanent loss-of-function mutations) and maintain contained laboratory use.

I would use DNA editing to make the slime mould not a programmable machine but a legible ecological collaborator: an organism whose internal biological processes can be meaningfully interpreted rather than inferred indirectly.

(ii) What technology would you use and why?

I would use CRISPR-Cas9 genome editing, combined with homology-directed repair (HDR) and, in some cases, CRISPR activation (CRISPRa).

I chose CRISPR because it allows targeted, programmable editing of specific genes rather than random mutation. My project requires modifying signaling and sensing pathways in Physarum polycephalum while preserving the organism’s viability and behavior. CRISPR enables precise insertion of reporter genes and fine control of gene regulation, which is necessary for connecting biological signaling to observable outputs.

CRISPR is appropriate because I am not trying to create a new organism, but to:

  • insert a reporter gene
  • tune expression of signaling pathways
  • minimally perturb existing functions

How does CRISPR edit DNA?

CRISPR works by using a programmable RNA molecule to guide a nuclease enzyme (Cas9) to a specific DNA sequence.

Essential mechanism

  1. A guide RNA (gRNA) is designed to match a target DNA sequence.
  2. The Cas9 protein binds the guide RNA.
  3. The complex scans DNA in the cell.
  4. When the matching sequence is found, Cas9 cuts the DNA (double-strand break).
  5. The cell repairs the break.

The repair process is what produces the edit.

There are two repair pathways:

1. Non-homologous end joining (NHEJ)

  • error-prone repair
  • produces insertions or deletions
  • used for gene knockouts

2. Homology-directed repair (HDR)

  • precise repair using a template
  • allows insertion of new DNA

For my project I would mainly use HDR, because I want to insert a reporter gene and modify regulatory regions rather than destroy genes.

Essential experimental steps

  1. Identify target gene (e.g., calcium signaling gene)
  2. Design guide RNA sequence
  3. Construct editing plasmid
  4. Deliver CRISPR system into slime mould cells
  5. Cas9 cuts DNA at target site
  6. Cell repairs DNA using provided template
  7. Screen for successful edits

What preparation is required?

Design steps

  • Choose a target gene locus
  • Design guide RNA (20 bp complementary sequence)
  • Design repair template with homology arms (~500–1000 bp)
  • Insert reporter gene into template DNA
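The guide-design step above can be sketched as a scan for SpCas9 PAM sites (NGG), taking the 20 nucleotides immediately upstream as the candidate guide. A toy single-strand scan over a made-up sequence (real design tools also scan the reverse complement and score candidates for off-target matches):

```python
def find_guides(dna: str, guide_len: int = 20):
    """Return (guide, PAM, position) for each NGG PAM found on the given strand."""
    dna = dna.upper()
    hits = []
    for i in range(guide_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":                              # NGG PAM for SpCas9
            hits.append((dna[i - guide_len:i], pam, i))  # 20 nt 5' of the PAM
    return hits

# Hypothetical target region containing a single TGG PAM
site = "ACGT" * 6 + "TGG" + "ACGT"
for guide, pam, pos in find_guides(site):
    print(guide, pam, pos)  # -> ACGTACGTACGTACGTACGT TGG 24
```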

Inputs needed

Biological components

  • Cas9 nuclease protein (or gene encoding it)
  • guide RNA (gRNA)
  • donor DNA repair template (containing reporter gene)
  • promoter and terminator sequences

Delivery materials

  • plasmid vector OR ribonucleoprotein complex (Cas9 + gRNA)
  • transformation method (e.g., electroporation or microinjection)

Cells

  • cultured Physarum polycephalum plasmodium

What is CRISPRa and why use it?

In addition to cutting DNA, a modified Cas9 (dead Cas9 or dCas9) can regulate genes without breaking DNA.

dCas9:

  • binds DNA
  • does not cut

If fused to an activator protein, it increases expression of a gene.

I would use CRISPRa to:

  • adjust oscillation frequency
  • tune sensing sensitivity

This avoids permanent genome disruption.

Limitations of CRISPR editing

1. Efficiency

Not every cell receives the edit.

Possible issues:

  • low transformation efficiency
  • multinucleate cells (especially relevant in Physarum)
  • mosaic editing

Result: many cells must be screened.

2. Off-target effects

The guide RNA may bind similar DNA sequences and cut unintended locations.

Consequences:

  • unintended mutations
  • altered behavior

Careful guide design reduces this risk.

3. Repair pathway bias

Cells often prefer NHEJ instead of HDR.

This means:

  • insertions may fail
  • knockouts occur instead of precise edits

HDR is typically less efficient than gene disruption.

4. Precision limitations

Even successful edits may:

  • integrate partially
  • rearrange DNA
  • produce variable expression

Therefore sequencing verification is required after editing.

Why this editing method fits the project

My project depends on linking biological signaling to visible outputs. CRISPR allows insertion of a reporter gene into a native signaling pathway and fine-tuning of sensing behavior. This makes the organism experimentally interpretable without fundamentally redesigning its biology.

Sequencing lets me understand the organism, synthesis lets me add a component, and CRISPR lets me connect the component to the organism’s natural processes.