
Interactive Installations as Research Instruments

Interactive installations as research instruments: sensing, logging, evaluation, ethics, museum collaboration, plus futures thinking through artifacts.

If your installation can’t produce evidence, it’s not research; it’s vibes. 

You’re tired of “dwell time” as the whole story. They’re tired of “innovation” that can’t explain what it learned.

Interactive installations don’t have to be “museum candy”.

In HCI and Research through Design, the stronger frame is simple:

An installation can be a research instrument: an apparatus designed to generate evidence, insight, and even futures discourse in public space.

And that matters, because museums and universities are wrestling with the same problem: 

How to learn from real audiences without pretending the gallery is a lab.

“Interactive installations as research” means treating an installation as a research instrument: it senses interaction, logs behavior, and supports interpretation that can answer a research question — using field-appropriate evaluation rather than lab-style control.

A defensible setup usually includes:

  1. A clear research question
  2. An instrument stack (sensing → logging → interpretation)
  3. Triangulated methods (logs + observation + interviews)
  4. Ethics suited to public settings
  5. A museum-ready collaboration plan
Belief Engines, interactive art sculpture developed by Steve Zafeiriou.

Why call an interactive installation a “research instrument”?

Because calling it an instrument is a commitment.

You’re not only designing an experience. You’re designing a knowledge-making apparatus.

And that one decision changes:

  1. what you measure
  2. how you document
  3. what you’re allowed to claim

Here’s the lens you keep returning to (because it keeps you honest):

Instrument Lens: What is being sensed? What is being inferred? What is being argued?

In public settings, “validity” doesn’t look like lab control. It looks like ecological validity: real people, real social context, messy participation, and multiple interpretations.

That’s not a weakness.

That’s the whole reason we build in public. But only if your method is explicit and your limits are honest.


Installations as performance and embodied inquiry (HCI relevance)

If you want legitimacy inside HCI, here’s a clean move: treat participation as embodied inquiry.

Nam and Nitsche (2014) frame interactive installations as performance: meaning emerges through audience action, context, and interpretation, not just interface mechanics.

Their constitutive / epistemic / critical lens is basically a scope checker for what kind of knowledge you’re claiming:

  1. Constitutive: the installation constitutes a situation (it produces a lived condition you can study).
  2. Epistemic: it generates knowledge via observation, interaction traces, and reflection.
  3. Critical: it reveals assumptions, norms, or power relations embedded in the setting.

If you can state which one you’re aiming for, you’re already more research-grade than most “engagement briefs”.

GeoVision, interactive art installation by Steve Zafeiriou exploring cultural interpretations.

Research through Design: the artefact is part of the argument

RtD gets misunderstood as “designing while researching.” That’s the lazy version.

In the rigorous version, the artefact is evidence. It embodies hypotheses and makes them testable through use, critique, and iteration.

Savić and Huang (2014) describe a loop: research questions translate into prototypes, which generate insight through reflection and refinement.

Here’s the rule that separates real RtD from portfolio theatre:

  1. Don’t just document the final build.
  2. Document the iteration logic: what you changed, why you changed it, and what the changes taught you.

That’s where the credibility lives.

GeoVision, custom application for audience interaction.

What “instrument” means in public space

A museum is not a lab. And claiming lab-like control in a public setting usually backfires.

Public-space instruments trade control for realism. Your job is to tighten the chain from signal to inference to claim, while documenting the confounds you can’t eliminate:

  1. crowding
  2. staff facilitation
  3. social influence
  4. lighting
  5. accessibility constraints
  6. self-selection (the people who opt in are not “everyone”)

The cleanest move is to include a limits statement like it’s part of the method, because it is:

  1. What I can claim: patterns observed in this setting under these conditions, supported by these measures.
  2. What I cannot claim: universal causality, stable behavior across contexts, or intent from logs alone.
  3. What would strengthen claims: replication, additional methods, controlled comparisons, or longitudinal follow-up.

This isn’t a weakness. It’s how museums and universities learn to trust you.

Sensorify, interactive art installation by Steve Zafeiriou, exhibited at MATAROA AWARDS 2024.

The instrument stack you’re actually building

sensing → logging → interpretation

Most teams overbuild sensing and underbuild interpretation.

So they end up with a haunted warehouse of data and a story that can’t stand up in daylight.

The instrument stack keeps you honest:

  1. what you capture
  2. what you record
  3. what you think it means

Instrument Lens: Sensed data is not insight. Insight requires a defined inference step and an evidence standard.


Sensing layer: capture less, mean more

The sensing layer is where you decide what the system can “perceive”:

  1. touch interaction
  2. proximity
  3. computer vision sensing
  4. tangible interfaces
  5. spatial interaction
  6. object manipulation

The trap is sensor soup: collecting everything because you can.

Instead, choose signals that map directly to your research question.

  1. If your question is about attention architecture (how the environment structures perception and choice), then proximity and transition paths may matter more than high-resolution identity tracking.
  2. If your question is about embodied interaction, you may need tangible interface states, not “time in front of screen”.

In your own builds, you treat sensing like operationalization:

  1. take an abstract concept (influence, hesitation, exploration)
  2. translate it into observable variables (dwell time, return visits, interaction transitions, repeated affordance attempts)
  3. then ask: what’s the minimum sensing that supports this?

That’s how you keep the instrument clean.
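
If it helps to see that step as code: a minimal sketch of an operationalization map in Python. The concept names, observables, and event types below are illustrative assumptions, not a standard vocabulary.

```python
# Hypothetical operationalization map: abstract concept -> observable
# variables -> minimum sensing needed. Names are illustrative only.
OPERATIONALIZATION = {
    "hesitation": {
        "observables": ["approach_then_retreat", "time_to_first_touch"],
        "minimum_sensing": ["proximity_zone_events", "first_interaction_event"],
    },
    "exploration": {
        "observables": ["unique_states_visited", "interaction_transitions"],
        "minimum_sensing": ["state_change_events"],
    },
    "influence": {
        "observables": ["return_visits", "post_observation_engagement"],
        "minimum_sensing": ["zone_transition_events", "session_boundaries"],
    },
}

def minimum_sensors(concepts):
    """Return the smallest set of signals that covers the chosen concepts."""
    needed = set()
    for concept in concepts:
        needed.update(OPERATIONALIZATION[concept]["minimum_sensing"])
    return sorted(needed)

print(minimum_sensors(["hesitation", "exploration"]))
```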


Logging layer: your dataset’s integrity is your method

Logging isn’t a technical afterthought. It’s your evidence pipeline.

Common logging primitives in public interactive systems:

  1. interaction events (button press, object placed, gesture detected)
  2. state changes (state machine transitions)
  3. dwell time and re-engagement patterns
  4. path/zone transitions (coarse spatial tracking)
  5. group behavior cues (multiple participants, turn-taking sequences)

And yes: the boring requirements matter.

  1. time synchronization
  2. failure modes
  3. data minimization
  4. uptime

Because if uptime drops, data quality collapses.

Meaning: maintenance becomes part of the method.
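
As a sketch of what a minimal evidence pipeline can look like on the logging side, assuming a JSON Lines file, an event allowlist, and anonymous session markers (all illustrative choices, not a fixed schema):

```python
import json
import time
import uuid

# Hypothetical event vocabulary: log only what maps to the research question.
ALLOWED_EVENTS = {"button_press", "object_placed", "state_change", "zone_transition"}

SESSION = uuid.uuid4().hex[:8]  # anonymous session marker, no identity

def log_event(event_type, payload=None, path="events.jsonl"):
    """Append one event as a JSON line. Refuses anything off the allowlist
    (data minimization) and stamps both wall-clock and monotonic time
    (time synchronization vs. reliable ordering/durations)."""
    if event_type not in ALLOWED_EVENTS:
        raise ValueError(f"unlogged event type: {event_type}")
    record = {
        "session": SESSION,
        "t_wall": time.time(),       # for cross-device alignment
        "t_mono": time.monotonic(),  # for ordering and dwell computation
        "event": event_type,
        "payload": payload or {},
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_event("state_change", {"from": "idle", "to": "engaged"})
```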

Pattakos et al. (2023) are useful here because they document how real-world museum constraints (including physical conditions) affect deployment and evaluation.

Interpretation layer: write the inference chain or don’t claim anything

This is where projects become indefensible:

They jump from raw signals to big claims.

The safer pattern is to write the inference chain explicitly:

  1. Signal: what was logged (events, transitions, dwell)
  2. Inference: what you believe that indicates (exploration, confusion, social influence)
  3. Claim: what you’re arguing about behavior, perception, or discourse

Then you list assumptions and confounds like an adult.

Instrument Lens: What is inferred must be narrower than what is sensed. What is argued must be narrower than what is inferred.
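
One way to enforce that narrowing is to keep the chain as data next to the analysis, so every claim ships with its assumptions and confounds. A minimal sketch; the field names are mine, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class InferenceChain:
    """Forces the signal -> inference -> claim chain to be written down."""
    signal: str       # what was logged
    inference: str    # what you believe it indicates
    claim: str        # what you are arguing
    assumptions: list = field(default_factory=list)
    confounds: list = field(default_factory=list)

chain = InferenceChain(
    signal="dwell > 90s in zone B followed by a return visit",
    inference="sustained, voluntary re-engagement with the piece",
    claim="the branching state design invites repeat exploration",
    assumptions=["dwell is not queueing", "returns are the same party"],
    confounds=["staff facilitation", "crowding near zone B"],
)
print(chain)
```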

In public contexts, you almost always need qualitative context to avoid over-reading numbers:

  1. observation notes
  2. short interviews
  3. staff/mediator feedback

Because logs can scale, but they can’t explain meaning on their own.

Dark Tales Public Archive, developed by Steve Zafeiriou for Vandalo Ruins, exhibited at ALEF Cilento, Italy.

Methods that make installations researchable

“In the wild” research gets defensible when you stop worshipping single-method certainty and start building triangulation.

Not more data. Lower interpretive risk.

Triangulation baseline (field protocol you can actually run)

A practical baseline for installation-based research:

  1. Behavioral observation (what people do, not what they report)
  2. Short semi-structured interviews (what they think they did / why it mattered)
  3. Interaction logfiles (what the system recorded, at scale)

Logs give patterns. Observation gives context. Interviews give meaning-making.

If policies allow, you can go deeper (follow-ups, diaries, video coding).

But only if it strengthens the inference chain and stays ethically viable.
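
The mechanical part of triangulation can be small. Here is a sketch that attaches observation notes to logged events within a time window, so qualitative context sits next to the quantitative pattern (the window size and field names are assumptions):

```python
# Hypothetical triangulation join: attach observation notes to logged events
# that fall within a +/- 60 s window. Field names are illustrative.
WINDOW = 60.0  # seconds

events = [
    {"t": 1000.0, "event": "zone_transition", "payload": {"to": "B"}},
    {"t": 1085.0, "event": "button_press", "payload": {}},
]
notes = [
    {"t": 1020.0, "note": "group of three, one person narrating to the others"},
]

def triangulate(events, notes, window=WINDOW):
    """Annotate each event with any observation notes recorded nearby in time."""
    for event in events:
        event["context"] = [n["note"] for n in notes
                            if abs(n["t"] - event["t"]) <= window]
    return events

for event in triangulate(events, notes):
    print(event["event"], "->", event["context"])
```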


Evaluation patterns from museum/HCI systems work

Two clarifications keep your evaluation rigorous without killing the magic:

  1. Experience evaluation doesn’t equal knowledge claims.
  2. A “good visitor experience” does not automatically validate your inference.

You can use heuristic evaluation to catch obvious failures pre-deployment.

You can use UX questionnaires to quantify experience, provided they match your goals.

And one technical note matters in public interactive systems: INP (Interaction to Next Paint), a Core Web Vitals metric for how quickly the interface responds visually after an input.

Even outside the web, the concept transfers:

Perceived latency changes behavior, frustration, and dwell patterns.

If responsiveness is unstable, your “behavior data” might just be a latency artifact.
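
A sketch of the check this implies: record input-to-feedback latency alongside behavior and flag sessions where responsiveness was unstable. The 200 ms budget below is an illustrative threshold, not a standard:

```python
import statistics

def latency_flags(latencies_ms, p95_budget=200.0):
    """Flag a session whose input-to-feedback latency is unstable enough
    that dwell/abandonment data may be a latency artifact, not behavior.
    The 200 ms budget is an illustrative threshold."""
    latencies = sorted(latencies_ms)
    p95 = latencies[int(0.95 * (len(latencies) - 1))]
    jitter = statistics.pstdev(latencies)
    return {
        "p95_ms": p95,
        "jitter_ms": round(jitter, 1),
        "suspect": p95 > p95_budget or jitter > p95_budget / 2,
    }

print(latency_flags([40, 55, 48, 300, 420, 60, 52]))
```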

Ethics & governance in public contexts

Ethics isn’t a postscript. It’s instrument design.

Public displays complicate consent because participation is ambient and social. The defensible approach usually looks like:

  1. clear signage explaining what’s being measured and why
  2. opt-out pathways (or non-instrumented modes)
  3. data minimization (collect only what maps to the research question)
  4. privacy by design defaults (avoid identifiable data unless essential and formally approved)

If you need identifiable data, you’re in a higher governance class of project. Most research aims can be met with non-identifying event logs and qualitative methods.
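
One way to make those defaults reviewable by both partners is to write them down as configuration. A sketch; the names, values, and retention period are assumptions, not a policy template:

```python
# Hypothetical privacy-by-design defaults, written as reviewable config.
DATA_POLICY = {
    "collected": ["interaction_events", "state_changes", "zone_transitions"],
    "not_collected": ["faces", "audio", "device_identifiers", "raw_video"],
    "identifiers": "none",             # anonymous session markers only
    "retention_days": 90,              # then aggregate and delete raw logs
    "aggregation": "daily counts per event type",
    "signage": "what is measured, why, and how to opt out",
}

# Sanity check: nothing can be on both lists.
assert not set(DATA_POLICY["collected"]) & set(DATA_POLICY["not_collected"])
```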

Nostalgie World, interactive art installation by Steve Zafeiriou raising awareness about mental health, exhibited at MATAROA AWARDS 2025.

Museums as research partners

Treating museums like a deployment location is how you ship a fragile prototype and then blame the institution when it breaks.

Museums are co-research ecosystems, with their own operational realities, ethics obligations, and institutional memory.

That’s why partnership improves both rigor and longevity.

Real constraints you must design for

Your instrument will be shaped by constraints like:

  1. lighting/reflections breaking sensing (especially vision systems)
  2. “non-touch” policies nullifying your interaction model
  3. robustness + cleaning needs changing materials
  4. network constraints affecting logging reliability
  5. accessibility requirements (design-in, don’t retrofit)
  6. maintenance planning as method (uptime affects what you can claim)
Installation day of the “Tension” exhibition at MOMus Museum of Contemporary Art, Thessaloniki, Greece (photo: Stefanos Tsakiris).

Co-design over time: roles, power, and bridge figures

Long-term collaboration is where installation research succeeds or dies.

A pragmatic concept you use:

The bridge figure: someone who translates between curatorial goals, technical constraints, and research requirements.

Without that translation layer, teams drift into misalignment:

  1. what counts as success
  2. what counts as evidence
  3. who owns operational risk after launch

This ties to value co-creation: museums aren’t passive recipients of innovation; they shape what value means in the setting (Sanders & Simons, 2009).

Synthetic Memories, interactive art installation developed under Inspire Project 2025 at MOMus Museum of Contemporary Art, Thessaloniki, Greece.

Museum–university collaboration template

This is the part senior technologists actually read, because it prevents chaos.

1. Governance

  1. Who approves changes during deployment?
  2. What triggers a rollback?
  3. What’s the decision path for ethics concerns?

2. Responsibilities

  1. Who maintains hardware (daily/weekly)?
  2. Who monitors uptime and logging health?
  3. Who responds to failures during open hours?

3. Iteration cycles

  1. What can be adjusted live vs after hours?
  2. What is the reporting cadence (weekly notes, monthly review)?
  3. What counts as a “version” for data interpretation?

4. Data handling

  1. What is collected (and what is explicitly not collected)?
  2. Retention period and access control
  3. Anonymization and aggregation approach

5. Museum-ready operations checklist

  1. Installation plan (accessibility and safety)
  2. Operation plan (staff training, daily checks)
  3. Troubleshooting plan (failures, escalation)
  4. Deinstallation plan (restoration, documentation)
  5. Archive plan (media, code, logs schema, final report)

This isn’t bureaucracy. This is what lets research survive contact with the real world.

Synthetic Memories, interaction “book”.

Futures thinking through interactive artifacts

Futures-oriented installations can be rigorous when you treat speculation as a method.

Not prediction.

Structured inquiry into values, assumptions, and possible worlds.

What speculative artifacts do in HCI

Speculative design and design fiction give you language for outputs that aren’t products: they provoke reflection and discourse (Dunne & Raby, 2013).

In HCI, speculative artifacts are increasingly treated as instruments for future orientation: tools that help communities reason about uncertainty.

The key point: the “output” might be reframed assumptions, new questions, or stakeholder discourse, not adoption metrics.

Four modes as installation strategies

You map the four modes into practical installation patterns:

  1. Reflective: mirror behavior back to participants (attention, choice, hesitation).
    • Pattern: feedback loops that make the invisible visible.
  2. Exploratory: sandbox for possible interactions or futures.
    • Pattern: branching states that let participants explore consequences (sketched after this list).
  3. Interventional: introduce a constraint to reveal values.
    • Pattern: designed friction and defaults that make trade-offs legible.
  4. Heuristic: prompts that help people reason about uncertainty.
    • Pattern: guided questions, scenarios, structured comparisons.
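
To make the exploratory pattern concrete, here is a minimal sketch of branching states with logged transitions. The states and actions are invented for illustration, not taken from any of the installations above:

```python
# Minimal branching-state sketch for the "exploratory" pattern: participants
# move through states, and every transition is logged as evidence.
TRANSITIONS = {
    ("idle", "approach"): "invite",
    ("invite", "touch"): "branch_a",
    ("invite", "wait"): "branch_b",
    ("branch_a", "step_back"): "reflect",
    ("branch_b", "step_back"): "reflect",
}

def step(state, action, log):
    """Advance the state machine and log the transition as an event."""
    nxt = TRANSITIONS.get((state, action), state)  # unknown actions keep state
    log.append({"from": state, "action": action, "to": nxt})
    return nxt

log, state = [], "idle"
for action in ["approach", "wait", "step_back"]:
    state = step(state, action, log)
print(log)
```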

This is where the attention-architecture angle fits responsibly:

Installations are choice-structuring environments, and influence is a designed variable, not something you get to assume.

GeoVision, rating display from the custom audience-interaction application.

Measuring impact when the goal is discourse

If the goal is discourse, define impact honestly.

Evidence types can include:

  1. recorded responses (anonymous prompts, written reflections)
  2. thematic analysis of interview data and observational notes
  3. documented shifts in framing among stakeholders (curatorial notes, workshop outcomes)
  4. traceable changes in the questions people ask during engagement

And you keep the promise realistic:

Discourse impact isn’t “behavior change at scale.” It’s conversation quality, expanded imagination, clarified values.

A practical blueprint: the instrument design workflow

If you hand this to a lab lead or museum technologist, you’re giving them an actual map, not inspiration.

The goal is to map a research question into interactions, signals, analysis, and limits before you build.

1. Research question → interaction → signal → analysis map

Use this sequence:

  1. Research question
  2. Operationalize into interaction variables (what behaviors matter?)
  3. Choose sensing and logging events (what signals represent those variables?)
  4. Define analysis plan (how will you interpret evidence?)
  5. Decide evidence standard (exploratory, evaluative, futures-discursive)

The instrument doesn’t start at the sensor. It starts at the question.
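
A sketch of that map as something you fill in before the build. The structure is mine, intended as a pre-build checklist rather than a schema:

```python
from dataclasses import dataclass

@dataclass
class InstrumentMap:
    research_question: str
    interaction_variables: list  # what behaviors matter
    signals: list                # what you will sense/log
    analysis_plan: str           # how evidence will be interpreted
    evidence_standard: str       # exploratory | evaluative | futures-discursive

plan = InstrumentMap(
    research_question="Does designed friction change how visitors weigh choices?",
    interaction_variables=["hesitation before commit", "option revisits"],
    signals=["state_change", "zone_transition", "time_to_first_touch"],
    analysis_plan="dwell/transition patterns plus interviews, coded weekly",
    evidence_standard="exploratory",
)
print(plan.evidence_standard)
```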


2. Decide your evidence standard (or your project becomes vague)

Pick one primary evidence standard (and optionally one secondary):

  1. Exploratory (primary): map patterns and generate hypotheses. Success: clear patterns, credible interpretation chain, documented constraints.
  2. Evaluative (primary): test a defined claim about experience or behavior. Success: pre-defined measures, comparison logic (when possible), transparent limits.
  3. Futures-discursive (primary): provoke reflection and capture discourse shifts. Success: high-quality qualitative evidence, traceable framing changes, honest scope.

Trying to do all three equally well is how projects turn into fog. Choose what you’re optimizing for.

3. Pre-register what you can; document what you can’t

Pre-registration doesn’t have to be heavy.

Your lightweight package:

  1. claims or hypotheses (what you think you’ll learn)
  2. measures (what you will log/observe/interview)
  3. constraints (what will be messy)
  4. ethics plan (consent, signage, minimization, retention)
  5. failure modes (what breaks and how it affects data)

If you can’t pre-register because the work is emergent, document iteration decisions with timestamps and rationale.

That documentation becomes your credibility.
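
A sketch of that lightweight habit: an append-only decision log with timestamps and rationale. The format is an assumption; the discipline is the point:

```python
import json
import datetime

def log_decision(change, rationale, expected_effect, path="decisions.jsonl"):
    """Append one iteration decision with a timestamp, so later readers can
    reconstruct why the instrument changed and what the change taught you."""
    entry = {
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "change": change,
        "rationale": rationale,
        "expected_effect": expected_effect,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")

log_decision(
    change="raised proximity threshold from 0.5 m to 0.8 m",
    rationale="crowding caused false 'engagement' events at peak hours",
    expected_effect="fewer phantom sessions; dwell counts comparable after v3",
)
```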

Dark Tales Public Archive, interactive art installation developed by Steve Zafeiriou for Vandalo Ruins.

The “magic” and the rigor can coexist

You don’t have to choose between wonder and rigor.

You can preserve ambiguity for visitors while staying methodologically explicit for peers and partners. That’s the actual craft.

Your center of gravity is transparency:

  • what you measured
  • what you inferred
  • what you cannot claim

Because if you can’t draw that boundary, your installation isn’t a research instrument.

It’s a meme.

(And no, that’s not a compliment.)


Conclusion

If you’re planning a museum–university installation partnership, here’s the next move:

Formalize the instrument map and the governance template before you commit to the build.

You can keep calling it engagement if you want.

Or you can build something that makes knowledge in public, and can defend it afterward.

Your choice.
