Images of the first module build, from CAD render to 3D-printed housing to wiring assembly. The final outcome is an unsuccessful module: it will require disassembly and reassembly outside of the housing to debug and diagnose the problems. Stay tuned…

I’m beginning to understand how my project lacked design, and more specifically lacked any form of interactive design.

Just because I was using “multi-modality” didn’t mean I was heading in a direction to create an “interactive” outcome.

I am now sorting through papers and evaluating which will be most useful to read based on a few criteria:
– Participatory Design
– Multi-Modality
– Low Vision
– Whether articles propose interactive design solutions

While there seem to be many solutions addressing the day-to-day situations of people with low vision, these solutions appear to be passive rather than interactive. Yes, they may engage other modes or senses, but they aren’t really interactive.

It has taken me a while to arrive at this point. I now need to begin searching for opportunities to create an interactive product.

Even my assessment of existing assistive technologies, however small, is showing me that these products are purely that: products. They do not engage a high level of interactivity in an “interactive design” sense. Yes, they react to inputs and give you feedforward to initiate an interaction, but the depth of interaction is shallow. Magnification technology, for example, is purely that: it magnifies things. Whether the magnification is digital or analogue, it doesn’t have the depth of interaction that I’m looking for.

Why am I doing this project?

How can I blend depth of interaction with something that is positive and can contribute towards the independence of a person living with low vision or AMD?

Is this even possible?

Is a navigation system for people with no or low vision even considered interactive? I understand that it provides feedback, allowing the person to engage with both their immediate surroundings and the software/device itself.