During my third semester at NID, one of the modules in the programme was Ubiquitous Computing. It introduced me to the world of interactive installations, which I found fascinating. Keen to explore, I aligned my interest in films and filmmaking with the demands of the module and boiled my brief down to the following:
Capturing the first, unfiltered reactions of moviegoers as soon as they step out of a movie, and playing them back to other moviegoers to help them decide which movie to watch.
The process involved research, building personas, and designing with those personas in mind.
Around this time, my interest in 3D rendering made me confident enough to try visualising the concept in 3D.
The coursework made me more aware of the growth of technology and what can be possible with it. It gave a clearer picture of what Human-Computer Interaction (HCI) really means, and of course, it let me explore without any worries.
User Journey and Opportunities
Concept - Input
People coming out of the movie can speak about their experience into the ears on the wall.
In the Inactive State, there is no visible indication on the Listening Wall.
In the Active State, the ears glow when someone comes close, with the nearest one glowing more.
The system detects users through proximity and motion sensing. It is an ambient device, conveying information through subtle changes in light and colour.
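The proximity behaviour above can be sketched in a few lines. This is a minimal illustration, not the actual installation logic: ear positions, the visitor position, and the 3-metre sensing range are all assumed values.

```python
def glow_levels(ear_positions, visitor_position, max_range=3.0):
    """Return a 0.0-1.0 glow intensity for each ear on the wall.

    Ears beyond max_range (metres, assumed) stay dark (inactive state);
    closer ears glow brighter, so the nearest ear glows the most.
    """
    levels = []
    for x in ear_positions:
        distance = abs(x - visitor_position)
        levels.append(max(0.0, 1.0 - distance / max_range))
    return levels

# A visitor standing at 2.0 m along a wall with four ears:
levels = glow_levels([0.0, 1.0, 2.0, 4.0], visitor_position=2.0)
```

The linear falloff keeps the indication ambient: no ear switches abruptly between states; the wall brightens gradually as someone approaches.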
Device - Device:
The input from the ears is categorised as movie-related or not, and this classification is then used for the output visualisation.
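This Device-Device step could be sketched as follows, assuming the speech captured by the ears has already been transcribed to text. The keyword list is a placeholder for illustration, not the actual classifier.

```python
# Words that suggest a remark is about the movie (assumed list).
MOVIE_KEYWORDS = {"movie", "film", "plot", "acting", "scene", "ending", "story"}

def is_movie_related(transcript):
    """True if the remark mentions the movie at all."""
    return bool(set(transcript.lower().split()) & MOVIE_KEYWORDS)

def split_input(transcripts):
    """Separate remarks into (movie-related, other) for the output stage."""
    related = [t for t in transcripts if is_movie_related(t)]
    other = [t for t in transcripts if not is_movie_related(t)]
    return related, other
```

Only the movie-related bucket would feed the output visualisation; the rest is discarded.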
Person - Person:
The ears that record similar sentiments from conversations light up in the same colour, letting users know when someone else shares their view.
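A hypothetical sketch of this Person-Person link: each ear is tagged with the dominant sentiment of what it recorded, and ears sharing a sentiment are lit in the same colour. The sentiment labels and colour choices here are assumptions for illustration.

```python
# Assumed sentiment-to-colour mapping for the wall.
SENTIMENT_COLOURS = {
    "positive": "warm amber",
    "negative": "cool blue",
    "neutral": "soft white",
}

def colours_for_ears(ear_sentiments):
    """Map each ear id to a colour so that matching sentiments glow alike."""
    return {
        ear: SENTIMENT_COLOURS.get(sentiment, "soft white")
        for ear, sentiment in ear_sentiments.items()
    }
```

Two visitors who spoke into different ears but felt the same way about the movie would see their ears glow in the same colour.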
Person - Device:
Users speak into the ears (i/p)
Users can see the output (o/p)
The ears can be removed or added as required, and can also be arranged in patterns depending on the situation.
Concept - Output
These are some of the explorations for showing the output in an understandable way. Towards the end, I took one in particular and pursued it with more conviction.
Final Concept - Output
It could work in one of the following ways:
1. The lips can play back conversations from the input side, live or recorded.
2. The lips can play sound cues depending on whether the movie was good or bad.
3. The lips can give light cues depending on whether the movie was good or bad.
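The three candidate modes above can be summarised in one dispatch function. This is a sketch only: the mode names, cue values, and the good/bad verdict are assumptions for illustration.

```python
def lip_output(mode, verdict, clip=None):
    """Return an (output-channel, payload) pair for one lip."""
    if mode == "playback":    # 1: replay a live/recorded conversation
        return ("audio", clip)
    if mode == "sound_cue":   # 2: sound cue keyed to the verdict
        return ("audio", "cheer" if verdict == "good" else "groan")
    if mode == "light_cue":   # 3: light cue keyed to the verdict
        return ("light", "warm glow" if verdict == "good" else "dim red")
    raise ValueError(f"unknown mode: {mode}")
```

Keeping the mode as a single parameter means the installation could switch behaviours, or mix them across different lips, without rewiring anything.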
In the end, the system seemed to have the following Ubiquitous Computing (UbiComp) attributes:
1. Invisible Technology
2. Natural Interactions
3. Context Aware