Virtual Reality

STORY-TIME

Paper published at India HCI 2019, Hyderabad.

Team: Tanya Ballal & Shreya Misra

Role: Researcher and designer

OVERVIEW

This is a speculative design fiction project that bridges the gap between grandparents and grandchildren, allowing their culture to be passed on through storytelling in virtual environments.


CONTEXT

As more families live apart, grandparents and grandchildren hardly spend time with each other, and this in turn affects how their culture is passed on from generation to generation.

PROBLEMS

The loss of culture and diversity is a pressing issue in the world today. We address it by bridging the gap between family members, especially grandparents and their grandchildren: increasing the time they spend bonding through shared stories to revive cultures and pass on family morals and values to the younger generation.

Families are less close to each other.

Passing on cultural and ancestral stories is tough.

Phone/video calls are not enough for bonding.

Meeting during holidays doesn't compensate for the rest of the year.

Grandparents are left longing for their grandchildren.

Children are missing out on the experiences their grandparents provide.

To bridge this gap between grandparents and children, we propose a speculative design concept: a shared virtual environment in which the two can exchange stories.

We aim to give children a completely immersive experience with their grandparents while listening to their stories. Transporting children into a story as the grandparent narrates it helps them understand it better, and lets grandparents teach morals, values, and culture through these stories.


STORYBOARD


App with chatbot welcoming the users


In the VR space, the user can make changes to the virtual environment.


The red arrow indicates the user's position in the virtual space.


Scenes of the story that the users can see in the virtual space.

Building on existing uses of generative art and image/video generation, the central idea of the proposed system is an autonomous system that generates VR spaces modelled on the scenes described in the stories.

While the user narrates the story, the built-in AI generates visuals in the virtual environment: as the grandmother narrates, the AI listens, deduces the narrative, and renders the corresponding scenes.
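To make the narration-to-scene idea concrete, here is a minimal illustrative sketch, not the project's implementation. A real system would use speech recognition and a generative model; here, simple keyword spotting over an already-transcribed narration stands in for both, and the scene names and assets are invented for the example.

```python
# Hypothetical mapping from scene keywords to environment assets.
SCENE_ASSETS = {
    "forest": ["trees", "birdsong", "dappled light"],
    "river": ["flowing water", "stone bridge"],
    "palace": ["marble pillars", "courtyard"],
}

def extract_scenes(narration: str) -> list:
    """Return the scene keywords mentioned in the narration, in order."""
    words = narration.lower().split()
    return [w.strip(".,") for w in words if w.strip(".,") in SCENE_ASSETS]

def build_environment(narration: str) -> list:
    """Collect the assets to load for each scene the narrator mentions."""
    assets = []
    for scene in extract_scenes(narration):
        assets.extend(SCENE_ASSETS[scene])
    return assets

print(build_environment("They walked through the forest to the river."))
# → ['trees', 'birdsong', 'dappled light', 'flowing water', 'stone bridge']
```

In the speculative system, the keyword lookup would be replaced by a neural model that infers scenes from free-form narration rather than a fixed vocabulary.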

PHYSICAL TOUCH

One major element absent from remote bonding is physical touch. Grandparents often hold their grandchildren's hands while narrating stories, so while people interact in the VR space, physical absence is overcome by a glove that lets each user feel the warmth and grip of the other. The glove has heating pads and airbags that activate when the other user squeezes their hand, mimicking the act of holding hands.

The glove is temperature and pressure-sensitive.
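The glove's behaviour can be sketched as a simple control rule, written here as a simulation. The threshold, target temperature, and actuator names are illustrative assumptions, not values from the actual prototype: when one user's pressure reading crosses a squeeze threshold, the paired glove inflates its airbag (grip) and heats toward skin temperature (warmth).

```python
SQUEEZE_THRESHOLD = 0.4   # assumed normalised pressure reading, 0.0–1.0
WARMTH_CELSIUS = 34.0     # assumed target, roughly skin temperature

def respond_to_squeeze(pressure: float) -> dict:
    """Map a pressure reading on one glove to actuator commands
    for the paired glove."""
    squeezing = pressure >= SQUEEZE_THRESHOLD
    return {
        "airbag_inflated": squeezing,                      # mimic grip
        "heater_target_c": WARMTH_CELSIUS if squeezing else None,
    }

print(respond_to_squeeze(0.7))
# → {'airbag_inflated': True, 'heater_target_c': 34.0}
```

On real hardware this loop would run on a microcontroller, reading the pressure sensor and driving the airbag valve and heating pads directly.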


Feeling of physical intimacy


Glove blueprint

Prototype to understand the feeling of airbags in gloves

FEASIBILITY & FUTURE

Inspired by NVIDIA's research on AI-rendered virtual landscapes, we explore training neural networks to take spoken words as input and generate the corresponding visualizations. Rapid advances in this technology make such speculative design plausible in the near future.

This is an ongoing project in which we (designers, researchers, and engineers) are currently building virtual spaces to understand the scope of AI-generated visuals driven by speech. We are always on the lookout for collaborators and participants to take part in our testing.
