Joyce Passananti

about me

Hello, I'm Joyce! I'm a third-year CS PhD student at the University of California, Santa Barbara, doing research in Human-Computer Interaction with a focus on mixed reality and computational design + fabrication. I completed my undergraduate degree at the University of Chicago with a BS in Computer Science (HCI specialization) and a BA in Media Art & Design (game development + design specialization).

My research has been supported by the NSF GRFP, UCSB's Academic Excellence Fellowship and Graduate Research Mentorship Program Fellowship awards, as well as the University of Chicago's Metcalf Grant and Green Fund Grant for HCI research. I'm interested in exploring creative opportunities in digital fabrication, and I enjoy game design + development as a hobby.


CV

Research

AR/VR | Computational Fabrication | Creativity Support

I'm currently doing mixed reality and computational design research in the Four Eyes Lab and Expressive Computation Lab, advised by Prof. Tobias Hollerer. Prior to my PhD, I worked as a Research Assistant (under Prof. Pedro Lopes) in the Human-Computer Interaction Lab at the University of Chicago, where I discovered my passion for human-centered design and cultivated critical research skills.

My current research is in creativity support: collaborating directly with artists to use technology to expand the creative possibilities of their crafts. I'm interested in discovering the value mixed reality can provide for visualization, direct manipulation, and skill development for artists. One approach I plan to explore is how systems can be designed not only to support artists as collaborative tools, but also to strengthen human capabilities by helping develop skills that are retained even once the technology is removed from the environment.

My past research has included accessibility design and application, computational fabrication, and developing programs that lower the knowledge barrier to using complex design technology. Projects I've dedicated significant time to have focused on aiding the visually impaired community and on making creative technologies more accessible to the general population. I have a strong desire to learn new skills and branch out to discover other areas in which technology can support creativity and growth.

all research -->

Featured Projects

In addition to my research, I enjoy being creative and working on various projects, particularly in the domain of game design & development. I've polished my web + app development skills through personal projects and work, and I built this website from scratch to keep those skills sharp. Check out some other cool projects I've worked on for research, hackathons, class, or just for fun! I've also linked my Instructables with breakdowns of recent computational fabrication projects. Feel free to reach out if anything interests you further or you'd like to collaborate on a project together :)

Global Goals

Game dev, social justice

Global Goals is a project I collaborated on with Daria Schifrina for Venus Hacks 2021, where it won Best Hack for Social Good. The project's Devpost can be accessed here, and we have a YouTube walkthrough as well. It's a browser-based game, hosted here, that encourages social activism through a community-oriented app: players grow an oasis of animals by engaging in social justice work. To earn rewards, players can donate to non-profit organizations, share causes on social media, and sign petitions. The project is hosted on Replit and uses Python, Flask, and SQLite. All graphics were proudly designed and drawn by us!
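To give a flavor of how that action-and-reward loop can work with Python and SQLite, here's a minimal sketch. The table schema, point values, and function names are hypothetical simplifications for illustration, not the game's actual code:

```python
import sqlite3

# Hypothetical point values for each social justice action
# (the real game's rewards may differ).
ACTION_POINTS = {"donate": 10, "share": 3, "sign_petition": 5}
ANIMAL_COST = 15  # points needed to add one animal to the oasis

def init_db():
    """Create an in-memory SQLite database that logs player actions."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE actions (player TEXT, action TEXT, points INTEGER)")
    return db

def record_action(db, player, action):
    """Log an action and award its points."""
    points = ACTION_POINTS[action]
    db.execute("INSERT INTO actions VALUES (?, ?, ?)", (player, action, points))
    return points

def oasis_size(db, player):
    """Number of animals a player has earned from their total points."""
    (total,) = db.execute(
        "SELECT COALESCE(SUM(points), 0) FROM actions WHERE player = ?",
        (player,),
    ).fetchone()
    return total // ANIMAL_COST

db = init_db()
record_action(db, "joyce", "donate")         # +10 points
record_action(db, "joyce", "sign_petition")  # +5 points
print(oasis_size(db, "joyce"))               # 15 points -> 1 animal
```

In the actual game, Flask routes would wrap calls like these so that the browser client can record actions and render the oasis.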

Eyes Free Mobile App

Mobile app, accessibility

This research project examined mobile services available to the visually impaired community, and culminated in an application + paper describing our research process, app development, and study findings. Our solution took shape as a centralized, eyes-free application that helps blind or low-vision individuals accomplish many of their day-to-day needs. For development and evaluation we used the Expo platform and React Native framework, relying on libraries and sensors compatible with both Android and iOS: the microphone, accelerometer, and camera. Our backend is built on Google Cloud, leveraging Google Cloud APIs for object identification, speech-to-text, text recognition, and text-to-speech.
all projects -->