Sina Masnadi

masnadi(at)cs.ucf.edu

I received my Ph.D. in Computer Science from the University of Central Florida in August 2022, with a thesis titled "Distance Perception Through Head-Mounted Displays". During graduate school, I worked with Dr. Joseph LaViola at the Interactive Computing Experiences Research Cluster (ICE).

Education

University of Central Florida

Ph.D.
Computer Science - Augmented Reality | Virtual Reality | Human-Computer Interaction
August 2015 - August 2022

Sharif University of Technology

B.Sc.
Computer Science
August 2010 - May 2015

Publications

Effects of Field of View on Egocentric Distance Perception in Virtual Reality

Sina Masnadi, Kevin Pfeil, Jose-Valentin T Sera-Josef, Joseph LaViola
CHI Conference on Human Factors in Computing Systems (CHI 2022)

Distance Perception with a Video See-Through Head-Mounted Display

Kevin Pfeil, Sina Masnadi, Jacob Belga, Jose-Valentin T Sera-Josef, Joseph LaViola
Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI 2021)
In recent years, pass-through cameras have resurfaced as inclusions for virtual reality (VR) hardware. With modern cameras that now have increased resolution and frame rate, Video See-Through (VST) Head-Mounted Displays (HMD) can be used to provide an Augmented Reality (AR) experience. However, because users see their surroundings through video capture and HMD lenses, there are open questions about how people perceive their environment with these devices. We conducted a user study with 26 participants to help understand if distance perception is altered when viewing surroundings with a VST HMD. Although previous work shows that distance estimation in VR with an HTC Vive is comparable to that in the real world, our results show that the inclusion of a ZED Mini pass-through camera causes a significant difference between normal, unrestricted viewing and that through a VST HMD.

Field of View Effect on Distance Perception in Virtual Reality

Sina Masnadi, Kevin P Pfeil, Jose-Valentin T Sera-Josef, Joseph J LaViola
2021 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
Recent state-of-the-art Virtual Reality (VR) Head-Mounted Displays (HMD) provide wide Fields of View (FoV) that were not possible in the past. Due to this development, HMD FoVs are now approaching a level that parallels natural human eyesight. Previous efforts have shown that reduced FoVs affect user perception of distance in a given environment, but none have investigated VR HMDs with wide FoVs. Therefore, in this paper we directly investigate the effect of HMD FoV on distance estimation in virtual environments. We performed a user study with 14 participants who performed a blind throwing task wearing a Pimax 5K Plus HMD, in which we virtually restricted the FoV to 200°, 110°, and 60°. We found a significant difference in perceived distance between the 200° and 60° FoVs, as well as between the 110° and 60° FoVs. However, no significant difference was observed between 200° and 110°. Our results indicate that users tend to underestimate distance with the narrower FoV.
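For a sense of what the pairwise FoV comparisons boil down to analytically, below is a rough, hypothetical Python sketch of a per-condition error analysis; the column names, the per-participant aggregation, and the choice of paired t-tests are my assumptions, not the paper's actual pipeline.

    import pandas as pd
    from itertools import combinations
    from scipy import stats

    def fov_error_comparison(trials: pd.DataFrame):
        """trials columns (assumed): participant, fov_deg, target_m, thrown_m."""
        # Signed error: negative values mean the throw fell short (underestimation).
        trials = trials.assign(signed_error=trials["thrown_m"] - trials["target_m"])
        # Average each participant's error within each FoV condition.
        per_participant = (trials.groupby(["participant", "fov_deg"])["signed_error"]
                                 .mean()
                                 .unstack("fov_deg"))
        # Pairwise paired comparisons between FoV conditions (e.g. 200 vs 60).
        tests = {(a, b): stats.ttest_rel(per_participant[a], per_participant[b])
                 for a, b in combinations(per_participant.columns, 2)}
        return per_participant, tests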

ConcurrentHull: A Fast Parallel Computing Approach to the Convex Hull Problem

Sina Masnadi and Joseph J. LaViola Jr.
International Symposium on Visual Computing (ISVC 2019)
The convex hull problem has practical applications in mesh generation, file searching, cluster analysis, collision detection, image processing, statistics, etc. In this paper, we present a novel pruning-based approach for finding the convex hull set for 2D and 3D datasets using parallel algorithms. This approach, which is a combination of pruning, divide and conquer, and parallel computing, is flexible to be employed in a distributed computing environment. We propose the algorithm for both CPU and GPU (CUDA) computation models. The results show that ConcurrentHull has a performance gain as the input data size increases. Providing an independently dividable approach, our algorithm has the benefit of handling huge datasets as opposed to other approaches presented in this paper which failed to manage the same datasets.
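The paper's CPU and CUDA implementations are not reproduced here, but the minimal Python sketch below conveys the general pruning idea for the 2D case: points that fall strictly inside the quadrilateral spanned by the four axis-extreme points can never be hull vertices, so worker processes discard them first and an exact hull runs only on the survivors. The function names, the quadrilateral pruning region, and the use of multiprocessing are illustrative assumptions, not ConcurrentHull itself.

    from multiprocessing import Pool

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def strictly_inside(p, poly):
        """True if p is strictly inside the convex, counter-clockwise polygon poly."""
        n = len(poly)
        return all(cross(poly[i], poly[(i + 1) % n], p) > 0 for i in range(n))

    def prune_chunk(args):
        chunk, poly = args
        return [p for p in chunk if not strictly_inside(p, poly)]

    def convex_hull(points):
        """Monotone-chain hull of the surviving candidate points."""
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts
        def half(seq):
            out = []
            for p in seq:
                while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                    out.pop()
                out.append(p)
            return out[:-1]
        return half(pts) + half(reversed(pts))

    def pruned_hull(points, workers=4):
        # Leftmost, bottommost, rightmost, and topmost points are hull vertices;
        # anything strictly inside the quadrilateral they span can be dropped.
        quad = [min(points, key=lambda p: p[0]), min(points, key=lambda p: p[1]),
                max(points, key=lambda p: p[0]), max(points, key=lambda p: p[1])]
        chunks = [points[i::workers] for i in range(workers)]
        with Pool(workers) as pool:
            survivors = pool.map(prune_chunk, [(c, quad) for c in chunks])
        return convex_hull([p for chunk in survivors for p in chunk])

On platforms that spawn worker processes (Windows, macOS), pruned_hull should be called from under an if __name__ == "__main__": guard.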

Sketching affordances for human-in-the-loop robotic manipulation tasks

Sina Masnadi, Joseph J LaViola, Jana Pavlasek, Xiaofan Zhu, Karthik Desingh, Odest Chadwicke Jenkins
ICRA, 2nd Robot Teammates Operating in Dynamic, Unstructured Environments (RT-DUNE)
We propose to enable a human user, without expert knowledge about robotics and programming, to transfer knowledge about affordances in a given scene to a robot. To this end, we propose an easy-to-use system to acquire object geometries and their associated affordances through sketching on a graphical interface. This allows users to interact with robotic systems by utilizing sketch-based techniques to provide a straightforward user interface, as shown in Figure 2. The user sketches the geometry of the object and its affordances. During task execution, when the robot encounters the objects for which it has affordance information, it can execute the affordances by registering the object geometries to its RGB-D data and then performing actions sequentially to achieve the goal.

VRiAssist: An Eye-Tracked Virtual Reality Low Vision Assistance Tool

Sina Masnadi, Brian Williamson, Andrés N Vargas González, Joseph J LaViola
2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
We present VRiAssist, an eye-tracking-based visual assistance tool designed to help people with visual impairments interact with virtual reality environments. VRiAssist’s visual enhancements dynamically follow a user’s gaze to project corrections on the affected area of the user’s eyes. VRiAssist provides a distortion correction tool to revert the distortions created by bumps on the retina, a color/brightness correction tool that improves contrast and color perception, and an adjustable magnification tool. The results of a small 5 person user study indicate that VRiAssist helped users see better in the virtual environment depending on their level of visual impairment.
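As a rough illustration of the gaze-following idea only (VRiAssist applies its corrections per-eye inside a VR rendering pipeline, not to 2D images), a hypothetical magnifier over a flat frame might look like the following Python/Pillow sketch; the parameters and function name are assumptions.

    from PIL import Image

    def gaze_magnify(frame: Image.Image, gaze_xy, radius=80, zoom=2.0):
        """Enlarge a square patch centered on the current gaze point."""
        gx, gy = int(gaze_xy[0]), int(gaze_xy[1])
        box = (max(gx - radius, 0), max(gy - radius, 0),
               min(gx + radius, frame.width), min(gy + radius, frame.height))
        patch = frame.crop(box)
        enlarged = patch.resize((int(patch.width * zoom), int(patch.height * zoom)),
                                Image.BICUBIC)
        out = frame.copy()
        # Paste the enlarged patch so it stays centered on the gaze point.
        out.paste(enlarged, (gx - enlarged.width // 2, gy - enlarged.height // 2))
        return out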

AffordIt!: A Tool for Authoring Object Component Behavior in VR

Sina Masnadi, Andrés N Vargas González, Brian Williamson, Joseph J LaViola
2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)
This paper presents AffordIt!, a tool for adding affordances to the component parts of a virtual object. Following 3D scene reconstruction and segmentation procedures, domain experts find themselves with complete virtual objects, but no intrinsic behaviors have been assigned, forcing them to use unfamiliar Desktop-based 3D editing tools. Our solution allows a user to select a region of interest for a mesh cutter tool, assign an intrinsic behavior and view an animation preview of their work. To evaluate the usability and workload of AffordIt! we ran an exploratory study to gather feedback. Results show high usability and low workload ratings.
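To make "assigning an intrinsic behavior" concrete, here is a hypothetical Python sketch of the kind of record such a tool could attach to a cut mesh region; the field names and the two behavior types are illustrative assumptions, not AffordIt!'s actual data model.

    from dataclasses import dataclass
    from typing import Literal, Tuple

    Vec3 = Tuple[float, float, float]

    @dataclass
    class ComponentBehavior:
        region_id: str                       # mesh region produced by the cutter tool
        kind: Literal["rotate", "translate"]
        axis_origin: Vec3                    # a point on the rotation/translation axis
        axis_dir: Vec3                       # unit direction of that axis
        limit: float                         # max angle (rad) or max displacement (m)

        def preview_amount(self, t: float) -> float:
            """Rotation angle / displacement to show at preview time t in [0, 1]."""
            return max(0.0, min(1.0, t)) * self.limit

    # e.g. a cabinet door that swings 90 degrees about a vertical hinge line
    door = ComponentBehavior("door_panel", "rotate",
                             axis_origin=(0.0, 0.0, 0.0),
                             axis_dir=(0.0, 1.0, 0.0),
                             limit=1.5708)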

A Sketch-Based System for Human-Guided Constrained Object Manipulation

Sina Masnadi, Joseph J. LaViola Jr., Xiaofan Zhu, Karthik Desingh, Odest Chadwicke Jenkins
arXiv preprint arXiv:1911.07340
In this paper, we present an easy-to-use sketch-based interface to extract geometries and generate affordance files from 3D point clouds for robot-object interaction tasks. Using our system, even novice users can perform robot task planning by employing such sketch tools. Our focus in this paper is employing a human-in-the-loop approach to assist in the generation of more accurate affordance templates and guidance of the robot through the task execution process. Since we do not employ any unsupervised learning to generate affordance templates, our system performs much faster and is more versatile for template generation. Our system is based on the extraction of geometries for generalized cylindrical and cuboid shapes; after extracting the geometries, affordances are generated for objects by applying simple sketches. We evaluated our technique by asking users to define affordances by sketching on the 3D scenes of a door handle and a drawer handle, and used the resulting affordance template files to have the robot perform the tasks of turning a door handle and opening a drawer.
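For the cuboid case, a minimal sketch of what "extract geometries and generate affordance files" could look like is given below, assuming a JSON template format; the schema, field names, and helpers are invented for illustration, and the actual system also handles generalized cylinders and is driven by the user's sketches rather than a precomputed selection.

    import json
    import numpy as np

    def cuboid_from_points(selected: np.ndarray):
        """selected: (N, 3) array of point-cloud points inside the user's sketch."""
        lo, hi = selected.min(axis=0), selected.max(axis=0)
        return {"center": ((lo + hi) / 2).tolist(), "size": (hi - lo).tolist()}

    def write_affordance(path, object_name, geometry, action, axis, extent):
        template = {
            "object": object_name,
            "geometry": geometry,                 # cuboid (or cylinder) parameters
            "affordance": {"action": action,      # e.g. "pull" for a drawer handle
                           "axis": axis,          # direction of the motion
                           "extent": extent},     # how far the motion goes (m or rad)
        }
        with open(path, "w") as f:
            json.dump(template, f, indent=2)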

Investigating the Value of Privacy within the Internet of Things

Alex Mayle, Neda Hajiakhoond Bidoki, Sina Masnadi, Ladislau Boeloeni, Damla Turgut
GLOBECOM 2017-2017 IEEE Global Communications Conference, 1-6
Many companies within the Internet of Things (IoT) sector rely on the personal data of users to deliver and monetize their services, creating a high demand for personal information. A user can be seen as making a series of transactions, each involving the exchange of personal data for a service. In this paper, we argue that privacy can be described quantitatively, using the game-theoretic concept of value of information (VoI), enabling us to assess whether each exchange is an advantageous one for the user. We introduce PrivacyGate, an extension to the Android operating system built for the purpose of studying privacy of IoT transactions. An example study, and its initial results, are provided to illustrate its capabilities.
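A toy Python sketch of the value-of-information calculation the paper builds on, assuming the simplest setting in which the purchased signal reveals the true state; the scenario, distributions, and helper names are illustrative, not PrivacyGate's model.

    def expected_utility(action, utility, prob):
        """E[U(action, state)] over states weighted by prob."""
        return sum(prob[s] * utility[(action, s)] for s in prob)

    def value_of_information(actions, states, prior, likelihood, utility):
        """VoI = E_obs[ max_a E[U | obs] ] - max_a E[U without the signal]."""
        baseline = max(expected_utility(a, utility, prior) for a in actions)
        informed = 0.0
        for obs in states:
            p_obs = sum(prior[s] * likelihood[(obs, s)] for s in states)
            if p_obs == 0:
                continue
            posterior = {s: prior[s] * likelihood[(obs, s)] / p_obs for s in states}
            informed += p_obs * max(expected_utility(a, utility, posterior)
                                    for a in actions)
        return informed - baseline

Comparing a quantity like this against what the user receives in return is, roughly, how an exchange of personal data for a service can be judged advantageous or not.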

Experience

Senior Software Engineer

At Magic Leap, I leverage Unity to design and implement comprehensive user studies aimed at quantifying user experience in augmented reality. My work involves developing interactive AR applications and conducting user studies to gather data, which is then analyzed to enhance usability and overall user satisfaction. This role combines technical expertise in software development with a deep understanding of human-computer interaction, focusing on creating immersive and intuitive AR devices.

October 2022 - Present

Research Scientist

Developed robotic systems for household and industrial applications using state-of-the-art machine learning and artificial intelligence approaches.

Jan 2021 - September 2022

Full-stack Developer

Developed web services using Node.js, MongoDB, Heroku, AWS, and AngularJS. Among my responsibilities, I was in charge of migrating the traditional servers to cloud services using Node.js and AWS (Lambda, S3, and EC2). I also helped with Android app development by integrating Google Firebase services for user profile management.
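As an illustration of the server-to-Lambda shape of that migration (the services themselves were written in Node.js; the bucket name, event layout, and handler below are assumptions made for a minimal Python sketch):

    import json
    import boto3

    s3 = boto3.client("s3")  # created once, reused across warm invocations

    def handler(event, context):
        """API Gateway-style event in, JSON response out."""
        params = event.get("pathParameters") or {}
        user_id = params.get("userId", "unknown")
        obj = s3.get_object(Bucket="example-user-profiles", Key=f"{user_id}.json")
        profile = json.loads(obj["Body"].read())
        return {"statusCode": 200, "body": json.dumps(profile)}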

Jun 2016 - Aug 2017

Senior Android Developer

Cafe Bazaar is the largest private mobile software company in Iran. I worked on its main product, "Bazaar", an Android application marketplace (similar to Google Play) for Iranian smartphone users that currently has more than 36 million active users. Some of my contributions at this company include:

  • UI/UX design based on personas, scenarios, and goals
  • Improving UX through user studies, A/B testing, and data analytics
  • Creating an Android image caching system, before Fresco, UIL, Picasso, and other libraries became popular (see the sketch after this list)
  • Implementing root installation for apps and automatic updates of installed apps, mimicking Google Play and the App Store
  • Designing and implementing server/client data transfer protocols and structures
  • Collaborating on APK delta updates (updating installed Android apps by shipping only their diff)
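The image caching sketch referenced above, as a language-agnostic Python illustration of the two-level (memory plus disk) LRU idea; the production system was Android/Java, and the sizes and hashing scheme here are assumptions.

    import hashlib
    import os
    from collections import OrderedDict

    class TwoLevelImageCache:
        def __init__(self, cache_dir, mem_items=64):
            self.mem = OrderedDict()          # url -> bytes, kept in LRU order
            self.mem_items = mem_items
            self.cache_dir = cache_dir
            os.makedirs(cache_dir, exist_ok=True)

        def _path(self, url):
            return os.path.join(self.cache_dir,
                                hashlib.sha1(url.encode()).hexdigest())

        def get(self, url, fetch):
            """fetch: callable that downloads the image bytes on a full miss."""
            if url in self.mem:               # 1) memory hit
                self.mem.move_to_end(url)
                return self.mem[url]
            path = self._path(url)
            if os.path.exists(path):          # 2) disk hit
                with open(path, "rb") as f:
                    data = f.read()
            else:                             # 3) miss: download, then persist
                data = fetch(url)
                with open(path, "wb") as f:
                    f.write(data)
            self.mem[url] = data
            if len(self.mem) > self.mem_items:
                self.mem.popitem(last=False)  # evict the least recently used
            return data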

Dec 2012 - Aug 2015

Co-Founder & CTO

An Android application for paying and managing bills.

2013 - 2018

Creator

An Android wallpaper application with more than 300,000 active users at its peak.

2013 - 2017