I am an assistant professor in the Computer Science department at Kunsan National University, South Korea.
I earned my Ph.D. at the University of Minnesota under Professor Daniel F. Keefe.
I was a postdoctoral researcher at the Texas Advanced Computing Center (TACC), where I extended Intel's ray tracing applications to support immersive virtual reality experiences.
From 2019 to 2022, I created interactive museum installations in South Korea, providing innovative ways to explore historical data.
My research interests include scientific visualization, immersive analytics, and data storytelling.
I specialize in designing and developing 3D user interaction techniques to empower experts across various domains to explore and present their data.
Below, you can explore several projects I have developed throughout my research career.
Journal articles
Conference papers
Book chapters
Abstracts, Workshops, Exhibits, Posters
Awards
PEARC 2023
Immersive OSPRay: Enabling VR Experiences with OSPRay
ACM PEARC 2023, Portland, USA. (DOI: 10.1145/3569951.3597579)
TVCG
V-Mail: 3D-Enabled Correspondence about Spatial Data on (Almost) All Your Devices
IEEE Transactions on Visualization and Computer Graphics, 2022. (DOI: 10.1109/TVCG.2022.3229017)
Book chapter
Hybrid Data Constructs: Interacting with Biomedical Data in Augmented Spaces
In Making Data: Materializing Digital Information, edited by Ian Gwilt, ch. 11, pp. 169–182, Bloomsbury Visual Arts, June 2022. (DOI: 10.5040/9781350133266.ch-011)
VR 2019
Worlds-in-Wedges: Combining WIMs and Portals to Support Comparative Immersive Visualization of Forestry Data
IEEE VR 2019, Osaka, Japan. (DOI: 10.1109/VR.2019.8797871)
Nature Scientific Reports
Signature Maps for Automatic Identification of Prostate Cancer from Colorimetric Analysis of H&E- and IHC-stained Histopathological Specimens
Nature Scientific Reports, vol. 9, no. 6992, May 2019. (DOI: 10.1038/s41598-019-43486-y)
VRST 2019 posters
Effects of Age and Motivation for Visiting on AR Museum Experiences
ACM VRST 2019, Sydney, Australia. (DOI: 10.1145/3359996.3364711)
Leonardo
Spatial Correlation: An Interactive Display of Virtual Gesture Sculpture
Leonardo, vol. 50, no. 1, pp. 94–95, Feb 2017. (DOI: 10.1162/LEON_a_01226)
Nature Scientific Reports
Microstructure Imaging of Crossing (MIX) White Matter Fibers from diffusion MRI
Nature Scientific Reports, vol. 6, no. 38927, Dec 2016. (DOI: 10.1038/srep38927)
Radiology
Detection of Prostate Cancer: Quantitative Multiparametric MR Imaging Models Developed Using Registered Correlative Histopathology
Radiology, vol. 279, no. 3, pp. 805–816, Jan 2016. (DOI: 10.1148/radiol.2015151089)
At TACC's visitor center, high-resolution tiled displays show images and videos from visualization projects that researchers at the center have worked on. My task was to bring interactive 3D content to these systems, giving visitors engaging ways to explore ongoing work. Working with software engineers at Intel and research scientists at TACC, I extended Intel's ray tracing application to render a single, coherent 3D virtual environment across the tiled displays and added support for gesture-based interaction. We created a proof-of-concept prototype that lets a user fly through the 3D virtual environment by raising both hands, as if spreading wings like a bird, and leaning their body in the direction they want to fly.
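To make the interaction concrete, here is a minimal Python sketch of the bird-flying mapping described above. The joint names, thresholds, and the `flying_velocity` helper are hypothetical illustrations, not the actual TACC implementation.

```python
import numpy as np

ARMS_RAISED_THRESHOLD = 0.15  # hands this far above shoulders => "wings up" (meters)
MAX_SPEED = 2.0               # flight speed at full lean (meters per second)

def flying_velocity(joints, dt):
    """Map a bird-like pose to a camera displacement for one frame.

    joints: dict of joint name -> np.array([x, y, z]) in a y-up world frame
    dt: frame time in seconds
    """
    # "Wings up": both hands raised clearly above the shoulders.
    hands_up = (
        joints["left_hand"][1] > joints["left_shoulder"][1] + ARMS_RAISED_THRESHOLD
        and joints["right_hand"][1] > joints["right_shoulder"][1] + ARMS_RAISED_THRESHOLD
    )
    if not hands_up:
        return np.zeros(3)  # wings down: stay in place

    # Steering: horizontal offset of the head relative to the pelvis.
    lean = joints["head"] - joints["pelvis"]
    lean[1] = 0.0  # ignore the vertical component; leaning steers horizontally
    lean_amount = float(np.linalg.norm(lean))
    if lean_amount < 1e-3:
        return np.zeros(3)  # standing upright: hover

    direction = lean / lean_amount
    speed = min(lean_amount / 0.3, 1.0) * MAX_SPEED  # saturate at ~30 cm of lean
    return direction * speed * dt
```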
During my stay in South Korea, I worked in a lab that collaborates closely with museums to develop a new generation of public exhibitions. My task was to build interactive museum installations in close collaboration with experts from other fields, e.g., graphic designers, data curators, and historians. I developed visualization and interaction techniques that let museum visitors explore museums' archived data through gesture-based interaction. To streamline the integration of assets created by designers and data curators, I implemented features that load these assets and populate the 3D scenes and GUIs, as sketched below. By refactoring the code, I also provided a reusable codebase for gesture-based interaction in Unity, which enabled another developer to build separate interactive applications on top of it. During my stay, I helped create two interactive installations presented at public venues, each running for about a week.
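As a rough illustration of the asset-integration idea (the installations themselves were built in Unity/C#), the Python sketch below loads a designer-authored manifest and turns it into a scene description. The manifest format and field names are my own assumptions, not the lab's actual pipeline.

```python
import json

def load_scene_manifest(path):
    """Build a scene description from a designer-authored JSON manifest.

    Hypothetical manifest format:
      {"items": [{"id": "...", "model": "artifact.obj",
                  "position": [x, y, z], "label": "..."}]}
    """
    with open(path, encoding="utf-8") as f:
        manifest = json.load(f)

    scene = []
    for item in manifest["items"]:
        scene.append({
            "id": item["id"],
            "model_path": item["model"],          # mesh file from the designers
            "position": tuple(item["position"]),  # placement in the 3D scene
            "label": item.get("label", ""),       # curator-provided caption for the GUI
        })
    return scene
```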
During my Ph.D., I worked with experts from other fields, e.g., geology, medical devices, neuroscience, and health. My task was to create interactive 3D systems that help these experts analyze and present their data. Primarily, I built data-driven 3D virtual environments and integrated VR/AR technologies so that experts could immerse themselves in their data, look for findings, and confirm hypotheses. Focusing on public-facing content, I also created VR experiences that make these technologies accessible to the public for training and education. These works were presented at the IEEE VR and IEEE VIS conferences.
Scientific visualization applications are often complex, requiring substantial expertise to use their features effectively. My task was to find ways to facilitate team-science collaboration even when team members have different levels of expertise. Our approach was to provide multiple ways of viewing and interacting with the same data within one collaborative framework. Data can simply be viewed as a video file created with animation features we developed. More engaged users can load the same video file in a full visualization application and thoroughly explore the depicted data. Users on the go can open the video in our custom video player and sketch or leave comments on data views. With this framework, users pick the client that fits their needs and situation, and, importantly, changes made in any client are written back to the original data, so everyone stays in sync. This work was published in IEEE TVCG.
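A core design choice here is that every client, from a simple video player to a full visualization application, reads and writes the same shared session record. The Python sketch below illustrates that round trip with a local JSON file standing in for the real server; the `SessionStore` class and its fields are hypothetical, not V-Mail's actual data model.

```python
import json
import time

class SessionStore:
    """Shared record of a data-exploration session; every client reads and writes it.

    In the real system this record would live on a server; a local JSON file
    stands in for it here.
    """

    def __init__(self, path):
        self.path = path

    def load(self):
        try:
            with open(self.path, encoding="utf-8") as f:
                return json.load(f)
        except FileNotFoundError:
            return {"views": [], "comments": []}

    def add_comment(self, view_id, author, text):
        session = self.load()
        session["comments"].append({
            "view": view_id,       # which data view (video frame) this refers to
            "author": author,
            "text": text,
            "time": time.time(),
        })
        with open(self.path, "w", encoding="utf-8") as f:
            json.dump(session, f, indent=2)

# Whether a comment comes from the lightweight video player or the full
# visualization application, it lands in the same record, so all clients stay in sync.
store = SessionStore("session.json")
store.add_comment(view_id=42, author="traveler", text="Please check this region.")
```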
When a patient is diagnosed with prostate cancer, the organ is often surgically removed. To study the cancer, pathologists cut the prostate into slices and subdivide each slice into sections small enough to scan with their imaging devices. My task was to develop a series of tools that assist pathologists with reconstructing the prostate volume and making further annotations. I helped create a Photoshop-like application that enabled pathologists to stitch the scanned images back together and draw cancer boundaries on them. I also implemented export features that save these annotated slice images in a file format suited to further data analysis. Researchers in the lab used the annotated data to build models for detecting prostate cancer; this work was published in Nature Scientific Reports and Radiology.
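To give a flavor of the export step, here is a Python sketch that rasterizes hand-drawn cancer boundaries into a label mask for downstream analysis. The polygon-based representation and function names are assumptions for illustration, not the tool's actual file format.

```python
import numpy as np
from PIL import Image, ImageDraw

def export_annotation_mask(slice_size, cancer_outlines, out_path):
    """Rasterize hand-drawn cancer boundaries into a label mask.

    slice_size: (width, height) of the scanned slice image, in pixels
    cancer_outlines: list of polygons, each a list of (x, y) vertices
    Writes an 8-bit PNG where 0 = background and 255 = annotated cancer.
    """
    mask = Image.new("L", slice_size, 0)  # single-channel image, all background
    draw = ImageDraw.Draw(mask)
    for polygon in cancer_outlines:
        draw.polygon(polygon, fill=255)  # fill the region inside the drawn boundary
    mask.save(out_path)
    return np.array(mask)  # convenient for numeric analysis downstream

# Example: one annotated region on a 1024x768 slice (coordinates made up)
export_annotation_mask(
    (1024, 768),
    [[(200, 150), (400, 160), (380, 320), (210, 300)]],
    "slice_07_mask.png",
)
```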