Eman Hassan

Personal Website

  • Home
  • Education
  • Teaching
  • Publications
  • Employment
  • Awards
  • Fun

Short Bio

My name is Eman Hassan, and I am a PhD candidate in the School of Informatics, Computing, and Engineering at Indiana University, USA. My main research interests are machine learning, deep learning, and computer vision. During my PhD, I have worked on generative adversarial models for domain adaptation, small object detection, and cross-domain retrieval. I also have work experience in enhancing privacy in video sharing, as well as in building 3D reconstruction and face detection and recognition modules. I have coursework experience in reinforcement learning and NLP, a good understanding of hardware architectures, and a background in electrical engineering. I am proficient in Python, with extensive experience in MATLAB, C++, and Java.

Email: emhassan@indiana.edu

  • Semantic Consistency: The Key to Improve Traffic Light Detection with Data Augmentation

    Traffic light detection by camera is a challenging task for autonomous driving, mainly due to the small size of traffic lights in the road scene, especially for early detection. The limited resolution in the corresponding area of traffic lights reduces their contrast against the background, as well as the effectiveness of the visual cues from the traffic light itself. We believe understanding the scene semantics between traffic lights and their surroundings can play a vital role in tackling this challenge. Towards this goal, we build a generative adversarial network (GAN) model to predict the existence of traffic lights from road scene images in which existing traffic lights have been removed with image inpainting. Using the Cityscapes dataset [2], we verify that the proposed GAN model indeed captures the desired semantics by showing effective predictions of traffic light existence that are consistent with real images. Moreover, we leverage this model to augment the training data, inserting traffic lights into road scene images based on the prediction of the GAN model. While the augmented images may not look realistic, results show that such data augmentation can improve traffic light detector performance to a level comparable to collecting additional real data, and better than other data augmentation with various randomization schemes. These results verify the importance of semantic consistency in data augmentation for improving traffic light detection.
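The augmentation step above can be sketched as follows: given the GAN's predicted existence map for a scene, paste a traffic-light crop at the most confident location and emit a matching bounding-box label. This is a minimal illustrative sketch, not the paper's implementation; the function name, the argmax placement rule, and the simple paste-based compositing are all assumptions.

```python
import numpy as np

def augment_with_traffic_light(scene, heatmap, light_patch):
    """Hypothetical sketch: insert a traffic-light patch into `scene`
    at the location where a GAN-style existence prediction `heatmap`
    is most confident, so the insertion stays semantically consistent.

    scene:       H x W x 3 uint8 road-scene image
    heatmap:     H x W float map of predicted traffic-light existence
    light_patch: h x w x 3 uint8 crop of a traffic light
    """
    H, W = heatmap.shape
    h, w = light_patch.shape[:2]
    # Most semantically plausible location according to the predictor.
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    # Clamp so the patch stays fully inside the image.
    y = min(max(y - h // 2, 0), H - h)
    x = min(max(x - w // 2, 0), W - w)
    out = scene.copy()
    out[y:y + h, x:x + w] = light_patch
    # Bounding-box label for the detector: (x, y, w, h).
    return out, (x, y, w, h)
```

The returned box can be appended to the detector's training annotations alongside the augmented image.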

  • Unsupervised Domain Adaptation using Generative Models and Self-ensembling

    Transferring knowledge across different datasets is an important approach to successfully train deep models with a small-scale target dataset or when few labeled instances are available. In this paper, we aim at developing a model that can generalize across multiple domain shifts, so that a single model can adapt from one source to multiple targets. This can be achieved by randomizing the generation of data in various styles to mitigate the domain mismatch. First, we present a new adaptation of the CycleGAN model to produce stochastic style transfer between two image batches of different domains. Second, we enhance classifier performance by using a self-ensembling technique with a teacher and a student model trained on both original and generated data. Finally, we present experimental results on three datasets: Office-31, Office-Home, and Visual Domain Adaptation (VisDA). The results suggest that self-ensembling is better than simple data augmentation with the newly generated data, and that a single model trained this way can achieve the best performance across all transfer tasks.

    arXiv Paper: https://arxiv.org/abs/1812.00479 »
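The self-ensembling idea above can be sketched in two pieces: the teacher's weights track an exponential moving average (EMA) of the student's, and a consistency loss pulls the student's predictions toward the teacher's on augmented data. This is a minimal NumPy sketch of the mean-teacher pattern under assumed names (`ema_update`, `consistency_loss`); it is not the paper's actual training code.

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """Mean-teacher style update: each teacher weight becomes an
    exponential moving average of the corresponding student weight.
    `teacher` and `student` are dicts mapping names to arrays."""
    for name, w_student in student.items():
        teacher[name] = alpha * teacher[name] + (1 - alpha) * w_student
    return teacher

def consistency_loss(p_teacher, p_student):
    """Mean squared difference between teacher and student class
    probabilities on the same (differently augmented) sample."""
    return float(np.mean((p_teacher - p_student) ** 2))
```

In a real training loop, `ema_update` would run after each optimizer step on the student, and `consistency_loss` would be added to the supervised loss on labeled source data.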

  • Cross-domain Generative Models Applied to Cartoon Series

    We investigate Generative Adversarial Networks (GANs) to model one particular kind of image: frames from TV cartoons. Cartoons are particularly interesting because their visual appearance emphasizes the important semantic information about a scene while abstracting out the less important details, but each cartoon series has a distinctive artistic style that performs this abstraction in different ways. We consider a dataset consisting of images from two popular television cartoon series, Family Guy and The Simpsons.

    We examine the ability of GANs to generate images from each of these two domains, when trained independently as well as on both domains jointly. We find that generative models may be capable of finding semantic-level correspondences between these two image domains despite the unsupervised setting, i.e. even when the training data does not give labeled alignments between them.

    arXiv Paper: https://arxiv.org/abs/1710.00755 »

    Vision for Privacy:

  • Cartooning for Enhanced Privacy in Lifelogging and Streaming Videos

    We describe an object replacement approach whereby privacy-sensitive objects in videos are replaced by abstract cartoons taken from clip art. Our approach uses a combination of computer vision, deep learning, and image processing techniques to detect objects, abstract details, and replace them with cartoon clip art.

    We conducted a user study (N=85) to discern the utility and effectiveness of our cartoon replacement technique. The results suggest that our object replacement approach preserves a video’s semantic content while improving its privacy by obscuring details of objects.

    Workshop Paper: https://ieeexplore.ieee.org/document/8014909 »

  • Can Privacy Be Satisfying?

    Pervasive photo sharing in online social media platforms can cause unintended privacy violations when elements of an image reveal sensitive information. Prior studies have identified image obfuscation methods (e.g., blurring) to enhance privacy, but many of these methods adversely affect viewers’ satisfaction with the photo, which may cause people to avoid using them. In this paper, we study the novel hypothesis that it may be possible to restore viewers’ satisfaction by ‘boosting’ or enhancing the aesthetics of an obscured image, thereby compensating for the negative effects of a privacy transform. Using a between-subjects online experiment, we studied the effects of three artistic transformations on images that had objects obscured using three popular obfuscation methods validated by prior research. Our findings suggest that using artistic transformations can mitigate some negative effects of obfuscation methods, but more exploration is needed to retain viewer satisfaction.

    Conference Paper: https://www.cs.indiana.edu/~kapadia/papers/hasan-chi-19.pdf »

  • Viewer Experience of Obscuring Scene Elements in Photos to Enhance Privacy

    With the rise of digital photography and social networking, people are sharing personal photos online at an unprecedented rate. In addition to their main subject matter, photographs often capture various incidental information that could harm people’s privacy. While blurring and other image filters may help obscure private content, they also often affect the utility and aesthetics of the photos, which is important since images shared in social media are mainly for human consumption. Existing studies of privacy-enhancing image filters either primarily focus on obscuring faces, or do not systematically study how filters affect image utility. To understand the trade-offs when obscuring various sensitive aspects of images, we study eleven filters applied to obfuscate twenty different objects and attributes, and evaluate how effectively they protect privacy and preserve image quality for human viewers.

    Conference Paper: www.cs.indiana.edu/~kapadia/papers/hasan-chi-18.pdf »

    Cartoon Summary:

  • Cartoon Summary Generation for Egocentric Videos

    This work presents a fun way to summarize lifelogging data. The main idea is that objects convey a great deal of information about the scene and the activity taking place. We use mutual information to mine the objects most important to each activity, then sample frames representing the activity based on those object regions. Finally, we render the summary in a cartoon-like style by cartooning the background and replacing objects with clip art. This can produce a pleasing cartoon summary of an egocentric lifelogging video.
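The object-mining step above can be sketched as computing empirical mutual information between an object's per-frame presence and the activity label, then keeping high-scoring objects per activity. This is an illustrative sketch under assumed inputs (binary presence flags and activity labels per frame), not the project's actual pipeline.

```python
import numpy as np
from collections import Counter

def mutual_information(obj_present, activities):
    """Empirical mutual information I(object; activity) from paired
    per-frame observations.

    obj_present: sequence of 0/1 flags (object visible in frame?)
    activities:  sequence of activity labels, same length
    """
    n = len(activities)
    joint = Counter(zip(obj_present, activities))
    count_o = Counter(obj_present)
    count_a = Counter(activities)
    mi = 0.0
    for (o, a), c in joint.items():
        p_joint = c / n
        # p_joint / (p_o * p_a) == c * n / (count_o * count_a)
        mi += p_joint * np.log(c * n / (count_o[o] * count_a[a]))
    return mi
```

Objects whose presence is highly informative about an activity (large MI, in nats) would then be used to pick representative frames for that activity.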

    Computational Photography:

  • Image Inpainting Based on Image Segmentation and Segment Classification

    We present a new inpainting algorithm based on image segmentation and segment classification. First, we employ the mean shift algorithm to segment the input image. Then, we split the original inpainting problem into one of two subproblems: the Large Segment Inpainting problem or the Non-uniform Segments Inpainting problem.

    The reason for this split is that the human eye is more sensitive to errors in structure and texture propagation within large, uniform regions with few details, while it is less sensitive to errors in non-uniform regions with more detail.

    We propose a novel algorithm for each of the two problems, Large Segment Inpainting and Non-uniform Segments Inpainting, tailored to the main features of each. The experimental results show the advantage of our technique, which produces output images with better perceived visual quality.
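The classification step above can be sketched as follows: after segmentation, each segment is routed to one of the two inpainting subproblems based on its area and intensity variance. This is a minimal sketch; the thresholds and the variance-based uniformity test are illustrative assumptions, not the paper's actual criteria.

```python
import numpy as np

def classify_segments(labels, image, area_thresh=500, var_thresh=100.0):
    """Route segments (e.g. from mean shift) to the two inpainting
    cases: large uniform segments vs. non-uniform segments.

    labels: H x W integer segment map
    image:  H x W grayscale image
    Thresholds are illustrative, not the paper's values.
    """
    large_uniform, non_uniform = [], []
    for seg_id in np.unique(labels):
        mask = labels == seg_id
        pixels = image[mask]
        if mask.sum() >= area_thresh and pixels.var() <= var_thresh:
            large_uniform.append(seg_id)   # structure-sensitive case
        else:
            non_uniform.append(seg_id)     # detail-rich case
    return large_uniform, non_uniform
```

Each list would then be handed to the corresponding specialized inpainting routine.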

Copyright © 2017 - All Rights Reserved
