Photorealistic Style Transfer with Screened Poisson Equation
Description:
Recent work has shown impressive success in transferring painterly style to images. These approaches, however, fall short of photorealistic style transfer: even when both the input and reference images are photographs, the output still exhibits distortions reminiscent of a painting. In this paper we propose an approach that takes a stylized image as input and makes it more photorealistic. It relies on the Screened Poisson Equation, maintaining the fidelity of the stylized image while constraining the gradients to those of the original input image. Our method is fast, simple, fully automatic, and makes clear progress toward photorealistic stylization. Our results exhibit finer details and are less prone to artifacts than the state-of-the-art.
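
For illustration, here is a minimal numpy sketch of a screened Poisson blend of this kind: it keeps the colors of the stylized image while pulling its gradients toward those of the original. The weight lam and the FFT-based solver with periodic boundaries are our assumptions, not necessarily the paper's exact implementation.

    import numpy as np

    def screened_poisson_blend(stylized, original, lam=5.0):
        # Solve (lam - Laplacian) u = lam*S - Laplacian(G) per channel in the
        # Fourier domain. lam trades fidelity to the stylized colors against
        # agreement with the original image's gradients.
        h, w = stylized.shape[:2]
        fy = np.fft.fftfreq(h)[:, None]
        fx = np.fft.fftfreq(w)[None, :]
        # eigenvalues of the 5-point discrete Laplacian (all <= 0)
        lap = 2.0 * (np.cos(2 * np.pi * fy) - 1) + 2.0 * (np.cos(2 * np.pi * fx) - 1)
        out = np.empty_like(stylized, dtype=float)
        for c in range(stylized.shape[2]):
            S = np.fft.fft2(stylized[..., c])
            G = np.fft.fft2(original[..., c])
            out[..., c] = np.real(np.fft.ifft2((lam * S - lap * G) / (lam - lap)))
        return np.clip(out, 0.0, 1.0)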

 
Template Matching with Deformable Diversity Similarity
Description:
We propose a novel measure for template matching named Deformable Diversity Similarity (DDIS), based on the diversity of feature matches between a target image window and the template. We rely on both local appearance and geometric information, which jointly lead to a powerful approach for matching. Our key contribution is a similarity measure that is robust to complex deformations, significant background clutter, and occlusions. Empirical evaluation on the most up-to-date benchmark shows that our method outperforms the current state-of-the-art in detection accuracy while reducing computational cost.
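
To make the notion of match diversity concrete, here is an illustrative DDIS-like score (the exponential deformation penalty and the popularity discount are our simplifications; the paper's exact weighting may differ). Each window patch votes with its nearest template patch, discounted by how many other patches chose the same match and by the spatial deformation of the match.

    import numpy as np
    from collections import Counter

    def ddis_like_score(window_feats, window_xy, tmpl_feats, tmpl_xy):
        # Brute-force nearest template patch for every window patch.
        d2 = ((window_feats[:, None, :] - tmpl_feats[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        popularity = Counter(nn)          # patches sharing each template match
        score = 0.0
        for i, j in enumerate(nn):
            deform = np.linalg.norm(window_xy[i] - tmpl_xy[j])
            # diverse (unpopular) matches with small deformation score high
            score += np.exp(-deform) / popularity[j]
        return score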

 
Text Document Binarization
Description:
Given an image containing text, its binarization map classifies each pixel as originating from the foreground (text) or the background. Binarization is usually a pre-processing step for Character Recognition tasks, and its quality directly affects Character Recognition performance.
When considering text documents, various artifacts harm the quality of binarization. These artifacts include stains and spills on the original text, internal text-background contrast variations, and different degradations that occur over time.
We propose adapting the Visibility Analysis for Image Processing framework to this task. The image is transformed and analyzed for visibility to obtain per-pixel information. This information is then combined with an existing binarization algorithm, and the best results among all competitors are obtained on the Document Image Binarization Contest 2016 benchmark.
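
Since the description leaves the underlying binarization algorithm unspecified, here is a sketch of a standard baseline (Otsu thresholding) of the kind such per-pixel visibility information could be combined with:

    import numpy as np

    def otsu_binarize(gray):
        # gray: uint8 grayscale document image. Pick the threshold that
        # maximizes between-class variance, then label dark pixels as text.
        hist, _ = np.histogram(gray, bins=256, range=(0, 256))
        p = hist / hist.sum()
        omega = np.cumsum(p)                     # background class weight
        mu = np.cumsum(p * np.arange(256))       # cumulative mean
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1 - omega) + 1e-12)
        t = int(np.argmax(sigma_b))
        return gray <= t                         # True where the pixel is ink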

 
OTC: A Novel Local Descriptor for Scene Classification
Description:
Scene classification is the task of determining the scene type in which a photograph was taken. In this paper we present a novel local descriptor suited for this task: Oriented Texture Curves (OTC). Our descriptor captures the texture of a patch along multiple orientations, while maintaining robustness to illumination changes, geometric distortions and local contrast differences. We show that our descriptor outperforms all state-of-the-art descriptors for scene classification on the most extensive scene classification benchmark to date.
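
As a rough illustration (a simplification, not the paper's exact descriptor), the sketch below samples a grayscale patch along several orientations and describes each intensity curve by its normalized consecutive differences, which discounts additive and multiplicative illumination changes:

    import numpy as np

    def otc_like_descriptor(patch, n_orient=8):
        # patch: 2D grayscale array. One normalized difference-curve per orientation.
        h, w = patch.shape
        cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
        r = min(cy, cx)
        ts = np.linspace(-r, r, num=int(2 * r) + 1)
        desc = []
        for k in range(n_orient):
            theta = np.pi * k / n_orient
            ys = np.clip(np.round(cy + ts * np.sin(theta)).astype(int), 0, h - 1)
            xs = np.clip(np.round(cx + ts * np.cos(theta)).astype(int), 0, w - 1)
            curve = patch[ys, xs].astype(float)
            diffs = np.diff(curve)               # invariant to additive shifts
            desc.append(diffs / (np.linalg.norm(diffs) + 1e-12))  # gain invariant
        return np.concatenate(desc)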

 
How to Evaluate Foreground Maps?
Description:
The output of many computer vision algorithms is either a non-binary map or a binary map (e.g., in salient object detection and object segmentation). Several measures have been suggested to evaluate the accuracy of these foreground maps. In this paper, we show that the most commonly-used measures for evaluating both non-binary maps and binary maps do not always provide a reliable evaluation. This includes the Area-Under-the-Curve measure, the Average-Precision measure, the F-measure, and the evaluation measure of the PASCAL VOC segmentation challenge. We start by identifying three causes of inaccurate evaluation. We then propose a new measure that amends these flaws. An appealing property of our measure is that it is an intuitive generalization of the F-measure. Finally, we propose four meta-measures to compare the adequacy of evaluation measures, and show via experiments that our novel measure is preferable.
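
For reference, a minimal implementation of the standard F-measure that the proposed measure generalizes (the new measure, roughly speaking, replaces these hard pixel counts with location- and dependency-aware weights):

    import numpy as np

    def f_measure(pred, gt, beta2=0.3):
        # pred, gt: boolean foreground maps; beta2 = beta^2 (0.3 is a common
        # choice in saliency evaluation, emphasizing precision).
        tp = np.logical_and(pred, gt).sum()
        precision = tp / (pred.sum() + 1e-12)
        recall = tp / (gt.sum() + 1e-12)
        return (1 + beta2) * precision * recall / (beta2 * precision + recall + 1e-12)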

 
What Makes a Patch Distinct?
Description:
What makes an object salient? Most previous works assert that distinctness is the dominating factor. The difference between the various algorithms lies in the way they compute distinctness: some focus on patterns, others on colors, and several add high-level cues and priors. We propose a simple, yet powerful, algorithm that integrates all three factors. Our key contribution is a novel and fast approach to compute pattern distinctness. We rely on the inner statistics of the patches in the image to identify unique patterns. We provide an extensive evaluation and show that our approach outperforms all state-of-the-art methods on the five most commonly-used datasets.
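
A minimal sketch of the pattern-distinctness idea, assuming patches are scored by the L1 norm of their coordinates in the PCA system of the image's own patches (a simplification; the full method also integrates color distinctness and high-level priors):

    import numpy as np

    def pattern_distinctness(patches):
        # patches: (n, d) matrix of vectorized image patches. A patch is
        # distinct if it lies far, in the L1 sense, from the average patch
        # along the principal components of the patch set itself.
        X = patches - patches.mean(axis=0)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)  # principal directions
        return np.abs(X @ Vt.T).sum(axis=1)               # L1 norm in PCA space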

 
Depth-Map Super-Resolution from a Single Image
Description:
Inexpensive 3D cameras such as the Microsoft Kinect are becoming increasingly available for various low-cost applications. However, the images acquired by these cameras suffer from low spatial resolution as well as inaccurate depth measurements. In the paper "Super-Resolution from a Single Image", Glasner et al. offer a fast and effective super-resolution method for natural images. Their method does not rely on an external database or prior examples, but exploits patch redundancy within the original low-resolution image. In this project we implement this approach and extend it to depth images.
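
A greatly simplified, brute-force sketch of the patch-redundancy idea for a grayscale image (single scale and nearest-neighbor pasting with overlap averaging, whereas Glasner et al. use a cascade of scales):

    import numpy as np
    from scipy.ndimage import zoom

    def single_image_sr(img, s=2, p=5):
        # Patches of a downscaled copy, paired with their co-located
        # originals, act as LR->HR examples for upscaling img itself.
        small = zoom(img, 1.0 / s, order=3)
        keys, locs = [], []
        for y in range(small.shape[0] - p + 1):
            for x in range(small.shape[1] - p + 1):
                keys.append(small[y:y + p, x:x + p].ravel())
                locs.append((y, x))
        keys = np.array(keys)

        out = np.zeros((img.shape[0] * s, img.shape[1] * s))
        cnt = np.zeros_like(out)
        for y in range(img.shape[0] - p + 1):
            for x in range(img.shape[1] - p + 1):
                q = img[y:y + p, x:x + p].ravel()
                j = ((keys - q) ** 2).sum(axis=1).argmin()  # nearest LR example
                sy, sx = locs[j]
                hr = img[s * sy:s * (sy + p), s * sx:s * (sx + p)]  # HR counterpart
                if hr.shape != (s * p, s * p):
                    continue            # example clipped by rounding at the border
                out[s * y:s * (y + p), s * x:s * (x + p)] += hr
                cnt[s * y:s * (y + p), s * x:s * (x + p)] += 1
        return out / np.maximum(cnt, 1)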

 
Mesh Colorization
Description:
This paper proposes a novel algorithm for colorization of meshes. This is important for applications in which the model needs to be colored with just a handful of colors, or when no relevant image exists for texturing the model. For instance, archaeologists argue that the great Roman and Greek statues were full of color in the days of their creation, and traces of the original colors can still be found. In this case, our system lets the user scribble some desired colors in various regions of the mesh. Colorization is then formulated as a constrained quadratic optimization problem, which can be readily solved. Special care is taken to avoid color bleeding between regions, through the definition of a new direction field on meshes.
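
A minimal sketch of such a quadratic formulation on the mesh's vertex graph. Here the per-edge smoothness weights are simply given as input; in the paper, the new direction field serves to shape them so that color does not bleed across feature edges.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    def colorize_mesh(n_verts, edges, weights, scribbles, lam=1e3):
        # edges: (i, j) vertex pairs; weights: per-edge smoothness weights;
        # scribbles: dict vertex -> RGB. Minimizes
        #   sum_ij w_ij ||u_i - u_j||^2 + lam * sum_scribbled ||u_i - c_i||^2.
        i = np.array([e[0] for e in edges])
        j = np.array([e[1] for e in edges])
        w = np.asarray(weights, dtype=float)
        W = sp.coo_matrix((np.r_[w, w], (np.r_[i, j], np.r_[j, i])),
                          shape=(n_verts, n_verts))
        L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W   # graph Laplacian
        mask = np.zeros(n_verts)
        b = np.zeros((n_verts, 3))
        for v, c in scribbles.items():
            mask[v] = 1.0
            b[v] = c
        A = (L + lam * sp.diags(mask)).tocsc()
        return np.column_stack([spsolve(A, lam * b[:, k]) for k in range(3)])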

 
Crowdsourcing Gaze Data Collection
Description:
This project proposes a crowdsourcing-based alternative to lab eye tracking for collecting gaze data. Instead of requiring specialized equipment, remote viewers self-report where they looked, allowing gaze data to be collected at scale.

 
Saliency Detection
Description:
This project proposes a new type of saliency - context-aware saliency - which aims at detecting the image regions that represent the scene. It presents a detection algorithm that realizes this saliency.
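
A single-scale, brute-force sketch of such a saliency score. It assumes a patch is salient when even its K most similar patches differ in appearance or lie far away in the image; the constants and normalization are our assumptions.

    import numpy as np

    def context_aware_saliency(feats, xy, K=64, c=3.0):
        # feats: (n, d) patch appearance vectors; xy: (n, 2) patch centers.
        d_col = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
        d_pos = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
        d = d_col / (1.0 + c * d_pos)     # dissimilarity, discounted by proximity
        np.fill_diagonal(d, np.inf)
        knn = np.sort(d, axis=1)[:, :K]   # the K most similar patches
        return 1.0 - np.exp(-knn.mean(axis=1))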

 
Icon Scanning
Description:
Undoubtedly, a key feature in the popularity of smart mobile devices is the numerous applications one can install. Frequently, we learn about an application we desire by seeing it on a review site, on someone else's device, or in a magazine. A user-friendly way to obtain this particular application would be to take a snapshot of its corresponding icon and be directed automatically to its download link. Such a solution exists today for QR codes, which can be thought of as icons with a binary pattern. In this paper we extend this to app icons and propose a complete system for automatic icon scanning: it first detects the icon in a snapshot and then recognizes it.
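
As an illustration of the recognition step only (not the system described in the paper), one could match off-the-shelf local features of the snapshot against a database of stored icons:

    import cv2

    def recognize_icon(snapshot_gray, icon_db):
        # icon_db: dict name -> grayscale icon image. Returns the name of the
        # icon with the most good ORB descriptor matches, or None.
        orb = cv2.ORB_create(nfeatures=500)
        bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        _, q = orb.detectAndCompute(snapshot_gray, None)
        best_name, best_score = None, 0
        for name, icon in icon_db.items():
            _, d = orb.detectAndCompute(icon, None)
            if q is None or d is None:
                continue
            matches = bf.match(q, d)
            score = sum(m.distance < 40 for m in matches)  # count close matches
            if score > best_score:
                best_name, best_score = name, score
        return best_name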

 
Photogrammetric Texture Mapping using Casual Images
Description:
Texture mapping has been a fundamental problem in computer graphics from its early days. As online image databases have become increasingly accessible, the ability to texture 3D models using casual images has gained importance. It would facilitate, for example, texturing a model of an animal using any of the hundreds of images of that animal found on the Internet, or enable a novice user to create personal avatars from the user's own images.

 
Detecting Feature Curves on Surfaces
Description:
Curves on objects can convey the inherent features of the shape. This paper defines a new class of view-independent curves, denoted demarcating curves. In a nutshell, demarcating curves are the loci of the "strongest" inflections on the surface. Due to their appealing capability to extract and emphasize 3D textures, they are applied to artifact illustration in archaeology, where they can serve as a worthy alternative to the expensive, time-consuming, and biased manual depiction currently used.
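
As a 1D analogy to "strongest inflections" (the actual definition operates on surface curvatures and their derivatives), the sketch below finds zero crossings of the second derivative of a sampled profile and ranks them by the magnitude of the third derivative there:

    import numpy as np

    def strongest_inflections(y, top=5):
        # y: 1D sampled profile. Inflections are zero crossings of y'';
        # their "strength" is |y'''|, i.e., how sharply curvature flips sign.
        d2 = np.gradient(np.gradient(y))
        d3 = np.gradient(d2)
        idx = np.where(np.sign(d2[:-1]) * np.sign(d2[1:]) < 0)[0]
        return idx[np.argsort(-np.abs(d3[idx]))][:top]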

Based on the paper:
Computer-Based, Automatic Recording and Illustration of Complex Archaeological Artifacts,
Ayelet Gilboa, Ayellet Tal, Ilan Shimshoni and Michael Kolomenkin.


 
HPR: A Simple and Fast "Hidden" Point Removal Operator
Description:
This paper proposes a simple and fast operator, the "Hidden" Point Removal (HPR) operator, which determines the points of a point cloud that are visible from a given viewpoint. Visibility is determined without reconstructing a surface or estimating normals.
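
The operator is compact enough to sketch directly: spherical flipping of the points about the viewpoint, followed by a convex hull. The choice of the radius R is a user parameter; the heuristic default below is our assumption.

    import numpy as np
    from scipy.spatial import ConvexHull

    def hpr(points, viewpoint, R=None):
        # Returns indices of the points deemed visible from viewpoint.
        p = np.asarray(points, dtype=float) - np.asarray(viewpoint, dtype=float)
        norms = np.linalg.norm(p, axis=1, keepdims=True)   # assumes no point
        if R is None:                                      # coincides with viewpoint
            R = 10.0 * norms.max()
        flipped = p + 2.0 * (R - norms) * p / norms        # reflect about sphere
        hull = ConvexHull(np.vstack([flipped, np.zeros(3)]))
        visible = set(hull.vertices)
        visible.discard(len(p))                            # drop the viewpoint itself
        return sorted(visible)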

 
Triangle to Triangle Intersection Test
Description:
A fast triangle-to-triangle intersection test for collision detection: given two triangles in 3D, determine whether they intersect. Such tests are the inner loop of mesh collision detection, so their speed is critical.
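
One way to make this concrete is a generic separating-axis test (an illustrative sketch, not necessarily the test proposed in the paper):

    import numpy as np

    def tri_tri_intersect(t1, t2, eps=1e-12):
        # t1, t2: (3, 3) arrays of triangle vertices. The triangles are
        # disjoint iff their projections separate on some axis among the two
        # face normals, the nine edge-edge cross products, and the in-plane
        # edge normals (the last handle coplanar configurations).
        t1 = np.asarray(t1, dtype=float)
        t2 = np.asarray(t2, dtype=float)
        e1 = [t1[(i + 1) % 3] - t1[i] for i in range(3)]
        e2 = [t2[(i + 1) % 3] - t2[i] for i in range(3)]
        n1, n2 = np.cross(e1[0], e1[1]), np.cross(e2[0], e2[1])
        axes = [n1, n2]
        axes += [np.cross(a, b) for a in e1 for b in e2]
        axes += [np.cross(n1, a) for a in e1] + [np.cross(n2, b) for b in e2]
        for axis in axes:
            if axis @ axis < eps:
                continue                             # degenerate axis, skip
            p1, p2 = t1 @ axis, t2 @ axis
            if p1.max() < p2.min() or p2.max() < p1.min():
                return False                         # separating axis found
        return True                                  # no separation: overlap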

 
 